Is it OK to use cp -a and tar -czf commands to back up my nginx and letsencrypt folders?

Since I have 70 websites on my Droplet, I want to copy my nginx and letsencrypt folders, compress them, and finally send them to S3.

I'm thinking of using the cp -a command, which I understand preserves symlinks and pretty much everything else within the folder structure (permissions, for example).

After that I would like to use the tar -czf command to compress the folder before sending it to S3.
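
Roughly what I have in mind is the following (the paths, archive name, and bucket are just examples, and I'm assuming I'd use the AWS CLI for the upload):

```bash
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
STAGING=/backup/$STAMP
mkdir -p "$STAGING"

# Copy the live configs/certs, preserving symlinks, permissions, and timestamps
cp -a /etc/nginx "$STAGING/nginx"
cp -a /etc/letsencrypt "$STAGING/letsencrypt"

# Compress both copies into a single dated tarball
tar -czf "/backup/configs-$STAMP.tar.gz" -C "$STAGING" nginx letsencrypt

# Upload the tarball to S3 (assumes the AWS CLI is installed and configured)
aws s3 cp "/backup/configs-$STAMP.tar.gz" s3://my-bucket/backups/
```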

Is this a sane way of doing this?

Also, a different question: what is the advantage of using rsync instead of cp -a? I mean, cp -a is pretty straightforward. Why would I want to use rsync in this case (to back up the whole letsencrypt folder, for example)?

Thanks in advance!


I believe this should work properly, but I would recommend using rsync if possible. The primary difference is that when transferring a tarball you are sending everything, every time. If you copy the directory and then rsync it, only files that have changed since the last sync are transferred, saving time, resources, and bandwidth. It's less of an issue with a few config files, but in general it's a good practice.
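
As a rough sketch of that idea (the staging directory and bucket name are just placeholders, and it assumes the AWS CLI is configured):

```bash
# Mirror the live directories into a local staging area.
# -a preserves symlinks, permissions, and timestamps;
# --delete drops files from the copy that were removed from the source,
# and on repeat runs only changed files are actually copied.
rsync -a --delete /etc/nginx/ /backup/nginx/
rsync -a --delete /etc/letsencrypt/ /backup/letsencrypt/

# Then either tar the staging copy and upload it as before,
# or let "aws s3 sync" push only the files that changed.
aws s3 sync /backup/ s3://my-bucket/backups/ --delete
```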

Let us know how you end up crafting your solution!