By luismuzquiz
Since I have 70 websites on my droplet, I want to copy my nginx and letsencrypt folders, compress them, and then send them to S3.
I'm thinking of using the cp -a command, which I'm told preserves symlinks and pretty much everything in the folder structure (permissions, for example).
After that I'd like to use tar -czf to compress the folder before sending it to S3.
Is this a sane way of doing this?
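In other words, the plan would look roughly like the sketch below. It runs against a throwaway demo tree so it can be tried anywhere; on a real droplet you would point the source at /etc/nginx and /etc/letsencrypt, and the bucket name is a placeholder:

```shell
#!/bin/sh
# Sketch of the cp -a + tar + S3 flow, demonstrated on a demo tree.
# Paths and the bucket name are assumptions, not real infrastructure.
set -eu

WORK=$(mktemp -d)
SRC="$WORK/etc"          # stand-in for /etc/nginx and /etc/letsencrypt
STAGE="$WORK/stage"
mkdir -p "$SRC/nginx" "$SRC/letsencrypt/live"
echo "server {}" > "$SRC/nginx/nginx.conf"
ln -s ../archive "$SRC/letsencrypt/live/link"   # letsencrypt relies on symlinks

# 1. cp -a preserves symlinks, permissions, and timestamps
mkdir -p "$STAGE"
cp -a "$SRC/nginx" "$SRC/letsencrypt" "$STAGE/"

# 2. tar -czf compresses the staged copy into one dated tarball
STAMP=$(date +%Y-%m-%d)
tar -czf "$WORK/config-backup-$STAMP.tar.gz" -C "$STAGE" .

# 3. Upload to S3 (requires a configured AWS CLI; bucket is hypothetical)
# aws s3 cp "$WORK/config-backup-$STAMP.tar.gz" s3://my-backup-bucket/

ls "$WORK"/config-backup-*.tar.gz
```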
Also, a different question: what is the advantage of using rsync instead of cp -a? cp -a is pretty straightforward, so why would I want to use rsync in this case (to back up the whole letsencrypt folder, for example)?
Thanks in advance!
I believe this should work properly, but I would recommend using rsync if possible. The primary difference is that when transferring a tarball you send everything, every time. If you instead rsync the directory, only files that have changed since the last sync are transferred, saving time, resources, and bandwidth. It's less of an issue with a few config files, but in general it's good practice.
Let us know how you end up crafting your solution!