s3cmd keeps getting killed

Posted on July 4, 2024

Does anyone know a way to execute s3cmd tasks more reliably?

A large sync or setacl command gets interrupted regularly, even though it is being executed on a Droplet in the same datacenter.




Hey!

It is possible that your resources are being exhausted, mainly the RAM on the server.

To verify whether this is the case, you could open two SSH sessions: run htop in one, start the s3cmd command in the other, and watch the memory usage while the command runs.
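If you would rather check after the fact, a quick sketch of how to confirm an out-of-memory kill (general Linux commands, nothing specific to s3cmd) is to look at current memory usage and the kernel log:

```bash
# Show current memory and swap usage in human-readable units
free -h

# Look for evidence that the kernel's OOM killer terminated a process
sudo dmesg -T | grep -iE "out of memory|killed process"
```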

If the RAM is indeed being exhausted, you could upgrade your Droplet to add more RAM and CPU. You could also consider adding a swap file to give yourself some extra buffer:

https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-20-04
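For reference, the tutorial above boils down to a few commands; the 2G size here is just an example, so adjust it to your Droplet:

```bash
# Create and enable a 2 GB swap file (example size)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```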

Let me know how it goes!

- Bobby

Heya,

It sounds to me like you don’t have enough RAM on your Droplet, so your processes are being killed.

For a short-term solution, you can try adding swap to see if that helps with the situation:

https://www.digitalocean.com/community/tutorial-collections/how-to-add-swap-space

Swap is a portion of hard drive storage that has been set aside for the operating system to temporarily store data that it can no longer hold in RAM. This lets you increase the amount of information that your server can keep in its working memory, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.
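To see whether your Droplet already has any swap configured, you can run:

```bash
# List active swap devices/files (no output means no swap is configured)
sudo swapon --show
```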

Heya,

On top of what’s already been mentioned, s3cmd supports a --retries option that you can use to automatically retry failed operations.
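For example (it is worth confirming the exact flag spelling for your s3cmd version with `s3cmd --help`; the bucket name and paths below are placeholders):

```bash
# Retry failed requests up to 5 times during the sync
s3cmd sync --retries=5 /local/backup/ s3://my-bucket/backup/
```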

You can use screen or tmux to run the command in a session that you can detach from and reattach to later. This ensures the command continues to run even if your connection to the server is interrupted.
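A minimal tmux workflow could look like this (the session name `s3sync` and the paths are just examples):

```bash
# Start a named session and run the sync inside it
tmux new -s s3sync
s3cmd sync /local/backup/ s3://my-bucket/backup/

# Detach with Ctrl-b then d; reattach later with:
tmux attach -t s3sync
```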

Another approach would be to break the sync operation into smaller batches if possible. This reduces the chance of failure and makes it easier to retry smaller jobs.
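As a rough sketch, assuming your data is organized under top-level directories (the local path and bucket name here are placeholders), you could loop over them and sync each one separately so a failure only affects that batch:

```bash
#!/bin/bash
# Sync each top-level directory as its own smaller job
for dir in /local/backup/*/; do
    name=$(basename "$dir")
    echo "Syncing $name ..."
    s3cmd sync "$dir" "s3://my-bucket/backup/$name/" || echo "Sync of $name failed, retry it later"
done
```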

Regards
