Celery workers dying when running multiple workers or concurrency in Docker

Posted on January 7, 2022

We have Docker containers running Django, Celery, Celery Beat, and Celery Flower. Recently we've hit an issue where we can only run a single worker with a concurrency of 1; otherwise the worker simply dies whenever it encounters a task that raises an exception (produces a stacktrace).

For some reason, a single worker with a concurrency of 1 never dies, but as soon as we deviate from that, every worker dies on any stacktrace.

This does not happen on our local machines at all, and all Docker containers seemingly have enough RAM (we have a 16GB droplet). We do run on a shared CPU, and we're not sure whether that is the issue. We were considering moving to a new droplet with a dedicated CPU and a memory-optimized plan; could that be the cause?
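One way to rule memory in or out is to give the worker container an explicit memory limit in Compose, so any OOM kill is attributed to that specific container rather than hidden in the host's kernel log. A sketch only; the service name `celery_worker`, the app module `proj`, and the limit values are illustrative, and `deploy.resources` requires a reasonably recent Docker Compose:

```yaml
# docker-compose.yml (fragment) -- names and limits are illustrative
services:
  celery_worker:
    command: celery -A proj worker --loglevel=info --concurrency=4
    deploy:
      resources:
        limits:
          memory: 2g   # explicit cap: an OOM kill now shows up against this container
```

With a cap in place, `docker inspect` on the exited container will report whether the kernel killed it for exceeding the limit.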

Any ideas?




Hello,

Can you share the stack trace you get when you run multiple workers?

Also, do you see any errors in your system log on the server itself?
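A common cause of Celery workers dying silently inside Docker is the kernel OOM killer. A quick way to check the server's logs for it, as a diagnostic sketch (the container name `celery_worker` is illustrative; replace it with your own):

```shell
# Search the kernel ring buffer for OOM-killer activity
# (errors suppressed and ignored in case the buffer needs root):
dmesg -T 2>/dev/null | grep -iE 'oom|killed process' || true

# On systemd-based droplets the same messages land in the journal:
journalctl -k 2>/dev/null | grep -iE 'oom|killed process' || true

# Ask Docker whether it recorded the container as OOM-killed:
docker inspect --format '{{.State.OOMKilled}}' celery_worker 2>/dev/null || true
```

If the grep turns up lines like `Out of memory: Killed process 1234 (celery)`, the workers are being killed by the kernel rather than crashing on the task's exception itself.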

Best,

Bobby
