Question

Celery workers dying when running multiple workers or concurrency in Docker

We have Docker containers running Django, Celery, Celery Beat and Celery Flower. Recently we have had an issue where we can only run a single worker with a concurrency of 1; otherwise the worker simply dies whenever it encounters a task that produces a stack trace.

For some reason, a single worker with a concurrency of 1 never dies, but as soon as we deviate from this, all workers die on any stack trace.

This does not happen on our local machines at all, and all Docker containers seemingly have enough RAM (we have a 16GB droplet). We do run on a shared CPU; could that be the issue? We were thinking of moving to a memory-optimized droplet with a dedicated CPU, in case that is the cause.
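For reference, the workers are started roughly like this (simplified; the app name and numbers here are placeholders, not our exact setup):

```shell
# Hypothetical worker invocation -- app name and pool size are placeholders.
# --max-tasks-per-child recycles worker processes after N tasks, which often
# works around memory leaks that only surface at higher concurrency.
celery -A myproject worker \
    --loglevel=INFO \
    --concurrency=4 \
    --max-tasks-per-child=100
```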

Any ideas?



Hello,

Can you share the stack trace that you get when you start multiple workers?

Also, do you see any errors in your system log on the server itself?
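In particular, if the kernel's OOM killer is terminating the worker processes, that will usually show up in the kernel logs. A quick check on the droplet could look something like this (exact log locations vary by distro):

```shell
# Look for OOM-killer activity in the kernel ring buffer
dmesg -T | grep -i -E 'oom|killed process'

# On systemd-based systems, the kernel journal is another place to check
journalctl -k | grep -i 'out of memory'
```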

Best,

Bobby