What performance fluctuation to expect from a Shared CPU on DO?
We are running a microservice on a shared CPU droplet. This microservice performs occasional, long-running, CPU-intensive background tasks. The time it takes for a task to finish depends on the input and can range from 10 seconds to an hour (or even more), depending on its complexity.
Because the input data is technically user-provided, I’d like to apply a time duration limit. Say, the task is allowed to run for 30 minutes, and if it’s still not done by then, it’s rejected and marked as non-processable.
However, on a shared CPU, it’s well understood that the available processing power is effectively unknown. As a result, the very same task can take very different amounts of time to finish from one run to the next.
We want to take this into account when deciding on a limit, but for that we need a rough estimate of how much CPU power fluctuates.
Does anyone have any experience as to what to expect here? Say, is there a rough threshold of CPU power that’s always available in practice? For example, can we expect CPU power to be at least 20% of its peak capabilities at all times in practice?
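One way to get an empirical feel for this on a running droplet is to watch the hypervisor "steal" time, which is the share of CPU cycles taken away from the guest. A minimal sketch, assuming a Linux guest where the first line of `/proc/stat` exposes the standard eight-plus fields (`user nice system idle iowait irq softirq steal ...`) — the helper name and sampling interval are my own illustration:

```python
import time


def cpu_steal_fraction(interval=1.0):
    """Estimate the fraction of CPU time stolen by the hypervisor
    over `interval` seconds, from the aggregate 'cpu' line of
    /proc/stat (Linux only). 0.0 means no steal; 0.2 would mean
    roughly 20% of cycles went to other tenants during the sample.
    """
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return sum(fields), fields[7]  # total jiffies, steal jiffies

    total0, steal0 = snapshot()
    time.sleep(interval)
    total1, steal1 = snapshot()
    dt = total1 - total0
    return (steal1 - steal0) / dt if dt else 0.0
```

Logging this periodically over a few days would give a droplet-specific answer to the "what floor of CPU power can we count on" question, rather than relying on a general rule of thumb.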
Thanks in advance.
PS: I know simply switching to a dedicated CPU would make this a non-issue, but currently it’s way above our requirements.