How does horizontal scaling work and what are the limitations?

I’d like to get a better understanding of how horizontal scaling works when you’re not using a basic WSGI (or similar) setup. Does the scaling/load balancing work with any multithreaded or multiprocess application? And if the app is scaled across multiple containers, can processes communicate with each other via the file system, pipes, TCP, or UDP?

I’m developing an API where an HTTP handler will fill a job queue, and other processes/threads will pull jobs off the queue and run them. The operations require a large amount of computation at setup time, so it’s not practical to repeat that setup for every request in some sort of WSGI handler.

Does anyone know whether DigitalOcean’s horizontal scaling can accommodate this? Thanks in advance.


👋 @coopermosshart

For web service components in App Platform, traffic is distributed across the scaled containers. These containers do not share a file system, but they can communicate with each other over HTTP/TCP.

From the scenario you described, horizontal scaling sounds like a good option. As you’ve identified, putting all of this in one container won’t scale. Here’s what I would recommend:

Create an App Platform app with two components: an App and a Worker. The App handles your web traffic; when a user submits a task via the HTTP handler, the App stores it in a database. Each Worker periodically calls the App over HTTP to fetch the next task, processes it, and returns the result to the App for storage on completion. With this design you only need one small, low-cost web server, and you can scale the Workers up or down to match your task load.

Alternatively, your Workers could query the database directly, but that may not scale well long term because of database connection-pool limits and locking.