Best way to have multiple instances of application running concurrently?

Currently, I’m using a single Droplet (Ubuntu 18.04 with Dokku 0.17.9, 3GB Memory, 50GB Disk), which contains one instance of my application plus an old staging site, each in its own Docker container. We have a CI/CD pipeline with GitHub and CircleCI - any merge to the master branch triggers a new build and deploy.

The main application is a website, and we’re still in the early stages of the business, so there isn’t too much traffic at the moment, though we’re expecting more and more visitors over the next few months. Every so often, the site goes down, which requires one of the team to restart it through a dokku ps:restart command, which isn’t ideal. I’m hoping to find the most efficient and simple way to have at least 2 instances of the application running at any one time, and probably a load balancer too to direct traffic into either container, so hopefully our end users will not notice anything if there’s ever a time when the site goes down.

Some options that I’ve come up with so far are:

1. Having a second Droplet on DigitalOcean containing the exact same application / container as my current Droplet, and then setting up a D.O. Load Balancer to distribute traffic between the two Droplets.
2. Having a second container running on the current Droplet and setting up a load balancer inside the Droplet (have a third container with an Nginx load balancer? Is there another way?).
3. Migrating to Kubernetes and using a managed Kubernetes solution.

The big issue with option 1 is that it seems extremely complicated to set up a workflow where both Droplets / instances of the application would be updated at the same time when deploying a new build. I may well be missing something here that would make this process easier / doable… For option 3, we are definitely planning to move to Kubernetes at some point, but aren’t sure it’s worth taking up developer time at this stage to set it up. So, option 2 seems like it would be the best, and easiest, short-term solution, if it’s indeed possible.

I would greatly appreciate any advice or pointers on this, as it’s pretty unfamiliar territory for me, and I want to make sure it’s done in a sustainable and efficient way!

Thank you for your help!


Hi there @kimLobster,

Yes, I believe that all three of the solutions that you’ve mentioned are absolutely valid.

Option 1 and Option 2 would likely require a similar amount of configuration changes.

Here are my thoughts on the three options:

  • Option 1: The main challenge is coming up with a good mechanism to deploy to both Droplets after a merge. Once you have that working, the load balancing itself would be quite straightforward: it would only be a matter of adding a load balancer in front of your Droplets, and it would take care of the health checks and the balancing for you. So this is definitely a solution that is worth considering.

  • Option 2: There are a few things to keep in mind here. If you run the two containers at the same time, they need to be mapped to different host ports, as only one container can be bound to a given port at a time. Once you have the two containers running on the same Droplet, you could use Nginx to do the load balancing. Your Nginx config could look something like this:

http {
  upstream myproject {
    # The addresses and ports below are examples - use the host
    # ports that your two containers are actually mapped to
    server 127.0.0.1:8081 weight=3;
    server 127.0.0.1:8082;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://myproject;
    }
  }
}
  • Option 3: Kubernetes is definitely something to consider in the long run, as it will do all of the self-healing, autoscaling and load balancing for you.
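One more thing worth mentioning for Options 1 and 2: Dokku can actually run multiple containers of the same app itself, and its bundled Nginx will balance traffic across them, so you may not need a separate Nginx container at all. As a rough sketch - the app name `myapp` and the remote name/IP are placeholders to replace with your own:

```shell
# --- Option 2 (Dokku-native): scale the web process to two containers ---
# Dokku rewrites its Nginx config to balance across both automatically.
dokku ps:scale myapp web=2
dokku ps:report myapp        # confirm both containers are running

# --- Option 1: deploy the same build to a second Droplet ---
# Add a second Dokku git remote and push to both on each release:
git remote add production-2 dokku@second-droplet-ip:myapp
git push production master
git push production-2 master
```

With the Dokku-native approach, your existing CI/CD pipeline keeps working unchanged, since a single `git push` still deploys both containers.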

Another thing to consider is trying to get to the bottom of the problem and figure out why the containers are crashing in the first place, so that you don’t have to restart them manually each time. This would also be beneficial even if you go for option 1 or option 2, as you might still run into situations where both copies of the container are stopped.
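A few commands that can help with that investigation - again, `myapp` is a placeholder for your Dokku app name, and this assumes Dokku’s usual `<app>.<process>.<number>` container naming:

```shell
# Tail the app's logs to see what happened around a crash:
dokku logs myapp -t

# Check the container's last exit state, including whether the
# kernel's OOM killer stopped it (a common cause on small Droplets):
docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' myapp.web.1

# Kernel out-of-memory messages, if any:
dmesg | grep -i -E 'killed process|out of memory'
```

If it turns out to be memory pressure, that alone might explain the periodic downtime on a 3GB Droplet running two apps.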

Hope that this helps! Regards, Bobby