Question

Best way to have multiple instances of application running concurrently?

Posted November 2, 2020
Nginx · DigitalOcean · Dokku · Load Balancing · CI/CD

Currently, I’m using a single Droplet (Ubuntu 18.04 with Dokku 0.17.9, 3GB Memory, 50GB Disk), which contains one instance of my application plus an old staging site, each in its own Docker container. We have a CI/CD pipeline with GitHub and CircleCI - any merge to the master branch triggers a new build and deploy.

The main application is a website, and we’re still in the early stages of the business, so there isn’t too much traffic at the moment, though we’re expecting more and more visitors over the next few months.
Every so often, the site goes down, and one of the team has to restart it with a dokku ps:restart command, which isn’t ideal. I’m hoping to find the simplest, most efficient way to have at least 2 instances of the application running at any one time, probably with a load balancer to direct traffic to either container, so that our end users won’t notice anything if the site ever goes down.

Some options that I’ve come up with so far are:

1. Having a second Droplet on DigitalOcean containing the exact same application / container as my current Droplet, and then setting up a DigitalOcean Load Balancer to distribute traffic between the 2 Droplets.
2. Having a second container running on the current Droplet and setting up a load balancer inside the Droplet (have a third container with an nginx load balancer? Is there another way?)
3. Migrating to Kubernetes and using a managed Kubernetes solution.

The big issue with 1. is that it seems extremely complicated to set up a workflow where both droplets / instances of the application would be updated at the same time when deploying a new build. I may well be missing something here that would make this process easier / doable…
For 3, we are definitely planning to move to Kubernetes at some point, but aren’t sure it’s worth taking up developer time at this stage to set it up.
So, 2 seems like it would be the best and easiest short-term solution, if it’s indeed possible.

I would greatly appreciate any advice or pointers on this, as it’s pretty unfamiliar territory for me, and I want to make sure it’s done in a sustainable and efficient way!

Thank you for your help!

1 answer

Hi there @kimLobster,

Yes, I believe that the 3 solutions that you’ve mentioned are absolutely valid.

Option 1 and Option 2 would likely require a similar amount of configuration changes.

Here are my thoughts on the 3 options:

  • Option 1: The main challenge is coming up with a good mechanism to deploy to both Droplets after a merge (see the sketch after this list), but once that is working, the load balancing part is quite straightforward: you would only need to add a Load Balancer in front of your Droplets, and it would take care of the health checks and the balancing for you. So this is definitely a solution worth considering.

  • Option 2: There are a few things to keep in mind here. If you run the 2 containers at the same time, they need to be mapped to different host ports, since only one container can be bound to a given port at a time (see the sketch after this list). Once you have the 2 containers running on the same Droplet, you could use Nginx to do the load balancing; your Nginx config could look something like this:

http {
  # Pool of backend containers; Nginx round-robins requests between them.
  # The weight=3 server receives 3x as many requests as the unweighted one.
  upstream myproject {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
  }

  server {
    listen 80;
    server_name www.domain.com;
    location / {
      # Forward all incoming requests to the upstream pool defined above
      proxy_pass http://myproject;
    }
  }
}

  • Option 3: Kubernetes is definitely something to consider in the long run, as it will do all of the self-healing, autoscaling, and load balancing for you.
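
For Option 1, the deploy step in your CI pipeline could be as simple as pushing the same commit to a Dokku remote on each Droplet, since Dokku deploys on a git push. Here is a minimal sketch, where the remote names, the hostnames (droplet-a.example.com / droplet-b.example.com) and the app name myapp are all placeholders you would replace with your own values:

# CI deploy step: push the tested commit to both Droplets.
# Remote names, hostnames, and the app name are placeholders.
git remote add production-a dokku@droplet-a.example.com:myapp
git remote add production-b dokku@droplet-b.example.com:myapp
git push production-a master
git push production-b master

For Option 2, the two containers would need distinct host ports to match the upstream block above. A minimal sketch with plain docker run, assuming a hypothetical image called myapp that listens on port 3000 inside the container:

# Publish two copies of the same image on different host ports
# (8000 and 8001, matching the upstream servers above).
docker run -d --name web-1 -p 8000:3000 myapp
docker run -d --name web-2 -p 8001:3000 myapp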

Another thing to consider is to try to get to the bottom of the problem and figure out why the containers are crashing, so that you don’t have to restart them manually each time. This would be beneficial even if you go for Option 1 or Option 2, as you might still hit situations where both copies of the container are stopped.
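
To dig into that, the logs are the first place I would look. A few commands worth running, assuming a Dokku app name of myapp (placeholder):

# Tail the app logs to catch the error at the moment it crashes
dokku logs myapp -t

# List all containers, including stopped ones, to see their exit codes
docker ps -a

# Check how a specific container exited (replace CONTAINER_ID)
docker inspect --format '{{.State.ExitCode}}' CONTAINER_ID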

Hope that this helps!
Regards,
Bobby

  • Hi @bobbyiliev,

    Thank you so much for your response! It was very helpful.

    Yes, we’ve been trying to get to the bottom of the issue that causes the site to go down, and we think we’ve probably found it, as it’s not happening as often now. But it made us realise that we definitely need some redundancy in our app.

    I’m looking into option 2 (2 containers in the current Droplet), as it seems like I’m already halfway there.

    I recently changed the scale to 2 on Dokku with dokku ps:scale node-js-app web=2, so now there are always 2 containers running that serve the site.
    The nginx.conf for the main application now looks like this:

    upstream <application>-production-3000 {
      server 172.17.0.2:3000;
      server 172.17.0.5:3000;
    }

    I feel like one of these should be something other than 3000, as you mentioned above, but I’m not sure exactly where that would be set / changed - do you have any tips on this part?

    Thank you again for your help! I’ve been chasing my tail with this for about a week!
