Hello DO Community,

This is a small architectural question, I guess.

I have a Docker Droplet, and have learned that it's just Ubuntu with Docker and Docker Compose installed. That's fine by me.
On that Droplet I created 2 directories, each containing a docker-compose file.

The first one runs 2 containers: Node.js and MongoDB. It's called 'MyAPI'.
The second one runs 1 container: a React app served by the nginx:alpine image with port 80 exposed.

What I would like is for both (MyAPI and ReactApp) to be available to the public, but I only have one IP address. I also want them to be accessible only via HTTPS. (I have my own wildcard certificate.)

The question is how to set this up.
Option 1: Expose public ports in each docker-compose file itself and also handle HTTPS and the certificate there.
Option 2: Put an Nginx layer on top of that and do HTTPS termination and routing to the containers there?

A more general question is why I can't find others asking this. It raises the suspicion that I'm doing this all wrong from the start. Please let me know if that is the case.
(And no, I don't want a Droplet for each piece of functionality. That would mean 2 Droplets, which is overkill for the very small test app I'm writing.)

Thanks in advance for your help and answers,



1 answer

Hi there @BertC,

This is a great question. I would recommend using Nginx as a reverse proxy to achieve that.

What you could do is start your containers on ports other than 80 and 443; that way you can set up Nginx on the host and proxy traffic to your containers based on location or hostname, for example.
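As a minimal sketch of that port remapping, assuming the Node.js app listens on port 3000 inside its container (the service names, images, and ports below are placeholders, not your actual config):

```yaml
# MyAPI/docker-compose.yml (sketch)
services:
  api:
    image: node:lts
    # Publish on a localhost-only high port so only Nginx on the
    # host can reach it; nothing is exposed publicly.
    ports:
      - "127.0.0.1:3000:3000"
  mongo:
    image: mongo:6
    # No ports: mapping at all — MongoDB stays reachable only from
    # other containers on this Compose network (e.g. the api service).

# ReactApp/docker-compose.yml (sketch)
services:
  web:
    image: nginx:alpine
    # Remap the container's port 80 to a localhost-only high port.
    ports:
      - "127.0.0.1:8080:80"
```

Binding to 127.0.0.1 matters: with a plain "3000:3000" mapping, Docker would publish the port on all interfaces and bypass your host firewall rules.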

You can take a look at this answer here, where I explain how to do that with 2 containers:


This includes a short video demo on how to achieve this as well.
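As a rough sketch of the host-level Nginx config, assuming the containers are published on 127.0.0.1:3000 and 127.0.0.1:8080 as above, and that your wildcard certificate covers both hostnames (the hostnames and certificate paths are placeholders):

```nginx
# /etc/nginx/sites-available/apps (sketch)

# MyAPI — routed by hostname
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# React app
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

# Optional: redirect plain HTTP to HTTPS so the apps are HTTPS-only
server {
    listen 80;
    server_name api.example.com app.example.com;
    return 301 https://$host$request_uri;
}
```

With this layout the certificate and TLS settings live in one place on the host, and each Compose project stays unchanged apart from its port mapping.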

Let me know if you have any questions.

Hope that this helps!
