Cannot figure out how to access website on running docker container on digital ocean droplet

Posted May 8, 2020 8.7k views


So, for background, I'm mostly a front-end engineer who has some minor experience with Node, Express, Docker, etc., but it's not my strength. I'm trying to deploy a site that runs perfectly fine locally via docker-compose to a DigitalOcean droplet (the one with Docker and docker-compose pre-installed).

I cannot, for the life of me, figure this out. It's been a week now, at least, and I've even had other engineers hop on video calls with me, and they couldn't figure it out either. In the past, I've used AWS and GCP, and not to be a downer, but those services were never so difficult to set up that I couldn't figure them out.

Here is my setup/workflow.

I build my project using docker-compose. The command looks like this:
docker-compose -f docker-compose.yml -f build --no-cache --parallel
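For reference, the general shape of a multi-file build like this is sketched below; the second file name, docker-compose.prod.yml, is a hypothetical stand-in, since the second `-f` argument above is missing its filename.

```shell
# Merge the base compose file with an override file, then rebuild
# all images from scratch, building services in parallel.
# docker-compose.prod.yml is a hypothetical name for the override file.
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.prod.yml \
  build --no-cache --parallel
```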

I then deploy to my remote repository on docker hub, and pull that down in my digital ocean droplet. So far, so good.

Since I am just trying to get something up for now, as a stopgap, so I can test and share with the designer, I want to run that same image exactly how I would locally, just in production mode and accessible from the internet. Yes, I know I should probably just run my image via docker run imagename, but I have several other services attached to it (redis, postgres), and I mistakenly thought running it this way would be easier and save me some time.

To that end, I used cat to basically copy/paste my docker compose configs into the droplet.
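As a sketch of what "used cat" means here, a quoted heredoc works well for pasting a compose file into the droplet; the path /tmp/docker-compose.yml and the file body are just examples:

```shell
# Paste a compose file onto the host via a quoted heredoc.
# 'EOF' is quoted so that ${...} variables in the YAML are written
# literally instead of being expanded by the shell.
cat > /tmp/docker-compose.yml <<'EOF'
version: '3'
services:
  redis:
    image: redis:alpine
EOF

# Confirm what was written.
cat /tmp/docker-compose.yml
```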

They look like this:

The base config below sets up redis and postgres.


version: '3'
services:
  db:
    container_name: db
    image: postgres:11.2-alpine
    # networks:
    #   - web
    ports:
      - "54320:5432"
    volumes:
      - "db_data:/var/lib/postgresql/data"

  redis:
    container_name: redis
    # command: ["redis-server", "--bind", "redis", "--port", "6379"]
    image: redis:alpine
    # networks:
    #   - web
    sysctls:
      # fixes warning when using redis with the barebones alpine image
      net.core.somaxconn: '511'

volumes:
  db_data:

The config below runs the site in prod mode. Again, this works great locally, no issues.

version: '3'
services:
  prod:
    # build: .
    image: myimage
    container_name: prod
    working_dir: /app
    # networks:
    #   - web
    # links:
    #   - redis:redis
    # command here overrides CMD in Dockerfile. but Dockerfile requires CMD to be valid
    command: "npm run prod"
    depends_on:
      - db
      - redis
    # ports:
    #   - ""
    # network_mode: host
    environment:
      - DB=development # eventually change to production
      - DEBUG=false
      # - ENABLE_IPV6=true
      - NODE_ENV=development
      - DB_HOST=${DB_HOST}
      - DB_DEV_PW=${DB_DEV_PW}
      - DB_PROD_PW=${DB_PROD_PW}
      - DB_TEST_PW=${DB_TEST_PW}
      - PEPPER=${PEPPER}
      # - NETWORK_ACCESS=internal
      # - VIRTUAL_HOST=consensus.local
      # - VIRTUAL_PORT=3001
      # - VIRTUAL_PROTO=https
    expose:
      - 3001
      # - 3000
    # volumes:
    #   - .:/app
    #   - node_modules:/app/node_modules
    restart: "always"

  db:
    # networks:
    #   - web
    restart: "always"

  redis:
    # networks:
    #   - web
    restart: "always"

# networks:
#   web:

# volumes:
  # node_modules:

I’ve left all the comments in so you get a bit of a sense of all the various things I’ve tried so far.

Locally, I am using Docker for Mac, with my ip set to I am running my web server at as well, port 3001.

When I run my docker image on my DigitalOcean (Ubuntu) host, I use the command:
docker-compose -f docker-compose.yml -f up --remove-orphans --force-recreate

This boots up the container and the linked services (redis, postgres) with no errors (with the current config). But I cannot curl localhost or from within the host. And when I try to access the site publicly, using the IP provided by the droplet, at port 3001, nothing happens. Depending on setup, it either times out and returns an empty response, or it doesn't connect at all (the current case with the above config). FWIW, I have reverted to just running my host as root, because when I did otherwise I ran into permissions issues (I couldn't run a docker container, in a docker droplet, without being root, or I would get EACCES errors…)

What am I doing wrong? This is driving me insane.

My current best guess is that maybe I've configured ufw incorrectly somehow and all non-SSH connections are just blocked? I followed the tutorial here, which seemed to me to be a standard thing to do when setting up a new droplet, but now I'm concerned I broke something (I am not at all familiar with ufw).
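If ufw is the suspect, checking the rules and opening the app port is quick; a sketch, run as root, with 3001 taken from the compose config above:

```shell
# Show the current firewall rules. On a droplet set up per the usual
# initial-server-setup tutorial, only OpenSSH (port 22) is allowed.
ufw status verbose

# Explicitly allow the application port.
ufw allow 3001/tcp
```

Worth knowing: ports published with Docker's `-p` flag (or a compose `ports:` entry) are written into iptables directly and typically bypass ufw entirely, so an unreachable port usually means it was never published, not that the firewall blocked it.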


4 answers

For future people: I "fixed" it by running docker exec against my postgres container and running migrate and seed (I use knex as a query builder). Obviously, this is not a good solution and you shouldn't do that, but it was enough to get something up so people could look and test.
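One way such a stopgap can look, with container and command names assumed rather than taken verbatim from the setup above (knex's migration CLI run inside the app container, against the db container):

```shell
# Run pending migrations and seeds inside the running app container.
# 'prod' is the container_name from the compose file; npx assumes
# knex is installed as a dependency of the app.
docker exec prod npx knex migrate:latest
docker exec prod npx knex seed:run
```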

I have since set up a dedicated Postgres DB, and I wish I had just done that at the very beginning, as it only took maybe 30 minutes to get working.

So while the docker-compose multiple container thing is fine for dev, at least at the beginning, I wouldn’t recommend it for production, at least not how I did it.

Update: I was able to get it working by setting up port forwarding (which I had tried before actually) and now it seems reachable. However, now my DB is not attached, so I am currently working on fixing that. I’ll update just in case anyone else is having these issues.
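For anyone hitting the same wall: the "port forwarding" fix amounts to publishing the port in the compose file rather than only exposing it. A minimal sketch, with the service name and port taken from the question:

```yaml
services:
  prod:
    ports:
      - "3001:3001"  # host:container — makes the port reachable on the droplet's public IP
```

`expose:` only advertises a port to other containers on the same network; `ports:` is what binds it on the host.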

Specifically, the current issue I am now dealing with is that my DB_HOST is set to host.docker.internal locally, and this seems to error when running on my digitalocean host.
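On a Linux host, host.docker.internal does not resolve by default; it is a Docker Desktop (Mac/Windows) convenience. Since db runs in the same compose project, the service name itself is usually the right hostname. A sketch, assuming the compose files from the question:

```yaml
services:
  prod:
    environment:
      - DB_HOST=db  # compose service names resolve via the project's default network
```

Alternatively, Docker 20.10+ can recreate the name manually with `extra_hosts: ["host.docker.internal:host-gateway"]`, but pointing at the service name keeps the config identical across environments.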

Thank you rosspatton for sharing. Your problems and steps helped me so much in understanding how to work with my containers in DO droplets. I couldn't find any tutorial or guide that covers this.
Thanks, Mateo.