Hey,
So, for background: I’m mostly a front-end engineer with some minor experience with Node, Express, Docker, etc., but it’s not my strength. I’m trying to deploy a site that runs perfectly fine locally via docker-compose to a DigitalOcean droplet (the one with Docker and docker-compose pre-installed).
I cannot, for the life of me, figure this out. It’s been at least a week now, and I’ve even had other engineers hop on video calls with me, and they couldn’t figure it out either. In the past, I’ve used AWS and GCP, and not to be a downer, but those services were never so difficult to set up that I couldn’t figure them out.
Here is my setup/workflow.
I build my project using docker-compose. The command looks like this:

```sh
docker-compose -f docker-compose.yml -f docker-compose.build.yml build --no-cache --parallel
```
I then push to my remote repository on Docker Hub, and pull that down in my DigitalOcean droplet. So far, so good.
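(For completeness, the push/pull step is just the standard Docker Hub flow, roughly like this; the repo name is a placeholder:)

```sh
# Tag the locally built image for my Docker Hub repo and push it
docker tag myimage myuser/myimage:latest
docker push myuser/myimage:latest

# Then, on the droplet, pull it back down
docker pull myuser/myimage:latest
```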
Since I am just trying to get something up for now as a stopgap, so I can test and share with the designer, I want to run that same image exactly how I would locally, just in production mode and accessible from the internet. Yes, I know I should probably just run my image via `docker run imagename`, but I have several other services attached to it (Redis, Postgres), and I mistakenly thought it would be easier and save me some time to run it this way.
To that end, I used cat to basically copy/paste my docker compose configs into the droplet.
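(Concretely, that just means a heredoc on the droplet, roughly:)

```sh
# Paste the file contents between the heredoc markers
cat > docker-compose.yml <<'EOF'
# ...compose file contents pasted here...
EOF
```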
They look like this:
The base config below sets up redis and postgres.
docker-compose.yml
```yaml
version: '3'

services:
  db:
    container_name: db
    image: postgres:11.2-alpine
    # networks:
    #   - web
    ports:
      - "54320:5432"
    volumes:
      - "db_data:/var/lib/postgresql/data"

  redis:
    container_name: redis
    # command: ["redis-server", "--bind", "redis", "--port", "6379"]
    image: redis:alpine
    # networks:
    #   - web
    sysctls:
      # fixes warning when using redis with the barebones alpine image
      net.core.somaxconn: '511'
```
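Once those two are up, I can sanity-check them from the droplet with something like this (container names are from the config above):

```sh
# Postgres: should print "accepting connections"
docker exec db pg_isready -U postgres

# Redis: should print "PONG"
docker exec redis redis-cli ping
```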
The config below runs the site in prod mode. Again, this works great locally, no issues.
docker-compose.prod.yml
```yaml
version: '3'

services:
  prod:
    # build: .
    image: myimage
    container_name: prod
    working_dir: /app
    # networks:
    #   - web
    # links:
    #   - redis:redis
    # command here overrides CMD in Dockerfile. but Dockerfile requires CMD to be valid
    command: "npm run prod"
    depends_on:
      - db
      - redis
    # ports:
    #   - "127.0.0.1:3001:3001"
    # network_mode: host
    environment:
      - DB=development # eventually change to production
      - DEBUG=false
      # - ENABLE_IPV6=true
      - NODE_ENV=development
      - DB_HOST=${DB_HOST}
      - DB_DEV_PW=${DB_DEV_PW}
      - DB_PROD_PW=${DB_PROD_PW}
      - DB_TEST_PW=${DB_TEST_PW}
      - MAXMIND_LICENSE_KEY=${MAXMIND_LICENSE_KEY}
      - LOG_LEVEL=${LOG_LEVEL}
      - PEPPER=${PEPPER}
      # - NETWORK_ACCESS=internal
      # - VIRTUAL_HOST=consensus.local
      # - VIRTUAL_PORT=3001
      # - VIRTUAL_PROTO=https
    expose:
      - 3001
      # - 3000
    # volumes:
    #   - .:/app
    #   - node_modules:/app/node_modules
    restart: "always"

  db:
    # networks:
    #   - web
    restart: "always"

  redis:
    # networks:
    #   - web
    restart: "always"

# networks:
#   web:

volumes:
  db_data:
  # node_modules:
```
I’ve left all the comments in so you get a bit of a sense of all the various things I’ve tried so far.
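One thing I’ve gone back and forth on, for anyone comparing notes: as I understand it, `expose` only makes port 3001 reachable from other containers on the same network, not from the host or the internet. Actually publishing it would need a `ports` entry without the 127.0.0.1 bind, something like this (a sketch, not what the config above currently does):

```yaml
services:
  prod:
    ports:
      # binds to all host interfaces, unlike "127.0.0.1:3001:3001"
      - "3001:3001"
```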
Locally, I am using Docker for Mac with the IP set to 0.0.0.0, and I am running my web server on 0.0.0.0 as well, port 3001.
When I run my Docker image on my DigitalOcean (Ubuntu) host, I use the command:

```sh
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --remove-orphans --force-recreate
```
This boots up the container and the linked services (Redis, Postgres) with no errors (with the current config). But I cannot curl localhost or 0.0.0.0 from within the host. And when I try to access the site publicly using the IP provided by the droplet, at port 3001, nothing happens. Depending on the setup, it either times out and returns an empty response, or it doesn’t connect at all (the current case with the above config). FWIW, I have reverted to just running my host as root, because when I did otherwise I ran into permissions issues (I couldn’t run a Docker container, in a Docker droplet, without being root, or I would get EACCES errors…).
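For reference, these are the kinds of checks I’ve been running from inside the droplet (the container name is from the config above):

```sh
# What did Docker actually publish? The PORTS column shows host->container mappings.
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# Is the app process up and listening, or crashing on boot?
docker logs prod

# Can the host reach the container port at all?
curl -v http://localhost:3001
```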
What am I doing wrong? This is driving me insane.
My current best guess is that maybe I’ve misconfigured ufw somehow and all non-SSH connections are just blocked. I followed the tutorial here https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server, which seemed to me to be just a standard thing to do when setting up a new droplet, but now I’m concerned I broke something (I am not at all familiar with ufw).
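In case it helps, this is how I’ve been inspecting ufw (and what I’d run to open the port, if that turns out to be the problem):

```sh
# Show current rules; port 22 (SSH) should be allowed per the tutorial
sudo ufw status verbose

# Allow the app port through the firewall
sudo ufw allow 3001/tcp
```

Though as I understand it, Docker writes its own iptables rules when it publishes a port, so published ports can bypass ufw entirely, which would make ufw an unlikely culprit.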
For future people: I “fixed” it by running docker exec against my Postgres container and running migrate and seed (I use Knex as a query builder). Obviously, this is not a good solution and you shouldn’t do it, but it was enough to get something up so people could look at and test the site.
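From memory, it was something along these lines (exact container and command names approximate; `migrate:latest` and `seed:run` are the standard Knex CLI commands):

```sh
# Run pending migrations and seeds inside the running app container
docker exec -it prod npx knex migrate:latest
docker exec -it prod npx knex seed:run
```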
I have since set up a dedicated Postgres DB, and I wish I had just done that at the very beginning, as it only took maybe 30ish minutes to get working.
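Switching over was mostly a matter of pointing the existing env vars at the managed database instead of the `db` container, roughly like this (hostname and password are placeholders):

```sh
export DB_HOST=my-managed-db.example.ondigitalocean.com
export DB_PROD_PW='<redacted>'
```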
So while the docker-compose multi-container approach is fine for dev, at least at the beginning, I wouldn’t recommend it for production, at least not the way I did it.
Thank you rosspatton for sharing. Your problems and steps helped me so much in understanding how to work with my containers in DO droplets. I couldn’t find any tutorial or guide that talks about this. Thanks, Mateo.