By Tony Tran, Erika Heidi and Manikandan Kurup

Docker simplifies the process of managing application processes in containers. While containers are similar to virtual machines in certain ways, they are more lightweight and resource-friendly. This allows developers to break down an application environment into multiple isolated services.
For applications depending on several services, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy. Docker Compose is a tool that allows you to run multi-container application environments based on definitions set in a YAML file. It uses service definitions to build fully customizable environments with multiple containers that can share networks and data volumes.
This guide will walk you through installing Docker Compose on an Ubuntu server and running a simple container. From there, you will learn to build a multi-service environment using a WordPress application and a MySQL database. We will also cover more advanced topics, including scaling services, defining custom networks, and using modular include directives. Finally, this article provides a migration guide from the older docker-compose (v1) to the modern docker compose (v2) and a detailed section on troubleshooting common issues like port conflicts and permission errors.
Key Takeaways:
- Use apt to install the docker-compose-plugin package.
- Modern Docker Compose is invoked as docker compose (with a space). This replaces the deprecated docker-compose (with a hyphen) v1 tool.
- Use docker compose up -d to start an application in detached mode and docker compose down to stop and remove all its containers and networks.
- The depends_on directive helps control the startup order of services, such as ensuring a database container starts before a web application container.
- Common issues include permission errors (solved by adding your user to the docker group), port conflicts (solved by changing the host port in the YAML file), and YAML syntax errors (solved by correcting indentation).
- Scale a service with the docker compose up --scale <service_name>=<number> command.
- The include directive allows you to split a large docker-compose.yml file into smaller, more manageable configuration files.

Prerequisites
To follow this article, you will need:
- An Ubuntu machine (local or a remote server) with a sudo-enabled non-root user.
- Docker installed, as described in How to Install Docker on Ubuntu – Step-by-Step Guide.
Note: Starting with Docker Compose v2, Docker has migrated towards using the compose CLI plugin command, and away from the original docker-compose as documented in our How to Install Docker Compose on Ubuntu (Step-by-Step Guide). While the installation differs, in general the actual usage involves dropping the hyphen from docker-compose calls to become docker compose. For full compatibility details, check the official Docker documentation on command compatibility between the new compose and the old docker-compose.
There are two ways to install Docker Compose on Ubuntu:
We’ll discuss both ways in this section.
Option 1: Installing from the apt repository
First, let’s set up the Docker apt repository.
- # Add Docker's official GPG key:
- sudo apt-get update
- sudo apt-get install ca-certificates curl
- sudo install -m 0755 -d /etc/apt/keyrings
- sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
- sudo chmod a+r /etc/apt/keyrings/docker.asc
-
- # Add the repository to Apt sources:
- echo \
- "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
- $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
- sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- sudo apt-get update
Now, you can install Docker Compose using the following command:
- sudo apt install docker-compose-plugin
Docker Compose is now successfully installed on your system. To verify that the installation was successful, you can run:
- docker compose version
You’ll see output similar to this:
Output
Docker Compose version v2.3.3
Option 2: Installing the latest release from GitHub
To make sure you obtain the most up-to-date stable version of Docker Compose, you can download this software from its official GitHub repository.
First, confirm the latest version available on their releases page. At the time of this writing, the most current stable version is v2.40.2.
Use the following command to download:
- mkdir -p ~/.docker/cli-plugins/
- curl -SL https://github.com/docker/compose/releases/download/v2.40.2/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
Next, set the correct permissions so that the docker compose command is executable:
- chmod +x ~/.docker/cli-plugins/docker-compose
In the next section, you’ll see how to set up a docker-compose.yml file and get a containerized environment up and running with this tool.
Setting Up a docker-compose.yml File
To demonstrate how to set up a docker-compose.yml file and work with Docker Compose, you’ll create a web server environment using the official Nginx image from Docker Hub, the public Docker registry. This containerized environment will serve a single static HTML file.
Start off by creating a new directory in your home folder, and then moving into it:
- mkdir ~/compose-demo
- cd ~/compose-demo
In this directory, set up an application folder to serve as the document root for your Nginx environment:
- mkdir app
Using your preferred text editor, create a new index.html file within the app folder:
- nano app/index.html
Place the following content into this file:
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Docker Compose Demo</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kognise/water.css@latest/dist/dark.min.css">
</head>
<body>
    <h1>This is a Docker Compose Demo Page.</h1>
    <p>This content is being served by an Nginx container.</p>
</body>
</html>
Save and close the file when you’re done. If you are using nano, you can do that by typing CTRL+X, then Y and ENTER to confirm.
Next, create the docker-compose.yml file:
- nano docker-compose.yml
Insert the following content in your docker-compose.yml file:
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./app:/usr/share/nginx/html
In modern Docker Compose, the version field is optional and often omitted, as Compose can automatically detect the configuration version. The example above does not include a version field, which is the recommended approach for most new projects. You only need to specify version for legacy compatibility.
You then have the services block, where you set up the services that are part of this environment. In your case, you have a single service called web. This service uses the nginx:alpine image and sets up a port redirection with the ports directive. All requests on port 8000 of the host machine (the system from where you’re running Docker Compose) will be redirected to the web container on port 80, where Nginx will be running.
The volumes directive will create a shared volume between the host machine and the container. This will share the local app folder with the container, and the volume will be located at /usr/share/nginx/html inside the container, which will then overwrite the default document root for Nginx.
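You don’t need to change anything for this demo, but note that bind mounts and Docker-managed named volumes share the same volumes syntax. A named volume (the web_logs name below is illustrative) also requires a top-level volumes block:

```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      - ./app:/usr/share/nginx/html   # bind mount: host path into the container
      - web_logs:/var/log/nginx       # named volume, managed by Docker

volumes:
  web_logs:
```

Bind mounts are convenient for development, since edits on the host are visible immediately; named volumes are the preferred choice for data the container owns, such as databases.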
Save and close the file.
You have set up a demo page and a docker-compose.yml file to create a containerized web server environment that will serve it. In the next step, you’ll bring this environment up with Docker Compose.
With the docker-compose.yml file in place, you can now execute Docker Compose to bring your environment up. The following command will download the necessary Docker images, create a container for the web service, and run the containerized environment in background mode:
- docker compose up -d
Docker Compose will first look for the defined image on your local system, and if it can’t locate the image it will download the image from Docker Hub. You’ll see output like this:
Output
Creating network "compose-demo_default" with the default driver
Pulling web (nginx:alpine)...
alpine: Pulling from library/nginx
cbdbe7a5bc2a: Pull complete
10c113fb0c77: Pull complete
9ba64393807b: Pull complete
c829a9c40ab2: Pull complete
61d685417b2f: Pull complete
Digest: sha256:57254039c6313fe8c53f1acbf15657ec9616a813397b74b063e32443427c5502
Status: Downloaded newer image for nginx:alpine
Creating compose-demo_web_1 ... done
Note: If you encounter a “permission denied” error when running docker compose up, this typically means your non-root user does not have permission to access the Docker daemon’s socket.
By default, the Docker daemon binds to a Unix socket (/var/run/docker.sock) which is owned by the root user. To fix this, you must add your non-root user to the docker group, which is created during Docker’s installation.
Run the following command to add your user to the docker group:
- sudo usermod -aG docker ${USER}
After running this command, you will need to log out and log back in for the group changes to take effect. You can also activate the changes for the current terminal session by typing:
- newgrp docker
This command should resolve any permission errors related to the Docker socket. For a full walkthrough, please refer to Step 2 of How to Install Docker on Ubuntu – Step-by-Step Guide.
Your environment is now up and running in the background. To verify that the container is active, you can run:
- docker compose ps
This command will show you information about the running containers and their state, as well as any port redirections currently in place:
Output
        Name                   Command               State         Ports
----------------------------------------------------------------------------------
compose-demo_web_1   /docker-entrypoint.sh ngin ...   Up    0.0.0.0:8000->80/tcp
You can now access the demo application by pointing your browser to either localhost:8000 if you are running this demo on your local machine, or your_server_domain_or_IP:8000 if you are running this demo on a remote server.
If everything is working, you’ll see the demo page you created, with the heading “This is a Docker Compose Demo Page.”
The shared volume you’ve set up within the docker-compose.yml file keeps your app folder files in sync with the container’s document root. If you make any changes to the index.html file, they will be automatically picked up by the container and thus reflected on your browser when you reload the page.
In the next step, you’ll see how to manage your containerized environment with Docker Compose commands.
You’ve seen how to set up a docker-compose.yml file and bring your environment up with docker compose up. You’ll now see how to use Docker Compose commands to manage and interact with your containerized environment.
To check the logs produced by your Nginx container, you can use the logs command:
- docker compose logs
You’ll see output similar to this:
Output
Attaching to compose-demo_web_1
web_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
web_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
web_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
web_1 | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
web_1 | 10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
web_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
web_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
web_1 | 172.22.0.1 - - [02/Jun/2020:10:47:13 +0000] "GET / HTTP/1.1" 200 353 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" "-"
If you want to pause the environment execution without changing the current state of your containers, you can use:
- docker compose pause
Output
Pausing compose-demo_web_1 ... done
To resume execution after issuing a pause:
- docker compose unpause
Output
Unpausing compose-demo_web_1 ... done
The stop command will terminate the container execution, but it won’t destroy any data associated with your containers:
- docker compose stop
Output
Stopping compose-demo_web_1 ... done
If you want to remove the containers, networks, and volumes associated with this containerized environment, use the down command:
- docker compose down
Output
Removing compose-demo_web_1 ... done
Removing network compose-demo_default
Notice that this won’t remove the base image used by Docker Compose to spin up your environment (in your case, nginx:alpine). This way, whenever you bring your environment up again with docker compose up, the process will be much faster since the image is already on your system.
In case you want to also remove the base image from your system, you can use:
- docker image rm nginx:alpine
Output
Untagged: nginx:alpine
Untagged: nginx@sha256:b89a6ccbda39576ad23fd079978c967cecc6b170db6e7ff8a769bf2259a71912
Deleted: sha256:7d0cdcc60a96a5124763fddf5d534d058ad7d0d8d4c3b8be2aefedf4267d0270
Deleted: sha256:05a0eaca15d731e0029a7604ef54f0dda3b736d4e987e6ac87b91ac7aac03ab1
Deleted: sha256:c6bbc4bdac396583641cb44cd35126b2c195be8fe1ac5e6c577c14752bbe9157
Deleted: sha256:35789b1e1a362b0da8392ca7d5759ef08b9a6b7141cc1521570f984dc7905eb6
Deleted: sha256:a3efaa65ec344c882fe5d543a392a54c4ceacd1efd91662d06964211b1be4c08
Deleted: sha256:3e207b409db364b595ba862cdc12be96dcdad8e36c59a03b7b3b61c946a5741a
Note: Please refer to our guide on How to Install Docker on Ubuntu – Step-by-Step Guide for a more detailed reference on Docker commands.
The true power of Docker Compose is in managing multiple services that work together. The Nginx example was a single service. Let’s create a more practical, multi-service application: a WordPress website connected to a MySQL database.
This setup involves two services: wordpress (running the application) and db (running the database). We will also use Docker volumes to ensure the database data persists even if the container is removed.
Let’s create a new directory for this application:
- mkdir -p ~/compose-demo/wordpressapp
- cd ~/compose-demo/wordpressapp
For this example, we don’t need any local files, as the images will contain all the necessary software.
Create a new docker-compose.yml file:
- nano docker-compose.yml
Paste the following configuration. This file is more complex, so we will examine each part.
services:
  db:
    image: mysql:8.0
    container_name: mysql_db
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: <^your_root_password_here^>
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: <^your_wordpress_password_here^>
    restart: unless-stopped

  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    ports:
      - "8001:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: <^your_wordpress_password_here^>
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wp_content:/var/www/html/wp-content
    depends_on:
      - db
    restart: unless-stopped

volumes:
  db_data:
Note: We’ve hardcoded the password values here for illustration purposes. In a real environment, store credentials as environment variables in a .env file to avoid exposing them.
Save and close the file. Remember to replace the <^...^> placeholders with strong, secure passwords.
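As a sketch of the .env approach (the variable names below are illustrative), you would keep the secrets in a .env file next to docker-compose.yml:

```
# .env - keep this file out of version control
DB_ROOT_PASSWORD=a_strong_root_password
DB_USER_PASSWORD=a_strong_wordpress_password
```

and reference them in the Compose file with ${...} substitution:

```yaml
environment:
  MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
  MYSQL_PASSWORD: ${DB_USER_PASSWORD}
```

Docker Compose automatically loads a .env file from the project directory and substitutes the values when it parses the file.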
Let’s break down the new directives in this file:
- services: This block still defines our containers. We now have two: db and wordpress.
- image: We’re using mysql:8.0 for the database and the wordpress:latest image for the application.
- container_name: This sets a specific, human-readable name for the container, which is easier to reference than the auto-generated ones.
- environment: This is a list of environment variables passed into the container. The mysql image uses these to set the root password and create an initial database and user. The wordpress image uses them to know how to connect to its database.
- volumes (service-level): For the db service, db_data:/var/lib/mysql maps a Docker-managed volume named db_data to the MySQL data directory inside the container. This keeps your data safe. For the wordpress service, ./wp_content:/var/www/html/wp-content maps a local directory (wp_content) to the WordPress content directory. This allows you to directly edit themes and plugins from your host machine.
- depends_on: This tells Compose to start the db service before it starts the wordpress service. This is important, as WordPress will fail if it can’t find its database on startup.
- volumes (top-level): This block defines the named volumes. db_data: creates a Docker-managed volume, which is the preferred way to handle persistent data.

Now, bring this multi-service application up:
- docker compose up -d
Compose will pull both the mysql and wordpress images and then create the containers, starting the db service first.
Output
Creating network "compose-demo_default" with the default driver
Creating volume "compose-demo_db_data" with default driver
Pulling db (mysql:8.0)...
...
Pulling wordpress (wordpress:latest)...
...
Creating mysql_db ... done
Creating wordpress_app ... done
You can now access your new WordPress site by navigating to localhost:8001 or <^your_server_domain_or_IP^>:8001 in your browser. You should see the WordPress installation screen. For a more detailed example, check out our article on How To Install WordPress With Docker Compose.
Beyond web applications, Docker Compose is an extremely useful tool for creating reproducible data science and machine learning (AI/ML) environments. AI/ML projects are known for their complex dependencies, including specific Python versions, libraries like TensorFlow or PyTorch, and system-level drivers like the NVIDIA CUDA Toolkit. Docker Compose captures this entire environment in configuration files, solving the “it works on my machine” problem, which is critical for reproducible research.
In this example, you will create a multi-service AI/ML environment consisting of:
- A PostgreSQL database service (db) for storing experiment data.
- A JupyterLab service (jupyter) built from a custom Dockerfile.

Prerequisite: This example requires an NVIDIA GPU on your host machine and the NVIDIA Container Toolkit to be installed. Without it, the container will fail to start when requesting GPU resources.
First, create a directory for your project. Inside it, you will create a docker-compose.yml file, a jupyter directory, a Dockerfile for Jupyter, and a requirements.txt file. The structure should look something like this:
ai-project/
├── docker-compose.yml
└── jupyter/
    ├── Dockerfile
    └── requirements.txt
The requirements.txt file
This file lists the Python packages for your data science environment.
- nano jupyter/requirements.txt
Add your required packages. For this example, we’ll include libraries for data manipulation, database connection, and a deep learning framework.
pandas
scikit-learn
tensorflow
jupyterlab
psycopg2-binary
Save and close the file.
The Dockerfile
This file defines your custom JupyterLab service. It uses an official Jupyter image as its base and installs the packages from requirements.txt.
- nano jupyter/Dockerfile
Paste the following content:
# Start from a base image that includes Jupyter and scientific libraries
FROM jupyter/scipy-notebook:latest
# Copy our local requirements file into the container
COPY requirements.txt /tmp/requirements.txt
# Install the Python packages
RUN pip install --no-cache-dir -r /tmp/requirements.txt
This file instructs Docker to use jupyter/scipy-notebook as the starting point, copy your requirements.txt into the container, and then use pip to install the packages.
The docker-compose.yml file
Now, create the main docker-compose.yml file. This file will orchestrate both the db service and your custom jupyter service.
- nano docker-compose.yml
Paste the following configuration, replacing the <^...^> placeholders with your own secure credentials.
services:
  db:
    image: postgres:15-alpine
    container_name: ai_postgres_db
    environment:
      POSTGRES_USER: <^your_db_user^>
      POSTGRES_PASSWORD: <^your_db_pass^>
      POSTGRES_DB: experiments
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U <^your_db_user^> -d experiments"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - ai_net

  jupyter:
    build: ./jupyter
    container_name: ai_jupyter_lab
    ports:
      - "8888:8888"
    volumes:
      - ./notebooks:/home/jovyan/work
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_USER=<^your_db_user^>
      - POSTGRES_PASSWORD=<^your_db_pass^>
      - POSTGRES_DB=experiments
    restart: unless-stopped
    networks:
      - ai_net
    depends_on:
      db:
        condition: service_healthy
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  postgres_data:
    name: ai_project_data

networks:
  ai_net:
    driver: bridge
This configuration file introduces several important concepts:
- build: ./jupyter: This tells Docker Compose to build a custom image for the jupyter service. It looks for a Dockerfile inside the specified ./jupyter directory.
- volumes (./notebooks:/home/jovyan/work): This bind-mounts a local directory named notebooks into the container. This ensures all Jupyter notebooks you create are saved directly on your host machine, persisting them even after the container is removed.
- healthcheck: This check validates that the db service is not just running, but ready to accept connections before other services start.
- depends_on with condition: service_healthy: This tells the jupyter service to wait until the db healthcheck passes before starting.
- deploy.resources.reservations.devices: This is the block that requests GPU access:
  - driver: nvidia: Specifies the host driver to use.
  - count: 1: Requests one GPU.
  - capabilities: [gpu]: Ensures the container has the necessary capabilities to use the GPU.

With your files in place, you are ready to build and run the services.
From your ai-project directory, run the docker compose up command. You must add the --build flag the first time to tell Compose to build your custom jupyter image.
- docker compose up -d --build
Compose will first build the jupyter image (which may take a few minutes as it downloads TensorFlow), then pull the postgres image, and finally start both containers.
Output
[+] Building 5.8s (9/9) FINISHED
=> [internal] load build definition from Dockerfile
...
[+] Running 3/3
✔ Network ai-project_ai_net Created
✔ Container ai_postgres_db Started
✔ Container ai_jupyter_lab Started
You can now access the JupyterLab interface by navigating to http://localhost:8888 (or <^your_server_ip^>:8888) in your browser. You will be prompted for a token, which you can get from the container logs:
- docker compose logs jupyter
Look for a line similar to http://127.0.0.1:8888/lab?token=a1b2c3d4e5f6...
Inside a Jupyter notebook, you can now connect to your database using the hostname db and the credentials you provided. Your environment also has access to the host’s GPU for model training.
Docker Compose includes features for scaling services and managing the networks they communicate on.
Imagine your Nginx web server from Step 2 is getting too much traffic. You can scale the web service to run multiple container instances. Docker Compose can manage this automatically.
There are two primary ways to scale a service.
The replicas key (Preferred in v2)
You can define the desired number of instances directly in your docker-compose.yml file using the deploy and replicas keys. This feature was originally part of Docker Swarm but is now available for standard Compose deployments.
Modify your Nginx docker-compose.yml from Step 2:
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./app:/usr/share/nginx/html
    deploy:
      replicas: 3
When you run docker compose up -d, Compose will create three web containers. However, you will have a problem: all three will try to bind to host port 8000. This will cause a “port is already allocated” error for the second and third containers. To resolve this in a production setup, you would remove the ports mapping from the web service. This way, the web containers are only accessible within the Docker network. A separate load balancer service would be the only one with a public port. It would then distribute traffic to the three web replicas.
An example configuration would look like this (this is an advanced example):
services:
  web:
    image: nginx:alpine
    # No 'ports' mapping here.
    # The service is only accessible inside the 'web-net' network.
    volumes:
      - ./app:/usr/share/nginx/html
    deploy:
      replicas: 3
    networks:
      - web-net

  load_balancer:
    image: nginx:latest
    ports:
      - "80:80" # The load_balancer is the only service with a public port.
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf # A config file that load balances to 'web'.
    networks:
      - web-net
    depends_on:
      - web

networks:
  web-net:
In this setup, the load_balancer listens on port 80 and routes requests to web_1, web_2, and web_3 internally.
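For reference, a minimal nginx.conf for the load_balancer service might look like the sketch below (not a production-hardened config). The proxy_pass http://web line works because Docker’s embedded DNS resolves the web service name to the replica containers:

```
events {}

http {
    server {
        listen 80;

        location / {
            # 'web' resolves via Docker's internal DNS to the scaled replicas
            proxy_pass http://web;
        }
    }
}
```

Save this as nginx.conf next to the docker-compose.yml so the bind mount in the load_balancer service picks it up.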
This replicas key is most useful when used with a reverse proxy (like Traefik or another Nginx instance) that can load-balance requests between the replicas within the Docker network, without each replica needing to expose a port on the host.
The --scale flag
A common method for scaling is the --scale flag, a carry-over from Compose v1. This flag is applied at runtime and overrides any replicas key.
However, you must be careful with port definitions. If your service defines a fixed host port mapping (e.g., "8000:80"), running docker compose up --scale will cause an error for the second and third containers as they all try to bind to the same host port.
To use --scale for services that expose ports, you must not use a fixed host port mapping.
Option 1: Map to Random Host Ports (Good for Development)
You can modify your docker-compose.yml to specify only the container port. This tells Docker to map port 80 in each container to a random, available port on your host machine.
services:
  web:
    image: nginx:alpine
    ports:
      - "80" # No fixed host port
    volumes:
      - ./app:/usr/share/nginx/html
Now, when you run the scale command:
- docker compose up -d --scale web=3
Run docker compose ps to see the result. Each container will be running on a different, randomly assigned host port:
Output
        Name                   Command               State          Ports
----------------------------------------------------------------------------------
compose-demo_web_1   /docker-entrypoint.sh ngin ...   Up   0.0.0.0:49154->80/tcp
compose-demo_web_2   /docker-entrypoint.sh ngin ...   Up   0.0.0.0:49155->80/tcp
compose-demo_web_3   /docker-entrypoint.sh ngin ...   Up   0.0.0.0:49156->80/tcp
Option 2: Use a Reverse Proxy (Good for Production)
The other solution, as mentioned in the replicas section, is to remove the ports directive from the web service entirely. You would then use a separate load balancer container (which has the only public port) to manage and distribute traffic to the scaled replicas within the Docker network.
To stop and remove all three containers, the command remains the same:
- docker compose down
By default, Docker Compose creates a single bridge network for your application. Every service in the file is attached to it, which is how the wordpress container was able to find the db container just by using its service name (db).
However, you can define your own custom networks for better isolation and control.
Let’s modify the WordPress example to use a custom bridge network.
services:
  db:
    image: mysql:8.0
    ...
    networks:
      - app_net

  wordpress:
    image: wordpress:latest
    ...
    ports:
      - "8001:80"
    depends_on:
      - db
    networks:
      - app_net

volumes:
  db_data:

networks:
  app_net:
    driver: bridge
Here’s what we added:
- networks (top-level): This defines a new network named app_net and specifies it should use the standard bridge driver.
- networks (service-level): Under both db and wordpress, this key attaches them to our app_net network.

If you had a third service (like an analytics tool) that you did not attach to app_net, it would be completely isolated and unable to communicate with the db or wordpress containers.
For multi-host clustering with Docker Swarm, you would change the driver from bridge to overlay. The bridge driver is for communication between containers on a single host, which is the standard for most Docker Compose use cases.
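Sketched, that change is a single line in the top-level networks block (only meaningful when deploying to a Swarm cluster):

```yaml
networks:
  app_net:
    driver: overlay   # requires Docker Swarm mode; 'bridge' is the single-host default
```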
As your applications grow, your docker-compose.yml file can become large and difficult to manage. Docker Compose supports an include directive, allowing you to split your configuration across multiple files.
Imagine you want to separate your WordPress and database definitions, and perhaps have a common docker-compose.override.yml for development-specific settings (like binding a port on the database).
Your directory structure might look like this:
compose-demo/
├── docker-compose.yml
├── docker-compose.db.yml
└── docker-compose.web.yml
First, create docker-compose.db.yml. This file will only define the db service.
services:
  db:
    image: mysql:8.0
    container_name: mysql_db
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: <^your_root_password_here^>
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: <^your_wordpress_password_here^>
    restart: unless-stopped

volumes:
  db_data:
Next, create docker-compose.web.yml. This file will only define the wordpress service.
services:
  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    ports:
      - "8001:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: <^your_wordpress_password_here^>
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wp_content:/var/www/html/wp-content
    depends_on:
      - db
    restart: unless-stopped
Now, your main docker-compose.yml file becomes very simple. It just uses include to pull in the other files.
include:
  - docker-compose.db.yml
  - docker-compose.web.yml
When you run docker compose up -d, Compose will read all three files, merge the configurations, and start the db and wordpress services exactly as if they were defined in a single file. This approach makes your configuration much more modular and reusable.
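The docker-compose.override.yml file mentioned earlier works through a related mechanism: when a file with that name is present, Compose automatically merges it over docker-compose.yml. A development-only override that exposes the database port might look like this (a sketch; the port mapping is illustrative):

```yaml
# docker-compose.override.yml is merged automatically by `docker compose up`
services:
  db:
    ports:
      - "3306:3306"   # expose MySQL to the host for local debugging only
```

Because the override file is loaded only when present, you can keep it out of version control and leave production deployments unaffected.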
Note: The include: directive requires Docker Compose v2.20 or later. If your system uses an older version, you can combine files using docker compose -f docker-compose.db.yml -f docker-compose.web.yml up.
Migrating from docker-compose (v1) to docker compose (v2)
The docker compose command you installed in Step 1 is “Compose v2.” It is a Go-based plugin built directly into the Docker CLI.
The original version, “Compose v1,” was a separate Python tool invoked with docker-compose (a hyphen). As of July 2023, Compose v1 is no longer supported and has been deprecated.
The main differences are:
- Command name: The hyphen in docker-compose is now a space (docker compose).
- Installation: v1 was a Python tool installed via pip. v2 is included with Docker Desktop or installed as a CLI plugin, as you did in Step 1.
- File compatibility: Your docker-compose.yml files are almost 100% backward compatible. Compose v2 fully supports file versions 3.x. You do not need to change your YAML files for migration, only your commands.
- --project-name flag: The -p flag still works, but the full flag is now --project-name instead of --project_name.

Most commands are identical, just with the hyphen removed. Here is a table comparing common v1 commands to their v2 equivalents.
| docker-compose (v1) Command | docker compose (v2) Command | Notes |
|---|---|---|
| docker-compose up -d | docker compose up -d | No change in syntax. |
| docker-compose down | docker compose down | No change in syntax. |
| docker-compose ps | docker compose ps | No change in syntax. |
| docker-compose logs | docker compose logs | No change in syntax. |
| docker-compose stop | docker compose stop | No change in syntax. |
| docker-compose build | docker compose build | No change in syntax. |
| docker-compose exec web bash | docker compose exec web bash | No change in syntax. |
| docker-compose run web bash | docker compose run web bash | run creates a new container. exec runs in an existing one. |
| docker-compose up --scale web=3 | docker compose up --scale web=3 | The --scale flag is still supported in v2. |
As you can see, for most day-to-day use, the only change is docker-compose -> docker compose. If you use scripts, you can update them by simply removing the hyphen.
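If you maintain such scripts, a sed one-liner can handle the rewrite. The sketch below runs against a throwaway file in /tmp; point it at your own scripts, and review the result before committing, since it would also rewrite file names such as docker-compose.yml that a script may reference:

```shell
# Create a sample deployment script that still uses the v1 command
printf 'docker-compose up -d\ndocker-compose logs\n' > /tmp/deploy.sh

# Replace every 'docker-compose' with 'docker compose', editing in place
sed -i 's/docker-compose/docker compose/g' /tmp/deploy.sh

cat /tmp/deploy.sh
# prints:
# docker compose up -d
# docker compose logs
```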
If you still have the old docker-compose v1 installed, you can remove it to avoid confusion:
- sudo pip3 uninstall docker-compose
Or, if it was installed by apt:
- sudo apt remove docker-compose
When you work with Docker Compose, you may encounter issues related to file syntax, permissions, or container runtime conflicts. Most problems can be diagnosed and resolved by methodically checking your configuration, permissions, and container logs.
Let’s look at some of the most common errors and their solutions.
This is one of the most common errors for new Docker users. You run docker compose up and see an error message about the Docker daemon socket.
Output
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
Problem: Your non-root user does not have permission to communicate with the Docker daemon, which runs as root.
Solution: You must add your user to the docker group, which was created during Docker’s installation.
Add your user to the docker group:
- sudo usermod -aG docker ${USER}
For the new group membership to take effect, you must log out and log back in.
Alternatively, you can activate the group changes for your current terminal session by typing:
- newgrp docker
This should resolve any permission errors related to the Docker socket.
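After logging back in or running newgrp docker, you can verify that the change took effect by listing your user's groups; docker should appear in the output:

```shell
# Print the groups the current user belongs to.
# Once the usermod change is active, "docker" will be among them.
id -nG
```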
Docker Compose fails to run and reports that your docker-compose.yml file is invalid.
Output
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.web.ports contains an invalid type, it should be a list
Or:
Output
mapping values are not allowed in this context at line 5
Problem: The docker-compose.yml file relies on strict YAML syntax. The most common mistake is incorrect indentation. YAML uses spaces, not tabs, to define structure.

Solution: Carefully check the indentation in your docker-compose.yml file.
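For reference, here is a minimal sketch of the indentation Compose expects (the service name web is just an example):

```yaml
services:                  # top-level key, starts in column 0
  web:                     # service name, indented two spaces
    image: nginx:latest    # service options, two spaces deeper
    ports:
      - "8000:80"          # list items indented under their key
```

You can also run docker compose config to validate the file: it prints the parsed configuration, or reports the first syntax error it finds.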
- Ensure all top-level keys (version, services, volumes) start in the same column.
- Ensure nested keys (such as image, ports, and volumes under a service) are indented two spaces more than their parent service (e.g., web).

You try to start your environment, but the command fails with an error message that the address is already in use.
Output
Error starting userland proxy: listen tcp 0.0.0.0:8000: bind: address already in use
Problem: Another process on your host machine is already listening on the port you are trying to map (in this case, port 8000). This is often another Docker container or a local development server.
Solution: You have two options:
Stop the other process. You can find the process using the port with this command:
- sudo lsof -i :8000
If it is another Docker container, stop it with docker stop <container_id>.
Change the host port in your docker-compose.yml file. This is often the simplest fix. Change the ports mapping from "8000:80" to a different port, such as "8001:80".
...
    ports:
      - "8001:80" # Changed from 8000
...
Connection Refused (depends_on)

In a multi-service application (like the WordPress and MySQL example), the wordpress container starts but its logs show Connection refused or MySQL server has gone away when trying to connect to the db service.
Problem: You have used depends_on, but this directive only waits for the db container to start. It does not wait for the MySQL application inside the container to be fully initialized and ready to accept connections.
Solution: The application (in this case, WordPress) must be configured to retry its connection. Most modern images have this retry logic built-in. For images that do not, you must implement a healthcheck.
You can add a healthcheck to your db service. The wordpress service’s depends_on can then be configured to wait for the database to be “healthy,” not just “started.”
Example Healthcheck for MySQL:
services:
  db:
    image: mysql:8.0
    ...
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-pyour_root_password_here"]
      interval: 10s
      timeout: 5s
      retries: 5

  wordpress:
    image: wordpress:latest
    ...
    depends_on:
      db:
        condition: service_healthy # This now waits for the healthcheck
...
You may encounter permission errors in your container logs, or find that your container’s data directory is empty.
Problem: The container runs as a specific user (e.g., www-data with User ID 33), but the host directory you mounted is owned by your user (e.g., ubuntu with User ID 1000). The container’s user does not have permission to write to the host directory.
Solution: Change the ownership of the host directory to match the user ID inside the container. You can find the container’s user ID by running docker compose exec <service_name> id. For example, if the ID is 33 (common for www-data):
- sudo chown -R 33:33 ./wp_content
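You can confirm the mismatch first by printing the host directory's numeric owner and comparing it with the ID reported inside the container (./wp_content is the hypothetical mounted directory from above):

```shell
# Show the numeric owner UID and group GID of the host directory.
# Compare this with the output of: docker compose exec wordpress id
stat -c '%u:%g' ./wp_content
```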
Problem: Your docker-compose.yml uses a relative path like ./app, but your index.html file is not being served.
Solution: Docker Compose resolves relative host paths against the directory that contains the docker-compose.yml file. To avoid confusion, always run Compose commands from the same directory as your docker-compose.yml file so the two locations match.
Your application container logs show Host not found or Could not resolve host: db.
Problem: The wordpress container cannot find the db container on the network.

Solution:

Check Service Names: Ensure the hostname in your application’s configuration (e.g., the WORDPRESS_DB_HOST environment variable) is exactly the same as the service name in your docker-compose.yml (e.g., db).
Inspect the Network: Use docker compose ps to find your project’s name (e.g., compose-demo). Then, inspect the default network:
- docker network inspect compose-demo_default
The JSON output will list all containers attached to this network. If one of your services is missing, check your docker-compose.yml for any custom networks configuration that might be isolating it.
You type docker-compose up (with a hyphen) and see a “command not found” error.
Output
docker-compose: command not found
Problem: You have installed Docker Compose v2, where the command replaces the hyphen (-) with a space.

Solution: Use the v2 syntax instead:

| docker-compose (v1) | docker compose (v2) |
|---|---|
| docker-compose up | docker compose up |
| docker-compose down | docker compose down |
| docker-compose ps | docker compose ps |
| docker-compose logs | docker compose logs |
Always use docker compose (space) when following this guide.
Docker Compose is a tool for defining and running multi-container Docker applications.
It uses a single YAML file (by default, docker-compose.yml) to configure all of your application’s components, which are called services. This file also defines the networks that allow the services to communicate with each other and the volumes used for persistent data.
With this single file, you can manage your entire application stack with simple commands:
- docker compose up starts and runs your entire application, including all specified containers, networks, and volumes.
- docker compose down stops and removes all the containers and networks created by your application (add the -v flag to also remove named volumes).

It is most useful for managing applications that require multiple components, such as a website that needs a web server (like Nginx), an application backend (like WordPress), and a database (like MySQL).
The recommended method is to install Docker Compose as a plugin for the Docker CLI. This is done by installing the docker-compose-plugin package from Docker’s official apt repository.
First, ensure you have followed the official Docker documentation to set up Docker’s apt repository on your Ubuntu system.
Update your package list:
- sudo apt update
Install the Docker Compose plugin:
- sudo apt install docker-compose-plugin
Verify the installation by checking the version. The command uses a space, not a hyphen.
- docker compose version
Because Docker Compose (v2) is installed as a system package using apt, you can update it using the standard Ubuntu software update process.
Refresh your local package index:
- sudo apt update
Run a system-wide upgrade, which will include the Compose plugin:
- sudo apt upgrade
Alternatively, if you only want to update the plugin itself, you can run:
- sudo apt install --only-upgrade docker-compose-plugin
Yes, Docker Compose is frequently used in production, particularly for applications that run on a single host.
It provides a straightforward way to define, deploy, and manage the lifecycle of your application’s services, networks, and volumes. For a single-server deployment, it is a very effective and simple-to-manage solution.
For more complex scenarios that require coordinating containers across multiple hosts (a cluster), other tools are more common. These include Docker Swarm (which uses a similar Compose file syntax) and Kubernetes (which is the industry standard for large-scale container orchestration).
This table outlines the primary differences between the Docker CLI and the Docker Compose CLI.
| Feature | docker (Docker Engine) | docker compose (Compose Plugin) |
|---|---|---|
| Scope | Manages individual Docker objects. | Manages a complete, multi-container application as a single unit. |
| Primary Use | Building images (docker build), running single containers (docker run), managing containers (docker ps), images, volumes, and networks. | Orchestrating multiple services defined in a docker-compose.yml file. |
| Commands | Low-level commands focused on one object. Example: docker run -d -p 80:80 -v ./:/app nginx | High-level commands for the whole application. Example: docker compose up |
| Analogy | A building block (a single container). | The blueprint and construction manager for the entire building (the application). |
In short, you use docker to interact with a single container. You use docker compose to manage your entire application stack (e.g., web, app, db) all at once.
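As an illustration, the docker run example from the table above can be written as an equivalent Compose file (the service name web is arbitrary):

```yaml
services:
  web:
    image: nginx
    ports:
      - "80:80"      # same port mapping as -p 80:80
    volumes:
      - ./:/app      # same bind mount as -v ./:/app
```

Running docker compose up -d then starts the same container, with the added benefits of a named project network and a declarative, version-controllable record of the configuration.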
You run multiple containers by defining them as services in your docker-compose.yml file.
Create a file named docker-compose.yml.
Inside this file, use the services: key to define each container you want to run.
Here is a practical example that defines and runs two containers: a WordPress site and a MySQL database.
services:
  db:
    # This is the first service (container)
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: your_password_here
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    # This is the second service (container)
    image: wordpress:latest
    ports:
      - "8001:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: your_password_here
    depends_on:
      - db

volumes:
  db_data:
Save the file and open a terminal in the same directory.
Run a single command to start both containers:
- docker compose up -d
Docker Compose will read the file, create a shared network for the services, pull both the mysql and wordpress images, and start a container for each service.
Docker Compose is a standard tool for development environments because it solves several common problems.
Consistent Environments: It ensures every developer on a team runs the exact same services (database, cache, web server) with the exact same versions and configurations. This is defined in the docker-compose.yml file, which is committed to version control. This eliminates the “it works on my machine” problem.
Simplicity: It replaces complex setup scripts and long docker run commands. A developer only needs to run docker compose up to start the entire application stack and docker compose down to stop it.
Service Isolation: Developers can work on multiple projects on the same machine without dependency conflicts. Project A can use PostgreSQL 9.6 and Project B can use PostgreSQL 14, as each database runs in an isolated container managed by its own Compose file.
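For instance, each project pins its own database version in its own Compose file (the postgres tags here are illustrative):

```yaml
# project-a/docker-compose.yml
services:
  db:
    image: postgres:9.6
```

Project B’s docker-compose.yml can declare image: postgres:14; because each docker compose up creates its own project network and volumes, the two databases run side by side without conflict.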
Easy Integration Testing: Because Compose starts all of an application’s dependencies together, it creates a perfect local environment for running integration tests that verify how services interact with each other.
In this guide, you installed Docker Compose and configured a complete multi-container application. You started with a basic docker-compose.yml file for an Nginx web server and progressed to a more complex, realistic stack involving a WordPress application and a MySQL database. You have learned to manage the entire application lifecycle, from building and running services to stopping and removing them.
You are now familiar with key concepts for managing applications effectively, including service scaling, custom network definitions, and splitting your configuration into modular files using the include directive. By following the migration guide and troubleshooting steps, you can also resolve common issues like port conflicts, “Permission denied” errors, and YAML syntax mistakes. The skills covered here will allow you to build consistent, reproducible development environments and deploy single-host applications with confidence.
For a complete reference of all available docker compose commands, check the official documentation.
Dev/Ops passionate about open source, PHP, and Linux. Former Senior Technical Writer at DigitalOcean. Areas of expertise include LAMP Stack, Ubuntu, Debian 11, Linux, Ansible, and more.
With over 6 years of experience in tech publishing, Mani has edited and published more than 75 books covering a wide range of data science topics. Known for his strong attention to detail and technical knowledge, Mani specializes in creating clear, concise, and easy-to-understand content tailored for developers.