
A private Docker registry is a self-hosted server that stores and distributes Docker container images within a controlled environment. Unlike public registries such as Docker Hub, a private registry restricts access to authorized users and keeps your images off the public internet. It is built on the open-source Distribution project (maintained by the CNCF) and exposed through the Docker Registry HTTP API V2.
Teams choose self-hosted registries for several reasons: keeping proprietary application code private, reducing image pull latency by hosting images close to deployment infrastructure, maintaining full control over access policies, and operating in air-gapped or compliance-restricted environments. The trade-off is that you take on responsibility for uptime, TLS certificate renewal, storage management, and backups, tasks that managed services like DigitalOcean Container Registry handle for you.
In this tutorial, you will set up and secure your own private Docker registry on an Ubuntu server. You will use Docker Compose to define the registry container configuration, Nginx as a reverse proxy with TLS termination, and htpasswd for HTTP Basic authentication. By the end, you will be able to push a custom Docker image to your private registry and pull it securely from a remote server.
Note: This tutorial was originally written for Ubuntu 22.04 and has been verified to work on Ubuntu 24.04 and later LTS releases. The commands and configuration files used here are not version-specific and apply across current Ubuntu LTS versions.
Key points before you begin:

- The registry Docker image (based on the CNCF Distribution project) is open-source and free to use. Your primary costs are the server, storage, and a domain with TLS certificates.
- The registry container listens on port 5000, which you will place behind an Nginx reverse proxy.
- An htpasswd file restricts who can push and pull images. Every Docker client must run docker login before interacting with the registry.

To complete this tutorial, you will need the following:
- Two Ubuntu servers set up with a sudo non-root user and a firewall. One server will host your private Docker registry, and the other will be your client server.

On the host server, you will also need:
- Nginx installed and secured with a TLS certificate for your_domain.

Running Docker on the command line works well for starting out and testing containers, but managing larger deployments with multiple containers running in parallel requires a better approach.
With Docker Compose, you write a single .yml file to define each container’s configuration and the relationships between containers. You can then use the docker compose command to manage all components as a group.
The Docker Registry is itself an application with multiple components, so you will use Docker Compose to manage it. To start an instance of the registry, you will set up a docker-compose.yml file to define it and specify where the registry stores its data on disk.
Create a directory called docker-registry on the host server to hold the configuration:
- mkdir ~/docker-registry
Navigate into the directory:
- cd ~/docker-registry
Create a subdirectory called data where the registry will store image layers:
- mkdir data
Create and open a file called docker-compose.yml:
- nano docker-compose.yml
Add the following content, which defines a basic instance of a Docker Registry:
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./data:/data
This configuration defines a single service called registry using the registry:2 image from Docker Hub. Under ports, it maps port 5000 on the host to port 5000 in the container, allowing requests sent to the host on that port to reach the registry process.
The environment section sets REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY to /data, telling the registry where to store image layers inside the container. The volumes section maps the ./data directory on the host to /data in the container, so image data persists on the host file system even if the container is recreated.
Note: The version key in Docker Compose files is obsolete as of Docker Compose v2.0 and later. If you include it, Docker Compose will print a warning and ignore it. Modern compose files start directly with the services key.
Save and close the file.
Start the configuration by running:
- docker compose up
The registry container and its dependencies will be downloaded and started:
Output
[+] Running 2/2
 ✔ Network docker-registry_default  Created  0.1s
 ✔ Container docker-registry-registry-1  Created  0.1s
Attaching to docker-registry-registry-1
docker-registry-registry-1 | time="2024-01-15T10:31:20.404Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." ...
docker-registry-registry-1 | time="2024-01-15T10:31:20.405Z" level=info msg="redis not configured" ...
docker-registry-registry-1 | time="2024-01-15T10:31:20.412Z" level=info msg="using inmemory blob descriptor cache" ...
docker-registry-registry-1 | time="2024-01-15T10:31:20.413Z" level=info msg="listening on [::]:5000" ...
...
You will address the No HTTP secret provided warning message later in this tutorial.
The last line of the output confirms the registry has started successfully and is listening on port 5000.
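Before stopping it, you can confirm from a second terminal on the host that the registry answers on port 5000. The /v2/ endpoint returns an empty JSON object, and the _catalog endpoint returns an empty repository list:

```shell
# Sanity-check the registry API locally while `docker compose up` is running.
curl -s http://localhost:5000/v2/ ; echo
curl -s http://localhost:5000/v2/_catalog ; echo
```

You will use these same endpoints again later, through Nginx, to verify the proxy and authentication setup.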
Press CTRL+C to stop execution.
In this step, you created a Docker Compose configuration that starts a Docker Registry listening on port 5000. In the next steps, you will expose it at your domain and set up authentication.
As part of the prerequisites, you enabled HTTPS at your domain. To expose your secured Docker Registry there, you need to configure Nginx to forward traffic from your domain to the registry container.
You already set up the /etc/nginx/sites-available/your_domain file containing your server configuration. Open it for editing:
- sudo nano /etc/nginx/sites-available/your_domain
Find the existing location block:
...
location / {
...
}
...
You need to forward traffic to port 5000, where your registry will be listening. You also want to append headers to the request forwarded to the registry, which give it additional information about the original client request. Replace the existing contents of the location block with the following lines:
...
location / {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
        return 404;
    }

    proxy_pass http://localhost:5000;
    proxy_set_header Host $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;   # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
}
...
The if block checks the user agent of the request and verifies that the Docker client version is above 1.5 and that it is not a Go application trying to access the registry directly. For more details on this configuration, see the Docker registry Nginx guide. The proxy_set_header directives forward the original client information to the registry container, which is important for logging and access control. For more on Nginx reverse proxy configuration, see How To Configure Nginx as a Reverse Proxy on Ubuntu 22.04.
Save and close the file when you are done. Apply the changes by restarting Nginx:
- sudo systemctl restart nginx
If you receive an error message, double-check the configuration you added.
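Nginx can also check the configuration for you before a restart; the test points at the exact file and line of the first mistake it finds:

```shell
# Validate the Nginx configuration without reloading it.
sudo nginx -t
```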
To confirm that Nginx is properly forwarding traffic to your registry container on port 5000, start the registry:
- docker compose up
Then, in a browser window, navigate to your domain and access the v2 endpoint:
https://your_domain/v2/
The browser will load an empty JSON object:
{}
In your terminal, you will receive output confirming that a GET request was made to /v2/. The container received the request through Nginx port forwarding and returned a response of {}. The status code 200 means the container handled the request successfully.
Press CTRL+C to stop execution.
Now that you have set up port forwarding, you will improve the security of your registry by adding authentication.
Nginx allows you to set up HTTP authentication for the sites it manages, which you can use to restrict access to your Docker Registry. To achieve this, you will create an authentication file with htpasswd and add username and password combinations to it. That process enables HTTP Basic Auth for your registry.
Install the htpasswd utility by installing the apache2-utils package:
- sudo apt install apache2-utils -y
Create a directory to store the authentication file under ~/docker-registry/auth:
- mkdir ~/docker-registry/auth
Navigate to it:
- cd ~/docker-registry/auth
Create the first user, replacing username with the username you want to use. The -B flag selects the bcrypt algorithm, which the registry requires for htpasswd authentication:
- htpasswd -Bc registry.password username
Enter a password when prompted. The credentials will be written to registry.password.
Note: To add more users, re-run the previous command without -c:
- htpasswd -B registry.password username
The -c flag creates a new file. Removing it appends to the existing file instead.
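If you want to confirm a stored password later without re-creating the user, recent versions of htpasswd can verify it in place. The command prompts for the password and exits with status 0 on a match:

```shell
# Verify a password against the stored bcrypt hash in registry.password.
htpasswd -v registry.password username
```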
Now update docker-compose.yml to tell Docker to use the credentials file for authentication. Open it for editing:
- nano ~/docker-registry/docker-compose.yml
Update the file to include the authentication environment variables and the auth volume:
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./auth:/auth
      - ./data:/data
The new environment variables configure the registry to use HTTP Basic Auth with htpasswd. REGISTRY_AUTH is set to htpasswd, REGISTRY_AUTH_HTPASSWD_PATH points to the credentials file inside the container, and REGISTRY_AUTH_HTPASSWD_REALM sets the authentication realm name displayed in the browser login prompt. The ./auth directory is mounted into the container so the registry can read the credentials file.
Save and close the file.
Verify that authentication works correctly. Navigate to the main directory:
- cd ~/docker-registry
Then start the registry:
- docker compose up
In your browser, refresh the page at your domain. You will be prompted for a username and password.
After providing valid credentials, you will see the page with the empty JSON object:
{}
You have successfully authenticated and gained access to the registry. Press CTRL+C in your terminal to stop.
Your registry is now secured and can be accessed only after authentication. Next, you will configure it to run as a background process that survives reboots.
You can ensure that the registry container starts every time the system boots up, or after it crashes, by instructing Docker Compose to always keep it running.
Open docker-compose.yml for editing:
- nano docker-compose.yml
Add the restart directive and a REGISTRY_HTTP_SECRET environment variable to the registry service:
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
      REGISTRY_HTTP_SECRET: your_random_secret
    volumes:
      - ./auth:/auth
      - ./data:/data
Setting restart to always ensures the container restarts automatically after crashes or server reboots. The REGISTRY_HTTP_SECRET value should be a long, random string. You can generate one with:
- openssl rand -hex 32
This secret is used to sign state that the registry saves to the client. Replace your_random_secret with the generated value. If you run multiple registry instances behind a load balancer, all instances must share the same secret.
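If you want to script this step, a small sketch like the following generates a 64-character hex secret; the commented sed line (an assumption about your file layout, assuming you are in ~/docker-registry and the your_random_secret placeholder is still present) substitutes it into the compose file:

```shell
# Generate a 64-character hex secret suitable for REGISTRY_HTTP_SECRET.
secret=$(openssl rand -hex 32)
echo "$secret"
# Uncomment to write it into the compose file in place:
# sed -i "s/your_random_secret/$secret/" docker-compose.yml
```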
Save and close the file.
Start your registry as a background process by passing in the -d flag:
- docker compose up -d
With the registry running in the background, you can close the SSH session and the registry will continue operating.
Because Docker images can be very large, you will next increase the maximum file upload size that Nginx accepts.
Before you push an image to the registry, you need to ensure that Nginx can handle large file uploads. The default maximum body size in Nginx is 1m, which is not enough for Docker images. To increase it, edit the main Nginx configuration file at /etc/nginx/nginx.conf:
- sudo nano /etc/nginx/nginx.conf
Add the following line inside the http block:
...
http {
    client_max_body_size 16384m;
    ...
}
...
The client_max_body_size parameter is now set to 16384m, making the maximum upload size 16 GB.
Save and close the file.
Restart Nginx to apply the configuration changes:
- sudo systemctl restart nginx
You can now upload large images to your Docker Registry without Nginx blocking the transfer.
Now that your Docker Registry server is running and accepting large file sizes, you can try pushing an image to it. Since you do not have any images ready, you will use the ubuntu image from Docker Hub as a test.
In a new terminal session on your client server, run the following command to download the ubuntu image, run it, and get access to its shell:
- docker run -t -i ubuntu /bin/bash
The -i and -t flags give you interactive shell access into the container.
Once inside, create a file called SUCCESS:
- touch /SUCCESS
This customization will let you confirm later that you are working with the exact same image.
Exit the container shell:
- exit
Create a new image from the container you just customized:
- docker commit $(docker ps -lq) test-image
The new image is available locally. Before pushing it to your private registry, log in:
- docker login https://your_domain
Enter the username and password you defined in Step 3 when prompted.
The output will be:
Output
...
Login Succeeded
Once logged in, tag the image with your registry’s domain:
- docker tag test-image your_domain/test-image
Push the tagged image to your registry:
- docker push your_domain/test-image
You will receive output similar to the following:
Output
Using default tag: latest
The push refers to a repository [your_domain/test-image]
1cf9c9034825: Pushed
f4a670ac65b6: Pushed
latest: digest: sha256:95112d0af51e5470d74ead77932954baca3053e04d201ac4639bdf46d5cd515b size: 736
You have verified that your registry handles user authentication and allows authenticated users to push images.
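As a further check, the registry's HTTP API can list the tags stored for a repository. With the tutorial's placeholders, the request looks like:

```shell
# List the tags stored for the test-image repository.
curl -u username https://your_domain/v2/test-image/tags/list
```

After entering your password, you will receive a JSON response similar to {"name":"test-image","tags":["latest"]}.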
Now that you have pushed an image to your private registry, you will try pulling it.
On the host server, log in with the username and password you set up previously:
- docker login https://your_domain
Pull the test-image:
- docker pull your_domain/test-image
Docker will download the image. Run the container:
- docker run -it your_domain/test-image /bin/bash
List the files present:
- ls
The output will include the SUCCESS file you created earlier, confirming that this container uses the same image:
SUCCESS bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Exit the container shell:
- exit
You can also query your registry’s catalog through the HTTP API to list all stored repositories:
- curl -u username https://your_domain/v2/_catalog
After entering your password, you will see:
Output
{"repositories":["test-image"]}
You have tested pushing and pulling images and confirmed your private Docker registry is fully operational.
By default, the Docker registry stores image layers on the local file system. For production deployments where you need durability, scalability, and offsite backups, you can configure the registry to use an S3-compatible object storage backend instead. DigitalOcean Spaces is an S3-compatible object storage service that works with the registry’s built-in S3 storage driver.
To use Spaces as your storage backend, you will need a Space (bucket) in your chosen region and a Spaces access key and secret key, which you can generate from the API section of the DigitalOcean Control Panel.
Update your docker-compose.yml to replace the filesystem storage configuration with S3-compatible storage:
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_HTTP_SECRET: your_random_secret
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: your_spaces_access_key
      REGISTRY_STORAGE_S3_SECRETKEY: your_spaces_secret_key
      REGISTRY_STORAGE_S3_BUCKET: your_spaces_bucket_name
      REGISTRY_STORAGE_S3_REGION: your_spaces_region
      REGISTRY_STORAGE_S3_REGIONENDPOINT: https://your_spaces_region.digitaloceanspaces.com
    volumes:
      - ./auth:/auth
Replace the placeholder values with your actual Spaces credentials and bucket details. The REGISTRY_STORAGE_S3_REGION should match your Spaces region (for example, nyc3, sfo3, or ams3). The REGISTRY_STORAGE_S3_REGIONENDPOINT must include the full endpoint URL.
Notice that the ./data:/data volume mapping and REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY are removed since images are now stored in Spaces instead of the local file system.
Restart the registry to apply the changes:
- docker compose down && docker compose up -d
Push a test image to verify that layers are being written to your Spaces bucket. You can confirm this by checking the bucket contents in the DigitalOcean Control Panel or with the s3cmd tool.
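The S3 storage driver writes image data under a docker/registry/v2/ prefix in the bucket. Assuming s3cmd is already configured with your Spaces credentials, you can list the stored repositories directly:

```shell
# List repository data written by the registry's S3 driver.
s3cmd ls -r s3://your_spaces_bucket_name/docker/registry/v2/repositories/
```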
For a more detailed walkthrough on using Spaces with Docker Registry and Kubernetes, see How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces.
After your registry has been running in production, image layers accumulate on disk (or in your object storage bucket). The Docker registry does not automatically clean up unreferenced layers when you overwrite a tag with a new image. Over time, this leads to wasted storage. The registry provides a garbage collection process to reclaim this space.
By default, the registry API does not allow image deletion. To enable it, add the following environment variable to your docker-compose.yml:
environment:
  ...
  REGISTRY_STORAGE_DELETE_ENABLED: "true"
Restart the registry after making this change:
- docker compose down && docker compose up -d
To delete an image, you first need to retrieve its digest. Query the registry for the image’s manifest:
- curl -u username -sS -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    https://your_domain/v2/test-image/manifests/latest \
    -o /dev/null -D - | grep Docker-Content-Digest
The output will show a digest starting with sha256:. Use that digest to delete the manifest:
- curl -u username -X DELETE \
    https://your_domain/v2/test-image/manifests/sha256:digest_value
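The two calls above can be combined into a small convenience script. This is a sketch using the tutorial's placeholders (username, your_domain); it fetches the digest for a tag and then issues the DELETE in one step:

```shell
#!/bin/bash
# Fetch the manifest digest for a tag, then delete that manifest.
IMAGE=test-image
TAG=latest
digest=$(curl -u username -sS \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  -o /dev/null -D - "https://your_domain/v2/${IMAGE}/manifests/${TAG}" \
  | grep -i docker-content-digest | tr -d '\r' | awk '{print $2}')
echo "Deleting ${IMAGE}@${digest}"
curl -u username -X DELETE "https://your_domain/v2/${IMAGE}/manifests/${digest}"
```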
After deleting manifests, the image layers remain on disk until you run garbage collection. The registry should be in read-only mode or stopped during this process to prevent data inconsistency.
Run garbage collection in dry-run mode first to see what would be removed:
- docker compose exec registry bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml
If the output looks correct, run garbage collection without the --dry-run flag:
- docker compose exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
To check how much space your registry data is consuming on the local file system:
- du -sh ~/docker-registry/data
For Spaces-backed registries, check your bucket usage in the DigitalOcean Control Panel or with:
- s3cmd du s3://your_spaces_bucket_name
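Garbage collection can also be scheduled during a quiet window with cron. The entry below is a sketch, not a drop-in: the project path /home/sammy/docker-registry is an assumption you should adjust to your own layout:

```
# Hypothetical crontab entry: run garbage collection every Sunday at 03:00.
# Adjust the path to wherever your docker-registry project lives.
0 3 * * 0 cd /home/sammy/docker-registry && docker compose exec -T registry bin/registry garbage-collect /etc/docker/registry/config.yml
```

Because the registry should not accept writes during garbage collection, align any schedule with your team's push activity, or stop and restart the container around the run.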
This section covers errors you are likely to encounter when setting up and operating a private Docker registry.
This error occurs when the Docker client does not trust the TLS certificate presented by your registry. Common causes include an expired certificate, a self-signed certificate that the client has not been configured to trust, or a server clock that has drifted. Check your certificate's status with:
- sudo certbot certificates
If the certificate has expired, renew it:
- sudo certbot renew
- sudo systemctl restart nginx
If the server clock has drifted, certificate validation can also fail; check the system date and correct it using NTP.

If docker login fails with an authentication error, the credentials you provided do not match what is stored in your htpasswd file. Verify the file exists and contains the expected user:
- cat ~/docker-registry/auth/registry.password
You can recreate a user’s credentials by running:
- htpasswd -B ~/docker-registry/auth/registry.password username
After updating the file, restart the registry:
- cd ~/docker-registry && docker compose restart
Check the container logs for error details:
- docker compose logs registry
If the registry container fails to start, common causes include:

- A syntax error in your docker-compose.yml file. Validate YAML syntax with docker compose config.
- Port 5000 already in use by another process. Check with sudo lsof -i :5000.

A 502 Bad Gateway response means Nginx cannot reach the registry container. Verify that the registry container is running:
- docker compose ps
If the container is not running, start it with docker compose up -d. Also confirm that the proxy_pass directive in your Nginx configuration points to http://localhost:5000 and that port 5000 is not blocked by your firewall:
- sudo ufw status
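When diagnosing a 502, it also helps to bypass Nginx and query the registry directly on its published port. A 401 response with a WWW-Authenticate header means the registry itself is healthy and is simply requiring authentication, while a refused connection points at the container:

```shell
# Talk to the registry directly, skipping Nginx:
curl -i http://localhost:5000/v2/
```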
Choosing between a self-hosted registry and a managed service depends on your team’s operational capacity and requirements. The following table compares the most common options:
| Feature | Self-Hosted (Distribution) | DigitalOcean Container Registry | Docker Hub (Free Tier) | Harbor |
|---|---|---|---|---|
| Cost | Server + storage costs | Starts at $0/month (Starter) | Free for public repos | Server + storage costs |
| Setup Complexity | You manage everything | Managed by DigitalOcean | No setup needed | You manage everything |
| Private Repositories | Unlimited | 1 (Starter) to Unlimited (Professional) | 1 free private repo | Unlimited |
| TLS/Certificate Management | Manual (Let’s Encrypt or self-signed) | Handled automatically | Handled automatically | Manual |
| Vulnerability Scanning | Not built-in (requires separate tooling) | Built-in | Paid plans only | Built-in (Trivy) |
| Garbage Collection | Manual CLI command | Automatic | Automatic | Automatic scheduling |
| Kubernetes Integration | Manual configuration | Native integration with DOKS | Manual configuration | Helm chart available |
| Access Control | htpasswd or token-based | IAM and API tokens | Docker Hub teams | RBAC, LDAP, OIDC |
| Storage Backend Options | Local filesystem, S3, Azure, GCS | Managed by DigitalOcean | Managed by Docker | Local filesystem, S3, Azure, GCS |
| Web UI | None (API only) | DigitalOcean Control Panel | Docker Hub UI | Built-in web UI |
For teams that want full control or need to operate in restricted network environments, a self-hosted registry is the right choice. For teams that prefer to focus on building applications rather than managing registry infrastructure, DigitalOcean Container Registry provides a managed solution that integrates directly with DigitalOcean Kubernetes.
Once your private registry is running, you can automate image builds and pushes from your CI/CD pipeline. Here is a minimal example using GitHub Actions:
name: Build and Push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to private registry
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login https://your_domain -u ${{ secrets.REGISTRY_USERNAME }} --password-stdin
      - name: Build image
        run: docker build -t your_domain/my-app:${{ github.sha }} .
      - name: Push image
        run: docker push your_domain/my-app:${{ github.sha }}
Replace your_domain with your registry’s domain name. Store your registry username and password as GitHub Actions encrypted secrets.
This workflow logs in to your private registry, builds the Docker image with the commit SHA as a tag (which avoids caching issues and makes deployments traceable), and pushes it to the registry.
For teams using DigitalOcean’s managed registry, see How to Use CI/CD Systems with Your Container Registry for integration guides covering GitHub Actions, GitLab CI, and other platforms.
What is a private Docker registry?

A private Docker registry is a self-hosted server that stores and distributes Docker container images within a controlled environment. Unlike Docker Hub, a private registry restricts access to authorized users and does not expose images publicly. It is built on the open-source CNCF Distribution project and communicates through the Docker Registry HTTP API V2.
How much does it cost to run a private Docker registry?

The Docker registry image (registry:2) is open-source and free to use. The main costs are the server running the registry, storage for image layers, and any domain or TLS certificate expenses. On a DigitalOcean Droplet, the starting cost is the price of the smallest Droplet that meets your memory and storage requirements.
How do I add authentication to my private registry?

Authentication is added using HTTP Basic Auth backed by an htpasswd credentials file. Generate the file with the htpasswd utility from the apache2-utils package, then set the REGISTRY_AUTH, REGISTRY_AUTH_HTPASSWD_REALM, and REGISTRY_AUTH_HTPASSWD_PATH environment variables in the registry container configuration. Clients authenticate using docker login before pushing or pulling images.
Do I need HTTPS for a private Docker registry?

In most production setups, yes. The Docker client requires HTTPS connections to registries by default. Without TLS, you must configure every Docker client with an insecure-registries exception in the Docker daemon configuration, which is not recommended outside of local development. A valid domain with a Let’s Encrypt certificate is the standard approach.
How do I delete images from a private registry?

First, enable the deletion API by setting REGISTRY_STORAGE_DELETE_ENABLED to true. Then use the registry’s DELETE API endpoint to remove image manifests. After deleting manifests, run garbage collection with docker compose exec registry bin/registry garbage-collect /etc/docker/registry/config.yml to remove unreferenced layers from storage.
In this tutorial, you set up your own private Docker registry on Ubuntu, secured it with Nginx and TLS, added HTTP Basic Authentication, and configured it to run as a persistent background service. You also learned how to push and pull images, use DigitalOcean Spaces as a storage backend, perform registry maintenance with garbage collection, and integrate the registry with CI/CD pipelines.
A self-hosted registry gives you full control over your container image infrastructure. For teams that prefer a managed solution, DigitalOcean Container Registry handles TLS, storage, garbage collection, and Kubernetes integration out of the box.
To continue building on what you learned in this tutorial, explore the DigitalOcean community's Docker, Nginx, and Kubernetes tutorials, along with the CNCF Distribution project documentation.