By Tony Tran and Vinayak Baranwal
A reverse proxy is the recommended method for exposing an application server to the internet. Whether you are running a Node.js application in production or a minimal built-in web server with Flask, these application servers will often bind to localhost with a TCP port. This means that, by default, your application is only accessible locally on the machine it resides on. While you could bind to a public address to expose the application directly to the internet, these application servers are designed to be served from behind a reverse proxy in production environments. Doing so isolates the application server from direct internet access, lets you centralize firewall protection, and minimizes the attack surface for common threats such as denial-of-service attacks.
From a client’s perspective, interacting with a reverse proxy is no different from interacting with the application server directly. It is functionally the same, and the client cannot tell the difference. A client requests a resource and then receives it, without any extra configuration required by the client.
This tutorial will demonstrate how to set up a reverse proxy using Nginx, a popular web server and reverse proxy solution. You will install Nginx, configure it as a reverse proxy using the proxy_pass directive, and forward the appropriate headers from your client’s request. If you don’t have an application server on hand to test, you will optionally set up a test application with the WSGI server Gunicorn.
Nginx as a Reverse Proxy for Security and Scalability: Deploying Nginx as a reverse proxy protects your application servers from direct internet exposure, centralizes SSL/TLS management, and enables advanced features like WebSocket support, load balancing, and multi-application hosting. This approach is widely recommended for production environments to enhance both security and performance.
Universal Compatibility Across Ubuntu LTS Versions: The configuration steps and commands in this guide are fully compatible with Ubuntu 22.04 LTS, 24.04 LTS, and future LTS releases. Nginx package names, configuration file locations, and service management commands remain consistent, ensuring a smooth experience regardless of your chosen Ubuntu version.
Automated, Production-Ready SSL with Certbot and Let’s Encrypt: Secure your reverse proxy with HTTPS by integrating Certbot and Let’s Encrypt. This combination provides free, automated SSL certificates and seamless renewals, meeting modern security standards and reducing manual maintenance for your web infrastructure.
Critical Header Forwarding for Accurate Client Information: Properly forwarding headers such as Host, X-Forwarded-For, X-Real-IP, and X-Forwarded-Proto ensures your backend applications receive accurate client IP addresses and protocol details. This is essential for logging, security, and application logic that depends on original client information.
Advanced Features: WebSocket Support and Load Balancing: Nginx can be configured to handle WebSocket connections and distribute traffic across multiple backend servers using upstream blocks. These advanced capabilities enable real-time communication and horizontal scaling, making your infrastructure robust and responsive to high traffic loads.
AI-Enhanced Workflows with FastMCP Proxy Integration: For developers leveraging AI tools, integrating FastMCP Proxy allows you to bridge different transport protocols and compose remote tool servers into your local development workflow. This streamlines AI-assisted configuration generation and testing, accelerating DevOps and MLOps processes with modern, flexible proxying solutions.
Note: This tutorial has been validated on Ubuntu 22.04 LTS and Ubuntu 24.04 LTS, and is generally applicable to later LTS releases without changes.
To complete this tutorial, you will need:
- A server running Ubuntu 22.04 LTS or later, set up with a sudo-enabled non-root user and a firewall.
- The address of the application server you want to proxy, referred to as app_server_address throughout this tutorial. This can be an IP address with a TCP port (such as http://127.0.0.1:8000), or a unix domain socket (such as http://unix:/tmp/pgadmin4.sock for pgAdmin). If you do not have an application server set up to test with, you will be guided through setting up a Gunicorn application which will bind to http://127.0.0.1:8000.
- A domain name pointed at your server’s public IP address, referred to as your_domain in this tutorial.

Nginx is available for installation with apt through the default repositories. For details, see How to Install Nginx on Ubuntu 20.04. Update your repository index, then install Nginx:
- sudo apt update
- sudo apt install nginx
Press Y to confirm the installation. If you are asked to restart services, press ENTER to accept the defaults.
You need to allow access to Nginx through your firewall. Having set up your server according to the initial server prerequisites, add the following rule with ufw:
- sudo ufw allow 'Nginx HTTP'
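You can confirm that the rule is active by checking the firewall status:
- sudo ufw status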
Now you can verify that Nginx is running:
- systemctl status nginx
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-08-29 06:52:46 UTC; 39min ago
Docs: man:nginx(8)
Main PID: 9919 (nginx)
Tasks: 2 (limit: 2327)
Memory: 2.9M
CPU: 50ms
CGroup: /system.slice/nginx.service
├─9919 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
└─9920 "nginx: worker process"
Next you will add a custom server block with your domain and app server proxy.
It is recommended practice to create a custom configuration file for your new server block additions, instead of editing the default configuration directly. Create and open a new Nginx configuration file using nano or your preferred text editor:
- sudo nano /etc/nginx/sites-available/your_domain
Insert the following into your new file, making sure to replace your_domain and app_server_address. If you do not have an application server to test with, default to using http://127.0.0.1:8000 for the optional Gunicorn server setup in Step 3:
server {
    listen 80;
    listen [::]:80;

    server_name your_domain www.your_domain;

    location / {
        proxy_pass app_server_address;
        include proxy_params;
    }
}
Save and exit the file. With nano, you can do this by pressing CTRL+O to save, then CTRL+X to exit.
This configuration file begins with a standard Nginx setup, where Nginx will listen on port 80 and respond to requests made to your_domain and www.your_domain. Reverse proxy functionality is enabled through Nginx’s proxy_pass directive. With this configuration, navigating to your_domain in your local web browser will be the same as opening app_server_address on your remote machine. While this tutorial will only proxy a single application server, Nginx is capable of serving as a proxy for multiple servers at once. By adding more location blocks as needed, a single server name can combine multiple application servers through proxying into one cohesive web application.
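For example, a sketch of how two hypothetical application servers could sit behind one domain might look like the following. The /api/ and /dashboard/ paths and the backend ports are illustrative placeholders, not values from this tutorial:

server {
    listen 80;
    listen [::]:80;

    server_name your_domain www.your_domain;

    # Requests under /api/ are proxied to one application server...
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        include proxy_params;
    }

    # ...while requests under /dashboard/ go to a second one.
    location /dashboard/ {
        proxy_pass http://127.0.0.1:8001;
        include proxy_params;
    }
}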
All HTTP requests come with headers, which contain information about the client who sent the request. This includes details like IP address, cache preferences, cookie tracking, authorization status, and more. Nginx provides some recommended header forwarding settings that you have included as proxy_params, and the details can be found in /etc/nginx/proxy_params:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
With reverse proxies, your goal is to pass on relevant information about the client, and sometimes information about your reverse proxy server itself. There are use cases where a proxied server would want to know which reverse proxy server handled the request, but generally the important information is from the original client’s request. In order to pass on these headers and make information available in locations where it is expected, Nginx uses the proxy_set_header directive.
By default, when Nginx acts as a reverse proxy it alters two headers, strips out all empty headers, then passes on the request. The two altered headers are the Host and Connection headers. There are many HTTP headers available, and you can check this detailed list of HTTP headers for more information on each of their purposes, though the relevant ones for reverse proxies will be covered here later.
Here are the headers forwarded by proxy_params and the Nginx variables that supply their values:
- Host: set from the $http_host variable.
- X-Real-IP: set from the $remote_addr variable, the address of the connecting client.
- X-Forwarded-For: set from the $proxy_add_x_forwarded_for variable, which appends the client’s address to any existing X-Forwarded-For value.
- X-Forwarded-Proto: set from the $scheme variable, which records whether the original request was made over http or https.

Next, enable this configuration file by creating a link from it to the sites-enabled directory that Nginx reads at startup:
- sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
You can now test your configuration file for syntax errors:
- sudo nginx -t
With no problems reported, restart Nginx to apply your changes:
- sudo systemctl restart nginx
Nginx is now configured as a reverse proxy for your application server, and you can access it from a local browser if your application server is running. If you have your own application server but it is not yet running, start it now; you can then skip the optional Gunicorn test later in this tutorial. Otherwise, proceed to setting up HTTPS with Let’s Encrypt or testing your reverse proxy with Gunicorn.
Securing your reverse proxy with HTTPS is essential for protecting data in transit, establishing user trust, and supporting modern SEO best practices. Let’s Encrypt provides free SSL/TLS certificates that can be easily installed and automatically renewed using Certbot. For a complete LEMP stack setup that includes Nginx, PHP, and MySQL, refer to How to Install the LEMP Stack on Ubuntu.
First, update your package list and install Certbot along with the Nginx plugin:
sudo apt update
sudo apt install certbot python3-certbot-nginx
Run Certbot to obtain and install a certificate for your domain. Replace your_domain.com with your actual domain name:
sudo certbot --nginx -d your_domain.com -d www.your_domain.com
Certbot will automatically modify your Nginx configuration to enable SSL, redirect HTTP traffic to HTTPS, and reload Nginx.
Let’s Encrypt certificates are valid for 90 days. Certbot sets up a systemd timer or cron job to renew certificates automatically. You can test the renewal process with:
sudo certbot renew --dry-run
If no errors occur, your certificates will renew automatically before expiration.
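On recent Ubuntu releases the apt package schedules renewals through the certbot.timer systemd unit; assuming that is how Certbot was installed, you can confirm the timer is active with:
systemctl status certbot.timer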
Using SSL/TLS encrypts all data between clients and your server, protecting sensitive information from interception or tampering. It also enables browsers to display secure padlocks, which enhance user confidence. Additionally, many modern web features require HTTPS to function properly.
By enabling HTTPS with Let’s Encrypt and Certbot, you ensure your reverse proxy setup is secure, trustworthy, and compliant with best practices.
If you had an application server prepared and running before beginning this tutorial, you can visit it in your browser now:
your_domain
However, if you don’t have an application server on hand to test your reverse proxy, you can go through the following steps to install Gunicorn along with a test application. Gunicorn is a Python WSGI server that is often paired with an Nginx reverse proxy.
Update your apt repository index and install gunicorn:
- sudo apt update
- sudo apt install gunicorn
You also have the option to install Gunicorn through pip from PyPI to get the latest version, which can be paired with a Python virtual environment, but apt is used here as a quick test bed.
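If you do want the pip route later, a typical sequence inside a virtual environment looks like this (shown only for reference; the environment name is an arbitrary example):
- sudo apt install python3-venv
- python3 -m venv ~/gunicorn-env
- source ~/gunicorn-env/bin/activate
- pip install gunicorn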
Next, you’ll write a Python function to return “Hello, World!” as an HTTP response that will render in a web browser. Create test.py using nano or your preferred text editor:
- nano test.py
Insert the following Python code into the file:
def app(environ, start_response):
    # Respond with a 200 OK status and no extra headers, then return the body.
    start_response("200 OK", [])
    return iter([b"Hello, World!"])
This is the minimum code Gunicorn requires to return an HTTP response that renders a string of text in your web browser. After reviewing the code, save and close your file.
Now start your Gunicorn server, specifying the test Python module and the app function within it. Starting the server will take over your terminal:
- gunicorn --workers=2 test:app
Output
[2022-08-29 07:09:29 +0000] [10568] [INFO] Starting gunicorn 20.1.0
[2022-08-29 07:09:29 +0000] [10568] [INFO] Listening at: http://127.0.0.1:8000 (10568)
[2022-08-29 07:09:29 +0000] [10568] [INFO] Using worker: sync
[2022-08-29 07:09:29 +0000] [10569] [INFO] Booting worker with pid: 10569
[2022-08-29 07:09:29 +0000] [10570] [INFO] Booting worker with pid: 10570
The output confirms that Gunicorn is listening at the default address of http://127.0.0.1:8000. This is the address that you set up previously in your Nginx configuration to proxy. If not, go back to your /etc/nginx/sites-available/your_domain file and edit the app_server_address associated with the proxy_pass directive.
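For the Gunicorn default used in this tutorial, the relevant block in that file should read as follows (note the http:// prefix, which proxy_pass requires for TCP addresses):

location / {
    proxy_pass http://127.0.0.1:8000;
    include proxy_params;
}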
Open your web browser and navigate to the domain you set up with Nginx:
your_domain
Your Nginx reverse proxy is now serving your Gunicorn web application server, displaying “Hello, World!”.
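If you also want to confirm that the headers forwarded in Step 2 reach your application, you can extend test.py to echo them back. This is a minimal sketch, relying only on the standard WSGI convention that request headers appear in environ with an HTTP_ prefix:

def app(environ, start_response):
    # Headers set by Nginx (X-Real-IP, X-Forwarded-For, X-Forwarded-Proto)
    # show up in the WSGI environ as HTTP_X_REAL_IP, and so on.
    client_ip = environ.get("HTTP_X_REAL_IP", environ.get("REMOTE_ADDR", "unknown"))
    forwarded_for = environ.get("HTTP_X_FORWARDED_FOR", "")
    scheme = environ.get("HTTP_X_FORWARDED_PROTO", "http")

    body = f"Hello, World! client={client_ip} via={forwarded_for} scheme={scheme}"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return iter([body.encode()])

Restart Gunicorn after saving the file to see the updated response.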
Nginx reverse proxy setups are versatile and commonly used in many production environments. Typical use cases include isolating application servers from direct internet exposure, terminating SSL/TLS in one place, proxying WebSocket connections, load balancing across multiple backends, and hosting several applications behind a single domain. These use cases highlight the flexibility and power of Nginx as a reverse proxy in modern web infrastructure.
Nginx’s flexibility extends far beyond basic reverse proxying. Two advanced scenarios frequently encountered in modern architectures are proxying WebSocket connections and load balancing across multiple backend servers.
WebSockets enable real-time, bidirectional communication between clients and servers. Unlike standard HTTP, WebSockets require special handling to upgrade the protocol and maintain persistent connections. Nginx supports WebSockets by forwarding the necessary headers.
server {
    listen 443 ssl;
    server_name your_domain;

    ssl_certificate /etc/ssl/certs/your_domain.crt;
    ssl_certificate_key /etc/ssl/private/your_domain.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location /ws/ {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
You should use this WebSocket reverse proxy configuration when your backend application provides a WebSocket endpoint, commonly at a path such as /ws/, and you want Nginx to seamlessly forward and maintain persistent, real-time connections between clients and your application server. This setup is essential for applications that require low-latency, bidirectional communication, such as chat systems, live notifications, collaborative tools, or real-time dashboards.
By configuring Nginx to handle the WebSocket protocol upgrade and forward the necessary headers, you ensure that clients can establish and maintain WebSocket connections through your reverse proxy without interruption. This approach also allows you to centralize SSL termination, security controls, and logging at the proxy layer, while keeping your backend application isolated from direct internet exposure.
Typical scenarios for this configuration include chat applications, live notifications, collaborative editing tools, and real-time dashboards. In summary, use this configuration whenever your application relies on WebSockets and you want Nginx to securely and efficiently proxy those connections from the public internet to your backend service.
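One way to sanity-check the proxy without a full WebSocket client is to send the upgrade handshake with curl; the domain and /ws/ path below are the placeholders from the configuration above, and a backend that accepts the handshake should answer with HTTP/1.1 101:

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://your_domain/ws/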
To maximize scalability and reliability, Nginx can distribute incoming requests across a pool of backend servers using the upstream directive. This approach is fundamental for high availability and horizontal scaling.
upstream backend_nodes {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name your_domain;

    location / {
        proxy_pass http://backend_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
This configuration is ideal when you operate multiple instances of your application (e.g., Node.js, Python, or containerized apps) and want Nginx to balance traffic between them, improving performance and fault tolerance.
Nginx supports advanced load balancing features such as session persistence (sticky sessions), health checks, and weighted backends. Review the official Nginx documentation for further customization.
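As a sketch of what those options look like, the upstream block above could be extended as follows. The method, weights, and thresholds are illustrative values using standard open source Nginx parameters:

upstream backend_nodes {
    least_conn;                        # route new requests to the node with the fewest active connections
    server 127.0.0.1:8001 weight=3;    # receives roughly three times the traffic of an unweighted node
    server 127.0.0.1:8002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8003 backup;      # only used when the other nodes are unavailable
}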
With the rise of AI-powered code assistants, developers can rapidly generate and validate Nginx reverse proxy configurations for a wide range of scenarios. Prompting these tools with your specific requirements, such as WebSocket support, advanced load balancing, SSL offloading, or multi-domain setups, lets you quickly obtain tailored configuration snippets for complex requirements.
Always adapt generated configurations to your infrastructure and security requirements, and verify them against the official Nginx documentation before deploying to production.
If you use AI assistants (e.g., Claude Desktop) to scaffold Nginx configs, FastMCP Proxy lets you run a local MCP server that proxies to remote MCP servers while preserving advanced protocol features.
What proxying means (at a glance): Your local FastMCP instance receives a request (like tools/call) and forwards it to a backend MCP server (local or remote, possibly using a different transport), then relays the response back to your client, giving you a single, consistent endpoint.
The key developer-facing benefit is that your client always talks to one consistent local endpoint, regardless of where the backend server runs or which transport it uses.
Performance note: Proxies to HTTP/SSE backends can add latency (hundreds of ms for list_tools() vs single-digit ms locally). If ultra-low latency is required, consider static composition (e.g., importing tools at startup) instead of runtime proxying.
Quick start (bridging remote SSE → local stdio):
Use when you have a remote MCP server exposed via SSE/HTTP and you want desktop clients to treat it as a local stdio server.
from fastmcp import FastMCP
from fastmcp.server.proxy import ProxyClient

# Create a proxy that forwards requests to the remote SSE endpoint.
proxy = FastMCP.as_proxy(
    ProxyClient("https://example.com/mcp/sse"),
    name="Remote-to-Local Bridge"
)

if __name__ == "__main__":
    proxy.run()  # runs via stdio for local clients
Local → HTTP exposure:
Use when you need to expose a local stdio‑based MCP server over HTTP on a chosen host and port so other machines can reach it.
from fastmcp import FastMCP
from fastmcp.server.proxy import ProxyClient

# Create a proxy in front of a local stdio-based server script.
local_proxy = FastMCP.as_proxy(
    ProxyClient("local_server.py"),
    name="Local-to-HTTP Bridge"
)

if __name__ == "__main__":
    local_proxy.run(transport="http", host="0.0.0.0", port=8080)
See FastMCP Proxy Server docs: https://gofastmcp.com/servers/proxy
Even with a correct setup, you may encounter common issues when configuring Nginx as a reverse proxy. Below are frequent problems and how to resolve them.
Cause: Errors such as 502 Bad Gateway usually mean Nginx cannot communicate with the backend application server. The backend might be down, misconfigured, or listening on a different port or socket.
Fixes:
- Confirm that the backend application server is running and listening on the expected address.
- Verify that the proxy_pass directive in your Nginx config matches the backend address exactly.

Example command to check if Gunicorn is listening:
ss -tuln | grep 8000
Restart your backend server if needed.
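You can also request the backend directly from the server and watch the Nginx error log while reproducing the problem (adjust the address to match your proxy_pass target):

curl -I http://127.0.0.1:8000
sudo tail -f /var/log/nginx/error.log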
Cause: SSL errors can arise from expired, missing, or misconfigured certificates, or incorrect Certbot installation paths.
Fixes:
- Verify that your certificate files exist under /etc/letsencrypt/live/<your_domain>/.
- Run sudo certbot certificates to list installed certificates and their expiry.
- Check that your Nginx server block points to the correct ssl_certificate and ssl_certificate_key paths.
- Run sudo certbot renew --dry-run to test auto-renewal.

Cause: Your domain may not resolve to your server’s IP address, causing connection failures.
Fixes: Check that your DNS records point to your server’s public IP address. You can verify resolution with:
dig +short your_domain
nslookup your_domain
By following these troubleshooting steps, you can quickly identify and resolve common problems encountered when setting up Nginx as a reverse proxy.
What is a reverse proxy, and why is Nginx recommended for this role?
A reverse proxy acts as an intermediary, forwarding client requests to backend application servers and returning responses. Nginx is widely trusted for this purpose due to its high performance, low resource usage, robust security features, and native support for SSL/TLS, caching, and load balancing. Using Nginx as a reverse proxy enhances security, scalability, and maintainability in production environments.
How do I install Nginx on Ubuntu 22.04 or 24.04 LTS?
To install Nginx on Ubuntu 22.04 or 24.04, update your package index with sudo apt update and install Nginx using sudo apt install nginx. The package name and service management commands are consistent across recent Ubuntu LTS releases, ensuring a straightforward and reliable installation process.
What is the purpose of the proxy_pass directive in Nginx?
The proxy_pass directive specifies the backend server to which Nginx should forward incoming client requests. This can be an IP address and port, a unix domain socket, or an upstream group for load balancing. Proper configuration of proxy_pass is essential for correct request routing and application availability.
How do I enable HTTPS on my Nginx reverse proxy?
Enable HTTPS by installing Certbot (python3-certbot-nginx) and running sudo certbot --nginx -d your_domain -d www.your_domain to obtain and configure a free SSL certificate from Let’s Encrypt. Test automatic renewals with sudo certbot renew --dry-run to ensure ongoing security and compliance with modern web standards.
Can Nginx proxy WebSocket connections?
Yes, Nginx fully supports proxying WebSocket connections. To enable this, set proxy_http_version 1.1 and include the following headers in your configuration:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
Can I use Nginx reverse proxy for multiple domains or apps?
Yes. Create separate server blocks for each domain or application with distinct server_name values. You can also route by path within a single block, proxying requests to different upstreams as needed; a sketch follows below.
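A minimal sketch of two such server blocks, each proxying to its own hypothetical backend port, might look like this:

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8001;
        include proxy_params;
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://127.0.0.1:8002;
        include proxy_params;
    }
}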
How do I troubleshoot Nginx reverse proxy errors?
Ensure your backend is listening on the correct address and port (ss -tuln). Validate your proxy_pass targets, run sudo nginx -t to check configs, monitor /var/log/nginx/error.log, and confirm SSL certificate files exist under /etc/letsencrypt/live/<domain>/.
With this tutorial you have configured Nginx as a reverse proxy to enable access to your application servers that would otherwise only be available locally. Additionally, you configured the forwarding of request headers, passing on the client’s header information.
For examples of a complete solution using Nginx as a reverse proxy, check out how to serve Flask applications with Gunicorn and Nginx on Ubuntu 22.04 or how to run a Meilisearch frontend using InstantSearch on Ubuntu 22.04.
If you get an error like this when testing the config:
nginx: [emerg] invalid URL prefix in /etc/nginx/sites-enabled/your_domain:8
nginx: configuration file /etc/nginx/nginx.conf test failed
Check to make sure your app_server_address in the server block has the ‘http://’ prefix, like this:
proxy_pass http://your_IP