How to Set Up a DigitalOcean Load Balancer Domain with an Nginx Reverse Proxy Which Proxies a Node.js Server

I’ve followed many tutorials that almost cover my question, but unfortunately none of them worked :(. (Note: I’m a rookie at this DevOps stuff :P, but I’d be thankful if you could help me.)

In order to explain the issue, I’ll list what I have done so far:

In my DO(DigitalOcean) LoadBalancer setup

  • Forwarding Rules: HTTP on port 80 => HTTP on port 80, and HTTPS on port 443 => HTTP on port 80
  • Algorithm: ‘Least Connections’
  • Health checks:
  • Sticky sessions: ‘Off’
  • SSL: ‘Redirect HTTP to HTTPS’
  • Proxy Protocol: ‘Enabled’
  • Backend Keepalive: Disabled

Also, I’ve attached a single DO Droplet to this DO Load Balancer (for testing purposes). One doubt I have here: is it necessary to turn on ‘Proxy Protocol’? I don’t think I need the original client’s IP, since the Load Balancer will replace the client’s IP with its own, BUT I left it on because during testing the Nginx server was working with ‘Proxy Protocol’ enabled.
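For what it’s worth, Proxy Protocol only works end to end if Nginx is told to expect it on the same ports the Load Balancer forwards to. A minimal sketch of the directives involved (assuming {loadBalancerIP} is the Load Balancer’s IP, as in my config further down; `real_ip_header proxy_protocol` from ngx_http_realip_module is the piece that actually recovers the client IP):

```nginx
server {
    # Nginx must expect the PROXY protocol preamble on the forwarded port,
    # otherwise every request from the load balancer fails.
    listen 80 proxy_protocol;

    # Trust the load balancer's address and take the original client IP
    # from the PROXY protocol data instead of the TCP peer address.
    set_real_ip_from {loadBalancerIP};
    real_ip_header proxy_protocol;
}
```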

In my DO(DigitalOcean) domain

  • I’ve created a custom domain, i.e. ‘’, with an ‘A’ type DNS record that points to my DO Load Balancer’s IP address.

In my DO(DigitalOcean) Droplet with Ubuntu Server 18.04

  • I have a Node.js server running locally on port 7890. I start it with the `screen` command (which keeps a background terminal active while the Node.js server runs). The Node.js server is an Express server exposing a single GET API over HTTP, with no authentication or security configured.
  • I’ve installed, configured, and run an Nginx server, and enabled the instance firewall to allow ports 80, 443, and 22.

In the Nginx server, I created my own configuration in the sites-available folder (absolute path: /etc/nginx/) and made a symlink in sites-enabled so the server picks it up, with the following configuration:

server {
        listen  80 proxy_protocol;
        listen 443 ssl proxy_protocol;

        set_real_ip_from {loadBalancerIP};

        location / {
                proxy_pass http://localhost:7890;
                proxy_set_header Host            $proxy_host;
                proxy_set_header X-Real-IP       $remote_addr;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /api {
                proxy_pass http://localhost:7890/api/;
                proxy_set_header Host            $proxy_host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP       $remote_addr;
        }

        location ~ /\.ht {
                deny all;
        }
}

Where {loadBalancerIP} is the IP address of my DigitalOcean Load Balancer.

Also, I renamed the default configuration that Nginx ships with in the sites-available folder (I renamed the default file to disabled.bkp) and removed the ‘default’ symlink from sites-enabled, in order to avoid Nginx runtime errors.
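For reference, the enable/disable dance with sites-available and sites-enabled looks like this (sketched in a temporary directory so it is runnable anywhere; on the Droplet the base path is /etc/nginx, the file names are your own, and you would finish with `nginx -t` and `systemctl reload nginx`):

```shell
# Mimic /etc/nginx in a temp dir (replace $base with /etc/nginx on the server).
base=$(mktemp -d)
mkdir -p "$base/sites-available" "$base/sites-enabled"

# A stand-in custom site config plus the stock default config.
printf 'server { listen 80; }\n' > "$base/sites-available/myapp"
printf 'server { listen 80 default_server; }\n' > "$base/sites-available/default"
ln -s "$base/sites-available/default" "$base/sites-enabled/default"

# Enable the custom site via symlink.
ln -s "$base/sites-available/myapp" "$base/sites-enabled/myapp"

# Disable the default: remove its symlink and keep a renamed backup.
rm "$base/sites-enabled/default"
mv "$base/sites-available/default" "$base/sites-available/disabled.bkp"

ls "$base/sites-enabled"
```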

Issue: after setting all of this up, I made an API call to my DO cloud server with the following URL: and the server’s response was an HTTP 503 (Service Unavailable) error. Before this error appeared, my Nginx server was working with both the default and my customized configuration AND, most importantly, it was showing the Nginx default web page. After renaming and removing the Nginx default configuration files, the 503 error appeared and still persists…

The most likely cause, I think, is the Nginx server setup together with my DO Load Balancer configuration. I was also thinking of dockerizing everything, but I don’t know if that’s the ‘proper’ way to go.

Please, I would appreciate some guidance or help with this issue, which has been driving me nuts for a couple of days.

One relevant note from the load balancer documentation, which I translate: “When using SSL passthrough (e.g. port 443 to 443), load balancers do not support headers that preserve client information, such as X-Forwarded-Proto, X-Forwarded-Port, or X-Forwarded-For. Load balancers only inject those HTTP headers when the entry and target protocols are HTTP, or HTTPS with a certificate (not passthrough).”

I have found a solution. I came across this useful tool: the DO Nginx config generator on this community site, and it worked. I set up the required, minimal configuration for my Nginx proxy server using the tool, and it generated the following config files:

/etc/nginx/nginx.conf file

# Generated by

user www-data;
pid /run/;
worker_processes auto;
worker_rlimit_nofile 65535;

events {
	multi_accept on;
	worker_connections 65535;
}

http {
	charset utf-8;
	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	server_tokens off;
	log_not_found off;
	types_hash_max_size 2048;
	client_max_body_size 16M;

	include mime.types;
	default_type application/octet-stream;

	# logging
	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log warn;

	# load configs
	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-available/ file

server {
	listen 80;
	listen [::]:80;

	# security

	# reverse proxy
	location /api {
	}

	# additional config
}

/etc/nginx/ file (custom configuration made by the tool; the folder was created manually)

proxy_http_version	1.1;
proxy_cache_bypass	$http_upgrade;

proxy_set_header Upgrade		$http_upgrade;
proxy_set_header Connection 		"upgrade";
proxy_set_header Host			$host;
proxy_set_header X-Real-IP		$remote_addr;
proxy_set_header X-Forwarded-For	$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto	$scheme;
proxy_set_header X-Forwarded-Host	$host;
proxy_set_header X-Forwarded-Port	$server_port;
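On the Node.js side, these headers are what let the app see the original client instead of Nginx. A small hypothetical helper (not from my actual app) that reads them:

```javascript
// Hypothetical helper: recover the original client IP and protocol
// from the headers Nginx sets in the proxy config above.
function clientInfo(req) {
  const xff = req.headers['x-forwarded-for'];
  return {
    // X-Forwarded-For can be a comma-separated chain of proxies;
    // the original client is the first entry.
    ip: xff
      ? xff.split(',')[0].trim()
      : (req.socket && req.socket.remoteAddress),
    // X-Forwarded-Proto says whether the client used http or https,
    // even though Nginx talks plain HTTP to the app.
    proto: req.headers['x-forwarded-proto'] || 'http',
  };
}
```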

/etc/nginx/ file

# security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

# . files
location ~ /\.(?!well-known) {
	deny all;
}

/etc/nginx/ file

# favicon.ico
location = /favicon.ico {
	log_not_found off;
	access_log off;
}

# robots.txt
location = /robots.txt {
	log_not_found off;
	access_log off;
}

Once these configurations were in place on the Nginx server (just in case, I made a backup of the original files, as the tool suggests), I called my single API using my domain and it worked. Also, Proxy Protocol was deactivated both on the DO Load Balancer and in my Node.js app.