I’ve followed several tutorials that come close to my question, but unfortunately none of them worked :(. (Note: I’m a rookie at this DevOps stuff :P, but I’d be thankful if you could help me.)
In order to explain the issue, I will mention what I have done so far:
In my DO (DigitalOcean) Load Balancer setup:
- Forwarding rules: HTTP on port 80 => HTTP on port 80, and HTTPS on port 443 => HTTP on port 80
- Health check: http://0.0.0.0:80/
- Sticky sessions: Disabled
I’ve also added a single DO Droplet to this Load Balancer (for testing purposes). One doubt I have is whether it’s necessary to turn on ‘Proxy Protocol’ at all, since I don’t think I need the original client’s IP once the Load Balancer replaces the client’s IP with its own, BUT I left it enabled because during testing the Nginx server was working with ‘Proxy Protocol’ turned on.
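For reference, if I do keep ‘Proxy Protocol’ enabled on the Load Balancer, my understanding is that Nginx has to be told explicitly to parse it. A sketch of the relevant directives inside the server block (using the same {loadBalancerIP} placeholder as in my config further down; this is based on the realip module docs, not my current setup):

listen 80 proxy_protocol;            # expect the PROXY protocol header from the load balancer
set_real_ip_from {loadBalancerIP};   # only trust the load balancer as a source of client IPs
real_ip_header proxy_protocol;       # take the client IP from the PROXY protocol header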
In my DO (DigitalOcean) domain
In my DO (DigitalOcean) Droplet with Ubuntu Server 18.04
I use screen (it keeps a background terminal session active while the Node.js server is running). The Node.js server is an Express server with a single GET API over HTTP, and it doesn’t have any authentication or security set up. In the Nginx server, I created my own configuration saved in the sites-available folder (under /etc/nginx/) and made a symlink to it in sites-enabled so Nginx picks it up, with the following configuration:
server {
    listen 80 proxy_protocol;
    listen 443 ssl proxy_protocol;

    server_name mydomain.com;

    set_real_ip_from {loadBalancerIP};

    location / {
        proxy_pass http://localhost:7890;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /api {
        proxy_pass http://localhost:7890/api/;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location ~ /\.ht {
        deny all;
    }
}
Where {loadBalancerIP} is the IP of my DigitalOcean Load Balancer.
Also, I renamed the default configuration that Nginx ships in the sites-available folder (I renamed the default file to disabled.bkp) and removed the ‘default’ symlink from sites-enabled, in order to avoid Nginx runtime errors.
Issue

After setting all of that up, I made an API call to my DO cloud server with the following URL: https://mydomain.com/api/getData, and the server’s response is an HTTP 503 (Service Unavailable) error. Before this error appeared, my Nginx server was working with both the default and my customized configuration and, most importantly, it was showing me the Nginx default web page. After renaming and removing the Nginx default configuration files, the 503 error appeared and it still persists…
The most likely problem, I think, is my Nginx server setup or my DO Load Balancer configuration. I was also thinking of dockerizing everything, but I don’t know if that is the ‘proper’ way to go.
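One thing I still want to rule out is the Load Balancer marking the Droplet as unhealthy (which, as far as I know, is what makes it answer with a 503), since its health check is a plain HTTP request to port 80. A minimal, hypothetical check endpoint for that (the /healthz path is just my example, and the Load Balancer’s health check path would have to point at it) could look like:

# hypothetical health-check endpoint for the load balancer's plain HTTP probe
location = /healthz {
    access_log off;            # don't clutter the logs with probe requests
    default_type text/plain;
    return 200 "OK";           # always answer 200 so the droplet is marked healthy
}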
Please, I would appreciate some guidance or help with this issue, which has been driving me nuts for a couple of days now.
I have found a solution. I came across a useful and great tool on this community site, the DO Nginx config generator, and it worked. I set up the required, minimal configuration for my Nginx proxy server using the tool, and it gave me the following config files:
/etc/nginx/nginx.conf file
# Generated by nginxconfig.io
# https://www.digitalocean.com/community/tools/nginx#?0.domain=twserver.emersonrojas.com&0.document_root=%2F&0.redirect=false&0.https=false&0.php=false&0.proxy&0.proxy_path=%2Ftwitter&0.proxy_pass=http:%2F%2F127.0.0.1:7890%2Ftwitter&0.root=false&gzip=false

user www-data;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    multi_accept on;
    worker_connections 65535;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;
    client_max_body_size 16M;

    # MIME
    include mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    # load configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-available/mydomain.com.conf file
server {
    listen 80;
    listen [::]:80;

    server_name mydomain.com;

    # security
    include nginxconfig.io/security.conf;

    # reverse proxy
    location /api {
        proxy_pass http://127.0.0.1:7890/api;
        include nginxconfig.io/proxy.conf;
    }

    # additional config
    include nginxconfig.io/general.conf;
}
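Note that this generated server block only proxies /api. Just as a sketch (not something the tool generated for me), the site root could also be proxied to the same Express app, mirroring my earlier custom config, by adding another location:

# optional sketch: also proxy the site root to the Express app on port 7890
location / {
    proxy_pass http://127.0.0.1:7890;
    include nginxconfig.io/proxy.conf;
}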
/etc/nginx/nginxconfig.io/proxy.conf file (custom configuration produced by the tool; the nginxconfig.io folder was created manually)
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
/etc/nginx/nginxconfig.io/security.conf file
# security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
# . files
location ~ /\.(?!well-known) {
    deny all;
}
/etc/nginx/nginxconfig.io/general.conf file
# favicon.ico
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

# robots.txt
location = /robots.txt {
    log_not_found off;
    access_log off;
}
Once these configurations were in place on the Nginx server (just in case, I made a backup of these files, as the tool suggests), I tried to call my single API using my domain and it worked. Also, the proxy protocol was deactivated on the DO Load Balancer and in my Node.js app.
For reference, from https://docs.digitalocean.com/products/networking/load-balancers/#proxy-protocol (my translation): “When using SSL passthrough (e.g. port 443 to 443), load balancers do not support headers that preserve client information, such as X-Forwarded-Proto, X-Forwarded-Port, or X-Forwarded-For. Load balancers only inject those HTTP headers when the entry and target protocols are HTTP, or HTTPS with a certificate (not passthrough).”
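So with my current setup (HTTPS terminated on the Load Balancer with a certificate and forwarded as HTTP to port 80), the Load Balancer does inject those headers. If I ever need the original client IP on the Droplet again, my understanding is that Nginx’s realip module can take it from X-Forwarded-For instead of the PROXY protocol, roughly like this (a sketch, with {loadBalancerIP} as a placeholder for the Load Balancer’s IP):

# sketch: recover the client IP from the X-Forwarded-For header injected by the load balancer
set_real_ip_from {loadBalancerIP};   # only trust the load balancer
real_ip_header X-Forwarded-For;      # read the client IP from X-Forwarded-For
real_ip_recursive on;                # skip trusted proxy addresses in the header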