I’ve followed many tutorials that are close to my question, but unfortunately none of them worked. (Note: I’m a rookie at this DevOps stuff, so I’d be grateful for any help.)
To explain the issue, here is what I have done so far:
In my DO (DigitalOcean) Load Balancer setup:
- Forwarding rules: HTTP on port 80 => HTTP on port 80, and HTTPS on port 443 => HTTP on port 80
- Health check: http://0.0.0.0:80/
- Disabled
I’ve also added a single DO Droplet to this DO Load Balancer (for testing purposes). One doubt I have: is it necessary to turn on ‘Proxy Protocol’? I don’t think I need the original client’s IP, since the Load Balancer replaces the client’s IP with its own, but I left it enabled because during testing the Nginx server was working with ‘Proxy Protocol’ turned on.
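For reference, when Proxy Protocol is enabled on the balancer, nginx must be told to parse it on each listening socket and which upstream address to trust; a minimal sketch, where {loadBalancerIP} is a placeholder for the balancer’s address:

```nginx
# Sketch: accept PROXY protocol and recover the real client IP from it.
# {loadBalancerIP} is a placeholder, as in the config further below.
server {
    listen 80 proxy_protocol;

    # Trust PROXY protocol data only when it comes from the balancer...
    set_real_ip_from {loadBalancerIP};
    # ...and take the client address from the PROXY protocol header.
    real_ip_header proxy_protocol;
}
```

Without real_ip_header proxy_protocol, $remote_addr keeps the balancer’s address even though the PROXY header is parsed.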
In my DO (DigitalOcean) domain:
In my DO (DigitalOcean) Droplet with Ubuntu Server 18.04:
I run the Node.js server inside a ‘screen’ session (which keeps a background terminal active while the Node.js server is running). The Node.js server is an Express server with a single GET API over HTTP; it has no authentication or security set up.
For Nginx, I created my own configuration in the sites-available folder (absolute path: /etc/nginx/), made a symlink to it in sites-enabled so the server picks it up, and used the following configuration:
server {
    listen 80 proxy_protocol;
    listen 443 ssl proxy_protocol;
    server_name mydomain.com;
    set_real_ip_from {loadBalancerIP};

    location / {
        proxy_pass http://localhost:7890;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /api {
        proxy_pass http://localhost:7890/api/;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location ~ /\.ht {
        deny all;
    }
}
Where {loadBalancerIP} is my load balancer’s IP, as configured in my DigitalOcean Load Balancer.
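One subtlety worth noting in the config above: because the proxy_pass in the /api block includes a URI part (/api/), nginx replaces the matched location prefix with that URI, so a request to /api/getData is forwarded upstream as /api//getData (note the double slash). A small JavaScript sketch of the substitution rule — the function name and this simplified model are mine, not nginx internals:

```javascript
// Simplified model of nginx's proxy_pass URI substitution for a prefix
// location: the matched prefix is stripped from the request URI and the
// remainder is appended to the URI given in proxy_pass.
function upstreamUri(locationPrefix, proxyPassUri, requestUri) {
  return proxyPassUri + requestUri.slice(locationPrefix.length);
}

// location /api { proxy_pass http://localhost:7890/api/; }
console.log(upstreamUri('/api', '/api/', '/api/getData')); // "/api//getData"
```

Using proxy_pass http://localhost:7890; (no URI part), as the / block does, passes the request URI through unchanged, which is usually what you want here.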
I also renamed the default configuration that Nginx ships with in the sites-available folder (I renamed the ‘default’ file to ‘disabled.bkp’) and removed the ‘default’ symlink from sites-enabled, in order to avoid Nginx runtime errors.
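The rename-and-unlink steps above can be sketched as shell commands. This runs against a scratch directory standing in for /etc/nginx (on a real server the same mv/rm would target /etc/nginx/sites-available and sites-enabled and need sudo, followed by a config test and reload):

```shell
# Scratch directory standing in for /etc/nginx (assumption: real paths
# are /etc/nginx/sites-available and /etc/nginx/sites-enabled).
NGX=$(mktemp -d)
mkdir -p "$NGX/sites-available" "$NGX/sites-enabled"
touch "$NGX/sites-available/default"
ln -s "$NGX/sites-available/default" "$NGX/sites-enabled/default"

# Keep the default config as a backup, and disable it by removing the symlink.
mv "$NGX/sites-available/default" "$NGX/sites-available/disabled.bkp"
rm "$NGX/sites-enabled/default"

# On the real server, validate and reload afterwards:
#   sudo nginx -t && sudo systemctl reload nginx
ls -A "$NGX/sites-enabled"
```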
Issue
After setting up those processes, I made an API call to my DO cloud server with the following URL:
https://mydomain.com/api/getData
and the server responds with HTTP status code 503 (Service Unavailable). Before this error appeared, my Nginx server was working with both the default and my customized configuration and, most importantly, it was showing the Nginx default web page. After renaming and removing the Nginx default configuration files, the 503 error appeared and still persists…
I suspect the problem is in my Nginx server setup and/or my DO Load Balancer configuration. I was also thinking of dockerizing everything, but I don’t know if that is the ‘proper’ way to go.
I would really appreciate some guidance or help with this issue, which has been driving me nuts for a couple of days.
From https://docs.digitalocean.com/products/networking/load-balancers/#proxy-protocol: “When using SSL passthrough (e.g. port 443 to 443), load balancers do not support headers that preserve client information, such as X-Forwarded-Proto, X-Forwarded-Port, or X-Forwarded-For. Load balancers only inject those HTTP headers when the entry and target protocols are HTTP, or HTTPS with a certificate (not passthrough).”
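In other words, if the load balancer terminates TLS itself (HTTPS with a certificate on the balancer, plain HTTP to the droplet) rather than passing port 443 straight through, nginx can listen on plain HTTP and read the injected headers — no Proxy Protocol required. A minimal sketch under that assumption:

```nginx
# Sketch: TLS terminated at the load balancer, plain HTTP to the droplet.
# No proxy_protocol on the listen socket; the balancer injects the
# X-Forwarded-* headers itself.
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://localhost:7890;
        proxy_set_header Host $host;
        # Append to what the balancer already forwarded.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```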
I have found a solution. I came across this useful and great tool: the DO Nginx config generator on this community site, and it worked. I set up the required, minimal configuration for my Nginx proxy server using the tool, and it produced the following config files:
- /etc/nginx/nginx.conf file
- /etc/nginx/sites-available/mydomain.com.conf file
- /etc/nginx/nginxconfig.io/proxy.conf file (custom configuration made by the tool; the nginxconfig.io folder was created manually)
- /etc/nginx/nginxconfig.io/security.conf file
- /etc/nginx/nginxconfig.io/general.conf file
Once these configurations were in place on the Nginx server (just in case, I made a backup of these files, as the tool suggests), I called my single API using my domain and it worked. Also, Proxy Protocol was deactivated on the DO Load Balancer and in my Node.js app.