Can't figure out 502 Bad Gateway issue

July 19, 2018 327 views
Node.js Nginx Debian

My nodejs / expressjs site that I run using forever is giving the following error:

502 bad gateway (nginx/1.6.2)

The weird thing is that I didn't change anything: the error started showing up a couple of months ago, but I haven't made any changes in half a year.

The error from sudo tail -f /var/log/nginx/error.log looks like this (I removed my IP and the actual site name):

2018/07/19 12:38:49 [error] 1852#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 111.11.1.111, server: mysite.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3010/favicon.ico", host: "mysite.com", referrer: "http://mysite.com/"

My nginx conf looks like this:

server {
    # Redirect www to the bare domain
    listen 80;
    server_name www.mysite.com;
    return 301 $scheme://mysite.com$request_uri;
}

server {
    listen 80;
    server_name mysite.com;

    location / {
        # Proxy everything to the Node.js app on port 3010
        proxy_pass http://localhost:3010;
        proxy_http_version 1.1;

        # Allow WebSocket upgrades to pass through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Help please!

2 Answers

Hello friend!

Sorry to hear about the trouble this is causing for you. What this indicates is that the service running on port 3010 is not accepting the web server's connection, most likely because it is not running. A common reason for this to happen without you making any changes is that the application receives a request, or a series of requests, that requires more memory than the Droplet has to give it. The operating system then kills the process to keep the system alive. That is not the only possible cause, but it is by far the most common one, so it's the first thing I would rule out.
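
If you want to confirm that quickly, the checks below should tell you whether anything is still bound to that port. I'm assuming the app listens on 3010 as in your nginx config and that you started it with forever, so adjust as needed:

# Is anything still listening on port 3010?
sudo ss -ltnp | grep ':3010'

# Does forever still show your app as running?
forever list

# Can the upstream be reached locally at all?
curl -I http://127.0.0.1:3010/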

It is entirely possible for an application to run for a very long time without hitting memory limits and then begin hitting them. This can be down to the way the application is written, such that its overhead slowly increases over time (you see this in applications that write database entries on each visit and then query all of those entries on each visit, without any cleanup). It can also be due to a pattern of traffic that the application has never experienced before, either human or bot, malicious or otherwise.
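
If you suspect that kind of slow growth, you can simply watch the Node process's memory over time. A rough sketch, assuming the process shows up under the name node on your Droplet:

# Print PID, resident memory (KB), uptime, and command line for node processes,
# refreshed every 60 seconds; RSS creeping steadily upward points to a leak
watch -n 60 'ps -C node -o pid,rss,etime,args'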

It's hard to say with 100% certainty from this angle, but I recommend opening the web console, clicking in the black box, and hitting the Enter key. If you see a message about the system running out of memory and killing a process, then I'm on the right path.
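
Those same out-of-memory kills also end up in the kernel log, so if the console has already scrolled past the message, something along these lines (the paths are the usual Debian ones) should still turn it up:

# Look for OOM killer activity in the kernel ring buffer
dmesg | grep -i -E 'out of memory|killed process'

# Or in the persisted logs
grep -i 'out of memory' /var/log/kern.log /var/log/syslog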

From that point, if you can verify that it is memory, you have a choice of paths to go down. Most people say to upgrade the memory, and yes, that is the easy way out. But the real problem is figuring out why the application is using that much memory, and that may require debugging the application code, digging through logs, attempting to reproduce traffic patterns, and so on. There is no one correct way to handle that optimization, as everyone's code is different.
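
Whichever path you choose, it helps to get a baseline of how much headroom the Droplet actually has and what is eating it, along these lines:

# Overall memory and swap usage in megabytes
free -m

# Current top memory consumers, largest resident size first
ps -eo pid,rss,args --sort=-rss | head -n 10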

Kind Regards,
Jarland

  • Thanks for your reply, Jarland. I logged in via the console and hit the Enter key multiple times, and the system responded as expected, so I don't believe it is a memory issue.

    Any other ideas on what might be causing this issue? Appreciate the help!

Consider enabling logging on the app itself to figure out why it crashes or why it stops listening on the specified port. It might just be that you updated some packages and your app code now requires some changes, or it is indeed a memory issue as @Jarland stated above, which would also be visible in the logs.
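
forever also keeps a log for each process it manages, so that is the first place I would look. Something like this should get you there (the default ~/.forever log location is an assumption on my part, go by the paths forever actually reports):

# See which processes forever manages and which log files they use
forever list
forever logs

# Follow the reported log file (by default they live under ~/.forever)
tail -f ~/.forever/*.log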
