Can't figure out 502 Bad Gateway issue

My nodejs / expressjs site that I run using forever is giving the following error:

502 bad gateway (nginx/1.6.2)

Weird thing is I didn’t change anything. The error has been showing up for a couple of months, but I haven’t made any changes in half a year.

The error from sudo tail -f /var/log/nginx/error.log looks like this (my IP and actual site name removed):

2018/07/19 12:38:49 [error] 1852#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: , server: , request: "GET /favicon.ico HTTP/1.1", upstream: "", host: "", referrer: ""

My nginx conf looks like this:

server {
        listen 80;
        return 301 $scheme://$request_uri;
}

server {
    listen 80;

    location / {
        proxy_pass http://localhost:3010;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Help please!



Consider enabling logging on the app itself to figure out why it crashes or why it stops listening on the specified port. It might just be that you updated some packages and your app code now requires changes, or it is indeed a memory issue as @Jarland stated, which will also be visible in the logs.

Hello friend!

Sorry to hear about the trouble this is causing for you. What this indicates is that the service running on port 3010 is not accepting the web server’s connection, most likely because it is not running. A common reason that this might occur without you making any changes is that the application receives a request, or a series of requests, that requires more memory than the droplet has to give. The operating system then kills the process to keep the system alive. Though not the only possible cause, it is common enough that it is the first one worth checking.

It is entirely possible for an application to run for a very long time without hitting memory limits and then begin hitting them. This can be due to how the application accumulates state, such that its overhead slowly increases over time (observable in applications that write database entries on each visit and then query those entries on each visit, without any cleanup function). It can also be due to a pattern of traffic that was never experienced before, either human or bot, malicious or otherwise.
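That growth pattern can be sketched as a toy example (an in-memory array standing in for the database; all names here are hypothetical):

```javascript
// visits.js -- toy illustration of per-visit state that is never cleaned up.
// Each request appends a record AND scans every prior record, so both
// memory use and per-request work grow without bound over time.
const visits = [];

function handleVisit(ip) {
  visits.push({ ip, at: Date.now() });           // write on every visit
  return visits.filter(v => v.ip === ip).length; // query all prior visits
}

module.exports = { handleVisit, visits };
```

After a million requests the array holds a million entries, and nothing ever removes the old ones, so a long-running process eventually runs out of memory even though each individual request looks harmless.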

It’s hard to say with 100% certainty from this angle what it could be, but I recommend opening the web console, clicking in the black box, then hitting the Enter key. If you see a message about the system running out of memory and killing a process, you might assume I’m on the right path.
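The same check can be done over SSH by searching the kernel log for OOM-killer activity (log file paths vary by distro, so treat this as a sketch):

```shell
# Search the kernel ring buffer and syslog for OOM-killer messages.
# An entry naming your node process confirms the memory theory.
dmesg -T 2>/dev/null | grep -iE 'out of memory|killed process'
grep -iE 'out of memory|killed process' /var/log/syslog /var/log/kern.log 2>/dev/null
```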

From that point, if you can verify that it is memory, you have a choice of paths to go down. Most people say to upgrade memory, and yes, that is the easy way out. But the real problem is figuring out why it’s using that memory, and that may require debugging the application code, digging through logs, attempting to reproduce traffic patterns, and so on. There is no one correct way to handle that optimization, as everyone’s code will be different.
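As a starting point for that debugging, the process can periodically report its own memory use, so a growth trend shows up in your app’s logs long before the OOM killer gets involved (interval and output destination are arbitrary choices here):

```javascript
// memwatch.js -- logs heap usage on an interval to reveal slow memory growth.
function memorySnapshot() {
  const m = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  return `rss=${mb(m.rss)}MB heapUsed=${mb(m.heapUsed)}MB heapTotal=${mb(m.heapTotal)}MB`;
}

function startMemoryLogging(intervalMs = 60000) {
  const timer = setInterval(
    () => console.log(new Date().toISOString(), memorySnapshot()),
    intervalMs
  );
  timer.unref(); // don't keep the process alive just for the logger
  return timer;
}

module.exports = { memorySnapshot, startMemoryLogging };
```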

Kind Regards, Jarland