One-click MEAN stack port confusion

I’m trying to set up HTTPS with Let’s Encrypt on my MEAN droplet and I’m a little stuck on some port issues. I’m trying to listen for challenges in the www file of my Node application:


  require('https').createServer(lex.httpsOptions, lex.middleware(app)).listen(443, function () {
    console.log("Listening for ACME http-01 challenges on", this.address());
  });

However, Nginx is already using port 80, so this throws an error because the port is in use. Meanwhile, the app is served on port 3000, and I can’t find where in the Nginx configuration that’s being done. I’m hesitant to mess around with the Nginx configuration without understanding how it’s currently set up a little better. What should I be doing to properly direct incoming traffic? Can someone give me some guidance here?



When it comes to NGINX + NodeJS, the best way to set things up is to let NGINX act as a reverse proxy in front of your NodeJS application.

How this works is that NGINX handles requests on ports 80 and 443 (HTTP and HTTPS/SSL) and then proxies each request to the backend, which in this case is your NodeJS application. This setup is recommended because NodeJS applications shouldn’t run as the root user, and binding your app directly to port 80 or 443 would require root privileges or permission modifications.

So what I’d recommend is doing as noted above: let NGINX handle requests on ports 80/443, and run your application on another port that doesn’t require root, such as 2346 (a random port off the top of my head).

To set up your site with NGINX, you’d create a server block configuration that looks something like this:

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    # Redirect all HTTP traffic to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

    resolver 208.67.222.222 208.67.220.220 valid=300s;
    resolver_timeout 5s;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privatekey.pem;

    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_session_tickets off;
    ssl_session_timeout 5m;

    location / {
        proxy_buffers 16 32k;
        proxy_buffer_size 64k;
        proxy_busy_buffers_size 128k;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_connect_timeout 59s;
        proxy_hide_header X-Powered-By;
        proxy_http_version 1.1;
        proxy_ignore_headers Cache-Control Expires;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
        proxy_no_cache $http_pragma $http_authorization;
        proxy_pass_header Set-Cookie;
        proxy_read_timeout 600;
        proxy_redirect off;
        proxy_send_timeout 600;
        proxy_set_header Accept-Encoding '';
        proxy_set_header Cookie $http_cookie;
        proxy_set_header Host $host;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_temp_file_write_size 64k;
        proxy_set_header X-Original-Request $request_uri;

        # Proxy everything to the NodeJS app.
        proxy_pass http://127.0.0.1:2346;
    }
}

Now, let’s break that down.

To start, you need to change the 2x instances of the following line to match your domain name:

    server_name example.com;
You’ll then need to change the following lines to set the correct paths to your certificate and private key files.

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privatekey.pem;

Next, if you don’t already have a dhparam.pem file, now’s a good time to create one. It’s used for the Diffie–Hellman key exchange during the SSL handshake. It does take a little while to generate, so you’ll have to be patient, but it is part of a proper SSL setup, so I would advise generating one.

We do this by running:

mkdir -p /etc/nginx/ssl
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096

Once that file is done, there’s nothing more you need to do with it as the configuration is already set to the path of that file.

Now you should only need to change one more line, and that’d be:

    proxy_pass http://127.0.0.1:2346;

You’ll want to use the IP and port your application is running on. If you run it on localhost and choose to use port 2346, then you don’t need to change anything here.

Once you’ve set the above in your website’s configuration file for NGINX, restart NGINX for the changes to take effect.

You may be wondering about this line:

    resolver 208.67.222.222 208.67.220.220 valid=300s;

Since Google’s public DNS servers have been having issues, we’re using OpenDNS’s resolver IPs in their place. These shouldn’t need to be changed.