One-click MEAN stack port confusion

March 27, 2017 252 views
MEAN Nginx Ubuntu

I'm trying to set up HTTPS with Let's Encrypt on my MEAN droplet and I'm a little stuck on some port issues. I'm trying to listen on port 80 in the www file of my Node application:

require('http').createServer(lex.middleware(require('redirect-https')())).listen(80);

require('https').createServer(lex.httpsOptions, lex.middleware(app)).listen(443, function () {
  console.log("Listening for ACME http-01 challenges on", this.address());
});

However, Nginx is already using port 80, so Node throws an error because the port is in use. Meanwhile, the app is served on port 3000, and I can't find where in the Nginx configuration that is being set up. I'm hesitant to mess around with the Nginx configuration without understanding a little better how it's currently configured. What should I be doing to properly direct incoming traffic? Can someone give me some guidance here?

1 Answer

@cassiusclayd

When it comes to NGINX + NodeJS, the best way to set things up is to allow NGINX to function as a proxy to your NodeJS application.

How this works is that NGINX handles requests on ports 80 and 443 (HTTP and HTTPS/SSL) and then proxies them to the backend, which would be your NodeJS application. This setup is recommended because NodeJS applications shouldn't run as the root user, and binding your app to port 80/443 would require root privileges or permission modifications.

So what I'd recommend is doing as noted above -- let NGINX handle requests on port 80/443 and then run your application on another port that doesn't require root, such as 2346 (random port off the top of my head).
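
On the Node side, that just means binding your app to localhost on that port instead of trying to listen on 80/443 yourself. Here's a minimal sketch -- it assumes a plain Express app (the E in MEAN), so adjust it to however your www file actually creates its server:

// Bind only to localhost on an unprivileged port; NGINX terminates
// HTTP and HTTPS on 80/443 and proxies requests here.
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from behind the NGINX proxy');
});

app.listen(2346, '127.0.0.1', function () {
  console.log('App listening on http://127.0.0.1:2346');
});

With that in place, you don't need the require('http') / require('https') servers from your www file at all; NGINX takes care of both.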

To set up your site with NGINX, you'd create a server block, and within it you'd have something like this:

server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;

    return 301 https://$host$request_uri;
}

server
{
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;

    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

    resolver 208.67.222.222 208.67.220.220 valid=300s;
    resolver_timeout 5s;

    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privatekey.pem;

    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_session_tickets off;
    ssl_session_timeout 5m;

    location /
    {
        proxy_pass http://127.0.0.1:2346;

        proxy_buffers 16 32k;
        proxy_buffer_size 64k;
        proxy_busy_buffers_size 128k;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_connect_timeout 59s;
        proxy_hide_header X-Powered-By;
        proxy_http_version 1.1;
        proxy_ignore_headers Cache-Control Expires;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
        proxy_no_cache $http_pragma $http_authorization;
        proxy_pass_header Set-Cookie;
        proxy_read_timeout 600;
        proxy_redirect off;
        proxy_send_timeout 600;
        proxy_set_header Accept-Encoding '';
        proxy_set_header Cookie $http_cookie;
        proxy_set_header Host $host;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_temp_file_write_size 64k;
        proxy_set_header X-Original-Request $request_uri;
    }
}

Now, let's break that down.

To start, you need to change both instances of the following line to match your domain name:

server_name yourdomain.com www.yourdomain.com;

You'll then need to change the following lines to set the correct paths to your certificate and private key files.

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privatekey.pem;

Next, if you don't already have a dhparam.pem file, now's a good time to create one. It provides the parameters for the Diffie-Hellman key exchange (the ssl_dhparam line above), and it does take a little while to generate, so you'll have to be patient. It's worth the wait, though, as it's part of a proper SSL setup, so I would advise generating one.

We do this by:

mkdir -p /etc/nginx/ssl
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096

Once that file is done, there's nothing more you need to do with it as the configuration is already set to the path of that file.

Now you should only need to change one more line and that'd be:

proxy_pass http://127.0.0.1:2346;

You'll want to use the IP and port your application is actually running on (in your case, the app listens on port 3000). If you run it on 127.0.0.1 and choose to use port 2346, then you don't need to change anything here.

Once you've set the above in your website's configuration file for NGINX, restart NGINX for the changes to take effect.
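
Before restarting, it's worth testing the configuration for syntax errors. On the Ubuntu one-click image, the standard commands would be something like:

nginx -t                  # check the configuration for syntax errors
service nginx restart     # then restart to apply the changes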

...

You may be wondering about this line:

    resolver 208.67.222.222 208.67.220.220 valid=300s;

These resolvers are used by NGINX for the OCSP stapling lookups (the ssl_stapling directives above). Since Google's DNS servers have been having issues, we're using OpenDNS's resolver IPs in their place. These shouldn't need to be changed.

  • So would you recommend against using the greenlock/letsencrypt module? I was intrigued by the ease of setup, the automatic retrieval of a cert from the authority, and the auto-renewal of certs. Is there an alternative way you would recommend doing this with Nginx?

    • @cassiusclayd

      The above configuration will work with LetsEncrypt once you've generated your certificate. You just need to shut down NGINX first:

      service nginx stop
      

      ... and then run:

      letsencrypt certonly -d yourdomain.com -d www.yourdomain.com
      

      Once done, it'll give you a path to your new certificate files and you'd use that path to modify these lines:

          ssl_certificate /etc/nginx/ssl/cert.pem;
          ssl_certificate_key /etc/nginx/ssl/privatekey.pem;
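
      With the letsencrypt client, the issued files typically end up under /etc/letsencrypt/live/yourdomain.com/ (the exact path is printed when the command finishes), so -- assuming that default location -- the two lines would end up looking something like:

          ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;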
      

      I know LE has its own NGINX module built in, but it doesn't include the same detailed config that is shown above. I much prefer a full configuration over a partial one.

  • Okay, I've got this working, thanks for the help! Though I did have to switch the resolver IP to 8.8.8.8 (Google) because the one you supplied was timing out.

    One question though -- my mobile app is still making API calls via http, and for now I have everything on one server (I will separate them in the near future). The API is not public at the moment. I'd like to keep http calls working for a while until enough devices have updated to the new mobile app. Is there a simple way to forward http calls to https without breaking them due to the redirect, or to enable both to work for the time being?

    Should I just specify a different location path for the api calls to not redirect to 443 from port 80?

    Maybe just a 308 redirect instead?

    • @cassiusclayd

      The redirect shouldn't actually break anything unless you have a custom setup that handles the forwarded headers differently from what would traditionally be done.

      All the first server block does is enforce SSL. The request is redirected, in full, to the HTTPS server block, and at that point the proxy takes over and passes the request to the backend (your NodeJS app).

      You should still be able to make calls over HTTP, but they'd be redirected to HTTPS -- your application would handle it from there in terms of responding with approval or denial of the request.

      For example (a very basic one at that), if you enter in:

      http://yourdomain.com/?var=value&var2=value2
      

      The redirect doesn't destroy what's after the ?; you'd simply see it turn into:

      https://yourdomain.com/?var=value&var2=value2
      

      Likewise, if you use cURL or some other means of making a request with a particular method (GET, POST, PUT, DELETE, HEAD, PATCH, etc.), the same thing should happen -- just note that many clients convert a POST to a GET when following a 301, so if you need the method and body preserved, a 307/308 redirect is the safer choice.
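
      For instance, using the placeholder URL from above, you can watch the redirect happen with cURL -- -I prints just the response headers and -L follows the redirect:

      # HEAD request against the HTTP block: returns a 301 with a Location: https://... header
      curl -I "http://yourdomain.com/?var=value"

      # same request, but follow the redirect through to the HTTPS block
      curl -IL "http://yourdomain.com/?var=value"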

      ...

      You can set up different server blocks. If you don't want the redirect in place right now, the best thing I'd recommend would be setting up a new server block that uses a sub-domain, such as api.yourdomain.com, so that it's isolated from the rest of your configuration.

      You'd simply add something like:

      server {
          listen 80;
          listen [::]:80;
          server_name api.yourdomain.com www.api.yourdomain.com;
      
          location / {
              proxy_pass http://127.0.0.1:2346;
              ...
              ...
              ...
              ...
          }
      }
      

      Where ... is the same proxy_* configuration as I provided in the SSL example (it's not SSL-specific; it's general configuration for proxying).

      • Now I'm incredibly confused about what's going on. It seems I must have another issue. When I visit my site it is not automatically forwarding to https, but if I manually go to https it works. On my Android device in Chrome, however, it says the cert authority is invalid. I've cleared history & cache on both my desktop and mobile, but these behaviors persist. My sites-enabled/default is exactly as you have it, simply with my domain and the port selection for my app (3000) changed. I see no errors when I restart nginx... This is frustrating.

        • Well, I figured out I needed to be using fullchain.pem instead of cert.pem, so now the cert is trusted in all of the browsers I'm testing, but for some reason it's still not auto-redirecting to https. Still working on this.
