Node.js + Nginx + Letsencrypt + two Top-Level-Domains

November 2, 2016 717 views
Node.js Nginx Let's Encrypt DigitalOcean Ubuntu 16.04

Hey, what is the best way to accomplish a good, secure setup with Node.js + Nginx + Let's Encrypt and two top-level domains? I'm a little stuck at the moment.

First of all, my domain setup: at the moment the DNS for both domains is managed through DigitalOcean, and both have the A (and AAAA) records for my staging environments (live, dev, www -> CNAME).

But I configured both domains identically (which I think is bad practice as far as Google is concerned).

My Node.js application has two folders (one for the dev and one for the live system) in the /opt/… directory. I manage them with pm2 and run each application on a different port.
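For reference, the pm2 setup looks roughly like this — the app names, paths, and dev port below are illustrative, not my exact values:

```js
// ecosystem.config.js — illustrative; real names/paths/ports differ
module.exports = {
  apps: [
    {
      name: "myapp-live",
      script: "/opt/myapp-live/app.js",
      env: { NODE_ENV: "production", PORT: 3325 },
    },
    {
      name: "myapp-dev",
      script: "/opt/myapp-dev/app.js",
      env: { NODE_ENV: "development", PORT: 3326 },
    },
  ],
};
```

Both apps are then started with `pm2 start ecosystem.config.js`, and Nginx proxies each environment to its own port.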

In my Nginx sites-enabled folder, I have two files, one for the dev and one for the live environment. Each looks like this:

# Nginx config for /etc/nginx/sites-enabled/

limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
limit_req_status 444;
limit_conn_status 503;

proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
proxy_cache_valid 404 1m;

    upstream {
        server localhost:3325;
    }

    server {
        listen 80;
        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        keepalive_timeout 300;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        ssl_stapling on;
        ssl_stapling_verify on;
        # Use certificate and key provided by Let's Encrypt:
        ssl_certificate /etc/letsencrypt/live/;
        ssl_certificate_key /etc/letsencrypt/live/;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        # `gzip` Settings
        gzip on;
        gzip_disable "MSIE [1-6]\.";

        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 7;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_min_length 256;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/ application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        root /opt/;

        resolver valid=300s;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        add_header X-Cache $upstream_cache_status;

        client_body_buffer_size 8K;
        client_max_body_size 20m;
        client_body_timeout 10s;
        client_header_buffer_size 1k;
        large_client_header_buffers 2 16k;
        client_header_timeout 5s;

        location / {
            proxy_cache_valid 200 30m;
            proxy_cache_valid 404 1m;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            proxy_ignore_headers Set-Cookie;
            proxy_hide_header Set-Cookie;
            proxy_hide_header X-powered-by;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            expires 10m;
        }

        location ~ \.(aspx|php|jsp|cgi)$ {
            return 404;
        }
        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            # root /opt/;
            # Per RFC2616 - 1 year maximum expiry
            expires 1y;
            add_header Cache-Control public;
            access_log  off;
            log_not_found off;

            # Some browsers still send conditional-GET requests if there's a
            # Last-Modified header or an ETag header even if they haven't
            # reached the expiry date sent in the Expires header.
            add_header Last-Modified "";
            add_header ETag "";
        }

        location ~ /.well-known {
            allow all;
        }
    }

The problem is that when I try to add my domains with certbot, the challenge requests always get a 404 from the .well-known directory...

The certbot command:

./certbot-auto certonly --webroot \
    -w /opt/ \
    -d \
    -d \
    -d \
    -d \
    -d \
    -d \
    -w /opt/ \
    -d \
    -d \
    --non-interactive --agree-tos --email

I hope you can help me get Let's Encrypt working for all domains.

Regards, Lukas

3 Answers
rob107421 March 30, 2017
Accepted Answer

My tip: Monitor the certificate to check that your ssl renewal happens reliably.

I wrote a service that is one option for doing this (hosted on DO and signed by Let's Encrypt).

It checks your certificate(s) regularly and notifies you if they get too close to expiry.
(There are other services which do the same.)
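A minimal check can also be scripted with `openssl`. This sketch creates a throwaway self-signed certificate just to demonstrate the expiry test — for a real check you would point it at your Let's Encrypt certificate file (or fetch the live one with `openssl s_client`):

```shell
# Demo: generate a throwaway self-signed cert (valid 90 days) to exercise the check.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.com" \
    -days 90 -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the expiry date of the certificate:
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit 0 only if the cert is still valid 30 days from now -- handy in a cron alert:
openssl x509 -checkend $((30*24*3600)) -noout -in /tmp/demo.crt && echo "still ok"
```

Run from cron, the `-checkend` exit status can trigger a notification well before renewal actually fails.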

I would recommend removing the 301 redirect for the moment and adding

        location ~ /.well-known {
            allow all;
        }
to your server block for port 80. I believe the validation server looks for the challenge over plain HTTP first and may not follow the redirect properly. At least that's the way certbot auto-configured a couple of nginx instances I used it with.
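Put together, the plain-HTTP server block might look something like this — the webroot here is an assumption and must match the directory you pass to certbot via `-w`:

```nginx
server {
    listen 80;

    # Serve ACME challenge files directly over HTTP, before any redirect.
    # The root must match the certbot webroot (-w) directory.
    location ^~ /.well-known/acme-challenge/ {
        root /opt/;          # assumption: same webroot as in the certbot command
        default_type text/plain;
        allow all;
    }

    # Everything else still gets redirected to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```

Once the certificate is issued, the redirect-everything behavior is back in effect for all normal traffic, while renewals keep working over HTTP.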

Thanks a lot, great work! But I got rid of Let's Encrypt. Instead, I am using Cloudflare: they offer a free CDN and a free SSL wildcard certificate, so both things are handled with a simple DNS config.
