Question

How to scale a socket.io & node.js app

Posted January 26, 2020
Tags: Nginx, Apache, Node.js, Ubuntu 18.04

I have a node.js app which currently runs on the smallest droplet available. I am about to use socket.io for real-time updates, e.g. chat and in-app notifications. After load testing I have come to the conclusion that I can only handle a couple hundred concurrent connections. I use Artillery to test the connections:

config:
  target: "https://my.server.com"
  socketio:
    transports: ["websocket"]  # optional, same results if I remove this
  phases:
    - duration: 600
      arrivalRate: 20
scenarios:
- name: "A user that just connects"
  weight: 90
  engine: "socketio"
  flow:
    - get:
        url: "/"
    - think: 600

After a couple hundred connections I receive the following errors:

Errors:
  ECONNRESET: 1
  Error: xhr poll error: 12

Then, when I (temporarily) resize the droplet to 8 vCPUs and 32 GB RAM, I can get upwards of 1,700 concurrent connections. No matter how much further I resize the hardware, it stays stuck at that number.

So, how can I increase this number? How can I allow more concurrent connections to socket.io using node.js? My configuration looks like this:

sysctl -p

fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1

ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3838
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
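
As a side note, I assume the open files limit of 65535 shown above also caps the number of sockets the node process can hold, so I would probably need to raise the per-process nofile limit as well. A minimal sketch, assuming the app runs as www-data and/or under a hypothetical systemd unit called node-app.service:

# /etc/security/limits.conf -- raise the per-user open file limit (example values)
www-data soft nofile 1000000
www-data hard nofile 1000000

# or, if the app runs under systemd:
# /etc/systemd/system/node-app.service.d/override.conf
[Service]
LimitNOFILE=1000000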

nginx.conf

user www-data;
worker_processes auto;
worker_rlimit_nofile 1000000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        multi_accept on;
        use epoll;
        worker_connections 1000000;
}

http {

        ##
        # Basic Settings
        ##

        client_max_body_size 50M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 120;
        keepalive_requests 10000;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}
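
The server block that proxies to node is not shown above, so for completeness: as far as I understand, socket.io behind nginx needs the HTTP/1.1 upgrade headers on the proxied location. Roughly like this (upstream name, port and paths are placeholders for my setup, SSL directives omitted):

# sites-enabled/my.server.com (simplified sketch)
upstream node_backend {
    server 127.0.0.1:3000;   # placeholder port for the node.js app
}

server {
    listen 443 ssl;
    server_name my.server.com;

    location /socket.io/ {
        proxy_pass http://node_backend;
        # forward the WebSocket upgrade, otherwise the upgrade fails and
        # socket.io falls back to long polling
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;
    }
}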

So first up: is there any way to increase the number of concurrent connections on a single droplet?

Secondly, I am considering spinning up several droplets and proxying to them to allow more connections. I have been reading up on this for two days now, and I would go with Redis and nginx: nginx to proxy connections to the different node instances, and Redis to keep those instances in sync. However, I can't figure out how this would work in practice. Should I have, let's say, 4 different droplets all running the same configuration and the same node.js app (server.js)? So would I basically deploy the app 4 times? Any advice regarding this is more than welcome.
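
To make that concrete, my current understanding is that every instance would attach the Redis adapter, so that emits and broadcasts reach sockets connected to the other droplets. A minimal sketch, assuming socket.io 2.x with the socket.io-redis package (hostnames and ports are placeholders):

// server.js (sketch) -- every droplet would run this same app
const http = require('http');
const socketio = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketio(server);

// all instances point at the same Redis, which fans events out across droplets
io.adapter(redisAdapter({ host: 'my-redis-host', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // io.emit reaches clients connected to any of the instances, not just this one
    io.emit('chat message', msg);
  });
});

server.listen(3000);

nginx would then load-balance across the droplets with an upstream block, and since socket.io's polling transport needs every request from a client to reach the same instance, I assume I would either use ip_hash for sticky sessions or restrict clients to the websocket transport.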
