I have a node.js app which currently runs on the smallest droplet available. I am about to use socket.io for real-time updates, e.g. for chat, in-app notifications, etc. I expect to have at least a couple hundred concurrent connections. I use Artillery to test the connections:

  config:
    target: "https://my.server.com"
    socketio:
      transports: ["websocket"] # optional, same results if I remove this
    phases:
      - duration: 600
        arrivalRate: 20
  scenarios:
    - name: "A user that just connects"
      weight: 90
      engine: "socketio"
      flow:
        - get:
            url: "/"
        - think: 600

After a couple hundred connections I receive the following error:

  Error: xhr poll error: 12

Then, when I temporarily resize the droplet to 8 vCPUs and 32 GB RAM, I can get upwards of 1,700 concurrent connections. No matter how much further I resize the hardware, it stays stuck at this number.

So, how can I increase this number? How can I allow more concurrent connections into socket.io using node.js? My configuration looks like this:

My sysctl settings (sysctl -p):

fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1

My limits (ulimit -a):


core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3838
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

And my nginx configuration (/etc/nginx/nginx.conf):


user www-data;
worker_processes auto;
worker_rlimit_nofile 1000000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        multi_accept on;
        use epoll;
        worker_connections 1000000;
}

http {

        # Basic Settings

        client_max_body_size 50M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 120;
        keepalive_requests 10000;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # SSL Settings

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        # Logging Settings

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        # Gzip Settings

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        # Virtual Host Configs

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

So first up: is there any way to increase the concurrent connections on a single droplet?

Second, I am considering spinning up multiple droplets and proxying connections across them. I have read up on this for two days now, and I would go with Redis and nginx: nginx to proxy connections to the different Node instances, and Redis to keep them all in sync. However, I can't figure out how this would work in practice. Should I have, say, four droplets all running the same configuration and the same node.js app (server.js)? Would I basically deploy the app four times? Any advice regarding this is more than welcome.


2 answers

I also hit the 1,700 connection limit and finally found the cause: Socket.io was falling back to long polling instead of using websockets. The solution is to force the websocket transport on the client:

let socket = io(remoteServer, { transports: ["websocket"] });
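
For completeness, a minimal sketch of the matching server side, assuming Socket.IO v4 (the server object name and port are illustrative). Restricting transports on the server as well prevents any polling fallback:

```javascript
// Sketch: a websocket-only Socket.IO server (assumes Socket.IO v4).
const http = require("http");
const { Server } = require("socket.io");

const httpServer = http.createServer();
const io = new Server(httpServer, {
  transports: ["websocket"], // refuse HTTP long-polling entirely
});

io.on("connection", (socket) => {
  console.log("client connected:", socket.id);
});

httpServer.listen(3000);
```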

A bit late to this question, but in case anyone else comes here looking for similar questions and answers:

Here is what you should be using:

1) Socket.io with websockets (as noted above)
2) PM2 clustering (Node is single-threaded, so use all the CPUs you have)
3) socket.io-redis (lets you cluster Socket.io across processes)
4) A Redis server (required by the adapter)
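
A minimal sketch of that stack, assuming Socket.IO v4 with the @socket.io/redis-adapter package (the successor to socket.io-redis) and a Redis server on localhost; the port and event names are illustrative:

```javascript
// Sketch: one Node worker using the Redis adapter, so that multiple
// PM2-clustered processes (pm2 start server.js -i max) share events.
const http = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

async function main() {
  const pubClient = createClient({ url: "redis://localhost:6379" });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const httpServer = http.createServer();
  const io = new Server(httpServer, { transports: ["websocket"] });
  io.adapter(createAdapter(pubClient, subClient));

  io.on("connection", (socket) => {
    // A broadcast here reaches clients connected to *any* worker,
    // because the adapter relays it over Redis pub/sub.
    socket.on("chat", (msg) => io.emit("chat", msg));
  });

  httpServer.listen(3000);
}

main();
```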

Once you have all of this working correctly, you should be able to handle far more concurrent connections. Running load tests, I can get upwards of 15,000 to 20,000 connections on a 2 vCPU box. I would test higher, but my (local) machine hit an upper limit on CPU.

Next, you may look at putting this into containers, or placing a load balancer in front of your app and simply adding new droplets behind it.
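
As a sketch of the load-balancer approach, an nginx upstream with ip_hash (so a client's requests stick to one backend, which matters if long polling is ever used) might look like this; the droplet IPs and port are placeholders:

```nginx
upstream socket_nodes {
    ip_hash;  # sticky by client IP
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
    server 10.0.0.4:3000;
}

server {
    listen 80;

    location /socket.io/ {
        proxy_pass http://socket_nodes;
        proxy_http_version 1.1;          # required for websockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```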
