How to scale socket.io & node.js app
I have a node.js app which currently runs on the smallest droplet available. I am about to use socket.io for real-time updates, e.g. for chat, in-app notifications etc. I have found that I can only handle a couple hundred concurrent connections. I use Artillery to test the connections:
```yaml
config:
  target: "https://my.server.com"
  socketio:
    transports: ["websocket"]  # optional, same results if I remove this
  phases:
    - duration: 600
      arrivalRate: 20
scenarios:
  - name: "A user that just connects"
    weight: 90
    engine: "socketio"
    flow:
      - get:
          url: "/"
      - think: 600
```
After a couple hundred connections I receive the following error:
```
Errors:
  ECONNRESET: 1
  Error: xhr poll error: 12
```
Then, when I resize the droplet (temporarily) to 8 vCPUs and 32 GB RAM, I can reach up to 1,700 concurrent connections. But no matter how much further I resize the hardware, it stays stuck at that number.
So, how can I increase this number? How can I allow more concurrent connections into socket.io using node.js? My configuration looks like this (sysctl settings, followed by `ulimit -a` output):
```
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1
```
```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3838
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
So, first up: is there any way to increase the number of concurrent connections on a single droplet?
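One thing worth ruling out on the single-droplet side: each open socket holds a file descriptor, and a process managed by systemd does not read `/etc/security/limits.conf` at all — it gets whatever `LimitNOFILE` its unit specifies, which can be much lower than the `ulimit -a` you see in an interactive shell. A sketch of a drop-in override, assuming the app runs as a systemd service (the unit name `myapp` is hypothetical):

```
# /etc/systemd/system/myapp.service.d/limits.conf  (hypothetical unit name)
[Service]
LimitNOFILE=1048576
```

After `systemctl daemon-reload` and a restart of the service, `cat /proc/<pid-of-node>/limits` shows the limits the process actually inherited, which is more reliable than checking `ulimit -a` in your own shell.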
Secondly, I am considering spinning up several droplets and proxying connections across them to allow more in total. I have read up on this for two days now, and I would go with Redis and nginx: nginx to proxy different connections to different node instances, and Redis to make sure they all stay connected to each other. However, I can't figure out how this would work in practice. Should I have, let's say, 4 different droplets all running the same configuration and the same node.js app (server.js)? Would I basically deploy the app 4 times? Any advice regarding this is more than welcome.
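For what it's worth, the usual pattern is exactly that: deploy the same `server.js` unchanged to every droplet, put nginx in front with sticky sessions (socket.io's HTTP long-polling transport breaks without them, since a polling cycle must always hit the same instance), and let a Redis adapter relay events between instances. A sketch of the nginx side, assuming four backends on a private network (all IPs and file names are placeholders):

```
# /etc/nginx/sites-available/myapp  (hypothetical; IPs are placeholders)
upstream socket_nodes {
    ip_hash;                 # sticky sessions: same client -> same backend
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
    server 10.0.0.4:3000;
}

server {
    listen 80;
    server_name my.server.com;

    location / {
        proxy_pass http://socket_nodes;
        proxy_http_version 1.1;                  # needed for WebSocket upgrades
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

And the Node side, a sketch assuming socket.io v4 with the `@socket.io/redis-adapter` package and a shared Redis instance reachable from every droplet — this is what makes `io.emit()` reach sockets connected to other instances:

```
// server.js — the same file deployed to each droplet (a sketch; the Redis
// URL is a hypothetical private-network address)
const { createServer } = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

const httpServer = createServer();
const io = new Server(httpServer);

const pubClient = createClient({ url: "redis://10.0.0.5:6379" });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
  httpServer.listen(3000);
});

io.on("connection", (socket) => {
  // With the adapter in place, this broadcast reaches sockets on every
  // instance, not just the ones connected to this droplet.
  socket.on("chat", (msg) => io.emit("chat", msg));
});
```

So there is no special "4 different apps" setup: it is one app, deployed N times, with the balancer and the adapter doing the coordination.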