Socket/Node app memory leak or garbage collection? Or HAProxy/Redis issue?

August 29, 2014

Hi everyone. Looking for some insight or guidance.

[HAProxy/Redis LB server] -> [broker] -> [RabbitMQ], all over wildcard SSL

I have an HAProxy/Redis load balancer pointing at a socket.io/Express broker. This app keeps using more and more memory, to the point that it starts to hiccup and eventually clogs the whole system. Even if I cluster the broker layer, one broker clogging/stopping halts everything, which is why I first thought it was an HAProxy problem. There are no major errors in the HAProxy logs, though I do see a missing SSL handshake; I couldn't tell whether that's the problem or a product of the problem. The SSL works fine right up until one of the socket brokers starts to choke. I explored this avenue (trying to fix my SSL cert order, etc.) but with no luck.

[Pic of my socket server's memory usage]

So I started thinking it was my socket server, because the memory just climbs and climbs until the hiccups/stall happen. The drops you see are from me restarting the server with forever.

The broker app does not actually fail or crash; it just seems to get clogged, and it keeps storing all subsequent messages in memory even while it's down. So I woke up this morning to a queue of messages in memory (I *think* this is what is going on) that all ended up passing through when I restarted the node/broker app.
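
If the broker really is buffering every message while the downstream stalls, one thing I'm considering is a hard cap on that buffer, so a stall can't grow memory without bound. This is only a sketch; rabbitIsWritable() and forwardToRabbit() are stand-ins for whatever the broker actually does, and MAX_PENDING is an arbitrary example:

    // Hypothetical sketch of a bounded message buffer so a stalled
    // downstream can't grow broker memory without limit. The two stub
    // functions below stand in for the real broker/RabbitMQ logic.
    var MAX_PENDING = 10000; // arbitrary example cap

    var pending = [];

    function rabbitIsWritable() {
      return true; // placeholder: report whether RabbitMQ is keeping up
    }

    function forwardToRabbit(msg) {
      console.log('would publish to RabbitMQ:', msg); // placeholder
    }

    function enqueue(msg) {
      if (pending.length >= MAX_PENDING) {
        pending.shift(); // drop the oldest message instead of growing forever
        console.warn('broker backlog full, dropping oldest message');
      }
      pending.push(msg);
    }

    // Drain loop: forward queued messages only while RabbitMQ keeps up.
    setInterval(function () {
      while (pending.length > 0 && rabbitIsWritable()) {
        forwardToRabbit(pending.shift());
      }
    }, 100);

At least that way a choking broker would shed load instead of queueing everything in memory until I restart it.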

I would love to be able to phrase my question better than this, but I simply can't right now. Can anyone point me in the right direction? Right now I am looking into flags like --nouse-idle-notification and --expose-gc to see if the JS garbage collector is the cause. I really have no idea what's going on.
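
For what it's worth, this is roughly how I'm testing the GC theory: start the broker with node --expose-gc, force a collection on an interval, and log heap usage before and after. If heapUsed keeps climbing even across forced collections, something is retaining references (a real leak) rather than the collector just being lazy. The interval and output format here are only an example:

    // Run with: node --expose-gc app.js
    // Logs heap usage before and after a forced GC once a minute.
    setInterval(function () {
      var before = process.memoryUsage().heapUsed;
      global.gc(); // only defined when started with --expose-gc
      var after = process.memoryUsage().heapUsed;
      console.log('heapUsed: ' + (before / 1048576).toFixed(1) + ' MB -> '
                + (after / 1048576).toFixed(1) + ' MB after gc');
    }, 60000);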

Any help from the community would be greatly appreciated!
Thanks, everyone!

  • Jordan

Edit:

  1. Is it possible for me to inspect my server's memory more closely, to find out exactly what is causing it to steadily rise? (See the heap-snapshot sketch below.)
  2. I've tried starting it with --nouse-idle-notification; it doesn't seem to affect anything.
  3. Also tried --expose-gc; again, no change in server behavior. Memory steadily climbs.
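
For (1), the closest approach I've found is taking V8 heap snapshots a few minutes apart and diffing them in Chrome DevTools. A sketch using the third-party heapdump module (npm install heapdump); the file path and interval are just examples:

    // Writes a heap snapshot every 10 minutes while memory is climbing.
    // Load two snapshots into Chrome DevTools (Profiles tab) and use the
    // comparison view to see which objects are accumulating.
    var heapdump = require('heapdump');

    setInterval(function () {
      heapdump.writeSnapshot('/tmp/broker-' + Date.now() + '.heapsnapshot',
        function (err, filename) {
          if (err) console.error('heap snapshot failed:', err);
          else console.log('heap snapshot written to', filename);
        });
    }, 10 * 60 * 1000);
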
2 comments
  • Since you are using Node.js, you could take a look at Google's heap profiler.

  • Hi,

    We are also experiencing a similar memory issue, on DO servers only. We have a Node.js app that forks child processes and uses the built-in IPC protocol to communicate between the parent and child processes. On local systems (Ubuntu and Mac) the application works like a charm, but when we put it on the server there is a massive memory leak because the sockets used for IPC communication are not being GC'ed. We have Docker images for the app based on the node 8.9.4 Docker image. I also tested this on an Amazon EC2 instance and it works properly there too. What could be the issue with the DO servers?
