Question

NPM gets killed

No matter what I do with npm, it gets killed. I know there are many reports about this, and everybody says increasing RAM or swap should help, but for some reason it doesn't in my case. First: I increased RAM to 3 GB and swap to 2 GB and it didn't help.

deployer@staging:~/apps/naprok/releases/20181112091712$ /usr/bin/env npm audit fix
npm WARN deprecated browserslist@1.5.2: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated text-encoding@0.6.4: no longer maintained
npm WARN deprecated browserslist@1.7.7: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated nomnom@1.6.2: Package no longer supported. Contact support@npmjs.com for more info.
npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.
Killed.............] \ fetchMetadata: sill resolveWithNewModule util-deprecate@1.0.2 checking installable status

deployer@staging:~/apps/naprok/releases/20181112091712$ /usr/bin/env npm install 
npm WARN deprecated browserslist@1.5.2: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated text-encoding@0.6.4: no longer maintained
npm WARN deprecated browserslist@1.7.7: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated nomnom@1.6.2: Package no longer supported. Contact support@npmjs.com for more info.
npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.
Killed.............] / loadDep:yargs: sill resolveWithNewModule astral-regex@1.0.0 checking installable status

Second: I'm facing this on my staging server, but I also have a production server with even less memory where npm runs without being killed.

Thanks for any help!


Some update:

  • Increased swap to 4 GB; it didn't help.
  • Started facing the same issue on both droplets.



Hey friend!

These logs don't seem to indicate why the process died. Do you have any logs that do? Somewhere like /var/log/syslog or /var/log/messages is where you'd find OOM errors if it's memory. You'd also see a clear out-of-memory kill noted on the web console if that were the cause.
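For example, a quick way to check for OOM-killer activity (assuming an Ubuntu/Debian droplet where the kernel log lands in /var/log/syslog) is something like:

# search the system log for OOM-killer activity (path varies by distro)
grep -iE 'out of memory|killed process' /var/log/syslog
# or check the kernel ring buffer directly, with readable timestamps
dmesg -T | grep -iE 'out of memory|oom'

If npm really is being killed by the kernel, you'll see a line naming the npm/node process and how much memory it held when it was killed.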

Note that if it is memory, comparing to another server is a difficult thing to do. I know this is a point of confusion for a lot of people when they face this, but the simple reality of the internet is that no two public-facing systems are ever truly the same. I'll give you a theoretical (but plausible) scenario to explain that:

IP 1.1.1.1 has no record of being successfully attacked in the last 5 years, so it appears on fewer attack lists than 1.1.1.2, which was successfully attacked in 2013 and therefore receives more automated attacks, causing elevated memory usage in its public-facing applications. As a result, 1.1.1.2 sees more memory usage than 1.1.1.1 despite an identical software stack. This is only one of many theories you can't really prove or disprove, but it is plausible based on what we know about how attackers work (and the fact that not all IPs are attacked with equal frequency). Everything on the internet is under constant attack by someone; the only things that really vary are the frequency of the attacks, the attackers, and their reasons. This is why you can never truly compare one server to another unless you've compared all of their inbound traffic.

Moving beyond that, I wouldn’t assume it’s memory until you know for sure. If you do know for sure, then you have two options:

  1. Add more memory
  2. Build the application stack to better handle the load

I rarely recommend #1. If you're setting off fireworks in the living room and the ceiling catches fire, raising the ceiling seems like a bad solution to me: the fireworks just go higher and the new ceiling catches fire too. To me, it makes more sense to contain the fireworks and not let them reach the ceiling in the first place. That's a clumsy illustration, but I think you get where my brain is going.

If we know it's memory and this is a public-facing application, put a caching layer in front of it. This reduces requests to the backend application, and the caching software is generally better at releasing memory back to the system. Nginx is pretty much the top choice for this; just use it as a reverse proxy with caching enabled.
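To make that concrete, here's a minimal sketch of what that can look like. The port, domain, and cache sizes are placeholders you'd adjust to your own setup (this assumes the Node app listens on 127.0.0.1:3000):

# /etc/nginx/conf.d/app-cache.conf -- minimal reverse-proxy-with-cache sketch
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=200m inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache appcache;
        proxy_cache_valid 200 302 10m;                 # cache successful responses for 10 minutes
        proxy_cache_use_stale error timeout updating;  # serve stale content if the backend is struggling
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:3000;
    }
}

Even a short cache lifetime like that takes a lot of repeated (and automated) requests off the backend, which is usually where the memory pressure comes from.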

Hope that helps :)

Jarland