Question

104: Connection reset by peer on nginx rtmp server

On my Droplet, all of a sudden the client's live stream upload gets disconnected and later reconnected, and the error below is fired…

The Droplet runs NGINX RTMP live video streaming. No PHP is installed.

RTMP error log at that time:

2018/03/23 22:02:22 [debug] 12805#0: *4 hls: update fragment
2018/03/23 22:02:22 [info] 12805#0: *1 recv() failed (104: Connection reset by peer), client: 36.255.234.243, server: 0.0.0.0:1935
2018/03/23 22:02:22 [info] 12805#0: *1 disconnect, client: 36.255.234.243, server: 0.0.0.0:1935
2018/03/23 22:02:22 [info] 12805#0: *1 deleteStream, client: 36.255.234.243, server: 0.0.0.0:1935

Is this error related to bandwidth/port speed on my server, or is it an issue at my clients' end?

This issue happens to 2 out of 2 streams, but not at the same time.



Hi @rohirnaik44,

I bet that if you check dmesg you'll see some errors about php-fpm. Is that correct?

Such errors usually occur when the server is running out of resources (assuming you're running the most recent stable versions of PHP and PHP-FPM).

Check that there's enough space on the disk. Then check the open file limits on the server; you're especially interested in the hard limit (-Hn):

$ ulimit -Hn
4096
$ ulimit -Sn
1024
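If the limits turn out to be too low, they can be raised. A minimal sketch, assuming a typical Linux setup where nginx runs as the nginx user and PAM limits are read from /etc/security/limits.conf (the 8192 value is only illustrative; pick your own):

# /etc/security/limits.conf -- illustrative values, adjust to your needs
nginx soft nofile 8192
nginx hard nofile 8192

If nginx is managed by systemd, services don't read limits.conf; the equivalent there is setting LimitNOFILE= in a unit override.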

Check the number of currently open file descriptors on the server:

sysctl fs.file-nr
fs.file-nr = 1440       0       790328
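The three numbers are: allocated file handles, allocated-but-unused handles, and the system-wide maximum (fs.file-max). If that system-wide maximum ever becomes the bottleneck, it can be raised with sysctl; a sketch, with an illustrative value:

# raise the system-wide limit on the running system (illustrative value)
sysctl -w fs.file-max=1000000

# persist it across reboots
echo 'fs.file-max = 1000000' >> /etc/sysctl.conf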

Modern servers are capable of handling many open files; the default ulimits are usually set to unnecessarily low values.

Then check nginx.conf; at the beginning there's something like:

worker_processes 4;
events {
  worker_connections 1024;
}

If you're proxying requests, each connection needs 2 file handles (one for the client connection and one for the upstream). This means that with many connections you'd reach the limit quite quickly.

nginx has a worker_rlimit_nofile directive for setting the maximum number of open files per worker process (a top-level directive, like worker_processes 4;):

worker_rlimit_nofile    1024;

Just do the math and calculate how many open file descriptors you'd need when all connections are in use (a somewhat extreme case); see the worked example below. Also account for all other services running on that server.
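A worked example using the figures from the config above (an assumption; plug in your own numbers):

# 4 workers, 1024 connections per worker, 2 descriptors per proxied connection:
#   per worker:   1024 * 2 = 2048 descriptors
#   whole server: 4 * 2048 = 8192 descriptors
worker_rlimit_nofile 2048;

Note that 2048 per worker already exceeds the soft limit of 1024 from the ulimit -Sn output above, so the OS limits would need raising along with the nginx directive.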

Regards, KDSys