By MondArt
Hi!
We are using the Managed Kafka service for a project. During a stress test, we observed errors like the following:
{"level":"ERROR","timestamp":"[REDACTED]","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection error: read ECONNRESET","retryCount":0,"retryTime":280}
{"level":"ERROR","timestamp":"[REDACTED]","logger":"kafkajs","message":"[Connection] Connection error: read ECONNRESET","broker":"kafka-***.ondigitalocean.com:****","clientId":"kafka-test-***","stack":"Error: read ECONNRESET\n at TLSWrap.onStreamRead (node:internal/stream_base_commons:218:20)"}
We don’t believe Kafka itself imposes these limits. Based on past experience with Redis, we suspect it might be related to service-side connection or throughput limits.
Questions:
Is there a rate limit on concurrent connections or message throughput for DigitalOcean Managed Kafka?
Are there recommendations to produce high volumes of events without causing service disconnections?
Could the issue be caused by network/firewall limits rather than Kafka itself?
If this is not a service limit and the error is in our application (i.e., there is no limit on Kafka connections with either the private or public connection string), how can we detect and fix the problem?
If this is a DigitalOcean service limit, how can we change or increase the concurrent connection limit?
We want to ensure our microservices can produce and consume high volumes of events without bottlenecking other services.
Thanks in advance for your guidance.
Heya, @mondart
DigitalOcean’s Kafka limits are mostly about cluster count, node count, versions, and trusted sources (IP allowlist), not per-client connection or message rate.
For managed databases, including Kafka, you can’t attach DigitalOcean Cloud Firewalls directly to the cluster; instead, you use “trusted sources” (VPC resources and IPs). That means DO’s own firewall isn’t randomly closing your connections mid-stream.
If at all possible, connect over the private/VPC endpoint from Droplets/Kubernetes/App Platform in the same region rather than the public hostname. That removes the public internet and most middleboxes from the equation.
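As a rough sketch of what that looks like on the client side, assuming a kafkajs client (the library shown in your logs) — the hostname, port, and `clientId` below are placeholders, not your real connection details — you'd point `brokers` at the private endpoint and give the client a bit more patience around handshakes and retries:

```javascript
// Hypothetical kafkajs-style client options. Option names are from kafkajs;
// the hostname, port, and clientId are placeholders, not real values.
const clientConfig = {
  clientId: 'my-service',
  // Use the private/VPC hostname from the cluster's connection details,
  // not the public one, when the client runs in the same region/VPC.
  brokers: ['private-kafka-example.ondigitalocean.com:25073'],
  ssl: true,
  // Give TLS handshakes more headroom during connection spikes.
  connectionTimeout: 10000,
  requestTimeout: 30000,
  // Back off and retry instead of hammering the broker after ECONNRESET.
  retry: { initialRetryTime: 300, retries: 8 },
};

module.exports = clientConfig;
```

The exact values are workload-dependent; the point is that a private endpoint plus exponential-backoff retries removes the public network path and smooths out reconnect storms.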
Hope that this helps!
Hi,
As far as I am aware, Managed Kafka itself doesn’t impose hard throughput caps at the level you’re describing. When this happens during stress tests, it is usually tied to connection churn, TLS handshakes, client configuration, or something on the network path rather than a fixed Kafka limit.
If you’re sure the clients aren’t opening too many short-lived connections or overwhelming the brokers with handshake spikes, the best next step is to open a ticket with the DigitalOcean support team. They can check the broker logs and the network layer to tell you if something is being throttled or terminated on the managed service side.
If it turns out to be on the client side, stabilizing the connection pool and reusing long-lived connections usually solves it. If it’s on the service side, support can advise how to scale or adjust the limits for your setup.
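For example, a minimal sketch of the long-lived pattern (the function and variable names here are hypothetical, not from your code): create the producer once and share the same connected instance across the whole service, instead of connecting per request:

```javascript
// Minimal sketch: lazily create one shared producer promise so every caller
// reuses the same long-lived connection instead of opening a new one.
let producerPromise = null;

function getProducer(createAndConnect) {
  // createAndConnect should return a promise for a connected producer, e.g.
  // async () => { const p = kafka.producer(); await p.connect(); return p; }
  if (!producerPromise) {
    producerPromise = createAndConnect();
  }
  return producerPromise;
}

module.exports = { getProducer };
```

Every caller then does `const producer = await getProducer(...)` and sends through it; under a stress test this keeps the broker seeing a handful of stable connections rather than thousands of short-lived TLS handshakes.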