So currently I am load testing an application I have. I am using App Platform (a $12 container) along with a $15 managed Redis database.
I am trying to load test by sending 3,000 requests all at the same time, and in the request handler the App Platform app increments a Redis key by 1.
I understand 3,000 simultaneous requests is a lot, but when I look at the Insights for both the database and the App Platform component, neither RAM nor CPU ever gets high enough to throttle or bog things down. I would love some support with this!
Hi there,
When it comes to load testing, an incremental approach can provide more insight into the scalability and performance bottlenecks of your system. Rather than sending all 3000 requests simultaneously, you can gradually increase the number of requests. This method helps identify at what load level the connections begin to reset and allows you to adjust your configurations or scale your resources accordingly before reaching a point where the performance is impacted.
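If it helps, here is a minimal ramp-up sketch in Python (assuming `aiohttp` is installed); the endpoint URL, stage sizes, and concurrency cap are placeholders you would adjust for your own app:

```python
# Minimal ramp-up load test sketch (assumes `pip install aiohttp`).
# TARGET_URL, STAGES, and the concurrency cap are placeholders.
import asyncio
import aiohttp

TARGET_URL = "https://your-app.ondigitalocean.app/increment"  # hypothetical endpoint
STAGES = [100, 500, 1000, 2000, 3000]  # requests per stage, ramping up


async def fire(session: aiohttp.ClientSession) -> int:
    """Send one GET request, returning the HTTP status (or 0 on a connection error)."""
    try:
        async with session.get(TARGET_URL) as resp:
            return resp.status
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return 0


async def run_stage(n_requests: int, concurrency: int = 200) -> None:
    """Fire n_requests with at most `concurrency` in flight at once."""
    connector = aiohttp.TCPConnector(limit=concurrency)
    async with aiohttp.ClientSession(connector=connector) as session:
        results = await asyncio.gather(*(fire(session) for _ in range(n_requests)))
    ok = sum(1 for status in results if status == 200)
    print(f"{n_requests} requests: {ok} succeeded, {n_requests - ok} failed or reset")


async def main() -> None:
    for n in STAGES:
        await run_stage(n)
        await asyncio.sleep(5)  # pause between stages so metrics can settle


if __name__ == "__main__":
    asyncio.run(main())
```

Watching which stage is the first one to produce failed or reset requests tells you roughly where the bottleneck sits, even if CPU and RAM never visibly saturate.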
The DigitalOcean App Platform does not impose a hard limit on the number of concurrent connections to your application. However, it’s important to remember that each active connection consumes system resources, and a large volume of connections can impact your app’s performance, especially under heavy load. If you’re noticing performance issues when your app is handling many simultaneous connections, it may be beneficial to consider scaling your app horizontally or vertically to better handle the increased workload.
For Redis, there’s a cap on how many new connections can be handled per second for each CPU in your cluster, which is set at up to 200 new connections. Attempts to establish new connections beyond this threshold within the same second are likely to be rejected. Clients that are affected by this limitation should retry the connection attempt. A practical approach to mitigate this limitation is to implement connection pooling in your application. Connection pooling allows you to reuse a set of pre-established connections to the Redis server, which can significantly reduce the frequency of connection establishment and thereby the likelihood of hitting the connection rate limit.
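As a rough illustration, here is a minimal connection-pooling sketch with the `redis-py` client; the connection URL, pool size, and key name are placeholders for your own values:

```python
# Minimal connection-pooling sketch with redis-py (`pip install redis`).
# The rediss:// URL below is a placeholder for your managed database's connection string.
import redis

# Create one pool at application startup and reuse it for every request,
# so the app keeps at most `max_connections` TCP connections to Redis
# instead of opening a fresh one per incoming request.
pool = redis.ConnectionPool.from_url(
    "rediss://default:your-password@your-redis-host.db.ondigitalocean.com:25061",
    max_connections=50,  # well below the per-second new-connection cap
)

r = redis.Redis(connection_pool=pool)


def handle_request() -> int:
    """Per-request logic: increment the counter over a pooled connection."""
    return r.incr("load_test_counter")
```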
Moreover, the maximum number of simultaneous connections for Redis nodes is the greater of 10,000 connections or 4 connections per megabyte of assigned memory. For instance, a Redis node with 1GiB (1024MB) of memory supports up to 10,000 simultaneous connections, since 4 × 1024 = 4,096 falls below the 10,000 floor, while a node with 4GiB (4096MB) of memory can sustain up to 16,384 simultaneous connections (4 connections multiplied by 4096MB).
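To make the arithmetic concrete, the rule is simply the greater of 10,000 or 4 × memory in MB:

```python
def max_simultaneous_connections(memory_mb: int) -> int:
    """Connection ceiling: the greater of 10,000 or 4 connections per MB of memory."""
    return max(10_000, 4 * memory_mb)

print(max_simultaneous_connections(1024))  # 1GiB node -> 10000 (4 * 1024 = 4096 is below the floor)
print(max_simultaneous_connections(4096))  # 4GiB node -> 16384
```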
Hope that this helps and let me know how it goes!
Best,
Bobby