Hi,
I have a load balancer serving 2 Droplets (both 100% healthy, Debian do-kube-1.16.6-do.2 in SFO2, running a Node.js application). Looking at the load balancer's graphs, the session duration is currently running at around 900 seconds, with occasional abrupt spikes of 100+ seconds. The graph creeps up, spikes, and occasionally returns to 0.
I am not able to figure out what triggers these spikes, and there are no corresponding errors in the nginx monitoring logs. The information for this graph says: "This is the average TCP session time measured at the Load Balancer and is useful for detecting anomalies."
Are these session times & the behaviour normal? I am new to K8S, nginx & load balancers. Any help would be much appreciated.
Thanks, KSA
Hi there,
TCP session duration as measured at the load balancer level simply represents the average time that connections between clients and your load balancer stay open. This includes both active time (when data is being transferred) and idle time (when the connection is open, but no data is being transferred).
For a web application, a typical HTTP request/response cycle should be fairly quick - perhaps a few hundred milliseconds to a few seconds, depending on the complexity of the request. However, HTTP keep-alive, WebSocket connections, long polling, and other techniques can result in connections that stay open for much longer periods of time.
You mentioned that you’re seeing an average session duration of around 900 seconds, or 15 minutes. This seems quite long for a typical web application. However, it may be perfectly normal depending on the nature of your application. For instance, if your Node.js application uses WebSockets or long polling to keep connections open for real-time updates, then these longer session times could be expected.
The spikes in session duration could be due to a number of factors: perhaps a periodic task that takes a long time to run, or intermittent network issues that delay responses. A few things worth checking:
- **Long-running requests:** Check your application logs to see if there are any requests that take a long time to process. This could be due to large file uploads, complex queries, etc.
- **Network issues:** Network latency or instability could prolong session durations as packets take longer to be transmitted and acknowledged.
- **Application errors or hangs:** If your application encounters an error or hangs, it might keep the connection open longer than expected. Check your application logs for any errors or unusual behavior.
- **Client-side behavior:** Some clients might hold connections open for a long time, either due to their own behavior or network issues on their end.
If none of the above appear to be the case, it’s also possible that this is normal behavior for your application, particularly if you’re not noticing any impact on application performance or user experience.
Best,
Bobby