Question

Kubernetes Load Balancer Removes Droplets

I have a DO managed Kubernetes setup consisting of 3 droplets. Months ago, when I set this up, I thought all 3 were connected to the load balancer created by the ingress. Sometimes it drops to two, and if I happen to notice, I add the missing droplet back. Is this expected behavior? Why would a droplet just vanish from the LB?



Hi there,

This is not the expected behavior for LBs provisioned with DOKS. In our current implementation, all DOKS nodes should be added to the LB as potential targets. You can control how traffic is sent to them using the `externalTrafficPolicy` setting on your LoadBalancer Services. I would need to look more closely at your specific cluster to debug why nodes are being removed from your LB. Can you please open a support ticket so I can triage further?
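For reference, here is a minimal sketch of what such a LoadBalancer Service might look like. The Service name, selector, and ports below are placeholders, not values taken from your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller        # placeholder name for the ingress Service
spec:
  type: LoadBalancer                    # on DOKS this provisions a DO Load Balancer
  externalTrafficPolicy: Cluster        # default: traffic may be routed through any node
  # externalTrafficPolicy: Local        # alternative: preserves client source IPs, but only
  #                                     # nodes running ingress pods pass the LB health check
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder pod selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```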

Regards,

John Kwiatkoski, Senior Developer Support Engineer

I have since recycled all of the droplets in the cluster because I couldn’t leave it in that state.

To give more detail: the LB would simply report 2/2 droplets. The denominator wasn’t the 3 I expected, given that the DO managed cluster has 3 droplets.

Another issue is that even after adding the droplet back so that I saw 3/3, the health check on each droplet would drop from 100%. (I honestly don’t know the details of that health check, because I assume it is created by the nginx-ingress I am using.) More specifically, two of the droplets would sit at roughly a 50% health check rate, with the remaining droplet hovering around 20%. It was odd and very frustrating, so I recycled them.

I have a similar setup where the LB is created on Kubernetes by an Nginx ingress, and I have never noticed similar behaviour. How do you detect that a droplet is being removed?