Kubernetes Load Balancer Removes Droplets

July 20, 2019
Load Balancing Kubernetes

I have a DO managed Kubernetes setup consisting of 3 droplets. Months ago when I set this up, I thought that all 3 were connected to the load balancer created by the ingress. Sometimes it drops to two, and if I happen to notice, I add the missing droplet back. Is this expected behavior? Why would a droplet just vanish from the LB?

3 Answers

I have a similar setup, where the LB is created on Kubernetes by an Nginx ingress, and I have never noticed any similar behaviour. How do you detect that a droplet is being removed?

  • This is for a side project that I don't check often. The DO web interface, to my surprise, told me that there were only 2 droplets attached, even though the Kubernetes cluster has three. I tried attaching the missing droplet. At some point later in the day it was back to two droplets. I didn't have the patience to sort it out and restarted the entire cluster.

    I am really wondering: if a droplet becomes unavailable from the LB's point of view during health checking, does the LB actually remove the droplet from its list? If that is expected behavior, I am surprised.

    I wish I had more info to bring to the table, but I'm still relatively new to kubernetes + helm. So it could be something I did, I suppose.

I have since recycled all droplets in the cluster because I couldn't leave it in a non-working state.

In terms of more detail, the LB would literally just report 2/2 droplets. The denominator wasn't the 3 I expected, given that the DO managed cluster has 3 droplets.

Another issue is that even after adding the droplet back so that I would see 3/3, the health check on each droplet would drop from 100%. (I honestly know no details about the health check because, I assume, it is created by the nginx-ingress I am using.) More specifically, two of the droplets would have a ~50% health check pass rate, with the remaining droplet hovering around ~20%. It was odd and very frustrating, so I recycled them.

Hi there,

This is not the expected behavior for LBs provisioned with DOKS. In our current implementation, all DOKS nodes should be added to the LB as potential targets. There are different behaviors you can set with regard to how traffic is sent to them using the externalTrafficPolicy setting on your LoadBalancer Services. I would need to look more closely at your specific cluster in order to debug why nodes are being removed from your LB. Can you please open a support ticket so I can triage further?
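For reference, here is a minimal sketch of a LoadBalancer Service showing where externalTrafficPolicy is set. The Service name, selector, and ports below are placeholders for illustration, not values taken from your cluster:

    # Minimal LoadBalancer Service on DOKS (illustrative names).
    apiVersion: v1
    kind: Service
    metadata:
      name: my-ingress-lb            # hypothetical name
    spec:
      type: LoadBalancer
      # "Cluster" (the default) lets every node proxy traffic to the pods;
      # "Local" only routes to nodes running a matching pod, which also
      # changes which nodes pass the LB health check.
      externalTrafficPolicy: Local
      selector:
        app: nginx-ingress           # hypothetical selector
      ports:
        - name: http
          port: 80
          targetPort: 80

Note that with externalTrafficPolicy: Local, nodes that are not running a matching pod fail the LB health check by design, which can look similar to droplets dropping out even though they are still attached.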

Regards,

John Kwiatkoski
Senior Developer Support Engineer

  • John,

    I did open a ticket, but given that I recycled the nodes, I'm not sure what can be found at this point. Another issue, which might have something to do with a failure on my part, is that the cluster itself seems to recreate itself once every few weeks. I don't know why this happens, as I'm not yet very familiar with kubernetes, but it surprises me to say the least. I have an open ticket about this now, as my cluster was restarted less than a day ago (not by myself).

    If there is a way to permanently save logs so that I can get more insight into what is happening, that would be great. I rely on the metrics in the dashboard right now, and with what appears to be a full cluster restart, any history is lost. So it is a bit frustrating.

    • Thank you for the info. I can take another look into your ticket if you want. If you simply reply to it, it should show up in my queue.

      This is not a failure on your part. This is the expected behavior in our managed product. Nodes are often patched during your set maintenance window. This is the period when nodes receive bug fixes, security updates, and general maintenance. I would make sure your window is set to the most convenient time for you.

      You can look into setting up some sort of logging solution within the cluster if your workloads or organization policies rely on that. I would check the link below for deployment help:

      https://github.com/helm/charts/tree/master/stable/elastic-stack

      As for the LB issues, we are currently working on enhancing the flexibility of the LBs, as renaming or modifying them manually can have quite adverse effects. Any modifications or changes should be made through the Service object in Kubernetes and its annotations:

      https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/
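      To illustrate, here is a rough sketch of driving the LB configuration through annotations on the Service rather than editing the LB manually. The annotation names and values should be checked against the doc above; the Service name, selector, and ports are placeholders:

          # DO LB configured via Service annotations (illustrative values);
          # see the linked doc for the full annotation reference.
          apiVersion: v1
          kind: Service
          metadata:
            name: nginx-ingress-controller   # hypothetical name
            annotations:
              service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
              service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/healthz"
          spec:
            type: LoadBalancer
            selector:
              app: nginx-ingress             # hypothetical selector
            ports:
              - name: http
                port: 80
                targetPort: 80

      Changes made this way are reconciled by the cloud controller from the Service spec, whereas manual edits in the control panel can be overwritten or cause the kinds of adverse effects mentioned above.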

      John
