Question

Load balancer created by Kubernetes shows status "Issue" unless all workers have the service's target pod running on them

Posted July 26, 2019 · 743 views
Tags: DigitalOcean · Load Balancing · Kubernetes

I’m running a 4-node Kubernetes cluster with a load balancer as the service frontend for my application. The application runs in a Deployment with 3 replicas, so there will always be one worker without a pod for this service. The load balancer, however, includes all workers regardless, and reports the status “Issue” because one of the workers isn’t responding to health checks. Am I setting something up incorrectly, or can the load balancer’s logic be changed to target only the workers backing the service when it is created from Kubernetes?
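For reference, here is a minimal sketch of the kind of setup described above (the Deployment name, image, and ports are illustrative, not taken from the actual cluster):

```yaml
# Illustrative sketch only: 3 replicas on a 4-node cluster means one
# worker never runs a pod for this Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # provisions the external load balancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```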

1 answer

Hi there!

Yes, I understand the product is misleading at best here. This is a UI issue we are working on addressing. You are correct that it tells users there is an issue when in fact everything is operating as expected. We are evaluating our options for reporting the status differently when the LBs are provisioned by, or associated with, DOKS.

There is no acceptable workaround for this behavior at the moment. Changing the externalTrafficPolicy on your Service will allow all nodes to accept traffic and report as healthy. However, that setting also affects network behavior and likely won’t be suitable for a lot of workloads.
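For example, a minimal sketch (the Service name and ports are illustrative): with externalTrafficPolicy set to Cluster, the Kubernetes default, every node passes the LB health check because kube-proxy forwards traffic from nodes without a local pod; the trade-off is an extra network hop and loss of the original client source IP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  # "Cluster" (the default): all nodes accept traffic and report healthy,
  # but the client source IP is not preserved and an extra hop is added.
  # "Local": source IP is preserved, but only nodes running a pod for
  # this Service pass the LB health check.
  externalTrafficPolicy: Cluster
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```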

I would recommend using an alternative method of validating your application’s health until we can sort out how we want our LBs to behave.

Regards,

John Kwiatkoski
Senior Developer Support Engineer

  • Hi John,

    Thanks for the reply, and for acknowledging that this is known and being looked into. Looking forward to having it resolved; in the meantime I have other means of checking the health of my application.

    Best regards,
    Marcus

  • I just ran into the same issue, so it seems this wasn’t addressed in the meantime.

    I think this issue is quite a time sink. I tried redeploying several times and changing configs, and after some googling I finally found this thread.

    Are there any new developments on this since July?

    Thanks!

    • Hi there!

      Unfortunately, this would require a rework of how “health” is determined in our underlying LB product, and it is not as straightforward as it may seem, since “health” is subjective and differs from user to user.

      We actually recommend a proper monitoring solution for checking your application’s accessibility, rather than our simple LB health checks. A monitoring solution can check more than just whether a 200 response is returned, and can also provide insight into other issues occurring on the cluster.

      For the time being, a few projects that come to mind are Prometheus and kube-state-metrics.
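
      As one hedged example, assuming kube-state-metrics is installed and scraped by Prometheus, an alerting rule like the following would flag any Deployment whose available replicas fall below the desired count:

      ```yaml
      # Sketch of a Prometheus alerting rule; both metrics are exported
      # by kube-state-metrics. The threshold and labels are assumptions.
      groups:
        - name: deployment-health
          rules:
            - alert: DeploymentReplicasUnavailable
              expr: kube_deployment_status_replicas_available < kube_deployment_spec_replicas
              for: 5m
              labels:
                severity: warning
              annotations:
                summary: "Deployment {{ $labels.deployment }} has fewer available replicas than desired"
      ```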

      Regards,

      John Kwiatkoski
      Senior Developer Support Engineer
