Load balancer created by Kubernetes shows status "Issue" unless every worker has a service's target pod running on it

July 26, 2019
DigitalOcean Load Balancing Kubernetes

I’m running a 4-node Kubernetes cluster with a load balancer as the service frontend for my application. The application runs in a Deployment with 3 replicas, so there will always be one worker without a pod for this service. The load balancer, however, includes all workers regardless, and reports the status “Issue” because one of the workers isn’t responding to health checks. Am I setting something up incorrectly, or can the load balancer’s logic be changed to only target the workers backing the service when it is launched from Kubernetes?

1 Answer
jkwiatkoski July 26, 2019
Accepted Answer

Hi there!

Yes, I understand the product behavior here is misleading at best. This is a UI issue we are working on addressing. You are correct that it tells users there is an issue when in fact everything is operating as expected. We are evaluating our options for reporting the status differently when the LBs are provisioned by, or associated with, DOKS.

There is no ideal workaround for this behavior at the moment. Changing the `externalTrafficPolicy` on your Service will allow all nodes to accept traffic and report as healthy. However, that setting also affects network functionality (for example, client source IPs are no longer preserved) and probably won’t be suitable for a lot of workloads.
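For reference, a minimal sketch of a Service manifest with this setting (the service name, port numbers, and selector below are hypothetical placeholders; adjust them to your application):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical name
spec:
  type: LoadBalancer
  # With "Cluster" (the default), kube-proxy on every node forwards traffic
  # to a pod, so all nodes pass the load balancer's health check.
  externalTrafficPolicy: Cluster
  # With "Local", only nodes actually running a target pod pass the health
  # check, but client source IPs are preserved and an extra network hop
  # is avoided.
  # externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

With `Local`, nodes without a pod intentionally fail the health check so the load balancer skips them, which is what produces the “Issue” status described above.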

I would recommend using an alternative method of validating your application’s health until we sort out how we want our LBs to behave.

Regards,

John Kwiatkoski
Senior Developer Support Engineer

  • Hi John,

    Thanks for the reply and acknowledging this is known and being looked into. Looking forward to having it resolved, meanwhile I have other means of checking the health of my application.

    Best regards,
    Marcus
