I’m running a 4-node Kubernetes cluster with a load balancer as the Service frontend for my application. The application runs in a Deployment with 3 replicas, so there is always one worker without a pod for this Service. The load balancer, however, includes all workers regardless and reports the status “Issue” because one of the workers isn’t responding to health checks. Am I setting something up incorrectly, or can the load balancer’s logic be changed to only target the workers backing the Service when it is provisioned from Kubernetes?
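For reference, a minimal sketch of this kind of setup might look like the following. The names and image are placeholders, and it assumes the Service uses `externalTrafficPolicy: Local`, which is what causes workers without a local pod to fail the load balancer’s health check:

```yaml
# Hypothetical names/image, only to illustrate the setup described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # 3 pods on a 4-node cluster, so one worker has no pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25        # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer               # provisions the external load balancer
  externalTrafficPolicy: Local     # assumption: only nodes running a pod pass the LB health check
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```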
Hi there!
Yes, I understand the product here is misleading at best. This is a UI issue we are working on addressing. You are correct that it tells users there is an issue when in fact everything is operating as expected. We are evaluating our options for reporting the status differently when the LBs are provisioned by, or associated with, DOKS.
There is no acceptable workaround for this behavior at the moment. Changing the `externalTrafficPolicy` on your Service will allow all nodes to accept traffic and report as healthy. However, that setting also affects network behavior (for example, client source IPs are no longer preserved) and probably won’t be suitable for a lot of workloads.
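As a sketch of that workaround (not a recommendation), switching the policy to `Cluster` makes every node proxy traffic to the pods, so all four workers pass the health check, at the cost of an extra hop and the loss of client source IP preservation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # hypothetical Service name from the sketch above
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # all nodes accept and forward traffic, so all report healthy
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```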
I would recommend using an alternative method of validating your application’s health until we can sort out how we want our LBs to behave.
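One way to validate application health independently of the load balancer status page is a readiness probe on the pods themselves. This is only a sketch; it assumes the application exposes an HTTP health endpoint such as `/healthz`, and the snippet would go inside the container spec of the Deployment above:

```yaml
# Hypothetical /healthz endpoint; added under the container spec in the Deployment.
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```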
Regards,
John Kwiatkoski
Senior Developer Support Engineer