Question
Load balancer created by Kubernetes shows status "Issue" unless all workers have a service's target pod running on them
I'm running a 4-node Kubernetes cluster with a load balancer as the service frontend for my application. The application runs in a Deployment with 3 replicas, so there will always be one worker without a pod for this service. The load balancer, however, includes all workers regardless and reports the status "Issue" because one of the workers isn't responding to health checks. Am I setting something up incorrectly, or can the load balancer's logic be changed to only target the workers that are backing the service when it is created from Kubernetes?
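For reference, here is a minimal sketch of the kind of setup described above. The names, image, and ports are placeholders, and the `externalTrafficPolicy: Local` line is an assumption on my part — it is the Service setting that makes a cloud load balancer health-check each worker node individually, which would explain the behavior I'm seeing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 3                  # 3 replicas on a 4-node cluster, so one worker has no pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  # Assumption: with "Local", the external load balancer health-checks each node
  # directly and only nodes that run a pod for this service pass the check, so the
  # fourth worker fails and the load balancer reports an "Issue". The default,
  # "Cluster", lets every node proxy traffic to a node that does have a pod,
  # so all four workers would pass the health check.
  externalTrafficPolicy: Local
```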