Load balancer created by Kubernetes shows status "Issue" unless all workers have one of the service's target pods running on them
I’m running a 4-node Kubernetes cluster with a load balancer as the service frontend for my application. The application runs in a Deployment with 3 replicas, so there will always be one worker without a pod for this service. The load balancer, however, includes all workers regardless, and marks itself with the status “Issue” because one of the workers isn’t responding to health checks. Am I setting something up incorrectly, or can the load balancer's logic be changed to only target the workers backing the service when it is provisioned by Kubernetes?
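For reference, here is a minimal sketch of the setup described above; the names, image, and ports are hypothetical placeholders, not taken from my actual cluster. The `externalTrafficPolicy` field on the Service is likely the relevant knob: with `Local`, only nodes that actually run a target pod pass the load balancer's health checks (which matches the behavior I'm seeing), while the default `Cluster` lets every node answer via kube-proxy.

```yaml
# Hypothetical Deployment matching the description: 3 replicas on a 4-node cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder image
          ports:
            - containerPort: 8080
---
# LoadBalancer Service fronting the Deployment.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  # With "Local", only nodes hosting a target pod respond to the
  # load balancer's health checks; nodes without a pod fail them.
  # The default, "Cluster", forwards traffic from every node.
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```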