Can't use two load balancers for the same Kubernetes pool?

September 3, 2019
DigitalOcean Kubernetes DigitalOcean Managed Kubernetes Load Balancing

I set up my Kubernetes cluster. At first it had one load balancer serving my static content, and everything was working fine. Now I’ve tried adding a second load balancer to serve my API, but it doesn’t seem to work. In the control panel it shows as unhealthy (screenshot attached).

And when I run kubectl get all it appears to all be working fine:

NAME                                READY   STATUS              RESTARTS   AGE
pod/cinch-engine-8b8fb784b-zqkbw    1/1     Running             0          28m
pod/cinch-static-64f9b98d88-tj5jl   1/1     Running             0          73m
pod/pgdb-85c5d747cc-n6jn8           1/1     Running             5          73m

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
service/cinch-engine-load-balancer   LoadBalancer   10.245.68.233    68.183.249.0     80:31173/TCP   73m
service/cinch-static-load-balancer   LoadBalancer   10.245.114.184   138.197.233.57   80:30119/TCP   73m
service/kubernetes                   ClusterIP      10.245.0.1       <none>           443/TCP        4d5h
service/pgdb                         ClusterIP      10.245.25.230    <none>           5432/TCP       73m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cinch-engine   1/1     1            1           73m
deployment.apps/cinch-static   1/1     1            1           73m
deployment.apps/pgdb           1/1     1            1           73m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/cinch-engine-8b8fb784b    1         1         1       28m
replicaset.apps/cinch-static-64f9b98d88   1         1         1       73m
replicaset.apps/pgdb-85c5d747cc           1         1         1       73m

NAME                             COMPLETIONS   DURATION   AGE
job.batch/cinch-engine-migrate   1/1           4m21s      73m
2 comments
  • Hi there,

    What you’re attempting should be possible. I would double-check that your health checks are set to what you’re expecting. You can configure the health checks using service annotations:

    https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/

    Also note that the externalTrafficPolicy on your service determines the rules for a node showing as “healthy”:

    ‘Local’ - Only nodes hosting a pod of that service will show as healthy.
    ‘Cluster’ - All nodes will show as healthy, but you will incur an extra network hop, since any node can accept traffic and then forward it to a node running the pod. You will lose the original client IP with this setting.
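    As a rough sketch of what the comment above describes, a Service manifest combining these annotations with an explicit traffic policy might look like this (the health-check path and targetPort are assumptions for illustration; the annotation names come from the DigitalOcean docs linked above):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: cinch-engine-load-balancer
      annotations:
        # Use an HTTP health check on a specific path instead of the
        # default TCP check (path "/healthz" is an assumed example).
        service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
        service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/healthz"
    spec:
      type: LoadBalancer
      # "Local" preserves the client IP; only nodes running a pod of this
      # service will pass the load balancer's health check.
      externalTrafficPolicy: Local
      selector:
        app: cinch-engine
      ports:
        - port: 80
          targetPort: 8000
    ```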

    Let me know if you have any additional questions.

    Regards,

    John Kwiatkoski
    Senior Developer Support Engineer

  • Thank you for the help. It turns out I was binding to 127.0.0.1:8000 instead of 0.0.0.0:8000. The problem has nothing to do with Kubernetes, DigitalOcean, or LoadBalancers. This question can be deleted as I can’t imagine it will be relevant to anyone else.
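  • For anyone else hitting the same symptom: the difference between the two bind addresses can be reproduced with a minimal sketch using plain Python sockets (port 8000 matches the setup above; the helper name is made up for illustration):

    ```python
    import socket

    def make_listener(host: str, port: int = 8000) -> socket.socket:
        """Open a TCP listening socket bound to the given interface."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind((host, port))
        sock.listen()
        return sock

    # Bound to loopback: reachable only from inside the pod itself, so the
    # load balancer's health check (arriving on the pod/node IP) fails.
    # make_listener("127.0.0.1")

    # Bound to all interfaces: reachable via the pod IP, so kube-proxy and
    # the load balancer can get through.
    # make_listener("0.0.0.0")
    ```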

1 Answer

This question was answered by @clondon in the comments above: the application was binding to 127.0.0.1:8000 instead of 0.0.0.0:8000.