Question

Why is the Load Balancer routing traffic to only 1 node?

Information

Within my cluster, I have 2 nodes running. On top of my cluster, I have a load balancer set up with the Round Robin algorithm. However, I notice in the DigitalOcean Control Panel that only 1 of the nodes shows as Accepting traffic, while the other shows No traffic.

When I dove into the Insights of both nodes, I see very different sets of data (CPU Usage, Load Average, Memory Usage, etc.), so I assumed both nodes were receiving traffic equally.

Question

Can someone please confirm whether it’s expected for the Load Balancer to report that only 1 of the nodes is accepting traffic? I expect to see both nodes accepting traffic.



Bobby Iliev
Site Moderator
May 21, 2024

Hey Tommy,

From what you’re describing, it is indeed not expected for a Load Balancer configured with the Round Robin algorithm to consistently route traffic to only one node.

What I could suggest here is checking the following things:

  1. Make sure that both nodes are passing the health checks defined in your load balancer configuration. If a node fails the health check, the load balancer will stop routing traffic to it.

    Check health checks configuration:

    kubectl describe service <your-load-balancer-service>
    
  2. Check that both nodes are in a ready state. Sometimes, nodes might not be ready to accept traffic due to various reasons like resource constraints, network issues, etc.

    Check node status:

    kubectl get nodes
    
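    If you want to quickly filter out any node that is not Ready, you can pipe that output through a short `awk` filter. Here is a sketch using sample output with hypothetical node names (in practice, pipe the real `kubectl get nodes` output in instead of the here-doc):

    ```shell
    # Print the names of nodes whose STATUS column is not "Ready".
    # The here-doc stands in for real `kubectl get nodes` output.
    cat <<'EOF' | awk 'NR > 1 && $2 != "Ready" {print $1}'
    NAME             STATUS     ROLES    AGE   VERSION
    pool-abc-node1   Ready      <none>   30d   v1.29.1
    pool-abc-node2   NotReady   <none>   30d   v1.29.1
    EOF
    # → pool-abc-node2
    ```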
  3. Verify that your pods are evenly distributed across both nodes. Sometimes, Kubernetes may not distribute pods evenly due to resource availability or affinity/anti-affinity rules. So if all pods are scheduled on 1 node, that could explain why you are seeing that specific behaviour:

    Check pod distribution:

    kubectl get pods -o wide
    
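    To make the distribution easier to see at a glance, you can count pods per node from that output (the NODE column is field 7 in `kubectl get pods -o wide --no-headers`). A minimal sketch with hypothetical pod and node names; in practice, replace the here-doc with the real kubectl output:

    ```shell
    # Count running pods per node. The here-doc mimics
    # `kubectl get pods -o wide --no-headers` output.
    cat <<'EOF' | awk '{count[$7]++} END {for (n in count) print n, count[n]}' | sort
    web-5d9 1/1 Running 0 2d 10.244.0.5 pool-abc-node1 <none> <none>
    web-7f2 1/1 Running 0 2d 10.244.0.6 pool-abc-node1 <none> <none>
    api-3c1 1/1 Running 0 2d 10.244.1.4 pool-abc-node2 <none> <none>
    EOF
    # → pool-abc-node1 2
    # → pool-abc-node2 1
    ```

    If one node shows all the pods, rescheduling (or checking affinity rules) is the place to start.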
  4. Confirm that your Kubernetes service is correctly configured to use a load balancer and is exposing the necessary ports.

    Check service details:

    kubectl get service <your-service> -o yaml
    
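One field worth looking for in that service YAML (this is an assumption about your setup, not something confirmed from your post) is `externalTrafficPolicy`. When it is set to `Local`, the load balancer’s health checks only pass on nodes that actually run a backing pod, so a node without pods will show as No traffic even though the cluster is healthy. A quick way to spot it in a saved manifest (the here-doc is a hypothetical manifest; in practice, pipe the output of `kubectl get service <your-service> -o yaml` in instead):

```shell
# Look for the externalTrafficPolicy field in a service manifest.
# The here-doc stands in for real `kubectl get service ... -o yaml` output.
cat <<'EOF' | grep externalTrafficPolicy
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
EOF
# → externalTrafficPolicy: Local
```

If you see `Local` and don’t specifically need client source IPs preserved, switching to `Cluster` makes every node pass the health checks.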

Also, you can check the logs for any errors that might be preventing a node from accepting traffic:

kubectl logs <pod-name> --tail=50

If you’ve gone through these steps and everything looks good, yet the problem persists, reaching out to the DigitalOcean support team with your findings could be the next best step.

Let me know how it goes!

- Bobby
