How to Add Load Balancers to Kubernetes Clusters

The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file. The DigitalOcean Load Balancer Service routes load balancer traffic to all worker nodes on the cluster. Only nodes configured to accept the traffic pass health checks; any other nodes fail and show as unhealthy, which is expected. Our community article, How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes, provides a detailed, practical example.

The example configuration below will define a load balancer and create it if one with the same name does not already exist. Additional configuration examples are available in the DigitalOcean Cloud Controller Manager repository.

Create a Configuration File

You can add an external load balancer to a cluster by creating a new configuration file or adding the following lines to your existing service config file. Note that both the type and ports values are required for type: LoadBalancer:

spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      name: http

In the context of a service file, this might look like:

apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      name: http

This is the minimum definition required to trigger creation of a DigitalOcean Load Balancer on your account. Billing begins once the load balancer has been created. Currently, you cannot assign a floating IP address to a DigitalOcean Load Balancer.
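
To create the load balancer, apply the manifest to your cluster with kubectl apply. The following is a minimal sketch that assumes the example above is saved in a file named sample-load-balancer.yaml (the filename is illustrative):

kubectl --kubeconfig=[full path to cluster config file] apply -f sample-load-balancer.yaml

kubectl confirms with service/sample-load-balancer created once the Service object is accepted, and the cloud controller then begins provisioning the load balancer.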

Show Load Balancers

Once you apply the config file to the cluster, use kubectl get services to see the status of the service:

kubectl --kubeconfig=[full path to cluster config file] get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes             ClusterIP      192.0.2.1       <none>           443/TCP        2h
sample-load-balancer   LoadBalancer   192.0.2.167     <pending>        80:32490/TCP   6s

When the load balancer creation is complete, the EXTERNAL-IP column displays the external IP address of the load balancer instead of <pending>. In the PORT(S) column, the first port is the incoming port (80), and the second port is the node port (32490), not the container port supplied in the targetPort parameter.
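
For example, once provisioning finishes, the same get services command returns output along these lines (the external IP address shown here is illustrative and matches the address used in the describe example below):

kubectl --kubeconfig=[full path to cluster config file] get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
sample-load-balancer   LoadBalancer   192.0.2.167     203.0.113.86     80:32490/TCP   2m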

Show Details for One Load Balancer

To get detailed configuration information for a single load balancer, use kubectl’s describe service command:

kubectl --kubeconfig=[full path to cluster config file] describe service [NAME]
Name:                     sample-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"sample-load-balancer","namespace":"default"},"spec":{"ports":[{"name":"http",...
Selector:                 <none>
Type:                     LoadBalancer
IP:                       192.0.2.167
LoadBalancer Ingress:     203.0.113.86
Port:                     http  80/TCP
TargetPort:               3000/TCP
NodePort:                 http  32490/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age               From                Message
  ----    ------                ----              ----                -------
  Normal  EnsuringLoadBalancer  3m (x2 over 38m)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m (x2 over 37m)  service-controller  Ensured load balancer
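
In this output, Selector and Endpoints are <none> because the minimal example does not include a selector, so the load balancer exists but does not yet forward traffic to any pods. In practice, you typically add a selector that matches your application's pod labels. The following is a minimal sketch that assumes your pods carry a hypothetical app: sample-app label:

apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
spec:
  type: LoadBalancer
  selector:
    app: sample-app   # hypothetical label; must match the labels on your pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      name: http

Once pods matching the selector are running, the Endpoints field in the describe output lists their addresses and the load balancer begins routing traffic to them.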

References

For more about managing load balancers, see: