Use Load Balancer with Let's Encrypt in DigitalOcean Kubernetes


I just tried the DigitalOcean Managed Kubernetes and loved it! Great work 🙌

I successfully deployed my first app with a load balancer on TCP port 80, and can access it from the internet via my domain.

But now I would like to switch the Load Balancer to HTTPS with Let's Encrypt (using DO's built-in Let's Encrypt integration), and I'm struggling to connect that to my Kubernetes service.

How can I describe this kind of deployment in my load-balancer YAML file?



Hi and thanks for your question!

We recently published How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes, which you may find helpful.

This similar question may also be helpful.

I managed to get it working by updating the forwarding rule that maps port 443 to the service's node port. Then I was able to create a certificate for the domain I had pointed at the load balancer.

However, that customized rule breaks if the service changes or if I have a node failure. I'm going to post separately about that issue.

So, from a Kubernetes perspective, I created a simple Nginx deployment:

# 1-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginxdemos/hello:latest
        ports:
        - containerPort: 80

kubectl create -f 1-deployment.yaml

After creating that, I had a number of simple pods running.
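As a quick sketch, you can confirm the rollout with commands like these (the names match the manifest above):

```shell
# Check that the Deployment reports 6/6 ready replicas
kubectl get deployment nginx-deployment

# List the pods the Deployment created, filtered by the app label
kubectl get pods -l app=nginx
```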

Then I set up a service to connect the load balancer to the pods:

# 2-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
      name: https

kubectl create -f 2-service.yaml

When I run the create on the service, the D.O. Load Balancer is created and bound to my Droplets. It takes a bit to get up and running, so you'll have to wait for it to finish.
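If you prefer watching from the CLI instead of the control panel, a sketch (the service name matches the manifest above):

```shell
# EXTERNAL-IP shows <pending> until the DO load balancer is provisioned
kubectl get service nginx-service --watch

# Once provisioned, grab the external IP directly
kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```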

Then I had to set up a DNS A record pointing to the load balancer; this is required to create the certificate later. I have a demo record on one of my domains pointing to the load balancer, so I just made sure it was updated. The TTL is 3600, so it took a while to update. I could probably lower that, but I wasn't sure of the consequences, so I left it (I'm not a DNS guru).
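If your domain is managed in DigitalOcean, you can also create the A record with doctl. A hedged sketch with placeholder values (swap in your own domain, record name, and the load balancer's IP):

```shell
# Hypothetical values: example.com, "demo", and 203.0.113.10 are placeholders
doctl compute domain records create example.com \
  --record-type A \
  --record-name demo \
  --record-data 203.0.113.10 \
  --record-ttl 300
```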

Once the DNS entry is at least set up (even if it hasn't propagated), go into the Load Balancer settings and find the first forwarding rule, which says TCP 443 -> TCP 3xxxx (whatever node port is assigned to the service; copy that port).

As soon as you switch the incoming protocol to “HTTPS”, the output port gets reset to 80, so you'll want to paste in the port you copied. Now it should read HTTPS 443 -> HTTP 3xxxx.
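The same rule change can be made from the CLI with doctl. A sketch under assumptions (the load balancer ID, name, region, node port, and certificate ID are placeholders, and the exact required flags may vary by doctl version):

```shell
# Placeholders throughout: substitute your own load balancer ID,
# name, region, node port (the 3xxxx port you copied), and cert ID
doctl compute load-balancer update <lb-id> \
  --name my-lb --region nyc1 \
  --forwarding-rules entry_protocol:https,entry_port:443,target_protocol:http,target_port:30080,certificate_id:<cert-id>
```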

AND… now you have the ability to select a certificate. If you haven't created a Let's Encrypt certificate on DO before, you can do it here for your domain.

When all that is in place, you just have to wait for the DNS to finish propagating and you're good to go.

Just pray you don't have to rebuild the load balancer, or the DNS will have to be updated again. This is where I wish the load balancer could keep a stable, reassignable IP.
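For what it's worth, the DigitalOcean cloud controller also supports service annotations that declare the HTTPS termination in the manifest itself, which should survive service changes and node failures better than hand-editing the forwarding rule. A hedged sketch (the certificate ID is a placeholder you'd take from your DO account):

```yaml
# 2-service.yaml (alternative: declare TLS via DO service annotations)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # Placeholder: the ID of your Let's Encrypt certificate in DO
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 80
```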


This is exactly what I'm trying to do, but somehow it doesn't work on my side.

If you don't mind sharing your ingress + cert-manager configuration, that would be great :)