By Paul Kimbrel
I created an HTTPS service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
      name: https
```
This brought up a load balancer with a protocol of TCP 443 -> TCP 3xxxxx (varying based on the service port). This didn’t really terminate HTTPS the way I wanted, so I switched the incoming protocol to HTTPS, which nuked the output port. Thankfully I had copied it down, so I updated that rule back to the appropriate output service port and had a proper HTTPS channel going through the load balancer to the K8S service. Woot!
However, I still wanted an LE certificate, so I set up a DNS entry pointing to my LB (side quest - any way to use an elastic IP to point to my LB so I don’t have to update the DNS if I have to rebuild the LB? Gold star if there is…).
So with the DNS entry set up, I could add an LE cert to the forwarding rule I added. Woot! Woot!
HOWEVER…
Now if I have a node failure, the replaced droplet isn’t getting added back into the LB pool. So I’m fine until the last node goes down - and if my PODs didn’t get properly distributed to the last node, I might run into trouble until K8S rebalances. Even if I have all my nodes back up, as soon as I lose the last of the first set of nodes, then I have NO nodes left in the LB and I’m dead in the water (so to speak). Plus, as the nodes were going down and coming back up, I ended up with nodes outside the LB pool which are costing money but not, you know, DOING anything. Unwoot.
Now, if I simply, manually, re-add all the droplets of the restarted nodes back to the LB, I’m back in business. But… that seems like something that should happen automatically.
Is it because I’m manually dorking around with the LB? Is there any way I could have the proper protocol, SSL termination, LE encrypt all be configurable from K8S (or maybe that’s coming?)?
Heya,
To achieve proper SSL termination and Let’s Encrypt certificate handling, you can use Kubernetes Ingress along with an Ingress controller like NGINX Ingress controller or Traefik. This will allow you to manage and route incoming traffic and handle SSL termination, without needing to configure the load balancer manually.
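As a minimal sketch, an Ingress resource for your setup might look like this (the hostname is a placeholder, and the `cert-manager.io/cluster-issuer` annotation assumes cert-manager is installed with a ClusterIssuer named `letsencrypt-prod` — see the guide linked below):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Assumes cert-manager is installed and a ClusterIssuer
    # named "letsencrypt-prod" exists in the cluster.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.yourdomain.com      # placeholder hostname
      secretName: nginx-tls           # cert-manager stores the LE cert here
  rules:
    - host: example.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service   # your existing Service
                port:
                  number: 80
```

With this approach, your Service no longer needs `type: LoadBalancer` — the Ingress controller's own Service creates and manages the DigitalOcean Load Balancer, so replaced nodes are re-added to the pool automatically.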
DigitalOcean Load Balancers can integrate with Let’s Encrypt, but this integration is not designed for use with DigitalOcean Kubernetes configurations, as it can lead to the issues you’ve mentioned. Instead, use Ingress resources and an Ingress controller, which will enable you to configure SSL termination, protocol settings, and manage Let’s Encrypt certificates from within your Kubernetes cluster.
For more details on how to configure an Ingress resource and deploy an Ingress controller with DigitalOcean Kubernetes, please follow this guide: How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
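For reference, the cert-manager side of that guide boils down to a Let’s Encrypt ClusterIssuer along these lines (the name, email, and secret name are placeholders; this assumes cert-manager is already deployed and the NGINX Ingress controller handles HTTP-01 challenges):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@yourdomain.com          # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-key       # ACME account key storage
    solvers:
      - http01:
          ingress:
            class: nginx               # solved via the NGINX Ingress controller
```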
Hope that this helps!