I created an HTTPS service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
      name: https
```

This brought up a load balancer with a forwarding rule of TCP 443 -> TCP 3xxxxx (varying based on the service port). That didn't actually terminate HTTPS the way I wanted, so I switched the incoming protocol to HTTPS, which nuked the target port. Thankfully I had copied it down, so I updated that rule back to the appropriate target port and had a proper HTTPS channel going through the load balancer to the K8S service. Woot!
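For context, I suspect the manual protocol flip could be avoided by declaring it on the Service itself. This is only my guess at what DigitalOcean's cloud controller manager supports via annotations; the annotation name below is an assumption and should be checked against the DO docs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    # Assumed DO cloud-controller annotation: tell the provisioned LB
    # to accept HTTPS on the front end instead of plain TCP.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 80
```

If that annotation works, the LB should come up with the right protocol from the start instead of requiring edits in the control panel.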

However, I still wanted an LE certificate, so I set up a DNS entry pointing to my LB (side quest - any way to use an elastic IP to point to my LB so I don’t have to update the DNS if I have to rebuild the LB? Gold star if there is…).
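Absent a reserved IP on the LB, the best fallback I can think of is scripting the DNS update. This is a hypothetical sketch using `doctl`; the domain, record ID, and flag names are placeholders I haven't verified:

```
# Hypothetical repoint script: grab the rebuilt LB's IP and update the A record.
# example.com and 12345678 are placeholders; look up the real record ID with
# `doctl compute domain records list example.com`.
LB_IP=$(doctl compute load-balancer list --format IP --no-header)
doctl compute domain records update example.com \
  --record-id 12345678 \
  --record-data "$LB_IP"
```

Not as good as a stable IP, but it at least turns the rebuild into a one-liner.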

So with the DNS entry set up, I could add an LE cert to the forwarding rule I added. Woot! Woot!
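Again guessing at what might be declarable from K8S: if DO's cloud controller supports attaching a certificate by ID, something like the annotations below might replace the manual forwarding-rule edit. Both annotation names and the certificate ID value are assumptions on my part:

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    # Assumed annotation: ID of the LE certificate to terminate with,
    # presumably discoverable via `doctl compute certificate list`.
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
```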


Now if I have a node failure, the replaced droplet isn't getting added back into the LB pool. So I'm fine until the last original node goes down - and if my pods didn't get properly distributed to that last node, I might run into trouble until K8S rebalances. Even if I have all my nodes back up, as soon as I lose the last of the first set of nodes, I have NO nodes left in the LB and I'm dead in the water (so to speak). Plus, as the nodes were going down and coming back up, I ended up with nodes outside the LB pool which are costing money but not, you know, DOING anything. Unwoot.

Now, if I simply, manually, re-add all the droplets of the restarted nodes back to the LB, I’m back in business. But… that seems like something that should happen automatically.
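In case it helps anyone diagnose, the manual fix I'm doing is roughly this (a sketch - the LB and droplet IDs are placeholders, and I'd want someone to confirm the subcommand name):

```
# Hypothetical manual fix: re-attach the replacement droplets to the LB.
# Get the real IDs from `doctl compute load-balancer list` and
# `doctl compute droplet list`.
doctl compute load-balancer add-droplets <load-balancer-id> \
  --droplet-ids 111111,222222
```

It works, but having to run it after every node replacement is what feels wrong.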

Is it because I'm manually dorking around with the LB? Is there any way I could have the proper protocol, SSL termination, and the LE cert all be configurable from K8S (or maybe that's coming?)?
