Question
kubernetes load balancer forwarding rules port changing randomly
Our load balancer is set up to terminate both HTTPS (443) and HTTP (80) connections to an nginx ingress controller (internal port 30684), using the DO managed load balancer solution. The certificate is managed as well, using Let's Encrypt automatically from the control panel (no cert-manager service). We followed this tutorial, minus cert-manager, as we assumed the DO managed solution would take care of that portion (and it works well aside from this issue!) - https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
We have had a few occasions where the forwarding rules changed seemingly at random (maybe 3 or 4 times in the past year). It may be associated with the automated certificate renewal, as the last time it happened we also received a renewal notification email that same day.
Here is some information from kubectl; not much, but it may show something. The ports here (30684/30397) are correct and expected, but the load balancer forwarding rules in the DO control panel occasionally change and must be set back to those shown here.
C:\Users\x>kubectl get svc --namespace=ingress-nginx
NAME            TYPE           CLUSTER-IP        EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx   LoadBalancer   ###.###.###.###   ###.###.###.###   80:30684/TCP,443:30397/TCP   368d

C:\Users\x>kubectl get pods --namespace cert-manager
No resources found in cert-manager namespace.
The forwarding rules in the control panel should be:
TCP 80 -> TCP 30684
HTTPS 443 -> HTTP 30684 (with managed certificate)
When this issue occurs, we can resolve it by going into the control panel and setting the rules back to what they should be, but our application suffers HTTPS communication failures until an admin can sign in to fix it. Is there a way to prevent this from happening without manual intervention? Do we need to switch to the cert-manager solution recommended by the tutorial?
(Accidentally left this as an answer, lol. Reposting as a comment.)
I just ran into this issue too. We suddenly started seeing 400 errors that said "HTTP request was sent to HTTPS port", and it took me two hours to find the cause. Manually changing the ports in the load balancer worked for us as well. The worrying thing is I have no idea why this happened or when it might happen again.
Noah - that's exactly what we saw! Support has indicated that changes made in the control panel are not guaranteed to persist, which lines up with our observations. The desired settings must instead be declared as annotations on the load balancer Service in Kubernetes.
I received links to the following, but I’m not sure which annotations need to be defined on my load balancer. Maybe someone with more experience can weigh in:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
I do see this annotation on my balancer, so I think this is the right train of thought (running kubectl get svc ingress-nginx --namespace=ingress-nginx -o yaml):
service.beta.kubernetes.io/do-loadbalancer-certificate-id
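To make this concrete, here is a sketch of what the annotated Service might look like, based on the digitalocean-cloud-controller-manager annotations doc linked above. This is an assumption on my part, not a tested config: the certificate UUID is a placeholder you would need to replace with your own, and you should verify the annotation values against your cluster before applying.

```yaml
# Hypothetical sketch: declare the TLS termination settings as annotations
# on the ingress-nginx Service, so the cloud controller manager reconciles
# the load balancer toward these values instead of them being control-panel-only.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Forward to the nodes over HTTP by default
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    # Terminate TLS at the load balancer on port 443
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # UUID of the DO-managed certificate (placeholder, use your own)
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<your-certificate-uuid>"
spec:
  type: LoadBalancer
  ...
```

The same annotations could presumably be applied in place without editing the manifest, e.g. kubectl annotate svc ingress-nginx --namespace=ingress-nginx service.beta.kubernetes.io/do-loadbalancer-tls-ports=443 --overwrite. The idea either way is that annotations on the Service are the source of truth the controller keeps enforcing, whereas manual control panel edits can be overwritten on the next reconcile.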