While following the tutorial on this topic (https://docs.digitalocean.com/products/kubernetes/how-to/create-internal-load-balancer/), I have a couple of questions about this statement: “However, if a node goes down, then the DNS record syncs and the Droplets need to get the updated DNS. This results in some downtime.”
Thank you.
Hey!
I just brought that up internally and can confirm the following:
Yes, this issue would apply when the Cluster Autoscaler removes nodes. It doesn’t matter how the nodes are removed; the effect will be the same. Node additions typically shouldn’t cause issues, but node removals can lead to downtime if requests are routed to deleted nodes before the DNS record is updated and the TTL expires.
The length of the downtime largely depends on how ExternalDNS is configured, with the record TTL being the dominant factor. If the TTL is short, downtime should be minimal, but longer TTLs can lead to more noticeable disruptions.
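For reference, the TTL can be set per record through the standard ExternalDNS annotations on the Service. Here is a minimal sketch, assuming ExternalDNS is already running in the cluster and managing your domain; the hostname and app labels are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    # Hypothetical hostname; replace with a domain ExternalDNS manages.
    external-dns.alpha.kubernetes.io/hostname: internal.example.com
    # Publish the nodes' private IPs, so the record stays internal.
    external-dns.alpha.kubernetes.io/access: private
    # Short TTL (in seconds) so clients pick up record changes quickly
    # after a node is removed; trades faster failover for more DNS lookups.
    external-dns.alpha.kubernetes.io/ttl: "30"
spec:
  type: NodePort
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```

You can check the TTL that clients actually see with `dig +noall +answer internal.example.com`, and after draining a node, watch how long it takes for the stale IP to drop out of the answer.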
As far as I know, this issue is a known trade-off when working with dynamic, autoscaling environments like Kubernetes, particularly when combining DNS with load balancers in any cloud environment.
Hope that this helps!
- Bobby