How do I set DNS records for Kubernetes with a load balancer?
I am using Helm to create a Kubernetes deployment. In front of this there is a load balancer and an ingress controller; see below for an abridged version of `helm status auth`:

```
==> v1/Service
NAME               TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)       AGE
auth-mongodb       ClusterIP     10.245.186.209  <none>         27017/TCP     74s
auth-redis-master  ClusterIP     10.245.34.39    <none>         6379/TCP      74s
auth-redis-slave   ClusterIP     10.245.213.27   <none>         6379/TCP      74s
auth               LoadBalancer  10.245.82.177   126.96.36.199  80:32645/TCP  74s

==> v1beta1/Ingress
NAME  HOSTS                            ADDRESS  PORTS  AGE
auth  auth.feature-deploy.example.com           80     74s
```
I’m happy enough with all of this, and it works well on DigitalOcean (DO). The only thing I can’t work out is how to update the DNS A record automatically.
The domain name is managed in the DO control panel. If I manually create an A record for auth.feature-deploy.example.com pointing at the load balancer's external IP, everything works fine. However, whenever I destroy or update the deployment, I have to repeat this step by hand.
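For context, the manual step I repeat after each deploy is equivalent to the following `doctl` call (a sketch, assuming `doctl` is authenticated and example.com is managed in this DO account; the IP is the load balancer's EXTERNAL-IP from `helm status`):

```shell
# Manually (re)create the A record after each deploy.
# Assumes: doctl is authenticated, and the zone example.com
# already exists under this DigitalOcean account.
doctl compute domain records create example.com \
  --record-type A \
  --record-name auth.feature-deploy \
  --record-data 126.96.36.199 \
  --record-ttl 300
```

This works, but it is exactly the step I'd like Kubernetes to handle for me.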
Is there a way of getting Kubernetes to set the DNS A record that I'm missing? Have I found a gap in the DO Kubernetes stack, or am I overlooking something?
```yaml
# Source: auth/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth
  labels:
    app.kubernetes.io/name: auth
    helm.sh/chart: auth-0.1.0
    app.kubernetes.io/instance: auth
    app.kubernetes.io/managed-by: Tiller
spec:
  rules:
    - host: "auth.feature-deploy.example.com"
      http:
        paths:
          - path: /
            backend:
              serviceName: auth
              servicePort: http
```
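One thing I've come across but not yet verified on DO is external-dns, which watches Services and Ingresses and creates the matching records at the DNS provider. A minimal sketch of what I'd try, assuming the stable chart and a DO API token in `$DO_TOKEN` (the value keys here are assumptions from the chart's documentation, not something I've confirmed):

```shell
# Illustrative only: install external-dns pointed at DigitalOcean DNS,
# restricted to the example.com zone. Chart value names are assumptions.
helm install stable/external-dns \
  --name external-dns \
  --set provider=digitalocean \
  --set digitalocean.apiToken="$DO_TOKEN" \
  --set domainFilters[0]=example.com \
  --set policy=sync
```

If I understand it correctly, external-dns would then pick up the host from the Ingress above and manage the A record itself. Is that the intended approach on DO, or is there something built into the platform?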