My Kubernetes cluster has only one node for now, managed by DigitalOcean.
The web application I deployed runs in 3 pods, all on that ONE node. I use DigitalOcean's external load balancer to expose the application outside the cluster.
Here are the k8s resource definitions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shovik-com
  labels:
    app: shovik-com
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shovik-com
  template:
    metadata:
      labels:
        app: shovik-com
    spec:
      containers:
        - name: shovik-com
          image: aspushkinus/shovik:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: shovik-com
---
apiVersion: v1
kind: Service
metadata:
  name: shovik-com-balancer
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "do-cert-id"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  type: LoadBalancer
  selector:
    app: shovik-com
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
```
This works great and the website is live: https://shovik.com/
Whenever I deploy a new version of the app using the standard k8s rolling update strategy, my app goes down for about a minute and DigitalOcean's load balancer responds with "503 Service Unavailable". This happens even though at least 2 pods are in the "Running" status at any given time.
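My Deployment relies on the default rolling-update settings and has no readiness probe, so I suspect pods are counted as "ready" before the app can actually serve traffic. A sketch of what I'm considering adding (the `/` health path and the timing values are guesses on my part, not something I've tested):

```yaml
# Fragment of the Deployment spec above (sketch, not yet applied).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod away before its replacement is ready
      maxSurge: 1         # bring up one extra pod during the rollout
  template:
    spec:
      containers:
        - name: shovik-com
          # Assumed: the app answers HTTP 200 on "/"; a dedicated
          # health endpoint would be better if the app has one.
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
```

Would something along these lines be enough, or does the DigitalOcean load balancer also need time to pick up endpoint changes?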
How can I implement a zero-downtime deployment using DigitalOcean’s k8s and load balancer? Should I put another NodePort service in front of the LoadBalancer?