Question
Kubernetes cluster not working, status shows down
I have followed https://www.digitalocean.com/community/tutorials/how-to-automate-deployments-to-digitalocean-kubernetes-with-circleci and https://www.digitalocean.com/docs/kubernetes/how-to/add-load-balancers/ to set up CI/CD and a load balancer configuration. However, my cluster is clearly not working: its status shows as down. Could someone enlighten me on what I might be doing wrong? Below is some output from the services and pods.
$ kubectl get services
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
do-kubernetes-sample-app   ClusterIP      10.245.218.135   <none>            80/TCP         10d
kubernetes                 ClusterIP      10.245.0.1       <none>            443/TCP        11d
sample-load-balancer       LoadBalancer   10.245.8.9       XXX.XXX.XXX.XXX   80:32454/TCP   38h
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS             RESTARTS   AGE
default       do-kubernetes-sample-app-6fbf58f5bb-hfhpw   1/1     Running            0          19h
default       do-kubernetes-sample-app-79665478bf-b8dm5   0/1     InvalidImageName   0          9h
kube-system   cilium-operator-6b899cc7db-946pr            1/1     Running            19         10d
kube-system   cilium-vtvw8                                1/1     Running            6          10d
kube-system   coredns-78dc9d6fc7-8lx2p                    1/1     Running            2          10d
kube-system   coredns-78dc9d6fc7-zz6jk                    1/1     Running            2          10d
kube-system   csi-do-node-6cx7t                           2/2     Running            2          10d
kube-system   do-node-agent-zmffw                         1/1     Running            1          10d
kube-system   kube-proxy-j8j9s                            1/1     Running            1          10d
$ kubectl get pods --field-selector=status.phase=Running
NAME                                        READY   STATUS    RESTARTS   AGE
do-kubernetes-sample-app-6fbf58f5bb-hfhpw   1/1     Running   0          19h
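One detail worth chasing in the output above is the pod stuck in `InvalidImageName`. That status usually means the Deployment references a malformed image string (for example, a CI variable that expanded to an empty tag or an uppercase repository name). A hedged sketch of commands to inspect it, using the pod and deployment names from the output above:

```shell
# The Events section at the bottom of describe shows the exact
# image reference kubelet rejected and why.
kubectl describe pod do-kubernetes-sample-app-79665478bf-b8dm5

# Print the image reference the Deployment is actually using, to
# compare against what the CircleCI pipeline was supposed to push.
kubectl get deployment do-kubernetes-sample-app \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

If the image string looks wrong, the fix belongs in the CI config rather than in the cluster.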
Hi there!
When you say it shows as down, are you referring to the application itself, the LB, or the cluster/nodes?
Can you provide error logs or open up a support case so I can take a deeper look?
Regards,
John Kwiatkoski
Senior Developer Support Engineer
I’m getting a similar issue. My load balancer says that my nodes have a status of down.
I’m getting the same error too.
Same here. Load balancer says all the nodes are down. kubectl works fine, and all the pods are running as usual. Our website just can’t be reached from the outside anymore.
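The symptom described here (kubectl and pods fine, but the load balancer marks nodes as down) is often caused by the Service's traffic policy rather than the cluster itself: with `externalTrafficPolicy: Local`, the load balancer's health checks only pass on nodes that actually run a pod backing the Service, so every other node is reported as down. This is an assumption about the threads above, not something confirmed in them. A minimal Service sketch showing the relevant field (the selector label is hypothetical; name and port mirror the question's output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
spec:
  type: LoadBalancer
  # "Local" preserves client source IPs but fails health checks on
  # nodes without a matching pod; "Cluster" (the default) keeps every
  # node passing health checks at the cost of an extra hop.
  externalTrafficPolicy: Cluster
  selector:
    app: do-kubernetes-sample-app   # assumed label, not shown in the thread
  ports:
    - port: 80
      targetPort: 80
```

Checking the live value with `kubectl get service sample-load-balancer -o jsonpath='{.spec.externalTrafficPolicy}'` would confirm or rule this out.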
Any updates on how to fix this? I have the same issue.