Question

How do I get the Dashboard and Helm to work in a DO K8s cluster?

Hello. I’ve created a new k8s cluster with DigitalOcean, and there are a few things I cannot figure out how to solve:

  1. kubectl shows that the Dashboard pod is deployed and its service is ready to serve. With kubectl proxy I tried to load the dashboard in a browser and got:
Error: 'dial tcp 10.244.5.3:8443: i/o timeout'
Trying to reach: 'https://10.244.5.3:8443/'
  2. With helm init I tried to install Helm into the cluster, and after that I can see that the tiller-deploy pod is running. But helm version gives me an error:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.131.80.172:10250: connect: no route to host

I get the same error with helm install stable/nginx-ingress.

It looks like in both cases I cannot connect to the cluster’s private network IPs. The exact commands I’m running are sketched below. Any ideas on how to solve this? Thanks in advance.
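For reference, this is roughly what I’m running. The proxy URL assumes the dashboard lives in kube-system under the standard service name, so adjust it if yours differs:

# start a local proxy to the API server, then open the dashboard URL in a browser
kubectl proxy
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# install Tiller into the cluster and check client/server versions
helm init
helm version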


Accepted Answer

Hey there,

It looks like you (and the others reporting the issue in the comments) may have run into an issue where the Kubernetes master node is not properly assigned to your private network.

We’re working to resolve this now.
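In the meantime, one quick way to check whether a node is affected is to compare the kubelet address that Helm and the proxy are failing to reach (port 10250) with the addresses the node actually reports, for example (the node name below is a placeholder):

kubectl get nodes -o wide
kubectl describe node <node-name> | grep -A5 Addresses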

I had the same kind of issue twice tonight, deploying Traefik in a Kubernetes cluster in NYC.

Just discovered: deleting a k8s cluster will not delete the load balancers that were created when type=LoadBalancer services were deployed. I don’t know if it’s a bug or a feature :) digitalocean-cloud-controller-manager usually deletes the load balancer right after you delete the deployed service.
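If you’re left with an orphaned load balancer, it can be removed from the control panel or with doctl, something like the following (the load balancer ID is a placeholder):

doctl compute load-balancer list
doctl compute load-balancer delete <load-balancer-id>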

It doesn’t work so far; in the kubectl proxy output I see:

I1003 19:15:27.825295   47342 logs.go:41] http: proxy error: context canceled
I1003 19:15:33.511797   47342 logs.go:41] http: proxy error: context canceled
I1003 19:15:49.507189   47342 logs.go:41] http: proxy error: context canceled
I1003 19:16:19.680289   47342 logs.go:41] http: proxy error: dial tcp 192.168.99.100:8443: i/o timeout
I1003 19:16:58.140826   47342 logs.go:41] http: proxy error: dial tcp 192.168.99.100:8443: i/o timeout
I1003 19:17:14.335255   47342 logs.go:41] http: proxy error: dial tcp 192.168.99.100:8443: i/o timeout
I1003 19:17:52.565512   47342 logs.go:41] http: proxy error: dial tcp 192.168.99.100:8443: i/o timeout
I1003 19:18:07.954353   47342 logs.go:41] http: proxy error: dial tcp 192.168.99.100:8443: i/o timeout

and no dashboard in the browser. I’ll create a new cluster and try again.
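One thing I want to rule out before that: 192.168.99.100 is the address Minikube typically uses for its API server, so I’ll double-check which context kubectl is actually pointed at (the DO context name below is just a placeholder):

kubectl config current-context
kubectl config get-contexts
kubectl config use-context <your-do-cluster-context>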