How to get Dashboard and Helm working in a DO K8s cluster?

Posted October 3, 2018 12.4k views

I’ve created a new k8s cluster with DigitalOcean, and there are a few things I cannot figure out how to solve:

  1. kubectl shows that the Dashboard pod is deployed and its service is ready to serve. With kubectl proxy I tried to load the dashboard in the browser and got:

    Error: 'dial tcp i/o timeout'
    Trying to reach: ''
  2. With helm init I installed Helm into the cluster, and after that I can see that the tiller-deploy pod is running. But with helm version I get this error:

    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp connect: no route to host

    Same error with helm install stable/nginx-ingress

It looks like in both cases I cannot connect to the cluster’s private network IPs. Any ideas how to solve this? Thanks in advance.
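For reference, here are the quick checks I’ve been running to narrow it down (a sketch, nothing DO-specific; `<node-name>` is a placeholder):

```shell
# Quick connectivity triage; assumes kubectl is already pointed at the cluster.
kubectl cluster-info                  # can we reach the API server at all?
kubectl get nodes -o wide             # do nodes report an InternalIP on the private network?
kubectl get pods -n kube-system       # are the dashboard / tiller pods actually Running?
kubectl describe node <node-name>     # look for NotReady conditions or network errors
```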

  • Same here. There is also a very similar problem with logs:

    > kubectl get po
    NAME                                  READY     STATUS    RESTARTS   AGE
    kubernetes-bootcamp-69bf88c8c-j8mlt   1/1       Running   0          18m

    Trying to get the pod logs:

    > kubectl logs kubernetes-bootcamp-69bf88c8c-j8mlt
    Error from server: Get dial tcp connect: no route to host
  • I’ve got the exact same issue with the dashboard & the logs.


7 answers

Hey there,

It looks like you (and the others reporting the issue in the comments) may have run into an issue with the Kubernetes master node not being properly assigned to your private network.

We’re working to resolve this now.

At the moment we’re working to resolve an issue with Kubernetes clusters in regions that are not NYC1 & FRA1.

As an immediate resolution, you can redeploy your cluster to one of those two regions while we fix the underlying problem.

We’ve resolved the issue that was impacting Kubernetes clusters in regions other than NYC1 and FRA1; everything should be behaving as expected. If you’re still experiencing any issues, please feel free to open up a support ticket so that we can investigate!

It doesn’t work so far; in kubectl proxy I see:

I1003 19:15:27.825295   47342 logs.go:41] http: proxy error: context canceled
I1003 19:15:33.511797   47342 logs.go:41] http: proxy error: context canceled
I1003 19:15:49.507189   47342 logs.go:41] http: proxy error: context canceled
I1003 19:16:19.680289   47342 logs.go:41] http: proxy error: dial tcp i/o timeout
I1003 19:16:58.140826   47342 logs.go:41] http: proxy error: dial tcp i/o timeout
I1003 19:17:14.335255   47342 logs.go:41] http: proxy error: dial tcp i/o timeout
I1003 19:17:52.565512   47342 logs.go:41] http: proxy error: dial tcp i/o timeout
I1003 19:18:07.954353   47342 logs.go:41] http: proxy error: dial tcp i/o timeout

and no dashboard in browser. I’ll create new cluster and try again.
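In case anyone else is double-checking the URL as well: with kubectl proxy running, the dashboard is usually served under a path like the one below (assuming it is deployed in kube-system under the standard service name; adjust if yours differs):

```shell
# Start a local proxy to the API server (listens on port 8001 by default).
kubectl proxy &

# The dashboard is then typically reachable in the browser at:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```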

  • With the new cluster’s dashboard I cannot sign in with the kubeconfig file that I downloaded from the admin area:

    Not enough data to create auth info structure.
    • I’m getting this, too. Anyone else able to get authenticated?

      • I ran into this problem as well today and found a solution. I’m very new to Kubernetes, so I’m not sure if what I’m saying is correct or not, but hey, it works! I found the answer here:

        After creating my DO cluster, I had to create a ServiceAccount and a ClusterRoleBinding. This is the config I used; I chose the username from the config file I downloaded from DO.


        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: your-username
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: your-username
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: ServiceAccount
          name: your-username
          namespace: kube-system

        Apply the config with:

        kubectl --kubeconfig="digitalocean-kubeconfig.yaml" apply -f admin.yml

        Then you need to extract the token for the newly created service account.

        kubectl --kubeconfig="digitalocean-kubeconfig.yaml" get secret -n kube-system
        # note the secret name for the service account you just created
        kubectl --kubeconfig="digitalocean-kubeconfig.yaml" describe secret your-username-token-12345 -n kube-system

        The output of the last command includes a token. Copy that token into the Dashboard login screen (choose the token option) and you should be able to log in.
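        If you prefer, the two commands above can be collapsed into one pipeline that prints just the decoded token (a sketch; the kubeconfig path and `your-username` are the same placeholders as above):

        ```shell
        # Look up the secret that backs the service account, then decode its token.
        SECRET=$(kubectl --kubeconfig="digitalocean-kubeconfig.yaml" -n kube-system \
          get serviceaccount your-username -o jsonpath='{.secrets[0].name}')
        kubectl --kubeconfig="digitalocean-kubeconfig.yaml" -n kube-system \
          get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
        ```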

        Alternatively, if you would like to log in to the dashboard with the config file, you can copy that token to the bottom of your DigitalOcean config file, so that the bottom of the file looks something like this:

        - name: your-username
          user:
            client-certificate-data: [...]
            client-key-data: [...]
            token: [TheTokenYouJustCopied]

        Hope that helps!

Just discovered: deleting a k8s cluster will not delete the load balancers that were created when type=LoadBalancer services were deployed. I don’t know if it’s a bug or a feature :) digitalocean-cloud-controller-manager usually deletes the load balancer right after you delete the deployed service.

  • At the moment this is intended behavior, as we’re trying to prevent users from unintentionally deleting resources that were tied to a K8s cluster, especially in cases where deleting that resource may cause a separate outage or data loss (such as an LB that still has an active DNS record pointing to something not in the cluster, or a block storage volume holding data you didn’t mean to delete).

    We want to ensure the destruction process is as explicit as possible, so you should either delete the LBaaS (or volume) via kubectl prior to deleting the cluster, or manually delete it in the cloud control panel.

    This is a workflow we’re still fleshing out and working to improve, so we welcome any feedback on how that feels, as a user. :)
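    For example, before tearing down the cluster you could run something like this (a sketch; `my-app` is a placeholder service name):

    ```shell
    # Find services that provisioned a DO load balancer...
    kubectl get svc --all-namespaces | grep LoadBalancer

    # ...and delete them so the cloud-controller-manager removes the LBs,
    # then delete the cluster itself.
    kubectl delete svc my-app
    ```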

Had the same kind of issue twice tonight while deploying Traefik in a Kubernetes cluster in NYC.