This is the third time our Kubernetes cluster has become unreachable via kubectl. Sometimes I cannot connect to the cluster using the Kubernetes CLI and get one of the following errors:

Unable to connect to the server: dial tcp x.x.x.x:443: i/o timeout

Or

Unable to connect to the server: net/http: TLS handshake timeout

The indicator next to the k8s logo turns yellow, whereas it is green when everything is fine. When I try to add extra nodes, the operation gets stuck in a “loading” state with no progress. We rely on your servers and clusters, but they let us down every week. We host production environments and our monitoring infrastructure on your servers.
I cannot find any information about this issue online.
What are we doing wrong? How can we avoid these problems in the future?
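In case it helps anyone reproduce this, these are roughly the commands I use to check connectivity (a sketch; adjust the kubeconfig/context for your own cluster):

```shell
# Fail fast instead of hanging indefinitely on a slow control plane.
kubectl cluster-info --request-timeout=10s

# Verbose client logging shows whether the TCP dial or the TLS
# handshake is the step that stalls.
kubectl get nodes -v=7 --request-timeout=10s
```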

2 comments
  • Did anyone find a solution to this? I am having the exact issue with one of my clusters today.

  • I’m having this issue as well. Not with anything crazy or particularly API heavy. I can’t imagine what would be causing these kinds of problems. I’ve enabled auto-scaling on my pool and all its running is a proxy and monitoring.


9 answers

The same problem.
The cluster has not been accessible for the last 2 hours.
Apps running on the cluster are not accessible either.

Managed Kubernetes in DO is not production ready at all :(

Hi there!

This can occur if you have API-heavy workloads deployed that put strain on the master node. If you want, you can open a support ticket and I can dig in to see what’s occurring on your cluster’s control plane.
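If the control plane is still intermittently reachable, one rough way to gauge API pressure is to read the apiserver’s own request counters. This is a sketch, not an official diagnostic: the metric name is `apiserver_request_total` on recent Kubernetes releases (older versions use `apiserver_request_count`), and it assumes your user is permitted to read `/metrics`:

```shell
# Dump per-verb/resource request counters from the apiserver's
# /metrics endpoint; a short timeout avoids hanging if it's down.
kubectl get --raw /metrics --request-timeout=10s \
  | grep '^apiserver_request' | head -20
```

Unusually large counts for a single resource or verb can point at a controller or workload hammering the API.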

Regards,

John Kwiatkoski
Senior Developer Support Engineer - Kubernetes

Did you find a solution to this? I’ve had this issue twice today, and droplets disappearing from the load balancer as well. Popped in a ticket, and have had no reply.

Feels like DigitalOcean’s Kubernetes solution is not quite production-ready.

Same here.

The cluster has been inaccessible for 16 hours now (sometimes it works, but you have to try about 50 times to get a single successful connection).
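As a stopgap (not a fix), I wrap the call in a small retry loop. A sketch in plain POSIX shell, with the attempt count and delay as parameters:

```shell
# retry MAX DELAY CMD...: run CMD until it succeeds, at most MAX times,
# sleeping DELAY seconds between attempts. Returns 1 if every attempt fails.
retry() {
  max="$1"; delay="$2"; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    sleep "$delay"
  done
}

# Intended use against the flaky cluster (hypothetical invocation):
#   retry 50 5 kubectl get nodes --request-timeout=10s
```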

Support tickets are simply ignored or answered with “everything seems to work”.

Sorry, but managed Kubernetes on DO seems to be a toy, and so does the support.

Same problem. Tasks like helm upgrade for a simple Prometheus installation cause the DO k8s cluster to go yellow in the DO panel, and kubectl commands get no response. There was no heavy use of the k8s API.

It seems to me that DO k8s is not production ready yet.

The same issue: clusters flip from ready to unready, and there is no way to debug it, not even a shell in the browser. Total invisibility into what is going on.

Completely agree. Managed Kubernetes in DO is not production ready at all :(
I have the same problem.
Fortunately, I was only testing it in a test environment.

This issue still exists, looks like I won’t be able to use the Kubernetes offering for the time being.

Happened twice already today when installing the Prometheus operator on a cluster with more than enough capacity. Had to create a new cluster both times as it was completely stuck: no kubectl connectivity, no Kubernetes dashboard, and adding more nodes / replacing existing ones didn’t work. It was simply unusable afterwards.

Really hope this gets fixed :)

As of today this is still happening: installing the prometheus-operator chart makes the cluster lose connectivity to the point where no kubectl command works.

Even a single kubectl apply -f for one resource triggered a loss of connectivity to the cluster for more than 2 hours.

And adding nodes to the current pool took more than 3 hours to complete.

I hope we get more transparency into what is happening with our clusters.
