Can I stop my kubeconfig file from expiring every 7 days?

January 4, 2019 1.8k views
Kubernetes

We have an application that makes pod deployments to the Kubernetes cluster, but it started failing a week later saying “Unauthorized”. I noticed it’s because the config file on DigitalOcean is being reset.
How can I work around this? We register the cluster to this application using the config file, but we can’t be changing it every week.

5 Answers
clenn January 15, 2019
Accepted Answer

You could set up a cron job that downloads a fresh kubeconfig file once a week through doctl,

with a command like this:

 doctl kubernetes cluster kubeconfig show INSERT_CLUSTER_ID > ~/.kube/YOURKUBECONFIGNAME.yaml
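
For example (a rough sketch only; the schedule, the doctl path, and the output path are assumptions you'd adjust), a crontab entry along these lines would refresh the file every Monday morning:

# refresh the kubeconfig every Monday at 06:00
# assumes doctl is installed at /usr/local/bin and already authenticated (doctl auth init)
0 6 * * 1 /usr/local/bin/doctl kubernetes cluster kubeconfig show INSERT_CLUSTER_ID > /home/YOURUSER/.kube/YOURKUBECONFIGNAME.yaml
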
  • Thanks for the doctl tip @clenn, I couldn’t find any info on the kubeconfig being available via the API. I consider this a stopgap and really hope DO comes up with a less burdensome solution. I don’t know of any other Kubernetes provider with that type of limitation.

  • This solved my problem. Thanks

Same question, looks like a showstopper for us - can’t use DO Kubernetes until this is resolved

So I had the same question. What I am going to try is to create a service account, assign it cluster-admin RBAC rights, and then update the kubeconfig to use this service account’s token. I’ll post again here if it all works.

So here is what I did: a simple script to grant the default service account the cluster-admin role. I am not guaranteeing this is secure or anything, so use at your own risk. Let’s see if markdown works for a code block here in the comments.

#!/bin/bash


NAMESPACE=default
SA_NAME=default
CRB_NAME=sa-admin
CONTEXT=digitalocean # not the default context name; I renamed it in my kubeconfig after downloading

cat > /tmp/clusterrolebinding.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ${CRB_NAME}
subjects:
  - kind: ServiceAccount
    name: ${SA_NAME}
    namespace: ${NAMESPACE}
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

# bind the service account to the built-in cluster-admin ClusterRole
kubectl create -f /tmp/clusterrolebinding.yaml

# find the token secret that was auto-created for the service account
SECRET_ID=$(kubectl get secrets --namespace $NAMESPACE | awk "/$SA_NAME/"'{print $1}')
# extract and decode the token (base64 -D is the macOS flag; use -d on Linux)
TOKEN=$(kubectl get secrets $SECRET_ID -n $NAMESPACE -o json | jq -r '.data.token' | base64 -D)

# add the token as a user credential and point the existing context at it
kubectl config set-credentials $SA_NAME --token=$TOKEN
kubectl config set-context ${CONTEXT} --user ${SA_NAME} --namespace ${NAMESPACE}
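
If it worked, the context now authenticates with the service account token rather than the weekly-rotated client certificate, so a quick sanity check (using the same context name as in the script above) is:

# should list nodes using the token credential, even after the original certificates expire
kubectl --context digitalocean get nodes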

Relevant: https://www.digitalocean.com/docs/kubernetes/overview/#known-issues

The certificate authority, client certificate, and client key data in the kubeconfig.yaml file are rotated weekly. If you run into errors like the server doesn’t have a resource type “<resource>”, Unauthorized, or Unknown resource type: nodes, try downloading a new cluster configuration file. The certificates will be valid for one week from the time of the download.
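
If you want to see exactly when the certificate in your current kubeconfig runs out, something like this should work (assuming openssl is installed, the cluster is the first user entry in the file, and you swap base64 -d for base64 -D on macOS):

kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate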

I talked to support about this, and it seems they have classified it as an actual issue their engineers are working on. Apparently this is also likely one of many blockers for the Kubernetes service getting out of LTD.
