kubectl with managed certificate not working

June 13, 2019
DigitalOcean Kubernetes

I've created a Kubernetes cluster and followed the instructions to install doctl and authenticate it. I already had kubectl 1.14.x installed.

doctl is working fine, but following the instructions to use automated certificate management for kubectl isn't working. kubectl complains that it can't initialize the api client:

~/s/doctl $ kubectl get nodes
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Unable to connect to the server: getting credentials: exec: exit status 1
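(For background: this error comes from kubectl's exec credential plugin. The kubeconfig entry that doctl writes tells kubectl to shell out to doctl itself to fetch a token, so when the snap-confined doctl can't find its saved access token, kubectl surfaces the error above. The entry looks roughly like this sketch; the exact arguments and cluster ID vary by doctl version and cluster:)

```yaml
users:
- name: do-lon1-azathoth-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: doctl
      # Illustrative args; doctl generates these when saving the kubeconfig.
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - <cluster-id>
```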

doctl is installed via snap as per the instructions:

~ $ doctl version
doctl version 1.20.0-dev
release 1.20.0 is available, check it out! 

kubectl version is 1.14.3:

~ $ kubectl version
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: getting credentials: exec: exit status 1

Running doctl auth init does this:

~ $ doctl auth init
Using token [7aee1cbdddb5a0161f05fea33f08a167a66972096c6202ad381410f6c73bb97e]

Validating token... OK

And doctl itself is working fine, and the cluster's context has been added to my kubeconfig:

~ $ kubectl config get-contexts
CURRENT   NAME               CLUSTER            AUTHINFO                 NAMESPACE
*         do-lon1-azathoth   do-lon1-azathoth   do-lon1-azathoth-admin   

EDIT Just want to add: if I download the kubeconfig from the control panel it works fine, but obviously I'd rather have the managed authentication set up:

~ $ kubectl --kubeconfig=./Downloads/azathoth-kubeconfig.yaml get nodes
NAME                  STATUS   ROLES    AGE   VERSION
pool-f9sj2mv1p-x1hz   Ready    <none>   38h   v1.14.1
4 Answers
asb MOD June 13, 2019
Accepted Answer

Hi @n3dst4! We're currently working to resolve a number of issues with doctl's handling of kubeconfig files when installed as a Snap package. Due to the way Snap packages confine applications, the kubeconfig file is currently not saved to the correct location. You can follow the issue here:

https://github.com/digitalocean/doctl/issues/457

If you need to work with both kubectl and doctl, we currently advise installing doctl manually.

That said, you should be able to work around the issue if you'd like to continue using the Snap package. Re-running doctl k8s cluster config save <cluster> will print out the location where the kubeconfig file is being saved. For example:

$ doctl k8s cluster config save example-01
Notice: adding cluster credentials to kubeconfig file found in "/home/asb/snap/doctl/64/.kube/config"
Notice: setting current-context to do-nyc1-example-01

You can then point kubectl to it directly:

kubectl --kubeconfig=${HOME}/snap/doctl/64/.kube/config get nodes
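If you'd rather not pass --kubeconfig on every invocation, a small shell sketch can export KUBECONFIG instead. Note the snap revision directory ("64" above) changes between snap updates, so the path is globbed here rather than hard-coded:

```shell
#!/bin/sh
# Find the kubeconfig written by the snap-confined doctl, if any.
# The revision directory (e.g. "64") varies per machine and per update.
snap_kubeconfig=$(ls -d "$HOME"/snap/doctl/*/.kube/config 2>/dev/null | head -n 1)

# Fall back to the conventional location when no snap copy exists.
export KUBECONFIG="${snap_kubeconfig:-$HOME/.kube/config}"
echo "using kubeconfig: $KUBECONFIG"
```

Putting the export in your shell profile makes it persistent, though as noted above, installing doctl manually avoids the confinement issue altogether.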

Thanks for the reply, asb. Interestingly, when I try it with --kubeconfig I still get the

Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Unable to connect to the server: getting credentials: exec: exit status 1

message. I'll try downloading a doctl binary and report back.

(Shout out to Hilary for updating doctl's README with a note about snap/kubectl :) )

Using a binary build of doctl works great. Thanks for your time @asb !
