I’ve created a Kubernetes cluster and followed the instructions to install doctl and authenticate it. I already had kubectl 1.14.x installed. doctl is working fine, but following the instructions to use automated certificate management for kubectl isn’t working. kubectl complains that it can’t initialize the API client:
~/s/doctl $ kubectl get nodes
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Unable to connect to the server: getting credentials: exec: exit status 1
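If I understand the managed setup right, kubectl is shelling out to doctl through an exec credential plugin in the kubeconfig, which would explain why a doctl token error surfaces from a kubectl command. One way to see which helper kubectl is configured to run (just a sketch — the user name is taken from my context below, and the jsonpath filter is illustrative):

```
# Show the credential plugin command kubectl runs for this user entry
kubectl config view -o jsonpath='{.users[?(@.name=="do-lon1-azathoth-admin")].user.exec.command}'
```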
doctl is installed via snap as per the instructions:
~ $ doctl version
doctl version 1.20.0-dev
release 1.20.0 is available, check it out!
kubectl version is 1.14.3:
~ $ kubectl version
Error: unable to initialize DigitalOcean api client: access token is required. (hint: run 'doctl auth init')
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: getting credentials: exec: exit status 1
Running doctl auth init does this:
~ $ doctl auth init
Using token [7aee1cbdddb5a0161f05fea33f08a167a66972096c6202ad381410f6c73bb97e]
Validating token... OK
And doctl itself is working fine and can see the cluster:
~ $ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* do-lon1-azathoth do-lon1-azathoth do-lon1-azathoth-admin
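For what it’s worth, doctl on its own can also be sanity-checked directly (sketch only, output omitted):

```
# Confirm the token works and that doctl can see the cluster
doctl account get
doctl kubernetes cluster list
```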
EDIT: Just want to add, if I download the kubeconfig from the control panel it works fine, but obviously I’d rather have the managed authentication set up:
~ $ kubectl --kubeconfig=./Downloads/azathoth-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
pool-f9sj2mv1p-x1hz Ready <none> 38h v1.14.1
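As a stopgap, the downloaded file can also be exported for the whole shell session instead of passing --kubeconfig on every call (sketch, using the same path as above):

```
export KUBECONFIG=~/Downloads/azathoth-kubeconfig.yaml
kubectl get nodes
```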
Hi @n3dst4! We’re currently working to resolve a number of issues with doctl’s handling of kubeconfig files when installed as a Snap package. Due to the way Snap packages confine applications, the kubeconfig file is currently not saved to the correct location. You can follow the issue here:
https://github.com/digitalocean/doctl/issues/457
If you need to work with both kubectl and doctl, we currently advise installing doctl manually.
That said, you should be able to work around the issue if you’d like to continue using the Snap package. Re-running doctl k8s cluster config save <cluster> will print out the location where the kubeconfig file is being saved, and you can then point kubectl at that file directly.
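For example — the exact location is whatever the command above prints, and the path below is only illustrative of where Snap confinement tends to place files:

```
# Illustrative path only; substitute the location reported by 'config save'
kubectl --kubeconfig ~/snap/doctl/current/.kube/config get nodes
```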
Using a binary build of doctl works great. Thanks for your time @asb!
(Shout out to Hilary for updating doctl’s README with a note about snap/kubectl :) )