Let’s say I want to programmatically save the kubeconfig of my Kubernetes cluster using doctl:

doctl -t "$DIGITALOCEAN_ACCESS_TOKEN" \
      kubernetes cluster kubeconfig save "$CLUSTER_NAME"

I have an access token at my disposal. However, running this command creates a new API key on my account, and I don’t understand why. Since I am already providing an API token, why should it create another one? This may result in an excessive number of tokens being created.

Is this normal? Can we force doctl to use the provided API token throughout the full process?

Cheers

1 answer

Is this normal?

Yes! You are seeing this due to a recent change in how DOKS Kubernetes clusters handle authentication. Rather than using certificates, they now use API tokens. This gives team administrators much finer-grained control over how their clusters can be accessed: rather than a shared certificate that cannot be revoked, a token tied to a specific kubeconfig file can be revoked without breaking access for other users.

You can read more about those changes here:

Can we force doctl to use the provided API token throughout the full process?

I’d love to know more about your use case. Are you calling this as part of a CI process? If you are able to reuse ~/.config/doctl/cache/exec-credential across CI runs, only one token will be generated and reused until it expires. When a token expires, it is automatically removed from your account.

To prevent the tokens from being generated altogether, you could craft a kubeconfig file that uses a normal DO API token:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA data>
    server: https://<cluster-id>.k8s.ondigitalocean.com
  name: do-region-cluster-name
contexts:
- context:
    cluster: do-region-cluster-name
    user: do-region-cluster-name-admin
  name: do-region-cluster-name
current-context: do-region-cluster-name
kind: Config
preferences: {}
users:
- name: do-region-cluster-name-admin
  user:
    token: <a DO api token>
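As a rough CI sketch of this approach, the file above could be rendered from environment variables at the start of a pipeline, so no per-run tokens are ever minted. The variable names and the do-cluster.yaml filename here are my own placeholders, not anything doctl defines:

```shell
#!/bin/sh
# Render a static-token kubeconfig from environment variables.
# CLUSTER_ID, CA_DATA, and DO_API_TOKEN are hypothetical placeholders;
# in CI they would come from your secret store.
: "${CLUSTER_ID:=example-id}"
: "${CA_DATA:=example-ca-data}"
: "${DO_API_TOKEN:=example-token}"

cat > do-cluster.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: https://${CLUSTER_ID}.k8s.ondigitalocean.com
  name: do-cluster
contexts:
- context:
    cluster: do-cluster
    user: do-admin
  name: do-cluster
current-context: do-cluster
users:
- name: do-admin
  user:
    token: ${DO_API_TOKEN}
EOF

# Against a live cluster, you would then run:
# kubectl --kubeconfig=do-cluster.yaml get nodes
```

Because the token in this file is a long-lived account API token, treat the rendered file itself as a secret.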
  • I had the same issue and questions, and came across this thread. Sorry if I’m hijacking a bit…

    To answer the above questions: yes, I am using this as part of CI with GitHub Actions. In particular, I’m using https://github.com/digitalocean/action-doctl to save the config for my Kubernetes cluster to a file, which then allows me to deploy new versions from a deploy CI workflow.

    I’ve cached the folder as you suggested, but this did not seem to work (i.e. it still created another token). I added:

        - name: Cache digitalocean access tokens from doctl k8s config
          uses: actions/cache@v1
          with:
            path: ~/.config/doctl/cache/exec-credential
            key: ${{ runner.os }}-digitalocean
    

    It could simply be that this path is wrong on github actions as I’m not sure what ~ would resolve to. That said, I do notice on my PC that I actually have two keys in that folder…

    Regardless of the caching issue, and whilst I understand the reasoning behind creating access tokens (for better control/security), I do have a few questions:

    1) If it’s going to auto-create tokens (and in my case it’s creating a LOT of tokens as it’s part of CI), it would be nice if I could specify some timeframe for which these are valid. In my case, it would be nice to auto-delete them once the CI workflow has finished (or e.g. have them expire and be deleted one hour after they’re created). Is this possible? You mention in your answer that “When a token expires, it will be automatically removed from your account.” When do they expire?

    2) Is it possible to provide some kind of name (or prefix to the name) of the access token? Looking at the API page on the DigitalOcean website, I can’t really tell which keys were generated from CI, and which ones were generated by me.

    3) If I’m to cache these, is that secure? I’m using what I believe to be a DigitalOcean managed github action to do this, and therefore would hope that it is. However I can’t find anything on how secure github actions caches are… what are the security implications if this folder was leaked?

    4) Potentially a bit of a side note here, but similar to the above, is it secure that the doctl does write the kubeconfig to a file? (As in the example usage of the doctl action).

    5) Another side note: I’d ideally like to be able to allow CI to have access only to update the images in the default namespace of my kubernetes config. I really don’t want it to be able to read any secrets etc. I’m struggling to find any DO resources on how to do this or if it’s possible… do you have any resources on this?

    Any tips much appreciated, and I’d be very happy to feedback more fully about my use-case, having just set up k8s with digital ocean.

    • Great questions! I’ll try to answer them but will also pass this on to the DOKS team directly as this is very helpful feedback.

      When do they expire?

      By default, the token expires after seven days.

      it would be nice if I could specify some timeframe for which these are valid.

      Setting a custom expiry time is actually supported in the API. We should get it exposed in doctl.

      https://developers.digitalocean.com/documentation/v2/#retrieve-credentials-for-a-kubernetes-cluster
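      As a sketch, a request using that endpoint’s expiry_seconds query parameter might look like the following. CLUSTER_ID is a placeholder, and the curl call itself is commented out since it needs a live cluster and a valid $DIGITALOCEAN_ACCESS_TOKEN:

```shell
# Build the credentials URL with a one-hour expiry.
# CLUSTER_ID is a hypothetical placeholder for your cluster's UUID.
CLUSTER_ID="your-cluster-id"
URL="https://api.digitalocean.com/v2/kubernetes/clusters/${CLUSTER_ID}/credentials?expiry_seconds=3600"
echo "$URL"

# Against a real cluster:
# curl -s -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" "$URL"
```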

      Is it possible to provide some kind of name (or prefix to the name) of the access token?

      Unfortunately, that is not possible at the moment.

      I can’t really tell which keys were generated from CI, and which ones were generated by me.

      The token names are in the format doks-${CLUSTER_UUID}-$(date -u +%Y%m%d%H%M%S). I know it’s not ideal, but that timestamp should help identify them.
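      To make that concrete, here is a small sketch that reproduces the naming scheme, so you can predict what the CI-generated tokens will look like in your account (CLUSTER_UUID is a placeholder for your cluster’s ID):

```shell
# Reconstruct the auto-generated DOKS token name format:
#   doks-${CLUSTER_UUID}-$(date -u +%Y%m%d%H%M%S)
# CLUSTER_UUID is a hypothetical placeholder.
CLUSTER_UUID="0b7f4b2e-example-uuid"
TOKEN_NAME="doks-${CLUSTER_UUID}-$(date -u +%Y%m%d%H%M%S)"
echo "$TOKEN_NAME"
```

Any token starting with doks- followed by your cluster’s UUID was generated this way rather than created by hand.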

      If I’m to cache these, is that secure?

      Using GitHub Actions, it is not secure. They explicitly warn against using actions/cache to cache secrets.

      Warning: We recommend that you don’t store any sensitive information in the cache of public repositories. For example, sensitive information can include access tokens or login credentials stored in a file in the cache path. Also, command line interface (CLI) programs like docker login can save access credentials in a configuration file. Anyone with read access can create a pull request on a repository and access the contents of the cache. Forks of a repository can also create pull requests on the base branch and access caches on the base branch.

      https://help.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#about-caching-workflow-dependencies

      is it secure that the doctl does write the kubeconfig to a file? (As in the example usage of the doctl action).

      When it comes to GitHub Actions:

      secrets are not passed to the runner when a workflow is triggered from a forked repository.

      https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#using-encrypted-secrets-in-a-workflow

      Since a run triggered by a PR will not have access to the $DIGITALOCEAN_ACCESS_TOKEN secret of the base repository, it will not be able to retrieve your kubeconfig file. But if you are working on a public repository, I would encourage caution when reviewing PRs that touch a workflow file. Merging a PR that does something malicious, like printing the kubeconfig file’s contents, could be an attack vector.

      I’d ideally like to be able to allow CI to have access only to update the images in the default namespace of my kubernetes config. I really don’t want it to be able to read any secrets etc. I’m struggling to find any DO resources on how to do this or if it’s possible… do you have any resources on this?

      Kubernetes role-based access control (RBAC) is enabled by default on DOKS, but the kubeconfig file that is generated with doctl is for an admin user. Steps 1 & 2 of this tutorial should cover creating a user with more limited permissions:

      https://www.digitalocean.com/community/tutorials/recommended-steps-to-secure-a-digitalocean-kubernetes-cluster

      by Damaso Sanoja
      In this guide, you will take basic steps to secure your DigitalOcean Kubernetes cluster. You will configure secure local authentication with TLS/SSL certificates, grant permissions to local users with Role-based access controls (RBAC), grant permissions to Kubernetes applications and deployments with service accounts, and set up resource limits with the ResourceQuota and LimitRange admission controllers.
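      As a starting point, a namespace-scoped setup along the lines the tutorial describes might look like this. All names here are hypothetical, not taken from the tutorial, and you may need to adjust the verbs and resources for your workflow:

```yaml
# Hypothetical Role limiting a CI ServiceAccount to updating Deployments
# in the "default" namespace only -- no access to Secrets or other namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: default
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: default
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```

CI would then authenticate with a kubeconfig built around that ServiceAccount’s token instead of the admin credentials doctl generates.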