I have an Ubuntu 18.04 Linux box that starts a droplet using images stored in a DO registry. I am creating the droplet, updating it, then copying all my assets into it (including my docker-compose.yaml and everything needed to start the app). My current end goal is to see my API at an address/port. I can’t pull my images out of the registry (yet) without installing doctl, authorizing it (which includes pasting my API key at a prompt), and then logging into the machine; after that I can run my docker-compose up and everything goes as planned. I want to automate the entire process. My root image is Alpine, so I would rather not install bash, but if it helps this problem go away I will LOL. Any advice is appreciated!!

1 answer

There are a few ways that you can accomplish this using environment variables:

1) If you pass the --access-token flag when configuring auth, the command will run without needing user input:

doctl auth init --access-token $DO_TOKEN

Note that this will print the token to STDOUT, so if you are logging the script, you might want to redirect that output so your token cannot be viewed in the logs:

doctl auth init --access-token $DO_TOKEN > /dev/null

2) You could also write a configuration file containing the token manually:

echo "access-token: $DO_TOKEN" > ~/.config/doctl/config.yaml
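Expanding on that one-liner, here is a minimal sketch of the config-file route, assuming DO_TOKEN is already exported in your shell. It also creates doctl's default config directory first and restricts the file's permissions, since the file holds a live API token:

```shell
# Assumes DO_TOKEN is exported; ~/.config/doctl is doctl's default
# config location on Linux.
mkdir -p ~/.config/doctl
echo "access-token: $DO_TOKEN" > ~/.config/doctl/config.yaml
# Keep the token out of world-readable files.
chmod 600 ~/.config/doctl/config.yaml
```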

3) The --access-token flag works with all doctl commands, so you could simply pass it every time you use doctl:

doctl compute droplet create \
  --image ubuntu-20-04-x64 \
  --size s-1vcpu-1gb \
  --region nyc1 \
  --access-token $DO_TOKEN
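If you go the always-pass-the-flag route, a small shell function saves retyping it. This is just a sketch: the function shadows the doctl binary and uses `command` to call through to the real executable, with DO_TOKEN assumed to be exported:

```shell
# Wrapper so every doctl call carries the token automatically.
# `command doctl` bypasses this function and runs the real binary.
doctl() {
  command doctl --access-token "$DO_TOKEN" "$@"
}
```

Then `doctl compute droplet list` (or any other subcommand) works without the flag.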
  • How do I store that locally so it’s available for the docker-machine run, and later for use inside the droplet for doctl?
    Just a simple env folder that’s copied into the droplet?

  • #!/usr/bin/env bash
    #test script to clean
    set -e
    docker-machine create --digitalocean-size "s-2vcpu-4gb" --digitalocean-image "ubuntu-18-04-x64" --driver digitalocean --digitalocean-access-token $DO_TOKEN --engine-install-url "" laminar

    gets me a complaint about the hostname: “Invalid command line. Found extra arguments [laminar]”. Urgh, it works when I have the token in there instead of the variable

    • The commands above assume that the token was stored in an environment variable. So you would first run:

      export DO_TOKEN=mysecretapitoken

      I think I understand your use case better now. Unfortunately, I’m not a regular docker-machine user, so there is probably a better way to accomplish this, but this should work.
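      By the way, the “Found extra arguments [laminar]” error above is exactly what you would expect if DO_TOKEN is unset in the shell running the script: an unquoted, empty $DO_TOKEN expands to nothing, so --digitalocean-access-token swallows the next word on the line and the machine name is left over as an extra argument. A quick pure-shell demo of the effect (show_args is a hypothetical helper, not part of docker-machine):

```shell
# Count the arguments a command actually receives.
show_args() { printf '%s\n' "$#"; }

unset DO_TOKEN
show_args --token $DO_TOKEN laminar    # prints 2: the empty token vanished
show_args --token "$DO_TOKEN" laminar  # prints 3: quoting keeps the empty string
```

      Quoting the variable as "$DO_TOKEN", and adding `set -u` next to the script's existing `set -e`, makes this kind of failure loud instead of confusing.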

      In order to authorize a Docker host created with docker-machine to pull images from a private DigitalOcean container registry, you can include a “user data” script when you create the machine to install doctl there, and then use docker-machine ssh to run the doctl login command.

      # Create the Docker host
      docker-machine create \
        --driver digitalocean \
        --digitalocean-access-token $DO_TOKEN \
        --digitalocean-size "s-2vcpu-4gb" \
        --digitalocean-image "ubuntu-18-04-x64" \
        --engine-install-url "" \
        --digitalocean-userdata ~/examples/ \
        example-droplet

      # Authenticate with the container registry
      docker-machine ssh example-droplet \
        doctl registry login --access-token $DO_TOKEN

      The contents of the script located at ~/examples/ would look like:

      # Download the doctl release tarball (URL missing from the original answer;
      # substitute the linux-amd64 tarball link from doctl's releases page)
      wget -O /tmp/doctl.tar.gz <doctl-release-tarball-url>
      tar xf /tmp/doctl.tar.gz -C /tmp/
      mv /tmp/doctl /usr/local/bin
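      Once the registry login succeeds, the remaining manual steps from the original question (copying the assets in and running compose) can also go through docker-machine, so the whole flow is scriptable. A hedged sketch, assuming the machine is named example-droplet and your docker-compose.yaml and assets live in a local ./app directory:

```shell
# Copy the compose assets onto the droplet, then start the stack remotely.
docker-machine scp -r ./app example-droplet:/root/app
docker-machine ssh example-droplet "cd /root/app && docker-compose up -d"
```

      After that, your API should be reachable at the droplet's IP (docker-machine ip example-droplet) on whatever port your compose file publishes.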