How to use cloud-init to mount block storage that's already formatted and ready to mount?

February 15, 2017 6.7k views
System Tools Block Storage

I’m using Ubuntu 16.04 on a Droplet created with Terraform. I have an existing volume that’s already been formatted, which I would like to mount at /home so I can persist my user directory across Terraform applies.

Unfortunately, while /etc/cloud/cloud.cfg lists mounts in its cloud_init_modules, no entry is ever written to /etc/fstab.

This is in my user-data:

  #cloud-config
  mounts:
    - - '/dev/disk/by-id/scsi-0DO_Volume_volume-name-here-part1'
      - '/home'
      - 'ext4'
      - 'defaults,nofail,discard'
      - '0'
      - '2'
  packages:
    - zsh
    - git
    - ufw
  users:
    - name: demo
      groups: sudo
      shell: /bin/zsh
      sudo: ['ALL=(ALL) NOPASSWD:ALL']
      ssh-authorized-keys:
        - 'ssh-rsa <snip>'
  runcmd:
    # Secure SSHD
    - [ sed, -i, -e, 's/^PermitRootLogin yes/PermitRootLogin no/', '/etc/ssh/sshd_config' ]
    - [ service, sshd, restart ]
    - [ rm, -f, /root/.ssh/authorized_keys ]
    # Secure UFW
    - ufw default deny incoming
    - ufw default allow outgoing
    - ufw allow ssh
    - ufw enable

If I run this command as root, cloud-init -d single -n mounts, the entry is written to /etc/fstab and /home is mounted. Then I need to run mkhomedir_helper demo to recreate my home directory.

How can I get the mounts module to run automatically? The cloud-config docs are…less than ideal and don’t really explain anything in the examples except how to structure the mounts hash.

1 Answer
asb MOD February 16, 2017
Accepted Answer

The cloud-config docs are indeed less than ideal… I took some time to investigate this and discussed it with one of our engineers. It looks like, due to changes in the version shipping with Ubuntu 16.04, the mounts module is not currently run on our platform; this will require some changes to the vendor-data we provide.

I can confirm that this does work as expected on Ubuntu 14.04:

 mounts:
   - [ /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01, /mnt/volume-nyc1-01, "ext4", "defaults,nofail,discard", "0", "0" ]

As a workaround on Ubuntu 16.04, you can translate this into commands run by runcmd like so:

 runcmd:
   - mkdir -p /mnt/volume-nyc1-01
   - mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 /mnt/volume-nyc1-01
   - echo '/dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 /mnt/volume-nyc1-01 ext4 defaults,nofail,discard 0 0' | tee -a /etc/fstab
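If the Droplet may be rebuilt against the same volume, the fstab append can be guarded so repeated runs don’t duplicate the entry. A minimal sketch (the helper name and argument order are my own, not part of cloud-init):

```shell
# append_fstab_entry: append an fstab line only if the device is not
# already listed. Arguments: device mountpoint fstype options [fstab-file]
append_fstab_entry() {
  dev="$1"; mnt="$2"; fs="$3"; opts="$4"; fstab="${5:-/etc/fstab}"
  # -s silences errors if the file doesn't exist yet; -F matches literally
  grep -qsF "$dev" "$fstab" || \
    printf '%s %s %s %s 0 0\n' "$dev" "$mnt" "$fs" "$opts" >> "$fstab"
}

# In runcmd (which runs as root) this might be invoked as:
#   append_fstab_entry /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 \
#     /mnt/volume-nyc1-01 ext4 defaults,nofail,discard
```

Note that /etc/fstab fields are whitespace-separated; the commas in a cloud-config mounts list are YAML syntax and must not be copied into fstab itself.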

I’ll provide an update here when a fix has been released. Thanks for helping us catch this!

  • @davidkolb Just wanted to let you know a fix was deployed for this. I’ve been able to successfully mount a volume on boot with the cloud-config file above.

    • Is this supposed to work “as-is” (the first example) on 16.04? There are no authentication steps with DO’s API or anything?

      I tried it and it creates the mount directory, but doesn’t automatically grab the volume from DO – I still had to call the web API in order to get it to mount. Is this the intended function?

      • This example assumed that the volume had already been provisioned and was being attached to the Droplet at creation time. So in Terraform, it might look like:

        resource "digitalocean_volume" "baz" {
          region      = "nyc1"
          name        = "baz"
          size        = 100
          description = "an example volume"
        }

        resource "digitalocean_droplet" "foobar" {
          name       = "foobar"
          size       = "s-1vcpu-1gb"
          image      = "ubuntu-18-04-x64"
          region     = "nyc1"
          volume_ids = ["${digitalocean_volume.baz.id}"]
          user_data  = <<-EOF
                       #cloud-config
                       mounts:
                         - [ /dev/disk/by-id/scsi-0DO_Volume_baz, /mnt/baz, "ext4", "defaults,nofail,discard", "0", "0" ]
                       EOF
        }

        We’ve actually made this even easier via our API since I wrote this answer. You can now ask for a pre-formatted volume that will be automatically mounted when attached by specifying a filesystem_type (or in Terraform initial_filesystem_type). For the details, check out:
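A sketch of what the volume-create request body might look like with that option (the name and size here are placeholders; filesystem_type is the relevant field):

```json
{
  "name": "baz",
  "region": "nyc1",
  "size_gigabytes": 100,
  "filesystem_type": "ext4"
}
```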

  • I’ve tried this as well using cloud-init provisioned with terraform.
    I’ve found that the volume name is not present, but instead is converted to “sdb”:


    I’ve tested this with both Ubuntu 16.04 and 18.04 and get the same result, but if I manually create a droplet and attach a volume it does show up with the volume name.

    The simple workaround is to just use sdb, but this may be unreliable if I start attaching more than one volume. Any thoughts on this?

    • Hi @jkirkham. Volumes should still be identified by the pattern /dev/disk/by-id/scsi-0DO_Volume_$VOLUME_NAME. For instance, I was just able to successfully create a volume named “baz” that showed up as /dev/disk/by-id/scsi-0DO_Volume_baz. Can you open a support ticket so the team can investigate?

      Also, while this approach should still work, it’s worth pointing out that we’ve actually made this even easier via our API since I wrote this answer. You can now ask for a pre-formatted volume that will be automatically mounted when attached by specifying a filesystem_type. For the details, check out:

      Blog post:
      The API changelog:
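To see which attached volumes actually expose their names this way, a small helper can strip the prefix from the by-id listing (a sketch; the prefix comes from the pattern above, and the helper name is mine):

```shell
# volume_names: read /dev/disk/by-id entries on stdin and print only
# DigitalOcean volume names (strips the scsi-0DO_Volume_ prefix).
volume_names() {
  sed -n 's/^scsi-0DO_Volume_//p'
}

# Usage on a Droplet:
#   ls /dev/disk/by-id | volume_names
```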

      • @asb Thanks for the quick follow up.
        I was surprised by this behaviour too. Previously I had it working in terraform by deploying a mount script and running it, but (it seems) when I switched to using templated cloud-configs from terraform the “by-id” volume name changed. I suspect it is some interaction between my cloud-config and the standard configs used to setup a new droplet. I prefer to continue to use terraform and templated cloud-configs for my solution so I will open a ticket.
        Also, another factor is that the web UI or API auto-format and mount feature doesn’t seem to support custom mount points. Plus once created I will likely want to preserve the volume and remount it on updated droplets (in terraform that means a new droplet).
        If I do find a solution I’ll post it here.


        • Ok, I think I’ve solved the issue. After reading the API docs again, I noticed the volume name only permits alphanumeric characters plus “-”. The example I encountered the problem with had an underscore (“_”) in the volume name. It appears this caused the volume “by-id” name to revert to the “sdb” device.
          Thanks again. Hopefully this can be highlighted somewhere so others don’t run into the issue.
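Given that constraint, a quick pre-flight check before creating a volume could look like this (a sketch; the character class simply encodes the rule described above):

```shell
# valid_volume_name: succeed only if the name uses alphanumerics and '-',
# per the volume-naming rule noted above.
valid_volume_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9-]+$'
}
```

Here `valid_volume_name my_volume` fails, which would have flagged the underscore before the by-id name silently fell back to sdb.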

