Question

How to use cloud-init to mount block storage that's already formatted and ready to mount?

Posted February 15, 2017 · 9.2k views
System Tools · Ubuntu 16.04 · Block Storage

I’m running Ubuntu 16.04 on a Droplet created with Terraform. I have an existing volume, already formatted, that I would like to mount at /home so my user directory persists across Terraform applies.

Unfortunately, while /etc/cloud/cloud.cfg lists mounts in its cloud_init_modules, no entry is ever written to /etc/fstab.

This is in my user data:

#cloud-config
mounts:
  - - '/dev/disk/by-id/scsi-0DO_Volume_volume-name-here-part1'
    - '/home'
    - 'ext4'
    - 'defaults,nofail,discard'
    - '0'
    - '2'
packages:
  - zsh
  - git
  - ufw
users:
  - name: demo
    groups: sudo
    shell: /bin/zsh
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - 'ssh-rsa <snip>'
runcmd:
  # Secure SSHD
  - [ sed, -i, -e, 's/^PermitRootLogin yes/PermitRootLogin no/', '/etc/ssh/sshd_config' ]
  - [ service, sshd, restart]
  - [ rm, -f, /root/.ssh/authorized_keys ]
  # Secure UFW
  - ufw default deny incoming
  - ufw default allow outgoing
  - ufw allow ssh
  - ufw enable

If I run cloud-init -d single -n mounts as root, the entry is written to /etc/fstab and /home is mounted. Then I need to run mkhomedir_helper demo to recreate my home directory.
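
In full, the manual steps look like this:

# Re-run just the cloud-init mounts module, with debug output
cloud-init -d single -n mounts
# Recreate the demo user's home directory; mounting a volume over /home hides the one created at boot
mkhomedir_helper demo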

How can I get the mounts module to run automatically? The cloud-config docs are…less than ideal and don’t really explain anything in the examples except how to structure the mounts hash.

1 answer

The cloud-config docs are indeed less than ideal… I took some time to investigate this and discussed it with one of our engineers. It looks like, due to changes in the version shipping with Ubuntu 16.04, the mounts module is not currently run on our platform; fixing this will require some changes to the vendor-data we provide.

I can confirm that this does work as expected on Ubuntu 14.04:

#cloud-config
mounts:
 - [ /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01, /mnt/volume-nyc1-01, "ext4", "defaults,nofail,discard", "0", "0" ]

As a workaround on Ubuntu 16.04, you can translate this into something usable by runcmd like so:

#cloud-config
runcmd:
 - mkdir -p /mnt/volume-nyc1-01
 # runcmd already runs as root, so no sudo is needed
 - mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 /mnt/volume-nyc1-01
 # fstab fields are whitespace-separated, so drop the commas and quotes
 - echo '/dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01 /mnt/volume-nyc1-01 ext4 defaults,nofail,discard 0 0' | tee -a /etc/fstab
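
Once the Droplet boots, you can sanity-check the result with standard tools:

 # Confirm the volume is mounted where expected
 findmnt /mnt/volume-nyc1-01
 # Confirm the fstab entry appended by runcmd
 tail -n 1 /etc/fstab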

I’ll provide an update here when a fix has been released. Thanks for helping us catch this!

  • @davidkolb Just wanted to let you know a fix was deployed for this. I’ve been able to successfully mount a volume on boot with the cloud-config file above.

    • Is this supposed to work “as-is” (the first example) for 16.04? There are no authentication steps with DO’s API or anything?

      I tried it and it creates the mount directory, but it doesn’t automatically grab the volume from DO; I still had to call the web API to get it attached. Is this the intended behavior?

      • This example assumed that the volume had already been provisioned and was attached to the Droplet at creation time. So in Terraform, it might look like:

        resource "digitalocean_volume" "baz" {
          region                  = "nyc1"
          name                    = "baz"
          size                    = 100
          description             = "an example volume"
        }
        
        resource "digitalocean_droplet" "foobar" {
          name       = "foobar"
          size       = "s-1vcpu-1gb"
          image      = "ubuntu-18-04-x64"
          region     = "nyc1"
          volume_ids = ["${digitalocean_volume.baz.id}"]
          user_data = <<-EOF
                        #cloud-config
                        mounts:
                         - [ /dev/disk/by-id/scsi-0DO_Volume_baz, /mnt/baz, "ext4", "defaults,nofail,discard", "0", "0" ]
                        EOF
        }
        

        We’ve actually made this even easier via our API since I wrote this answer. You can now request a pre-formatted volume that will be automatically mounted when attached by specifying a filesystem_type (or, in Terraform, initial_filesystem_type). For the details, check out: https://developers.digitalocean.com/documentation/changelog/api-v2/auto-formatting-support-for-volumes/
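
        In Terraform, that looks something like this (a minimal sketch; initial_filesystem_type is the provider attribute mentioned above, and with it set the mounts entry in user_data is no longer needed):

        resource "digitalocean_volume" "baz" {
          region                  = "nyc1"
          name                    = "baz"
          size                    = 100
          description             = "an example volume"
          # Assumption: ext4 is the desired filesystem; DO formats the volume at
          # creation and auto-mounts it when attached
          initial_filesystem_type = "ext4"
        }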

        • Thanks for this info. Would you please update the blog post to clearly state the path of the mounted volume? I suppose it is /mnt/VOLUME_ID.

          • There is now some documentation:
            https://www.digitalocean.com/docs/volumes/how-to/create/#automatically-format–mount

            It also doesn’t explicitly state the mount path.

            I’ve found that automounting only happens when attaching a volume to a droplet that is already running. Start a droplet with its volumes already attached and they won’t get mounted. I filed a support ticket asking about this. This is disappointing because it causes two problems for me:

            1. I must include mount entries in cloud-init. Changing cloud-init causes hosts to be recreated, which makes deployments take longer and adds more risk.
            2. It’s a lot of extra code in my Terraform configs.

            I’m starting to understand why people put up with the complexity of Kubernetes. If you try to deploy systems using infrastructure-as-code with plain Docker on VMs, you still have to deal with tons of complexity, it’s just different complexity. And there’s lots of brokenness.

  • I’ve tried this as well, using cloud-init provisioned with Terraform.
    I’ve found that the volume name is not present; instead it is converted to “sdb”:

    /dev/disk/by-id/scsi-0DO_Volume_sdb
    

    I’ve tested this with both Ubuntu 16.04 and 18.04 and get the same result, but if I manually create a droplet and attach a volume it does show up with the volume name.

    The simple work-around is to just use sdb, but this may be unreliable if I start attaching more than one volume. Any thoughts on this?

    • Hi @jkirkham. Volumes should still be identified by the pattern /dev/disk/by-id/scsi-0DO_Volume_$VOLUME_NAME. For instance, I was just able to successfully create a volume named “baz” that showed up as /dev/disk/by-id/scsi-0DO_Volume_baz. Can you open a support ticket so the team can investigate?
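
      A quick way to confirm what the kernel actually sees is to list the by-id symlinks directly:

        # List attached DigitalOcean volumes by their stable device IDs
        ls -l /dev/disk/by-id/ | grep DO_Volume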

      Also, while this approach should still work, it’s worth pointing out that we’ve actually made this even easier via our API since I wrote this answer. You can now request a pre-formatted volume that will be automatically mounted when attached by specifying a filesystem_type. For the details, check out:

      Blog post: https://blog.digitalocean.com/auto-format-and-mount/
      The API changelog: https://developers.digitalocean.com/documentation/changelog/api-v2/auto-formatting-support-for-volumes/

      • @asb Thanks for the quick follow up.
        I was surprised by this behaviour too. Previously I had it working in Terraform by deploying and running a mount script, but (it seems) when I switched to templated cloud-configs from Terraform, the “by-id” volume name changed. I suspect it is some interaction between my cloud-config and the standard configs used to set up a new droplet. I prefer to continue using Terraform and templated cloud-configs, so I will open a ticket.
        Another factor is that the auto-format-and-mount feature in the web UI and API doesn’t seem to support custom mount points. Plus, once the volume is created, I will likely want to preserve it and remount it on updated droplets (in Terraform, that means a new droplet).
        If I do find a solution I’ll post it here.
        Thanks.

        John

        • OK, I think I’ve solved the issue. After reading the API docs again, I noticed that a volume name only permits alphanumeric characters plus “-”. The example I encountered the problem with had an underscore (“_”) in the volume name. It appears this caused the volume’s “by-id” name to revert to the “sdb” device.
          Thanks again. Hopefully this can be highlighted somewhere so others don’t run into the issue.
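
          If it helps anyone else, a quick pre-flight check along these lines (just a sketch; the regex mirrors the alphanumeric-plus-hyphen rule above) would catch this before terraform apply:

            # Reject volume names that would break the by-id symlink
            name="data_volume"   # underscore: the mistake I made
            if [[ ! "$name" =~ ^[a-zA-Z0-9-]+$ ]]; then
              echo "invalid volume name: $name (alphanumerics and hyphens only)" >&2
              exit 1
            fi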

          John
