Introduction

Terraform is a tool for building and managing infrastructure in an organized way. It can be used to manage DigitalOcean Droplets and DNS entries, in addition to a large variety of services offered by other providers. It is controlled via an easy-to-use command-line interface, and can run from your desktop or a remote server.

Terraform works by reading configuration files that describe the components that make up your application environment or datacenter. Based on the configuration, it generates an execution plan, which describes what it will do to reach the desired state. The plan is then executed to build the infrastructure. When changes to the configuration occur, Terraform can generate and execute incremental plans to update the existing infrastructure to the newly described state.

In this tutorial, you’ll use Terraform to create an infrastructure that consists of two Nginx servers that are load balanced by a DigitalOcean load balancer. Then you’ll use Terraform to add a DNS entry on DigitalOcean that points to your load balancer. This will help you get started with using Terraform, and give you an idea of how you can use it to manage and deploy a DigitalOcean-based infrastructure that meets your own needs.

This tutorial uses Terraform 0.12.

Prerequisites

To complete this tutorial, you’ll need:

  • A DigitalOcean account with a Personal Access Token, which you can generate from the DigitalOcean control panel.
  • An SSH key pair, with the public key added to your DigitalOcean account.

Step 1 — Configuring your Environment

Terraform will use your DigitalOcean Personal Access Token to communicate with the DigitalOcean API and manage resources in your account. Don’t share this key with others, and keep it out of scripts and version control. Export your DigitalOcean Personal Access Token to an environment variable called DO_PAT. This will make using it in subsequent commands easier and keep it separate from your code:

  • export DO_PAT={YOUR_PERSONAL_ACCESS_TOKEN}

Next, you’ll need the MD5 fingerprint of the public key you’ve associated with your account, so Terraform can add it to each machine it provisions. Assuming that your private key is located at ~/.ssh/id_rsa, use the following command to get the MD5 fingerprint of your public key:

  • ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'

This will output something like the following:

Output
md5:e7:42:16:d7:...9e:92:f7

You will provide this fingerprint, minus the md5: prefix, when running Terraform. To make this easier, export your SSH fingerprint to your environment as well:

  • export DO_SSH_FINGERPRINT="e7:42:16:d7:e5:a0:43:29:82:7d:a0:59:cf:9e:92:f7"
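If you'd rather not copy the fingerprint by hand, the two steps can be combined. This is a convenience sketch, not part of the original tutorial, and it assumes your public key is at ~/.ssh/id_rsa.pub:

```shell
# Compute the MD5 fingerprint, strip the "md5:" prefix, and export it in one go.
# cut -d: -f2- drops everything up to and including the first colon,
# leaving just the colon-separated hex fingerprint.
export DO_SSH_FINGERPRINT="$(ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}' | cut -d: -f2-)"
echo "$DO_SSH_FINGERPRINT"
```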

Now that you have your environment variables configured, let’s install Terraform.

Step 2 — Installing Terraform

Terraform can run on your desktop or on a remote server. To install it, download it and place it on your PATH.

First, download the appropriate package for your OS and architecture from the official Downloads page. For this tutorial, download Terraform to your local machine and save it to the ~/Downloads directory.

If you’re on macOS or Linux, you can download Terraform with curl.

On macOS, use this command to download Terraform and place it in the Downloads folder:

  • curl -o ~/Downloads/terraform.zip https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_darwin_amd64.zip

On Linux, use this command:

  • curl -o ~/Downloads/terraform.zip https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip

Then extract Terraform and place it in the ~/opt/terraform directory with the following commands:

  • mkdir -p ~/opt/terraform
  • unzip ~/Downloads/terraform.zip -d ~/opt/terraform

This unarchives the terraform binary into the ~/opt/terraform directory within your home directory.

Finally, add ~/opt/terraform to your PATH environment variable so you can execute the terraform command without specifying the full path to the executable.

On Linux, add the path to the file .bashrc:

  • nano ~/.bashrc

To append Terraform’s path to your PATH, add the following line at the end of the file:

.bashrc
export PATH=$PATH:~/opt/terraform

Save the file and exit the editor.

Now all of your new bash sessions will be able to find the terraform command. To load the new PATH into your current session, type the following:

  • . ~/.bashrc

If you’re on macOS and you’re using the Bash shell, add the code to .bash_profile instead. On macOS with zsh, add the line to ~/.zshrc.

To verify that you have installed Terraform correctly, let’s run it. In a terminal, run Terraform with no arguments:

  • terraform

If your path is set up properly, you will see output that is similar to the following:

Output
Available commands are:
    apply      Builds or changes infrastructure
    graph      Create a visual graph of Terraform resources
    output     Read an output from a state file
    plan       Generate and show an execution plan
    refresh    Update local state file against real resources
    show       Inspect Terraform state or plan
    version    Prints the Terraform version

These are some of the commands that Terraform accepts. They’re briefly described here, and we will get into how to use them later.

Now that Terraform is installed, let’s start writing a configuration to describe our infrastructure!

Step 3 — Configuring Terraform for DigitalOcean

Terraform supports a variety of service providers through provider plugins, which terraform init downloads for you. We are interested in the DigitalOcean provider, which Terraform will use to interact with the DigitalOcean API to build our infrastructure. The first step to building an infrastructure with Terraform is to define the provider you’re going to use by creating some Terraform configuration files.

Create a directory that will store your configuration files for a given project. The name of the directory does not matter; we will use “loadbalance” for this example, but feel free to choose another name:

  • mkdir ~/loadbalance

Terraform configurations are text files that end with the .tf file extension. They are human-readable and they support comments. Terraform also supports JSON-format configuration files, but we won’t cover those here. Terraform will read all of the configuration files in your working directory in a declarative manner, so the order of resource and variable definitions does not matter. Your entire infrastructure can exist in a single configuration file, but we will separate our configuration files by resource in this tutorial.

Change your current directory to the newly created directory:

  • cd ~/loadbalance

From now on, we will assume that your working directory is the one that we just changed to. If you start a new terminal session, be sure to change to the directory that contains your Terraform configuration.

If you happen to get stuck and Terraform is not working as you expect, you can start over by deleting the terraform.tfstate file and manually destroying the resources that were created (e.g. through the DigitalOcean control panel).

Note: You may also want to enable logging to stdout, so you can see what Terraform is trying to do. Do that by running the following command:

  • export TF_LOG=1

The first step to using the DigitalOcean provider is configuring it with the proper credential variables. Let’s do that now.

Create a file called provider.tf:

  • nano provider.tf

Add the following lines into the file:

provider.tf
variable "do_token" {}
variable "pub_key" {}
variable "pvt_key" {}
variable "ssh_fingerprint" {}

provider "digitalocean" {
  token = var.do_token
}

Save and exit. Here is a breakdown of the first four lines:

  • variable “do_token”: your DigitalOcean Personal Access Token
  • variable “pub_key”: public key location, so it can be installed into new droplets
  • variable “pvt_key”: private key location, so Terraform can connect to new droplets
  • variable “ssh_fingerprint”: fingerprint of SSH key

The provider block specifies the credentials for your DigitalOcean account by assigning the do_token variable to the provider’s token argument. We will pass the values of these variables into Terraform when we run it.
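As an aside the original files don't include, Terraform 0.12 variables can also carry a description and a default value, which makes a configuration more self-documenting. A hedged sketch:

```hcl
variable "pvt_key" {
  description = "Path to the private key Terraform uses to SSH into new Droplets"
  default     = "~/.ssh/id_rsa"
}
```

When a variable has a default, you can omit the corresponding -var flag on the command line and Terraform will fall back to the default value.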

The official Terraform documentation for the DigitalOcean provider is located here: DigitalOcean Provider.

Each provider has its own specifications, which generally map to the API of its respective service provider. In the case of the DigitalOcean provider, we will define four types of resources:

  • digitalocean_droplet: Droplets (servers)
  • digitalocean_loadbalancer: Load balancers
  • digitalocean_domain: DNS domain entries
  • digitalocean_record: DNS records

Let’s start by creating a Droplet which will run an Nginx server.

Step 4 — Defining the First Nginx Server

You can use Terraform to create a DigitalOcean Droplet and install software on the Droplet once it spins up. In this step you’ll provision a single Ubuntu 18.04 Droplet and install the Nginx web server using Terraform.

Create a new Terraform configuration file called www-1.tf which will hold the configuration for the Droplet:

  • nano www-1.tf

Insert the following lines to define the Droplet resource:

resource "digitalocean_droplet" "www-1" {
    image = "ubuntu-18-04-x64"
    name = "www-1"
    region = "nyc2"
    size = "s-1vcpu-1gb"
    private_networking = true
    ssh_keys = [
      var.ssh_fingerprint
    ]

In the preceding configuration, the first line defines a digitalocean_droplet resource named www-1. The rest of the lines specify the droplet’s attributes, including its data center and the slug which identifies the size of the Droplet you want to configure. In this case we’re using s-1vcpu-1gb, which will create a Droplet with one CPU and 1GB of RAM. Visit this size slug chart to see the available slugs you can use.

When you run Terraform against the DigitalOcean API, it will collect a variety of information about the Droplet, such as its public and private IP addresses. This information can be used by other resources in your configuration.

If you are wondering which arguments are required or optional for a Droplet resource, please refer to the official Terraform documentation: DigitalOcean Droplet Specification.

Now, we will set up a connection which Terraform can use to connect to the server via SSH. Insert the following lines at the end of the file:

  connection {
    host = self.ipv4_address
    user = "root"
    type = "ssh"
    private_key = file(var.pvt_key)
    timeout = "2m"
  }

These lines describe how Terraform should connect to the Droplet, so Terraform can connect over SSH to install Nginx (note the use of the private key variable).

Now that you have the connection set up, configure the remote-exec provisioner, which you’ll use to install Nginx. Add the following lines to the configuration to do just that:

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # install nginx
      "sudo apt-get update",
      "sudo apt-get -y install nginx"
    ]
  }
}

Note that the strings in the inline array are the commands that the root user will run to install Nginx.

The completed file looks like this:

www-1.tf
resource "digitalocean_droplet" "www-1" {
    image = "ubuntu-18-04-x64"
    name = "www-1"
    region = "nyc2"
    size = "s-1vcpu-1gb"
    private_networking = true
    ssh_keys = [
      var.ssh_fingerprint
    ]
  connection {

    host = self.ipv4_address
    user = "root"
    type = "ssh"
    private_key = file(var.pvt_key)
    timeout = "2m"
  }
  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # install nginx
      "sudo apt-get update",
      "sudo apt-get -y install nginx"
    ]
  }
}

Save the file and exit the editor.

Step 5 — Using Terraform to Create the Nginx Server

Currently, your Terraform configuration describes a single Nginx server. Let’s test it out.

First, initialize Terraform for your project. This will read your configuration files and install the plugins for your provider:

  • terraform init

You’ll see this output:

Output
Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "digitalocean" (terraform-providers/digitalocean) 1.6.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.digitalocean: version = "~> 1.18"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next, run the terraform plan command to see what Terraform will attempt to do to build the infrastructure you described (i.e. see the execution plan). You will have to specify the values for your DigitalOcean Access Token, the path to your public and private key, and the fingerprint for your key, as your Terraform configuration files use this information to access the DigitalOcean API and log in to your Droplet to install Nginx. Execute the following command:

  • terraform plan \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=${DO_SSH_FINGERPRINT}"

You’ll see this output:

Output
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.www-1 will be created
  + resource "digitalocean_droplet" "www-1" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "www-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = true
      + region               = "nyc2"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = [
          + "your_ssh_key_hash",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

The + resource "digitalocean_droplet" "www-1" line means that Terraform will create a new Droplet resource called www-1, with the details that follow it. That’s exactly what we want, so let’s execute the plan. Run the following terraform apply command to execute the current plan. Again, specify all the values for the variables:

  • terraform apply \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=${DO_SSH_FINGERPRINT}"

You’ll see the same output as before, but this time, Terraform will ask you if you want to proceed:

Output
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Enter yes and press ENTER. Terraform will provision your Droplet:

Output
digitalocean_droplet.www-1: Creating...

After a bit of time, you’ll see Terraform installing Nginx with the remote-exec provisioner, and then the process will complete:


Output
digitalocean_droplet.www-1: Provisioning with 'remote-exec'...
...
digitalocean_droplet.www-1: Creation complete
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Terraform updates the state file terraform.tfstate every time it executes a plan or “refreshes” its state.

To view the current state of your environment, use the following command:

  • terraform show terraform.tfstate
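As an optional convenience the tutorial doesn’t use, you could also declare an output value so that Terraform prints the Droplet’s IP address after each apply, and terraform output recalls it later without reading the state file. The file name outputs.tf below is arbitrary:

```hcl
# outputs.tf — outputs can live in any .tf file in the working directory.
output "www-1_ipv4" {
  value = digitalocean_droplet.www-1.ipv4_address
}
```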

Note: If you modify your infrastructure outside of Terraform, your state file will be out of date. Refresh the state file to bring it back in sync; this command will pull the updated resource information from your provider(s):

  • terraform refresh \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=$DO_SSH_FINGERPRINT"
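Typing the four -var flags for every command gets tedious. One option, a convenience sketch that is not part of the tutorial (the file name tf.sh is made up), is a small wrapper script that appends them to whatever Terraform subcommand you pass:

```shell
# Create tf.sh, a hypothetical wrapper that forwards any Terraform subcommand
# with the credential -var flags appended.
cat > tf.sh <<'EOF'
#!/bin/sh
exec terraform "$@" \
  -var "do_token=${DO_PAT}" \
  -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  -var "pvt_key=$HOME/.ssh/id_rsa" \
  -var "ssh_fingerprint=${DO_SSH_FINGERPRINT}"
EOF
chmod +x tf.sh
# Usage: ./tf.sh plan    or    ./tf.sh apply
```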

At this point, Terraform has created a new Droplet called www-1 and installed Nginx on it. If you visit the public IP address of your new Droplet, you’ll see the Nginx welcome screen.

Step 6 — Creating the Second Nginx Server

Now that you have described an Nginx server, you can quickly add a second one by copying the existing server’s configuration file and replacing the name and hostname of the Droplet resource.

You can do this manually, but it’s faster to use the sed command to substitute all instances of www-1 with www-2 and create a new file. Here is the sed command to do that:

  • sed 's/www-1/www-2/g' www-1.tf > www-2.tf

Learn more about sed in Using sed.
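Copying files works fine for two servers, but it’s worth knowing that Terraform 0.12 can also stamp out similar resources from one block using count. The sketch below is an alternative approach this tutorial does not use, shown only for comparison:

```hcl
# Alternative sketch: one block that creates two Droplets, www-1 and www-2.
resource "digitalocean_droplet" "www" {
  count  = 2
  name   = "www-${count.index + 1}"
  image  = "ubuntu-18-04-x64"
  region = "nyc2"
  size   = "s-1vcpu-1gb"
  # ...ssh_keys, connection, and provisioner blocks as before...
}
```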

Now run terraform plan again to preview the changes that Terraform will make:

  • terraform plan \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=${DO_SSH_FINGERPRINT}"

The output shows the second server:

Output
...

Terraform will perform the following actions:

  # digitalocean_droplet.www-2 will be created
  + resource "digitalocean_droplet" "www-2" {
      + backups              = false
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "www-2"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = true
      + region               = "nyc2"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = [
          + "your_ssh_key_hash",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

...

Then run terraform apply again to create the second server.

  • terraform apply \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=${DO_SSH_FINGERPRINT}"

You have both servers configured. Now let’s configure the load balancer.

Step 7 — Creating the Load Balancer

We’ll use a DigitalOcean Load Balancer to route traffic between our two web servers. The DigitalOcean Terraform provider supports this as well.

Create a new Terraform configuration file called loadbalancer.tf:

  • nano loadbalancer.tf

Insert the following lines to define the load balancer:

loadbalancer.tf
resource "digitalocean_loadbalancer" "www-lb" {
  name = "web-lb"
  region = "nyc2"

  forwarding_rule {
    entry_port = 80
    entry_protocol = "http"

    target_port = 80
    target_protocol = "http"
  }

  healthcheck {
    port = 22
    protocol = "tcp"
  }

  droplet_ids = [digitalocean_droplet.www-1.id, digitalocean_droplet.www-2.id ]
}

The load balancer definition specifies the name of the load balancer, the datacenter, the ports it should listen on to balance traffic, the configuration for the health check, and the IDs of the Droplets it should balance, which we reference through the Droplet resources’ attributes.

Save the file and exit the editor.
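The health check above probes SSH on port 22, which confirms the Droplet is up but not that Nginx is serving. An HTTP check is a reasonable alternative; this variant is not part of the tutorial’s configuration, just a sketch of the provider’s healthcheck block:

```hcl
# Alternative health check: only send traffic to Droplets whose web server responds.
healthcheck {
  port     = 80
  protocol = "http"
  path     = "/"
}
```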

Run the terraform plan command again to see the new execution plan:

  • terraform plan \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=$DO_SSH_FINGERPRINT"

You’ll see several lines of output, including the following lines:

Output
...
digitalocean_droplet.www-1: Refreshing state... [id=155205581]
digitalocean_droplet.www-2: Refreshing state... [id=155305680]
...
+ digitalocean_loadbalancer.www-lb
...

This means that the www-1 and www-2 Droplets already exist, and Terraform will create the www-lb load balancer. Let’s run terraform apply to build the remaining components:

  • terraform apply \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=$DO_SSH_FINGERPRINT"

You’ll see output that contains the following lines (truncated for brevity):

Output
...
digitalocean_loadbalancer.www-lb: Creating...
...
digitalocean_loadbalancer.www-lb: Creation complete after 1m18s [id=e517d65b-68a3-4923-82c3-28bc48e50c12]
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
...

If you visit the public IP address of your load balancer, you’ll see an Nginx welcome screen, because the load balancer is sending traffic to one of the two Nginx servers.

The rest of the tutorial includes information about configuring DNS domain and record resources with Terraform, and information on how to use the other Terraform commands.

Step 8 — Creating DNS Domains and Records

Terraform can also create DNS domains and records. For example, if you want to point your domain at your load balancer, you can create a Terraform configuration file for that.

Note: Use your own, unique, domain name or this step will fail.

Create a new file to describe your DNS:

  • nano your_domain.tf

Insert the following domain resource:

your_domain.tf
resource "digitalocean_domain" "default" {
   name = "your_domain"
   ip_address = digitalocean_loadbalancer.www-lb.ipv4_address
}

And while we’re at it, let’s add a CNAME record that points www.your_domain to your_domain:

your_domain.tf
resource "digitalocean_record" "CNAME-www" {
  domain = digitalocean_domain.default.name
  type = "CNAME"
  name = "www"
  value = "@"
}

Save and exit.

To add the DNS entries, run terraform plan followed by terraform apply, as with the other resources.

Step 9 — Destroying Your Infrastructure

Although not commonly used in production environments, Terraform can also destroy the infrastructure it creates. This is mainly useful in development environments that are built and destroyed multiple times.

First, create an execution plan to destroy the infrastructure by using terraform plan -destroy like this:

  • terraform plan -destroy -out=terraform.tfplan \
  • -var "do_token=${DO_PAT}" \
  • -var "pub_key=$HOME/.ssh/id_rsa.pub" \
  • -var "pvt_key=$HOME/.ssh/id_rsa" \
  • -var "ssh_fingerprint=$DO_SSH_FINGERPRINT"

Terraform will output a plan with resources marked in red and prefixed with a minus sign, indicating that it will delete the resources in your infrastructure.

Use terraform apply to run the plan:

  • terraform apply terraform.tfplan

Terraform will destroy the resources, as indicated in the destroy plan.

Conclusion

In this tutorial you used Terraform to build a load-balanced web infrastructure on DigitalOcean, with two Nginx web servers running behind a DigitalOcean Load Balancer. You know how to create and destroy resources, and use Terraform to configure DNS entries.

Now that you understand how Terraform works, feel free to create configuration files that describe a server infrastructure that is useful to you. The example setup is simple, but demonstrates how easy it is to automate the deployment of servers. If you already use configuration management tools, like Puppet or Chef, you can call those with Terraform’s provisioners to configure servers as part of their creation process.

Terraform has many more features, and can work with other providers. Check out the official Terraform Documentation to learn more about how you can use Terraform to improve your own infrastructure.

