
How To Build a Custom Terraform Module

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

Terraform modules allow you to group distinct resources of your infrastructure into a single, unified unit. You can reuse modules later, with customizations, without repeating the resource definitions each time you need them, which is especially beneficial to large and complex projects. You can customize module instances using input variables you define, and extract information from them using outputs. Aside from creating your own custom modules, you can also use pre-made modules published publicly on the Terraform Registry. Developers can use and customize them with inputs, just like the modules you create, but their source code is stored in and pulled from the cloud.

In this tutorial, you’ll create a Terraform module that will set up multiple Droplets behind a Load Balancer for redundancy. You’ll also use the for_each and count looping features of the HashiCorp Configuration Language (HCL) to deploy multiple customized instances of the module at the same time.

Prerequisites

To complete this tutorial, you will need:

- A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. This tutorial assumes the token is available in an environment variable named DO_PAT.
- Terraform installed on your local machine, and a project directory named terraform-modules set up with the DigitalOcean provider configured, as in the earlier tutorials in this series.

Note: This tutorial has specifically been tested with Terraform 1.1.3.

Module Structure and Benefits

In this section, you’ll learn what benefits modules bring, where they are usually placed in the project, and how they should be structured.

Custom Terraform modules are created to encapsulate related components that you use and deploy together frequently in larger projects. They are self-contained, bundling only the resources, variables, and providers they need.

Modules are typically stored in a central folder at the root of the project, each in its respective subfolder underneath. In order to retain a clean separation between modules, always architect them to have a single purpose and make sure they never contain submodules.

It is useful to extract a module when you find yourself repeating the same scheme of resources with only occasional customizations. Packaging a single resource as a module, on the other hand, is usually superfluous and gradually erodes the simplicity of the overall architecture.

For small development and test projects, incorporating modules is not necessary, because they do not bring much improvement in those cases. Because they can be customized through inputs, modules are the building blocks of complex projects. Developers use modules in larger projects because they significantly reduce code duplication, and because a definition only needs to be modified in one place for the change to propagate through the rest of the infrastructure.

Next you’ll define, use, and customize modules in your Terraform projects.

Creating a Module

In this section, you’ll define multiple Droplets and a Load Balancer as Terraform resources and package them into a module. You’ll also make the resulting module customizable using module inputs.

You’ll store the module in a directory named droplet-lb, under a directory called modules. Assuming you are in the terraform-modules directory you created as part of the prerequisites, create both at once by running:

mkdir -p modules/droplet-lb

The -p argument instructs mkdir to create all directories in the supplied path.

Navigate to it:

cd modules/droplet-lb

As was noted in the previous section, modules contain the resources and variables they use. Starting from Terraform 0.13, they must also include definitions of the providers they use. Modules do not require any special configuration to note that the code represents a module, as Terraform regards every directory containing HCL code as a module, even the root directory of the project.

Variables defined in a module are exposed as its inputs and can be used in resource definitions to customize them. The module you’ll create will have two inputs: the number of Droplets to create and the name of their group. Create and open for editing a file called variables.tf where you’ll store the variables:

nano variables.tf

Add the following lines:

modules/droplet-lb/variables.tf
variable "droplet_count" {}
variable "group_name" {}

Save and close the file.
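
Leaving the variable blocks empty means they will accept a value of any type. If you want stricter validation and better documentation, you could optionally add type constraints and descriptions. This is a minimal sketch of what that could look like; the rest of the tutorial works the same with or without it:

modules/droplet-lb/variables.tf
variable "droplet_count" {
  type        = number
  description = "How many Droplets to create in the group"
}

variable "group_name" {
  type        = string
  description = "Name prefix for the Droplets and the Load Balancer"
}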

You’ll store the Droplet definition in a file named droplets.tf. Create and open it for editing:

nano droplets.tf

Add the following lines:

modules/droplet-lb/droplets.tf
resource "digitalocean_droplet" "droplets" {
  count  = var.droplet_count
  image  = "ubuntu-20-04-x64"
  name   = "${var.group_name}-${count.index}"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}

For the count parameter, which specifies how many instances of a resource to create, you pass in the droplet_count variable. Its value will be specified when the module is called from the main project code. Each deployed Droplet will have a different name, which you achieve by appending the index of the current Droplet to the supplied group name. The Droplets will be deployed in the fra1 region and will run Ubuntu 20.04.

When you are done, save and close the file.

With the Droplets now defined, you can move on to creating the Load Balancer. You’ll store its resource definition in a file named lb.tf. Create and open it for editing by running:

nano lb.tf

Add its resource definition:

modules/droplet-lb/lb.tf
resource "digitalocean_loadbalancer" "www-lb" {
  name   = "lb-${var.group_name}"
  region = "fra1"

  forwarding_rule {
    entry_port     = 80
    entry_protocol = "http"

    target_port     = 80
    target_protocol = "http"
  }

  healthcheck {
    port     = 22
    protocol = "tcp"
  }

  droplet_ids = [
    for droplet in digitalocean_droplet.droplets:
      droplet.id
  ]
}

You define the Load Balancer with the group name in its name in order to make it distinguishable. You deploy it in the fra1 region, together with the Droplets. The two blocks that follow specify the forwarding and health check ports and protocols.

The droplet_ids argument takes in the IDs of the Droplets that the Load Balancer should manage. Since there are multiple Droplets, and their count is not known in advance, you use a for expression to traverse the collection of Droplets (digitalocean_droplet.droplets) and collect their IDs. You surround the for expression with brackets ([]) so that the resulting collection is a list.
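
If you prefer, the same list can be built with a splat expression, which is shorthand for exactly this kind of loop over a resource that uses count:

  droplet_ids = digitalocean_droplet.droplets[*].id

Both forms produce a list of Droplet IDs; the for expression is shown here because it generalizes to more complex transformations.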

Save and close the file.

You’ve now defined the Droplets, Load Balancer, and variables for your module. You still need to define the provider requirements, specifying which providers the module uses, including their version and where they are located. Since Terraform 0.13, modules must explicitly define the sources of providers they use that are not maintained by HashiCorp, because they do not inherit them from the parent project.

You’ll store the provider requirements in a file named provider.tf. Create it for editing by running:

nano provider.tf

Add the following lines to require the digitalocean provider:

modules/droplet-lb/provider.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

Save and close the file when you’re done. The droplet-lb module now requires the digitalocean provider.
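
Note that the module only declares which provider it requires; the provider itself is configured in the root project. If you followed the prerequisite setup, the root of your terraform-modules project should already contain a provider configuration along these lines (the exact file name and variable name may differ in your setup):

provider.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

The do_token variable is what you pass in on the command line with -var "do_token=${DO_PAT}" later in this tutorial.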

Modules also support outputs, which you can use to extract internal information about the state of their resources. You’ll define an output that exposes the IP address of the Load Balancer, and store it in a file named outputs.tf. Create it for editing:

nano outputs.tf

Add the following definition:

modules/droplet-lb/outputs.tf
output "lb_ip" {
  value = digitalocean_loadbalancer.www-lb.ip
}

This output retrieves the IP address of the Load Balancer. Save and close the file.
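
Outputs are not limited to a single value. If you also wanted the module to expose the addresses of the individual Droplets, you could add another output such as this optional sketch; this tutorial only relies on lb_ip, so it is not required:

modules/droplet-lb/outputs.tf
output "droplet_addresses" {
  value = digitalocean_droplet.droplets[*].ipv4_address
}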

The droplet-lb module is now functionally complete and ready for deployment. You’ll call it from the main code, which you’ll store in the root of the project. First, navigate back to the project root by moving up two directory levels:

cd ../..

Then, create and open for editing a file called main.tf, in which you’ll use the module:

nano main.tf

Add the following lines:

main.tf
module "groups" {
  source = "./modules/droplet-lb"

  droplet_count = 3
  group_name    = "group1"
}

output "loadbalancer-ip" {
  value = module.groups.lb_ip
}

In this declaration you invoke the droplet-lb module located in the directory specified as source. You configure the inputs it exposes, droplet_count and group_name; the latter is set to group1 so you’ll later be able to tell instances apart.

Since the Load Balancer IP output is defined in a module, it won’t automatically be shown when you apply the project. The solution is to create another output (loadbalancer-ip) that retrieves its value.

Save and close the file when you’re done.

Initialize the module by running:

terraform init

The output will look like this:

Output
Initializing modules...
- groups in modules/droplet-lb

Initializing the backend...

Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.19.0...
- Installed digitalocean/digitalocean v2.19.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
...
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

You can try planning the project to see what actions Terraform would take by running:

terraform plan -var "do_token=${DO_PAT}"

The output will be similar to this:

Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.groups.digitalocean_droplet.droplets[0] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name   = "group1-0"
      ...
    }

  # module.groups.digitalocean_droplet.droplets[1] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name   = "group1-1"
      ...
    }

  # module.groups.digitalocean_droplet.droplets[2] will be created
  + resource "digitalocean_droplet" "droplets" {
      ...
      + name   = "group1-2"
      ...
    }

  # module.groups.digitalocean_loadbalancer.www-lb will be created
  + resource "digitalocean_loadbalancer" "www-lb" {
      ...
      + name   = "lb-group1"
      ...
    }

Plan: 4 to add, 0 to change, 0 to destroy.
...

This output details that Terraform would create three Droplets, named group1-0, group1-1, and group1-2, and would also create a Load Balancer called lb-group1, which will manage the traffic to and from the three Droplets.

You can try applying the project to the cloud by running:

terraform apply -var "do_token=${DO_PAT}"

Enter yes when prompted. The output will show all the actions and the IP address of the Load Balancer will also be shown:

Output
module.groups.digitalocean_droplet.droplets[1]: Creating...
module.groups.digitalocean_droplet.droplets[0]: Creating...
module.groups.digitalocean_droplet.droplets[2]: Creating...
...
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

loadbalancer-ip = ip_address

You’ve created a module containing a customizable number of Droplets and a Load Balancer that will automatically be configured to manage their ingoing and outgoing traffic.
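
If you need the Load Balancer address again later, you don’t have to re-apply; Terraform keeps outputs in its state, and you can read them back at any time by running:

terraform output loadbalancer-ip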

Renaming Deployed Resources

In the previous section, you deployed the module you defined and called it groups. If you ever want to change its name, simply renaming the module call will not yield the result you might expect: Terraform will instead plan to destroy and recreate the resources, causing unnecessary downtime.

For example, open main.tf for editing by running:

nano main.tf

Rename the groups module to groups_renamed:

main.tf
module "groups_renamed" {
  source = "./modules/droplet-lb"

  droplet_count = 3
  group_name    = "group1"
}

output "loadbalancer-ip" {
  value = module.groups_renamed.lb_ip
}

Save and close the file. Then, initialize the project again:

terraform init

You can now plan the project:

terraform plan -var "do_token=${DO_PAT}"

The output will be long, but will look similar to this:

Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
  - destroy

Terraform will perform the following actions:

  # module.groups.digitalocean_droplet.droplets[0] will be destroyed
  ...
  # module.groups_renamed.digitalocean_droplet.droplets[0] will be created
  ...

Terraform plans to destroy the existing instances and create new ones. This is destructive and unnecessary, and may lead to unwanted downtime.

Instead, using the moved block, you can instruct Terraform to move old resources under the new name. Open main.tf for editing and add the following lines to the end of the file:

main.tf
moved {
  from = module.groups
  to   = module.groups_renamed
}

When you’re done, save and close the file.
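
The moved block is available starting with Terraform 1.1, which this tutorial was tested with. On older Terraform versions, you could achieve the same result by moving the state entries manually with the CLI, for example:

terraform state mv 'module.groups' 'module.groups_renamed'

The declarative moved block is preferable when available, because the intent is recorded in the configuration itself rather than performed as a one-off CLI operation.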

You can now plan the project:

terraform plan -var "do_token=${DO_PAT}"

With the moved block present in main.tf, Terraform now plans to move the resources instead of recreating them:

Output
Terraform will perform the following actions:

  # module.groups.digitalocean_droplet.droplets[0] has moved to module.groups_renamed.digitalocean_droplet.droplets[0]
  ...
  # module.groups.digitalocean_droplet.droplets[1] has moved to module.groups_renamed.digitalocean_droplet.droplets[1]
  ...

Moving resources changes their place in Terraform state, meaning that the actual cloud resources won’t be modified, destroyed, or recreated.

Because you’ll modify the configuration significantly in the next step, destroy the deployed resources by running:

terraform destroy -var "do_token=${DO_PAT}"

Enter yes when prompted. The output will end in:

Output
...
Destroy complete! Resources: 4 destroyed.

In this section, you renamed resources in your Terraform project without destroying them in the process. You’ll now deploy multiple instances of a module from the same code using for_each and count.

Deploying Multiple Module Instances

In this section, you’ll use count and for_each to deploy the droplet-lb module multiple times with customizations.

Using count

One way to deploy multiple instances of the same module at once is to pass the desired number to the count parameter, which is automatically available to every module. Open main.tf for editing:

nano main.tf

Modify it to look like this, removing the existing output definition and moved block:

main.tf
module "groups" {
  source = "./modules/droplet-lb"

  count  = 3

  droplet_count = 3
  group_name    = "group1-${count.index}"
}

By setting count to 3, you instruct Terraform to deploy the module three times, each with a different group name. When you’re done, save and close the file.

Plan the deployment by running:

terraform plan -var "do_token=${DO_PAT}"

The output will be long, and will look like this:

Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.groups[0].digitalocean_droplet.droplets[0] will be created
  ...
  # module.groups[0].digitalocean_droplet.droplets[1] will be created
  ...
  # module.groups[0].digitalocean_droplet.droplets[2] will be created
  ...
  # module.groups[0].digitalocean_loadbalancer.www-lb will be created
  ...
  # module.groups[1].digitalocean_droplet.droplets[0] will be created
  ...
  # module.groups[1].digitalocean_droplet.droplets[1] will be created
  ...
  # module.groups[1].digitalocean_droplet.droplets[2] will be created
  ...
  # module.groups[1].digitalocean_loadbalancer.www-lb will be created
  ...
  # module.groups[2].digitalocean_droplet.droplets[0] will be created
  ...
  # module.groups[2].digitalocean_droplet.droplets[1] will be created
  ...
  # module.groups[2].digitalocean_droplet.droplets[2] will be created
  ...
  # module.groups[2].digitalocean_loadbalancer.www-lb will be created
  ...

Plan: 12 to add, 0 to change, 0 to destroy.
...

Terraform details in the output that each of the three module instances would have three Droplets and a Load Balancer associated with them.
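
Because the module instances are now indexed, any output you want from them must reference a specific instance or collect values from all of them. For example, if you wanted to expose every Load Balancer IP again, one possible sketch is a splat expression added to main.tf:

main.tf
output "loadbalancer-ips" {
  value = module.groups[*].lb_ip
}

This returns a list with one IP address per module instance.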

Using for_each

You can use for_each for modules when you require more complex instance customization, or when the number of instances depends on third-party data (often presented as maps) that is not known while writing the code.

You’ll now define a map that pairs group names to Droplet counts and deploy instances of droplet-lb according to it. Open main.tf for editing by running:

nano main.tf

Modify the file to make it look like this:

main.tf
variable "group_counts" {
  type    = map
  default = {
    "group1" = 1
    "group2" = 3
  }
}

module "groups" {
  source   = "./modules/droplet-lb"
  for_each = var.group_counts

  droplet_count = each.value
  group_name    = each.key
}

You first define a map called group_counts that contains how many Droplets a given group should have. Then, you invoke the module droplet-lb, but specify that the for_each loop should operate on var.group_counts, the map you’ve defined just before. droplet_count takes each.value, the value of the current pair, which is the count of Droplets for the current group. group_name receives the name of the group.

Save and close the file when you’re done.
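
As with count, the module instances created by for_each are indexed, here by the map keys. If you wanted to expose the Load Balancer IPs of all groups, you could add an output that builds a map from group names to IPs, for example:

main.tf
output "loadbalancer-ips" {
  value = { for name, group in module.groups : name => group.lb_ip }
}

This is optional; the plan below works without it.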

Plan the configuration by running:

terraform plan -var "do_token=${DO_PAT}"

The output will detail the actions Terraform would take to create the two groups with their Droplets and Load Balancers:

Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.groups["group1"].digitalocean_droplet.droplets[0] will be created
  ...
  # module.groups["group1"].digitalocean_loadbalancer.www-lb will be created
  ...
  # module.groups["group2"].digitalocean_droplet.droplets[0] will be created
  ...
  # module.groups["group2"].digitalocean_droplet.droplets[1] will be created
  ...
  # module.groups["group2"].digitalocean_droplet.droplets[2] will be created
  ...
  # module.groups["group2"].digitalocean_loadbalancer.www-lb will be created
  ...

In this step, you’ve used count and for_each to deploy multiple customized instances of the same module from the same code.

Conclusion

In this tutorial, you created and deployed Terraform modules. You used modules to group logically linked resources together and customized them in order to deploy multiple different instances from a central code definition. You also used outputs to show attributes of resources contained in the module.

If you would like to learn more about Terraform, check out our How To Manage Infrastructure with Terraform series.



