By Savic, Kathryn Hancox and Vinayak Baranwal
The authors selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.
Terraform troubleshooting gets faster when you run validation first, capture targeted debug logs, and isolate failure scope before re-applying.
This guide is for beginner to intermediate Terraform users who need a repeatable debugging workflow on DigitalOcean infrastructure. It focuses on actions you can run immediately in a live project instead of abstract troubleshooting theory.
You will validate configuration quality before apply, enable and interpret TF_LOG output, diagnose terraform apply failures, and resolve cycle errors using dependency graphs. You will also reconcile drifted state, use lifecycle controls for safer updates, and fix provider version conflicts that block initialization or planning.
By the end, you will have a checklist-driven process for debugging Terraform errors with less guesswork and fewer repeated failed applies.
Key takeaways:
- Run terraform fmt -recursive and terraform validate before every apply to catch formatting, syntax, and type errors early.
- Set TF_LOG=DEBUG and TF_LOG_PATH=./terraform.log when diagnosing failures so provider API calls and error responses are captured in one file.
- Read terraform apply errors in order: resource address, provider response, and source file reference, then confirm the change path with terraform plan.
- Use terraform graph to identify circular dependencies and break cycles by removing mutual references or moving shared values into locals.
- Run terraform state list before state rm or import, and use terraform apply -refresh-only to reconcile drift safely.
- Apply lifecycle settings such as ignore_changes and create_before_destroy for controlled behavior; avoid blanket suppression patterns that hide real drift.
- Resolve provider version conflicts by pinning required_providers, re-resolving with terraform init -upgrade, and committing .terraform.lock.hcl.

To follow along, set up a project as described in How To Use Terraform with DigitalOcean, naming the project folder terraform-troubleshooting instead of loadbalance. During Step 2 of that tutorial, do not include the pvt_key variable and the SSH key resource. You should also be comfortable running terraform init, terraform plan, and terraform apply.
Note: This tutorial has been tested with Terraform 1.8.x. Commands are compatible with Terraform 1.5 and later.
Run formatting and static validation before apply so you catch local issues before hitting provider APIs.
Use terraform fmt to normalize HCL formatting across your project so reviewers and automated tooling see consistent configuration before planning or applying.
terraform fmt -recursive
The command rewrites files that do not match Terraform’s canonical style. A common fix is indentation and alignment inside blocks.
# Before
resource "digitalocean_droplet" "web" {
image="ubuntu-22-04-x64"
name = "web-1"
region="nyc3"
size= "s-1vcpu-1gb"
}
# After
resource "digitalocean_droplet" "web" {
image = "ubuntu-22-04-x64"
name = "web-1"
region = "nyc3"
size = "s-1vcpu-1gb"
}
If everything is already formatted, terraform fmt -recursive prints no file paths and exits successfully. If files are rewritten, Terraform prints each updated path.
Output
droplets.tf
variables.tf
Run terraform validate to check configuration syntax, argument names, and type constraints before a provider call occurs; it does not validate API-level conditions such as credentials, quotas, or region availability.
terraform validate
A failed validation usually points to an exact resource, argument, and file location. This example shows a required argument missing from a DigitalOcean Droplet resource:
Output
╷
│ Error: Missing required argument
│
│ on droplets.tf line 1, in resource "digitalocean_droplet" "web":
│ 1: resource "digitalocean_droplet" "web" {
│
│ The argument "size" is required, but no definition was found.
╵
Correct the resource and run validation again:
resource "digitalocean_droplet" "web" {
image = "ubuntu-22-04-x64"
name = "web-1"
region = "nyc3"
size = "s-1vcpu-1gb"
}
Output
Success! The configuration is valid.
When variable inputs can drift into invalid values, add explicit validation rules. This pattern keeps invalid values out before planning:
variable "test_ip" {
type = string
default = "8.8.8.8"
validation {
condition = can(regex("^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}$", var.test_ip))
error_message = "The provided value is not a valid IP address."
}
}
The regex() function tests the pattern and can() converts function success into a boolean Terraform can enforce during validation.
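You can sanity-check a pattern like this outside Terraform before wiring it into a validation block. The shell sketch below uses grep -E with a POSIX form of the check; the ^ and $ anchors reject strings that merely contain an IP-like substring:

```shell
# POSIX ERE version of the IP-shape check (anchored at both ends).
ip_pattern='^([0-9]{1,3}\.){3}[0-9]{1,3}$'

looks_like_ip() {
  # grep -qE exits 0 on a match, non-zero otherwise.
  printf '%s\n' "$1" | grep -qE "$ip_pattern"
}

looks_like_ip "8.8.8.8" && echo "valid"
looks_like_ip "abc8.8.8.8xyz" || echo "invalid"
```

Note this only checks the shape of the value, not octet ranges; 999.999.999.999 still passes, which mirrors the simple Terraform pattern above.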
When terraform apply fails with a message that tells you what went wrong
but not why, TF_LOG is the next tool to reach for. It exposes the full
conversation between Terraform core and the provider, including every API
call, every response code, and every internal evaluation step. Without it,
you are reading the error summary. With it, you are reading the transcript.
TF_LOG controls the minimum log verbosity Terraform emits during command execution.
| Log Level | What It Captures | When To Use It |
|---|---|---|
| TRACE | Full execution trace, internal evaluation steps, plugin RPC details | Last-resort deep debugging, Terraform bug reports, reproducing non-obvious internal behavior |
| DEBUG | Detailed provider interactions and internal decision points | Default level for troubleshooting failed plan/apply operations |
| INFO | High-level operation milestones and state transitions | Lightweight runtime insight during routine validation of workflow behavior |
| WARN | Non-fatal warnings and suspicious configuration conditions | Finding risky config patterns that did not fail execution |
| ERROR | Fatal execution errors that stop Terraform | Quick triage when you only need terminal failure reasons |
If you set TF_LOG to any unrecognized value, Terraform defaults to TRACE. TF_LOG sets a minimum verbosity floor, not a strict filter. For example, setting DEBUG still shows INFO, WARN, and ERROR lines.
Write logs to a file for large configurations so output stays searchable and you avoid losing context in terminal scrollback.
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform.log
After capturing the needed logs, unset both variables so normal runs stay quiet:
unset TF_LOG
unset TF_LOG_PATH
Note: TF_LOG_PATH appends to an existing file. Remove or rotate terraform.log between runs so stale lines do not mix with new failures.
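Because the file appends, a small rotation step before each capture keeps runs isolated. A sketch (the .prev suffix is an arbitrary choice, not a Terraform convention):

```shell
# Move any previous capture aside, then enable fresh debug logging.
if [ -f terraform.log ]; then
  mv terraform.log terraform.log.prev
fi
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform.log
```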
Focus first on API status codes, request targets, and the resource Terraform was creating when the failure occurred.
Output
2024-06-18T10:42:11.223Z [INFO] Terraform version: 1.8.5
2024-06-18T10:42:11.971Z [DEBUG] provider.terraform-provider-digitalocean_v2.38.1: POST https://api.digitalocean.com/v2/droplets
2024-06-18T10:42:12.214Z [DEBUG] provider.terraform-provider-digitalocean_v2.38.1: Response code: 401
2024-06-18T10:42:12.215Z [ERROR] provider.terraform-provider-digitalocean_v2.38.1: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 401 Unable to authenticate you
Error: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 401 Unable to authenticate you
Breaking down what each line tells you: the [INFO] line at startup
confirms which Terraform binary is running, which rules out version
mismatch as a cause before you spend time on it. The first [DEBUG] line
shows the exact HTTP method and endpoint the provider called, so you know
whether Terraform reached the API at all. The second [DEBUG] line gives
you the raw response code before Terraform formats it into a human-readable
error, which is useful when the formatted message is ambiguous. The [ERROR]
line is the one to act on: it contains the provider’s exact failure reason,
which in this case is a 401, meaning the request reached DigitalOcean but
authentication failed.
When you see a 401, the fix is to verify your token is exported and has
the correct scopes:
echo $DIGITALOCEAN_TOKEN
If that prints nothing, the variable is not set in the current shell session.
Re-export it and re-run the command. If it prints a value but the 401
persists, the token exists but lacks write permissions for the resource type
you are managing. Generate a new token with full read/write access from the
DigitalOcean Control Panel.
For large log files where scrolling is not practical, narrow the output to the failure window by searching for the first ERROR line and reading the DEBUG lines immediately above it:
grep -n "\[ERROR\]" terraform.log
The -n flag prints line numbers. Assign that number to a variable and
view the surrounding context:
LINE=150 # replace with the line number from grep output
sed -n "$((LINE-10)),$((LINE+2))p" terraform.log
This gives you the ten DEBUG lines that preceded the failure, which is almost always enough context to identify the root cause.
Use a quick scan to pull only warning and error lines from the log file:
grep -E "\[ERROR\]|\[WARN\]" terraform.log
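The earlier grep-and-sed steps can also be wrapped in a small helper so you can jump straight to the failure context. A sketch that prints the ten lines before and two after the first [ERROR] entry:

```shell
# Print context around the first [ERROR] line of a Terraform log file.
show_first_error() {
  log="$1"
  # Line number of the first ERROR entry, if any.
  line=$(grep -n '\[ERROR\]' "$log" | head -n 1 | cut -d: -f1)
  if [ -z "$line" ]; then
    echo "no ERROR lines found in $log" >&2
    return 1
  fi
  # Clamp the window start so it never goes below line 1.
  start=$(( line > 10 ? line - 10 : 1 ))
  sed -n "${start},$(( line + 2 ))p" "$log"
}

# Usage: show_first_error terraform.log
```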
Treat a failed apply as a structured signal, then narrow the failing edge before retrying.
Start by extracting the four key fields from the failure block: resource address, error type, provider response, and source file reference.
terraform apply -var "do_token=${DO_PAT}"
Output
digitalocean_droplet.web: Creating...
╷
│ Error: Error creating Droplet: POST https://api.digitalocean.com/v2/droplets: 422 (request "9c2fbb70-xxxx"): You specified an invalid region.
│
│ with digitalocean_droplet.web,
│ on droplets.tf line 1, in resource "digitalocean_droplet" "web":
│ 1: resource "digitalocean_droplet" "web" {
│
╵
Here, digitalocean_droplet.web is the resource address, 422 indicates invalid request data, the provider response says region is invalid, and droplets.tf line 1 points to the exact block to correct.
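If you capture the apply output to a file (for example with tee), those fields can be pulled out with standard grep. The heredoc below stands in for a real capture; apply.log and the sample text are illustrative:

```shell
# Stand-in for: terraform apply ... 2>&1 | tee apply.log
cat > apply.log <<'EOF'
Error: Error creating Droplet: POST https://api.digitalocean.com/v2/droplets: 422 (request "9c2fbb70-xxxx"): You specified an invalid region.
  with digitalocean_droplet.web,
  on droplets.tf line 1, in resource "digitalocean_droplet" "web":
EOF

# Resource address
grep -oE 'digitalocean_[a-z_]+\.[a-z0-9_]+' apply.log | head -n 1
# HTTP status code returned by the API
grep -oE ': [0-9]{3} ' apply.log | tr -d ': '
# Source file reference
grep -oE 'on [a-z_.]+ line [0-9]+' apply.log
```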
Run terraform plan before re-attempting apply so you confirm what Terraform intends to change and whether hidden drift or conflicts exist.
terraform plan -var "do_token=${DO_PAT}"
A useful plan output pattern is an unexpected replacement or conflict on a resource you did not intend to change:
Output
Terraform will perform the following actions:
# digitalocean_droplet.web must be replaced
-/+ resource "digitalocean_droplet" "web" {
~ region = "nyc3" -> "sfo3" # forces replacement
id = "412345678"
name = "web-1"
# ... truncated ...
}
Plan: 1 to add, 0 to change, 1 to destroy.
If replacement is not expected, you need to determine whether the change originated in your configuration or in the state file. Check these three sources in order:
First, check whether a variable value changed. If the region is being read
from a variable with a default, verify the default has not been modified and
that no -var flag or terraform.tfvars file is overriding it to a
different value.
Second, check whether the resource was modified outside of Terraform. Open
the DigitalOcean Control Panel and compare the resource’s current attributes
against what your configuration declares. If they differ, drift is the cause.
Run terraform apply -refresh-only to update state before re-running the
plan.
Third, check whether the provider version changed. Some provider upgrades rename or restructure resource arguments, which can cause Terraform to see a diff where none exists in intent. If you recently updated your lock file, see Step 7 for the full provider upgrade verification workflow.
Use -target only for recovery when one known-good resource must proceed while you defer a broken dependency for separate repair.
terraform apply -target=digitalocean_droplet.web -var "do_token=${DO_PAT}"
Warning: The -target flag bypasses Terraform’s dependency graph. Use it only for targeted recovery. Never use it as a routine deployment pattern, as it can cause state drift and incomplete resource graphs.
Break cycle errors by removing circular references so Terraform can compute a linear execution order.
An error such as Error: Cycle: resource_a, resource_b means Terraform detected a circular dependency it cannot resolve. Two or more resources reference each other directly, or indirectly through outputs and locals, so graph evaluation cannot determine which resource should be created first.
Generate a dependency graph and inspect edges to locate mutual references:
terraform graph | dot -Tsvg > graph.svg
The dot command is part of Graphviz. Install it on Ubuntu with:
sudo apt install graphviz
In the rendered graph.svg, look for nodes linked in both directions,
which indicates the cycle path Terraform reported. If Graphviz is
unavailable, run terraform graph and read the raw DOT output directly:
terraform graph
In the DOT output, a cycle appears as two resources each listing the other as a dependency on separate lines:
Output
digraph {
compound = "true"
"[root] digitalocean_droplet.web_a (expand)" ->
"[root] digitalocean_droplet.web_b (expand)"
"[root] digitalocean_droplet.web_b (expand)" ->
"[root] digitalocean_droplet.web_a (expand)"
}
When you see two resources pointing at each other like this, that pair
is the cycle. The resource names in the DOT output match the addresses
Terraform reports in the Error: Cycle message.
The fix depends on what the resources were actually sharing. If both
resources needed the same static configuration string, move that string
into a locals block so each resource reads from a single source of truth
rather than from each other. In this example, both Droplets were passing
the same cloud-init script name via user_data. The circular reference
existed because each was reading it from the other instead of from a
shared local.
If the dependency is on a runtime attribute, such as an IP address that
only exists after a resource is created, you cannot resolve the cycle with
locals. In that case, the architecture itself needs to change: only one
resource can reference the other, and the reference must flow in one
direction. The resource being referenced must be created first, which means
removing the reverse reference entirely, not moving it.
# Before: circular dependency
resource "digitalocean_droplet" "web_a" {
image = "ubuntu-22-04-x64"
name = "web-a"
region = "nyc3"
size = "s-1vcpu-1gb"
user_data = digitalocean_droplet.web_b.ipv4_address
}
resource "digitalocean_droplet" "web_b" {
image = "ubuntu-22-04-x64"
name = "web-b"
region = "nyc3"
size = "s-1vcpu-1gb"
user_data = digitalocean_droplet.web_a.ipv4_address
}
# After: shared value moved to locals, no circular reference
locals {
shared_config = "cloud-init-bootstrap"
}
resource "digitalocean_droplet" "web_a" {
image = "ubuntu-22-04-x64"
name = "web-a"
region = "nyc3"
size = "s-1vcpu-1gb"
user_data = local.shared_config
}
resource "digitalocean_droplet" "web_b" {
image = "ubuntu-22-04-x64"
name = "web-b"
region = "nyc3"
size = "s-1vcpu-1gb"
user_data = local.shared_config
}
Note: The locals pattern works for shared static configuration values.
If one resource genuinely needs a runtime attribute from another resource,
such as an IP address assigned after creation, the dependency must be
one-directional. Restructure so only one resource references the other, or
provision the shared value through a separate data source or output. Mutual
runtime references cannot be resolved by Terraform regardless of how they
are expressed.
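Applied to the Droplet example, the one-directional form of that runtime dependency looks like the sketch below: only web_b reads web_a's address, so Terraform creates web_a first and the cycle disappears. (Passing a raw IP through user_data is purely illustrative of the reference direction, not a realistic cloud-init payload.)

```hcl
resource "digitalocean_droplet" "web_a" {
  image  = "ubuntu-22-04-x64"
  name   = "web-a"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
}

resource "digitalocean_droplet" "web_b" {
  image  = "ubuntu-22-04-x64"
  name   = "web-b"
  region = "nyc3"
  size   = "s-1vcpu-1gb"

  # One-directional reference: web_b depends on web_a, never the reverse.
  user_data = digitalocean_droplet.web_a.ipv4_address
}
```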
State drift happens when your real infrastructure and Terraform’s state file describe different things. On DigitalOcean, drift typically results from manual changes made through the Control Panel, resources resized or tagged outside of Terraform, and Droplets rebuilt or replaced by automated processes Terraform did not initiate.
The first signal is usually a terraform plan output that proposes changes
you did not make in configuration. If the plan shows a diff on an attribute
you have not touched, and you have not changed the corresponding variable or
default, drift is the likely cause. Run terraform apply -refresh-only
before investigating further so Terraform re-reads the current state of your
infrastructure and updates its state file to match. Only after that refresh
should you decide whether to accept the drift, correct it in the Control
Panel, or let Terraform reconcile it on the next apply.
Use terraform state list to print all resource addresses currently tracked in state, especially before any state rm or import operation.
terraform state list
Example output:
Output
digitalocean_droplet.web
digitalocean_loadbalancer.public
digitalocean_domain.example
Confirming exact addresses first prevents accidental removal or import of the wrong object.
If a resource you expect to see is missing from the list, it means Terraform
has no record of managing it. This happens when a resource was created
manually in the Control Panel, when it was removed from state with
terraform state rm, or when a previous terraform apply failed partway
through and the resource creation was never recorded. In that case, import
the resource so Terraform can manage it going forward:
terraform import digitalocean_droplet.web <droplet-id>
Replace <droplet-id> with the numeric ID from the DigitalOcean Control
Panel or from the API. After importing, run terraform plan to confirm
Terraform sees no diff between the imported state and your configuration.
If the plan shows changes, update your configuration to match the actual
resource attributes before applying.
As of Terraform 0.15, the standalone terraform refresh command is deprecated in favor of refresh-only apply:
terraform apply -refresh-only -var "do_token=${DO_PAT}"
Expected prompt and output:
Output
Terraform will perform the following actions:
  ~ update in-place: digitalocean_droplet.web

Terraform will write these changes to the state without modifying any real infrastructure.

Do you want to perform these actions?
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Remove a resource from state when the state entry is broken or you intend to re-import the object under controlled conditions.
terraform state rm digitalocean_droplet.web
Warning: Removing a resource from state does not destroy it in your cloud account. The resource will continue to run and accrue cost. Use this command only when you intend to re-import the resource or manage it outside of Terraform.
Use lifecycle controls to reduce avoidable failures while keeping drift visible and manageable.
Set ignore_changes when an attribute on a resource is modified by an external process and you do not want Terraform to revert it on every apply. On DigitalOcean, tags are a frequent example. If your team applies tags to Droplets through the Control Panel or through a tagging automation outside of Terraform, every subsequent terraform plan will show those tags as unexpected drift and propose removing them. Adding tags to ignore_changes tells Terraform to stop tracking that attribute without affecting how it manages everything else on the resource.
resource "digitalocean_droplet" "web" {
image = "ubuntu-22-04-x64"
name = "web-1"
region = "nyc3"
size = "s-1vcpu-1gb"
lifecycle {
ignore_changes = [tags]
}
}
ignore_changes accepts specific attributes or all. Avoid all in production because it hides all drift, including changes you must detect and review.
Enable create_before_destroy for resources that cannot tolerate a gap
between deletion and replacement. By default, Terraform destroys the
existing resource before creating the replacement, which means the old
resource is gone before the new one is available. For a load balancer
referencing a TLS certificate, that window results in requests being
served with no valid certificate attached.
resource "digitalocean_certificate" "tls" {
name = "example-cert"
type = "lets_encrypt"
domains = ["example.com"]
lifecycle {
create_before_destroy = true
}
}
A common DigitalOcean case is replacing a digitalocean_certificate used by a load balancer. Creating the replacement certificate first avoids a window where no valid certificate is attached.
Terraform does not provide a native flag to ignore all errors on apply.
When people search for “terraform ignore errors on apply,” they usually need either selective drift suppression with ignore_changes or partial recovery with -target. Another useful debugging control is reduced parallelism, which applies resources sequentially to expose the first failing resource deterministically:
terraform apply -parallelism=1 -var "do_token=${DO_PAT}"
Lower parallelism slows execution, but it makes failure order easier to trace in large configurations with many concurrent operations.
Pin provider and CLI versions explicitly so initialization remains reproducible across environments.
Define constraints in required_providers and required_version to avoid unplanned upgrades and mismatch errors.
terraform {
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 2.0"
}
}
required_version = ">= 1.5"
}
Terraform supports multiple operators for version constraints:
- ~> allows patch or minor updates within the specified boundary. Example: ~> 2.0 accepts 2.0, 2.1, and 2.38, but not 3.0.
- > and < enforce strict greater-than or less-than boundaries. Example: > 2.30, < 3.0.
- >= and <= include the specified boundary version. Example: >= 1.5, <= 1.8.5.
- != excludes a known bad release. Example: >= 2.0, != 2.34.0, < 3.0.

You can combine groups with commas, and Terraform requires every group to match before selecting a version.
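Combined, a constraint that stays on the 2.x line while skipping a bad release looks like this sketch (2.34.0 is an illustrative placeholder, not a real advisory):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      # All three groups must match: at least 2.0, never 2.34.0,
      # and below 3.0.
      version = ">= 2.0, != 2.34.0, < 3.0"
    }
  }
}
```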
Note: Omitting version constraints allows Terraform to install newer provider releases that may introduce breaking changes. Pin constraints in production and review upgrades intentionally.
Run terraform init -upgrade when .terraform.lock.hcl pins a provider version that no longer satisfies updated constraints in configuration.
terraform init -upgrade
Typical output includes provider re-resolution and lock file updates:
Output
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.38.1...
- Installed digitalocean/digitalocean v2.38.1 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file.
Commit the updated .terraform.lock.hcl after successful initialization so every team member and CI job uses the same resolved provider versions.
Before committing, verify the upgraded provider did not introduce breaking changes by running a plan against a non-production workspace or a representative configuration:
terraform plan -var "do_token=${DO_PAT}"
Review the plan output for unexpected resource replacements or attribute changes that were not present before the upgrade. Provider changelogs are published at the DigitalOcean provider releases page. Check the changelog for the version you upgraded to before applying in production.
| Error Message | Root Cause | Fix Command or Configuration |
|---|---|---|
| Error: Cycle | Circular resource dependency | Remove circular references, move shared values to locals, and re-run terraform graph to confirm acyclic dependencies |
| Error: Invalid provider configuration | Missing or malformed provider block, or missing credentials | Verify provider and required_providers blocks, then export a valid token: export DIGITALOCEAN_TOKEN="${DO_PAT}" |
| Error acquiring the state lock | Concurrent apply or stale lock from interrupted run | Confirm no active Terraform process, then run terraform force-unlock LOCK_ID |
| Error: Resource already exists | State drift or manual resource creation outside Terraform | Import the existing resource with terraform import, or remove the stale entry with terraform state rm and re-manage intentionally |
| 401 Unable to authenticate | Invalid, missing, or expired API token | Re-export DIGITALOCEAN_TOKEN with the correct value and scopes, then retry terraform plan |
| Error: Unsupported argument | Provider version mismatch or renamed/deprecated attribute | Run terraform init -upgrade, then adjust configuration to arguments compatible with the provider changelog |
Q: What is the first thing to check when terraform apply fails?
Run terraform validate to confirm the configuration has no syntax errors, then re-run terraform plan to isolate the exact resource and error before re-attempting apply. Checking the plan output first prevents re-triggering the same failure and wasting API quota. This sequence also narrows investigation to configuration versus provider/API issues.
Q: How do I enable debug logging in Terraform?
Set TF_LOG=DEBUG before running any Terraform command. Use TF_LOG_PATH to write logs to a file instead of stdout for large configurations where terminal output is difficult to scan. Unset both variables when you are done to avoid performance overhead on subsequent runs.
Q: What causes a Terraform cycle error and how do I fix it?
A cycle error occurs when two or more resources reference each other, creating a circular dependency that Terraform cannot resolve into an ordered execution plan. Run terraform graph to visualize the dependency tree, then remove the circular reference by restructuring resource arguments or extracting the shared value into a local. After refactoring, run terraform plan again to verify Terraform can build a valid execution graph.
Q: Can Terraform ignore errors on apply?
Terraform does not support a flag to ignore all errors during apply. Use lifecycle { ignore_changes = [...] } to suppress drift on specific attributes, or use -target to apply only known-good resources while deferring the problematic one. For sequential failure isolation, set -parallelism=1.
Q: How do I fix a Terraform state lock error?
If a previous apply was interrupted and the lock was not released, run terraform force-unlock LOCK_ID, substituting LOCK_ID with the ID printed in the error message. Confirm no other apply process is running before forcing the unlock, as releasing an active lock can corrupt the state file. In team environments, also check CI pipelines to ensure no remote run is still active.
Q: How do I troubleshoot a provider version conflict in Terraform?
Check the required_providers block for version constraints and verify they are compatible with your Terraform version. Run terraform init -upgrade to re-resolve and update the lock file. If constraints are too restrictive, widen the version range and re-run init.
Q: What does terraform state list do and when should I use it?
terraform state list prints all resource addresses tracked in the current state file. Use it before running terraform state rm, terraform import, or terraform refresh to confirm the exact resource address and avoid operating on the wrong resource. It is also useful in incident response when you need a quick inventory of what Terraform currently believes it manages.
Q: How do I debug Terraform on DigitalOcean specifically?
Set TF_LOG=DEBUG and TF_LOG_PATH=./terraform.log before running any Terraform command against DigitalOcean resources. In the log file, search for the HTTP status code returned by the DigitalOcean API: 401 means the token is missing or invalid, 422 means the request payload was rejected (commonly a wrong region slug or invalid size), and 429 means you have hit the API rate limit and should reduce -parallelism or add a retry delay. For full walkthrough steps, see Step 2 of this tutorial.
You can now troubleshoot Terraform with a repeatable process: validate configuration before apply, enable and read debug logs, diagnose apply and cycle errors, inspect and repair state safely, apply lifecycle resilience patterns, and resolve provider version conflicts with controlled upgrades. Following this sequence cuts repeated failures and improves reliability in day-to-day infrastructure changes.
What to read next: How To Use Terraform with DigitalOcean, How To Manage Infrastructure with Terraform (series), How To Structure a Terraform Project, How To Import Existing DigitalOcean Assets into Terraform, and How To Use Terraform with DigitalOcean Spaces.