I’m managing my k8s deployment using Terraform. I’m trying to change my cluster settings (decreasing the number of nodes). However, when I do this, I get an authentication error.
terraform {
  # Backend configuration
  cloud {
    organization = "Clipido"
  }

  # Plugins configuration
  required_providers {
    kubectl = {
      source  = "alekc/kubectl"
      version = ">= 2.0.2"
    }
    infisical = {
      source = "infisical/infisical"
    }
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}
data "digitalocean_kubernetes_cluster" "clipido" {
name = digitalocean_kubernetes_cluster.clipido.name
depends_on = [digitalocean_kubernetes_cluster.clipido]
}
provider "helm" {
# Authentication to get into the cluster
kubernetes {
host = data.digitalocean_kubernetes_cluster.clipido.endpoint
cluster_ca_certificate = base64decode(data.digitalocean_kubernetes_cluster.clipido.kube_config.0.cluster_ca_certificate)
token = data.digitalocean_kubernetes_cluster.clipido.kube_config.0.token
}
}
provider "kubernetes" {
# Authentication to get into the cluster
host = data.digitalocean_kubernetes_cluster.clipido.endpoint
cluster_ca_certificate = base64decode(data.digitalocean_kubernetes_cluster.clipido.kube_config.0.cluster_ca_certificate)
token = data.digitalocean_kubernetes_cluster.clipido.kube_config.0.token
}
resource "digitalocean_kubernetes_cluster" "clipido" {
name = "clipido-${var.infisical_env}"
region = "ams3" # `doctl kubernetes options regions`
version = "1.27.4-do.0" # `doctl kubernetes options versions`
auto_upgrade = true
maintenance_policy {
start_time = "04:00"
day = "sunday"
}
node_pool {
name = "web"
size = "s-1vcpu-2gb" # `doctl kubernetes options sizes`
auto_scale = true
min_nodes = 1
max_nodes = 2
tags = ["frontend", "web"]
labels = {
service = "frontend"
priority = "high"
}
}
}
resource "digitalocean_project_resources" "assign_kubernetes_to_project" {
project = digitalocean_project.project.id
resources = [
digitalocean_kubernetes_cluster.clipido.urn,
]
}
resource "helm_release" "nginx_ingress-helm" {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "terraform-clipido-${var.project_env}"
version = "4.5.2"
timeout = 300
set {
name = "controller.daemonset.useHostPort"
value = "false"
}
set {
name = "controller.ingressClassResource.default"
value = "true"
}
set {
name = "controller.ingressClassResource.name"
value = "nginx"
}
set {
name = "controller.kind"
value = "DaemonSet"
}
set {
name = "controller.publishService.enabled"
value = "true"
}
set {
name = "controller.resources.requests.memory"
value = "140Mi"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
}
Terraform planned the following actions, but then encountered a problem:
# data.digitalocean_kubernetes_cluster.clipido will be read during apply
# (depends on a resource or a module with changes pending)
<= data "digitalocean_kubernetes_cluster" "clipido" {
+ auto_upgrade = (known after apply)
+ cluster_subnet = (known after apply)
+ created_at = (known after apply)
+ endpoint = (known after apply)
+ ha = (known after apply)
+ id = (known after apply)
+ ipv4_address = (known after apply)
+ kube_config = (sensitive value)
+ maintenance_policy = (known after apply)
+ name = "clipido-staging"
+ node_pool = (known after apply)
+ region = (known after apply)
+ service_subnet = (known after apply)
+ status = (known after apply)
+ surge_upgrade = (known after apply)
+ updated_at = (known after apply)
+ urn = (known after apply)
+ version = (known after apply)
+ vpc_uuid = (known after apply)
}
# digitalocean_kubernetes_cluster.clipido will be updated in-place
~ resource "digitalocean_kubernetes_cluster" "clipido" {
id = "60d49a31-xxxx-xxxx-xxxx-xxxxxxxxxx"
name = "clipido-staging"
tags = []
# (16 unchanged attributes hidden)
~ node_pool {
id = "60d49a31-xxxx-xxxx-xxxx-xxxxxxxxxx"
~ max_nodes = 3 -> 2
name = "web"
tags = [
"frontend",
"web",
]
# (7 unchanged attributes hidden)
}
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Changes to Outputs:
~ endpoint = "https://60d49a31-xxxx-xxxx-xxxx-xxxxxxxxxx.k8s.ondigitalocean.com" -> (known after apply)
╷
│ Error: Get "http://localhost/api/v1/namespaces/terraform-clipido-staging": dial tcp [::1]:80: connect: connection refused
│
│ with kubernetes_namespace.default,
│ on do-network.tf line 21, in resource "kubernetes_namespace" "default":
│ 21: resource "kubernetes_namespace" "default" {
│
╵
╷
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with helm_release.nginx_ingress-helm,
│ on do-network.tf line 27, in resource "helm_release" "nginx_ingress-helm":
│ 27: resource "helm_release" "nginx_ingress-helm" {
│
╵
Operation failed: failed running terraform plan (exit 1)
Hi there,
Based on the output you’ve shared, the main issues seem to be that Terraform can’t reach the Kubernetes cluster (`dial tcp [::1]:80: connect: connection refused` and `Kubernetes cluster unreachable`) and that the Kubernetes and Helm providers end up with no configuration, even though you’ve provided one in your Terraform script.

Here’s what might be happening and some steps to address the issues:
1. Data Source Issue
The way you’ve defined your `data.digitalocean_kubernetes_cluster.clipido` data source can lead to a race condition. The `depends_on` statement makes the data source wait for the `digitalocean_kubernetes_cluster.clipido` resource, but since the data source already references that resource directly, Terraform might not have the information it needs when planning the changes. When the real endpoint can’t be fetched, the providers fall back to “localhost”, which is what leads to the errors.

Suggestion:

- Remove the `depends_on` statement from the data source, since the dependency is already inferred by Terraform from the direct reference (see the sketch below).
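As a minimal sketch, using only the names from your existing configuration, the data source would become:

```
# The direct reference to the resource already tells Terraform to read this
# data source only after the cluster exists, so no explicit depends_on is needed.
data "digitalocean_kubernetes_cluster" "clipido" {
  name = digitalocean_kubernetes_cluster.clipido.name
}
```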
2. Adjust the Terraform Workflow
I think that a working solution here would be to break the Terraform workflow into two steps:
1. Apply the changes to the `digitalocean_kubernetes_cluster.clipido` resource separately, and then
2. Apply the Kubernetes and Helm resources.

This is because changing the cluster might cause a temporary unavailability, making the Kubernetes and Helm resources inaccessible.
Suggestion:

- Comment out the Kubernetes and Helm resources and apply the `digitalocean_kubernetes_cluster.clipido` resource first. Once the changes are applied and the cluster is stable, uncomment the Kubernetes and Helm resources and apply again. (A targeted apply, shown below, achieves the same two-step flow without commenting anything out.)
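If you’d rather not comment resources out, Terraform’s `-target` flag can be used for the same two-step flow. This is a sketch and assumes your Terraform Cloud workspace allows targeted runs; Terraform will also warn that resource targeting is intended for exceptional situations:

```
# Step 1: apply only the node pool change on the cluster itself.
terraform apply -target=digitalocean_kubernetes_cluster.clipido

# Step 2: once the cluster is stable again, apply the remaining resources.
terraform apply
```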
Let me know how it goes!
Best,
Bobby