By petercorrea
I am running a small self-managed K8s cluster (with Calico and Traefik gateway) on 3 droplets, with a load balancer sending traffic to it. Everything works as expected, but only when there is no firewall. Once I apply the firewall, the health checks to the nodes begin to fail intermittently, failing the majority of the time. Any ideas? Thank you.
resource "digitalocean_loadbalancer" "ingress_lb" {
  count    = 1
  name     = "public-lb"
  region   = var.region
  vpc_uuid = digitalocean_vpc.ok_vpc.id

  forwarding_rule {
    entry_protocol  = "http"
    entry_port      = 80
    target_protocol = "http"
    target_port     = 30080
  }

  forwarding_rule {
    entry_protocol  = "https"
    entry_port      = 443
    target_protocol = "https"
    target_port     = 30443
    tls_passthrough = true
  }

  forwarding_rule {
    entry_protocol  = "http"
    entry_port      = 8081
    target_protocol = "http"
    target_port     = 30081
  }

  healthcheck {
    protocol                 = "http"
    port                     = 30081
    path                     = "/ping"
    check_interval_seconds   = 10
    response_timeout_seconds = 60
    unhealthy_threshold      = 10
    healthy_threshold        = 2
  }

  droplet_ids = var.worker_ids
}
locals {
  k8s_api_lb_ip = length(digitalocean_loadbalancer.k8s_api_lb) > 0 ? [digitalocean_loadbalancer.k8s_api_lb[0].ip] : []
}
# Firewall rules for HA Kubernetes cluster
resource "digitalocean_firewall" "k8s_firewall" {
  name = "k8s-firewall-${terraform.workspace}"

  droplet_ids = concat(
    digitalocean_droplet.k8s_master[*].id,
    digitalocean_droplet.k8s_worker[*].id,
  )

  # Allow Kubernetes API access (Control-Plane)
  inbound_rule {
    protocol   = "tcp"
    port_range = "6443"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
      # VPN clients, subnet obtained from /etc/wireguard/wg0.conf
      ["10.0.0.1/24"]
    )
  }

  # Allow etcd communication between master nodes (Control-Plane)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "2379-2380"
    source_addresses = digitalocean_droplet.k8s_master[*].ipv4_address_private
  }

  # Kube-scheduler (Control-Plane)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "10259"
    source_addresses = digitalocean_droplet.k8s_master[*].ipv4_address_private
  }

  # Kube-controller-manager (Control-Plane)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "10257"
    source_addresses = digitalocean_droplet.k8s_master[*].ipv4_address_private
  }

  # Allow Kubelet API (Control-Plane & Worker)
  inbound_rule {
    protocol   = "tcp"
    port_range = "10250"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  # Allow kube-proxy (Worker)
  inbound_rule {
    protocol   = "tcp"
    port_range = "10256"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  # Calico BGP
  inbound_rule {
    protocol   = "tcp"
    port_range = "179"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  # Allow VXLAN traffic for Calico
  inbound_rule {
    protocol   = "udp"
    port_range = "4789"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  # Allow pod-to-pod communication
  inbound_rule {
    protocol   = "tcp"
    port_range = "1-65535"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  inbound_rule {
    protocol   = "udp"
    port_range = "1-65535"
    source_addresses = concat(
      digitalocean_droplet.k8s_master[*].ipv4_address_private,
      digitalocean_droplet.k8s_worker[*].ipv4_address_private,
    )
  }

  # Allow SSH
  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0"]
  }

  # Allow LB
  inbound_rule {
    protocol                  = "tcp"
    port_range                = "80"
    source_load_balancer_uids = [var.ingress_lb_id]
    # source_addresses = ["0.0.0.0/0"]
  }

  inbound_rule {
    protocol                  = "tcp"
    port_range                = "443"
    source_load_balancer_uids = [var.ingress_lb_id]
    # source_addresses = ["0.0.0.0/0"]
  }

  inbound_rule {
    protocol                  = "tcp"
    port_range                = "30000-32767"
    source_load_balancer_uids = [var.ingress_lb_id]
    # source_addresses = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0"]
  }

  outbound_rule {
    protocol              = "udp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0"]
  }
}
Hi there,
Hmm, based on your config, port 30081 is already included in the 30000–32767 range you’re allowing from the load balancer, so in theory that should cover the health check traffic.
That said, I’ve seen cases where things only started working after adding an explicit rule for the exact port:
inbound_rule {
  protocol                  = "tcp"
  port_range                = "30081"
  source_load_balancer_uids = [var.ingress_lb_id]
}
Not totally sure why; maybe the health checks originate from a different source internally, or something quirky happens with how DigitalOcean routes the traffic. Also worth confirming that var.ingress_lb_id holds the load balancer’s actual UID (the UUID string), not just the numeric ID.
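One way to rule out a wrong or stale variable entirely is to reference the load balancer resource directly instead of threading its ID through a variable. This is just a sketch, and it assumes the firewall is defined in the same Terraform module as the digitalocean_loadbalancer.ingress_lb resource from your config:

```hcl
# Pull the UID straight from the LB resource so it can't drift from the variable.
# The provider's `id` attribute on a load balancer is its UUID.
inbound_rule {
  protocol                  = "tcp"
  port_range                = "30081"
  source_load_balancer_uids = [digitalocean_loadbalancer.ingress_lb[0].id]
}
```

If the firewall lives in a different module, you could instead pass digitalocean_loadbalancer.ingress_lb[0].id as the value of var.ingress_lb_id and verify it with terraform console.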
If you’re still stuck, you can always reach out to DigitalOcean support: https://do.co/support
They should be able to confirm whether something unexpected is happening on the backend. Let us know how it goes!