By peiops
When using a DOKS cluster with a Service of type `LoadBalancer`, the CCM automatically
manages a `k8s-public-access-*` firewall and adds inbound rules for the service ports
(e.g. 80, 443) with the source hardcoded to `0.0.0.0/0` / `::/0`.
There is no way to restrict these rules to private or specific CIDRs, and any manual changes to the firewall are reverted by the controller on the next reconcile.
Allow specifying the allowed source CIDRs for the auto-managed firewall inbound rules,
for example via a Service annotation or a CCM environment variable, similar to how
`spec.loadBalancerSourceRanges` works for the Load Balancer itself.
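For illustration, such an annotation might look like the following. Note that this annotation does not exist today; the name and value format are purely hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress
  annotations:
    # HYPOTHETICAL annotation — not implemented by the CCM; shown only
    # to sketch what the requested feature could look like.
    service.beta.kubernetes.io/do-firewall-source-ranges: "103.21.244.0/22,103.22.200.0/22"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
```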
Clusters behind Cloudflare (or any CDN/proxy) should only accept traffic from
the CDN IP ranges, not from the entire internet. Right now this is only possible
at the Load Balancer level (`spec.loadBalancerSourceRanges`), but the node-level
firewall stays wide open regardless.
Import the `k8s-public-access-*` firewall into Terraform and override its rules on
every apply, which creates a race condition with the controller and is not
a reliable solution.
Hi there,
Not 100% sure, but this sounds like a gap in the DOKS CCM implementation. You are right that manual firewall changes get stomped on reconcile, and there is currently no annotation to control the source CIDRs on the `k8s-public-access-*` firewall rules.
The `spec.loadBalancerSourceRanges` field does restrict traffic at the Load Balancer level, but the node firewall stays open to `0.0.0.0/0` regardless, which defeats the purpose for Cloudflare-only setups.
One workaround is to restrict traffic at the Load Balancer level using `spec.loadBalancerSourceRanges` with Cloudflare's published IP ranges, and accept that the node firewall is a separate layer you cannot control through the CCM yet. Something like:
```yaml
spec:
  loadBalancerSourceRanges:
    - 103.21.244.0/22
    - 103.22.200.0/22
    - 103.31.4.0/22
    # ... rest of Cloudflare ranges
```
Cloudflare publishes their IP ranges at https://www.cloudflare.com/ips/ and you can automate keeping them in sync.
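A minimal sketch of that automation in Python (the service name `my-ingress` and the module name are assumptions; the URLs are Cloudflare's published IPv4/IPv6 lists):

```python
import json
import urllib.request

# Cloudflare publishes its current ranges as plain text, one CIDR per line.
CLOUDFLARE_IPS_URLS = (
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
)

def fetch_cloudflare_cidrs(urls=CLOUDFLARE_IPS_URLS):
    """Download Cloudflare's current IP ranges as a flat list of CIDRs."""
    cidrs = []
    for url in urls:
        with urllib.request.urlopen(url) as resp:
            cidrs.extend(
                line.strip()
                for line in resp.read().decode().splitlines()
                if line.strip()
            )
    return cidrs

def source_ranges_patch(cidrs):
    """Build a strategic-merge patch that replaces
    spec.loadBalancerSourceRanges with the given CIDR list."""
    return json.dumps({"spec": {"loadBalancerSourceRanges": sorted(cidrs)}})

# Usage, e.g. from a cron job (hypothetical script/service names):
#   kubectl patch svc my-ingress \
#     -p "$(python -c 'import sync_cf; print(sync_cf.source_ranges_patch(sync_cf.fetch_cloudflare_cidrs()))')"
```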
For the node firewall specifically, the NodePort range is what actually needs locking down. If your nodes sit in a VPC and the Load Balancer is the only public entry point, the exposure is partially mitigated, since in most setups the NodePorts are reachable from the internet only via the LB. It is worth verifying your VPC and firewall rules to make sure direct node access from the internet is not possible.
This is a legitimate feature request. I would open a GitHub issue on the `digitalocean/digitalocean-cloud-controller-manager` repo directly, referencing the annotation approach, since that is where the CCM team tracks this kind of work.
Heya, @3d308a1eb60c443faf8ae8b13cbd1a
Worth opening this as a feature request at ideas.digitalocean.com with exactly this write-up - the annotation-based approach you suggested (similar to `loadBalancerSourceRanges`) is a clean solution and not a big conceptual leap from what's already there. If there's an existing GitHub issue on the CCM repo it's also worth commenting there, since that's closer to where the actual change would land.
In the meantime, the least-bad workaround I've seen for the Cloudflare scenario is putting the CIDR restriction on the LB itself via `loadBalancerSourceRanges` with Cloudflare's published IP ranges, and accepting that the node firewall stays open - in most setups the traffic still can't reach your pods without going through the LB first. Not perfect from a defense-in-depth perspective, but it's at least not a race condition.
Have you tried the CCM GitHub issues to see if this is already tracked there?
Regards