Advanced Network Policies on DOKS with Cilium

Published on February 25, 2026
By Vinayak Baranwal, Technical Writer II

Introduction

Kubernetes NetworkPolicies are API resources that define allowed ingress and egress traffic for pods. By default, Kubernetes networking is permissive: any pod can reach any other pod and many external endpoints. NetworkPolicies give you the specification for segmentation, but enforcement depends entirely on the CNI. Cilium, using eBPF in the kernel, enforces those policies at line rate and adds something vanilla implementations lack: observability. Without flow-level visibility, misconfigurations go unnoticed and security audits lack the data needed to verify policy.

Cilium plus eBPF changes both enforcement and visibility. Policies are applied in the kernel datapath, not in userspace iptables chains, so you get high-performance filtering and the option of layer 7 rules (e.g., HTTP method and path). Hubble surfaces flow-level data so policy misconfigurations are caught before they affect production. On DigitalOcean Kubernetes (DOKS), you can run Cilium as the CNI and use Hubble to verify and debug policy rollouts.

This tutorial shows how to build enforceable network segmentation with Kubernetes traffic observability on DOKS: install Cilium, deploy a multi-service demo, apply basic and advanced policies, and use Hubble to verify behavior and troubleshoot. You will implement pod-to-pod communication controls and a zero trust networking pattern suitable for multi-tenant or compliance-heavy environments.

Key Takeaways

  • Kubernetes traffic is allow-all by default, and NetworkPolicies have no effect unless the CNI enforces them; without a policy-aware CNI, all traffic remains permitted.
  • Cilium uses eBPF for high-performance packet filtering in the kernel, reducing overhead and enabling L3/L4 and L7 policy.
  • Hubble exports flow logs and metrics from Cilium eBPF hooks at namespace and service granularity.
  • Layer 7 policy enforcement (e.g., allow GET /health, deny POST /admin) is possible with CiliumNetworkPolicy and is not available with standard NetworkPolicies.
  • Zero-trust networking is practical on DOKS with explicit ingress and egress rules and namespace-level isolation.
  • Observability prevents silent misconfigurations; Hubble lets you validate policies before and after enforcement and support security audits.

Conceptual Foundation

What Are Kubernetes NetworkPolicies?

NetworkPolicies are namespace-scoped rules that select pods and define ingress (who can send traffic to the selected pods) and egress (where the selected pods can send traffic). They are declarative: you specify allowed peers and ports; the CNI is responsible for enforcing them. If no policy selects a pod, the Kubernetes API specifies allow-all behavior for that pod; with Cilium, such pods accept all traffic until you add a default-deny policy and then build an allow-list.
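The shape of the resource is small. Here is a minimal annotated skeleton; all names and labels (my-namespace, my-app, role=client, shared-services) are placeholders, and the working policies for this tutorial appear in Step 3:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy        # placeholder name
  namespace: my-namespace     # policies are namespace-scoped
spec:
  podSelector:                # which pods in this namespace the policy selects
    matchLabels:
      app: my-app
  policyTypes: [Ingress, Egress]
  ingress:                    # who may send traffic TO the selected pods
    - from:
        - podSelector:
            matchLabels:
              role: client
      ports:
        - protocol: TCP
          port: 8080
  egress:                     # where the selected pods may send traffic
    - to:
        - namespaceSelector:
            matchLabels:
              name: shared-services
```

Omitting a policyType leaves that direction unrestricted; listing it with no rules denies that direction entirely.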

Ingress vs egress: Ingress rules apply to traffic to the selected pods; egress rules apply to traffic from those pods. For network segmentation, you often start with “deny all ingress” and then allow only specific namespaces or pod labels.

Namespace isolation vs pod-level isolation: You can target all pods in a namespace with a single policy or use pod selectors to isolate specific workloads (e.g., only backend pods). Multi-tenant setups commonly use namespace-level isolation with explicit ingress from an API gateway or ingress namespace.
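As a sketch of pod-level isolation (label names assumed to match the demo app deployed later), a policy can select only the backend pods rather than every pod in the namespace, leaving other workloads in the same namespace unaffected:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-backend-pods
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend           # only pods with this label are isolated
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend # namespaces carrying this label may connect
      ports:
        - protocol: TCP
          port: 80
```

Contrast this with podSelector: {}, which selects every pod in the namespace and gives you namespace-level isolation with a single policy.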

How Cilium Differs from Other CNIs

Cilium implements the Kubernetes NetworkPolicy API and adds its own CiliumNetworkPolicy CRD for L7 and other extensions. Unlike many CNIs that rely on iptables or userspace proxies, Cilium runs eBPF programs in the Linux kernel to filter and observe traffic. That yields:

  • eBPF datapath: Packet handling and policy checks in the kernel, with fewer context switches and stable performance at scale.
  • L7 awareness: HTTP/gRPC method, path, and header-based rules (CiliumNetworkPolicy only).
  • Built-in observability: Hubble consumes eBPF-derived flow and metrics data from Cilium agents.

What Is eBPF and Why Is It Used?

eBPF (extended Berkeley Packet Filter) is a kernel technology that runs sandboxed programs on events such as packet arrival or socket operations. Cilium compiles NetworkPolicy and CiliumNetworkPolicy into eBPF programs that run on each node. Packets are classified in the kernel and either permitted or dropped; no packet need reach userspace for a simple L3/L4 policy. That is why eBPF network policies scale and why eBPF networking is used for both enforcement and visibility in Kubernetes.

Why Observability Is Often Missing with Vanilla NetworkPolicies

Standard Kubernetes NetworkPolicy does not define observability. Many CNIs enforce allow/deny but do not expose which flows were dropped or why. Operators roll out “deny all” policies and then debug connectivity failures without a clear view of traffic. Hubble observability fills that gap by exporting flow logs and metrics from Cilium’s eBPF hooks, so you can correlate connections with policy rules.

Feature                     Flannel   Calico    Cilium
eBPF                        No        No        Yes
NetworkPolicy enforcement   No        Yes       Yes
L7 policy                   No        Limited   Yes
Built-in observability      No        No        Hubble

Architecture Overview: DOKS + Cilium

In this setup, all pod traffic passes through the Cilium eBPF datapath running on each node. Hubble collects flow data from the same eBPF hooks Cilium uses for policy enforcement, so enforcement and visibility share a single datapath with no additional overhead.

  • Cilium on DOKS: DOKS does not ship Cilium by default. You install it via Helm; when used as the cluster CNI it replaces the default DOKS CNI. See Cilium installation for your Kubernetes version. Once installed, all pod traffic is subject to eBPF policy.
  • Packet inspection: Packets are inspected at the kernel level; L3/L4 rules are enforced in eBPF; L7 rules (CiliumNetworkPolicy) require HTTP parsing in the datapath.
  • Hubble: Hubble Relay (and optionally UI) aggregates flow data from Cilium agents so you can run hubble observe and filter by namespace, label, or verdict (allowed/dropped).

Multi-service example: A typical layout includes a frontend namespace, a backend namespace, a database namespace, and an observability namespace. Design: frontend can reach backend only on allowed ports; backend can reach database only; observability can scrape or query as needed. Default-deny plus explicit ingress and egress rules implement Kubernetes service isolation and network segmentation.

graph TD
    FE[frontend namespace<br/>app=frontend] -->|port 80 allowed| BE[backend namespace<br/>app=backend]
    BE -->|port 6379 allowed| DB[database namespace<br/>app=database]
    OBS[observability namespace] -.->|scrape metrics| BE
    OBS -.->|scrape metrics| FE
    EXT[external / internet] -. blocked by egress policy .-> BE
    EXT -. blocked by egress policy .-> DB
    style EXT fill:#f66,color:#fff

Default-deny ingress is applied per namespace. Arrows represent explicitly allowed traffic paths. Dotted lines represent blocked paths.

Prerequisites

To follow this tutorial, you will need:

  • A DOKS cluster, ideally a dedicated test cluster (see the warning in Step 1), with kubectl configured to access it.
  • Helm 3 installed on your local machine.
  • Permissions to create namespaces, deployments, and network policies (cluster admin).

Step-by-Step Implementation

Step 1 - Install or Enable Cilium on DOKS

Add the Cilium Helm repo and install with Hubble enabled. Replace the Cilium version with a supported release compatible with your Kubernetes version.

Warning: Installing Cilium as the CNI on a cluster with running workloads disrupts pod networking. All pods lose network connectivity during the CNI transition until Cilium agents are fully Ready on every node. For production clusters, cordon and drain nodes one at a time and validate Cilium readiness per node before proceeding. For new clusters with no workloads, the transition is safe to run in a single pass. If you are testing, provision a dedicated DOKS cluster for this tutorial.

helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.16.2 \
  --namespace kube-system \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Expected output:

"cilium" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete.
NAME: cilium
LAST DEPLOYED: Sat Feb 21 10:00:00 2026
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1

Wait for the rollout:

kubectl -n kube-system rollout status ds/cilium --timeout=300s

Expected output:

daemon set "cilium" successfully rolled out

Install cilium-cli:

The cilium status command requires the cilium-cli binary, which is separate from the Helm-installed Cilium agent. Install it now:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all \
  https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz
sudo tar xzvf cilium-linux-${CLI_ARCH}.tar.gz -C /usr/local/bin

Note: For macOS, replace linux with darwin in the filename (and set CLI_ARCH=arm64 on Apple Silicon).

Verify Cilium and CNI:

kubectl get pods -n kube-system -l k8s-app=cilium
cilium status

Expected output for kubectl get pods:

NAME            READY   STATUS    RESTARTS   AGE
cilium-4xk2p    1/1     Running   0          2m
cilium-9rtzq    1/1     Running   0          2m
cilium-vbn3w    1/1     Running   0          2m

Expected output for cilium status:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/        ClusterMesh:        disabled

KVStore:                Ok
Kubernetes:             Ok   1.29+ (v1.29.0) [linux/amd64]
NodeMonitor:            Listening for events on 4 CPUs
Cilium health daemon:   Ok
IPAM:                   IPv4: 5/254 allocated

Confirm all Cilium pods are Ready and that cilium status reports OK for the agent, operator, and Hubble Relay. If you prefer not to install cilium-cli, query the agent's built-in status from inside a Cilium pod instead:

kubectl exec -n kube-system ds/cilium -- cilium status

This completes the DOKS Cilium setup and Cilium on DigitalOcean Kubernetes baseline.

Step 2 - Deploy Multi-Service Demo Application

Create namespaces, label them so the allow policy in Step 3 can match (namespaceSelector uses labels, not names), deploy pods, and create Services so DNS names resolve. Nginx listens on port 80. Do not apply any NetworkPolicy yet; baseline connectivity is allow-all.

kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database
kubectl label namespace frontend name=frontend
kubectl label namespace backend name=backend
kubectl label namespace database name=database
kubectl create deployment frontend -n frontend --image=nginx
kubectl create deployment backend -n backend --image=kennethreitz/httpbin
kubectl create deployment db -n database --image=redis:alpine
kubectl expose deployment frontend -n frontend --port=80 --target-port=80
kubectl expose deployment backend -n backend --port=80 --target-port=80

Note: kubectl create deployment automatically sets the pod label app=<deployment-name>, so the frontend pods carry app=frontend, the backend pods carry app=backend (matched by the CiliumNetworkPolicy in Step 4), and the db pods carry app=db. Running kubectl label deployment afterward would label only the Deployment object, not the pods, so it would have no effect on policy matching.

Note: The backend uses kennethreitz/httpbin, which provides /get, /post, and /anything routes. This is required for accurate L7 policy verification in Step 4. A plain Nginx image has no /health or /admin routes and would return 404 instead of the expected HTTP codes.

Confirm pods are Ready in each namespace before proceeding:

kubectl get pods -n frontend
kubectl get pods -n backend
kubectl get pods -n database

Expected output (pod names will differ):

NAME                        READY   STATUS    RESTARTS   AGE
frontend-6d4b7f9c8d-xk2p9   1/1     Running   0          30s

NAME                        READY   STATUS    RESTARTS   AGE
backend-7f8b9d6c4d-p2r4x    1/1     Running   0          30s

NAME                     READY   STATUS    RESTARTS   AGE
db-5c6d7e8f9a-m3n4o      1/1     Running   0          30s

Then test baseline connectivity from the frontend namespace:

kubectl run curl-test --rm -it --restart=Never \
  -n frontend \
  --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}" http://backend.backend.svc.cluster.local

Expected output:

200

A 200 response confirms baseline connectivity before any NetworkPolicy is applied. If you see a connection error, verify the backend Service exists:

kubectl get svc -n backend

Expected output:

NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
backend   ClusterIP   10.245.78.123   <none>        80/TCP    1m

Warning: Do not apply a default-deny policy to a production namespace without first running hubble observe to baseline existing traffic patterns. Applying default-deny immediately breaks all ingress, including health checks, service-to-service calls, and monitoring scrapes. Complete Step 5 (Hubble setup) in a staging cluster first, observe all active flows, and build your allow-list before applying deny policies to production workloads.

Step 3 - Apply Basic Kubernetes NetworkPolicy

Apply a default-deny ingress policy in the namespace you want to isolate (e.g., backend):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Save the policy to a file and apply it:

kubectl apply -f default-deny-ingress.yaml

Expected output:

networkpolicy.networking.k8s.io/default-deny-ingress created

Then allow ingress only from the frontend namespace (or a specific label):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
      ports:
        - protocol: TCP
          port: 80

Save the policy to a file and apply it:

kubectl apply -f allow-from-frontend.yaml

Expected output:

networkpolicy.networking.k8s.io/allow-from-frontend created

The allow rule matches namespaces with label name=frontend (you added it in Step 2).

After applying the default-deny policy, verify that ingress is blocked from the default namespace:

kubectl run curl-test --rm -it --restart=Never \
  -n default \
  --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}" --max-time 5 \
  http://backend.backend.svc.cluster.local

Expected output:

000

A 000 response means curl could not complete the connection (the packets were dropped), confirming the default-deny policy is active.

With the allow-from-frontend policy in place, verify that ingress from the frontend namespace is permitted:

kubectl run curl-test --rm -it --restart=Never \
  -n frontend \
  --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}" http://backend.backend.svc.cluster.local

Expected output:

200

A 200 confirms traffic from the frontend namespace reaches the backend. A curl from the default namespace should still return 000.

Step 4 - Apply CiliumNetworkPolicy (Advanced L7)

Use CiliumNetworkPolicy for L7 HTTP filtering. Example: allow GET requests to /get and /anything/* from the frontend namespace while denying every other method and path (such as POST /post) to backend pods.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-http-rules
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/get"
              - method: "GET"
                path: "/anything/.*"

Save the policy to a file and apply it:

kubectl apply -f backend-http-rules.yaml

Expected output:

ciliumnetworkpolicy.cilium.io/backend-http-rules created

This level of control (method and path) is not possible with vanilla NetworkPolicies; it gives you a capability normally associated with a service mesh, without running a full mesh.

The label key k8s:io.kubernetes.pod.namespace is a Cilium-internal identifier that maps to the Kubernetes pod namespace. It is not the same as the namespaceSelector field used in standard NetworkPolicy objects. In CiliumNetworkPolicy fromEndpoints, you reference namespaces using this internal key rather than a label on the namespace object. To target a different namespace, change the value from frontend to the target namespace name. To confirm which labels Cilium attaches to a pod’s identity, run:

kubectl exec -n kube-system ds/cilium -- cilium endpoint list

Expected output (trimmed):

ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS
1234       Enabled            Disabled          16385      k8s:app=backend
                                                           k8s:io.kubernetes.pod.namespace=backend
5678       Enabled            Disabled          16386      k8s:app=frontend
                                                           k8s:io.kubernetes.pod.namespace=frontend

This lists all endpoints with their Cilium-assigned identity labels, including k8s:io.kubernetes.pod.namespace.

To verify the L7 policy, test an allowed path and a blocked method from the frontend namespace:

# Allowed: GET /get from frontend
kubectl run curl-test --rm -it --restart=Never \
  -n frontend \
  --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}" \
  http://backend.backend.svc.cluster.local/get

Expected output:

200

# Blocked: POST /post from frontend
kubectl run curl-test --rm -it --restart=Never \
  -n frontend \
  --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}" -X POST \
  http://backend.backend.svc.cluster.local/post

Expected output:

403

A 403 confirms Cilium is enforcing method-level access control at layer 7. The GET request to /get is permitted by the L7 policy; the POST request to /post is not in the allow-list and is dropped by Cilium’s HTTP proxy. Hubble will show these as ALLOWED and DROPPED flows respectively.

Step 5 - Enable and Use Hubble for Observability

Hubble Relay is in-cluster; you need the Hubble CLI to query it. Install it before running any hubble commands:

HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all \
  https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz
tar xzvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin/
hubble version

Expected output:

hubble v0.13.0 compiled with go1.21.5 on linux/amd64

Note: For macOS, download hubble-darwin-amd64.tar.gz instead (hubble-darwin-arm64.tar.gz on Apple Silicon). Confirm the binary is accessible with hubble version before continuing.

Then port-forward the Hubble Relay service:

kubectl port-forward -n kube-system svc/hubble-relay 4245:4245

Expected output:

Forwarding from 127.0.0.1:4245 -> 4245
Forwarding from [::1]:4245 -> 4245

Keep this terminal open. Open a new terminal for the following commands.

In another terminal, set the server address and run observe:

export HUBBLE_SERVER=localhost:4245
hubble observe --since 1m --namespace backend

To see only dropped flows:

hubble observe --since 1m --verdict DROPPED --namespace backend

Example output (format may vary):

Sep 15 10:01:00.123: frontend/frontend -> backend/backend:80 (HTTP) ALLOWED
Sep 15 10:01:00.456: default/curl-xxx -> backend/backend:80 (HTTP) DROPPED

Hubble shows which flows were allowed and which were dropped; use this to confirm policy behavior and debug connectivity (e.g. wrong namespace label or port).

Real-World Multi-Tenant Scenario

In a shared cluster, Tenant A and Tenant B each have their own namespace. An API gateway (or ingress namespace) is the only entry from outside. Internal databases live in a dedicated namespace.

  • Namespace-level isolation: Each tenant namespace has a default-deny ingress policy. Ingress is allowed only from the API gateway (or ingress controller) namespace with explicit port rules.

  • Egress: Tenant pods have egress rules allowing only required external APIs (e.g., DNS, package registries) and internal services (e.g., the database namespace). This prevents lateral movement and limits blast radius. For DNS, allow port 53 (UDP, plus TCP for large responses) to the namespace where CoreDNS runs (kube-system on DOKS). Example egress rule for a tenant namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-dns-egress
      namespace: tenant-a
    spec:
      podSelector: {}
      policyTypes: [Egress]
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
          ports:
            - protocol: UDP
              port: 53
    
  • Zero trust: No implicit trust between namespaces; every path is allow-listed. Hubble flow logs provide an audit trail for every connection attempt across tenant boundaries.
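The gateway-only ingress pattern from the first bullet can be sketched as a single policy per tenant namespace. The tenant-a namespace, the ingress-nginx gateway namespace, and port 8080 are assumptions about your layout; combined with a default-deny ingress policy (as in Step 3), this makes the gateway the only ingress path:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway
  namespace: tenant-a
spec:
  podSelector: {}            # every pod in the tenant namespace
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # gateway namespace (assumed)
      ports:
        - protocol: TCP
          port: 8080         # tenant service port (assumed)
```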

Troubleshooting

Policy Not Enforced

Symptom: Traffic still reaches or is blocked unexpectedly after applying a NetworkPolicy.

Likely causes:

  • The podSelector does not match the intended pods.
  • The namespaceSelector label does not match the namespace.
  • The policy was applied in the wrong namespace.

Step-by-step fixes:

  1. Verify the policy exists in the correct namespace:

    kubectl get networkpolicy -n backend
    

    Expected output:

    NAME                   POD-SELECTOR   AGE
    allow-from-frontend    <none>         5m
    default-deny-ingress   <none>         6m
    
  2. Inspect the policy selectors:

    kubectl describe networkpolicy -n backend
    
  3. Confirm pod labels:

    kubectl get pods -n backend --show-labels
    

    Expected output:

    NAME                       READY   STATUS    RESTARTS   AGE   LABELS
    backend-7d6b9f8c4d-p2r4x   1/1     Running   0          10m   app=backend,pod-template-hash=7d6b9f8c4d
    
  4. Confirm namespace labels:

    kubectl get namespace frontend --show-labels
    

    Expected output:

    NAME       STATUS   AGE   LABELS
    frontend   Active   15m   kubernetes.io/metadata.name=frontend,name=frontend
    
  5. Check Cilium status:

    cilium status
    

Cilium Pods CrashLoop or Not Ready

Symptom: Cilium pods in kube-system are not Ready or are restarting.

Likely causes:

  • Kernel does not support required eBPF features.
  • Another CNI is conflicting.
  • Insufficient node resources.

Step-by-step fixes:

  1. Inspect Cilium pod logs:

    kubectl logs -n kube-system -l k8s-app=cilium
    
  2. Verify all Cilium DaemonSet pods are running:

    kubectl get ds cilium -n kube-system
    

    Expected output:

    NAME     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    cilium   3         3         3       3             3           <none>          20m
    
  3. Confirm no other CNI is active and nodes meet kernel requirements for eBPF.


Hubble Not Showing Flows

Symptom: hubble observe returns no output.

Likely causes:

  • Hubble Relay is not running.
  • Port-forwarding is not active.
  • No traffic has occurred during the observed time window.

Step-by-step fixes:

  1. Verify Hubble Relay:

    kubectl get pods -n kube-system -l k8s-app=hubble-relay
    

    Expected output:

    NAME                            READY   STATUS    RESTARTS   AGE
    hubble-relay-5d8b9f7c6d-xk2p9   1/1     Running   0          15m
    
  2. Ensure port-forwarding is active:

    kubectl port-forward -n kube-system svc/hubble-relay 4245:4245
    
  3. Generate test traffic and observe again:

    hubble observe --since 1m
    

DNS or External API Calls Failing After Policy

Symptom: Pods cannot resolve DNS or reach external APIs after applying egress policies.

Likely causes:

  • Egress rules do not allow traffic to kube-dns/CoreDNS.
  • External destinations are not explicitly permitted.

Step-by-step fixes:

  1. Confirm dropped DNS traffic:

    hubble observe --since 1m --verdict DROPPED
    
  2. Add an egress rule allowing DNS (UDP/TCP 53) to the kube-dns namespace.

  3. Add explicit egress rules for required external IPs or use Cilium FQDN policies where appropriate.
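A CiliumNetworkPolicy with toFQDNs can express the external allow-list by DNS name instead of IP. FQDN policies depend on Cilium observing DNS lookups, so the same policy must also allow DNS egress with a dns visibility rule. This is a sketch; api.example.com and the tenant-a namespace are placeholders:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api
  namespace: tenant-a
spec:
  endpointSelector: {}
  egress:
    # Allow DNS lookups and let Cilium record them for FQDN matching.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to the named external API.
    - toFQDNs:
        - matchName: "api.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```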


For low-level per-packet inspection on a specific node, run cilium monitor directly inside a Cilium pod. This outputs a raw event stream from the eBPF datapath, which is more granular than Hubble but harder to filter:

kubectl exec -n kube-system ds/cilium -- cilium monitor --type drop

The --type drop flag limits output to dropped packets only. Use this when hubble observe does not surface enough detail about a specific flow. For most policy debugging, hubble observe --verdict DROPPED is faster to work with:

hubble observe --since 1m --verdict DROPPED

These two tools together cover the full range of policy debugging: Hubble for flow-level filtering across the cluster, and cilium monitor for per-node packet inspection when you need lower-level detail.

Frequently Asked Questions

What are Kubernetes NetworkPolicies?
They are API resources that define allowed ingress and egress traffic for pods (by selector). Enforcement is done by the CNI; Cilium enforces them in the kernel using eBPF.

How does Cilium differ from other CNIs?
Cilium uses eBPF for the datapath and policy enforcement, supports L7 (HTTP) policy via CiliumNetworkPolicy, and provides built-in observability with Hubble. Many other CNIs use iptables and do not offer L7 or flow visibility.

What is eBPF and why is it used in Kubernetes?
eBPF is a kernel mechanism for running safe, efficient programs on network and other events. Cilium uses it to enforce NetworkPolicies and CiliumNetworkPolicies in the kernel, reducing overhead and enabling L7 and observability.

What is Hubble in Cilium?
Hubble is Cilium’s observability layer. It collects flow and metrics data from Cilium agents (via eBPF) and exposes them so you can visualize allowed and dropped traffic (e.g., with hubble observe or Hubble UI).

How do I visualize traffic in a Kubernetes cluster?
With Cilium and Hubble: use hubble observe with filters (namespace, label, verdict) or deploy Hubble UI and query flows. This gives L3/L4 (and with L7 policy, HTTP) visibility without a service mesh.

Can Cilium enforce layer 7 policies?
Yes. CiliumNetworkPolicy supports L7 rules (e.g., HTTP method and path). Standard Kubernetes NetworkPolicy is L3/L4 only.

How do NetworkPolicies improve security?
They restrict which pods can talk to which others and to the internet. With default-deny and allow-lists, you get network segmentation, reduced lateral movement, and alignment with zero-trust and compliance requirements.

Is Cilium supported on DigitalOcean Kubernetes?
Cilium can be installed on DOKS as the CNI or for advanced features (e.g., Gateway API, L7 policy). Confirm compatibility with your Kubernetes version and any DOKS-specific networking guidance in the DigitalOcean Kubernetes documentation.

Conclusion

Kubernetes NetworkPolicies alone are not enough without a CNI that enforces them and without observability to verify behavior. Cilium provides high-performance, eBPF-based enforcement and Hubble gives you the data to validate each policy change before and after rollout. On DOKS, you can achieve zero-trust networking and Kubernetes service isolation by applying segmentation gradually: observe baseline traffic with Hubble, introduce default-deny and allow-lists, then add CiliumNetworkPolicy for L7 where needed.



This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.