Hi! After the Kubernetes LTD update, access to a container through hostPort broke. I assume this is related to Cilium. Does anyone know how to fix this?

I ran across information suggesting this problem is solved by the portmap plugin (https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap), but I have no idea how to use it…



10 answers

Hi, could you open up a support ticket? We’d be happy to take a look.

Just as an FYI, only newer cluster versions (1.13.2, 1.12.5, 1.11.7) use Cilium for networking; I’m not sure if you are on a newly created cluster or an older version.

Also, it is usually better practice not to rely on host ports or host networking in general unless you have a very specific use case for them. Is there any reason a NodePort on a Service can’t be used in your case? Regardless, if you open a support ticket we can follow up and try to figure out why host ports are not working for you.
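
For example, a minimal NodePort Service sketch (the name, labels, and ports here are placeholders, not from your setup):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app           # must match your pod labels
  ports:
  - port: 80              # port inside the cluster
    targetPort: 80        # container port
    nodePort: 30080       # exposed on every node; must be within 30000-32767

Traffic to any node’s IP on port 30080 then reaches the pods, without hostPort.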

I have the same problem. I just provisioned a 2-node cluster, allowed HTTP and HTTPS traffic on all nodes, and created an ingress-nginx DaemonSet that binds nginx to ports 80 and 443 on each host.

The ingress-nginx container spec contains these keys:

          ports:
          - containerPort: 80
            hostPort: 80
          - containerPort: 443
            hostPort: 443

but it doesn’t work.
Is there a workaround for this? Do I have to reconfigure Cilium?

  • Hi, @erdii!
    Sorry for the late reply.
    Why don’t you use the Kubernetes nginx-ingress? DigitalOcean automatically provisions a Load Balancer for it.

    As for my problem (hostPort), I got around it with a very dirty trick: setting spec.hostNetwork: true.

    • @oskovpen Thanks for your reply.

      Actually I am using nginx-ingress, but I’d like to self-manage my edge node(s) because for playground clusters that’s cheaper than provisioning a load balancer. :/

Hi, I’m also running into this issue, and it does not happen on the other providers (GKE, AKS, EKS). I have 3 clusters; one of them is from before the LTD, and hostPort was working back then (and still is, since I have not deleted that cluster). On newer clusters I cannot configure the networking so that the public IPs of the provisioned worker nodes are directly reachable. The workaround that @oskovpen suggests does work, but it means the pod IPs cannot be used anymore.

My 1.13.1 cluster doesn’t work with hostPort. Neither the public nor the private IP addresses of the Droplets (I have 4 nodes) are reachable.

I cannot use an ingress because I do TLS termination via cert-manager, and the DigitalOcean Load Balancer hides the client IP in TCP mode.

Please advise.

Had to change the Helm chart values a bit for ingress-nginx, but this worked:

controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true

  service:
    type: ClusterIP
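
For reference, I installed it with something like this (the release name is a placeholder, and I’m assuming the stable/nginx-ingress chart with Helm 2 syntax):

helm install --name my-ingress -f values.yaml stable/nginx-ingress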

hostPort is broken for me as well. A tcpdump shows the packets being filtered; is something possibly wrong with the DO Kubernetes machines?
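
For reference, something like this on an affected node is what I saw (the interface name is an assumption):

sudo tcpdump -n -i eth0 'tcp port 80'
# incoming SYNs show up, but no SYN-ACK ever goes back out,
# so the packets are being dropped somewhere on the node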

Not sure how to fix this yet but here are some clues:

  • The reason hostPort does not work is indeed the Cilium install, which by default does not support hostPort: https://docs.cilium.io/en/latest/gettingstarted/cni-chaining-portmap/

  • According to the Cilium docs, one can enable hostPort via a chained CNI plugin by toggling the cni-chaining-mode: portmap option in the cilium-config ConfigMap in kube-system (a sketch of the edited ConfigMap follows this list):

kubectl -n kube-system get cm cilium-config -o yaml > cilium.yaml
  • Apparently this should cause a configuration called 05-cilium.conflist to replace /etc/cni/net.d/05-cilium-cni.conf inside the Cilium container; unfortunately, this part is not working for me…
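
To illustrate, a minimal sketch of the change in the dumped cilium.yaml, assuming the cni-chaining-mode key from the Cilium docs (every other existing key stays as it is):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  cni-chaining-mode: portmap   # enable the chained portmap CNI plugin
  # ...keep all other keys from the original dump...

followed by kubectl apply -f cilium.yaml and a restart of the Cilium agent pods so they pick up the change.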

The cni-chaining-mode: portmap option is only available in current Cilium master builds (not even in v1.5.1).
I tried simply updating the image of the provided Cilium DaemonSet and operator, but that didn’t suffice.
So I’m back to the workaround of setting hostNetwork: true in addition to hostPorts.

Guess we’ll have to wait until Cilium is properly updated to >= v1.5.2.
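
For anyone else on this workaround, a minimal sketch of what it looks like in a DaemonSet pod spec (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy                        # placeholder name
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      hostNetwork: true                   # the workaround: share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolvable with hostNetwork
      containers:
      - name: proxy
        image: nginx:1.15                 # placeholder image
        ports:
        - containerPort: 80
          hostPort: 80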

Hey everyone, I got a workaround for this issue at the last KubeCon EU. This is what worked for me: https://github.com/snormore/cilium-portmap
You have to install that DaemonSet in the kube-system namespace, then restart/redeploy all the pods whose hostPort config was not taking effect. The TCP connect seems to take a bit longer with this setup, but at least there is connectivity.
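
For reference, roughly what that boils down to (the manifest filename and workload label are placeholders; see the repo above for the actual file):

kubectl -n kube-system apply -f cilium-portmap.yaml
# recreate the affected pods so they get the new CNI chain;
# their controller (Deployment/DaemonSet) will bring them back up
kubectl delete pod -l app=my-hostport-app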

Hi there 👋

We’ve recently released new DOKS versions that have hostPort support enabled by default. You can upgrade to any of our latest patch versions to start using it today.
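
If you manage your cluster with doctl, the upgrade is roughly (the cluster name and version slug are placeholders):

doctl kubernetes cluster get-upgrades my-cluster            # list available versions
doctl kubernetes cluster upgrade my-cluster --version <slug>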
