Question

Cilium Operator pod cannot start

I’m using DigitalOcean Kubernetes and the cilium-operator pod cannot start.

Here’s the status of my cluster:

$ kubectl get po -n kube-system
NAME                                    READY   STATUS             RESTARTS   AGE
cilium-operator-6444788657-6pk9h        0/1     CrashLoopBackOff   2204       20d
cilium-sq9dk                            1/1     Running            0          20d
cilium-w8mww                            1/1     Running            0          20d
cilium-zmx48                            1/1     Running            0          20d
coredns-84c79f5fb4-k8crt                1/1     Running            0          20d
coredns-84c79f5fb4-plxn6                1/1     Running            0          20d
csi-do-node-dfvc5                       2/2     Running            0          20d
csi-do-node-vckfg                       2/2     Running            0          20d
csi-do-node-wsch4                       2/2     Running            0          20d
do-node-agent-9lnkl                     1/1     Running            0          20d
do-node-agent-cfk57                     1/1     Running            0          20d
do-node-agent-mhsvl                     1/1     Running            0          20d
kube-proxy-c698r                        1/1     Running            0          20d
kube-proxy-f549f                        1/1     Running            0          20d
kube-proxy-qnbs8                        1/1     Running            0          20d
kubelet-rubber-stamp-7f966c6779-7v2fq   1/1     Running            0          20d

$ kubectl logs -p cilium-operator-6444788657-6pk9h -n kube-system
level=info msg="Cilium Operator " subsys=cilium-operator
level=info msg="Starting apiserver on address :9234" subsys=cilium-operator
level=info msg="Establishing connection to apiserver" host="https://10.245.0.1:443" subsys=k8s
level=info msg="Connected to apiserver" subsys=k8s
level=info msg="Retrieved node information from kubernetes" nodeName=pool-d2o-6v7u subsys=k8s
level=info msg="Received own node information from API server" ipAddr.ipv4=10.130.13.143 ipAddr.ipv6="<nil>" nodeName=pool-d2o-6v7u subsys=k8s v4Prefix=10.244.0.0/24 v6Prefix="<nil>"
level=info msg="Starting to synchronize k8s services to kvstore..." subsys=cilium-operator
level=info msg="Connecting to kvstore..." address= kvstore=etcd subsys=cilium-operator
level=info msg="Connecting to etcd server..." config=/var/lib/etcd-config/etcd.config endpoints="[https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379]" subsys=kvstore
level=info msg="Starting to synchronize k8s nodes to kvstore..." subsys=cilium-operator
{"level":"warn","ts":"2019-12-21T05:55:22.927Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"warn","ts":"2019-12-21T05:55:37.928Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"warn","ts":"2019-12-21T05:55:52.929Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
{"level":"warn","ts":"2019-12-21T05:56:07.930Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
level=warning msg="Health check status" error="not able to connect to any etcd endpoints" subsys=cilium-operator
level=warning msg="Health check status" error="not able to connect to any etcd endpoints" subsys=cilium-operator
{"level":"warn","ts":"2019-12-21T05:56:22.931Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
level=warning msg="Health check status" error="not able to connect to any etcd endpoints" subsys=cilium-operator
level=fatal msg="Unable to start status api: http: Server closed" subsys=cilium-operator
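
The repeated DeadlineExceeded errors show the operator reaches the Kubernetes API server fine but times out against the managed etcd endpoint, so the problem sits on the path between the operator and etcd. As a hedged sketch of how to narrow this down yourself (the endpoint is copied from the log above; etcd-probe is just an arbitrary throwaway pod name and curlimages/curl an arbitrary image choice):

# Probe the etcd endpoint from inside the cluster: a TLS/certificate error
# still proves the endpoint is reachable, while a timeout reproduces the
# DeadlineExceeded errors in the log above.
$ kubectl run etcd-probe --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -kv --max-time 10 https://8690ff65-fea8-4c25-9376-7b2d633d2245.internal.k8s.ondigitalocean.com:2379/health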



Hi there!

This is often the Cilium operator complaining that it cannot communicate with your master node in a timely manner. The health checks are failing, which causes the operator to restart. Could you please open a support ticket so we can take a look at your master’s health?

Regards,

John Kwiatkoski, Senior Developer Support Engineer - Kubernetes
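
If you want to gather details before opening that ticket, the operator’s events and last termination state usually show which health check is failing and why the pod keeps restarting. A minimal sketch, using the pod name from the listing above:

# Show probe configuration, last termination state, and recent events for the pod.
$ kubectl describe pod cilium-operator-6444788657-6pk9h -n kube-system

# List only the events that reference the operator pod.
$ kubectl get events -n kube-system --field-selector involvedObject.name=cilium-operator-6444788657-6pk9h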

Hi, what can you tell me about your Linux kernel version? Are you running kernel version >= 4.9.17, as Cilium recommends?
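
For reference, you can check the kernel version of every node without SSH access, since kubectl prints a KERNEL-VERSION column in wide output:

# The KERNEL-VERSION column shows each node's kernel; Cilium recommends >= 4.9.17.
$ kubectl get nodes -o wide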