I am using `kubeadm init --pod-network-cidr=10.0.0.0/8` to start the node, but `kubectl describe node` shows `PodCIDR:` as x.x.x.x/24.
```
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --address=127.0.0.1
    - --use-service-account-credentials=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --leader-elect=true
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.0.0.0/8
    - --node-cidr-mask-size=24
```

Please let me know how to change PodCIDR or `--node-cidr-mask-size` to /16 or /8.
Thanks, Anupam Thakur
Hello,
To change the `--node-cidr-mask-size` for your Kubernetes cluster, you'll need to edit the kube-controller-manager configuration. This file is typically located at `/etc/kubernetes/manifests/kube-controller-manager.yaml` on the control plane (master) node.

In that file, find the line `--node-cidr-mask-size=24` and change the value to the mask size you need (for example, 16), then save the file and exit. The change should take effect shortly: because the file is a static pod manifest, the kubelet automatically restarts the kube-controller-manager pod when it detects a change to it.
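As a minimal sketch, the edit can be scripted with `sed`. The snippet below demonstrates it on a scratch copy of the relevant lines at `/tmp/kcm.yaml` (a hypothetical path); on a real control plane node you would run the same `sed` against `/etc/kubernetes/manifests/kube-controller-manager.yaml`:

```shell
# Scratch file standing in for the real static pod manifest
cat > /tmp/kcm.yaml <<'EOF'
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.0.0.0/8
    - --node-cidr-mask-size=24
EOF

# Change the per-node mask from /24 to /16
sed -i 's/--node-cidr-mask-size=24/--node-cidr-mask-size=16/' /tmp/kcm.yaml

# Confirm the new value is in place
grep -- '--node-cidr-mask-size' /tmp/kcm.yaml
```

After editing the real manifest, you can watch the pod restart with `kubectl -n kube-system get pods -w`.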
Please note:
Changing the `--node-cidr-mask-size` will affect how IP addresses are allocated to nodes and pods in your cluster, so make sure you understand the implications before making the change. For example, if you set `--node-cidr-mask-size` to 16, each node in your cluster is assigned a /16 range, i.e. up to 65,536 (2^(32−16)) IP addresses for its pods. This can quickly exhaust your `--cluster-cidr` range if you have many nodes: with a /8 cluster CIDR and a /16 per-node mask, only 2^(16−8) = 256 node ranges fit.
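The arithmetic above can be sanity-checked with a few lines of Python (the prefix lengths match the flags in your manifest):

```python
cluster_prefix = 8   # from --cluster-cidr=10.0.0.0/8
node_mask = 16       # proposed --node-cidr-mask-size

# Addresses available to each node's pods: host bits of the /16
pods_per_node = 2 ** (32 - node_mask)

# How many /16 node ranges fit inside the /8 cluster range
max_nodes = 2 ** (node_mask - cluster_prefix)

print(pods_per_node, max_nodes)  # → 65536 256
```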
This change won’t affect any existing nodes in your cluster. It will only affect new nodes that you add after making the change. If you want the change to apply to existing nodes, you’ll need to drain and delete the nodes, then add them back to the cluster.
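A sketch of that drain-and-rejoin procedure, with `<node-name>` and the join parameters as placeholders you would fill in for your cluster:

```shell
# On the control plane: evict workloads, then remove the node object
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>

# On the worker node: reset its kubeadm state
sudo kubeadm reset

# Generate a fresh join command on the control plane, then run it on the worker
kubeadm token create --print-join-command
```

When the node rejoins, the controller manager allocates it a new PodCIDR using the updated mask size.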
If you're using a network plugin such as Calico or Weave, you'll also need to make sure the plugin's own configuration (for example, its IP pool or CIDR settings) is consistent with the new mask size.