Hello, I’m trying to set up an NFS server in Kubernetes. I want to make the NFS exports available to other pods. The server manifests follow below. For the server implementation I’m using this image: itsthenetwork/nfs-server-alpine:12
I defined a headless Service called nfs-server, exposing the required port 2049:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
  selector:
    app: nfs-server
  clusterIP: None
Then I defined a StatefulSet that starts a pod with a block storage volume. The volume is mounted at the exported path, and the pod runs as privileged:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: 'itsthenetwork/nfs-server-alpine:12'
          imagePullPolicy: Always
          ports:
            - containerPort: 2049
          env:
            - name: SHARED_DIRECTORY
              value: "/nfs"
          volumeMounts:
            - mountPath: /nfs
              name: nfs-volume-claim
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: nfs-volume-claim
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: do-block-storage
When I try to start a pod for testing purposes, it never starts because the volume cannot be mounted:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /tmp/test; sleep $(($RANDOM % 5 + 5)); done'
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /resources/
              name: nfs-volume-m
      volumes:
        - name: nfs-volume-m
          nfs:
            # server: nfs-server
            server: nfs-server.default.svc.cluster.local
            path: /nfs
            readOnly: false
It seems the system cannot reach the NFS server by its service name, even though a simple ping inside a container resolves the IP correctly. I tried the following server names in the nfs volume specification: nfs-server and nfs-server.default.svc.cluster.local.
I think Kubernetes is trying to resolve the name from the worker node’s point of view, but I’m not sure about it. How can I solve this problem?
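(For reference, one sanity check would be to bypass DNS entirely and mount by the server pod’s IP; 10.244.0.25 below is a hypothetical value that would be read from kubectl get pod nfs-server-0 -o wide.)

volumes:
  - name: nfs-volume-m
    nfs:
      server: 10.244.0.25   # hypothetical pod IP; replace with the real one
      path: /nfs
      readOnly: false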
Thank you, N
Hey!
I think you might be right: on DOKS, when you mount an NFS volume, it’s actually the node (the kubelet) that performs the mount, not the pod.
So even if DNS works fine inside the pod, the node itself probably can’t resolve
nfs-server.default.svc.cluster.local
because the node’s resolver doesn’t go through the cluster’s DNS. That could explain why the mount fails over port 2049 even though the service name resolves from inside a pod.
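One workaround I’ve seen (just a sketch, assuming the Service gets a ClusterIP once you drop clusterIP: None) is to make nfs-server a regular ClusterIP Service and mount by that IP, since kube-proxy on the node can route ClusterIPs even when the node can’t resolve cluster DNS names:

# Same Service as in the question, but without clusterIP: None,
# so Kubernetes assigns it a stable virtual IP
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
  selector:
    app: nfs-server

Then in the test Deployment, point the volume at the assigned IP (10.245.10.10 is a hypothetical value; read the real one from kubectl get svc nfs-server):

volumes:
  - name: nfs-volume-m
    nfs:
      server: 10.245.10.10   # hypothetical ClusterIP of nfs-server
      path: /nfs
      readOnly: false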
Another thing I’ve seen is people running the NFS server with
hostNetwork: true
so it binds to the node’s IP; then the node can talk to it like any local service. A minimal sketch of what that could look like in your StatefulSet:
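# Only the pod template changes; the rest of the StatefulSet stays as in the question
spec:
  template:
    spec:
      hostNetwork: true   # the NFS server now listens on the node's own IP
      containers:
        - name: nfs-server
          image: 'itsthenetwork/nfs-server-alpine:12'
          env:
            - name: SHARED_DIRECTORY
              value: "/nfs"
          ports:
            - containerPort: 2049
          volumeMounts:
            - mountPath: /nfs
              name: nfs-volume-claim
          securityContext:
            privileged: true

You would then use the node’s IP as the server in the nfs volume spec.
- Bobby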