I have set up csi-for-s3 on k8s; I can create files but cannot see them.

Posted on September 19, 2024

I installed csi-for-s3 in a cluster, the one that uses geesefs (the "CSI for S3" DigitalOcean Marketplace 1-Click App). I pre-created a bucket and created YAML manifests pointing to that bucket.
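
For reference, the manifests look roughly like this (the names, storage class and mount path here are placeholders, not my exact values):

```sh
# Placeholder sketch: a PVC bound to the pre-created bucket plus a test pod.
# The storage class name (csi-s3), object names and mount path are assumptions;
# `kubectl get storageclass` shows what the 1-Click install actually created.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: csi-s3
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-test
spec:
  containers:
    - name: app
      image: alpine:3.20
      command: ["sleep", "3600"]
      volumeMounts:
        - name: s3-vol
          mountPath: /data
  volumes:
    - name: s3-vol
      persistentVolumeClaim:
        claimName: s3-pvc
EOF
```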

I can `touch <file>` or `vi <file>` and the file shows up in the DigitalOcean bucket.

But I cannot see the files in the container using `kubectl exec … ls`. I cannot even see the files I just created.

The container image is Alpine. The directory where the files are is owned by user root and group root.

`kubectl logs -l app=csi-s3-provisioner -n csi-s3` gives:

I0919 07:16:42.669631       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 4 items received
I0919 07:17:22.192663       1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:875: Watch close - *v1.StorageClass total 0 items received
I0919 07:20:21.956558       1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: Watch close - *v1.PersistentVolume total 0 items received
I0919 07:23:17.698600       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0919 07:24:18.412923       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 9 items received
I0919 07:25:30.205731       1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:875: Watch close - *v1.StorageClass total 10 items received
I0919 07:26:44.994973       1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: Watch close - *v1.PersistentVolume total 8 items received
I0919 07:28:25.960968       1 reflector.go:381] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: forcing resync
I0919 07:28:25.963330       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 07:29:58.726554       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 7 items received



Hi Marcel,

It sounds like your csi-for-s3 setup is partially working, since files appear in your DigitalOcean S3 bucket but not in the container when you list with `kubectl exec ... ls`. This is likely because you’re not operating in the mounted directory.

Can you check your pod’s mount point with `kubectl describe pod <pod_name>` (usually `/data`), then run `cd /data && touch testfile && ls` in the container?
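
For example, something along these lines should show whether `/data` is really the geesefs mount; the pod name and path are placeholders:

```sh
# Show the volume mounts declared in the pod spec (placeholder pod name).
kubectl get pod <pod_name> -o jsonpath='{.spec.containers[*].volumeMounts}'

# Inside the container: confirm the path is a FUSE mount, then create and list a file there.
kubectl exec -it <pod_name> -- sh -c 'mount | grep -i fuse; cd /data && touch testfile && ls -la'
```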

If files still don’t show, verify ownership with `ls -ld /data` (root:root is fine as long as your container process runs as root, which is the default for the Alpine image). The FUSE filesystem might also be caching, so double-check the csi-for-s3 logs (`kubectl logs -l app=csi-s3-provisioner -n csi-s3`) for clues.
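
Something like this covers both checks in one go (namespace and label taken from your own command; adjust them if your install names things differently):

```sh
# Ownership and permissions of the mounted directory inside the app container.
kubectl exec -it <pod_name> -- ls -ld /data

# Recent provisioner logs, plus the other csi-s3 pods: the node-side pods do the
# actual geesefs mount, so their logs are worth checking too.
kubectl logs -l app=csi-s3-provisioner -n csi-s3 --tail=50
kubectl get pods -n csi-s3 -o wide
```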

If it’s still tricky, peek at the csi-for-s3 docs or ping DigitalOcean support, especially since you’re using their 1-Click App:

https://do.co/support

- Bobby
