The yandex-cloud k8s-csi-s3 driver lets Kubernetes pods mount S3-compatible buckets as persistent volumes via FUSE (GeeseFS and s3fs backends). This article details how to configure k8s-csi-s3 for use with Wasabi Hot Cloud Storage.
Note that although the GitHub page says k8s-csi-s3 supports Rclone, we found it to be non-functional during our testing.
Requirements
Active Wasabi Cloud Storage Account.
Access to the Wasabi Console.
Wasabi access and secret keys. It is recommended to create a sub-user with their own set of keys for this purpose rather than using your root keys. See Creating a User for more details. You may then restrict what access the sub-user has, such as access to a specific bucket, using IAM policies. See IAM and Bucket Policies for details.
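As a sketch, an IAM policy similar to the following could restrict the sub-user to a single, preexisting bucket (the bucket name is a placeholder; if you instead let the driver create buckets, the policy would also need bucket-creation permissions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::YOUR_EXISTING_BUCKET",
        "arn:aws:s3:::YOUR_EXISTING_BUCKET/*"
      ]
    }
  ]
}
```

Wasabi IAM policies use AWS-compatible syntax, so the policy can be pasted into the Wasabi Console when creating or editing the sub-user's policy.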
A Wasabi bucket may be created in advance to store your data, or you can let the k8s-csi-s3 driver create buckets for you. Do not enable Object Lock or Versioning. See Creating a Bucket for details on this procedure.
This solution was tested with Kubernetes server version v1.34.6+k3s1 running on Ubuntu Linux 24.04.4 LTS and k8s-csi-s3 version 0.43.6.
Access to your Kubernetes server.
Install the Driver (if required)
Log in to your Kubernetes server.
Issue the following commands if the driver is not already installed.
helm repo add yandex-s3 https://yandex-cloud.github.io/k8s-csi-s3/charts
helm install csi-s3 yandex-s3/csi-s3 --namespace kube-system \
--set secret.create=false
Create a Secret
Create a Kubernetes secret based on your Wasabi access and secret key pair. Generate an “s3-secret.yaml” file with the following information. Replace YOUR_ACCESS_KEY and YOUR_SECRET_KEY with your Wasabi keys.
This configuration example discusses the use of Wasabi's us-east-1 storage region. Use the region your bucket is located in, or the region you want your bucket(s) to be located in if allowing the driver to create them for you. For a list of regions, see Service URLs for Wasabi's Storage Regions.
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
stringData:
  accessKeyID: YOUR_ACCESS_KEY
  secretAccessKey: YOUR_SECRET_KEY
  endpoint: https://s3.us-east-1.wasabisys.com
Issue the following command to create the Kubernetes secret:
kubectl apply -f s3-secret.yaml
Create the Storage Class
Create a Storage Class YAML file named “storageclass.yaml”. The first example below is for the GeeseFS mounter, which is the preferred method according to the driver’s GitHub page, and it uses an existing Wasabi bucket. Replace YOUR_EXISTING_BUCKET with the name of your bucket, or comment the bucket line out (by adding a # character at the beginning of the line) if you want the driver to create your Wasabi bucket(s).
If allowing the driver to create bucket(s) for you, it will create a bucket per Persistent Volume (PV). If you use your own preexisting bucket, it will create a folder per PV in that bucket.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # you can set mount options here, for example limit memory cache size (recommended)
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # to use an existing bucket, specify it here:
  bucket: YOUR_EXISTING_BUCKET
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
The next example is for using the s3fs mounter. The same bucket rules apply.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: s3fs
  # you can set mount options here, for example limit memory cache size (recommended)
  # options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # to use an existing bucket, specify it here:
  # bucket: some-existing-bucket
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
Issue the following commands to delete the default csi-s3 storage class and create the new storage class.
kubectl delete storageclass csi-s3
kubectl create -f storageclass.yaml
Test the S3 Driver
Create a “pvc.yaml” file with the following contents. Adjust the storage size as needed.
# Dynamically provisioned PVC:
# A bucket or path inside a bucket will be created automatically
# for the PV and removed when the PV is removed
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5000Gi
  storageClassName: csi-s3
Issue the following commands.
kubectl create -f pvc.yaml
kubectl get pvc csi-s3-pvc
Here is an example output. The Status should show “Bound”.
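As an illustrative sketch (the generated volume name is random and will differ in your environment):

```
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-s3-pvc   Bound    pvc-...     5000Gi     RWX            csi-s3         10s
```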
Test the S3 driver by creating a test nginx pod. Generate a “pod.yaml” file with the following contents.
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html/s3
          name: webroot
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-pvc
        readOnly: false
Create the test nginx pod, mount the volume, and generate a test file by issuing the commands below.
kubectl create -f pod.yaml
kubectl exec -ti csi-s3-test-nginx -- bash
mount | grep fuse
touch /usr/share/nginx/html/s3/hello_world
ls -la /usr/share/nginx/html/s3/
exit
Here is the output from our testing.
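Illustratively, the in-pod session looks similar to the following (the mount source, mount options, and timestamps will differ, and depend on the mounter in use):

```
root@csi-s3-test-nginx:/# mount | grep fuse
pvc-... on /usr/share/nginx/html/s3 type fuse.geesefs (rw,...)
root@csi-s3-test-nginx:/# touch /usr/share/nginx/html/s3/hello_world
root@csi-s3-test-nginx:/# ls -la /usr/share/nginx/html/s3/
-rw-rw-rw- 1 root root 0 ... hello_world
```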

Validate Data in Wasabi
Log in to the Wasabi Console. Click Buckets. If you configured the driver to create buckets, you will see a bucket named after the Persistent Volume. Click the name of the bucket and observe the hello_world object in the bucket.

If you configured the driver to use an existing bucket, click the name of the bucket. You will see a folder named after the PV.

Click the name of the folder. You will see the hello_world object in the folder.

Delete the Test Environment
On your Kubernetes server, delete the test pod, Persistent Volume Claim, and Storage Class by issuing the following commands.
kubectl delete -f pod.yaml
kubectl delete -f pvc.yaml
kubectl delete -f storageclass.yaml
Create Your Own Environment
Edit the storageclass.yaml, pvc.yaml, and pod.yaml files to match your environment requirements. Issue the following commands.
kubectl create -f storageclass.yaml
kubectl create -f pvc.yaml
kubectl get pvc csi-s3-pvc
kubectl create -f pod.yaml
kubectl get pods
The status of all pods should be “Running”.
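As an illustrative sketch (pod names will match your own manifests):

```
NAME                READY   STATUS    RESTARTS   AGE
csi-s3-test-nginx   1/1     Running   0          30s
```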