Unable To install Dynamic NFS Provisioner on GKE with Backend Storage Class of pd.csi.storage.gke.io provisioner #157
Comments
I ran into the same problem on GKE.
@sanke-t I ran into the same problem. My mistake was to provision a PVC that was too small. I was testing with 1GB, which works fine in Azure but causes problems with AWS/GCP. After turning it up to 10GB, all of my hyper-scalers worked fine :-) Tested on: DigitalOcean/GCP/AWS/OTC/Azure. Should I create a pull request to update the readme so that nobody has to endure my pain again? ;-)
Hi @beneiltis, I was using a regional disk of 200 GB (the minimum possible size). Would it be possible to share the storage class and PVC definitions from your working GKE implementation?
Hey @sanke-t, after celebrating we found out that I had accidentally set up openebs-hostpath. In fact it was never working. I am so sorry. We are still working on this, and I will share the solution if we find one. Are you aware of alternatives to this project? It looks like it is not maintained anymore.
Hi all! Any news about this problem?
Hi, @beneiltis @Feniksss. I will try to get my hands on a GKE environment and take a look at this. Meanwhile, could you please confirm that your GKE version and configuration match what is expected here: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver
I did some more research on this issue and found that the ndm provisioner is interfering with the NFS one; deploying the chart with ndm disabled appears to avoid the conflict.
Good find!
@sanke-t Could you check and confirm whether installing the chart with ndm disabled resolves this, if it is still an issue?
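For anyone trying this, a minimal sketch of a Helm values override for disabling NDM. The `ndm.enabled` and `ndmOperator.enabled` keys are assumptions about the openebs/openebs chart's values layout; verify them against your chart version's `values.yaml` before use:

```yaml
# values-no-ndm.yaml -- hypothetical override file; the key names below are
# assumptions about the openebs/openebs chart and should be checked against
# the values.yaml of the chart version you are installing.
ndm:
  enabled: false
ndmOperator:
  enabled: false
```

This would then be applied with something like `helm upgrade --install openebs openebs/openebs -n openebs -f values-no-ndm.yaml`.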
Describe the bug: I am unable to install Dynamic NFS Provisioner on GKE with a GKE storage class as the backend, which I need so that my PVC is backed by a disk that withstands node drains, upgrades, and failures. The nfs-pv-<volume_name> pod is stuck in ContainerCreating status. The pod description shows this error:

```
MountVolume.MountDevice failed for volume "pvc-4ec6bfba-b5e7-47af-a6ec-5c1afd82e2b7" : rpc error: code = Internal desc = Failed to format and mount device from ("/dev/disk/by-id/google-pvc-4ec6bfba-b5e7-47af-a6ec-5c1afd82e2b7_regional") to ("/var/lib/kubelet/plugins/kubernetes.io/csi/pd.csi.storage.gke.io/2a9b9ac5e5e297142fb243da54292afd54d46280869358967bea09e4cc96b1ba/globalmount") with fstype ("ext4") and options ([]): mount failed: exit status 32
```

Expected behaviour: The PVC should be mounted and the nfs-pv-<volume_name> pod should be in Running status.
Steps to reproduce the bug:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-pv-claim
spec:
  storageClassName: openebs-gcp-pd-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi
```
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-gcp-pd-rwx
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "regionalpd-storageclass"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
```
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values: []  # zone list omitted in the original report
```
The output of the following commands will help us better understand what's going on:
- `kubectl get pods -n <openebs_namespace> --show-labels`
- `kubectl get pvc -n <openebs_namespace>`
- `kubectl get pvc -n <application_namespace>`
https://gist.github.com/sanke-t/7a4d8cc41f1840c79c6261da92d62003
Anything else we need to know?:
The same setup works fine with `BackendStorageClass` set to `openebs-hostpath`.
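For comparison, the working hostpath-backed variant differs from the failing one only in the `BackendStorageClass` value. A sketch, assuming the rest of the StorageClass is unchanged (the name `openebs-hostpath-rwx` is hypothetical, chosen here for illustration):

```yaml
# Working variant: same NFS StorageClass as above, but backed by
# openebs-hostpath instead of the GCE PD CSI class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-rwx  # hypothetical name for illustration
  annotations:
    openebs.io/cas-type: nfsrwx
    cas.openebs.io/config: |
      - name: NFSServerType
        value: "kernel"
      - name: BackendStorageClass
        value: "openebs-hostpath"
provisioner: openebs.io/nfsrwx
reclaimPolicy: Delete
```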
Environment details:
- OpenEBS version (`kubectl get po -n openebs --show-labels`): 3.5.0
- Kubernetes version (`kubectl version`): v1.24.10-gke.2300
- OS (`cat /etc/os-release`): Ubuntu 22.04.2 LTS
- Kernel (`uname -a`): 5.15.0-1027-gke