What happened:
Created a PV using the NFS CSI driver, pointing at an existing NFS server, plus a PVC pre-bound to it via `volumeName`:
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    readOnly: false
    volumeHandle: test-media-shared-many
    volumeAttributes:
      server: 192.168.69.16
      share: /
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: test-pv
  storageClassName: ""
```
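For completeness, this is roughly how I applied and verified the pair before the delete (the file name is mine, just illustrative):

```shell
# Apply the PV and PVC manifests above (file name is illustrative)
kubectl apply -f test-pv-pvc.yaml

# Confirm the static bind: both should report Bound, with the PVC
# pre-bound to test-pv via spec.volumeName
kubectl get pv/test-pv pvc/test-pvc
```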
When I deleted the PVC, `kubectl get pvc` showed that it was removed. The PV moved to `Released`, though it still shows the old claim in its CLAIM column:
```console
❯ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM      STORAGECLASS   REASON   AGE
test-pv   1000Gi     RWX            Retain           Released   test-pvc                           5h8m
```
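I believe the lingering claim is the PV's `spec.claimRef` field, which the `Retain` policy leaves pointing at the deleted PVC. A quick way to see it (a jsonpath query of my own, not from the driver's docs):

```shell
# Show the stale claim reference left on the Released PV;
# it still carries the deleted PVC's name, namespace, and uid
kubectl get pv test-pv -o jsonpath='{.spec.claimRef}'
```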
Attempting to create a new PVC against this PV fails with the error:
```console
❯ kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:
Status:        Pending
Volume:        test-pv
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      0
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason         Age   From                         Message
  ----     ------         ----  ----                         -------
  Warning  FailedBinding  11s   persistentvolume-controller  volume "test-pv" already bound to a different claim.
```
What you expected to happen:
Since the PV is `Released` and its old PVC is gone, I expected to be able to bind a new PVC to it.
How to reproduce it:
See info above.
Anything else we need to know?:
Installed the CSI driver using Helm chart version 3.1.0 (app version: latest).
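In case it helps triage: I assume the controller refuses the bind because `spec.claimRef` still records the old PVC's UID, which would make this core PV-controller behavior rather than something the NFS driver does. Clearing the stale `claimRef` by hand (a workaround I'm inferring from the claimRef semantics, not something documented for this chart) does let a new PVC bind:

```shell
# Remove the stale claimRef so the Released PV becomes Available
# again and a new PVC can bind to it
kubectl patch pv test-pv --type=json \
  -p='[{"op": "remove", "path": "/spec/claimRef"}]'
```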
Environment:
- CSI Driver version: 3.1.0
- Kubernetes version (use `kubectl version`): v1.21.2+vmware.1
- OS (e.g. from /etc/os-release): VMware Photon version 3.0
- Kernel (e.g. `uname -a`): 4.19.198-1.ph3 #1-photon SMP
- Install tools: helm
- Others: