Bug Report

Description
I created the following VolumeAttributesClass after enabling this API per the docs (https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/) and adding `--feature-gates=VolumeAttributesClass=true` to the csi-provisioner and csi-resizer args (see the sketches after the manifest below):
```yaml
apiVersion: storage.k8s.io/v1beta1
driverName: csi.proxmox.sinextra.dev
kind: VolumeAttributesClass
metadata:
  creationTimestamp: "2025-03-13T20:46:26Z"
  finalizers:
  - kubernetes.io/vac-protection
  labels:
    kustomize.toolkit.fluxcd.io/name: cluster-apps
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: allow-backup
  resourceVersion: "205728"
  uid: af4b3d5e-0f25-43c8-aee1-2b67d0c9ca82
parameters:
  backup: "true"
```
But the above does not seem to have any effect on the Proxmox side; in the logs I see the CSI driver create the volume with backup "1", see here:
```
I0313 22:23:11.139868 1 node.go:280] "NodePublishVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443032\",\"lun\":\"2\"},\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/54bc69c3-397a-45a9-a05e-edb03649975f/volumes/kubernetes.io~csi/pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db/mount\",\"volume_capability\":{\"access_mode\":{\"mode\":\"SINGLE_NODE_WRITER\"},\"mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\"]}},\"volume_context\":{\"backup\":\"1\",\"cache\":\"writethrough\",\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"babybuddy-0\",\"csi.storage.k8s.io/pod.namespace\":\"default\",\"csi.storage.k8s.io/pod.uid\":\"54bc69c3-397a-45a9-a05e-edb03649975f\",\"csi.storage.k8s.io/serviceAccount.name\":\"default\",\"ssd\":\"true\",\"storage\":\"fast1\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1741900554025-7731-csi.proxmox.sinextra.dev\"},\"volume_id\":\"pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\"}"
```
Logs
Controller: [kubectl logs -c proxmox-csi-plugin-controller proxmox-csi-plugin-controller-...]
```
I0313 22:22:13.079199 1 controller.go:577] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve-cluster\",\"topology.kubernetes.io/zone\":\"pve-2\"}},\"parameters\":{\"cache\":\"writethrough\",\"csi.storage.k8s.io/fstype\":\"xfs\",\"ssd\":\"true\",\"storage\":\"fast1\"}}"
I0313 22:22:13.079219 1 controller.go:589] "GetCapacity" region="pve-cluster" zone="pve-2" storageID="fast1"
I0313 22:22:58.563060 1 controller.go:95] "CreateVolume: called" args="{\"accessibility_requirements\":{\"preferred\":[{\"segments\":{\"topology.kubernetes.io/region\":\"pve-cluster\",\"topology.kubernetes.io/zone\":\"pve-2\"}}],\"requisite\":[{\"segments\":{\"topology.kubernetes.io/region\":\"pve-cluster\",\"topology.kubernetes.io/zone\":\"pve-2\"}}]},\"capacity_range\":{\"required_bytes\":16106127360},\"mutable_parameters\":{\"backup\":\"true\"},\"name\":\"pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\",\"parameters\":{\"cache\":\"writethrough\",\"ssd\":\"true\",\"storage\":\"fast1\"},\"volume_capabilities\":[{\"access_mode\":{\"mode\":\"SINGLE_NODE_MULTI_WRITER\"},\"mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\"]}}]}"
I0313 22:22:58.563081 1 controller.go:112] "CreateVolume: parameters" parameters={"cache":"writethrough","ssd":"true","storage":"fast1"}
I0313 22:22:58.563142 1 controller.go:124] "CreateVolume: modify parameters" parameters={"backup":true,"Iops":null,"SpeedMbps":null,"ReplicateSchedule":"","ReplicateZones":""}
I0313 22:22:58.570060 1 controller.go:169] "CreateVolume" storageConfig={"blocksize":"16K","content":"rootdir,images","digest":"5c6b1bb2eec2eb3f59c500f7763ba22911c132b4","pool":"vmmirror","sparse":1,"storage":"fast1","type":"zfspool"}
I0313 22:22:59.725906 1 controller.go:304] "CreateVolume: volume created" cluster="pve-cluster" volumeID="pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db" size=16106127360
I0313 22:22:59.726996 1 controller.go:577] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve-cluster\",\"topology.kubernetes.io/zone\":\"pve-2\"}},\"parameters\":{\"cache\":\"writethrough\",\"csi.storage.k8s.io/fstype\":\"xfs\",\"ssd\":\"true\",\"storage\":\"fast1\"}}"
I0313 22:22:59.727013 1 controller.go:589] "GetCapacity" region="pve-cluster" zone="pve-2" storageID="fast1"
I0313 22:23:00.641679 1 controller.go:416] "ControllerPublishVolume: called" args="{\"node_id\":\"ctrl-000\",\"volume_capability\":{\"access_mode\":{\"mode\":\"SINGLE_NODE_MULTI_WRITER\"},\"mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\"]}},\"volume_context\":{\"backup\":\"1\",\"cache\":\"writethrough\",\"ssd\":\"true\",\"storage\":\"fast1\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1741900554025-7731-csi.proxmox.sinextra.dev\"},\"volume_id\":\"pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\"}"
I0313 22:23:00.644591 1 controller.go:858] "ControllerPublishVolume: failed to get proxmox vmrID from ProviderID" nodeID="ctrl-000"
I0313 22:23:02.782440 1 controller.go:513] "ControllerPublishVolume: volume published" cluster="pve-cluster" volumeID="pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db" vmID=600
I0313 22:23:13.079429 1 controller.go:577] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve-cluster\",\"topology.kubernetes.io/zone\":\"pve-2\"}},\"parameters\":{\"cache\":\"writethrough\",\"csi.storage.k8s.io/fstype\":\"xfs\",\"ssd\":\"true\",\"storage\":\"fast1\"}}"
I0313 22:23:13.079454 1 controller.go:589] "GetCapacity" region="pve-cluster" zone="pve-2" storageID="fast1"
```
Node: [kubectl logs -c proxmox-csi-plugin-node proxmox-csi-plugin-node-...]
```
I0313 22:23:08.172458 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:08.174411 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:08.174902 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:08.175501 1 node.go:84] "NodeStageVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443032\",\"lun\":\"2\"},\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount\",\"volume_capability\":{\"access_mode\":{\"mode\":\"SINGLE_NODE_WRITER\"},\"mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\"]}},\"volume_context\":{\"backup\":\"1\",\"cache\":\"writethrough\",\"ssd\":\"true\",\"storage\":\"fast1\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1741900554025-7731-csi.proxmox.sinextra.dev\"},\"volume_id\":\"pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\"}"
I0313 22:23:08.175849 1 node.go:119] "NodeStageVolume: mount device" device="/dev/sdc" path="/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount"
I0313 22:23:08.175869 1 mount_linux.go:680] Attempting to determine if disk "/dev/sdc" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/sdc])
I0313 22:23:08.977825 1 mount_linux.go:683] Output: ""
I0313 22:23:08.977856 1 mount_linux.go:618] Disk "/dev/sdc" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/sdc]
I0313 22:23:11.106605 1 mount_linux.go:629] Disk successfully formatted (mkfs): xfs - /dev/sdc /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount
I0313 22:23:11.106632 1 mount_linux.go:647] Attempting to mount disk /dev/sdc in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount
I0313 22:23:11.106646 1 mount_linux.go:270] Mounting cmd (mount) with arguments (-t xfs -o noatime,noatime,nouuid,defaults /dev/sdc /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount)
I0313 22:23:11.135010 1 node.go:209] "NodeStageVolume: volume mounted" device="/dev/sdc"
I0313 22:23:11.135889 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:11.138355 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:11.139081 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:11.139868 1 node.go:280] "NodePublishVolume: called" args="{\"publish_context\":{\"DevicePath\":\"/dev/disk/by-id/wwn-0x5056432d49443032\",\"lun\":\"2\"},\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount\",\"target_path\":\"/var/lib/kubelet/pods/54bc69c3-397a-45a9-a05e-edb03649975f/volumes/kubernetes.io~csi/pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db/mount\",\"volume_capability\":{\"access_mode\":{\"mode\":\"SINGLE_NODE_WRITER\"},\"mount\":{\"fs_type\":\"xfs\",\"mount_flags\":[\"noatime\"]}},\"volume_context\":{\"backup\":\"1\",\"cache\":\"writethrough\",\"csi.storage.k8s.io/ephemeral\":\"false\",\"csi.storage.k8s.io/pod.name\":\"babybuddy-0\",\"csi.storage.k8s.io/pod.namespace\":\"default\",\"csi.storage.k8s.io/pod.uid\":\"54bc69c3-397a-45a9-a05e-edb03649975f\",\"csi.storage.k8s.io/serviceAccount.name\":\"default\",\"ssd\":\"true\",\"storage\":\"fast1\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1741900554025-7731-csi.proxmox.sinextra.dev\"},\"volume_id\":\"pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\"}"
I0313 22:23:11.142032 1 mount_linux.go:270] Mounting cmd (mount) with arguments (-t xfs -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount /var/lib/kubelet/pods/54bc69c3-397a-45a9-a05e-edb03649975f/volumes/kubernetes.io~csi/pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db/mount)
I0313 22:23:11.143453 1 mount_linux.go:270] Mounting cmd (mount) with arguments (-t xfs -o bind,remount,rw /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/2e5a88b1d2d58ad922056f868e18e89e22e9eea91cc725ebeeaa2f12e32457ae/globalmount /var/lib/kubelet/pods/54bc69c3-397a-45a9-a05e-edb03649975f/volumes/kubernetes.io~csi/pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db/mount)
I0313 22:23:11.144665 1 node.go:374] "NodePublishVolume: volume published for pod" device="/dev/disk/by-id/wwn-0x5056432d49443032" pod="default/babybuddy-0"
I0313 22:23:26.441046 1 node.go:513] "NodeGetCapabilities: called"
I0313 22:23:26.441770 1 node.go:406] "NodeGetVolumeStats: called" args="{\"volume_id\":\"pve-cluster/pve-2/fast1/vm-9999-pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db\",\"volume_path\":\"/var/lib/kubelet/pods/54bc69c3-397a-45a9-a05e-edb03649975f/volumes/kubernetes.io~csi/pvc-7f01d318-9502-49ed-bbd3-accb0b6e50db/mount\"}"
I0313 22:24:06.486203 1 node.go:513] "NodeGetCapabilities: called"
```
Side note: I had to modify the ClusterRole to get this feature working in the first place, but there does not currently seem to be a way to handle this in a GitOps fashion, as the role is hard-coded when installing via the Helm chart.
Needed to add: volumeattributesclasses.storage.k8s.io [get list watch]
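For anyone hitting the same RBAC error, a minimal sketch of the rule that needs to be merged into the controller's ClusterRole; the ClusterRole name depends on the Helm release, so it is an assumption here:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxmox-csi-plugin-controller  # assumption: actual name comes from the chart
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattributesclasses"]
    verbs: ["get", "list", "watch"]
```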
Environment
- Plugin version: v0.11.0
- Kubernetes version: v1.32.2
- CSI capacity: [kubectl get csistoragecapacities -ocustom-columns=CLASS:.storageClassName,AVAIL:.capacity,ZONE:.nodeTopology.matchLabels -A]
- CSI resource on the node: [kubectl get CSINode <node> -oyaml]
- Node describe: [kubectl describe node <node>]
- OS version: Talos Linux 1.9.4
- Proxmox: 8.3.4
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request