Bug Report
Description
Currently at my workplace we use FQDNs as the node names in Proxmox. We are setting up your CSI plugin and are running into an issue where it looks up the short hostname instead of the FQDN in Proxmox when provisioning a PVC.
There are no errors in the proxmox-csi controller or node pods; we only see the errors on the workload pod when it is provisioned with a PVC using the storageclass proxmox-data-zfs (see the events below).
We are using the Helm chart on Talos version 1.10.4 with FluxCD.
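For illustration, this is roughly how the mismatch shows up for us; a minimal sketch assuming shell access to both the cluster and a Proxmox node (jq is only used for readability):

# Kubernetes node names as registered by the kubelet (short hostnames in our case)
kubectl get nodes -o name

# VM names as Proxmox knows them (FQDNs in our case); run on any Proxmox node
pvesh get /cluster/resources --type vm --output-format json | jq -r '.[].name'

The first command lists dev-worker04 and friends, while the Proxmox list only contains the FQDN-named VMs, which matches the attach error shown below.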
Logs
Controller: [kubectl logs -c proxmox-csi-plugin-controller proxmox-csi-plugin-controller-...]
Defaulted container "proxmox-csi-plugin-controller" out of: proxmox-csi-plugin-controller, csi-attacher, csi-provisioner, csi-resizer, liveness-probe
I0717 08:30:57.944463 1 controller.go:625] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve\",\"topology.kubernetes.io/zone\":\"pve-node-b\"}},\"parameters\":{\"csi.storage.k8s.io/fstype\":\"ext4\",\"storage\":\"K8S-dev-cluster-zfs\"}}"
I0717 08:30:57.944744 1 controller.go:637] "GetCapacity" region="pve" zone="pve-node-b" storageID="K8S-dev-cluster-zfs"
I0717 08:31:55.827763 1 controller.go:625] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve\",\"topology.kubernetes.io/zone\":\"pve-node-a\"}},\"parameters\":{\"csi.storage.k8s.io/fstype\":\"ext4\",\"storage\":\"K8S-dev-cluster-zfs\"}}"
I0717 08:31:55.827829 1 controller.go:637] "GetCapacity" region="pve" zone="pve-node-a" storageID="K8S-dev-cluster-zfs"
I0717 08:31:56.576886 1 controller.go:625] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve\",\"topology.kubernetes.io/zone\":\"pve-node-b\"}},\"parameters\":{\"csi.storage.k8s.io/fstype\":\"ext4\",\"storage\":\"K8S-dev-cluster-zfs\"}}"
I0717 08:31:56.576904 1 controller.go:637] "GetCapacity" region="pve" zone="pve-node-b" storageID="K8S-dev-cluster-zfs"
I0717 08:31:57.287858 1 controller.go:625] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve\",\"topology.kubernetes.io/zone\":\"pve-node-c\"}},\"parameters\":{\"csi.storage.k8s.io/fstype\":\"ext4\",\"storage\":\"K8S-dev-cluster-zfs\"}}"
I0717 08:31:57.287975 1 controller.go:637] "GetCapacity" region="pve" zone="pve-node-c" storageID="K8S-dev-cluster-zfs"
I0717 08:31:58.050646 1 controller.go:625] "GetCapacity: called" args="{\"accessible_topology\":{\"segments\":{\"topology.kubernetes.io/region\":\"pve\",\"topology.kubernetes.io/zone\":\"pve-node-d\"}},\"parameters\":{\"csi.storage.k8s.io/fstype\":\"ext4\",\"storage\":\"K8S-dev-cluster-zfs\"}}"
I0717 08:31:58.050665 1 controller.go:637] "GetCapacity" region="pve" zone="pve-node-d" storageID="K8S-dev-cluster-zfs"

Node: [kubectl logs -c proxmox-csi-plugin-node proxmox-csi-plugin-node-...]
Defaulted container "proxmox-csi-plugin-node" out of: proxmox-csi-plugin-node, csi-node-driver-registrar, liveness-probe
I0717 08:08:26.757391 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0717 08:08:26.757407 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0717 08:08:26.757412 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0717 08:08:26.757415 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0717 08:08:26.757419 1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0717 08:08:26.764787 1 mount_linux.go:324] Detected umount with safe 'not mounted' behavior
I0717 08:08:26.764855 1 main.go:140] Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0717 08:08:26.868785 1 identity.go:40] "GetPluginInfo: called"
I0717 08:08:26.976748 1 identity.go:40] "GetPluginInfo: called"
I0717 08:08:27.757009 1 node.go:540] "NodeGetInfo: called"

Events from a mysql pod:
Type     Reason              Age    From                     Message
----     ------              ----   ----                     -------
Normal   Scheduled           2m42s  default-scheduler        Successfully assigned dev-pxc-8/pxc-db-5-7-pxc-0 to dev-worker04
Warning  FailedAttachVolume  119s   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-c73b059f-c8a5-4d0b-9bc0-4ffd3354a967" : rpc error: code = Internal desc = vm 'dev-worker04' not found
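For completeness, the node name that the attach request carries can also be seen on the VolumeAttachment objects (plain kubectl, nothing plugin-specific):

# Shows which PV is being attached to which Kubernetes node name
kubectl get volumeattachments \
  -o custom-columns=PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached

NODE here is the Kubernetes node name (dev-worker04), which appears to be what the controller then tries to resolve as a VM name in Proxmox.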
Helm release (I have redacted the hostname):

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: proxmox-csi-plugin
  namespace: storage
spec:
  interval: 2h
  timeout: 15m
  chart:
    spec:
      chart: proxmox-csi-plugin
      sourceRef:
        kind: HelmRepository
        name: proxmox-csi-plugin
        namespace: flux-system
      version: placeholder # change version in overlays or clusters
  values:
    metrics:
      enabled: true
    existingConfigSecret: proxmox-csi-config
    storageClass:
      - name: proxmox-data-zfs
        storage: K8S-dev-cluster-zfs
        reclaimPolicy: Delete
        fstype: ext4
    # Deploy Node CSI driver only on worker nodes
    node:
      nodeSelector:
        <domain>.com/role: worker
      tolerations:
        - operator: Exists
    # Deploy CSI controller only on control-plane nodes
    nodeSelector:
      <domain>.com/role: controlplane
    tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
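In case it is relevant, the Proxmox connection settings (url, token, region) come from the secret referenced by existingConfigSecret. It can be dumped like this; the config.yaml key name is an assumption based on what the chart mounts, adjust if yours differs:

# Dump the Proxmox cluster/region config used by the controller (key name assumed)
kubectl -n storage get secret proxmox-csi-config \
  -o jsonpath='{.data.config\.yaml}' | base64 -d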
Environment

- Plugin version: 0.3.11
- Kubernetes version: [kubectl version --short]
  Client Version: v1.33.2
  Kustomize Version: v5.6.0
  Server Version: v1.32.5
- CSI capacity: [kubectl get csistoragecapacities -ocustom-columns=CLASS:.storageClassName,AVAIL:.capacity,ZONE:.nodeTopology.matchLabels -A]
  CLASS              AVAIL          ZONE
  proxmox-data-zfs   5387901524Ki   map[topology.kubernetes.io/region:pve topology.kubernetes.io/zone:pve-node-b]
  proxmox-data-zfs   6133449936Ki   map[topology.kubernetes.io/region:pve topology.kubernetes.io/zone:pve-node-c]
  proxmox-data-zfs   6164295984Ki   map[topology.kubernetes.io/region:pve topology.kubernetes.io/zone:pve-node-a]
  proxmox-data-zfs   7360642248Ki   map[topology.kubernetes.io/region:pve topology.kubernetes.io/zone:pve-node-d]
- CSI resource on the node: [kubectl get CSINode <node> -oyaml]
- Node describe: [kubectl describe node <node>]
- OS version: [cat /etc/os-release]
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request