Describe the bug
I'm running k3s with ceph-csi-rbd. When I reboot the virtual machine on my Proxmox cluster, the console is spammed with "libceph: connect (1)10.255.255.2:6789 error -101" and "mon1 (1)10.255.255.5:6789 connect error" messages for about 5 minutes before it actually reboots.
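A quick way to check whether any RBD images are still mapped on the node right before rebooting (a rough diagnostic sketch; `rbd device list` assumes ceph-common is installed on the node):

```sh
# Check for kernel-mapped RBD images before rebooting; mapped images keep
# libceph retrying the monitors while the network is being torn down.
rbd device list              # 'rbd showmapped' on older Ceph releases
ls /sys/bus/rbd/devices      # kernel view of mapped devices, no ceph tools needed
lsblk -o NAME,MOUNTPOINT | grep rbd
```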
Environment details
- Image/version of Ceph CSI driver : Which container exactly?
- Helm chart version : 3.15.0
- Kernel version :
- Mounter used for mounting PVC (for cephFS its `fuse` or `kernel`, for rbd its `krbd` or `rbd-nbd`) :
- Kubernetes cluster version : v1.33.4+k3s1
- Ceph cluster version : 19.2.3
Steps to reproduce
Steps to reproduce the behavior:
3-node cluster with 1 control plane; Helm chart values:
```yaml
csiConfig:
  - clusterID: <HIDDEN>
    monitors:
      - '10.255.255.5:6789'
      - '10.255.255.2:6789'
      - '10.255.255.3:6789'
provisioner:
  replicaCount: 1
secret:
  create: false # Using an external secret from HashiCorp Vault
selinuxMount: false
storageClass:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  clusterID: <HIDDEN>
  create: true
  name: csi-cephrbd-sc
  pool: k8s_rbd_pool
```
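To double-check that these values end up in the config ceph-csi actually reads, the monitor list can be dumped from the generated ConfigMap (the namespace `ceph-csi-rbd` and ConfigMap name `ceph-csi-config` below are assumed chart defaults; adjust to your release):

```sh
# Show the monitor list ceph-csi uses at runtime.
kubectl -n ceph-csi-rbd get configmap ceph-csi-config \
  -o jsonpath='{.data.config\.json}'
```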
Actual results
See the screenshot above: the console is flooded with the libceph/mon connect errors for about 5 minutes before the reboot completes.
Expected behavior
A normal, quick reboot.
Logs
Ceph monitoring (on 10.255.255.5) reports normal status log messages:
2025-10-20T09:46:53.229034+0200 mgr.kvm-03 (mgr.120307575) 1956 : cluster [DBG] pgmap v1969: 417 pgs: 417 active+clean; 4.0 TiB data, 12 TiB used, 15 TiB / 27 TiB avail; 445 KiB/s rd, 1.0 MiB/s wr, 57 op/s
The provisioner doesn't log any errors (apart from VolumeSnapshot errors, since I didn't install the VolumeSnapshotClass CRDs).
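If it helps, the nodeplugin logs from the affected node can be pulled like this (label selector, namespace, and container name are assumed chart defaults):

```sh
# Collect rbd nodeplugin logs from the node that hangs during reboot.
kubectl -n ceph-csi-rbd logs -l app=ceph-csi-rbd,component=nodeplugin \
  -c csi-rbdplugin --tail=200
```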
Additional context
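One mitigation I can test to narrow this down (assuming the hang is caused by RBD maps outliving the network during shutdown): drain the node first so ceph-csi unmounts and unmaps the images before the reboot:

```sh
# Draining evicts the pods, which lets ceph-csi unmap the RBD devices cleanly
# before the VM reboots; uncordon afterwards.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# reboot the VM, then:
kubectl uncordon <node-name>
```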