Description
During an IP pool migration using Calico in VXLAN mode, we encountered unexpected network policy behavior due to a missing step in the official documentation: updating the clusterCIDR value in the kube-proxy ConfigMap to match the new Calico IP pool.
Expected Behavior
The Calico IPAM migration documentation (https://docs.tigera.io/calico/latest/networking/ipam/migrate-pools) should clearly state that during the migration process, the clusterCIDR value in the kube-proxy ConfigMap (located in the kube-system namespace) must be updated to match the new IP pool defined in Calico.
Current Behavior
The documentation does not mention the need to update the clusterCIDR in the kube-proxy configuration. As a result, users may overlook this critical step during migration.
Possible Solution
Add a clear note in the migration guide instructing users to:
- Update the clusterCIDR field in the kube-proxy ConfigMap to reflect the new Calico IP pool (e.g., 172.20.0.0/16) and restart kube-proxy (see the sketch after this list).
- Ensure this change is applied before or during the migration to avoid network policy issues.
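For reference, a minimal sketch of what that step could look like on a kubeadm-provisioned cluster is below. The ConfigMap name kube-proxy and the config.conf key are kubeadm defaults and are assumptions here; other installers may store the KubeProxyConfiguration elsewhere.

```shell
# 1. Check the clusterCIDR kube-proxy is currently using
#    (assumes kubeadm's "kube-proxy" ConfigMap with a "config.conf" key).
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep clusterCIDR

# 2. Edit the ConfigMap and set clusterCIDR to the new Calico pool,
#    e.g. clusterCIDR: "172.20.0.0/16".
kubectl -n kube-system edit configmap kube-proxy

# 3. Restart kube-proxy so the new value is picked up.
kubectl -n kube-system rollout restart daemonset kube-proxy
kubectl -n kube-system rollout status daemonset kube-proxy
```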
Context
When migrating IP pools in Calico (VXLAN mode), if the clusterCIDR in kube-proxy is not updated to match the new pool, certain network policies may block traffic. Specifically:
- Pod-to-Pod communication continues to work.
- Pod-to-Service communication fails when network policies are defined.
This behavior can lead to significant service disruptions and is difficult to diagnose without prior knowledge of the clusterCIDR dependency.
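A quick way to spot the mismatch is to compare the two CIDRs directly. This is only a sketch, assuming calicoctl is installed and a kubeadm-style kube-proxy ConfigMap:

```shell
# CIDRs of the Calico IP pools currently in use.
calicoctl get ippool -o wide

# clusterCIDR that kube-proxy was configured with.
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep clusterCIDR

# If the two values disagree after the pool migration, kube-proxy is still
# operating on the old pod CIDR, which can surface as the Pod-to-Service
# failures described above.
```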
Your Environment
Calico Version: All
Calico Mode: VXLAN
Kubernetes Versions: All supported versions
Observed Symptoms: Network policies block traffic through Kubernetes services; direct Pod-to-Pod communication unaffected.