OCPBUGS-60074: UPSTREAM: 132028: podresources: list: use active pods #2391
Conversation
The podresources API `List` implementation uses the internal data of the resource managers as its source of truth. Looking at the implementation here: https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/apis/podresources/server_v1.go#L60 we take care of syncing the device allocation data before querying the device manager for its pod->devices assignment. This is needed because otherwise the device manager (like all the other resource managers) would do its cleanup asynchronously, and the `List` call would return incorrect data. But we do no such syncing for either CPUs or memory, so when we report those we get stale data, as issue kubernetes#132020 demonstrates.

For the CPU manager, however, we have the reconcile loop, which cleans up the stale data periodically. It turns out this timing interplay is the reason the existing issue kubernetes#119423 seemed fixed (see: kubernetes#119423 (comment)). It really is just timing: if in the reproducer we set `cpuManagerReconcilePeriod` to a very high value (>= 5 minutes), the issue still reproduces against the current master branch (https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/test/e2e_node/podresources_test.go#L983).

Taking a step back, we can see multiple problems:
1. we do not sync the resource managers' internal data before querying for pod assignments (no removeStaleState calls); but most importantly
2. the `List` call iterates over all the pods known to the kubelet. The resource managers do NOT hold resources for non-running pods, so it is better, and in fact correct, to iterate only over the active pods. This also avoids problem 1 above.

Furthermore, the resource managers all iterate over the active pods anyway.

`List` uses all the pods the kubelet knows about:
1. https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/kubelet.go#L3135 which calls into
2. https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/pod/pod_manager.go#L215

But all the resource managers use the list of active pods:
1. https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/kubelet.go#L1666 which calls into
2. https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/kubelet_pods.go#L198

So this change also makes the `List` view consistent with the resource managers' view, which is a promise of the API that is currently broken (see the sketch below for what the resulting `List` flow looks like).

We also need to acknowledge the warning in the docstring of GetActivePods. Arguably, having the endpoint use a different pod set than the resource managers, with the resulting desync, causes far more harm than good. And arguably, it is better to fix this issue in just one place rather than have `List` use a different pod set for no clear reason. For these reasons, while the warning is important, I don't think it invalidates this change per se.

We also need to acknowledge that the `List` endpoint has used the full pod list since its inception. So we add a Feature Gate to disable this fix and restore the old behavior. We plan to keep this Feature Gate for quite a long time (at least 4 more releases), considering how stable this change has been. Should a consumer of the API be broken by this change, we have the option to restore the old behavior and craft a more elaborate fix.

The old `v1alpha1` endpoint is intentionally left unmodified.
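To make the intended change concrete, here is a minimal sketch of a `List` handler that iterates only over the active pods. This is an illustration, not the upstream code: the provider interfaces and the server struct are simplified stand-ins for the real kubelet types, and only the `GetActivePods()` versus `GetPods()` switch reflects the change described above.

```go
package podresources

import (
	"context"

	v1 "k8s.io/api/core/v1"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

// Simplified stand-ins for the kubelet provider interfaces (not the real ones).
type PodsProvider interface {
	GetPods() []*v1.Pod       // every pod known to the kubelet (old behavior)
	GetActivePods() []*v1.Pod // only pods that can still hold resources (new behavior)
}

type DevicesProvider interface {
	UpdateAllocatedDevices() // sync the device manager state before reading it
	GetDevices(podUID, containerName string) []*podresourcesv1.ContainerDevices
}

type CPUsProvider interface {
	GetCPUs(podUID, containerName string) []int64
}

type listServer struct {
	pods    PodsProvider
	devices DevicesProvider
	cpus    CPUsProvider
}

// List reports the resources assigned to the containers of the active pods.
func (s *listServer) List(ctx context.Context, _ *podresourcesv1.ListRequest) (*podresourcesv1.ListResponse, error) {
	s.devices.UpdateAllocatedDevices()

	// The fix: iterate over the same pod set the resource managers track,
	// instead of every pod the kubelet knows about (s.pods.GetPods()).
	pods := s.pods.GetActivePods()

	podResources := make([]*podresourcesv1.PodResources, 0, len(pods))
	for _, pod := range pods {
		pRes := &podresourcesv1.PodResources{Name: pod.Name, Namespace: pod.Namespace}
		for _, container := range pod.Spec.Containers {
			pRes.Containers = append(pRes.Containers, &podresourcesv1.ContainerResources{
				Name:    container.Name,
				Devices: s.devices.GetDevices(string(pod.UID), container.Name),
				CpuIds:  s.cpus.GetCPUs(string(pod.UID), container.Name),
			})
		}
		podResources = append(podResources, pRes)
	}
	return &podresourcesv1.ListResponse{PodResources: podResources}, nil
}
```

Because the active pod set is exactly what the resource managers operate on, `List` no longer needs extra stale-state cleanup for CPUs or memory to stay consistent with them.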
***RELEASE-4.19 BACKPORT NOTE*** dropped the versioned feature gate entry, as we don't have versioned feature gates in this version. Signed-off-by: Francesco Romani <[email protected]>
In order to facilitate backports (see OCPBUGS-56785), we prefer to remove the feature gate that upstream added as a safety measure, disabling that escape hatch in this branch. This commit must be dropped once we rebase on top of 1.34. Signed-off-by: Francesco Romani <[email protected]>
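For illustration, continuing the simplified sketch above, the carry described in this commit amounts to dropping the gate check and always using the active pods; the feature gate name shown in the comment is an assumption, not necessarily the upstream identifier.

```go
// Hypothetical carry sketch (gate name assumed): upstream keeps an escape hatch
// behind a feature gate, while this backport applies the fix unconditionally.
func (s *listServer) listablePods() []*v1.Pod {
	// Upstream (escape hatch kept, assumed gate name):
	//
	//   if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResourcesListUseActivePods) {
	//       return s.pods.GetActivePods()
	//   }
	//   return s.pods.GetPods()
	//
	// This carry: always report only the active pods.
	return s.pods.GetActivePods()
}
```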
@ffromani: the contents of this pull request could not be automatically validated. The following commits are valid:
The following commits could not be validated and must be approved by a top-level approver:
/jira cherrypick OCPBUGS-56785
@ffromani: An error was encountered cloning the bug for cherrypick for bug OCPBUGS-56785 on the Jira server at https://issues.redhat.com/. No known errors were detected, please see the full error message for details.
No Link Issue Permission for issue 'OCPBUGS-60074': request failed. Please analyze the request body for more details. Status code: 401:
Please contact an administrator to resolve this issue, then request a bug refresh with `/jira refresh`.
@ffromani: An error was encountered adding this pull request to the external tracker bugs for bug OCPBUGS-60074 on the Jira server at https://issues.redhat.com/. No known errors were detected, please see the full error message for details.
failed to add remote link: failed to add link: No Link Issue Permission for issue 'OCPBUGS-60074'.: request failed. Please analyze the request body for more details. Status code: 403:
Please contact an administrator to resolve this issue, then request a bug refresh with `/jira refresh`.
/jira refresh
@ffromani: This pull request references Jira Issue OCPBUGS-60074, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker.
/retest-required
/retest |
/jira refresh |
/retest |
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull-request has been approved by: ffromani, haircommander.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/retest-required
@ffromani: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Merged commit 97b7f2e into openshift:release-4.19
@ffromani: Jira Issue OCPBUGS-60074: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-60074 has been moved to the MODIFIED state.
@ffromani: #2391 failed to apply on top of branch "release-4.18":
[ART PR BUILD NOTIFIER] Distgit: openshift-enterprise-pod
[ART PR BUILD NOTIFIER] Distgit: kube-proxy
[ART PR BUILD NOTIFIER] Distgit: openshift-enterprise-hyperkube
[ART PR BUILD NOTIFIER] Distgit: ose-installer-kube-apiserver-artifacts
/cherry-pick release-4.18
@haircommander: #2391 failed to apply on top of branch "release-4.18":
Fix included in accepted release 4.19.0-0.nightly-2025-08-14-135013
openshift/kubernetes#2391 has landed in openshift, which means the behavior described there is enabled by default, and we want the scheduler to adapt to it by default if and only if the user didn't explicitly set the informer mode in the CR.

Until now, the scheduler has worked in a dedicated informer mode, which is meant to take into account pods in a terminal state (such as pods that ran and completed) in the PFP computation, to ensure it matches the PFP computed by the RTE and reported in NRT. The new kubelet behavior instead ignores such pods and accounts only for active pods. So if this behavior is enabled on the RTE-NRT side while the scheduler keeps the dedicated default, the PFP computed from the scheduler's POV will diverge from the one expected from NRT's POV, and a scheduler stall will happen that never recovers.

In this commit we adjust the informer default value to Shared (instead of Dedicated) only if both of the conditions below are met (see the sketch below):
1. the cluster version supports the fixed kubelet, i.e. the cluster version is equal to or greater than the known-to-be-fixed OCP version;
2. the user didn't set the Spec.SchedulerInformer field in the NRS CR.

This modification enables the shared mode, which in turn includes only running pods (= active pods) in the PFP computation, ultimately allowing PFP alignment with NRT.

Signed-off-by: Shereen Haj <[email protected]> (cherry picked from commit 6a56840)
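As a rough sketch of the defaulting described above; the type names, field names, and the minimum fixed OCP version are assumptions for illustration, not the operator's exact API.

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// SchedulerSpec is a trimmed-down stand-in for the relevant part of the CR spec.
type SchedulerSpec struct {
	SchedulerInformer string // "", "Shared" or "Dedicated"
}

// minFixedVersion is the first OCP version assumed to ship the fixed kubelet.
var minFixedVersion = semver.MustParse("4.19.0")

func defaultSchedulerInformer(spec SchedulerSpec, clusterVersion semver.Version) string {
	// 1. Respect an explicit user choice in the CR.
	if spec.SchedulerInformer != "" {
		return spec.SchedulerInformer
	}
	// 2. On clusters whose kubelet already lists only active pods, default to
	// the shared informer so the scheduler's PFP matches the PFP reported in NRT.
	if clusterVersion.GTE(minFixedVersion) {
		return "Shared"
	}
	// Older clusters keep the dedicated informer, which also accounts for
	// terminal pods, matching the old kubelet List behavior.
	return "Dedicated"
}

func main() {
	fmt.Println(defaultSchedulerInformer(SchedulerSpec{}, semver.MustParse("4.19.1"))) // Shared
	fmt.Println(defaultSchedulerInformer(SchedulerSpec{}, semver.MustParse("4.18.5"))) // Dedicated
	fmt.Println(defaultSchedulerInformer(SchedulerSpec{SchedulerInformer: "Dedicated"}, semver.MustParse("4.19.1"))) // Dedicated (user choice wins)
}
```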
What type of PR is this?
/kind bug
What this PR does / why we need it:
Backport of the upstream fix for https://issues.redhat.com/browse/OCPBUGS-56785
Which issue(s) this PR fixes:
Fixes https://issues.redhat.com/browse/OCPBUGS-56785
Special notes for your reviewer:
NA
Does this PR introduce a user-facing change?