e2e: serial: Test for performance profile update and NRT reconciliation #1255
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: SargunNarula. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Thanks! Below is the initial review.
```go
newTMPolicy := getNewTopologyManagerPolicyValue(initialTMPolicy)
Expect(initialTMPolicy).ToNot(Equal(newTMPolicy), "new TM policy should differ from the initial one")

By("retrieving the PerformanceProfile configmap from the management cluster")
```
We need an extra verification layer before this: how do you know the ConfigMap is for the PerformanceProfile? And is NROP using the PerformanceProfile config or a KubeletConfig?
Since the ConfigMap is retrieved from the tuningConfig field under the NodePool spec, it effectively serves as the source of the PerformanceProfile configuration. In the NROP-HCP environment, there is no separate KubeletConfig as far as I’m aware. On the hosted cluster, the KubeletConfig ConfigMap is generated by the HyperShift operator and is marked as immutable: true.
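A sketch of the retrieval being described, assuming the HyperShift v1beta1 NodePool API and a controller-runtime client for the management cluster; the helper name and the `mngClient`/`nodePool` variables are illustrative:

```go
import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hypershiftv1beta1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

// getTuningConfigMap resolves the ConfigMap referenced by the NodePool's
// spec.tuningConfig, which embeds the PerformanceProfile manifest.
func getTuningConfigMap(ctx context.Context, mngClient client.Client, nodePool *hypershiftv1beta1.NodePool) (*corev1.ConfigMap, error) {
	if len(nodePool.Spec.TuningConfig) == 0 {
		return nil, fmt.Errorf("nodepool %s/%s has no tuningConfig entries", nodePool.Namespace, nodePool.Name)
	}
	cm := &corev1.ConfigMap{}
	key := types.NamespacedName{
		Namespace: nodePool.Namespace,
		Name:      nodePool.Spec.TuningConfig[0].Name,
	}
	if err := mngClient.Get(ctx, key, cm); err != nil {
		return nil, fmt.Errorf("failed to get tuning ConfigMap %s: %w", key, err)
	}
	return cm, nil
}
```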
So is a PerformanceProfile a must in HCP? What if the user wants to make kubelet changes without involving a PerformanceProfile? Can't this just be done via a KubeletConfig, like in OCP?
force-pushed from a47214f to 7b6837f
force-pushed from 7b6837f to e023c87
force-pushed from e023c87 to 5b7afc1
/retest
```go
Expect(modifiedYaml).ToNot(Equal(yamlData), "no changes applied to ConfigMap YAML")

targetCM.Data["tuning"] = modifiedYaml
err = e2eclient.MNGClient.Update(context.TODO(), targetCM)
```
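For context, one way `modifiedYaml` could be produced before the assertion above. This is a hypothetical sketch: it assumes the embedded PerformanceProfile carries the policy under spec.numa.topologyPolicy, and it uses a plain string replacement to keep the sketch short; a production test would decode the manifest instead:

```go
import (
	"fmt"
	"strings"
)

// buildModifiedYaml is a hypothetical sketch: swap the topology manager
// policy inside the embedded PerformanceProfile YAML. A production test
// would decode the manifest rather than edit it textually.
func buildModifiedYaml(yamlData, initialTMPolicy, newTMPolicy string) string {
	return strings.Replace(
		yamlData,
		fmt.Sprintf("topologyPolicy: %s", initialTMPolicy),
		fmt.Sprintf("topologyPolicy: %s", newTMPolicy),
		1, // replace only the first occurrence
	)
}
```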
If I'm correct, the reboot should happen on the update. Where are we waiting for the reboot?
Yes, the node reboots on a PerformanceProfile update, but we don’t wait for the reboot explicitly. Since NRT relies on the RTE DaemonSet, and RTE pods are recreated only after the node becomes Ready, waiting for the updated NRT implicitly covers the reboot. Please let me know if I’ve misunderstood anything here.
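A sketch of that implicit wait, assuming the NRT v1alpha2 API, a hosted-cluster controller-runtime client (`hostedClient` is illustrative), and that RTE publishes the policy under the `topologyManagerPolicy` attribute; the timeout values are placeholders:

```go
import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"sigs.k8s.io/controller-runtime/pkg/client"

	nrtv1alpha2 "github.com/k8stopologyawareschedwg/noderesourcetopology-api/pkg/apis/topology/v1alpha2"
)

// waitForNRTPolicy polls until every NRT object reports the new topology
// manager policy. Since RTE pods repopulate NRT only after the node is
// Ready again, this also covers the reboot window.
func waitForNRTPolicy(hostedClient client.Client, newTMPolicy string) {
	Eventually(func(g Gomega) {
		nrtList := &nrtv1alpha2.NodeResourceTopologyList{}
		g.Expect(hostedClient.List(context.TODO(), nrtList)).To(Succeed())
		g.Expect(nrtList.Items).ToNot(BeEmpty(), "no NRT objects found")
		for _, nrt := range nrtList.Items {
			g.Expect(nrt.Attributes).To(ContainElement(nrtv1alpha2.AttributeInfo{
				Name:  "topologyManagerPolicy",
				Value: newTMPolicy,
			}), "NRT %q does not report the updated policy", nrt.Name)
		}
	}).WithTimeout(30 * time.Minute).WithPolling(30 * time.Second).Should(Succeed())
}
```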
I didn't go fully into reviewing the code itself, but there's a hidden assumption here: that a performance profile is available for the test.
This is not always the case, because one can create a KubeletConfig directly.
The only valid assumption is that we have a KubeletConfig available, not a performance profile.
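One way to make that assumption explicit is to inspect the kind of the manifest embedded in the tuning ConfigMap and skip when it is not a PerformanceProfile. A sketch, reusing the `tuning` data key from the snippets above; the helper name is illustrative:

```go
import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// skipIfNotPerformanceProfile decodes only the type metadata of the embedded
// manifest and skips the test when it is not a PerformanceProfile.
func skipIfNotPerformanceProfile(targetCM *corev1.ConfigMap) {
	var tm metav1.TypeMeta
	Expect(yaml.Unmarshal([]byte(targetCM.Data["tuning"]), &tm)).To(Succeed())
	if tm.Kind != "PerformanceProfile" {
		Skip(fmt.Sprintf("tuningConfig embeds a %q, not a PerformanceProfile", tm.Kind))
	}
}
```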
…reconciliation
Signed-off-by: Sargun Narula <[email protected]>
This PR implements a test that updates the performance profile on the management cluster and verifies that the NRT objects on the hosted cluster are reconciled with the updated changes.