Server-side Apply problem #1311
Comments
What are those GetUID functions?
Oh, the second isn't a server-side apply. Check the patch diff you are sending by logging it or similar; it will probably show something unexpected.
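For reference, a minimal sketch of one way to do that logging with controller-runtime, assuming a version where `client.Object` exists; `patchWithLogging` is a hypothetical helper, not code from this thread:

```go
package main

import (
	"context"
	"log"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchWithLogging logs the exact patch bytes before sending them, which is
// useful for spotting unexpected fields in the request body.
func patchWithLogging(ctx context.Context, c client.Client, obj client.Object, patch client.Patch) error {
	// client.Patch exposes the raw bytes that will be sent to the API server.
	data, err := patch.Data(obj)
	if err != nil {
		return err
	}
	log.Printf("sending %s patch for %s/%s: %s", patch.Type(), obj.GetNamespace(), obj.GetName(), data)
	return c.Patch(ctx, obj, patch)
}
```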
Thank you for your response.
I do it in my rabbitmq-operator, https://github.com/coderanger/rabbitmq-operator/blob/main/controllers/rabbituser.go, though the code is probably going to be hard to follow. Also it's been a huge source of bugs and I really want to get rid of it :D
When the controller pod restarts, I get the following message:
The change leads to the deployment's pods restarting, because the C1 controller changes the replicas of the deployment although …
Field manager names should not be UUIDs, they should be names, usually the name of the controller (or subcontroller element).
The conflict probably means that your apply contains those fields. When using SSA, one should only send the fields they care about. In this case your operator should only specify (I think) replicas, and not other fields. If two controllers are intended to manage the same resource, they should never specify a patch that would make their ownership collide. SSA is not meant for sending the full object if you don't care about the full object in your workflow. I don't know how you're building your object here; are you applying the full object?
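A minimal sketch of what such a fields-you-care-about apply could look like, assuming the scaling controller only wants to own `spec.replicas`; the field manager name `scaler-controller` and the helper name are placeholders (per the comment above, use a stable controller name, not a UUID):

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleDeployment sends a server-side apply patch that contains only the
// fields this controller cares about, so it never claims ownership of the
// rest of the Deployment spec.
func scaleDeployment(ctx context.Context, c client.Client, namespace, name string, replicas int64) error {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apps/v1",
		"kind":       "Deployment",
		"metadata": map[string]interface{}{
			"name":      name,
			"namespace": namespace,
		},
		"spec": map[string]interface{}{
			"replicas": replicas,
		},
	}}
	// client.Apply makes this a server-side apply; only the fields present in
	// obj are claimed by this field manager.
	return c.Patch(ctx, obj, client.Apply,
		client.FieldOwner("scaler-controller"),
		client.ForceOwnership)
}
```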
In my case, I have two controllers. When the controllers restart, I get the following event:
That should be the representation of a restart for all pods: you scale down and then back up. Make sure both controllers have different fieldManagers, or they might bounce ownership around (a fieldManager has to send the fields it cares about on every apply, or they will get dropped). And make sure that the second controller only sends the replicas field.
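One way to check who owns what is to dump the Deployment's `managedFields`; a hedged sketch (the helper name is made up):

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// printFieldOwners fetches a Deployment and prints which field manager owns
// which fields, according to metadata.managedFields.
func printFieldOwners(ctx context.Context, c client.Client, key types.NamespacedName) error {
	var dep appsv1.Deployment
	if err := c.Get(ctx, key, &dep); err != nil {
		return err
	}
	for _, mf := range dep.GetManagedFields() {
		owned := "<none>"
		if mf.FieldsV1 != nil {
			// FieldsV1.Raw is a JSON encoding of the owned field paths.
			owned = string(mf.FieldsV1.Raw)
		}
		fmt.Printf("manager=%q operation=%s fields=%s\n", mf.Manager, mf.Operation, owned)
	}
	return nil
}
```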
Is there a way to remove a field from the deployment, or to fill in the deployment's fields based on the managedFields diff between the desired deployment and the current deployment? I have two kinds of controllers. Assume controller 1 uses this Apply function for the Workload to create or update the deployment:
This Patch function is used by the Scaler to scale the deployment:
I think I don't understand your setup yet. Can you clarify the goal you want to reach?
I know what my problem is. In my WorkloadController, which controls the deployment, SSA should be used, but I have to pre-process the deployment that the WorkloadController renders when the deployment already exists. At first I thought Kubernetes SSA would skip the replicas field whose managedFields entry belongs to the ScalerController. I need to find a way to deal with fields that have been modified by the ScalerController. Thanks @coderanger @kwiesmueller. It would help if SSA could provide something like the client.SkipConflictFields parameter mentioned in the issue description.
That would be a feature request for upstream, but it seems unlikely. As mentioned, you shouldn't be sending a replicas field at all. A common mistake with Apply patches is to use a normal typed object; you almost always want to be using an unstructured. Take a look at the code in https://github.com/coderanger/controller-utils/blob/main/components/template.go (or just use it directly).
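A rough sketch of that advice, converting the rendered Deployment to an unstructured object and pruning the fields this controller must not own before applying; names like `workload-controller` and `applyRendered` are illustrative, not from the thread or the linked code:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyRendered applies a rendered Deployment without claiming ownership of
// spec.replicas, so a separate scaling controller keeps that field.
func applyRendered(ctx context.Context, c client.Client, dep *appsv1.Deployment) error {
	raw, err := runtime.DefaultUnstructuredConverter.ToUnstructured(dep)
	if err != nil {
		return err
	}
	u := &unstructured.Unstructured{Object: raw}
	u.SetAPIVersion("apps/v1")
	u.SetKind("Deployment")
	// Remove the field this controller must not own; the ScalerController
	// keeps ownership of spec.replicas and its value survives our applies.
	unstructured.RemoveNestedField(u.Object, "spec", "replicas")
	// Typed objects serialize zero values (status, creationTimestamp, ...);
	// dropping them keeps the apply patch minimal. More pruning may be needed
	// for nested defaults, depending on the rendered template.
	unstructured.RemoveNestedField(u.Object, "status")
	unstructured.RemoveNestedField(u.Object, "metadata", "creationTimestamp")
	return c.Patch(ctx, u, client.Apply,
		client.FieldOwner("workload-controller"),
		client.ForceOwnership)
}
```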
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-contributor-experience at kubernetes/community. /close
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
In our internal scenario, we developed a PaaS platform based on OAM. Our business teams simply declare the Workload and Traits they use.

There are two types of Workload: ServerWorkload (online) and TaskWorkload (offline). Traits include ManualScalerTrait, AutoScalerTrait, LoadBalanceTrait, etc.

The ServerWorkload renders a Deployment and creates it. When the ManualScalerTrait observes that this Deployment has been created, it changes the replicas of this Deployment.

In practice, however, we found that once our Operator was restarted, the pods that were already running under the Deployment would be restarted too. We found the following event:

Because the replicas in the Deployment rendered by the ServerWorkload is 0 but the ManualScalerTrait sets it to 3, the Deployment's replicas change from 3 to 0 and then back to 3, causing a restart.

Given this implementation mechanism, it is difficult for the ServerWorkload to patch the Deployment, because it renders a complete Deployment template.

If the API server could provide a parameter such as client.SkipConflictFields, that would be a perfect solution.

controller-runtime version: 0.6.2