use leader election API if available #460
Comments
Here, "leader election API" means the coordination.k8s.io API? @DirectXMan12
yeah
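For context, a minimal sketch of what a coordination.k8s.io Lease carries, using the k8s.io/api types; the namespace, name, and holder identity below are illustrative:

```go
package main

import (
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative values only: the holder identity and renew time are
	// first-class Lease spec fields, whereas the ConfigMap-based lock
	// serializes the same record into an annotation on the ConfigMap.
	holder := "my-controller-6d9f"
	duration := int32(15)
	now := metav1.NewMicroTime(time.Now())
	lease := coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "my-controller"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
			RenewTime:            &now,
		},
	}
	fmt.Printf("%s/%s held by %s\n", lease.Namespace, lease.Name, *lease.Spec.HolderIdentity)
}
```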
Hi @DirectXMan12, I can provide a PR for that 🙂
/assign |
yeah, still interested. Will have to coordinate with #444
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen |
/help |
@vincepri: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What's the status on this issue? If using the lease API based on whether it's available or not is not an option, I would still be interested in having the LE resource lock configurable. This would allow controllers to make the LE lock configurable just like in kcm, ccm, etc. If you like the proposed approach, I would definitely be willing to try to solve it in a PR :)
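A rough sketch of what such a configurable lock could look like, built directly on client-go's resourcelock package; the function name `newLock` and the lock object name are made up, and this is not controller-runtime's actual API:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// newLock builds the leader-election lock from a lockType string supplied
// via config, e.g. resourcelock.ConfigMapsResourceLock ("configmaps") or
// resourcelock.LeasesResourceLock ("leases").
func newLock(lockType, ns, id string, cs kubernetes.Interface) (resourcelock.Interface, error) {
	return resourcelock.New(
		lockType,
		ns,
		"my-controller-leader", // hypothetical lock object name
		cs.CoreV1(),
		cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
}
```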
@DirectXMan12 @vincepri |
cc @alvaroaleman, given that there was some discussion on #600. How we approach this problem depends on what we want our compatibility story to look like. I'm fine discussing the best way forward that makes sense for most of our users, although I want to make sure we don't break the world at the same time. If we can get a pluggable leader election strategy, that'd be the best way to keep backward-compatible behavior.
Yeah, maybe just expose the strategy as a config option? It's possible to e.g. require both a configmap and a leader lease for a transition. I was first thinking that we should maybe do that for a couple of releases, but if people skip those releases, they might end up with two parallel leaders due to the changed strategy. Making it configurable would avoid that. The only drawback is that we will forever have a default that isn't great.
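For reference, client-go already ships a multilock along these lines; a minimal sketch, assuming the same imports as the snippet above (the helper name and resource names are illustrative):

```go
package main

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// newTransitionLock uses client-go's "configmapsleases" multilock, which
// only grants leadership when BOTH the legacy ConfigMap and the new Lease
// are acquired, so an old-strategy leader and a new-strategy leader cannot
// coexist during a rollout.
func newTransitionLock(cs kubernetes.Interface) (resourcelock.Interface, error) {
	id, err := os.Hostname()
	if err != nil {
		return nil, err
	}
	return resourcelock.New(
		resourcelock.ConfigMapsLeasesResourceLock, // multilock: configmap + lease
		"kube-system", "my-controller-leader",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
}
```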
The other solution would be to reduce the range of Kubernetes versions we support, or rather, raise the minimum version we require.
This doesn't really have anything to do with Kubernetes versions. This was added in 1.11 or something IIRC, so presumably everyone has it. The issue is that if we change the strategy, the next time something gets deployed there might be two active leaders: one with the old strategy, one with the new strategy. Furthermore, changing the default won't result in a compile error or anything, just in a runtime "we broke a certain guarantee".
Ah, I see what you mean, I misunderstood the earlier comment. Let's discuss it at the community meeting?
sgtm
We discussed this at the community meeting on Aug 27 and concluded that:
Sounds great!
We should use the new leader election API when available. We can check whether it's around via discovery, and then use it if available, falling back to configmaps if not (see the sketch below).
/kind feature
/priority backlog
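A minimal sketch of that discovery check, using client-go's discovery client; the helper name `chooseLockType` and the exact fallback policy are assumptions, not the implementation that eventually landed:

```go
package main

import (
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// chooseLockType probes the API server for coordination.k8s.io/v1 and picks
// the leader-election lock type accordingly.
func chooseLockType(cfg *rest.Config) (string, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return "", err
	}
	// ServerResourcesForGroupVersion returns a NotFound error when the API
	// server does not serve the group/version at all.
	if _, err := dc.ServerResourcesForGroupVersion("coordination.k8s.io/v1"); err != nil {
		if errors.IsNotFound(err) {
			return resourcelock.ConfigMapsResourceLock, nil // fall back to configmaps
		}
		return "", err
	}
	return resourcelock.LeasesResourceLock, nil
}
```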