Add option to also deregister the terminating instance from all ELBs #5

piontec wants to merge 5 commits into pusher:master
Conversation
JoelSpeed left a comment
I've just got a few minor questions on this but am fairly happy with the PR in general.
Would you be able to add something to the ReadMe on this new feature?
Out of interest, have you tried `externalTrafficPolicy: Local` on your Kubernetes services? We don't see this problem, and I believe it's because we have the external traffic policy set. It means that kube-proxy only responds to requests if the service target exists on the node, so draining the node causes ELB health check failures.
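For reference, the suggestion above corresponds to setting the traffic policy on the Service object. A minimal sketch (the service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # kube-proxy only routes to endpoints on the local node
  selector:
    app: my-app           # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```

With this set, a drained node stops passing ELB health checks immediately, since kube-proxy on that node no longer has local endpoints to answer with.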
```
## Usage

### Deploy to Kubernetes
A docker image is available at `quay.io/pusher/k8s-spot-termination-handler`.
```
Out of interest, why are you removing this comment?
```yaml
verbs:
- get
- update
- patch
```
I can't see any reason to add this extra verb unless you've run into errors with RBAC using this helper?
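For context, the verbs above live in a rule inside the handler's RBAC role. A minimal sketch of what such a rule might look like (the role name is a placeholder, and the assumption that the rule targets `nodes` is mine, based on the handler needing to cordon and drain nodes):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spot-termination-handler   # placeholder name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs:
      - get
      - update
      - patch
```

Whether all three verbs are needed depends on how the cordon is applied: `kubectl`-style cordoning updates the node object, while a strategic-merge patch only needs `patch`.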
```python
print('Node Drain successful')
else:
    print('Node drain failed, will retry')
    continue
```
Is this continue necessary?
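For context, a `continue` as the last statement of a loop body is redundant: the loop advances to the next iteration anyway. A minimal sketch of such a retry loop (the `drain_node` callable and the retry/delay parameters are hypothetical, not the PR's actual code):

```python
import time

def drain_with_retry(drain_node, retries=5, delay=10):
    """Retry draining until it succeeds or retries are exhausted.

    `drain_node` is a hypothetical callable returning True on success.
    """
    for _ in range(retries):
        if drain_node():
            print('Node drain successful')
            return True
        print('Node drain failed, will retry')
        time.sleep(delay)
        # A bare `continue` here would be redundant: control falls
        # through to the next iteration regardless.
    return False
```

Dropping the trailing `continue` does not change behavior; it only removes a no-op statement.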
@piontec: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This becomes essential when instances are exposed via an ELB: after receiving the termination signal, the instance is drained and will soon be terminated. Still, kube-proxy keeps running on the instance, so it is not removed from the ELB. When the instance stops, the ELB must detect the failure on its own, which introduces a delay of at least 10s. With this PR, terminating instances can deregister themselves from all ELBs before they are reclaimed by AWS.
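The deregistration step described above could be sketched roughly as below. This is an illustrative sketch, not the PR's actual implementation: it assumes classic ELBs (ALB/NLB target groups would use the `elbv2` API and `deregister_targets` instead), and the function name is hypothetical. The `elb_client` argument would be a `boto3.client('elb')` instance in practice.

```python
def deregister_from_all_elbs(elb_client, instance_id):
    """Remove instance_id from every classic ELB it is registered with.

    Sketch only: `elb_client` is expected to behave like a boto3
    'elb' client (classic load balancers).
    """
    response = elb_client.describe_load_balancers()
    for lb in response['LoadBalancerDescriptions']:
        registered = {i['InstanceId'] for i in lb['Instances']}
        if instance_id in registered:
            elb_client.deregister_instances_from_load_balancer(
                LoadBalancerName=lb['LoadBalancerName'],
                Instances=[{'InstanceId': instance_id}],
            )
```

In the handler this would be invoked between the drain and the actual instance termination, e.g. `deregister_from_all_elbs(boto3.client('elb'), instance_id)`, so the ELB stops routing to the instance without waiting for its own health checks to fail.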