Informer spams cluster API after restarting #1933
Comments
After some debugging, I can see that an error is returned by the API after reconnecting, which makes the watcher terminate and restart immediately.
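That terminate-and-restart-with-no-delay loop is what floods the API server. As a rough sketch of the general idea behind a fix (not the actual code of #1934; `startWatch` below is a hypothetical stand-in for the informer's internal watch call), reconnecting with an exponential backoff keeps the request rate bounded:

```ts
// Hypothetical sketch: restart a failed watch with exponential backoff.
// `startWatch` stands in for the informer's internal watch call; it is
// not part of the @kubernetes/client-node public API.
async function watchWithBackoff(startWatch: () => Promise<void>): Promise<never> {
  let delayMs = 1_000;          // initial delay before the first retry
  const maxDelayMs = 30_000;    // cap so the backoff never grows unbounded

  for (;;) {
    try {
      await startWatch();       // resolves or rejects when the watch ends
      delayMs = 1_000;          // the watch ran: reset the backoff
    } catch (err) {
      console.error('watch terminated:', err);
    }
    // Restarting here with no delay is what produces tens of requests
    // per second; waiting before reconnecting bounds the request rate.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs = Math.min(delayMs * 2, maxDelayMs);
  }
}
```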
I propose this fix; please tell me if it seems OK, and I'll add unit tests.
Closing this as completed via #1934.
Describe the bug
The informer sends tens of watch requests per second to the cluster after the connection has been lost and then re-established.
**Client Version**
1.0.0-rc6

**Server Version**
1.29.2 (kind version 0.22.0)

To Reproduce
Steps to reproduce the behavior:
1. Create an index.ts that starts a pod informer (a minimal sketch follows these steps; the full code is in the Example Code repository below).
2. Create a pod `pod1` in the default namespace.
3. Run `npx tsc && node index.js` ==> `add pod1` is logged.
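The linked repository contains the full reproduction. For orientation, a minimal index.ts along these lines would exercise the informer (a sketch assuming the 1.x @kubernetes/client-node informer API; the actual reproduction code is in the repository linked under Example Code):

```ts
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const coreApi = kc.makeApiClient(k8s.CoreV1Api);

// List call the informer uses for its initial sync (1.x object-argument style).
const listPods = () => coreApi.listNamespacedPod({ namespace: 'default' });

const informer = k8s.makeInformer(kc, '/api/v1/namespaces/default/pods', listPods);

informer.on('add', (pod: k8s.V1Pod) => {
  console.log(`==> add ${pod.metadata?.name}`);
});
informer.on('error', (err) => {
  console.error('informer error:', err);
});

informer.start().catch((e) => console.error('failed to start informer:', e));
```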
Expected behavior
The API should not be spammed this way.
**Example Code**
See repository https://github.com/feloy/kubernetes-client-issue-1933