Bug Report

What did you do?
A clear and concise description of the steps you took (or insert a code snippet).
Deploy the operator.yaml generated by the operator-sdk; the operator runs successfully in the OpenShift cluster.
Add an environment variable (for example KeyA=ValueA) to the running operator's Deployment to trigger a new rollout; a sketch of the change follows.
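For illustration, the change might look like the fragment below in operator.yaml; the operator name and image are hypothetical placeholders, not taken from the report:

```yaml
# Fragment of the Deployment spec in operator.yaml (name/image are hypothetical).
spec:
  template:
    spec:
      containers:
        - name: app-operator
          image: example.com/app-operator:v0.0.1
          env:
            - name: KeyA    # newly added; any change to the pod template triggers a rollout
              value: ValueA
```

The same change can be applied in place with `oc set env deployment/app-operator KeyA=ValueA` (deployment name again hypothetical).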
In this case, we expected the old container to stop and the new container to start.
But the old container does not stop as expected, and the new container keeps logging the following message because of the ConfigMap-based leader election:
2019-05-15T10:18:19.973Z INFO leader Not the leader. Waiting.
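For context, that message comes from the SDK's ConfigMap-based leader election, which the scaffolded cmd/manager/main.go invokes before starting the manager, roughly as sketched below (the lock name is a hypothetical example):

```go
package main

import (
	"context"
	"os"

	"github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
	// leader.Become creates a lock ConfigMap owned by this operator pod.
	// If another pod still owns the ConfigMap, the call blocks and logs
	// "Not the leader. Waiting." until that pod (and the ConfigMap with it)
	// is deleted.
	if err := leader.Become(context.TODO(), "app-operator-lock"); err != nil {
		os.Exit(1)
	}
	// ... construct and start the controller manager here ...
}
```

Because the lock ConfigMap is only garbage-collected when the old pod is deleted, the new pod cannot become the leader until the old one terminates.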
What did you expect to see?
The old container should exit, as with any other Deployment rollout.
What did you see instead? Under which circumstances?
The behavior only happens on OpenShift 3.9.
On OpenShift 3.11, the old container exits normally.
Environment

operator-sdk version: v0.7.0

go version: go1.12.4 darwin/amd64

Kubernetes version information: OpenShift 3.9 (and 3.11 for comparison; see above)

Possible Solution

Note: the generated operator.yaml does not specify a deployment strategy, which means the default, RollingUpdate, applies.
We tried changing the deployment strategy to Recreate (sketched below), but the behavior is the same.
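For reference, the change we tried amounts to the following fragment of the Deployment spec in operator.yaml:

```yaml
# Deployment spec fragment: force Recreate instead of the default RollingUpdate.
spec:
  strategy:
    type: Recreate
```

With Recreate, Kubernetes should delete the old pod before creating the new one, which is why the unchanged behavior on OpenShift 3.9 is surprising.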
Additional context
Add any other context about the problem here.
@linzhaoming Before we get too far in diagnosing your issue, I want to point out that the SDK lists Kubernetes 1.11.3+ as a prerequisite, so older versions of Kubernetes (and OpenShift) may not work.
Having said that, this sounds similar to #920. Does your operator have a readiness probe that succeeds only after the operator pod becomes the leader?
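For anyone checking: the pattern in question is a readiness probe gated on leadership, something like the hypothetical fragment below (the endpoint and port are made up for illustration). Under the default RollingUpdate strategy this can deadlock: the old pod holds the lock ConfigMap, so the new pod never reports ready, and the rollout never progresses to terminating the old pod.

```yaml
# Hypothetical container-level readiness probe that only passes once the pod is the leader.
readinessProbe:
  httpGet:
    path: /readyz   # assumed endpoint: returns 200 only after leader.Become returns
    port: 8383
  periodSeconds: 10
```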