Description
Bug Report
What did you do?
- Deploy the operator.yaml generated by the operator-sdk; the operator runs successfully in the OpenShift cluster.
- Add an env var to the running operator's Deployment to trigger a new rollout, for example `KeyA=ValueA` (a sketch of this change is shown after this list).
- In this case, we expected the old container to be stopped and the new container to start. Instead, the old container does not stop as expected, and the new container keeps logging the following message because of the ConfigMap-based leader election:

  2019-05-15T10:18:19.973Z INFO leader Not the leader. Waiting.
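
For reference, a minimal sketch of the kind of env change used to trigger the rollout; the container name and image below are placeholders, only `KeyA=ValueA` comes from the steps above:

```yaml
# Hypothetical fragment of the operator Deployment in operator.yaml.
# Adding or changing an env entry alters the pod template, so Kubernetes
# rolls out a new pod for the Deployment.
spec:
  template:
    spec:
      containers:
        - name: example-operator        # placeholder container name
          image: example-operator:v0.1  # placeholder image
          env:
            - name: KeyA                # example env var from the steps above
              value: ValueA
```

With the default RollingUpdate strategy the new pod is created before the old one is removed, which is presumably why the new pod keeps logging "Not the leader. Waiting." until the old pod, and the ConfigMap lock it holds, goes away.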
What did you expect to see?
The old container should exit, as with a normal Deployment rollout.
What did you see instead? Under which circumstances?
This behavior only happens on OpenShift 3.9. On OpenShift 3.11, the old container exits normally.
Environment
- operator-sdk version: v0.7.0
- go version: go1.12.4 darwin/amd64
- Kubernetes version information:

  [root@~]# kubectl version
  Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.1+a0ce1bc657", GitCommit:"a0ce1bc", GitTreeState:"clean", BuildDate:"2018-04-11T20:47:54Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.1+a0ce1bc657", GitCommit:"a0ce1bc", GitTreeState:"clean", BuildDate:"2018-04-11T20:47:54Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

- Kubernetes cluster kind: OpenShift (operator written in Go)
Possible Solution
Note: the generated operator.yaml does not specify a deployment strategy, so the default RollingUpdate is used.
We tried changing the deployment strategy to Recreate (see the fragment below), but the result is the same.
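
For clarity, this is the strategy change we tried, shown as a minimal fragment of the Deployment spec in operator.yaml:

```yaml
# Fragment of operator.yaml: with Recreate, the old pod is deleted before
# the replacement is created, instead of the default RollingUpdate.
spec:
  strategy:
    type: Recreate
```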