We have been noticing that after a while the `multi-binder` process goes zombie, which seems to mean new haproxy processes cannot be started, so new versions of `haproxy.cfg` are never consumed. This causes config drift that is hard to track, since it only surfaces when pods/services/ingresses change.
It looks like the reason it goes zombie (rather than just terminating) is that the bash script that starts multi-binder (`start.sh`) ends with an `exec` that replaces the shell with `haproxy-ingress-controller`. The controller inherits multi-binder as a child but never waits on it, so when multi-binder exits it is never reaped and its exit code is lost.
Is there maybe a way that haproxy-ingress-controller itself could start the multi-binder process instead of the bash script? That way haproxy-ingress-controller could detect when the multi-binder process exits and report itself as unhealthy/not live.
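For illustration, a rough sketch of what that could look like inside the controller (in Go, since haproxy-ingress-controller is a Go program). The multi-binder binary path, port, and health-flag names here are placeholders, not the project's actual API; in a real change this would hook into the controller's existing health handler:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"os/exec"
	"sync/atomic"
)

// multiBinderAlive is flipped to false when the multi-binder child exits,
// so the liveness probe starts failing and Kubernetes restarts the pod.
var multiBinderAlive atomic.Bool

func main() {
	multiBinderAlive.Store(true)

	// Spawn multi-binder as a child of the controller instead of from start.sh.
	// The binary path is a placeholder; adjust to wherever multi-binder lives.
	cmd := exec.Command("/usr/local/bin/multi-binder")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("failed to start multi-binder: %v", err)
	}

	// Waiting on the child both reaps it (no zombie) and tells us when it dies.
	go func() {
		err := cmd.Wait()
		log.Printf("multi-binder exited: %v", err)
		multiBinderAlive.Store(false)
	}()

	// Hypothetical standalone liveness endpoint; the real controller would
	// fold this check into its existing /healthz handler.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if !multiBinderAlive.Load() {
			http.Error(w, "multi-binder is not running", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":10253", nil))
}
```

Because the controller is the direct parent and actually calls `Wait`, the zombie never forms, and a dead multi-binder shows up as a failing liveness probe instead of silent config drift.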