We hit a very strange error while running our load tests. Suddenly, and without any apparent reason, the ingress controller stopped routing requests coming to the HTTPS port (443) and sent them to the default-backend instead. Requests coming to the HTTP port (80) were still being routed correctly.
I inspected the cluster: the Secret object containing the public certificate was indeed present in the namespace, and the certificate file itself was also inside the container. However, the /etc/haproxy.conf file had no route configured for HTTPS.
My assumption is that the haproxy-ingress-controller tried to refresh the certificate and, since it could not read the Secret containing it, dropped the rule associated with the Ingress where the certificate was referenced.
After some time, again without any apparent reason, the certificate was found and HAProxy started routing requests on port 443 again.
I have two questions/requests:
- What could be the reason for this problem?
- If the root cause is not easy to find, is there any way to configure HAProxy not to drop requests coming to port 443 when it cannot find the associated certificate, and to instead serve the one configured as --default-ssl-certificate?
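For reference, this is roughly how the fallback certificate is wired into our controller Deployment; the secret and namespace names below are placeholders, not our actual values:

```yaml
# haproxy-ingress controller container spec (excerpt)
args:
  # fallback certificate served when an Ingress has no usable TLS secret
  - --default-ssl-certificate=<namespace>/<default-cert-secret>
```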
When inspecting the logs I found the following:
I1201 19:09:38.239630 7 listers.go:66] ignoring delete for ingress gwing based on annotation kubernetes.io/ingress.class
I1201 19:09:38.248416 7 listers.go:66] ignoring delete for ingress websocketsproxying based on annotation kubernetes.io/ingress.class
I1201 19:09:39.093803 7 listers.go:66] ignoring delete for ingress ingress-cb based on annotation kubernetes.io/ingress.class
I1201 19:09:39.100881 7 listers.go:66] ignoring delete for ingress ingress-gw based on annotation kubernetes.io/ingress.class
W1201 19:14:06.097959 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
I1201 19:14:06.099581 7 controller.go:312] backend reload required
I1201 19:14:06.122096 7 controller.go:171] HAProxy output:
I1201 19:14:06.122125 7 controller.go:321] ingress backend successfully reloaded...
W1201 19:24:06.098290 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
W1201 19:34:06.098385 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
W1201 19:44:06.100323 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
W1201 19:54:06.099091 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
W1201 19:54:09.297970 7 controller.go:1056] ssl certificate "<namespace>/<certificate-secret>" does not exist in local store
I1201 20:14:06.099869 7 backend_ssl.go:63] adding secret <namespace>/<certificate-secret> to the local store
I1201 20:14:06.102840 7 controller.go:312] backend reload required
I1201 20:14:06.123826 7 controller.go:171] HAProxy output: