Description
What happened:
I was upgrading ingress-nginx from 1.10.1 to 1.11.0 using helm upgrade ingress-nginx /helm/ingress-nginx --install -f /helm/ingress-nginx/custom-values.yaml
All five ingresses served by that ingress-nginx instance became unresponsive; the controller pods came up a few times but then crashed constantly.
When opening the ingress hosts in the browser, I see lots of messages like these in the controller log:
2024/07/09 07:00:34 [alert] 23#23: worker process 229 exited on signal 11 (core dumped)
2024/07/09 07:00:35 [alert] 23#23: worker process 29 exited on signal 11 (core dumped)
2024/07/09 07:02:17 [alert] 23#23: worker process 328 exited on signal 11 (core dumped)
2024/07/09 07:02:22 [alert] 23#23: worker process 295 exited on signal 11 (core dumped)
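A possible way to pull a backtrace from one of these cores (a sketch only; it assumes a debug container with gdb can be attached, and the container name, nginx binary path, and core file location below are all guesses that need to be adjusted to the actual image and core_pattern):
kubectl -n ingress-nginx debug -it <controller-pod> --image=ubuntu --target=controller
# inside the debug container:
apt-get update && apt-get install -y gdb
gdb /proc/1/root/usr/bin/nginx /proc/1/root/tmp/core.<pid>   # both paths are assumptions
(gdb) bt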
Rolling back to 1.10.1 fixed the issue.
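For reference, one way to do such a rollback with helm (a minimal sketch; the revision number 18 is hypothetical, check helm history first):
helm -n ingress-nginx history ingress-nginx
helm -n ingress-nginx rollback ingress-nginx 18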
What you expected to happen:
A normal (rolling) deployment with responsive hosts and no core dumps.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
NGINX Ingress controller
Release: v1.11.0
Build: 96dea88
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
Kubernetes version (use kubectl version):
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.6
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS / containerd://1.7.18
- Kernel (e.g. uname -a): 5.15.0-107-generic
- Basic cluster related info:
  - kubectl version: already described, see above
  - kubectl get nodes -o wide: already described, see above
- How was the ingress-nginx-controller installed:
  - If helm was used then please show output of helm ls -A | grep -i ingress:
    ingress-nginx           ingress-nginx           19  2024-07-09 09:12:31.294492684 +0200 CEST  deployed  ingress-nginx-4.11.0  1.11.0
    ingress-nginx-internet  ingress-nginx-internet  15  2024-05-02 08:55:37.339164187 +0200 CEST  deployed  ingress-nginx-4.10.1  1.10.1
  - If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>: attached as values.json
  - If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances: two instances are installed (see the helm ls output above); only the ingress-nginx release was upgraded to 1.11.0, ingress-nginx-internet remains on 1.10.1.
- Current State of the controller:
  - kubectl describe ingressclasses: attached as ingressclasses.json
  - kubectl -n <ingresscontrollernamespace> get all -A -o wide: attached as getallwide.log
  - kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>: attached as pod-describe.log
  - kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>: attached as svc-describe.log
  - kubectl -n <ingresscontrollernamespace> logs <ingresscontrollerservicename>: attached as ingress-nginx-controller-5bd44bf869-4k9kf.log
How to reproduce this issue:
- Have an ingress-nginx controller running at 1.10.1.
- Upgrade to 1.11.0 using helm with the given configuration.
- Check the logs and the ingress hosts assigned to the controller (see the sketch after this list).
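A minimal repro sketch using the upstream chart instead of the local chart checkout from the report (chart versions 4.10.1/4.11.0 correspond to controller 1.10.1/1.11.0 per the helm ls output above; the release name and namespace are assumptions):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace --version 4.10.1
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --version 4.11.0
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller -f | grep 'signal 11'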
Anything else we need to know:
Attached logs:
values.json
ingressclasses.json
getallwide.log
pod-describe.log
svc-describe.log
ingress-nginx-controller-5bd44bf869-4k9kf.log