
Conversation

alebedev87
Contributor

@alebedev87 alebedev87 commented Nov 7, 2023

This PR removes the tunnel and server timeouts from the global defaults section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.

As suggested in the upstream issue, the middle backends (be_sni and be_no_sni) are set with the maximum of all the route timeouts to avoid the following warning message:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n
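
For illustration, here is a hedged sketch of how the rendered haproxy.config could look after this change, using the values from the test below (ROUTER_DEFAULT_TUNNEL_TIMEOUT=5s, a 15s timeout-tunnel annotation on one route, and the template's 30s server default). The backend names and exact layout are assumptions for this example, not the literal rendered output:

defaults
  # 'timeout server' and 'timeout tunnel' are no longer emitted here

# the middle backends carry the maximum server timeout across all routes,
# which avoids the "missing timeouts for backend" warning quoted above
backend be_sni
  timeout server  30s
backend be_no_sni
  timeout server  30s

# an ordinary route backend gets the router defaults...
backend be_http:example:plain-route
  timeout server  30s
  timeout tunnel  5s

# ...while the route annotated with haproxy.router.openshift.io/timeout-tunnel: 15s
# gets the annotation value instead
backend be_secure:tunnel-timeout:hello-openshift
  timeout server  30s
  timeout tunnel  15s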

Test using the haproxy-timeout-tunnel repository:

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].image 
quay.io/alebedev/openshift-router:12.4.110

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].env | grep -A1 TUNNEL
- name: ROUTER_DEFAULT_TUNNEL_TIMEOUT
  value: 5s

$ oc -n tunnel-timeout get route hello-openshift -o yaml | yq .metadata.annotations
haproxy.router.openshift.io/timeout-tunnel: 15s

$ ROUTE_HOST=$(oc -n tunnel-timeout get route hello-openshift --template='{{.spec.host}}')
$ time HOST=${ROUTE_HOST} PORT=443 SCHEME=wss ./websocket-cli.js
WebSocket Client Connected
echo-protocol Connection Closed

real	0m15.721s
user	0m0.126s
sys	0m0.021s

$ oc -n openshift-ingress exec -ti deploy/router-default -- cat /var/lib/haproxy/conf/haproxy.config | grep 'timeout tunnel '
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  5s
  timeout tunnel  15s

The e2e test: openshift/cluster-ingress-operator#1013.
Presentation which summarizes the issue: link (Red Hat only).

@openshift-ci-robot openshift-ci-robot added the jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. label Nov 7, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 7, 2023
@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Nov 7, 2023
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is invalid:

  • expected the bug to target only the "4.15.0" version, but multiple target versions were set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot requested review from candita and gcs278 November 7, 2023 20:15
Contributor

openshift-ci bot commented Nov 7, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from alebedev87. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@alebedev87 alebedev87 changed the title [WIP] OCPBUGS-14914: set timeouts on the backend level OCPBUGS-14914: set timeouts on the backend level Nov 14, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 14, 2023
@alebedev87
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Nov 14, 2023
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot removed the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Nov 14, 2023
@openshift-ci openshift-ci bot requested a review from ShudiLi November 14, 2023 22:31
@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

In response to this:

This PR removes the tunnel and server timeouts in the global default section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ShudiLi
Member

ShudiLi commented Nov 17, 2023

Tested it with 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest
1.
% oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest True False 50m Cluster version is 4.15.0-0.ci.test-2023-11-16-093712-ci-ln-dr45512-latest

% oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
myedge myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com unsec-server3 http edge None

% oc annotate route myedge haproxy.router.openshift.io/timeout-tunnel=70m
route.route.openshift.io/myedge annotate

% oc -n openshift-ingress rsh router-default-d78ddc6f-bprk5
sh-4.4$ grep -A7 "timeout server " haproxy-config.template
timeout server {{ $value }}
{{- else }}
timeout server {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_SERVER_TIMEOUT") "30s" }}
{{- end }}
{{- with $value := clipHAProxyTimeoutValue (firstMatch $timeSpecPattern (index $cfg.Annotations "haproxy.router.openshift.io/timeout-tunnel")) }}
timeout tunnel {{ $value }}
{{- else }}
timeout tunnel {{ firstMatch $timeSpecPattern (env "ROUTER_DEFAULT_TUNNEL_TIMEOUT") "1h" }}
{{- end }}

sh-4.4$

  2. let the client download a large file at a rate limit of 1000 bytes/s; the transfer started at about 2023-11-17 03:18:56 and terminated at about 2023-11-17 04:38:15, i.e. after more than 1h

sh-4.4# wget --no-check-certificate --limit-rate=1000 --delete-after https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc -k
--2023-11-17 03:18:56-- https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc
Resolving myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)... 146.148.109.119
Connecting to myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)|146.148.109.119|:443... connected.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' is not trusted.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' hasn't got a known issuer.
HTTP request sent, awaiting response... 200 OK
Length: 121359408 (116M)
Saving to: 'oc.tmp'

oc.tmp 0%[ ] 239.88K 1000 B/s in 4m 6s

--2023-11-17 04:38:15-- (try:20) https://myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com/oc
Connecting to myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com (myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com)|146.148.109.119|:443... connected.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' is not trusted.
WARNING: The certificate of 'myedge-default.apps.ci-ln-9r4m3fk-72292.origin-ci-int-gce.dev.rhcloud.com' hasn't got a known issuer.
HTTP request sent, awaiting response... 206 Partial Content
Length: 121359408 (116M), 116746470 (111M) remaining
Saving to: 'oc.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp'

oc.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp.tmp 3%[++++++ ] 4.62M 1000 B/s in 3m 55s

2023-11-17 04:42:09 (1000 B/s) - Read error at byte 4847481/121359408 (The TLS connection was non-properly terminated.). Giving up.

sh-4.4#

  3. check the captured packets: the session is terminated after more than 1h
    tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
    03:19:09.684312 IP (tos 0x0, ttl 64, id 2327, offset 0, flags [DF], proto TCP (6), length 52)
    146.148.109.119.https > 10.129.2.17.43942: Flags [.], cksum 0x0cc4 (incorrect -> 0x0a65), ack 1908667883, win 506, options [nop,nop,TS val 1841440389 ecr 1595459286], length 0
    03:19:09.684337 IP (tos 0x0, ttl 64, id 12471, offset 0, flags [DF], proto TCP (6), length 52)
    10.129.2.17.43942 > 146.148.109.119.https: Flags [.], cksum 0x0cc4 (incorrect -> 0x189e), ack 1, win 0, options [nop,nop,TS val 1595462614 ecr 1841433925], length 0
    03:19:16.340278 IP (tos 0x0, ttl 64, id 2328, offset 0, flags [DF], proto TCP (6), length 52)
    146.148.109.119.https > 10.129.2.17.43942: Flags [.], cksum 0x0cc4 (incorrect -> 0xe364), ack 1, win 506, options [nop,nop,TS val 1841447045 ecr 1595462614], length 0
    03:19:16.340316 IP (tos 0x0, ttl 64, id 12472, offset 0, flags [DF], proto TCP (6), length 52)
    10.129.2.17.43942 > 146.148.109.119.https: Flags [.], cksum 0x0cc4 (incorrect -> 0xfe9d), ack 1, win 0, options [nop,nop,TS val 1595469270 ecr 1841433925], length 0
    ...

04:41:53.345283 IP (tos 0x0, ttl 62, id 24165, offset 0, flags [DF], proto TCP (6), length 628)
146.148.109.119.https > 10.129.2.17.45884: Flags [P.], cksum 0xccdd (correct), seq 235640:236216, ack 904, win 505, options [nop,nop,TS val 927357442 ecr 1600426274], length 576
04:41:53.345307 IP (tos 0x0, ttl 62, id 24166, offset 0, flags [DF], proto TCP (6), length 628)
146.148.109.119.https > 10.129.2.17.45884: Flags [.], cksum 0x0157 (correct), seq 236216:236792, ack 904, win 505, options [nop,nop,TS val 927357442 ecr 1600426274], length 576
04:41:53.345371 IP (tos 0x0, ttl 64, id 62346, offset 0, flags [DF], proto TCP (6), length 52)

@alebedev87 alebedev87 changed the title OCPBUGS-14914: set timeouts on the backend level [WIP] OCPBUGS-14914: set timeouts on the backend level Nov 30, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 30, 2023
@alebedev87
Contributor Author

Back to WIP to address the config warning about missing timeout server in the middle backends:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n

@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch 2 times, most recently from 4113831 to 6096a39 Compare December 1, 2023 16:33
@alebedev87
Contributor Author

/test e2e-agnostic

@alebedev87
Contributor Author

/test e2e-metal-ipi-ovn-ipv6

@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

This PR removes the tunnel and server timeouts in the global default section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.

Test using haproxy-timeout-tunnel repository:

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].image 
quay.io/alebedev/openshift-router:12.4.110

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].env | grep -A1 TUNNEL
- name: ROUTER_DEFAULT_TUNNEL_TIMEOUT
 value: 5s

$ oc -n tunnel-timeout get route hello-openshift -o yaml | yq .metadata.annotations
haproxy.router.openshift.io/timeout-tunnel: 15s

$ ROUTE_HOST=$(oc -n tunnel-timeout get route hello-openshift --template='{{.spec.host}}')
$ time HOST=${ROUTE_HOST} PORT=443 SCHEME=wss ./websocket-cli.js
WebSocket Client Connected
echo-protocol Connection Closed

real	0m15.721s
user	0m0.126s
sys	0m0.021s

$ oc -n openshift-ingress exec -ti deploy/router-default -- cat /var/lib/haproxy/conf/haproxy.config | grep 'timeout tunnel '
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  15s

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@alebedev87 alebedev87 changed the title [WIP] OCPBUGS-14914: set timeouts on the backend level OCPBUGS-14914: set timeouts on the backend level Dec 4, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 4, 2023
@Miciah
Contributor

Miciah commented Dec 5, 2023

Back to WIP to address the config warning about missing timeout server in the middle backends:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n

I still see these errors in the most recent test results:

Did you mean to remove the WIP label?

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 26, 2024
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch from e165421 to 05a37e0 Compare June 28, 2024 21:26
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2024
@alebedev87
Contributor Author

Reviving the PR: starting to address Miciah's review.

@alebedev87
Contributor Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 28, 2024
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch 3 times, most recently from c1b2a01 to c67ef49 Compare July 2, 2024 10:04
@alebedev87
Contributor Author

alebedev87 commented Jul 2, 2024

High-level performance test of writeConfig events for a cluster with ~1000 routes.

Baseline (CI image == 4.17):

 $ grep -Po 'writeConfig.*' baseline-1000-2-pod*.logs | cut -d ' ' -f1,2
baseline-1000-2-pod1.logs:writeConfig" "duration"="794.385841ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="867.438091ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="990.070294ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="852.333994ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="815.822647ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="988.009627ms"
baseline-1000-2-pod1.logs:writeConfig" "duration"="853.049032ms"

Change (image built from the latest commit, c67ef49):

$ grep -Po 'writeConfig.*' c67ef49-* | cut -d ' ' -f1,2
c67ef49-pod1.logs:writeConfig" "duration"="1.081579122s"
c67ef49-pod1.logs:writeConfig" "duration"="914.782473ms"
c67ef49-pod1.logs:writeConfig" "duration"="1.002188869s"
c67ef49-pod1.logs:writeConfig" "duration"="1.046333068s"
c67ef49-pod1.logs:writeConfig" "duration"="958.092477ms"
c67ef49-pod2.logs:writeConfig" "duration"="947.33866ms"
c67ef49-pod2.logs:writeConfig" "duration"="1.003091258s"
c67ef49-pod2.logs:writeConfig" "duration"="917.348525ms"
c67ef49-pod2.logs:writeConfig" "duration"="1.013730305s"
c67ef49-pod2.logs:writeConfig" "duration"="995.759517ms"

Comment on lines +28 to +31
BENCH_PKGS ?= $(shell \grep -lR '^func Benchmark' | xargs -I {} dirname {} | sed 's|^|./|' | paste -sd ' ')
BENCH_TEST ?= .
Contributor


Would you mind adding a couple of comments about what these are for? I see below that BENCH_PKGS is a value for flag -benchmem, so it could use a clarification.

Contributor Author


Done.

@candita
Contributor

candita commented Aug 20, 2024

If I understand correctly, this represents a major change for our users - whatever is the max connect timeout in ANY route becomes the max server and tunnel timeout for all the routes on the backend. I strongly feel like this needs to be something that is configurable, and is off by default in order for past users to see the same behavior when they upgrade to a version that has this fix.

@alebedev87
Contributor Author

If I understand correctly, this represents a major change for our users - whatever is the max connect timeout in ANY route becomes the max server and tunnel timeout for all the routes on the backend.

@candita: 1) only the server timeout is set to the max value on the middle backends; 2) this is only one part of the change. The other part is the server and tunnel timeouts being set on all the route backends. Note that the default timeout values are applied at the route backend level as well.

There should be no change in behavior, because routes which don't set any timeout value in the annotations get the default value on their route backend.

I strongly feel like this needs to be something that is configurable, and is off by default in order for past users to see the same behavior when they upgrade to a version that has this fix.

"Configurable" this may put us in the same situation as right now - when a user can set a global (almost) timeout and override the timeouts set on the route level. "Off by default" would lead us to this warning during the config reloads.

Overall, I understand the concern. The change may have the potential to hide something I didn't manage to cover manually or with our test suites. However, I don't see any other solution for this bug. Let me try to organize a discussion about this PR outside of GitHub so that we can come to a conclusion.
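
To make the "no behavior change" argument concrete, here is a hedged before/after sketch for a route with no timeout annotations, assuming the template defaults quoted earlier in this thread (30s server, 1h tunnel); the backend name and values are illustrative only:

# before: server/tunnel timeouts live in the global defaults section
defaults
  timeout server  30s
  timeout tunnel  1h
backend be_http:example:no-annotations
  # nothing here, the defaults apply

# after: the same effective values are emitted on the backend itself
defaults
  # server/tunnel timeouts removed
backend be_http:example:no-annotations
  timeout server  30s
  timeout tunnel  1h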

The middle backends (`be_sni`, `be_no_sni`) are updated with the server timeout
which is set to the maximum value among all routes from the configuration.
This prevents a warning message during config reloads.

A benchmark test is added for the template helper function which
finds the maximum timeout value among all routes (`maxTimeoutFirstMatchedAndClipped`).
A new make target is introduced to run all benchmark tests (`bench`).
@alebedev87 alebedev87 force-pushed the timeouts-in-backends branch from c67ef49 to d70581e Compare October 8, 2024 08:34
@alebedev87 alebedev87 changed the title OCPBUGS-14914: Set tunnel and server timeouts only at the backend level OCPBUGS-14914: Set tunnel and server timeouts at backend level Oct 8, 2024
@alebedev87
Contributor Author

Squashed the commits.

@alebedev87
Contributor Author

/test e2e-metal-ipi-ovn-ipv6

@openshift-ci-robot
Contributor

@alebedev87: This pull request references Jira Issue OCPBUGS-14914, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @ShudiLi

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

This PR removes the tunnel and server timeouts in the global default section and sets them on all the backends. The route annotations for the timeouts continue to work as they did before.

As suggested in the upstream issue, the middle backends (be_sni and be_no_sni) are set with the maximum of all the route timeouts to avoid the following warning message:

I1130 10:49:26.557528       1 router.go:649] template "msg"="router reloaded" "output"="[NOTICE]   (18) : haproxy version is 2.6.13-234aa6d\n[NOTICE]   (18) : path to executable is /usr/sbin/haproxy\n[WARNING]  (18) : config : missing timeouts for backend 'be_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n[WARNING]  (18) : config : missing timeouts for backend 'be_no_sni'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following\n   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n

Test using haproxy-timeout-tunnel repository:

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].image 
quay.io/alebedev/openshift-router:12.4.110

$ oc -n openshift-ingress get deploy router-default -o yaml | yq .spec.template.spec.containers[0].env | grep -A1 TUNNEL
- name: ROUTER_DEFAULT_TUNNEL_TIMEOUT
 value: 5s

$ oc -n tunnel-timeout get route hello-openshift -o yaml | yq .metadata.annotations
haproxy.router.openshift.io/timeout-tunnel: 15s

$ ROUTE_HOST=$(oc -n tunnel-timeout get route hello-openshift --template='{{.spec.host}}')
$ time HOST=${ROUTE_HOST} PORT=443 SCHEME=wss ./websocket-cli.js
WebSocket Client Connected
echo-protocol Connection Closed

real	0m15.721s
user	0m0.126s
sys	0m0.021s

$ oc -n openshift-ingress exec -ti deploy/router-default -- cat /var/lib/haproxy/conf/haproxy.config | grep 'timeout tunnel '
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  5s
 timeout tunnel  15s

The e2e test: openshift/cluster-ingress-operator#1013.
Presentation which summarizes the issue: https://docs.google.com/presentation/d/1vU1CH20lzFWDs8Z_iNaWBhPwdKM2iyS-Qb7zlMD9XTc (Red Hat only).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@alebedev87
Contributor Author

alebedev87 commented Oct 30, 2024

The slidedeck has been updated with sequence diagrams: current, future.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 24, 2025
@alebedev87
Contributor Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 24, 2025
Contributor

openshift-ci bot commented Apr 11, 2025

@alebedev87: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-upgrade d70581e link true /test e2e-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 31, 2025
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 30, 2025
Labels
jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. qe-approved Signifies that QE has signed off on this PR