2 changes: 1 addition & 1 deletion pkg/router/template/configmanager/haproxy/backend.go
@@ -224,7 +224,7 @@ func (b *Backend) EnableServer(name string) error {
// DisableServer stops serving traffic for a haproxy backend server.
func (b *Backend) DisableServer(name string) error {
log.V(4).Info("disabling server with maint state", "server", name)
-	return b.UpdateServerState(name, BackendServerStateMaint)
+	return b.UpdateServerState(name, BackendServerStateDrain)
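For context, haproxy's runtime API switches a server between states with `set server <backend>/<server> state [ready|drain|maint]`. A minimal sketch of how the constant in the diff might translate into that runtime command; the constant names match the diff, while `setServerCommand` and the backend/server names are illustrative assumptions, not the actual router implementation:

```go
package main

import "fmt"

// Server state constants as named in the diff; the string values mirror
// the haproxy runtime API keywords (illustrative assumption).
const (
	BackendServerStateReady = "ready" // serve traffic normally
	BackendServerStateDrain = "drain" // refuse new connections, keep existing ones
	BackendServerStateMaint = "maint" // hard-down: existing connections are cut
)

// setServerCommand builds the haproxy runtime API command a state update
// would send over the stats socket (hypothetical helper).
func setServerCommand(backend, server, state string) string {
	return fmt.Sprintf("set server %s/%s state %s", backend, server, state)
}

func main() {
	fmt.Println(setServerCommand("be_http", "pod-1", BackendServerStateDrain))
	// → set server be_http/pod-1 state drain
}
```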
Contributor
1) The currently set server state is not used in any shipped OpenShift product; it is part of the Dynamic Configuration Manager feature, which is still in Tech Preview. 2) The router watches endpoints and reacts to changes; DisableServer is used for deleted endpoints. That is, the corresponding pods are no longer there, so the server should be disabled.

Author

@christf Aug 27, 2025
Thank you for the feedback! I am aware of (1), and I am raising this PR so that we can eventually use this Tech Preview feature.

Can we dig a bit into (2), please?
As I understand it, DisableServer runs when the router is notified that an endpoint is to be removed. As per kubernetes/kubernetes#106476, the notification to remove an endpoint arrives at around the same time the pod is asked to terminate. So the pods are still very much ready to serve requests, and they need to continue doing so until they have handled all in-flight requests. During this window the router must ensure no new requests are sent to these pods while still retaining the active connections to the pods that are about to terminate.
If "maint" is used, all in-flight connections are broken. "drain" keeps them alive until they are closed by either end of the connection (either the clients are done, or the pod is SIGKILLed, which is already governed by a timeout).
The goal of this change is to support rolling deployments without losing a single request.
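The distinction between the two states can be sketched with a toy model (this is an illustration of the semantics described above, not haproxy's actual implementation): "maint" cuts in-flight connections, while "drain" only stops new ones from being admitted.

```go
package main

import "fmt"

// conn models one in-flight client connection to a backend server.
type conn struct{ id int }

// server is a toy model of a single haproxy backend server.
type server struct {
	state string // "ready", "drain", or "maint"
	conns []conn
}

// setState applies the semantics under discussion: "maint" breaks
// existing connections, "drain" merely stops admitting new ones.
func (s *server) setState(state string) {
	s.state = state
	if state == "maint" {
		s.conns = nil // in-flight requests are broken
	}
}

// accept admits a new connection only while the server is "ready".
func (s *server) accept(c conn) bool {
	if s.state != "ready" {
		return false
	}
	s.conns = append(s.conns, c)
	return true
}

func main() {
	s := &server{state: "ready"}
	s.accept(conn{1})
	s.accept(conn{2})

	s.setState("drain")
	fmt.Println("after drain:", len(s.conns), "in-flight, new accepted:", s.accept(conn{3}))

	s.setState("maint")
	fmt.Println("after maint:", len(s.conns), "in-flight")
}
```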

There is another bit missing to make this perfect: finding a way to delay the SIGTERM to the pod until the endpoint has been drained. But that is another can of worms.

Contributor
The reasoning about the second point seems valid. Let me check our test coverage for this use case.

}

// Commit commits all the pending changes made to a haproxy backend.