prevent client deadlock and incorrect timing on stop_on_client_after#25946

Merged
tgross merged 3 commits into main from NMD407-stop-on-client-deadlock on May 29, 2025

Conversation

@tgross
Member

@tgross tgross commented May 28, 2025

The disconnect.stop_on_client_after feature is implemented as a loop on the client that waits on the shortest timeout of all the allocations on the node and then checks whether the interval since the last heartbeat has exceeded that timeout. It uses a buffered channel of allocations, written and read from the same goroutine, to push "stops" from the expiring timeout to the next pass through the loop. Unfortunately, if multiple allocations need to be stopped in the same timeout event, or even if a previous event has not yet been dequeued, the send on the channel blocks and the entire goroutine deadlocks itself.

While fixing this, I also discovered that the stop_on_client_after and heartbeat loops can synchronize in a pathological way that extends the stop_on_client_after window. If a heartbeat fails close to the beginning of the shortest stop_on_client_after window, the loop will end up waiting until almost 2x the intended wait period.


While fixing both of those issues, I discovered that the existing tests had a bug: we were asserting that an allocrunner was being destroyed when it had in fact already exited.

This commit includes the following:

  • Rework the watch loop so that we handle the stops in the same case as the timer expiration, rather than using a channel in the method scope.
  • Remove the alloc intervals map field from the struct and keep it in the method scope, in order to discourage writing racy tests that read its value.
  • Reset the timer whenever we receive a heartbeat, which forces the two intervals to synchronize correctly.
  • Minor refactoring of the disconnect timeout lookup to improve brevity.
  • Fix the test assertions and expand them to cover more ground.

Fixes: #24679
Ref: https://hashicorp.atlassian.net/browse/NMD-407

Contributor Checklist

  • Changelog Entry If this PR changes user-facing behavior, please generate and add a
    changelog entry using the make cl command.
  • Testing Please add tests to cover any new functionality or to demonstrate bug fixes and
    ensure regressions will be caught.
  • Documentation If the change impacts user-facing functionality such as the CLI, API, UI,
    and job configuration, please update the Nomad website documentation to reflect this. Refer to
    the website README for docs guidelines. Please also consider whether the
    change requires notes within the upgrade guide.

Reviewer Checklist

  • Backport Labels Please add the correct backport labels as described by the internal
    backporting document.
  • Commit Type Ensure the correct merge method is selected which should be "squash and merge"
    in the majority of situations. The main exceptions are long-lived feature branches or merges where
    history should be preserved.
  • Enterprise PRs If this is an enterprise only PR, please add any required changelog entry
    within the public repository.

Comment thread client/heartbeatstop.go
shutdownCh: shutdownCh,
lock: &sync.RWMutex{},
startupGrace: time.Now().Add(timeout),
allocHookCh: make(chan *structs.Allocation, 10),
Member Author

@tgross tgross May 28, 2025

Changing this buffer isn't strictly necessary but seemed like a safe belt-and-suspenders thing to do. During client restart it's possible to get a bunch of allocs on this channel in a short window when we restore them, so having a larger buffer gives us some room to avoid putting backpressure on the allocrunner setup.

@tgross tgross force-pushed the NMD407-stop-on-client-deadlock branch from e0dcd3c to 7c72297 on May 28, 2025 19:41
@tgross tgross marked this pull request as ready for review May 28, 2025 20:57
@tgross tgross requested review from a team as code owners May 28, 2025 20:57
gulducat
gulducat previously approved these changes May 29, 2025
Member

@gulducat gulducat left a comment

the code proper LGTM!

however, I do have a concern about what the last test case is actually covering, if you wanna noodle on that.

Comment thread client/heartbeatstop.go Outdated
var interval time.Duration
checkAllocs := false

maxInterval := time.Hour
Member

is this hour arbitrary? I don't see it in docs.

if I'm reading all this right, it represents the amount of time between loop iterations, if the select doesn't get poked by anything else, right?

so it isn't a documented value that users care about (or might be surprised by), it just keeps the loop looping (very slowly) in lieu of any other input?

Member Author

Right! We need to find the minimum interval, and because we have a map instead of a slice we don't have a good starting value to compare against. We also want to reset the timer to something in the case where we're not watching any allocs, so having it be an hour keeps it idle for a good long while.

The alternative would be to copy the values of allocIntervals into a slice every loop and take the min, or track the lowest interval at all times whenever we add/delete an alloc. And then we'd stop the timer if len(allocIntervals) == 0. Let me see if we can reasonably track the lowest interval because that might make this a bit more legible.

Member Author

Ok, in 6ad1aae I've moved where we declare the interval and check for 0 when minimizing the interval. This avoids the arbitrary 1 hour interval and also stops the timer's internal goroutine when not in use.

Member

so now not having any allocs to watch (interval remains 0) calls timer.Stop(), which means the loop will only turn over on heartbeats or new allocs (or bail on shutdown). that seems right to me 👍

Comment thread client/heartbeatstop.go Outdated
Comment thread client/heartbeatstop_test.go Outdated
Comment thread client/heartbeatstop_test.go Outdated
Member

@gulducat gulducat left a comment

lgtm!

@tgross tgross merged commit 48b1b01 into main May 29, 2025
36 checks passed
@tgross tgross deleted the NMD407-stop-on-client-deadlock branch May 29, 2025 19:05
@github-actions

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions Bot locked as resolved and limited conversation to collaborators Sep 27, 2025

Development

Successfully merging this pull request may close these issues.

stop_on_client_after doesn't handle network partitions as expected

2 participants