Conversation

@jtuglu1 (Contributor) commented Nov 21, 2025

Fixes #18764.

Description

Race

Segment load/drop callbacks race with prepareCurrentServers().

Consider the following scenario: the Coordinator is moving segment S from server A to server B, and S has 1x replication.

Basically, the load/drop callbacks can fire between the start and end of this function in a way that produces a

SegmentReplicaCount{requiredAndLoadable=1, required=1, loaded=2, loadedNonHistorical=0, loading=0, dropping=0, movingTo=0, movingFrom=0}. 

Essentially:

T0[Coordinator]: enter prepareCurrentServers()
T1[Coordinator]: Server[B] completed request[MOVE_TO] on segment[S] with status[SUCCESS]
T2[Coordinator]: Dropping segment [S] from server[A]
T3[A]: Completely removing segment[S] in [30,000]ms.
T4[Coordinator]: Server[A] completed request[DROP] on segment[S] with status[SUCCESS].
T5[Coordinator]: exit prepareCurrentServers()
T6[Coordinator]: enter prepareCluster()
T7[Coordinator]: exit prepareCluster()
T8[Coordinator]: enter initReplicaCounts()
T9[Coordinator]: exit initReplicaCounts()
T10[Coordinator]: Segment S replica count is SegmentReplicaCount{requiredAndLoadable=1, required=1, loaded=2, loadedNonHistorical=0, loading=0, dropping=0, movingTo=0, movingFrom=0}
T11[Coordinator]: Dropping segment [S] from server[B]

I think what's happening is that the server loop in prepareCurrentServers() reads the servers in a state where the LOAD has already persisted in the server view (2x loaded) but the DROP has not yet materialized there. This causes loaded=2. I expected the in-flight DROP (since it hasn't materialized in the view) to get picked up from the queuedSegments load queue peons and show up as dropping=1, but since the DROP callback returns (and clears the entry from the old queuedSegments load queue peons) before prepareCluster() has a chance to copy the queued action over to the new queuedSegments, we lose that important bit of information. You are therefore left in a weird state: a "valid" queue but an invalid load state. In other words, I think we need to somehow synchronize the callbacks with prepareCurrentServers() and prepareCluster(). A sketch of the resulting over-count follows.
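
For concreteness, here is a hypothetical sketch of that over-count, using the values from the SegmentReplicaCount log line above (the arithmetic is illustrative, not the actual coordinator code):

```java
public class ReplicaOvercountSketch
{
  public static void main(String[] args)
  {
    // Snapshot taken mid-race: the LOAD on server B is visible in the
    // server view, the DROP on server A has already been cleared from the
    // load queue peon, and neither reflects the other.
    int required = 1;
    int loaded = 2;   // S appears on both A and B
    int loading = 0;
    int dropping = 0; // the in-flight DROP on A is invisible

    // Illustrative accounting: the coordinator sees one replica too many
    // and no pending drop, so it queues a drop against server B, the only
    // server that will still hold S once A's in-flight drop completes.
    int excess = (loaded + loading - dropping) - required; // = 1
    if (excess > 0) {
      System.out.println("Dropping " + excess + " excess replica(s) of S");
    }
  }
}
```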

Fix

I don't really like the idea of passing a lock through to child peons, but a solution here is to synchronize the callback executions with PrepareBalancerAndLoadQueues::run(). This way, callbacks on HttpLoadQueuePeon can still run concurrently with respect to each other, but during the PrepareBalancerAndLoadQueues::run() step, callbacks must wait (and vice versa). While this can yield accounting snapshots that are "stale" relative to the actual state of the cluster, it guarantees internally consistent snapshots, which means we don't do anything crazy like drop a segment from the timeline during a rebalance (assuming the rest of the rebalancing logic is correct). A sketch of the locking scheme follows.
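
A minimal sketch of what that synchronization could look like, assuming a shared read/write lock is handed to the peons (the class and method names here are illustrative, not the actual patch):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CallbackSyncSketch
{
  // Shared between the load queue peons and the coordinator duty.
  private final ReadWriteLock coordinationLock = new ReentrantReadWriteLock();

  // On HttpLoadQueuePeon: wrap each load/drop completion callback in the
  // read lock, so callbacks still run concurrently with one another but
  // block while the prepare step holds the write lock.
  void runCallback(Runnable callback)
  {
    coordinationLock.readLock().lock();
    try {
      callback.run();
    }
    finally {
      coordinationLock.readLock().unlock();
    }
  }

  // In PrepareBalancerAndLoadQueues::run(): take the write lock so the
  // server inventory and the queued load/drop actions are snapshotted at
  // a single instant, with no callback mutating them mid-snapshot.
  void prepareBalancerAndLoadQueues()
  {
    coordinationLock.writeLock().lock();
    try {
      // prepareCurrentServers();
      // prepareCluster();
      // initReplicaCounts();
    }
    finally {
      coordinationLock.writeLock().unlock();
    }
  }
}
```

A ReentrantReadWriteLock matches the stated requirement directly: readers (the callbacks) don't block each other, but the writer (the prepare step) excludes all of them, and vice versa.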

Release note


Key changed/added classes in this PR
  • MyFoo
  • OurBar
  • TheirBaz

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@jtuglu1 jtuglu1 changed the title Add repro test Fix segment rebalance race Nov 21, 2025
@kfaraz (Contributor) commented Nov 21, 2025

Thanks for adding the test, @jtuglu1! I will take a look soon.

@jtuglu1 jtuglu1 force-pushed the fix-rebalance-race branch 2 times, most recently from ae1de96 to 60a2092 on December 2, 2025 20:28
@jtuglu1 jtuglu1 closed this Dec 2, 2025
@jtuglu1 jtuglu1 reopened this Dec 2, 2025
@jtuglu1 jtuglu1 changed the title Fix segment rebalance race [WIP]: Fix segment rebalance race Dec 2, 2025
@jtuglu1 jtuglu1 force-pushed the fix-rebalance-race branch from 60a2092 to d061caf on December 2, 2025 23:59
@jtuglu1 jtuglu1 force-pushed the fix-rebalance-race branch from d061caf to 8226834 on December 3, 2025 00:53
@jtuglu1 jtuglu1 marked this pull request as draft December 3, 2025 03:09
@jtuglu1 jtuglu1 requested a review from gianm December 3, 2025 17:29
@jtuglu1 (Contributor, Author) commented Dec 3, 2025

@gianm, thoughts on the approach here? One pending change: I'll need to bring these updates to queuedSegments inside the critical section as well, to ensure an atomic snapshot of both the server inventory and the load queue peons.
