docs: Balanced consolidation policy RFC#2942
k8s-ci-robot merged 4 commits into kubernetes-sigs:main
Conversation
Hi @jamesmt-aws. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Regular contributors should join the org to skip this step. Once the patch is verified, the new status will be reflected. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed 7b4dff4 to 592c2bd (Compare)
/easycla
> This exists in all consolidation modes. The cost threshold concentrates the remaining moves onto higher-impact candidates. The system self-corrects: a nearly-empty replacement scores as a trivial DELETE next cycle. Cascades terminate because each round has strictly fewer displaced nodes.
> Configuring kube-scheduler with `MostAllocated` scoring reduces divergence. The [Workload-Aware Scheduling proposal](https://docs.google.com/document/d/1mPYqS4cFmsHPaVQDKyCz7-TKyWNJGjTaZQD3Umkvmgk) (Kepka, Feb 2026) addresses this more directly.
Btw this doc isn't accessible. Presumably it's private or possibly a bad link?
Let me ask around for an updated link that I can send you on Kubernetes Slack. I think there was a lot of chatter at KubeCon about the path forward, and I don't really need the link; that will come up in other ways.
Pull Request Test Coverage Report for Build 23920136107

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
Details
💛 - Coveralls
> @@ -0,0 +1,442 @@
> # Balanced Consolidation: Scoring Moves by Savings and Disruption
I am super excited about this approach!
```yaml
spec:
  disruption:
    consolidationPolicy: Balanced
    consolidationThreshold: 2.0
```
Thoughts on exposing different enum values that codify these numbers, rather than the number itself? Also -- note that JSON cannot represent floats in a stable way.
Good call on the float. The formally motivated values are all integers. k=1 is break-even (deletes only, no replaces). k=2 is where within-family replaces become viable, with a maximum of 4 steps to stasis. k=3 adds 8 cross-family replace pairs with the same 4-step bound. At k=4 churn chains appear, the bound jumps to 9 steps, and the formal analysis starts arguing against k=4 (or any higher value). So the natural set is {1, 2, 3} and I'll restrict the input type.
I thought about named presets but Karpenter doesn't have an ordinal enum pattern today, and picking names that age well is hard. "Conservative/Balanced/Aggressive" reuses "Balanced" which is already the policy name. I think the integer is simpler, but we can do whatever you and the rest of the community want to do here.
From an API design perspective, I am not sure that we need both knobs.
Right now, WhenEmpty is effectively k=0 (no positive disruption tolerated) and WhenEmptyOrUnderutilized is k=+inf (any savings passes).
I see two approaches:
- Expand the enum with values that alias other k values
- Expose a new `consolidationThreshold` that works when `consolidationPolicy: WhenEmptyOrUnderutilized` and simply changes the threshold.
cc: @jmdeal, @DerekFrank curious to your thoughts.
Yeah, I think your idea is better. The doc as written assumes (maybe defensively) that we can't justify k=2 uniquely for customers. If that's right, then we need k=3 and k=4, and we might as well just make this parameter adjust the behavior of WhenEmptyOrUnderutilized. If we can uniquely justify k=2, then we can have a new enum.
I'm leaning towards making k a parameter of WhenEmptyOrUnderutilized based purely on this RFC. @jmdeal and @DerekFrank, I'm happy to do whatever you two think is sensible; I'll try to catch up with you today.
> 1. **Pod deletion cost** ([`controller.kubernetes.io/pod-deletion-cost`](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)), divided by 2^27, range -16 to +16. Default 0. The ReplicaSet controller uses this for scale-down ordering; Karpenter reuses it as a disruption signal.
> 2. **Pod priority**, divided by 2^25, range -64 to +30. Default 0. Higher-priority pods increase their node's disruption cost.
>
> With neither set, per-pod disruption cost is 1.0. `EvictionCost` clamps to [-10, 10]. The scoring path clamps negative values to 0 via `max(0, EvictionCost(pod))` in the per-node formula (see [NodePool Totals](#nodepool-totals)). Other consumers of `EvictionCost` (eviction ordering) still see negatives. Scoring range per pod: [0, 10].
How does a user communicate that disrupting a pod is free?
If users want their pods to have zero disruption cost, they can set pod-deletion-cost to a large negative value. That drives EvictionCost negative, and the disruption cost in this RFC clamps the result to 0, so the pod contributes nothing to the node's total disruption cost.

The node still has a disruption cost of 1. Nodes have a disruption cost independent of their pods (cordoning, draining, API calls, replacement latency). We haven't modeled this precisely, and cost=1 is a placeholder; I'll make that clearer in the design. For today it eliminates a divide-by-zero that comes up if node disruption cost is zero. It could be larger if we wanted, but we don't need that yet.

So a node where every pod has negative deletion cost scores the same as an empty node: cheap to disrupt, not free. If a user wants truly zero-friction disruption at a NodePool level, they want WhenEmptyOrUnderutilized (or k=+inf).
Got it -- so in our docs we would recommend setting it to -1 if you want to treat it as free.
Force-pushed 6e41cd8 to 63468c3 (Compare)
A new consolidationPolicy: Balanced that scores each consolidation move by comparing savings and disruption as fractions of NodePool totals. Moves where disruption outweighs savings are rejected.

- consolidationThreshold (integer, 1-3, default 2): a move passes when its disruption fraction is at most k times its savings fraction
- Per-node disruption cost of 1.0 eliminates division-by-zero edge cases
- Score-based ranking replaces disruption-only ranking when budget limits move count
- Exhaustive verification across c7i/m7i/r7i confirms k=2 is the smallest value that makes within-family REPLACEs viable
Force-pushed 63468c3 to d89e204 (Compare)
Squashed to fix EasyCLA (removed a Co-Authored-By trailer that the bot couldn't resolve). No content changes beyond what was already pushed.

/easycla
Source: kubernetes-sigs#2942
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed 14a675d to a403fee (Compare)
Adds a new consolidationPolicy value, Balanced, that scores each consolidation move and rejects moves where the disruption outweighs the savings. Gated behind --feature-gates BalancedConsolidation=true.

The scoring formula compares savings and disruption as fractions of NodePool totals: score = savings_fraction / disruption_fraction. A move is approved when score >= 1/consolidationThreshold (default 2).

The scoring step is a filter inserted after scheduling feasibility and price comparison. It can only reject moves, never create them. If scoring has a bug that incorrectly approves, the move was already feasible and cost-saving. If it incorrectly rejects, the cluster is less optimized but not disrupted.

API:
- consolidationPolicy: Balanced (new enum value)
- consolidationThreshold: 1-3 (default 2, requires Balanced)

Implementation:
- balanced.go: scoring formula, NodePool totals, candidate pre-filter, cross-NodePool move evaluation
- Feature gate, API validation (CEL + runtime), defaulting
- ShouldDisrupt accepts Balanced, sets ConsolidationPolicyUnsupported status condition when gate is disabled
- Score-based candidate ranking for single-node consolidation
- Events (ConsolidationApproved/Rejected on Node+NodeClaim for single-node, NodePool for multi-node)
- Metrics (consolidation_score histogram, consolidation_moves_total counter)

Tests (31 new):
- 15 unit tests covering all RFC worked examples
- 9 integration tests (NodePool totals, cross-pool, candidate price)
- 3 feature gate tests
- 5 validation + 4 defaulting tests
- 4 score-based ranking tests
- 1 status condition test

See designs/balanced-consolidation.md (PR kubernetes-sigs#2942) for the full RFC.
> ### Candidate Filtering
>
> Move generation is expensive (find a destination, compute replacement costs, verify scheduling). A node's best possible score is its delete ratio: a DELETE saving the full node cost with no replacement.
The replacement cost can diverge from actual reality, so we might end up getting a node without the expected savings?
That's true. Is that a problem with the balanced consolidation proposal, or is it a problem that is fundamental to consolidation and cost savings in Karpenter generally?
I'm happy to fix it if you think that's necessary, but that feels separable to me. What do you think? Are there any specific cases that we can talk through? This feels like a real problem, but the proposal here doesn't make that problem worse.
consolidationPolicy is now an IntOrString field per ellistarn's feedback. Balanced maps to k=2. Integer values (1-3) pass k directly as an escape hatch. Removes the separate consolidationThreshold field.
consolidationPolicy is now IntOrString. All policies are expressed through the disruption cost model:
- WhenEmpty: approve only when move disruption cost equals per-node disruption cost (no pod contributes positive disruption cost). Behavioral change from today: pods with large negative pod-deletion-cost no longer block consolidation.
- Balanced (k=2): scoring with default threshold
- Integer values (1-3): pass k directly as escape hatch
- WhenEmptyOrUnderutilized: k=+inf (any positive savings)

Removes the separate consolidationThreshold field per ellistarn feedback.
Force-pushed aa8d8f3 to 6d29106 (Compare)
My goal here is to solve your problems with fewer user-facing controls. The basic idea is that we prioritize consolidation "moves" that have two properties: first, the cost savings should look pretty good; second, we shouldn't disrupt a ton of pods.

There's a simple estimation technique that normalizes these two factors to NodePool totals (more details in the RFC), so that a node with a lot of waste and small pod disruption is top priority, high-waste/high-disruption and low-waste/low-disruption moves are a rung down in priority, and low-waste/high-disruption moves are avoided entirely.

There's more detail in the RFC, and I'm happy to talk with you about particular cases on Kubernetes Slack, over a Zoom call, or over email. I think the one thing we're missing is the "nodes killed per hour" limit, but I'm hoping that can also be an implicit effect of setting the disruption budget on your NodePool.
x-posting from #2962 -- I should've posted on the RFC here: Discussed offline. Let's model this with `consolidationPolicy: WhenEmpty | Balanced | WhenEmptyOrUnderutilized`, with the option to expand that policy to support additional values in the future, and an escape hatch to pass k directly into that field. Ready to approve once the RFC reflects this. cc: @jonathan-innis @DerekFrank @jmdeal
Prioritizing the nodes with the most waste sounds about right (you need some formula where GPU waste > CPU waste > memory waste; I usually use factors of 1 GPU = 8 CPU, 1 CPU = 4 GB memory). The killed-per-hour limit is less important if all disruptions are "worth it" (saving at least x%, for example).
While working through some of the cases here, I thought about what it really means for this RFC to cover the whole spectrum between WhenEmpty and WhenEmptyOrUnderutilized.

There's a node disruption cost in the current RFC, defaulting to just 1 (the same as disrupting a single pod). That means WhenEmpty can be expressed as: approve only when move disruption cost equals the per-node disruption cost, i.e. no pod contributes positive disruption cost. WhenEmptyOrUnderutilized is k=+inf (any positive savings passes). The whole spectrum lives in one model.

This also surfaces a small behavioral improvement. Today's WhenEmpty checks for literally zero pods. Under the disruption cost framing, a node whose pods all have large negative pod-deletion-cost (disruption cost clamped to 0) also qualifies. Pods that declared themselves free to disrupt shouldn't block consolidation.

RFC is updated. Also making consolidationPolicy an IntOrString so integer values (1-3) pass k directly as the escape hatch.
@grosser Yeah, I completely agree. That's a resource-weighted cost concern. The RFC currently uses Karpenter's pricing model (dollar cost per node) for the savings side, so we're comparing node costs before and after proposed consolidation moves. This already implicitly weights GPUs higher because GPU instances cost more. It doesn't have an explicit resource-type weighting for disruption cost, though; if you see a pressing need for that, I'm happy to add it.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ellistarn, jamesmt-aws

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
> | Value | Behavior |
> |---|---|
> | `WhenEmpty` | Approve only when move disruption cost equals the per-node disruption cost (no pod contributes positive disruption cost) |
> | `1` | Scoring with break-even threshold (deletes only, no replaces in uniform pools) |
> | `Balanced` | Scoring with k=2 (within-family replaces viable) |
> | `3` | Scoring with k=3 (adds cross-family replace pairs) |
One note. We could call this Conservative, Balanced, and Aggressive if we are only supporting 3.
Summary

A new `consolidationPolicy: Balanced` that scores each consolidation move by comparing savings and disruption as fractions of NodePool totals. Moves where disruption outweighs savings are rejected. `consolidationPolicy` is an IntOrString field. `Balanced` maps to k=2. Integer values (1-3) pass k directly as an escape hatch.

Related issues
aws#8868, aws#8536, aws#6642, aws#7146, #2319, #1019, #735, #1851, #2705, #2883, #1440, #1686, #1430, aws#5218, aws#3577