
Conversation

@houshengbo
Contributor

@houshengbo houshengbo commented May 15, 2025

Fixes #15889

Proposed Changes

  • Revision is set to healthy when the deployment has the minimum number of replicas running and ready.

Release Note

Revision is set to healthy when the deployment has the minimum number of replicas running and ready.

@knative-prow knative-prow bot requested review from dprotaso, dsimansk and skonto May 15, 2025 21:29
@knative-prow knative-prow bot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label May 15, 2025
@knative-prow

knative-prow bot commented May 15, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: houshengbo
Once this PR has been reviewed and has the lgtm label, please assign dprotaso for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@codecov

codecov bot commented May 15, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 80.95%. Comparing base (bbf34f6) to head (797b64c).
⚠️ Report is 134 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #15890      +/-   ##
==========================================
+ Coverage   80.93%   80.95%   +0.01%     
==========================================
  Files         210      210              
  Lines       16731    16731              
==========================================
+ Hits        13542    13545       +3     
+ Misses       2839     2836       -3     
  Partials      350      350              


@dprotaso
Member

If min-scale is involved then I think the issue might be in a different place - more specifically the PodAutoscaler.

For example PodAutoscalerConditionScaleTargetInitialized should only be 'True' when we've reached min-scale.

var podCondSet = apis.NewLivingConditionSet(
	PodAutoscalerConditionActive,
	PodAutoscalerConditionScaleTargetInitialized,
	PodAutoscalerConditionSKSReady,
)

We mark the revision ready here

// Don't mark the resources available, if deployment status already determined
// it isn't so.
if ps.IsScaleTargetInitialized() && !resUnavailable {
	// Precondition for PA being initialized is SKS being active and
	// that implies that |service.endpoints| > 0.
	rs.MarkResourcesAvailableTrue()
	rs.MarkContainerHealthyTrue()
}


@dprotaso
Member

Another thing we should do is update the e2e test to surface the issue you found - eg. fail if Ready=True but ReplicaReadyCount < MinScale

https://github.com/knative/serving/blob/main/test/e2e/minscale_readiness_test.go

@houshengbo
Contributor Author

@dprotaso Do we allow this situation to happen?
Revision is set to true even if there is only one pod, even if the minscale is larger than one.

If the revision is set to true, I guess traffic is assigned and shifted to the new revision, and the old revision starts to terminate (this is how we ran into the issue). If the deployment is running with fewer pods than the minscale, revision availability should NOT be set to true.

I will check on the podautoscaler part and the test cases.

@dprotaso
Member

Revision is set to true even if there is only one pod, even if the minscale is larger than one.

No - because I would expect PodAutoscalerConditionScaleTargetInitialized to be False until minReady=minScale

When that condition is False then PodAutoscaler.Ready Condition should be False as well because it's a child condition of that LivingConditionSet.
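The parent/child semantics described here can be sketched in a few lines of Go. This is only a minimal illustration of how a living condition set aggregates its dependent conditions into Ready, not the actual knative.dev/pkg/apis implementation:

```go
package main

import "fmt"

// Condition mirrors the shape of a Knative status condition: a type plus a
// status of "True", "False", or "Unknown".
type Condition struct {
	Type   string
	Status string
}

// readyStatus aggregates dependent conditions the way a living condition set
// computes the top-level Ready condition: any False dependent forces Ready to
// False, any Unknown dependent (with no False) leaves Ready Unknown, and
// Ready is True only when every dependent is True.
func readyStatus(deps []Condition) string {
	ready := "True"
	for _, c := range deps {
		switch c.Status {
		case "False":
			return "False"
		case "Unknown":
			ready = "Unknown"
		}
	}
	return ready
}

func main() {
	// ScaleTargetInitialized=False (minScale not yet reached) drags Ready
	// to False even though the other dependents are True.
	fmt.Println(readyStatus([]Condition{
		{"Active", "True"},
		{"ScaleTargetInitialized", "False"},
		{"SKSReady", "True"},
	}))
}
```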

@houshengbo
Contributor Author

rev.Status.MarkContainerHealthyTrue()

This line was added to pkg/reconciler/revision/reconcile_resources.go, because we want changes to the deployment resource itself to be reflected in the revision.

As for the PodAutoscaler, the logic is already correct in kpa:

https://github.com/knative/serving/blob/main/pkg/reconciler/autoscaling/kpa/kpa.go#L274

and in hpa

https://github.com/knative/serving/blob/main/pkg/reconciler/autoscaling/hpa/hpa.go#L102

so the PodAutoscaler behaves as expected for both kpa and hpa: each checks whether the minscale has been reached before setting the revision status.
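The gate both reconcilers apply can be reduced to a one-line predicate. This is a hedged sketch of the check described above; readyPods and minScale are illustrative names, not the actual kpa.go/hpa.go variables:

```go
package main

import "fmt"

// scaleTargetInitialized sketches the min-scale gate: the PodAutoscaler
// should only report its scale target as initialized once the number of
// ready pods has reached the revision's effective minimum scale. A minScale
// below 1 is treated as 1, since at least one pod must become ready.
func scaleTargetInitialized(readyPods, minScale int) bool {
	if minScale < 1 {
		minScale = 1
	}
	return readyPods >= minScale
}

func main() {
	fmt.Println(scaleTargetInitialized(1, 3)) // one of three pods ready: not initialized
	fmt.Println(scaleTargetInitialized(3, 3)) // minScale reached: initialized
}
```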

@knative-prow knative-prow bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels May 16, 2025
@houshengbo
Contributor Author

/retest

@houshengbo
Contributor Author

/retest

@houshengbo
Contributor Author

/retest

@houshengbo
Contributor Author

/retest

@dprotaso
Member

/test istio-latest-no-mesh_serving_main

@dprotaso
Member

curling GitHub for a file is getting an HTTP 429 rate limit

/test istio-latest-no-mesh_serving_main

@houshengbo
Contributor Author

/test istio-latest-no-mesh_serving_main

@houshengbo
Contributor Author

@dprotaso I tried multiple times to make sure there is no race condition. The test results seem to be good so far.

@dprotaso
Member

@dprotaso left a comment

I've been trying to run the changes to TestMinScale locally against v1.18.0 (which should have the broken scaling). I'm finding that the test, as is, does not fail reliably.

I think we need to change the test to check that the prior revision isn't scaled down until after the second revision is ready.

Comment on lines +99 to +101
if replicas := *revision.Status.ActualReplicas; replicas < minScale {
	t.Fatalf("Container is indicated as healthy. Expected actual replicas for revision %v to be %v but got %v", revision.Name, minScale, replicas)
}
Member

I'm hitting this condition when running locally against a broken serving release. I wouldn't expect that, so it makes me think there's a delay in when ActualReplicas is updated.

This fails sometimes when scaling up the first revision or the second. Because of that, I don't think we can reliably use this value in the test.

This is probably a separate issue from the one you're addressing.

Member

This check is also duplicated on line 117

@dprotaso
Member

One other observation I have - I think as part of the test you should introduce a readiness delay on the second revision.

What I'm seeing locally is that the new revision spins up almost instantly because the image is pre-cached on the node, so an early termination of the first revision is never observed.

@yuzisun

yuzisun commented Jun 1, 2025

@houshengbo @dprotaso where are we on this?

@dprotaso
Member

dprotaso commented Jun 5, 2025

I dug into this more - I think we should fix PropagateAutoscalerStatus

func (rs *RevisionStatus) PropagateAutoscalerStatus(ps *autoscalingv1alpha1.PodAutoscalerStatus) {

When we have no pods ready we end up with a PA Status of

status:
  actualScale: 0
  conditions:
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    message: Requests to the target are being buffered as resources are provisioned.
    reason: Queued
    status: Unknown
    type: Active
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    message: Requests to the target are being buffered as resources are provisioned.
    reason: Queued
    status: Unknown
    type: Ready
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    message: K8s Service is not ready
    reason: NotReady
    status: Unknown
    type: SKSReady
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    status: Unknown
    type: ScaleTargetInitialized
  desiredScale: 3
  metricsServiceName: revision-failure-00002-private
  observedGeneration: 1
  serviceName: revision-failure-00002

When 1 replica is ready (out of 3) we have the PA status

status:
  actualScale: 1
  conditions:
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    message: Requests to the target are being buffered as resources are provisioned.
    reason: Queued
    status: Unknown
    type: Active
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    message: Requests to the target are being buffered as resources are provisioned.
    reason: Queued
    status: Unknown
    type: Ready
  - lastTransitionTime: "2025-06-05T01:44:09Z"
    status: "True"
    type: SKSReady
  - lastTransitionTime: "2025-06-05T01:42:44Z"
    status: Unknown
    type: ScaleTargetInitialized
  desiredScale: 3
  metricsServiceName: revision-failure-00002-private
  observedGeneration: 1
  serviceName: revision-failure-00002

So PropagateAutoscalerStatus doesn't handle the case where SKSReady=True and ScaleTargetInitialized=Unknown - it seems like we should set RevisionConditionResourcesAvailable=Unknown in that case.
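The proposed handling can be sketched as a small decision function. This is only an illustration of the suggested fix, assuming string-valued condition statuses; the real PropagateAutoscalerStatus operates on a *PodAutoscalerStatus:

```go
package main

import "fmt"

// resourcesAvailable sketches the propagation suggested above: even when
// SKSReady is already True (some endpoint exists), ResourcesAvailable should
// stay Unknown until ScaleTargetInitialized is True, i.e. until minScale
// ready replicas exist. Statuses are "True", "False", or "Unknown".
func resourcesAvailable(scaleTargetInitialized, sksReady string) string {
	if scaleTargetInitialized == "False" || sksReady == "False" {
		return "False"
	}
	if scaleTargetInitialized == "Unknown" || sksReady == "Unknown" {
		// The case observed in this PR: SKSReady=True but
		// ScaleTargetInitialized=Unknown (1 of 3 replicas ready).
		return "Unknown"
	}
	return "True"
}

func main() {
	// One of three replicas ready: SKS is up, but scale target not initialized.
	fmt.Println(resourcesAvailable("Unknown", "True"))
}
```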

@github-actions

github-actions bot commented Sep 4, 2025

This Pull Request is stale because it has been open for 90 days with
no activity. It will automatically close after 30 more days of
inactivity. Reopen with /reopen. Mark as fresh by adding the
comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 4, 2025
@dprotaso
Member

I have the alternate fix in this PR - #16094

@dprotaso dprotaso closed this Sep 22, 2025

Labels

lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Rollout of new revision will kill old pods at the same time

3 participants