
Add a new metric to indicate the current queue length. #1969


Closed
whalecold opened this issue Aug 4, 2022 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@whalecold

Should we add a metric named controller_runtime_queue_length to indicate the length of the queue in the controller? Sometimes Reconcile has not been invoked, and I want to know whether the queue is empty.

@FillZpp
Contributor

FillZpp commented Aug 9, 2022

You should use workqueue_depth, which is the current depth of the workqueue. Here are the workqueue metrics: https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/metrics/workqueue.go#L41
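For illustration, here is a minimal sketch of how this typically looks in an operator of that era (the MetricsBindAddress option and the per-queue name label are assumptions about common controller-runtime usage at the time, not something stated in this thread): the manager's default metrics endpoint already serves the workqueue metrics, so workqueue_depth is exposed per queue without any extra registration.

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// The manager wires controllers' workqueues into the shared metrics
	// registry, so workqueue_depth (labelled by queue name) is served from
	// the metrics endpoint without any extra registration.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		MetricsBindAddress: ":8080", // scrape http://<pod-ip>:8080/metrics
	})
	if err != nil {
		panic(err)
	}
	_ = mgr.Start(ctrl.SetupSignalHandler())
}
```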

@Somefive

Will there be any conflict between

workqueue.SetProvider(workqueueMetricsProvider{})
and https://github.com/kubernetes/component-base/blob/03d57670a9cda43def5d9c960823d6d4558e99ff/metrics/prometheus/workqueue/metrics.go#L101?

Both repositories try to set the provider, but only the earliest call takes effect. If the component-base library is initialized first, the workqueue_depth metric from component-base is used and the one in controller-runtime never registers, so the metrics endpoint exposed by controller-runtime cannot report the workqueue_depth value.

Is there any hint or recommendation for handling that?
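A standalone sketch of the first-call-wins behavior described above (this is not code from either repository; Provider and SetProvider here are stand-ins that only mimic the observation that the earliest registration wins):

```go
package main

import (
	"fmt"
	"sync"
)

// Provider stands in for a metrics provider; whoever sets the global
// provider first wins, and later callers are silently ignored.
type Provider interface{ Name() string }

type componentBaseProvider struct{}

func (componentBaseProvider) Name() string { return "component-base" }

type controllerRuntimeProvider struct{}

func (controllerRuntimeProvider) Name() string { return "controller-runtime" }

var (
	once     sync.Once
	provider Provider
)

// SetProvider models the behavior described above: only the earliest call
// takes effect.
func SetProvider(p Provider) {
	once.Do(func() { provider = p })
}

func main() {
	SetProvider(componentBaseProvider{})     // e.g. run from a dependency's init()
	SetProvider(controllerRuntimeProvider{}) // registered later: silently ignored

	fmt.Println("active provider:", provider.Name()) // prints "component-base"
}
```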

@FillZpp
Contributor

FillZpp commented Oct 10, 2022

@Somefive k/component-base is synced from k/k/staging/src/k8s.io/component-base and is mostly intended for the core components of Kubernetes, such as KCM and kube-scheduler, which do not import controller-runtime.

On the other hand, most custom operators based on controller-runtime probably don't have to rely on component-base. But if both are imported by a project, you will find that each registers itself via workqueue.SetProvider. So why do you need component-base?

@Somefive

Somefive commented Oct 21, 2022

@Somefive k/component-base is synced from k/k/staging/src/k8s.io/component-base and is mostly intended for the core components of Kubernetes, such as KCM and kube-scheduler, which do not import controller-runtime.

On the other hand, most custom operators based on controller-runtime probably don't have to rely on component-base. But if both are imported by a project, you will find that each registers itself via workqueue.SetProvider. So why do you need component-base?

The component-base library might not be a direct dependency. However, other libraries like k8s.io/apiextensions-apiserver, sigs.k8s.io/controller-runtime, github.com/coreos/prometheus-operator, and many others depend on it. If code in these libraries calls component-base functions, the initialization code in component-base runs and may call workqueue.SetProvider before controller-runtime does, which prevents controller-runtime from registering its own workqueue_depth metric later.
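Roughly, the ordering problem looks like this (a sketch, assuming the two packages referenced in this thread each register a provider from their init(); in a real project the component-base import would typically be transitive rather than written out):

```go
package main

// A sketch of why registration order matters: whichever package's init()
// calls workqueue.SetProvider first wins, and the other registration is
// silently ignored.
import (
	// Stands in for whatever transitive dependency registers a workqueue
	// metrics provider during initialization.
	_ "k8s.io/component-base/metrics/prometheus/workqueue"

	// controller-runtime's metrics package also registers a provider when
	// initialized, but by then the global provider may already be set.
	_ "sigs.k8s.io/controller-runtime/pkg/metrics"
)

func main() {
	// Which provider ends up backing workqueue_depth is decided entirely by
	// package initialization order, not by anything written here.
}
```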

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) label Jan 19, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) labels Feb 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Mar 20, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RainbowMango
Member

RainbowMango commented Dec 12, 2024

/reopen

As mentioned by @Somefive, the issue indeed exists and has not yet been resolved.

@k8s-ci-robot k8s-ci-robot reopened this Dec 12, 2024
@k8s-ci-robot
Contributor

@RainbowMango: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jan 11, 2025