Ruler should be protected against high-cardinality output #1396
Comments
I agree! The implementation is going to be a bit tricky, I believe. I think we are going to have to write our own rule manager instead of using the Prometheus upstream one.
Perhaps this could be implemented via engineQueryFunc.
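A rough sketch of that idea, assuming a wrapper around the rules.QueryFunc that rules.EngineQueryFunc returns; limitedQueryFunc and the error message are illustrative names, not existing Cortex or Prometheus code:

```go
package ruler

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/prometheus/promql"
	"github.com/prometheus/prometheus/rules"
)

// limitedQueryFunc (hypothetical) wraps an existing rules.QueryFunc, e.g. the
// one returned by rules.EngineQueryFunc, and fails the evaluation when a rule
// produces more output series than the configured limit.
func limitedQueryFunc(inner rules.QueryFunc, limit int) rules.QueryFunc {
	return func(ctx context.Context, q string, t time.Time) (promql.Vector, error) {
		v, err := inner(ctx, q, t)
		if err != nil {
			return nil, err
		}
		if limit > 0 && len(v) > limit {
			return nil, fmt.Errorf("rule evaluation produced %d series, exceeding the limit of %d", len(v), limit)
		}
		return v, nil
	}
}
```

The rule manager would then be constructed with the wrapped function instead of the raw one.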
Possibly addressed by prometheus/prometheus#9260
@krishnateja325 is this something you are looking at?
Yes, I pulled in prometheus/prometheus#9260 and added support for the limit field in this PR: #5528
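For context, the per-group limit that prometheus/prometheus#9260 introduced is expressed in the rule group definition; a minimal example (the group name, expression, and the value 1000 are made up):

```yaml
groups:
  - name: example
    interval: 1m
    # Caps how many alerts an alerting rule, or series a recording rule,
    # may produce per evaluation; 0 (the default) means no limit.
    limit: 1000
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```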
/assign @krishnateja325
Suppose someone creates a rule, either recording rule or alert, that generates 100,000 output series.
Currently, all series will be sent in one request, which will hit the distributor's rate limit (default burst size 50,000) and be dropped.
This creates problems: the error logged by the rule manager,
caller=manager.go:539 msg="rule sample appending failed" err="rpc error: code = Code(429) desc = ingestion rate limit (50000) exceeded while adding 100000 samples"
doesn't include the tenant ID.
I'm thinking ruler should cap the size of its output, and generate some signal (a synthetic series, perhaps?) that can be used to know when the cap was hit.
If we want to handle outputs from rules in the hundreds of thousands, we should batch them up so they don't choke the distributor.
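A minimal sketch of what that batching could look like, assuming a generic helper in the ruler; pushInBatches, the push callback, and the batch size are only illustrative, not existing Cortex APIs:

```go
package ruler

import "context"

// maxBatchSize is an illustrative cap chosen to stay well under the
// distributor's default 50,000 ingestion burst size.
const maxBatchSize = 10000

// pushInBatches (hypothetical) splits the samples produced by one rule
// evaluation into fixed-size batches and pushes each batch separately,
// so a single huge evaluation cannot exceed the rate limit in one request.
func pushInBatches[S any](ctx context.Context, samples []S, push func(context.Context, []S) error) error {
	for start := 0; start < len(samples); start += maxBatchSize {
		end := start + maxBatchSize
		if end > len(samples) {
			end = len(samples)
		}
		if err := push(ctx, samples[start:end]); err != nil {
			return err
		}
	}
	return nil
}
```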
The channel to alertmanager is also limited:
caller=notifier.go:371 msg="Alert batch larger than queue capacity, dropping alerts" num_dropped=30973