Describe the bug
This is not a bug we experienced (because we run Cortex with `-distributor.shard-by-all-labels=true`), but while working on #3252 I realised this could be an issue for specific Cortex setups.
When Cortex runs with the WAL enabled (chunks and blocks storage), transfers on shutdown disabled, and `-distributor.shard-by-all-labels=false` (the default), there may be gaps when querying series/samples from ingesters during an ingesters rollout, because:
- There will be series spillover during the rollout
- Queriers always query the ingesters currently holding the tokens, but some series/samples may be in different ingesters due to the spillover
A similar scenario may occur during an ingesters scale-up even if transfers on shutdown are enabled. Generally speaking, this issue could occur whenever there's a ring topology change.
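To make the sharding behaviour concrete, here is a minimal Go sketch (not Cortex's actual code; `shardToken` and its inputs are hypothetical) of how the two modes pick a ring token. With `-distributor.shard-by-all-labels=false` only the tenant and metric name feed the hash, so all series of a metric map to the same tokens and thus the same ingesters; after a topology change, queriers looking up the current token owners can miss series that spilled over to other ingesters during the rollout.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// shardToken computes an illustrative ring token for a series.
// Cortex uses its own fnv-based hashing internally; this only
// sketches the shape of the logic, not the real implementation.
func shardToken(userID, metricName string, labels map[string]string, shardByAllLabels bool) uint32 {
	h := fnv.New32()
	h.Write([]byte(userID))
	h.Write([]byte(metricName))
	if shardByAllLabels {
		// Include every label pair, in a stable order, in the hash.
		keys := make([]string, 0, len(labels))
		for k := range labels {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		for _, k := range keys {
			h.Write([]byte(k))
			h.Write([]byte(labels[k]))
		}
	}
	return h.Sum32()
}

func main() {
	labelsA := map[string]string{"instance": "a"}
	labelsB := map[string]string{"instance": "b"}
	// shard-by-all-labels=false: both series of http_requests_total
	// hash to the same token, i.e. the same ingesters own them.
	fmt.Println(shardToken("tenant-1", "http_requests_total", labelsA, false))
	fmt.Println(shardToken("tenant-1", "http_requests_total", labelsB, false))
	// shard-by-all-labels=true: the instance label changes the token,
	// spreading the metric's series across the whole ring.
	fmt.Println(shardToken("tenant-1", "http_requests_total", labelsA, true))
	fmt.Println(shardToken("tenant-1", "http_requests_total", labelsB, true))
}
```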
Storage Engine
- Blocks
- Chunks
This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.