The series store will write the index entries (metric:label -> series) for every chunk, and rely on the underlying store to dedupe these. We could keep a cache of the most recently written entries and not bother writing ones we already know exist, removing some write load on the database.
For an ingester with ~1m series, an average chunk size of 6h, and 10 labels per chunk, we're writing ~500 entries / sec (1m series × 10 labels / 6h ≈ 460 entries / sec). A cache of 10m entries would potentially reduce this to ~ entries / sec.
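A minimal sketch of the idea, assuming a hypothetical `IndexEntry` type and write path (the names here are illustrative, not the actual store interface): keep a bounded set of recently written entries and only forward the ones we haven't seen to the underlying store.

```go
package main

import "fmt"

// IndexEntry is a stand-in for the (metric:label -> series) rows written per chunk.
type IndexEntry struct {
	HashKey  string // e.g. "metric:label"
	RangeKey string // e.g. series ID
}

// writeDedupeCache remembers recently written entries so we can skip rewriting
// ones we already know exist. A simple bounded FIFO map for illustration; a
// real implementation would likely use an LRU with a configurable size.
type writeDedupeCache struct {
	maxEntries int
	entries    map[IndexEntry]struct{}
	order      []IndexEntry // insertion order, used for naive eviction
}

func newWriteDedupeCache(maxEntries int) *writeDedupeCache {
	return &writeDedupeCache{
		maxEntries: maxEntries,
		entries:    make(map[IndexEntry]struct{}),
	}
}

// seen reports whether the entry was already written, recording it if not.
func (c *writeDedupeCache) seen(e IndexEntry) bool {
	if _, ok := c.entries[e]; ok {
		return true
	}
	// Evict the oldest entry once the cache is full.
	if len(c.order) >= c.maxEntries {
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.entries, oldest)
	}
	c.entries[e] = struct{}{}
	c.order = append(c.order, e)
	return false
}

// writeEntries filters entries through the cache and only sends the remainder
// to the underlying store (represented here as a plain function).
func writeEntries(cache *writeDedupeCache, store func([]IndexEntry) error, entries []IndexEntry) error {
	toWrite := make([]IndexEntry, 0, len(entries))
	for _, e := range entries {
		if !cache.seen(e) {
			toWrite = append(toWrite, e)
		}
	}
	if len(toWrite) == 0 {
		return nil
	}
	return store(toWrite)
}

func main() {
	// A production cache would be much larger (e.g. ~10m entries).
	cache := newWriteDedupeCache(1024)
	store := func(entries []IndexEntry) error {
		fmt.Printf("writing %d entries\n", len(entries))
		return nil
	}

	entries := []IndexEntry{
		{HashKey: "http_requests_total:job=api", RangeKey: "series-1"},
		{HashKey: "http_requests_total:instance=a", RangeKey: "series-1"},
	}
	_ = writeEntries(cache, store, entries) // writes 2 entries
	_ = writeEntries(cache, store, entries) // writes nothing, both cached
}
```

Since the underlying store already dedupes, the cache only needs to be best-effort: a miss just means one redundant write, so eviction and restarts don't affect correctness.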