At least with the DynamoDB backend, where we write to multiple different tables, the code makes no attempt to keep writing chunks to some tables while others are queued up. With different provisioning rates for different tables, this can result in a blow-up in chunk queues.
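One way to decouple the tables is to give each its own queue and worker, so a throttled table cannot stall writes destined for the others. The sketch below is illustrative only; the function name, types, and table names are assumptions, not the actual Cortex code.

```go
package main

import (
	"fmt"
	"sync"
)

// writeViaQueues is a minimal sketch of per-table queues: each table
// gets its own channel and worker goroutine, so slow progress on one
// table does not block writes to another. It returns how many chunks
// each worker wrote.
func writeViaQueues(chunksByTable map[string][]string) map[string]int {
	written := map[string]int{}
	var mu sync.Mutex
	var wg sync.WaitGroup
	for table, chunks := range chunksByTable {
		q := make(chan string, len(chunks))
		for _, c := range chunks {
			q <- c
		}
		close(q)
		wg.Add(1)
		go func(table string, q chan string) {
			defer wg.Done()
			for range q {
				// In the real store this would be a rate-limited
				// DynamoDB BatchWriteItem against this table only.
				mu.Lock()
				written[table]++
				mu.Unlock()
			}
		}(table, q)
	}
	wg.Wait()
	return written
}

func main() {
	counts := writeViaQueues(map[string][]string{
		"chunks_2017w01": {"c1", "c2"},
		"chunks_2017w02": {"c3"},
	})
	fmt.Println(counts)
}
```

With real provisioning limits, each worker would also carry its own rate limiter, so one table's throttling back-pressures only its own queue.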
Currently there are multiple goroutines to write to the backend store, but:
they are only given the chunk+index writes for a single timeseries, so they write to the same hash key multiple times when multiple chunks are queued for that series.
they are given work to do in priority order based on how old the oldest chunk is.
To avoid hotspotting, the code should instead look across all writes that are ready, grab as many as will fit efficiently in one batch, and split them up so that writes in the same batch have different hash keys.
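The batching step above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `Write` type, field names, and batch size are assumptions. It buckets ready writes by hash key and then fills each batch round-robin, taking at most one write per key per batch.

```go
package main

import "fmt"

// Write is a hypothetical pending chunk/index write; the field
// names here are assumptions, not the real Cortex types.
type Write struct {
	HashKey string
	Payload string
}

// batchByHashKey splits ready writes into batches of at most
// batchSize, guaranteeing that no batch contains two writes with
// the same hash key, so each batch spreads load across partitions.
func batchByHashKey(writes []Write, batchSize int) [][]Write {
	// Bucket writes per hash key, preserving order within a key.
	byKey := map[string][]Write{}
	var keys []string
	for _, w := range writes {
		if _, ok := byKey[w.HashKey]; !ok {
			keys = append(keys, w.HashKey)
		}
		byKey[w.HashKey] = append(byKey[w.HashKey], w)
	}

	// Fill batches round-robin: one write per distinct key per pass.
	var batches [][]Write
	for len(keys) > 0 {
		var batch []Write
		var remaining []string
		for _, k := range keys {
			if len(batch) == batchSize {
				remaining = append(remaining, k)
				continue
			}
			batch = append(batch, byKey[k][0])
			byKey[k] = byKey[k][1:]
			if len(byKey[k]) > 0 {
				remaining = append(remaining, k)
			}
		}
		batches = append(batches, batch)
		keys = remaining
	}
	return batches
}

func main() {
	writes := []Write{
		{"series-A", "c1"}, {"series-A", "c2"},
		{"series-B", "c1"}, {"series-C", "c1"},
	}
	for i, b := range batchByHashKey(writes, 3) {
		fmt.Println("batch", i, b)
	}
}
```

Here the two chunks queued for `series-A` land in different batches, while the single chunks for `series-B` and `series-C` fill out the first batch, which is exactly the spread the issue asks for.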