Tune chunk size #11

Closed · jml opened this issue Sep 9, 2016 · 15 comments

jml (Contributor) commented Sep 9, 2016

From @tomwilkie

Currently 10mins, should be 1hr.

Copied from original issue: tomwilkie/frankenstein#10

jml (Contributor, Author) commented Sep 9, 2016

From @jml

Why should it be 1hr?

jml (Contributor, Author) commented Sep 9, 2016

From @tomwilkie

We should investigate what the right number is, but the parameters are:

  • Data batched up in the ingesters is at risk of loss in the event of machine failure; we should bound this.
  • Chunks are 1KB, and making chunks bigger doesn't necessarily make things more efficient. We should try to fill chunks as much as possible (something we already monitor for).

The ticket should really say "max 1hr" to bound the loss, if that gives good utilization.
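To make the first parameter concrete, here is a minimal sketch of the loss bound (illustrative Go, not Cortex code; the 15s scrape interval is an assumed typical value):

```go
package main

import (
	"fmt"
	"time"
)

// samplesAtRisk bounds the per-series loss if an ingester dies: at worst,
// everything buffered since its oldest open chunk was cut, i.e. up to
// maxChunkAge worth of samples.
func samplesAtRisk(maxChunkAge, scrapeInterval time.Duration) int {
	return int(maxChunkAge / scrapeInterval)
}

func main() {
	const scrapeInterval = 15 * time.Second // assumption: typical scrape interval

	for _, maxAge := range []time.Duration{10 * time.Minute, time.Hour} {
		fmt.Printf("max chunk age %v -> up to %d samples/series at risk\n",
			maxAge, samplesAtRisk(maxAge, scrapeInterval))
	}
}
```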

jml (Contributor, Author) commented Nov 2, 2016

This is possibly related to the DynamoDB errors we are seeing in #85.

juliusv (Contributor) commented Nov 2, 2016

Oh wow yeah, the default chunk max age of 10 minutes seems way too low. I'm wondering why we're still achieving such decent chunk utilization (sum(cortex_ingester_chunk_utilization_sum) / sum(cortex_ingester_chunk_utilization_count) is around 0.43) with such a low max age. Under certain circumstances, chunks can last for hours or days, so maybe it's the frequent scraping plus noisiness of the data that makes the chunks fill up that fast. Still, I would set the max age to an hour or so (as you said, it depends a bit on our risk profile, of course).

tomwilkie (Contributor) commented Nov 2, 2016

I suspect it can't flush chunks quickly enough, and therefore they are getting more than 10 mins' worth of data.

juliusv (Contributor) commented Nov 3, 2016

> I suspect it can't flush chunks quickly enough, and therefore they are getting more than 10 mins' worth of data.

At least the failures should not have a big effect, because during normal operation only ~4% of chunk puts fail (sum(rate(cortex_ingester_chunk_store_failures_total[1m])) / sum(rate(cortex_ingester_chunk_utilization_count[1m])) -> 0.043). Maybe general latency in non-failed puts delays things somewhat, but the effect cannot be huge: sum(cortex_ingester_memory_chunks) / sum(cortex_ingester_memory_series) shows that there are just ~1.12 chunks per series in memory at any given time (and there is always at least one open head chunk per active series).

tomwilkie (Contributor) commented Nov 6, 2016

Actually makes sense, since we're on doubledelta (not varbit). So it's about 3.3 bytes per sample; at a 15s scrape interval that's about 20 mins per chunk. With 10 mins, you'd expect 50% utilisation.

juliusv (Contributor) commented Nov 6, 2016

Hmm, how do you get to 20 mins per chunk at 15s scrape interval and 3.3 bytes per sample? 1024 / 3.3 = 310 samples per chunk, but 20 minutes of samples would only be 4 * 20 = 80 samples? So a chunk should be full after ~ 310 / 4 = 77 minutes. Or am I missing something stupid?
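Sanity-checking that arithmetic in code (a sketch; the 1KB chunk size and 3.3 bytes/sample are the figures quoted above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		chunkSizeBytes = 1024.0           // 1KB chunk
		bytesPerSample = 3.3              // rough doubledelta cost quoted above
		scrapeInterval = 15 * time.Second // one sample every 15s
	)

	samplesPerChunk := chunkSizeBytes / bytesPerSample          // ~310 samples
	fillTime := time.Duration(samplesPerChunk) * scrapeInterval // ~77.5 mins

	fmt.Printf("samples per chunk: ~%.0f\n", samplesPerChunk)
	fmt.Printf("time to fill a chunk: %v\n", fillTime)
	// If chunks really flushed at the 10min max age, utilization would be ~13%;
	// the observed ~43% suggests chunks live well past the max age.
	fmt.Printf("expected utilization at 10min max age: ~%.0f%%\n",
		100*float64(10*time.Minute)/float64(fillTime))
}
```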

tomwilkie (Contributor) commented

Nope, I was being stupid. I did 300/15 rather than 300*15 (dividing the ~300 samples per chunk by the 15s interval instead of multiplying).

tomwilkie (Contributor) commented Nov 6, 2016

Okay, a bit more progress: the 99th percentile chunk "age" is 27 mins on flush. This could explain the higher utilisation. Just added a dashboard for it, will link to it when it's live.

http://frontend.dev.weave.works/admin/grafana/dashboard/file/cortex-chunks.json

tomwilkie (Contributor) commented

So, the question is: why are some chunks 27 mins old?

Thoughts:

  • it takes 0.8s on average to flush a single chunk (0.07s S3 + 0.7s DynamoDB)
  • we limit concurrent chunk flushes to 100
  • we need to flush 20k chunks every 10 mins
  • that means we should be able to flush all chunks in ~3 mins (see the sketch below)
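A back-of-the-envelope check of that estimate (a sketch; the numbers are taken from the bullets above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		chunksPerCycle = 20000                  // chunks to flush every 10 mins
		timePerChunk   = 800 * time.Millisecond // ~0.07s S3 + ~0.7s DynamoDB
		concurrency    = 100                    // concurrent chunk flushes
	)

	// With 100 workers, each flushes 200 chunks back to back.
	chunksPerWorker := chunksPerCycle / concurrency
	total := time.Duration(chunksPerWorker) * timePerChunk

	fmt.Printf("time to drain the flush queue: %v\n", total) // 2m40s, i.e. ~3 mins
}
```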

tomwilkie (Contributor) commented

Except:

  • we need to write to DynamoDB multiple times for each chunk (due to indexing)
  • we batch DynamoDB writes (currently doing about 200 QPS and consuming about 600 write capacity units/s in DynamoDB, so the batch size is ~3?)

tomwilkie (Contributor) commented Nov 6, 2016

The average number of index entries per chunk is 8.6 here.

And it's no coincidence that 8.6 * 3 mins ≈ 27 mins - which is the 99%ile chunk age...
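The same sketch with that amplification folded in (an assumption for illustration: each of the ~8.6 index entries costs roughly one extra pass through the flush pipeline):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		baseFlushTime   = 3 * time.Minute // time to flush every chunk once (estimated above)
		entriesPerChunk = 8.6             // avg DynamoDB index entries per chunk
	)

	// If each index entry multiplies the work, the queue drains ~8.6x slower.
	effective := time.Duration(entriesPerChunk * float64(baseFlushTime))

	fmt.Printf("effective flush time: %v\n", effective.Round(time.Minute))
	// ~26 mins - right around the observed 27min 99%ile chunk age.
}
```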

tomwilkie (Contributor) commented

With the latest change, we may be writing chunks more than once. Needs fixing.

tomwilkie (Contributor) commented

Set to 1hr in #118, and it's behaving as expected.
