`docs/configuration/arguments.md`
Where you don't want to cache every chunk written by ingesters, but you do want to take advantage of chunk write deduplication, this option will make ingesters write a placeholder to the cache for each chunk.
Make sure you configure ingesters with a different cache from queriers, which need the whole value.
#### WAL

- `--ingester.wal-dir`

   Directory where the WAL data should be stored and/or recovered from.

- `--ingester.wal-enabled`

   Setting this to `true` enables writing to the WAL during ingestion.

- `--ingester.checkpoint-enabled`

   Set this to `true` to enable checkpointing of in-memory chunks to disk. This is optional, and helps speed up the replay process.

- `--ingester.checkpoint-duration`

   The interval at which checkpoints should be created.

- `--ingester.recover-from-wal`

   Set this to `true` to recover data from an existing WAL. The data is recovered even if the WAL is disabled while this is set to `true`. The WAL directory needs to be set for this.
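Putting these flags together, a hypothetical ingester invocation might look like the following. The binary name, target flag spelling, and the `/data/wal` path are assumptions for illustration, not taken from this document:

```shell
# Hypothetical invocation; adjust the binary name and paths to your deployment.
# /data/wal is assumed to live on a persistent volume.
cortex -target=ingester \
  --ingester.wal-enabled=true \
  --ingester.wal-dir=/data/wal \
  --ingester.checkpoint-enabled=true \
  --ingester.checkpoint-duration=15m \
  --ingester.recover-from-wal=true
```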
## Runtime Configuration file
Cortex has a concept of a "runtime config" file, which is simply a file that is reloaded while Cortex is running. It is used by some Cortex components to allow operators to change some aspects of Cortex configuration without restarting it. The file is specified using the `-runtime-config.file=<filename>` flag, and the reload period (which defaults to 10 seconds) can be changed with the `-runtime-config.reload-period=<duration>` flag. Previously this mechanism was only used for limits overrides, and the flags were called `-limits.per-user-override-config=<filename>` and `-limits.per-user-override-period=10s` respectively. These are still used if `-runtime-config.file=<filename>` is not specified.
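As a sketch, a minimal runtime config file carrying a per-tenant limits override might be written and wired up like this. The tenant name and limit value are made up for illustration; `ingestion_rate` is one of the Cortex per-tenant limits fields:

```shell
# Sketch: write a hypothetical per-tenant overrides file for runtime config.
cat > runtime.yaml <<'EOF'
overrides:
  tenant-a:
    ingestion_rate: 50000
EOF
# Then run Cortex with:
#   -runtime-config.file=runtime.yaml -runtime-config.reload-period=10s
```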
# Ingesters with WAL

Currently the ingesters running in the chunks storage mode store all their data in memory. If there is a crash, there could be loss of data. The WAL helps fill this gap in reliability.

To use the WAL, there are some changes that need to be made in the deployment.
## Changes to deployment
1. Since ingesters need to have the same persistent volume across restarts/rollouts, all the ingesters should be run as a [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) with fixed volumes.
2. The following flags need to be set:
   * `--ingester.wal-dir` to the directory where the WAL data should be stored and/or recovered from. Note that this should be on the mounted volume.
   * `--ingester.wal-enabled` to `true`, which enables writing to the WAL during ingestion.
   * `--ingester.checkpoint-enabled` to `true` to enable checkpointing of in-memory chunks to disk. This is optional, and helps speed up the replay process.
   * `--ingester.checkpoint-duration` to the interval at which checkpoints should be created. Default is `30m`, and depending on the number of series, it can be brought down to `15m` if there are fewer series per ingester (say 1M).
   * `--ingester.recover-from-wal` to `true` to recover data from an existing WAL. The data is recovered even if the WAL is disabled while this is set to `true`. The WAL directory needs to be set for this.
     * If you are going to enable the WAL, it is advisable to always set this to `true`.
   * `--ingester.tokens-file-path` should be set to the file path where the tokens should be stored. Note that this should be on the mounted volume. Why this is required is described below.
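The flags above could appear in the StatefulSet's ingester container command roughly as follows. The binary name and the `/data` mount point are assumptions for illustration:

```shell
# Hypothetical ingester args for the StatefulSet pod template;
# /data is assumed to be the mount point of the per-pod persistent volume.
cortex -target=ingester \
  --ingester.wal-enabled=true \
  --ingester.wal-dir=/data/wal \
  --ingester.checkpoint-enabled=true \
  --ingester.recover-from-wal=true \
  --ingester.tokens-file-path=/data/tokens
```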
## Changes in lifecycle when WAL is enabled
1. Flushing of data to the chunk store during rollouts or scale-downs is disabled. This is because during a rollout of a StatefulSet there are no ingesters that are simultaneously leaving and joining; rather, the same ingester is shut down and brought back again with an updated config. Hence flushing is skipped and the data is recovered from the WAL.
2. As there are no transfers between ingesters, the tokens are stored on and recovered from disk between rollouts/restarts. This is [not a new thing](https://github.com/cortexproject/cortex/pull/1750), but it is effective when using StatefulSets.
## Migrating from stateless deployments
The ingester _deployment without WAL_ and the _statefulset with WAL_ should be scaled down and up respectively, in sync and without transfer of data between them, to ensure that any ingestion after migration is reliable immediately.
Let's take an example of 4 ingesters. The migration would look something like this:
1. Bring up one stateful ingester `ingester-0` and wait till it's ready (accepting read and write requests).
2. Scale down the old ingester deployment to 3 and wait till the leaving ingester flushes all of its data to the chunk store.
3. Once that ingester has disappeared from `kubectl get pods ...`, add another stateful ingester and wait till it's ready. This ensures that no transfer happens. Now you have `ingester-0 ingester-1`.
4. Repeat step 2 to remove another ingester from the old deployment.
5. Repeat step 3 to add another stateful ingester. Now you have `ingester-0 ingester-1 ingester-2`.
6. Repeat steps 4 and 5, and you will finally have `ingester-0 ingester-1 ingester-2 ingester-3`.
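One iteration of the loop above can be sketched with `kubectl`. All resource names here are hypothetical and need to match your actual deployment and StatefulSet:

```shell
# Hypothetical resource names; one iteration of the migration loop.
kubectl scale statefulset ingester-statefulset --replicas=1  # step 1: bring up ingester-0
kubectl scale deployment ingester --replicas=3               # step 2: one old ingester leaves and flushes
kubectl get pods -w                                          # wait for the old pod to disappear
kubectl scale statefulset ingester-statefulset --replicas=2  # step 3: bring up ingester-1
```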
## How to scale up/down
### Scale up
Scaling up is the same as what you would do without WAL or statefulsets. Nothing to change here.
### Scale down
Since Kubernetes doesn't differentiate between a rollout and a scale-down when sending a signal, the flushing of chunks is disabled by default. Hence the only thing to take care of during a scale-down is the flushing of chunks.
There are 2 ways to do it, with the latter being a fallback option.
**First option**
Consider you have 4 ingesters `ingester-0 ingester-1 ingester-2 ingester-3` and you want to scale down to 2 ingesters. The ingesters which will be shut down according to StatefulSet rules are `ingester-3` and then `ingester-2`.
Hence, before actually scaling down in Kubernetes, port-forward to those ingesters and hit the [`/shutdown`](https://github.com/cortexproject/cortex/pull/1746) endpoint. This will flush the chunks and shut down the ingesters (while also removing them from the ring).
After hitting the endpoint for `ingester-2 ingester-3`, scale down the ingesters to 2.
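A sketch of that sequence, assuming the ingester HTTP port is 80 and the pod and StatefulSet names are those used above (all names and ports are assumptions):

```shell
# Hypothetical pod names, StatefulSet name, and port.
kubectl port-forward ingester-3 8080:80 &
curl http://localhost:8080/shutdown   # flushes chunks, leaves the ring, shuts down
kubectl port-forward ingester-2 8081:80 &
curl http://localhost:8081/shutdown
kubectl scale statefulset ingester-statefulset --replicas=2
```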
PS: Given you have to scale down 1 ingester at a time, you can pipeline the shutdown and scale-down process instead of hitting the shutdown endpoint for all to-be-scaled-down ingesters at the same time.
**Fallback option**
There is a [flush mode ingester](https://github.com/cortexproject/cortex/pull/1747) in progress, and per recent discussions there will be a separate target called flusher in its place.
You can run it as a Kubernetes job which will:

* Attach to the volume of the scaled-down ingester
* Recover from the WAL
* And flush all the chunks.
This job should be run for every ingester for which you missed hitting the shutdown endpoint in the first option.
More info about the flusher target will be added once it's upstream.
The accompanying code change in the ingester's `Config` registers the WAL flags alongside the existing ones:

```go
type Config struct {
	// ...
	// Config for transferring chunks. Zero or negative = no retries.
	MaxTransferRetries int
	// ...
}

// RegisterFlags adds the flags required to config this to the given FlagSet
func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
	cfg.LifecyclerConfig.RegisterFlags(f)
	cfg.WALConfig.RegisterFlags(f)

	f.IntVar(&cfg.MaxTransferRetries, "ingester.max-transfer-retries", 10, "Number of times to try and transfer chunks before falling back to flushing. Negative value or zero disables hand-over.")
	f.DurationVar(&cfg.FlushCheckPeriod, "ingester.flush-period", 1*time.Minute, "Period with which to attempt to flush chunks.")
	// ...
}
```