
Commit 4e08621

Backport WAL failover feature page to v24.1 (#19667)
* Backport WAL failover feature page to v24.1
  Fixes DOC-13610
  Relates to DOC-11199
1 parent: c197489

14 files changed: +534 / -30 lines

src/current/_includes/v24.1/sidebar-data/self-hosted-deployments.json

Lines changed: 7 additions & 1 deletion
@@ -556,7 +556,13 @@
         "urls": [
           "/${VERSION}/ui-key-visualizer.html"
         ]
-      }
+      },
+      {
+        "title": "WAL Failover",
+        "urls": [
+          "/${VERSION}/wal-failover.html"
+        ]
+      }
     ]
   },
   {
Lines changed: 9 additions & 0 deletions (new include file; referenced below as `wal-failover-intro.md`)

@@ -0,0 +1,9 @@
+On a CockroachDB [node]({% link {{ page.version.version }}/architecture/overview.md %}#node) with [multiple stores]({% link {{ page.version.version }}/cockroach-start.md %}#store), you can mitigate some effects of [disk stalls]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#disk-stalls) by configuring the node to failover each store's [write-ahead log (WAL)]({% link {{ page.version.version }}/architecture/storage-layer.md %}#memtable-and-write-ahead-log) to another store's data directory using the `--wal-failover` flag to [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}#enable-wal-failover) or the `COCKROACH_WAL_FAILOVER` environment variable.
+
+Failing over the WAL may allow some operations against a store to continue to complete despite temporary unavailability of the underlying storage. For example, if the node's primary store is stalled, and the node can't read from or write to it, the node can still write to the WAL on another store. This can allow the node to continue to service requests during momentary unavailability of the underlying storage device.
+
+When WAL failover is enabled, CockroachDB will take the following actions:
+
+- At node startup, each store is assigned another store to be its failover destination.
+- CockroachDB will begin monitoring the latency of all WAL writes. If latency to the WAL exceeds the value of the [cluster setting `storage.wal_failover.unhealthy_op_threshold`]({% link {{page.version.version}}/cluster-settings.md %}#setting-storage-wal-failover-unhealthy-op-threshold), the node will attempt to write WAL entries to a secondary store's volume.
+- CockroachDB will update the [store status endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#store-status-endpoint) at `/_status/stores` so you can monitor the store's status.
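
For illustration only (not part of the diff above): the include names the `--wal-failover` flag and the `COCKROACH_WAL_FAILOVER` environment variable without showing an invocation. A minimal sketch of starting a two-store node with WAL failover, assuming hypothetical store paths and the `among-stores` value described in the v24.1 docs (verify against `cockroach start --help` for your build):

~~~ shell
# Hypothetical store paths; each store's WAL can fail over to the other store's volume.
cockroach start \
  --store=path=/mnt/data1 \
  --store=path=/mnt/data2 \
  --wal-failover=among-stores \
  --join=<join addresses> \
  --certs-dir=certs

# Alternatively, use the environment variable named in the include:
# COCKROACH_WAL_FAILOVER=among-stores cockroach start --store=path=/mnt/data1 --store=path=/mnt/data2 ...
~~~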
Lines changed: 9 additions & 0 deletions (new include file; referenced below as `wal-failover-log-config.md`)

@@ -0,0 +1,9 @@
+{% include_cached copy-clipboard.html %}
+~~~ yaml
+file-defaults:
+  buffered-writes: false
+  buffering:
+    max-staleness: 1s
+    flush-trigger-size: 256KiB
+    max-buffer-size: 50MiB
+~~~
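
For illustration only (not part of the diff above): one way this recommended logging configuration might be supplied at startup, assuming the YAML is saved to a hypothetical local path and passed with `--log-config-file` (confirm the flag against the v24.1 logging docs for your deployment):

~~~ shell
# Write the recommended logging configuration to a file (path is hypothetical).
cat > /etc/cockroach/logs.yaml <<'EOF'
file-defaults:
  buffered-writes: false
  buffering:
    max-staleness: 1s
    flush-trigger-size: 256KiB
    max-buffer-size: 50MiB
EOF

# Start the node with both WAL failover and the logging configuration.
cockroach start \
  --store=path=/mnt/data1 \
  --store=path=/mnt/data2 \
  --wal-failover=among-stores \
  --log-config-file=/etc/cockroach/logs.yaml \
  --join=<join addresses> \
  --certs-dir=certs
~~~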
Lines changed: 12 additions & 0 deletions (new include file; referenced below as `wal-failover-metrics.md`)

@@ -0,0 +1,12 @@
+You can monitor WAL failover occurrences using the following metrics:
+
+- `storage.wal.failover.secondary.duration`: Cumulative time spent (in nanoseconds) writing to the secondary WAL directory. Only populated when WAL failover is configured.
+- `storage.wal.failover.primary.duration`: Cumulative time spent (in nanoseconds) writing to the primary WAL directory. Only populated when WAL failover is configured.
+- `storage.wal.failover.switch.count`: Count of the number of times WAL writing has switched from primary to secondary store, and vice versa.
+
+`storage.wal.failover.secondary.duration` is the primary metric to monitor. You should expect this metric to be `0` unless a WAL failover occurs. If a WAL failover occurs, the rate at which it increases provides an indication of the health of the primary store.
+
+You can access these metrics via the following methods:
+
+- The [**Custom Chart** debug page]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}) in [DB Console]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}).
+- By [monitoring CockroachDB with Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}).
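
For illustration only (not part of the diff above): besides DB Console and Prometheus, these metrics can be spot-checked from a node's own metrics endpoint. A rough sketch, assuming the default HTTP port and the usual convention that dotted metric names are exposed with underscores at `/_status/vars` (verify the exact names in your deployment's output):

~~~ shell
# Look for the WAL failover counters in the node's Prometheus-format metrics.
# Port 8080 is the default HTTP port; adjust for your deployment.
curl -ks https://localhost:8080/_status/vars | grep -E 'storage_wal_failover'

# The store status endpoint mentioned in the intro include:
curl -ks https://localhost:8080/_status/stores
~~~

A growing secondary duration or an increasing switch count suggests the node has been writing its WAL to the secondary store.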
Lines changed: 3 additions & 0 deletions (new include file)

@@ -0,0 +1,3 @@
+- Size = minimum 25 GiB
+- IOPS = 1/10th of the disk for the "user data" store
+- Bandwidth = 1/10th of the disk for the "user data" store
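
As a rough worked example of the ratios above (the provisioned figures are hypothetical): if the "user data" store's disk is provisioned with 5,000 IOPS and 200 MiB/s of bandwidth, this guidance suggests a volume of at least 25 GiB with roughly 500 IOPS and roughly 20 MiB/s of bandwidth.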
Binary files changed: 3 (not shown).

src/current/v24.1/cluster-setup-troubleshooting.md

Lines changed: 4 additions & 0 deletions
@@ -420,6 +420,10 @@ Different filesystems may treat the ballast file differently. Make sure to test
 
 A _disk stall_ is any disk operation that does not terminate in a reasonable amount of time. This usually manifests as write-related system calls such as [`fsync(2)`](https://man7.org/linux/man-pages/man2/fdatasync.2.html) (aka `fdatasync`) taking a lot longer than expected (e.g., more than 20 seconds). The mitigation in almost all cases is to [restart the node]({% link {{ page.version.version }}/cockroach-start.md %}) with the stalled disk. CockroachDB's internal disk stall monitoring will attempt to shut down a node when it sees a disk stall that lasts longer than 20 seconds. At that point the node should be restarted by your [orchestration system]({% link {{ page.version.version }}/recommended-production-settings.md %}#orchestration-kubernetes).
 
+{{site.data.alerts.callout_success}}
+In cloud environments, transient disk stalls are common, often lasting on the order of several seconds. If you deploy a CockroachDB {{ site.data.products.core }} cluster in the cloud, we strongly recommend enabling [WAL failover]({% link {{ page.version.version }}/cockroach-start.md %}#write-ahead-log-wal-failover).
+{{site.data.alerts.end}}
+
 Symptoms of disk stalls include:
 
 - Bad cluster write performance, usually in the form of a substantial drop in QPS for a given workload.

src/current/v24.1/cockroach-start.md

Lines changed: 7 additions & 29 deletions
@@ -233,20 +233,16 @@ Field | Description
 
 #### Write Ahead Log (WAL) Failover
 
-{% include_cached new-in.html version="v24.1" %} On a CockroachDB [node]({% link {{ page.version.version }}/architecture/overview.md %}#node) with [multiple stores](#store), you can mitigate some effects of [disk stalls]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#disk-stalls) by configuring the node to failover each store's [write-ahead log (WAL)]({% link {{ page.version.version }}/architecture/storage-layer.md %}#memtable-and-write-ahead-log) to another store's data directory using the `--wal-failover` flag.
-
-Failing over the WAL may allow some operations against a store to continue to complete despite temporary unavailability of the underlying storage. For example, if the node's primary store is stalled, and the node can't read from or write to it, the node can still write to the WAL on another store. This can give the node a chance to eventually catch up once the disk stall has been resolved.
-
-When WAL failover is enabled, CockroachDB will take the the following actions:
-
-- At node startup, each store is assigned another store to be its failover destination.
-- CockroachDB will begin monitoring the latency of all WAL writes. If latency to the WAL exceeds the value of the [cluster setting `storage.wal_failover.unhealthy_op_threshold`]({% link {{page.version.version}}/cluster-settings.md %}#setting-storage-wal-failover-unhealthy-op-threshold), the node will attempt to write WAL entries to a secondary store's volume.
-- CockroachDB will update the [store status endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#store-status-endpoint) at `/_status/stores` so you can monitor the store's status.
+{% include_cached new-in.html version="v24.1" %} {% include {{ page.version.version }}/wal-failover-intro.md %}
 
 {{site.data.alerts.callout_info}}
 {% include feature-phases/preview.md %}
 {{site.data.alerts.end}}
 
+This page has basic instructions on how to enable WAL failover, disable WAL failover, and monitor WAL failover.
+
+For more detailed instructions showing how to use, test, and monitor WAL failover, as well as descriptions of how WAL failover works in multi-store configurations, see [WAL Failover]({% link {{ page.version.version }}/wal-failover.md %}).
+
 ##### Enable WAL failover
 
 To enable WAL failover, you must take one of the following actions:
@@ -264,14 +260,7 @@ Therefore, if you enable WAL failover and log to local disks, you must also upda
 1. When `buffering` is enabled, `buffered-writes` must be explicitly disabled as shown in the following example. This is necessary because `buffered-writes` does not provide true asynchronous disk access, but rather a small buffer. If the small buffer fills up, it can cause internal routines performing logging operations to hang. This will in turn cause internal routines doing other important work to hang, potentially affecting cluster stability.
 1. The recommended logging configuration for using file-based logging with WAL failover is as follows:
 
-~~~
-file-defaults:
-  buffered-writes: false
-  buffering:
-    max-staleness: 1s
-    flush-trigger-size: 256KiB
-    max-buffer-size: 50MiB
-~~~
+{% include {{ page.version.version }}/wal-failover-log-config.md %}
 
 As an alternative to logging to local disks, you can configure [remote log sinks]({% link {{page.version.version}}/logging-use-cases.md %}#network-logging) that are not correlated with the availability of your cluster's local disks. However, this will make troubleshooting using [`cockroach debug zip`]({% link {{ page.version.version}}/cockroach-debug-zip.md %}) more difficult, since the output of that command will not include the (remotely stored) log files.
 
@@ -284,18 +273,7 @@ To disable WAL failover, you must [restart the node]({% link {{ page.version.ver
 
 ##### Monitor WAL failover
 
-You can monitor if WAL failover occurs using the following metrics:
-
-- `storage.wal.failover.secondary.duration`: Cumulative time spent (in nanoseconds) writing to the secondary WAL directory. Only populated when WAL failover is configured.
-- `storage.wal.failover.primary.duration`: Cumulative time spent (in nanoseconds) writing to the primary WAL directory. Only populated when WAL failover is configured.
-- `storage.wal.failover.switch.count`: Count of the number of times WAL writing has switched from primary to secondary store, and vice versa.
-
-The `storage.wal.failover.secondary.duration` is the primary metric to monitor. You should expect this metric to be `0` unless a WAL failover occurs. If a WAL failover occurs, you probably care about how long it remains non-zero because it provides an indication of the health of the primary store.
-
-You can access these metrics via the following methods:
-
-- The [Custom Chart Debug Page]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}) in [DB Console]({% link v24.1/ui-custom-chart-debug-page.md %}).
-- By [monitoring CockroachDB with Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}).
+{% include {{ page.version.version }}/wal-failover-metrics.md %}
 
 ### Logging
 