
Commit 3ecea93

Remove Enterprise from v25.2 cdc docs and deprecate experimental changefeed syntax
1 parent e3dabf6 commit 3ecea93

39 files changed: +172 −226 lines

src/current/_data/redirects.yml

Lines changed: 4 additions & 0 deletions
@@ -251,6 +251,10 @@
   sources: ['grant-roles.md']
   versions: ['v21.1']
 
+- destination: how-does-a-changefeed-work.md
+  sources: ['how-does-an-enterprise-changefeed-work.md']
+  versions: ['v25.2']
+
 - destination: kubernetes-overview.md
   sources: ['operate-cockroachdb-kubernetes.md']
   versions: ['v21.2']

src/current/_includes/v25.2/cdc/cdc-schema-locked-example.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-Use the `schema_locked` [storage parameter]({% link {{ page.version.version }}/with-storage-parameter.md %}) to disallow [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) on a watched table, which allows the changefeed to take a fast path that avoids checking if there are schema changes that could require synchronization between [changefeed aggregators]({% link {{ page.version.version }}/how-does-an-enterprise-changefeed-work.md %}). This helps to decrease the latency between a write committing to a table and it emitting to the [changefeed's sink]({% link {{ page.version.version }}/changefeed-sinks.md %}). Enabling `schema_locked`
+Use the `schema_locked` [storage parameter]({% link {{ page.version.version }}/with-storage-parameter.md %}) to disallow [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) on a watched table, which allows the changefeed to take a fast path that avoids checking if there are schema changes that could require synchronization between [changefeed aggregators]({% link {{ page.version.version }}/how-does-a-changefeed-work.md %}). This helps to decrease the latency between a write committing to a table and it emitting to the [changefeed's sink]({% link {{ page.version.version }}/changefeed-sinks.md %}). Enabling `schema_locked`
 
 Enable `schema_locked` on the watched table with the [`ALTER TABLE`]({% link {{ page.version.version }}/alter-table.md %}) statement:
 
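The hunk stops just before the statement it introduces. As a hedged sketch (the table name `movies` is a placeholder, not taken from this commit), enabling and later releasing the parameter looks like this:

~~~ sql
-- Enable schema_locked on the watched table (placeholder table name).
ALTER TABLE movies SET (schema_locked = true);

-- Release it before running a schema change on the table.
ALTER TABLE movies RESET (schema_locked);
~~~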

src/current/_includes/v25.2/cdc/core-csv.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

src/current/_includes/v25.2/cdc/core-url.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

src/current/_includes/v25.2/cdc/create-core-changefeed-avro.md renamed to src/current/_includes/v25.2/cdc/create-sinkless-changefeed-avro.md

Lines changed: 10 additions & 10 deletions
@@ -1,4 +1,4 @@
-In this example, you'll set up a basic changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.
+In this example, you'll set up a sinkless changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas.
 
 1. Use the [`cockroach start-single-node`]({% link {{ page.version.version }}/cockroach-start-single-node.md %}) command to start a single-node cluster:
 
@@ -28,36 +28,36 @@ In this example, you'll set up a basic changefeed for a single-node cluster that
 $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv
 ~~~
 
-{% include {{ page.version.version }}/cdc/core-url.md %}
+{% include {{ page.version.version }}/cdc/sinkless-url.md %}
 
-{% include {{ page.version.version }}/cdc/core-csv.md %}
+{% include {{ page.version.version }}/cdc/sinkless-csv.md %}
 
 1. Enable the `kv.rangefeed.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}):
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> SET CLUSTER SETTING kv.rangefeed.enabled = true;
+SET CLUSTER SETTING kv.rangefeed.enabled = true;
 ~~~
 
 1. Create table `bar`:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> CREATE TABLE bar (a INT PRIMARY KEY);
+CREATE TABLE bar (a INT PRIMARY KEY);
 ~~~
 
 1. Insert a row into the table:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> INSERT INTO bar VALUES (0);
+INSERT INTO bar VALUES (0);
 ~~~
 
 1. Start the basic changefeed:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> EXPERIMENTAL CHANGEFEED FOR bar WITH format = avro, confluent_schema_registry = 'http://localhost:8081';
+CREATE CHANGEFEED FOR TABLE bar WITH format = avro, confluent_schema_registry = 'http://localhost:8081';
 ~~~
 
 ~~~
@@ -69,16 +69,16 @@ In this example, you'll set up a basic changefeed for a single-node cluster that
 
 {% include_cached copy-clipboard.html %}
 ~~~ shell
-$ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
+cockroach sql --insecure -e "INSERT INTO bar VALUES (1)"
 ~~~
 
-1. Back in the terminal where the basic changefeed is streaming, the output will appear:
+1. Back in the terminal where the changefeed is streaming, the output will appear:
 
 ~~~
 bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002
 ~~~
 
-Note that records may take a couple of seconds to display in the basic changefeed.
+Note that records may take a couple of seconds to display in the changefeed.
 
 1. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
 

src/current/_includes/v25.2/cdc/create-core-changefeed.md renamed to src/current/_includes/v25.2/cdc/create-sinkless-changefeed.md

Lines changed: 11 additions & 11 deletions
@@ -1,4 +1,4 @@
-In this example, you'll set up a basic changefeed for a single-node cluster.
+In this example, you'll set up a sinkless changefeed for a single-node cluster.
 
 1. In a terminal window, start `cockroach`:
 
@@ -14,41 +14,41 @@ In this example, you'll set up a basic changefeed for a single-node cluster.
 
 {% include_cached copy-clipboard.html %}
 ~~~ shell
-$ cockroach sql \
+cockroach sql \
 --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \
 --format=csv
 ~~~
 
-{% include {{ page.version.version }}/cdc/core-url.md %}
+{% include {{ page.version.version }}/cdc/sinkless-url.md %}
 
-{% include {{ page.version.version }}/cdc/core-csv.md %}
+{% include {{ page.version.version }}/cdc/sinkless-csv.md %}
 
 1. Enable the `kv.rangefeed.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}):
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> SET CLUSTER SETTING kv.rangefeed.enabled = true;
+SET CLUSTER SETTING kv.rangefeed.enabled = true;
 ~~~
 
 1. Create table `foo`:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> CREATE TABLE foo (a INT PRIMARY KEY);
+CREATE TABLE foo (a INT PRIMARY KEY);
 ~~~
 
 1. Insert a row into the table:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> INSERT INTO foo VALUES (0);
+INSERT INTO foo VALUES (0);
 ~~~
 
-1. Start the basic changefeed:
+1. Start the sinkless changefeed:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> EXPERIMENTAL CHANGEFEED FOR foo;
+CREATE CHANGEFEED FOR TABLE foo;
 ~~~
 ~~~
 table,key,value
@@ -62,13 +62,13 @@ In this example, you'll set up a basic changefeed for a single-node cluster.
 $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)"
 ~~~
 
-1. Back in the terminal where the basic changefeed is streaming, the following output has appeared:
+1. Back in the terminal where the changefeed is streaming, the following output has appeared:
 
 ~~~
 foo,[1],"{""after"": {""a"": 1}}"
 ~~~
 
-Note that records may take a couple of seconds to display in the basic changefeed.
+Note that records may take a couple of seconds to display in the changefeed.
 
 1. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running.
 
7474

src/current/_includes/v25.2/cdc/examples-license-workload.md

Lines changed: 0 additions & 2 deletions
@@ -1,5 +1,3 @@
-1. If you do not already have one, [request a trial {{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/licensing-faqs.md %}#obtain-a-license).
-
 1. Use the [`cockroach start-single-node`]({% link {{ page.version.version }}/cockroach-start-single-node.md %}) command to start a single-node cluster:
 
 {% include_cached copy-clipboard.html %}

src/current/_includes/v25.2/cdc/lagging-ranges.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Use the `changefeed.lagging_ranges` metric to track the number of [ranges]({% li
 - `lagging_ranges_polling_interval` sets the interval rate for when lagging ranges are checked and the `lagging_ranges` metric is updated. Polling adds latency to the `lagging_ranges` metric being updated. For example, if a range falls behind by 3 minutes, the metric may not update until an additional minute afterward.
     - **Default:** `1m`
 
-Use the `changefeed.total_ranges` metric to monitor the number of ranges that are watched by [aggregator processors]({% link {{ page.version.version }}/how-does-an-enterprise-changefeed-work.md %}) participating in the changefeed job. If you're experiencing lagging ranges, `changefeed.total_ranges` may indicate that the number of ranges watched by aggregator processors in the job is unbalanced. You may want to try [pausing]({% link {{ page.version.version }}/pause-job.md %}) the changefeed and then [resuming]({% link {{ page.version.version }}/resume-job.md %}) it, so that the changefeed replans the work in the cluster. `changefeed.total_ranges` shares the same polling interval as the `changefeed.lagging_ranges` metric, which is controlled by the `lagging_ranges_polling_interval` option.
+Use the `changefeed.total_ranges` metric to monitor the number of ranges that are watched by [aggregator processors]({% link {{ page.version.version }}/how-does-a-changefeed-work.md %}) participating in the changefeed job. If you're experiencing lagging ranges, `changefeed.total_ranges` may indicate that the number of ranges watched by aggregator processors in the job is unbalanced. You may want to try [pausing]({% link {{ page.version.version }}/pause-job.md %}) the changefeed and then [resuming]({% link {{ page.version.version }}/resume-job.md %}) it, so that the changefeed replans the work in the cluster. `changefeed.total_ranges` shares the same polling interval as the `changefeed.lagging_ranges` metric, which is controlled by the `lagging_ranges_polling_interval` option.
 
 {{site.data.alerts.callout_success}}
 You can use the [`metrics_label`]({% link {{ page.version.version }}/monitor-and-debug-changefeeds.md %}#using-changefeed-metrics-labels) option to track the `lagging_ranges` and `total_ranges` metric per changefeed.
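The include names `lagging_ranges_polling_interval` without showing it in use. A minimal sketch of tuning it when creating a changefeed, assuming a Kafka sink; the sink URI, table name, and duration values are placeholders, not taken from this commit:

~~~ sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://localhost:9092'
  WITH lagging_ranges_threshold = '2m', lagging_ranges_polling_interval = '30s';
~~~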

src/current/_includes/v25.2/cdc/modify-changefeed.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-To modify an {{ site.data.products.enterprise }} changefeed, [pause]({% link {{ page.version.version }}/create-and-configure-changefeeds.md %}#pause) the job and then use:
+To modify a changefeed, [pause]({% link {{ page.version.version }}/create-and-configure-changefeeds.md %}#pause) the job and then use:
 
 ~~~ sql
 ALTER CHANGEFEED job_id {ADD table DROP table SET option UNSET option};
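A hedged usage sketch of that syntax (the job ID, table name, and option are placeholders): pause the job, alter it, then resume it.

~~~ sql
PAUSE JOB 123456789;
ALTER CHANGEFEED 123456789 ADD vehicles UNSET resolved;
RESUME JOB 123456789;
~~~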

src/current/_includes/v25.2/cdc/msk-tutorial-crdb-setup.md

Lines changed: 0 additions & 4 deletions
@@ -21,10 +21,6 @@
 cockroach sql --insecure
 ~~~
 
-{{site.data.alerts.callout_info}}
-To set your {{ site.data.products.enterprise }} license, refer to the [Licensing FAQs]({% link {{ page.version.version }}/licensing-faqs.md %}#set-a-license) page.
-{{site.data.alerts.end}}
-
 1. Enable the `kv.rangefeed.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}):
 
 {% include_cached copy-clipboard.html %}

src/current/_includes/v25.2/cdc/show-changefeed-job.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ SHOW CHANGEFEED JOBS;
 (2 rows)
 ~~~
 
-To show an individual {{ site.data.products.enterprise }} changefeed:
+To show an individual changefeed:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
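The hunk is cut off at the opening fence. For context, a minimal sketch of the statement a page like this typically completes it with (the job ID is a placeholder):

~~~ sql
SHOW CHANGEFEED JOB 123456789;
~~~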
src/current/_includes/v25.2/cdc/sinkless-csv.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming sinkless changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag]({% link {{ page.version.version }}/cockroach-sql.md %}#sql-flag-format) when starting the [built-in SQL client]({% link {{ page.version.version }}/cockroach-sql.md %}), or set the [`\set display_format=csv` option]({% link {{ page.version.version }}/cockroach-sql.md %}#client-side-options) once the SQL client is open.
+{{site.data.alerts.end}}
src/current/_includes/v25.2/cdc/sinkless-url.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+{{site.data.alerts.callout_info}}
+Sinkless changefeeds return results differently than other SQL statements, which means that they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for sinkless changefeeds. Also, sinkless changefeeds have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`]({% link {{ page.version.version }}/cancel-query.md %}) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default.
+{{site.data.alerts.end}}
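As a hedged illustration of the cancellation behavior this callout describes (the query ID is a placeholder, and the filter on the statement text is an assumption), a sinkless changefeed can be stopped from a second connection:

~~~ sql
-- On a separate connection: find the running changefeed statement, then cancel it.
SELECT query_id, query FROM [SHOW CLUSTER STATEMENTS] WHERE query LIKE 'CREATE CHANGEFEED%';
CANCEL QUERY '<query_id>';
~~~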

src/current/_includes/v25.2/cdc/sql-cluster-settings-example.md

Lines changed: 2 additions & 14 deletions
@@ -2,26 +2,14 @@
 
 {% include_cached copy-clipboard.html %}
 ~~~ shell
-$ cockroach sql --insecure
-~~~
-
-1. Set your organization name and [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/licensing-faqs.md %}#types-of-licenses) key:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING cluster.organization = '<organization name>';
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING enterprise.license = '<secret>';
+cockroach sql --insecure
 ~~~
 
 1. Enable the `kv.rangefeed.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}):
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
-> SET CLUSTER SETTING kv.rangefeed.enabled = true;
+SET CLUSTER SETTING kv.rangefeed.enabled = true;
 ~~~
 
 {% include {{ page.version.version }}/cdc/cdc-cloud-rangefeed.md %}

src/current/_includes/v25.2/sidebar-data/stream-data.json

Lines changed: 2 additions & 2 deletions
@@ -150,9 +150,9 @@
 "title": "Technical Overview",
 "items": [
 {
-"title": "How Does an Enterprise Changefeed Work?",
+"title": "How Does a Changefeed Work?",
 "urls": [
-"/${VERSION}/how-does-an-enterprise-changefeed-work.html"
+"/${VERSION}/how-does-a-changefeed-work.html"
 ]
 }
 ]

src/current/cockroachcloud/costs.md

Lines changed: 1 addition & 1 deletion
@@ -284,7 +284,7 @@ This is the usage for any data leaving CockroachDB such as SQL data being sent t
 
 ### Change data capture (changefeeds)
 
-For change data capture (CDC), all CockroachDB {{ site.data.products.cloud }} clusters can use [Enterprise changefeeds]({% link {{ site.current_cloud_version}}/how-does-an-enterprise-changefeed-work.md %}).
+For change data capture (CDC), all CockroachDB {{ site.data.products.cloud }} clusters can use [Enterprise changefeeds]({% link {{ site.current_cloud_version}}/how-does-a-changefeed-work.md %}).
 
 <section class="filter-content" markdown="1" data-scope="basic">

src/current/cockroachcloud/stream-changefeed-to-snowflake-aws.md

Lines changed: 4 additions & 4 deletions
@@ -7,7 +7,7 @@ docs_area: stream_data
 
 While CockroachDB is an excellent system of record, it also needs to coexist with other systems. For example, you might want to keep your data mirrored in full-text indexes, analytics engines, or big data pipelines.
 
-This page demonstrates how to use an [{{ site.data.products.enterprise }} changefeed](../{{site.current_cloud_version}}/create-changefeed.html) to stream row-level changes to [Snowflake](https://www.snowflake.com/), an online analytical processing (OLAP) database.
+This page demonstrates how to use a [changefeed](../{{site.current_cloud_version}}/create-changefeed.html) to stream row-level changes to [Snowflake](https://www.snowflake.com/), an online analytical processing (OLAP) database.
 
 {{site.data.alerts.callout_info}}
 Snowflake is optimized for inserts and batch rewrites over streaming updates. This tutorial sets up a changefeed to stream data to S3 with Snowpipe sending changes to Snowflake. Snowpipe imports previously unseen files and does not address uniqueness for primary keys, which means that target tables in Snowflake can contain multiple records per primary key.
@@ -91,11 +91,11 @@ Every change to a watched row is emitted as a record in a configurable format (i
 
 1. Create an S3 bucket where streaming updates from the watched tables will be collected.
 
-You will need the name of the S3 bucket when you [create your changefeed](#step-7-create-an-enterprise-changefeed). Ensure you have a set of IAM credentials with write access on the S3 bucket that you will use during [changefeed setup](#step-7-create-an-enterprise-changefeed).
+You will need the name of the S3 bucket when you [create your changefeed](#step-7-create-a-changefeed). Ensure you have a set of IAM credentials with write access on the S3 bucket that you will use during [changefeed setup](#step-7-create-a-changefeed).
 
-## Step 7. Create an enterprise changefeed
+## Step 7. Create a changefeed
 
-Back in the built-in SQL shell, [create an enterprise changefeed](../{{site.current_cloud_version}}/create-changefeed.html). Replace the placeholders with your AWS access key ID and AWS secret access key:
+Back in the built-in SQL shell, [create a changefeed](../{{site.current_cloud_version}}/create-changefeed.html). Replace the placeholders with your AWS access key ID and AWS secret access key:
 
 {% include_cached copy-clipboard.html %}
 ~~~ sql
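The hunk ends at the opening fence. A minimal sketch of the kind of changefeed this step creates, streaming into S3; the table name, bucket, and credentials are placeholders, not taken from this commit:

~~~ sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 's3://<bucket_name>?AWS_ACCESS_KEY_ID=<access_key>&AWS_SECRET_ACCESS_KEY=<secret_key>'
  WITH updated;
~~~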
Binary file (771 KB) not shown.
