
Commit d29e3db

refactor(quickstarts): improve formatting and readability of Bufstream documentation

- Enhanced the Bufstream quickstart guide by improving text formatting for better readability.
- Adjusted line breaks and spacing for clarity.
- Ensured consistent formatting for code snippets and command outputs.

1 parent: db003bf

File tree

1 file changed: docs/quickstarts/bufstream.mdx (+75, -30 lines)
## Getting Started with Bufstream on Tigris

[Bufstream](https://buf.build/product/bufstream) is the Kafka-compatible message
queue built for the data lakehouse era. It's a drop-in replacement for Apache
Kafka®, but instead of requiring expensive machines with large attached disks,
Bufstream builds on top of off-the-shelf technologies like Object Storage and
Postgres, providing a Kafka implementation designed for the cloud-native era.

Tigris is a globally distributed, multi-cloud object storage platform with
native S3 API support and zero egress fees. It dynamically places data in the
region where it's being accessed, eliminating cross-cloud data transfer costs
without sacrificing performance.

When you combine the two, you get unlimited message retention and truly global
operation. Combining zero egress fees with typed streams means that your
applications can scale across the globe fearlessly.

## Parts overview

[Bufstream](https://buf.build/product/bufstream) is a fully self-hosted drop-in
replacement for Apache Kafka® that writes data to S3-compatible object storage.
It's 100% compatible with the Kafka protocol, including support for exactly-once
semantics (EOS) and transactions. Bufstream is more cost-effective to operate,
and a single cluster can elastically scale to hundreds of GB/s of throughput
without sacrificing performance. It's the universal Kafka replacement for the
modern age.

Even better, for teams sending Protobuf messages across their Kafka topics,
Bufstream can enforce data quality and governance requirements on the broker
with [Protovalidate](https://protovalidate.com/). Bufstream can even store
topics as [Apache Iceberg™](https://iceberg.apache.org/) tables, reducing
time-to-insight in popular data lakehouse products like Snowflake and
ClickHouse.

To interact with Bufstream, we'll use kafkactl, a CLI tool for interacting with
Apache Kafka and compatible tools.

In addition, we'll use Docker, the universal package format for the Internet.
Docker lets you put your application and all its dependencies into a container
image so that it can't conflict with anything else on the system.

## Pre-reqs

- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or a
  similar app, like [Podman Desktop](https://podman-desktop.io/).
- A Tigris account; if you don't have one, you can create one at
  [storage.new](https://storage.new/).

## Clone the example repo

Clone the bufstream-tigris demo repo to your laptop and open it in your editor
of choice. You'll come back to it later to add configuration details.
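
If you're starting from a fresh terminal, the clone step looks something like
this (the repository URL is an assumption; use the one from the demo's README
if it differs):

```bash
# Clone the demo repo and enter it (URL assumed, not confirmed by the source)
git clone https://github.com/tigrisdata-community/bufstream-tigris.git
cd bufstream-tigris
```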

## Create a Tigris bucket

Create a new bucket at [storage.new](https://storage.new/) in the Standard
access tier. Copy its name down into your notes. You'll need it later for
configuration.

Create a new [access key](https://storage.new/accesskey) with Editor permissions
for that bucket. Open the .env file included in the repository and add the
access key ID and secret access key values to the block shown below:

```bash
# Add your Tigris access key ID and its secret below.
TIGRIS_ACCESS_KEY_ID=
TIGRIS_SECRET_ACCESS_KEY=
```

…

```
storage:
  …
    access_key_id:
      env_var: TIGRIS_ACCESS_KEY_ID
    secret_access_key:
      env_var: TIGRIS_SECRET_ACCESS_KEY
```
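
If you'd like to confirm the key pair works before starting the stack, you can
point the AWS CLI at Tigris directly. This is a minimal sketch; the bucket name
is a placeholder for yours, and the endpoint is assumed to be Tigris's S3
endpoint:

```bash
# Load the .env values into the shell, then try listing the bucket.
# Bucket name and endpoint are assumptions; substitute your own.
set -a; source .env; set +a
AWS_ACCESS_KEY_ID="$TIGRIS_ACCESS_KEY_ID" \
AWS_SECRET_ACCESS_KEY="$TIGRIS_SECRET_ACCESS_KEY" \
aws s3 ls s3://your-bucket-name --endpoint-url https://t3.storage.dev
```
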
We’re ready to start Bufstream and begin writing data to Tigris!

## Start Bufstream

Start the environment, using -d to run the Compose project in detached mode,
returning you to a prompt after all services start.

```bash
docker compose up -d
```

You should see the following output:

```text
✔ Network bufstream-on-tigris_bufstream_net  Created   0.0s
✔ Container cli                              Started   0.3s
✔ Container postgres                         Healthy  10.9s
✔ Container bufstream                        Started  11.1s
```
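
If you want to confirm everything is healthy before moving on, standard Docker
Compose commands work here; a quick check, using the service names from the
output above:

```bash
# List the services and their status
docker compose ps

# Tail the broker's logs to confirm it started cleanly
docker compose logs -f bufstream
```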

## Create a topic

Use kafkactl to create a Kafka topic in Bufstream. In your terminal, run the
following:

```bash
docker exec cli kafkactl create topic bufstream-on-tigris
```

When it completes, you'll see the following output:

```text
topic created: bufstream-on-tigris
```
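
Before producing anything, you can double-check that the topic exists; a small
sanity check, assuming kafkactl's get command is available in the same cli
container:

```bash
# List all topics; bufstream-on-tigris should appear
docker exec cli kafkactl get topics
```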

## Produce to the topic

Now that you've created a topic, let's write some data. In the example repo,
we've included a sample message in messages.txt. Run the following in your
terminal:

```bash
docker exec cli kafkactl produce bufstream-on-tigris --file=/messages.txt
```

When it's done, you'll see the following message:

```text
7 messages produced
```
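
You don't have to produce from a file; kafkactl can also take a message inline.
A quick sketch, where the key and value are arbitrary examples:

```bash
# Produce a single inline message (key and value are placeholders)
docker exec cli kafkactl produce bufstream-on-tigris \
  --key=greeting --value="Hello from Tigris"
```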

…

Let's read the messages back. Consume the last 100 messages from the topic:

```bash
docker exec cli kafkactl consume bufstream-on-tigris --tail=100
```

You'll see the seven messages from messages.txt that were published to the
topic:

```text
Hello, world!
…
on
Tigris!
```
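
If you also want to see message keys, or to replay the topic from the start,
kafkactl has flags for both; a sketch, worth checking against kafkactl's help
for your version:

```bash
# Replay from the beginning, print keys alongside values, and exit when done
docker exec cli kafkactl consume bufstream-on-tigris \
  --from-beginning --print-keys --exit
```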

It works! You've successfully produced data to a topic and then consumed it.
From here, you can rest easy knowing that Tigris securely backs up your data,
and you can access it from anywhere in the world. If you open your Tigris
console to the bucket you created, you'll see that Bufstream has added a number
of keys to store your topic data. Feel free to keep using kafkactl, or your own
code, to add more messages and topics, keeping an eye on the bucket for
changes.
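
To peek at what Bufstream wrote, any S3-compatible client can list the bucket.
Here's a sketch with the AWS CLI; the bucket name is a placeholder and the
endpoint is assumed to be Tigris's S3 endpoint:

```bash
# List the keys Bufstream created in your Tigris bucket
aws s3 ls s3://your-bucket-name --recursive \
  --endpoint-url https://t3.storage.dev
```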
