Commit e94aeb4

Merge pull request #302 from tigrisdata/automq
feat(docs): add AutoMQ quickstart guide for Tigris integration
2 parents f8aea1e + 80ed628

1 file changed (+205, -0)

docs/quickstarts/automq.md

# AutoMQ on Tigris

[AutoMQ](https://www.automq.com) is an
[Apache Kafka-compatible](https://kafka.apache.org/documentation/) streaming
engine that stores all of its log data durably in object storage. When paired
with Tigris, AutoMQ brokers can run fully stateless, with no attached disks and
no replication overhead, while benefiting from Tigris' globally distributed
object storage with zero egress fees.

## Quick Start with Docker Compose

The easiest way to run AutoMQ with Tigris is using Docker Compose. This guide
will walk you through setting up a single-node AutoMQ cluster backed by Tigris
storage.

:::tip This guide is based on the
[official AutoMQ Docker Compose setup](https://github.com/AutoMQ/automq-labs/blob/main/opensource-setup/docker-compose/docker-compose.yaml).
For more deployment options, see the
[AutoMQ Deployment Overview](https://www.automq.com/docs/automq/deployment/overview).
:::

### 1. Prerequisites

- **Docker** and **Docker Compose** installed
- A **Tigris account** - create one at
  [https://storage.new](https://storage.new)
- **Tigris credentials** - create an Access Key and Secret Key from your Tigris
  dashboard at
  [https://console.tigris.dev/createaccesskey](https://console.tigris.dev/createaccesskey)
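
Before continuing, you can confirm the container tooling is in place by
printing the installed versions (on older installs the `docker compose` plugin
may instead be the standalone `docker-compose` binary):

```bash
# Verify Docker and Docker Compose are available
docker --version
docker compose version || docker-compose --version
```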

### 2. Create Buckets in Tigris

AutoMQ requires two buckets: one for data storage and one for operational
metadata. You can create them via the Tigris console or using the AWS CLI:

```bash
# Configure credentials
export AWS_ACCESS_KEY_ID=YOUR_TIGRIS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_TIGRIS_SECRET_KEY
export AWS_ENDPOINT_URL_S3=https://t3.storage.dev

# Create buckets for AutoMQ data and operations storage
aws s3api create-bucket --bucket your-automq-data --endpoint-url https://t3.storage.dev
aws s3api create-bucket --bucket your-automq-ops --endpoint-url https://t3.storage.dev
```

**Note**: Bucket names must be globally unique across all Tigris users.
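
To double-check that both buckets exist before wiring them into AutoMQ, list
them against the same endpoint:

```bash
# Confirm the two buckets were created
aws s3api list-buckets --endpoint-url https://t3.storage.dev
```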

### 3. Configure Docker Compose

Edit the `docker-compose.yaml` file and update the Tigris credentials and bucket
names:

```yaml
services:
  server1:
    container_name: "automq-server1"
    image: automqinc/automq:1.6.0-rc0
    stop_grace_period: 1m
    environment:
      # Replace with your Tigris credentials
      - KAFKA_S3_ACCESS_KEY=tid_YOUR_ACCESS_KEY_HERE
      - KAFKA_S3_SECRET_KEY=tsec_YOUR_SECRET_KEY_HERE
      - KAFKA_HEAP_OPTS=-Xms1g -Xmx4g -XX:MetaspaceSize=96m -XX:MaxDirectMemorySize=1G
      - CLUSTER_ID=3D4fXN-yS1-vsQ8aJ_q4Mg
    command:
      - bash
      - -c
      - |
        /opt/automq/kafka/bin/kafka-server-start.sh \
        /opt/automq/kafka/config/kraft/server.properties \
        --override cluster.id=$$CLUSTER_ID \
        --override node.id=0 \
        --override controller.quorum.voters=0@server1:9093 \
        --override controller.quorum.bootstrap.servers=server1:9093 \
        --override advertised.listeners=PLAINTEXT://server1:9092 \
        --override s3.data.buckets='0@s3://your-automq-data?region=auto&endpoint=https://t3.storage.dev' \
        --override s3.ops.buckets='1@s3://your-automq-ops?region=auto&endpoint=https://t3.storage.dev' \
        --override s3.wal.path='0@s3://your-automq-data?region=auto&endpoint=https://t3.storage.dev'
    networks:
      - automq_net

networks:
  automq_net:
    driver: bridge
```

**Key Configuration Parameters:**

- `KAFKA_S3_ACCESS_KEY` - Your Tigris Access Key (starts with `tid_`)
- `KAFKA_S3_SECRET_KEY` - Your Tigris Secret Key (starts with `tsec_`)
- `s3.data.buckets` - Your data bucket name in the S3 URL (stores Kafka data)
- `s3.ops.buckets` - Your ops bucket name in the S3 URL (stores operational
  metadata)
- `s3.wal.path` - Write-Ahead Log path (typically the same as the data bucket)
- `endpoint=https://t3.storage.dev` - Tigris' S3-compatible endpoint
- `region=auto` - Tigris automatically routes to the nearest region

For detailed information on these Tigris and S3 configuration parameters, refer
to the
[AutoMQ Broker and Controller Configuration guide](https://www.automq.com/docs/automq/configuration/broker-and-controller-configuration#s3-data-buckets).
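
If you would rather not hard-code credentials in `docker-compose.yaml`, Docker
Compose's standard variable substitution can pull them from a `.env` file in
the same directory (a sketch; the `TIGRIS_ACCESS_KEY` and `TIGRIS_SECRET_KEY`
names are placeholders of our choosing, not AutoMQ settings):

```bash
# .env (keep this file out of version control)
TIGRIS_ACCESS_KEY=tid_YOUR_ACCESS_KEY_HERE
TIGRIS_SECRET_KEY=tsec_YOUR_SECRET_KEY_HERE
```

Then reference the variables in the `environment` section as
`- KAFKA_S3_ACCESS_KEY=${TIGRIS_ACCESS_KEY}` and
`- KAFKA_S3_SECRET_KEY=${TIGRIS_SECRET_KEY}`; Compose substitutes the values
when the container starts.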

### 4. Start AutoMQ

Start the AutoMQ cluster with Docker Compose:

```bash
docker-compose up -d
```

Check the logs to verify AutoMQ is running:

```bash
docker-compose logs -f
```

You should see messages indicating:

- `Readiness check pass! (ObjectStorageReadinessCheck)` - Connected to Tigris
- `The broker has been unfenced` - Broker is ready
- `Kafka Server started` - AutoMQ is running
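
Rather than watching the full log stream, you can filter for those three
markers directly (the exact message text may vary between AutoMQ versions):

```bash
# Surface only the readiness markers listed above
docker-compose logs server1 | grep -E "Readiness check pass|unfenced|Kafka Server started"
```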

### 5. Create a Topic

Create a Kafka topic using the AutoMQ CLI:

```bash
docker run --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-topics.sh \
  --create \
  --topic my-test-topic \
  --bootstrap-server server1:9092 \
  --partitions 3 \
  --replication-factor 1"
```

List all topics to verify:

```bash
docker run --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-topics.sh \
  --list \
  --bootstrap-server server1:9092"
```

Describe the topic:

```bash
docker run --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-topics.sh \
  --describe \
  --topic my-test-topic \
  --bootstrap-server server1:9092"
```

### 6. Produce and Consume Messages

**Produce test messages:**

```bash
docker run --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-producer-perf-test.sh \
  --topic my-test-topic \
  --num-records=10000 \
  --throughput 1000 \
  --record-size 1024 \
  --producer-props bootstrap.servers=server1:9092"
```
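
The perf-test tool generates synthetic load; to type messages by hand instead,
the standard Kafka console producer shipped in the same image should work as
well (assuming the usual Kafka bin layout shown above):

```bash
# Interactive producer; type one message per line, Ctrl-C to exit
docker run -it --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-console-producer.sh \
  --topic my-test-topic \
  --bootstrap-server server1:9092"
```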

**Consume messages:**

```bash
docker run --network automq_net automqinc/automq:1.6.0-rc0 \
  /bin/bash -c "/opt/automq/kafka/bin/kafka-console-consumer.sh \
  --topic my-test-topic \
  --bootstrap-server server1:9092 \
  --from-beginning \
  --max-messages 10"
```
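
Since AutoMQ persists its log data to object storage, you can also confirm the
messages landed in Tigris by listing the data bucket (the object layout is
internal to AutoMQ, so expect opaque key names rather than topic names):

```bash
# Inspect what AutoMQ has written to the data bucket
aws s3 ls s3://your-automq-data --recursive --endpoint-url https://t3.storage.dev
```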

## Congratulations! 🎉

You've successfully deployed AutoMQ with Tigris as the storage backend! In this
guide, you:

- Created Tigris buckets for data and operational storage
- Configured and launched a single-node AutoMQ cluster using Docker Compose
- Connected AutoMQ to Tigris using S3-compatible endpoints
- Created a Kafka topic with multiple partitions
- Produced and consumed messages through AutoMQ

Your AutoMQ cluster now runs entirely stateless, with all data durably stored
in Tigris object storage. You can scale brokers up or down without worrying
about data migration, and benefit from Tigris' global distribution and zero
egress fees.
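
When you're finished, tear the stack down. Because the data lives in Tigris
rather than on the containers, removing the containers leaves the buckets
intact; deleting them is a separate, irreversible step:

```bash
# Stop and remove the AutoMQ container and network
docker-compose down

# Optionally delete the buckets and everything in them (irreversible)
aws s3 rb s3://your-automq-data --force --endpoint-url https://t3.storage.dev
aws s3 rb s3://your-automq-ops --force --endpoint-url https://t3.storage.dev
```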

## Learn More

### AutoMQ Resources

- [AutoMQ Documentation](https://www.automq.com/docs/automq/)
- [AutoMQ Deployment Overview](https://www.automq.com/docs/automq/deployment/overview)
- [AutoMQ Broker and Controller Configuration](https://www.automq.com/docs/automq/configuration/broker-and-controller-configuration)
- [AutoMQ Docker Compose Setup (GitHub)](https://github.com/AutoMQ/automq-labs/blob/main/opensource-setup/docker-compose/docker-compose.yaml)
- [AutoMQ GitHub Repository](https://github.com/AutoMQ/automq)
