Commit 59b0324

Author: nathan.xu
some cosmetic improvements of the reference
1 parent 6876694 commit 59b0324

6 files changed: +33 −26 lines

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/configuring-topics.adoc
Lines changed: 2 additions & 2 deletions

@@ -43,7 +43,7 @@ include::{kotlin-examples}/topics/Config.kt[tag=brokerProps]
 ----
 ======
 
-Starting with version 2.7, you can declare multiple `NewTopic` s in a single `KafkaAdmin.NewTopics` bean definition:
+Starting with version 2.7, you can declare multiple `NewTopic`+++s+++ in a single `KafkaAdmin.NewTopics` bean definition:
 
 [tabs]
 ======
@@ -63,7 +63,7 @@ include::{kotlin-examples}/topics/Config.kt[tag=newTopicsBean]
 ======
 
 
-IMPORTANT: When using Spring Boot, a `KafkaAdmin` bean is automatically registered so you only need the `NewTopic` (and/or `NewTopics`) `@Bean` s.
+IMPORTANT: When using Spring Boot, a `KafkaAdmin` bean is automatically registered so you only need the `NewTopic` (and/or `NewTopics`) `@Bean`+++s+++.
 
 By default, if the broker is not available, a message is logged, but the context continues to load.
 You can programmatically invoke the admin's `initialize()` method to try again later.
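The hunk above concerns declaring several `NewTopic`s through a single `KafkaAdmin.NewTopics` bean. As a rough, self-contained sketch of that aggregation idea — the record types below are invented stand-ins for illustration, not the real Spring Kafka or Kafka admin classes:

```java
import java.util.List;

// Toy sketch of the idea behind KafkaAdmin.NewTopics: one bean value that
// aggregates several topic definitions. These are invented stand-in types,
// NOT the real Spring Kafka / Kafka admin API.
public class NewTopicsSketch {

    // Stand-in for org.apache.kafka.clients.admin.NewTopic.
    record TopicSpec(String name, int partitions, short replicas) { }

    // Stand-in for KafkaAdmin.NewTopics: accepts several specs at once,
    // mirroring how one @Bean can now carry multiple topic definitions.
    record TopicGroup(List<TopicSpec> topics) {
        TopicGroup(TopicSpec... specs) {
            this(List.of(specs));
        }
    }

    public static void main(String[] args) {
        // One "bean" value holding three topic definitions.
        TopicGroup group = new TopicGroup(
                new TopicSpec("thing1", 3, (short) 1),
                new TopicSpec("thing2", 1, (short) 1),
                new TopicSpec("thing3", 2, (short) 2));
        System.out.println(group.topics().size()); // prints 3
    }
}
```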

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/connecting.adoc
Lines changed: 3 additions & 3 deletions

@@ -1,12 +1,12 @@
 [[connecting]]
 = Connecting to Kafka
 
-* `KafkaAdmin` - see <<configuring-topics>>
+* `KafkaAdmin` - see xref:kafka/configuring-topics.adoc[Configuring Topics]
 * `ProducerFactory` - see xref:kafka/sending-messages.adoc[Sending Messages]
 * `ConsumerFactory` - see xref:kafka/receiving-messages.adoc[Receiving Messages]
 
 Starting with version 2.5, each of these extends `KafkaResourceFactory`.
-This allows changing the bootstrap servers at runtime by adding a `Supplier<String>` to their configuration: `setBootstrapServersSupplier(() -> ...)`.
+This allows changing the bootstrap servers at runtime by adding a `Supplier<String>` to their configuration: `setBootstrapServersSupplier(() +++->+++ ...)`.
 This will be called for all new connections to get the list of servers.
 Consumers and Producers are generally long-lived.
 To close existing Producers, call `reset()` on the `DefaultKafkaProducerFactory`.
@@ -15,7 +15,7 @@ To close existing Consumers, call `stop()` (and then `start()`) on the `KafkaLis
 For convenience, the framework also provides an `ABSwitchCluster` which supports two sets of bootstrap servers; one of which is active at any time.
 Configure the `ABSwitchCluster` and add it to the producer and consumer factories, and the `KafkaAdmin`, by calling `setBootstrapServersSupplier()`.
 When you want to switch, call `primary()` or `secondary()` and call `reset()` on the producer factory to establish new connection(s); for consumers, `stop()` and `start()` all listener containers.
-When using `@KafkaListener` s, `stop()` and `start()` the `KafkaListenerEndpointRegistry` bean.
+When using `@KafkaListener`+++s+++, `stop()` and `start()` the `KafkaListenerEndpointRegistry` bean.
 
 See the Javadocs for more information.
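The connecting.adoc text above describes `ABSwitchCluster`: a bootstrap-server supplier backed by two server sets, toggled with `primary()` and `secondary()`. A minimal toy version of that pattern — not the real Spring Kafka class, just a sketch of the switching mechanism — might look like:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Toy illustration of the A/B switching pattern described for
// ABSwitchCluster: a Supplier<String> whose value flips between two
// bootstrap-server lists. NOT the real Spring Kafka class.
public class SwitchClusterSketch implements Supplier<String> {

    private final String primaryServers;
    private final String secondaryServers;
    private final AtomicBoolean usePrimary = new AtomicBoolean(true);

    public SwitchClusterSketch(String primaryServers, String secondaryServers) {
        this.primaryServers = primaryServers;
        this.secondaryServers = secondaryServers;
    }

    public void primary() {
        this.usePrimary.set(true);
    }

    public void secondary() {
        this.usePrimary.set(false);
    }

    @Override
    public String get() {
        // In the real framework the supplier is consulted for each new
        // connection (setBootstrapServersSupplier), so flipping the flag
        // redirects connections created after the switch.
        return this.usePrimary.get() ? this.primaryServers : this.secondaryServers;
    }

    public static void main(String[] args) {
        SwitchClusterSketch cluster = new SwitchClusterSketch("a:9092", "b:9092");
        System.out.println(cluster.get()); // prints a:9092
        cluster.secondary();
        System.out.println(cluster.get()); // prints b:9092
    }
}
```

As the surrounding text notes, switching alone is not enough for live clients: existing producers must be `reset()` and listener containers stopped and restarted so that new connections actually consult the supplier.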

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/sending-messages.adoc
Lines changed: 9 additions & 2 deletions

@@ -42,14 +42,21 @@ List<PartitionInfo> partitionsFor(String topic);
 
 <T> T execute(ProducerCallback<K, V, T> callback);
 
-// Flush the producer.
+<T> T executeInTransaction(OperationsCallback<K, V, T> callback);
 
+// Flush the producer.
 void flush();
 
 interface ProducerCallback<K, V, T> {
 
     T doInKafka(Producer<K, V> producer);
 
+}
+
+interface OperationsCallback<K, V, T> {
+
+    T doInOperations(KafkaOperations<K, V> operations);
+
 }
 ----
 
@@ -62,7 +69,7 @@ The `sendDefault` API requires that a default topic has been provided to the tem
 
 The API takes in a `timestamp` as a parameter and stores this timestamp in the record.
 How the user-provided timestamp is stored depends on the timestamp type configured on the Kafka topic.
-If the topic is configured to use `CREATE_TIME`, the user specified timestamp is recorded (or generated if not specified).
+If the topic is configured to use `CREATE_TIME`, the user-specified timestamp is recorded (or generated if not specified).
 If the topic is configured to use `LOG_APPEND_TIME`, the user-specified timestamp is ignored and the broker adds in the local broker time.
 
 The `metrics` and `partitionsFor` methods delegate to the same methods on the underlying https://kafka.apache.org/20/javadoc/org/apache/kafka/clients/producer/Producer.html[`Producer`].
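The hunk above adds `executeInTransaction(OperationsCallback)` to the documented `KafkaOperations` surface. To make the callback shape concrete without a broker, here is a toy model: the nested `OperationsCallback` mirrors the interface shown in the diff, while `FakeTemplate` is an invented stand-in (the real entry point is `KafkaTemplate`), so the example is runnable on its own:

```java
// Toy sketch of the executeInTransaction callback shape shown in the diff.
// FakeTemplate is invented for illustration -- it is NOT KafkaTemplate --
// but the OperationsCallback signature mirrors the documented interface.
public class CallbackShapeSketch {

    // Mirrors KafkaOperations.OperationsCallback<K, V, T>.
    interface OperationsCallback<K, V, T> {
        T doInOperations(FakeTemplate<K, V> operations);
    }

    // Invented stand-in for a template: counts sends, tracks a tx flag.
    static class FakeTemplate<K, V> {
        int sends;
        boolean inTransaction;

        void send(String topic, K key, V value) {
            this.sends++;
        }

        // Shaped like executeInTransaction: open a "transaction", run the
        // callback, and close it again when the callback returns.
        <T> T executeInTransaction(OperationsCallback<K, V, T> callback) {
            this.inTransaction = true;
            try {
                return callback.doInOperations(this);
            }
            finally {
                this.inTransaction = false;
            }
        }
    }

    public static void main(String[] args) {
        FakeTemplate<Integer, String> template = new FakeTemplate<>();
        // All sends inside the callback happen within one "transaction".
        int sent = template.executeInTransaction(ops -> {
            ops.send("topic1", 1, "a");
            ops.send("topic1", 2, "b");
            return ops.sends;
        });
        System.out.println(sent); // prints 2
    }
}
```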

spring-kafka-docs/src/main/antora/modules/ROOT/pages/quick-tour.adoc
Lines changed: 4 additions & 4 deletions

@@ -47,7 +47,7 @@ Gradle::
 +
 [source,groovy,subs="+attributes",role="secondary"]
 ----
-compile 'org.springframework.kafka:spring-kafka'
+implementation 'org.springframework.kafka:spring-kafka'
 ----
 ======
 
@@ -58,7 +58,7 @@ However, the quickest way to get started is to use https://start.spring.io[start
 
 This quick tour works with the following versions:
 
-* Apache Kafka Clients 3.5.x
+* Apache Kafka Clients 3.6.x
 * Spring Framework 6.1.x
 * Minimum Java version: 17
 
@@ -124,11 +124,11 @@ include::{kotlin-examples}/started/producer/Application.kt[tag=startedProducer]
 === With Java Configuration (No Spring Boot)
 
 IMPORTANT: Spring for Apache Kafka is designed to be used in a Spring Application Context.
-For example, if you create the listener container yourself outside of a Spring context, not all functions will work unless you satisfy all of the `...Aware` interfaces that the container implements.
+For example, if you create the listener container yourself outside of a Spring context, not all functions will work unless you satisfy all of the `+++...+++Aware` interfaces that the container implements.
 
 Here is an example of an application that does not use Spring Boot; it has both a `Consumer` and `Producer`.
 
-.Without Boot
+.Without Spring Boot
 [tabs]
 ======
 Java::

spring-kafka-docs/src/main/java/org/springframework/kafka/jdocs/started/noboot/Sender.java
Lines changed: 11 additions & 11 deletions

(Whitespace-only change: the class body was re-indented. Leading whitespace was not preserved in this capture, so the removed and added lines below differ only in indentation.)

@@ -29,20 +29,20 @@
 // tag::startedNoBootSender[]
 public class Sender {
 
-public static void main(String[] args) {
-AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(Config.class);
-context.getBean(Sender.class).send("test", 42);
-}
+public static void main(String[] args) {
+AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(Config.class);
+context.getBean(Sender.class).send("test", 42);
+}
 
-private final KafkaTemplate<Integer, String> template;
+private final KafkaTemplate<Integer, String> template;
 
-public Sender(KafkaTemplate<Integer, String> template) {
-this.template = template;
-}
+public Sender(KafkaTemplate<Integer, String> template) {
+this.template = template;
+}
 
-public void send(String toSend, int key) {
-this.template.send("topic1", key, toSend);
-}
+public void send(String toSend, int key) {
+this.template.send("topic1", key, toSend);
+}
 
 }
 // end::startedNoBootSender[]

spring-kafka-docs/src/main/java/org/springframework/kafka/jdocs/topics/Config.java
Lines changed: 4 additions & 4 deletions

@@ -16,8 +16,8 @@
 
 package org.springframework.kafka.jdocs.topics;
 
-import java.util.Arrays;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
 
 import org.apache.kafka.clients.admin.AdminClientConfig;
@@ -67,9 +67,9 @@ public NewTopic topic2() {
 @Bean
 public NewTopic topic3() {
     return TopicBuilder.name("thing3")
-            .assignReplicas(0, Arrays.asList(0, 1))
-            .assignReplicas(1, Arrays.asList(1, 2))
-            .assignReplicas(2, Arrays.asList(2, 0))
+            .assignReplicas(0, List.of(0, 1))
+            .assignReplicas(1, List.of(1, 2))
+            .assignReplicas(2, List.of(2, 0))
             .config(TopicConfig.COMPRESSION_TYPE_CONFIG, "zstd")
             .build();
 }
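The Config.java hunk swaps `Arrays.asList` for `List.of`. Both return fixed-size lists, but they behave differently: `List.of` is fully immutable and rejects `null` elements, while `Arrays.asList` is a modifiable view over an array whose `set()` works. A quick stdlib-only demonstration:

```java
import java.util.Arrays;
import java.util.List;

// Demonstrates the practical difference between Arrays.asList and List.of,
// the two factories exchanged in the commit above.
public class ListFactoryDemo {
    public static void main(String[] args) {
        List<Integer> fromArrays = Arrays.asList(0, 1);
        List<Integer> fromOf = List.of(0, 1);

        // Arrays.asList is a fixed-size view over an array: set() works
        // (it writes through to the backing array), add() would throw.
        fromArrays.set(0, 9);
        System.out.println(fromArrays); // prints [9, 1]

        // List.of is fully immutable: set() throws.
        boolean threw = false;
        try {
            fromOf.set(0, 9);
        }
        catch (UnsupportedOperationException e) {
            threw = true;
        }
        System.out.println(threw); // prints true

        // List.of also rejects null elements; Arrays.asList allows them.
        System.out.println(Arrays.asList(0, null).size()); // prints 2
    }
}
```

For the replica assignments above this distinction is mostly cosmetic — the lists are never mutated — which fits the commit message.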
