Commit 19368cf

dpkp and 88manpreet authored and committed

Fixup :meth: sphinx documentation for use in KafkaConsumer.rst etc

1 parent 4a32205 · commit 19368cf

File tree: 2 files changed, +49 −35 lines

kafka/consumer/group.py

Lines changed: 35 additions & 23 deletions
@@ -35,7 +35,8 @@ class KafkaConsumer(six.Iterator):
 
     Arguments:
         *topics (str): optional list of topics to subscribe to. If not set,
-            call :meth:`.subscribe` or :meth:`.assign` before consuming records.
+            call :meth:`~kafka.KafkaConsumer.subscribe` or
+            :meth:`~kafka.KafkaConsumer.assign` before consuming records.
 
     Keyword Arguments:
         bootstrap_servers: 'host[:port]' string (or list of 'host[:port]'
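The docstring above distinguishes three ways a consumer gets its partitions: `*topics` in the constructor, `subscribe()`, or manual `assign()`. A minimal sketch of the three styles, not executed here because it needs a live broker; the address `localhost:9092` and the topic name are assumptions:

```python
def consumers_three_ways():
    """Sketch only: three ways to start consuming (requires kafka-python
    and a reachable broker; topic/broker names below are assumptions)."""
    from kafka import KafkaConsumer, TopicPartition

    # 1. Topics given up front in the constructor.
    c1 = KafkaConsumer('my-topic', bootstrap_servers='localhost:9092')

    # 2. No topics yet; call subscribe() before consuming (group-managed).
    c2 = KafkaConsumer(bootstrap_servers='localhost:9092')
    c2.subscribe(['my-topic'])

    # 3. Manual partition assignment with assign().
    c3 = KafkaConsumer(bootstrap_servers='localhost:9092')
    c3.assign([TopicPartition('my-topic', 0)])
    return c1, c2, c3
```

Styles 2 and 3 are mutually exclusive, which is exactly what the `assign()`/`subscribe()` hunks below document.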
@@ -127,7 +128,7 @@ class KafkaConsumer(six.Iterator):
         session_timeout_ms (int): The timeout used to detect failures when
             using Kafka's group management facilities. Default: 30000
         max_poll_records (int): The maximum number of records returned in a
-            single call to :meth:`.poll`. Default: 500
+            single call to :meth:`~kafka.KafkaConsumer.poll`. Default: 500
         receive_buffer_bytes (int): The size of the TCP receive buffer
             (SO_RCVBUF) to use when reading data. Default: None (relies on
             system defaults). The java client defaults to 32768.
@@ -172,6 +173,7 @@ class KafkaConsumer(six.Iterator):
         api_version (tuple): Specify which Kafka API version to use. If set to
             None, the client will attempt to infer the broker version by probing
             various APIs. Different versions enable different functionality.
+
             Examples:
                 (0, 9) enables full group coordination features with automatic
                     partition assignment and rebalancing,
@@ -181,6 +183,7 @@ class KafkaConsumer(six.Iterator):
                     partition assignment only,
                 (0, 8, 0) enables basic functionality but requires manual
                     partition assignment and offset management.
+
             For the full list of supported versions, see
             KafkaClient.API_VERSIONS. Default: None
         api_version_auto_timeout_ms (int): number of milliseconds to throw a
@@ -336,11 +339,13 @@ def assign(self, partitions):
             partitions (list of TopicPartition): Assignment for this instance.
 
         Raises:
-            IllegalStateError: If consumer has already called :meth:`.subscribe`.
+            IllegalStateError: If consumer has already called
+                :meth:`~kafka.KafkaConsumer.subscribe`.
 
         Warning:
             It is not possible to use both manual partition assignment with
-            :meth:`.assign` and group assignment with :meth:`.subscribe`.
+            :meth:`~kafka.KafkaConsumer.assign` and group assignment with
+            :meth:`~kafka.KafkaConsumer.subscribe`.
 
         Note:
             This interface does not support incremental assignment and will
@@ -358,12 +363,13 @@ def assign(self, partitions):
     def assignment(self):
         """Get the TopicPartitions currently assigned to this consumer.
 
-        If partitions were directly assigned using :meth:`.assign`, then this
-        will simply return the same partitions that were previously assigned.
-        If topics were subscribed using :meth:`.subscribe`, then this will give
-        the set of topic partitions currently assigned to the consumer (which
-        may be None if the assignment hasn't happened yet, or if the partitions
-        are in the process of being reassigned).
+        If partitions were directly assigned using
+        :meth:`~kafka.KafkaConsumer.assign`, then this will simply return the
+        same partitions that were previously assigned. If topics were
+        subscribed using :meth:`~kafka.KafkaConsumer.subscribe`, then this will
+        give the set of topic partitions currently assigned to the consumer
+        (which may be None if the assignment hasn't happened yet, or if the
+        partitions are in the process of being reassigned).
 
         Returns:
             set: {TopicPartition, ...}
@@ -527,8 +533,8 @@ def poll(self, timeout_ms=0, max_records=None):
             with any records that are available currently in the buffer,
             else returns empty. Must not be negative. Default: 0
         max_records (int, optional): The maximum number of records returned
-            in a single call to :meth:`.poll`. Default: Inherit value from
-            max_poll_records.
+            in a single call to :meth:`~kafka.KafkaConsumer.poll`.
+            Default: Inherit value from max_poll_records.
 
         Returns:
             dict: Topic to list of records since the last fetch for the
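As the Returns section above notes, `poll()` yields a dict keyed by partition, not a flat list, so callers usually flatten it. A small hypothetical helper (plain tuples stand in for `TopicPartition`/`ConsumerRecord` so the demo needs no broker):

```python
def flatten_poll(batches):
    """Flatten the dict returned by KafkaConsumer.poll() into one list.

    poll() returns {TopicPartition: [ConsumerRecord, ...]}; ordering across
    partitions carries no meaning, so we concatenate deterministically.
    """
    records = []
    for tp in sorted(batches, key=str):  # deterministic iteration order
        records.extend(batches[tp])
    return records


# Shape-only demo with plain tuples standing in for TopicPartition/records.
demo = {('t', 0): ['a', 'b'], ('t', 1): ['c']}
print(flatten_poll(demo))  # ['a', 'b', 'c']
```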
@@ -639,10 +645,12 @@ def highwater(self, partition):
     def pause(self, *partitions):
         """Suspend fetching from the requested partitions.
 
-        Future calls to :meth:`.poll` will not return any records from these
-        partitions until they have been resumed using :meth:`.resume`. Note that
-        this method does not affect partition subscription. In particular, it
-        does not cause a group rebalance when automatic assignment is used.
+        Future calls to :meth:`~kafka.KafkaConsumer.poll` will not return any
+        records from these partitions until they have been resumed using
+        :meth:`~kafka.KafkaConsumer.resume`.
+
+        Note: This method does not affect partition subscription. In particular,
+        it does not cause a group rebalance when automatic assignment is used.
 
         Arguments:
             *partitions (TopicPartition): Partitions to pause.
@@ -654,7 +662,8 @@ def pause(self, *partitions):
             self._subscription.pause(partition)
 
     def paused(self):
-        """Get the partitions that were previously paused using :meth:`.pause`.
+        """Get the partitions that were previously paused using
+        :meth:`~kafka.KafkaConsumer.pause`.
 
         Returns:
             set: {partition (TopicPartition), ...}
@@ -677,10 +686,12 @@ def seek(self, partition, offset):
         """Manually specify the fetch offset for a TopicPartition.
 
         Overrides the fetch offsets that the consumer will use on the next
-        :meth:`.poll`. If this API is invoked for the same partition more than
-        once, the latest offset will be used on the next :meth:`.poll`. Note
-        that you may lose data if this API is arbitrarily used in the middle of
-        consumption, to reset the fetch offsets.
+        :meth:`~kafka.KafkaConsumer.poll`. If this API is invoked for the same
+        partition more than once, the latest offset will be used on the next
+        :meth:`~kafka.KafkaConsumer.poll`.
+
+        Note: You may lose data if this API is arbitrarily used in the middle of
+        consumption to reset the fetch offsets.
 
         Arguments:
             partition (TopicPartition): Partition for seek operation
@@ -752,7 +763,7 @@ def subscribe(self, topics=(), pattern=None, listener=None):
         Topic subscriptions are not incremental: this list will replace the
         current assignment (if there is one).
 
-        This method is incompatible with :meth:`.assign`.
+        This method is incompatible with :meth:`~kafka.KafkaConsumer.assign`.
 
         Arguments:
             topics (list): List of topics for subscription.
@@ -781,7 +792,8 @@ def subscribe(self, topics=(), pattern=None, listener=None):
             through this interface are from topics subscribed in this call.
 
         Raises:
-            IllegalStateError: If called after previously calling :meth:`.assign`.
+            IllegalStateError: If called after previously calling
+                :meth:`~kafka.KafkaConsumer.assign`.
             AssertionError: If neither topics or pattern is provided.
             TypeError: If listener is not a ConsumerRebalanceListener.
         """

kafka/producer/kafka.py

Lines changed: 14 additions & 12 deletions
@@ -34,9 +34,9 @@ class KafkaProducer(object):
     thread that is responsible for turning these records into requests and
     transmitting them to the cluster.
 
-    :meth:`.send` is asynchronous. When called it adds the record to a buffer of
-    pending record sends and immediately returns. This allows the producer to
-    batch together individual records for efficiency.
+    :meth:`~kafka.KafkaProducer.send` is asynchronous. When called it adds the
+    record to a buffer of pending record sends and immediately returns. This
+    allows the producer to batch together individual records for efficiency.
 
     The 'acks' config controls the criteria under which requests are considered
     complete. The "all" setting will result in blocking on the full commit of
@@ -166,11 +166,12 @@ class KafkaProducer(object):
             will block up to max_block_ms, raising an exception on timeout.
             In the current implementation, this setting is an approximation.
             Default: 33554432 (32MB)
-        max_block_ms (int): Number of milliseconds to block during :meth:`.send`
-            and :meth:`.partitions_for`. These methods can be blocked either
-            because the buffer is full or metadata unavailable. Blocking in the
-            user-supplied serializers or partitioner will not be counted against
-            this timeout. Default: 60000.
+        max_block_ms (int): Number of milliseconds to block during
+            :meth:`~kafka.KafkaProducer.send` and
+            :meth:`~kafka.KafkaProducer.partitions_for`. These methods can be
+            blocked either because the buffer is full or metadata unavailable.
+            Blocking in the user-supplied serializers or partitioner will not be
+            counted against this timeout. Default: 60000.
         max_request_size (int): The maximum size of a request. This is also
             effectively a cap on the maximum record size. Note that the server
             has its own cap on record size which may be different from this.
@@ -535,10 +536,11 @@ def flush(self, timeout=None):
         Invoking this method makes all buffered records immediately available
         to send (even if linger_ms is greater than 0) and blocks on the
         completion of the requests associated with these records. The
-        post-condition of :meth:`.flush` is that any previously sent record will
-        have completed (e.g. Future.is_done() == True). A request is considered
-        completed when either it is successfully acknowledged according to the
-        'acks' configuration for the producer, or it results in an error.
+        post-condition of :meth:`~kafka.KafkaProducer.flush` is that any
+        previously sent record will have completed
+        (e.g. Future.is_done() == True). A request is considered completed when
+        either it is successfully acknowledged according to the 'acks'
+        configuration for the producer, or it results in an error.
 
         Other threads can continue sending messages while one thread is blocked
         waiting for a flush call to complete; however, no guarantee is made

0 commit comments