Hi there,

We ran into a crash with the kafka-python library (1.4.7, but the same code appears to be present in the latest stable release) on a 5-broker cluster while the cluster was running into some broker-wide fetch issues; the stack trace is below. Attempts to restart the application failed, getting stuck with the same trace.
Stack trace:
    return self.__kafka_consumer.poll(timeout_ms=timeout_ms, max_records=max_records)
  File "(elided)/anaconda3/lib/python3.6/site-packages/kafka/consumer/group.py", line 645, in poll
    records = self._poll_once(remaining, max_records, update_offsets=update_offsets)
  File "(elided)/anaconda3/lib/python3.6/site-packages/kafka/consumer/group.py", line 674, in _poll_once
    records, partial = self._fetcher.fetched_records(max_records, update_offsets=update_offsets)
  File "(elided)/anaconda3/lib/python3.6/site-packages/kafka/consumer/fetcher.py", line 344, in fetched_records
    self._next_partition_records = self._parse_fetched_data(completion)
  File "(elided)/anaconda3/lib/python3.6/site-packages/kafka/consumer/fetcher.py", line 818, in _parse_fetched_data
    last_offset = unpacked[-1].offset
IndexError: list index out of range
Here it appears that last_offset is only used for the sensors. I cross-referenced the Java implementation in kafka-clients 2.5.0: in its Fetcher.fetchRecords it does not record the sensor value when it cannot be calculated (although its state and the kafka-python state do not match entirely). Perhaps we can do the same here by guarding against an empty unpacked list?
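For illustration, a minimal sketch of the kind of guard I have in mind inside Fetcher._parse_fetched_data. The surrounding names (unpacked, parsed_records, highwater, self._sensors.records_fetch_lag) reflect my reading of the 1.4.7 code and may need adjusting against the current source:

    # Sketch only: skip the fetch-lag sensor update when the unpacked list is
    # empty, so we no longer hit IndexError on unpacked[-1]. This mirrors the
    # Java client, which does not record the metric when it cannot compute it.
    unpacked = list(self._unpack_message_set(tp, records))
    parsed_records = self.PartitionRecords(fetch_offset, tp, unpacked)
    if unpacked:
        last_offset = unpacked[-1].offset
        self._sensors.records_fetch_lag.record(highwater - last_offset)

The behaviour for non-empty fetches would be unchanged; only the empty case stops raising.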
We were experiencing some turbulence on the broker cluster at the time, but it would be nice if the Python client survived it the way the Java one does. If this is agreeable, I can prepare a patch for this issue.