OverflowError: timeout is too large #2512

Closed

DineshDevaraj opened this issue Mar 4, 2025 · 2 comments

@DineshDevaraj

If I set the log level to ERROR or anything higher, I see an OverflowError.

Code

import time
import logging
from kafka import KafkaProducer

logging.basicConfig(level=logging.ERROR)
producer = KafkaProducer(bootstrap_servers=["kafka-in-docker:9092"], api_version=(7,3,1))
producer.send("foobar", b"hello world")
time.sleep(3)
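
To see the timeout values kafka-python computes (like the debug output in the comment below) without turning on DEBUG logging for the whole application, the kafka loggers can be lowered selectively; a small sketch using only the standard logging module:

import logging

# Keep the root logger at ERROR, as in the repro above, but emit DEBUG output
# from kafka-python's own loggers so the computed poll timeouts are visible.
logging.basicConfig(level=logging.ERROR)
logging.getLogger("kafka").setLevel(logging.DEBUG)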

Error

ERROR:kafka.producer.sender:Uncaught error in kafka producer I/O thread
Traceback (most recent call last):
  File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/producer/sender.py", line 60, in run
    self.run_once()
  File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/producer/sender.py", line 160, in run_once
    self._client.poll(timeout_ms=poll_timeout_ms)
  File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/client_async.py", line 600, in poll
    self._poll(timeout / 1000)
  File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/client_async.py", line 634, in _poll
    ready = self._selector.select(timeout)
  File "/opt/pyenv/versions/3.10.12/lib/python3.10/selectors.py", line 469, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
OverflowError: timeout is too large
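
For context, the error is raised by the selector layer, not by kafka-python itself: on Linux the default epoll selector converts its timeout (given in seconds) to a C int number of milliseconds, so any value above roughly 2147483 seconds overflows. A minimal sketch of the same failure outside kafka-python, assuming Linux and the epoll selector:

import select

# epoll.poll() takes its timeout in seconds but passes it to epoll_wait as a
# C int of milliseconds, so anything above INT_MAX ms (~24.8 days) overflows.
ep = select.epoll()
ep.poll(10_000_000)  # ~115 days -> OverflowError: timeout is too large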

Platform

I am running this code
with kafka-python==2.0.5
on Python 3.10.12
on Ubuntu 22.04.5 LTS (Jammy Jellyfish)
in Docker Desktop 4.38.0 (181591)
on MacOS Sequoia 15.3.1 (24D70)

@Lasall

Lasall commented Mar 4, 2025

I'm experiencing the same problem on MSK brokers since kafka-python 2.0.3+ (sometimes the send fails). I've attached the following debug output (broker addresses adjusted to match Dinesh's example):

DEBUG kafka.producer.kafka:696 Requesting metadata update for topic foobar
DEBUG kafka.metrics.metrics:156 Added sensor with name bytes-sent
DEBUG kafka.metrics.metrics:156 Added sensor with name bytes-received
DEBUG kafka.metrics.metrics:156 Added sensor with name request-latency
DEBUG kafka.metrics.metrics:156 Added sensor with name node-bootstrap-0.bytes-sent
DEBUG kafka.metrics.metrics:156 Added sensor with name node-bootstrap-0.bytes-received
DEBUG kafka.metrics.metrics:156 Added sensor with name node-bootstrap-0.latency
DEBUG kafka.conn:368 <BrokerConnection client_id=kafka-python-producer-1, node_id=bootstrap-0 host=kafka-in-docker:9092 <disconnected> [unspecified None]>: creating new socket
DEBUG kafka.conn:378 <BrokerConnection client_id=kafka-python-producer-1, node_id=bootstrap-0 host=kafka-in-docker:9092 <disconnected> [IPv4 ('127.0.0.1', 9092)]>: setting socket option (6, 1, 1)
INFO kafka.conn:384 <BrokerConnection client_id=kafka-python-producer-1, node_id=bootstrap-0 host=kafka-in-docker:9092 <connecting> [IPv4 ('127.0.0.1', 9092)]>: connecting to kafka-in-docker:9092 [('127.0.0.1', 9092) IPv4]
DEBUG kafka.client:592 Timeouts: user 9999999990.000000, metadata inf, idle connection inf, request inf
ERROR kafka.producer.sender:62 Uncaught error in kafka producer I/O thread
Traceback (most recent call last):
File "/app/.venv/lib/python3.11/site-packages/kafka/producer/sender.py", line 60, in run
self.run_once()
File "/app/.venv/lib/python3.11/site-packages/kafka/producer/sender.py", line 160, in run_once
self._client.poll(timeout_ms=poll_timeout_ms)
File "/app/.venv/lib/python3.11/site-packages/kafka/client_async.py", line 600, in poll
self._poll(timeout / 1000)
File "/app/.venv/lib/python3.11/site-packages/kafka/client_async.py", line 634, in _poll
ready = self._selector.select(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/selectors.py", line 468, in select
fd_event_list = self._selector.poll(timeout, max_ev)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OverflowError: timeout is too large

Potentially, with #2480 the reconnect_backoff_ms setting is no longer applied, so the 9999999990.000000 user timeout from the debug output above ends up being used as the minimum timeout.
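
Whatever the root cause turns out to be, clamping the value handed to the selector would at least avoid the crash. A purely illustrative sketch (the cap constant and helper name are made up here, not kafka-python's actual fix):

import selectors

# Largest timeout epoll accepts: INT_MAX milliseconds, expressed in seconds.
MAX_SELECT_TIMEOUT_S = 2147483647 / 1000

def bounded_select(selector, timeout_s):
    # Cap (never raise) the blocking time before calling into the selector so
    # an effectively-infinite poll timeout cannot overflow epoll_wait.
    if timeout_s is not None:
        timeout_s = min(timeout_s, MAX_SELECT_TIMEOUT_S)
    return selector.select(timeout_s)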

@dpkp
Owner

dpkp commented Mar 4, 2025

This should be mostly fixed in master. I'll try to release 2.0.6 with the fix today.
