Initial Checks
- I confirm that I'm using the latest version of MCP Python SDK
- I confirm that I searched for my issue in https://github.com/modelcontextprotocol/python-sdk/issues before opening this issue
Description
Solution: #1674
Title: Streamable HTTP transport drops requests immediately after initialize
Summary
When using a server over Streamable HTTP, the first `session.list_tools()` (and sometimes a few follow-up requests) can intermittently return an empty tool list right after `session.initialize()` succeeds. This happens even though the server has already provided tool metadata during initialization.
Steps to Reproduce
- Start `examples/servers/simple-streamablehttp`.
- Run a client that calls `session.initialize()` and immediately follows with `session.list_tools()`.
- Repeat quickly; roughly 1 in 5 iterations returns an empty list.
Expected Behavior
Once initialize completes, the client transport should be ready to send subsequent JSON-RPC requests and receive their responses reliably.
Actual Behavior
There is a race inside `streamable_http.streamablehttp_client`: the `post_writer` task is started with `tg.start_soon`, so the caller can enqueue requests before the writer task has finished subscribing to the in-memory stream. Because the stream buffer size is 0, those requests can be dropped, leading to empty responses or timeouts.
Proposed Fix
Start `post_writer` with `tg.start(...)`, mirroring the SSE transport. This blocks until the writer task signals readiness via `TaskStatus.started`, guaranteeing that the zero-buffer stream is ready before yielding to the caller. Two new stress tests (`test_streamablehttp_no_race_condition_on_consecutive_requests` and `test_streamablehttp_rapid_request_sequence`) cover the regression.
Impact
Streamable HTTP transports become much more reliable in real deployments that issue back-to-back requests (initialize → list_tools, quick polling, etc.), preventing confusing empty tool listings and retry storms.
Example Code
```python
# reproduce_streamablehttp_race.py
import anyio

from mcp.client.session import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "http://127.0.0.1:8000/mcp"
ITERATIONS = 100


async def main() -> None:
    failures = 0
    for i in range(ITERATIONS):
        async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
            async with ClientSession(read_stream, write_stream) as session:
                await session.initialize()
                tools = await session.list_tools()
                if not tools.tools:
                    failures += 1
                    print(f"[{i}] list_tools returned EMPTY! initialize -> list_tools race hit.")
                else:
                    print(f"[{i}] ok: {len(tools.tools)} tools")
    if failures:
        raise SystemExit(f"Race reproduced: {failures}/{ITERATIONS} iterations failed")
    print("No failures observed (try increasing ITERATIONS or lowering server load).")


if __name__ == "__main__":
    anyio.run(main)
```

Python & MCP Python SDK
latest