OpenAI Tool Return Serialisation #3057

@hebenon

Description

Describe the bug
When a tool executes client-side (e.g., via letta-code), the resulting MessageRole.tool message persisted by the server has populated tool_returns but an empty content list. The OpenAI Responses serializer in Message.to_openai_responses_dicts still assumes exactly one TextContent entry and fails its assertion, so every subsequent tool step crashes before the agent can continue.
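For illustration, here is a stripped-down stand-in of the failing check (hypothetical simplified types, not the real letta.schemas.message classes); it shows why every client-side tool result trips the assertion:

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Minimal stand-ins for the types named in the trace below; the field sets
    # are reduced to what this illustration needs.
    @dataclass
    class TextContent:
        text: str

    @dataclass
    class ToolReturn:
        tool_call_id: str
        status: str
        func_response: Optional[str] = None

    @dataclass
    class ToolMessage:
        content: List[TextContent] = field(default_factory=list)
        tool_returns: List[ToolReturn] = field(default_factory=list)

        def to_openai_responses_dicts(self) -> list:
            # Mirrors the check at letta/schemas/message.py:1310 -- only `content`
            # is inspected, so an empty list fails even when tool_returns is populated.
            assert len(self.content) == 1 and isinstance(self.content[0], TextContent), vars(self)
            return [{"output": self.content[0].text}]  # real payload shape elided

    # Shape of the message persisted after a client-side (letta-code) tool run:
    msg = ToolMessage(
        content=[],  # the tool output never lands here
        tool_returns=[ToolReturn("call_XbfarYdj4p3tP69SlVvNGhBB", "success",
                                 func_response='{"status": "OK", "message": "No files found"}')],
    )
    try:
        msg.to_openai_responses_dicts()
    except AssertionError:
        print("AssertionError: content is empty even though tool_returns is populated")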

Please describe your setup

  • How are you running Letta?
    • Docker
  • Describe your setup
    • What's your OS (Windows/MacOS/Linux)? Linux (Ubuntu 22.04)
    • What is your docker run ... command (if applicable)
      podman run -d -p 8283:8283 -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data -e OPENAI_API_KEY=XXXXXXX

Screenshots
N/A

Additional context

  • What model you are using: openai/gpt-5
  • Example stack trace:
    Letta.agent-7caa9668-01fe-4bba-94dd-2f4e48a4160c - WARNING - Error during agent stream: {'created_by_id': 'user-00000000-0000-4000-8000-000000000000', 'last_updated_by_id': 'user-00000000-0000-4000-8000-000000000000', 'created_at': datetime.datetime(2025, 11, 1, 5, 31, 45, 149793, tzinfo=datetime.timezone.utc), 'updated_at': datetime.datetime(2025, 11, 1, 5, 31, 45, 180144, tzinfo=datetime.timezone.utc), 'id': 'message-6d241137-424d-4330-962f-5e95f99e60ad', 'agent_id': 'agent-7caa9668-01fe-4bba-94dd-2f4e48a4160c', 'model': 'gpt-5', 'role': <MessageRole.tool: 'tool'>, 'content': [], 'name': None, 'tool_calls': None, 'tool_call_id': 'call_XbfarYdj4p3tP69SlVvNGhBB', 'step_id': 'step-7d950a7b-90ca-4dcc-a160-25a9571c6e12', 'run_id': 'run-05df91c6-ced4-433b-8de2-eedefa8da82c', 'otid': None, 'tool_returns': [ToolReturn(tool_call_id='call_XbfarYdj4p3tP69SlVvNGhBB', status='success', stdout=None, stderr=None, func_response='{\n "status": "OK",\n "message": "No files found",\n "time": "2025-11-01 05:31:45 AM UTC+0000"\n}')], 'group_id': None, 'sender_id': None, 'batch_item_id': None, 'is_err': None, 'approval_request_id': None, 'approve': None, 'denial_reason': None, 'approvals': None}
    Traceback (most recent call last):
    File "/app/letta/agents/letta_agent_v3.py", line 208, in stream
    async for chunk in response:
    File "/app/letta/agents/letta_agent_v3.py", line 524, in _step
    raise e
    File "/app/letta/agents/letta_agent_v3.py", line 428, in _step
    raise e
    File "/app/letta/agents/letta_agent_v3.py", line 358, in _step
    request_data = self.llm_client.build_request_data(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/otel/tracing.py", line 273, in sync_wrapper
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/llm_api/openai_client.py", line 378, in build_request_data
    return self.build_request_data_responses(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/otel/tracing.py", line 273, in sync_wrapper
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/llm_api/openai_client.py", line 231, in build_request_data_responses
    openai_messages_list = PydanticMessage.to_openai_responses_dicts_from_list(messages)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/schemas/message.py", line 1332, in to_openai_responses_dicts_from_list
    result.extend(message.to_openai_responses_dicts(max_tool_id_length=max_tool_id_length))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/app/letta/schemas/message.py", line 1310, in to_openai_responses_dicts
    assert len(self.content) == 1 and isinstance(self.content[0], TextContent), vars(self)
    ^^^^^^^^^^^^^^^^^^^^^^
    AssertionError: {'created_by_id': 'user-00000000-0000-4000-8000-000000000000', 'last_updated_by_id': 'user-00000000-0000-4000-8000-000000000000', 'created_at': datetime.datetime(2025, 11, 1, 5, 31, 45, 149793, tzinfo=datetime.timezone.utc), 'updated_at': datetime.datetime(2025, 11, 1, 5, 31, 45, 180144, tzinfo=datetime.timezone.utc), 'id': 'message-6d241137-424d-4330-962f-5e95f99e60ad', 'agent_id': 'agent-7caa9668-01fe-4bba-94dd-2f4e48a4160c', 'model': 'gpt-5', 'role': <MessageRole.tool: 'tool'>, 'content': [], 'name': None, 'tool_calls': None, 'tool_call_id': 'call_XbfarYdj4p3tP69SlVvNGhBB', 'step_id': 'step-7d950a7b-90ca-4dcc-a160-25a9571c6e12', 'run_id': 'run-05df91c6-ced4-433b-8de2-eedefa8da82c', 'otid': None, 'tool_returns': [ToolReturn(tool_call_id='call_XbfarYdj4p3tP69SlVvNGhBB', status='success', stdout=None, stderr=None, func_response='{\n "status": "OK",\n "message": "No files found",\n "time": "2025-11-01 05:31:45 AM UTC+0000"\n}')], 'group_id': None, 'sender_id': None, 'batch_item_id': None, 'is_err': None, 'approval_request_id': None, 'approve': None, 'denial_reason': None, 'approvals': None}
    Letta.letta.services.run_manager - WARNING - Run run-05df91c6-ced4-433b-8de2-eedefa8da82c completed without a completed_at timestamp
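One possible direction (an untested sketch using the stand-in types from the illustration above, not a patch against the actual serializer): when serializing a MessageRole.tool message whose content list is empty, fall back to the persisted ToolReturn payload instead of asserting.

    # Untested sketch; TextContent here is the stand-in type defined above, and
    # resolve_tool_output is a hypothetical helper, not an existing Letta function.
    def resolve_tool_output(message) -> str:
        if len(message.content) == 1 and isinstance(message.content[0], TextContent):
            return message.content[0].text
        if message.tool_returns and message.tool_returns[0].func_response is not None:
            # Client-side execution path: content is [] but the tool result was
            # persisted in tool_returns.func_response (see the dump above).
            return message.tool_returns[0].func_response
        raise ValueError("tool message has neither TextContent nor a ToolReturn payload")

With the message from the dump above, this would return the func_response JSON rather than crashing the step.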

Agent File (optional)
Please attach your .af file, as this helps with reproducing issues.


Local LLM details

If you are running Letta with local LLMs rather than OpenAI, please provide details of your setup:
N/A
