
ChatOpenAI Responses API drops encrypted_content from reasoning items in streaming converter #10844

@thetizzo


Checked other resources

  • This is a bug, not a usage question
  • I added a very descriptive title to this issue
  • I searched the LangChain.js documentation
  • I used the GitHub search to find a similar question and didn't find it
  • I am sure that this is a bug in LangChain.js rather than my code
  • The bug is not resolved by updating to the latest stable version (@langchain/openai@1.4.5)

Example Code

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { setupServer } from "msw/node";
import { http, HttpResponse } from "msw";

const reasoningItem = {
  type: "reasoning",
  id: "rs_1",
  encrypted_content: "ENCRYPTED_BLOB",
  summary: [],
};
const messageItem = {
  type: "message",
  id: "msg_1",
  role: "assistant",
  status: "completed",
  content: [{ type: "output_text", text: "hi", annotations: [] }],
};

// SSE shape produced by OpenAI Responses API. Note that
// `output_item.added` for the reasoning item carries no `encrypted_content`;
// it only appears in the final `response.completed` event (and would also
// appear on a corresponding `output_item.done` event for reasoning, which
// the converter does not handle).
const sseEvents = [
  { type: "response.created", response: { id: "resp_1", model: "gpt-5.4" } },
  {
    type: "response.output_item.added",
    output_index: 0,
    item: { type: "reasoning", id: "rs_1", summary: [] },
  },
  {
    type: "response.output_item.added",
    output_index: 1,
    item: { type: "message", id: "msg_1" },
  },
  {
    type: "response.output_text.delta",
    item_id: "msg_1",
    output_index: 1,
    content_index: 0,
    delta: "hi",
  },
  {
    type: "response.completed",
    response: {
      id: "resp_1",
      object: "response",
      status: "completed",
      model: "gpt-5.4",
      output: [reasoningItem, messageItem],
      usage: { input_tokens: 1, output_tokens: 1, total_tokens: 2 },
    },
  },
];

const server = setupServer(
  http.post("https://api.openai.com/v1/responses", () => {
    const body = sseEvents
      .map((evt) => `event: ${evt.type}\ndata: ${JSON.stringify(evt)}\n\n`)
      .join("");
    return new HttpResponse(body, {
      headers: { "Content-Type": "text/event-stream" },
    });
  })
);
server.listen();

const llm = new ChatOpenAI({
  model: "gpt-5.4",
  useResponsesApi: true,
  streaming: true,
  zdrEnabled: true,
  apiKey: "sk-test",
});
const result = await llm.invoke([new HumanMessage("hi")], {
  // @ts-expect-error - include is plumbed through bindTools options
  include: ["reasoning.encrypted_content"],
});

console.log("reasoning:", result.additional_kwargs.reasoning);
// → { id: "rs_1", type: "reasoning" }
// Expected: { id: "rs_1", type: "reasoning", encrypted_content: "ENCRYPTED_BLOB", summary: [] }

Description

When using ChatOpenAI with useResponsesApi: true, zdrEnabled: true, streaming, and include: ["reasoning.encrypted_content"], the encrypted_content field on reasoning items is silently dropped from additional_kwargs.reasoning.

In libs/providers/langchain-openai/src/converters/responses.ts, convertResponsesDeltaToChatGenerationChunk captures reasoning items at response.output_item.added. That event fires before encrypted_content is populated by OpenAI. There is no response.output_item.done handler for event.item.type === "reasoning", so encrypted_content is never read.
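To make the event-ordering problem concrete, here is a hedged distillation of what the converter effectively does today (names and shapes simplified; the real logic lives in `convertResponsesDeltaToChatGenerationChunk`): the reasoning stub captured at `output_item.added` can never contain `encrypted_content`, because OpenAI has not attached it yet at that point in the stream.

```typescript
// Hypothetical distillation of the current capture-at-added behavior.
type ReasoningItem = {
  type: "reasoning";
  id: string;
  summary: unknown[];
  encrypted_content?: string;
};

function onOutputItemAdded(
  kwargs: Record<string, unknown>,
  item: ReasoningItem
): void {
  // Runs on `response.output_item.added`, i.e. before OpenAI populates
  // `encrypted_content` -- so the captured copy cannot contain it.
  kwargs.reasoning = { id: item.id, type: item.type, summary: item.summary };
}

const kwargs: Record<string, unknown> = {};
onOutputItemAdded(kwargs, { type: "reasoning", id: "rs_1", summary: [] });
console.log((kwargs.reasoning as ReasoningItem).encrypted_content); // undefined
```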

The full reasoning item (with encrypted_content) is copied into chunk.response_metadata.output at response.completed, but additional_kwargs.reasoning retains only { id, type, ...summary? }.

On the next round, convertMessagesToResponsesInput (around line 1458) reads reasoning?.encrypted_content. Because it's missing, the reasoning item is not pushed onto input. Combined with store: false (which zdrEnabled: true sets), OpenAI cannot resolve it by id either — so the encrypted reasoning chain is fully lost between tool-call rounds.
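A minimal sketch of that next-round filter (this is an assumption about its shape, not the actual `convertMessagesToResponsesInput` code): when `encrypted_content` is missing, the reasoning item is simply never pushed onto the request input, and with `store: false` there is no server-side copy to fall back on.

```typescript
// Hedged sketch of the round-trip guard around line 1458.
type Reasoning = { id: string; type: "reasoning"; encrypted_content?: string };

function buildInput(reasoning: Reasoning | undefined): Reasoning[] {
  const input: Reasoning[] = [];
  if (reasoning?.encrypted_content) {
    // Only pushed when the encrypted blob survived the previous round.
    input.push(reasoning);
  }
  return input;
}

// Streaming path: encrypted_content was dropped, so the chain breaks.
console.log(buildInput({ id: "rs_1", type: "reasoning" }).length); // 0

// Non-streaming path keeps the blob, so the item round-trips.
console.log(
  buildInput({ id: "rs_1", type: "reasoning", encrypted_content: "BLOB" })
    .length
); // 1
```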

The non-streaming _generate path is not affected because convertResponsesMessageToAIMessage correctly assigns the full reasoning item — including encrypted_content — to additional_kwargs.reasoning.

What I expect: After a streaming response completes, result.additional_kwargs.reasoning.encrypted_content should be set (when include: ["reasoning.encrypted_content"] was requested and the model returned encrypted reasoning).

What happens: result.additional_kwargs.reasoning.encrypted_content is undefined, and subsequent rounds in a tool-call loop do not include the reasoning item in the request input.

Likely fix: add a response.output_item.done handler for reasoning items that overwrites additional_kwargs.reasoning with the full item from the event. Happy to send a PR.
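Sketch of the proposed handler (hypothetical shape, to illustrate the idea rather than the exact patch): on `response.output_item.done` for a reasoning item, overwrite the partial stub captured at `output_item.added` with the completed item, so `encrypted_content` and the final `summary` survive into `additional_kwargs.reasoning`.

```typescript
// Hypothetical `output_item.done` handler for reasoning items.
type ReasoningItem = {
  type: "reasoning";
  id: string;
  summary: unknown[];
  encrypted_content?: string;
};
type DoneEvent = { type: "response.output_item.done"; item: ReasoningItem };

function handleOutputItemDone(
  kwargs: Record<string, unknown>,
  event: DoneEvent
): void {
  if (event.item.type === "reasoning") {
    // Replace the stub wholesale so the completed fields are kept.
    kwargs.reasoning = { ...event.item };
  }
}

const kwargs: Record<string, unknown> = {
  reasoning: { id: "rs_1", type: "reasoning", summary: [] }, // stub from `added`
};
handleOutputItemDone(kwargs, {
  type: "response.output_item.done",
  item: {
    type: "reasoning",
    id: "rs_1",
    summary: [],
    encrypted_content: "ENCRYPTED_BLOB",
  },
});
console.log((kwargs.reasoning as ReasoningItem).encrypted_content);
// "ENCRYPTED_BLOB"
```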

System Info

@langchain/openai: 1.4.5 (also reproduced on main @ e5b6e1c8568115cdda53eccafbdcd23537f554c5)
@langchain/core: 1.0.x
platform: macOS 26.4.1
Node: v24.14.1
pnpm: 10.30.3
