
fix(openai): preserve encrypted_content on streaming reasoning items #10845

Open

Jason Taylor (thetizzo) wants to merge 1 commit into langchain-ai:main from thetizzo:fix/openai-encrypted-reasoning-streaming

Conversation

@thetizzo

Fixes #10844.

What

Add a response.output_item.done handler for reasoning items in convertResponsesDeltaToChatGenerationChunk so the final reasoning item — including encrypted_content — is captured into additional_kwargs.reasoning.

Why

The existing response.output_item.added handler for reasoning fires before encrypted_content is populated by the model, and there was no output_item.done handler for reasoning. As a result, encrypted_content was silently dropped on the streaming path.

On subsequent rounds, convertMessagesToResponsesInput (around responses.ts L1458–L1468) checks reasoning?.encrypted_content. With it missing, the reasoning item is not pushed onto input. Combined with store: false (set by zdrEnabled: true), OpenAI cannot resolve the item by id either, so the encrypted reasoning chain is fully lost between tool-call rounds — defeating include: ["reasoning.encrypted_content"].
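
For context, that input-path check looks roughly like this (a paraphrased sketch, not the verbatim source; variable names are illustrative):

// Inside convertMessagesToResponsesInput (paraphrased):
const reasoning = message.additional_kwargs.reasoning;
if (reasoning?.encrypted_content) {
  // The reasoning item can only be replayed to the API if the encrypted
  // payload survived the previous round.
  input.push({
    type: "reasoning",
    id: reasoning.id,
    encrypted_content: reasoning.encrypted_content,
    summary: reasoning.summary ?? [],
  });
}
// On the streaming path encrypted_content was undefined here, so the push
// was skipped and the reasoning item never rejoined the next request.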

The non-streaming _generate path is unaffected because convertResponsesMessageToAIMessage already captures the full item, including encrypted_content. See issue #10844 for a minimal SSE-mocked reproduction.

How

Add a new branch in convertResponsesDeltaToChatGenerationChunk:

} else if (
  event.type === "response.output_item.done" &&
  "item" in event &&
  event.item.type === "reasoning"
) {
  additional_kwargs.reasoning = {
    type: "reasoning",
    ...(event.item.encrypted_content
      ? { encrypted_content: event.item.encrypted_content }
      : {}),
    ...(event.item.status ? { status: event.item.status } : {}),
    ...(event.item.content ? { content: event.item.content } : {}),
  };
}

Only emit fields that aren't already populated by output_item.added and the reasoning_summary_* events (id / type / summary), so that chunk-merging via _mergeDicts does not double-add summary parts. _mergeDicts recurses into additional_kwargs.reasoning and merges in encrypted_content / status / content from the done chunk on top of the partial { id, type, summary } from the added chunk.
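
To make that concrete, here is a minimal sketch of the merge, assuming @langchain/core's AIMessageChunk.concat semantics (the id and payloads are made up):

import { AIMessageChunk } from "@langchain/core/messages";

// Chunk produced for response.output_item.added (no encrypted_content yet).
const addedChunk = new AIMessageChunk({
  content: "",
  additional_kwargs: {
    reasoning: { id: "rs_abc", type: "reasoning", summary: [] },
  },
});

// Chunk produced by the new response.output_item.done branch.
const doneChunk = new AIMessageChunk({
  content: "",
  additional_kwargs: {
    reasoning: {
      type: "reasoning",
      encrypted_content: "gAAAA...",
      status: "completed",
    },
  },
});

// _mergeDicts recurses into additional_kwargs.reasoning: id and summary come
// from addedChunk, encrypted_content and status layer in from doneChunk, and
// the duplicate "type" string is kept once rather than concatenated.
const merged = addedChunk.concat(doneChunk);
// merged.additional_kwargs.reasoning ≈ { id: "rs_abc", type: "reasoning",
//   summary: [], encrypted_content: "gAAAA...", status: "completed" }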

Tests

Added a unit test in src/converters/tests/responses.test.ts (a rough sketch follows the list) that:

  1. Synthesizes the output_item.added and output_item.done events for a reasoning item with encrypted_content.
  2. Concatenates the resulting chunks (matching the streaming aggregation path).
  3. Asserts the merged additional_kwargs.reasoning carries id, encrypted_content, and status.
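
A rough sketch of that test's shape (the converter's import path and exact signature are assumed here and may differ from the real file):

import { expect, test } from "@jest/globals";
import { convertResponsesDeltaToChatGenerationChunk } from "../responses.js";

test("preserves encrypted_content from response.output_item.done", () => {
  const events = [
    {
      type: "response.output_item.added",
      item: { type: "reasoning", id: "rs_abc", summary: [] },
    },
    {
      type: "response.output_item.done",
      item: {
        type: "reasoning",
        id: "rs_abc",
        summary: [],
        encrypted_content: "gAAAA...",
        status: "completed",
      },
    },
  ];
  // Convert each synthetic event, then merge the chunks the same way the
  // streaming aggregation path does.
  const chunks = events
    .map((e) => convertResponsesDeltaToChatGenerationChunk(e as any))
    .filter((c): c is NonNullable<typeof c> => c != null);
  const merged = chunks.reduce((acc, c) => acc.concat(c));
  const reasoning = merged.message.additional_kwargs.reasoning as any;
  expect(reasoning.id).toBe("rs_abc");
  expect(reasoning.encrypted_content).toBe("gAAAA...");
  expect(reasoning.status).toBe("completed");
});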

pnpm --filter @langchain/openai test runs 245 passing tests after the change (244 before). Pre-existing typecheck errors in unrelated test files (webSearch.int.test.ts, around responses.test.ts:2209) already exist on main and are not introduced by this change.

Local repro

I encountered this in production with gpt-5.4: reasoning leaked back into the final output text across tool-call rounds. The logs below come from a downstream workaround that pulls the full reasoning item out of response_metadata.output:

rehydrateEncryptedReasoning: before
  hasReasoning: true
  reasoningId: "rs_04fc06c5..."
  hadEncryptedContent: false        ← bug confirmed
  foundReasoningItemInOutput: true
  outputItemEncryptedLength: 1208

After this PR is in, the workaround can be removed — additional_kwargs.reasoning.encrypted_content is set directly by the converter.
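
Until then, the workaround amounts to something like this (a sketch; the function name matches the logs above, but the field shapes are assumptions from my application code, not the library):

import { AIMessage } from "@langchain/core/messages";

// Stopgap: copy encrypted_content from the full reasoning item echoed in
// response_metadata.output back onto additional_kwargs.reasoning before the
// message is fed into the next tool-call round.
function rehydrateEncryptedReasoning(message: AIMessage): AIMessage {
  const reasoning = message.additional_kwargs.reasoning as
    | { id?: string; encrypted_content?: string }
    | undefined;
  if (!reasoning?.id || reasoning.encrypted_content) return message;
  const output = (message.response_metadata as any)?.output as
    | any[]
    | undefined;
  const fullItem = output?.find(
    (item) => item?.type === "reasoning" && item?.id === reasoning.id
  );
  if (fullItem?.encrypted_content) {
    message.additional_kwargs.reasoning = {
      ...reasoning,
      encrypted_content: fullItem.encrypted_content,
    };
  }
  return message;
}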


changeset-bot Bot commented May 7, 2026

⚠️ No Changeset found

Latest commit: f74d191

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types
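
For reference, a changeset here would be a small markdown file under .changeset/ (filename arbitrary); a patch bump seems right for this fix:

---
"@langchain/openai": patch
---

fix(openai): preserve encrypted_content on streaming reasoning items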
