fix(openai): preserve encrypted_content on streaming reasoning items #10845

Open

Jason Taylor (thetizzo) wants to merge 1 commit into
Conversation
The streaming Responses converter captured reasoning items at `response.output_item.added`, which fires before `encrypted_content` is populated. There was no `output_item.done` handler for reasoning, so `encrypted_content` was silently dropped from `additional_kwargs.reasoning`.

On subsequent rounds, `convertMessagesToResponsesInput` checks `reasoning?.encrypted_content`. With it missing, the reasoning item was not pushed onto `input`. Combined with `store: false` (set by `zdrEnabled: true`), OpenAI couldn't resolve the item by id either, fully dropping the reasoning chain between tool-call rounds and defeating `include: ["reasoning.encrypted_content"]`.

Add a `response.output_item.done` handler for reasoning items that emits `encrypted_content` (and `status` / `content`) into `additional_kwargs.reasoning`. Only emit fields not already set by `output_item.added` and the `reasoning_summary_*` events so chunk merging doesn't double-add summary parts.

The non-streaming `_generate` path was unaffected because `convertResponsesMessageToAIMessage` already captures the full item.

Fixes langchain-ai#10844
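For context, the relevant streamed events for one reasoning item arrive roughly in this order (a sketch with placeholder payloads, not captured output):

```ts
// Sketch of the relevant Responses stream events; ids and values are placeholders.
const events = [
  {
    // Fires first; encrypted_content is not populated yet at this point.
    type: "response.output_item.added",
    item: { type: "reasoning", id: "rs_abc", summary: [] },
  },
  // ... reasoning_summary_* events stream the summary parts in between ...
  {
    // Fires last and carries the complete item, including encrypted_content.
    type: "response.output_item.done",
    item: {
      type: "reasoning",
      id: "rs_abc",
      summary: [{ type: "summary_text", text: "..." }],
      encrypted_content: "gAAAAA...",
      status: "completed",
    },
  },
];
```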
Fixes #10844.
What

Add a `response.output_item.done` handler for reasoning items in `convertResponsesDeltaToChatGenerationChunk` so the final reasoning item, including `encrypted_content`, is captured into `additional_kwargs.reasoning`.

Why
The existing `response.output_item.added` handler for reasoning fires before `encrypted_content` is populated by the model, and there was no `output_item.done` handler for reasoning. As a result, `encrypted_content` was silently dropped on the streaming path.

On subsequent rounds, `convertMessagesToResponsesInput` (around responses.ts L1458–L1468) checks `reasoning?.encrypted_content`. With it missing, the reasoning item is not pushed onto `input`. Combined with `store: false` (set by `zdrEnabled: true`), OpenAI cannot resolve the item by id either, so the encrypted reasoning chain is fully lost between tool-call rounds, defeating `include: ["reasoning.encrypted_content"]`.

The non-streaming `_generate` path is unaffected because `convertResponsesMessageToAIMessage` already captures the full item, including `encrypted_content`. See issue #10844 for a minimal SSE-mocked reproduction.
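Paraphrased, that round-trip check behaves roughly like the stand-in below (simplified; not the actual code in responses.ts):

```ts
type ReasoningKwargs = {
  id?: string;
  type?: "reasoning";
  summary?: unknown[];
  encrypted_content?: string;
};

// Simplified stand-in for the check in convertMessagesToResponsesInput:
// the reasoning item is only re-sent when encrypted_content survived the
// previous round. With store: false (zdrEnabled: true) there is also no
// server-side copy to resolve by id, so dropping it here loses the chain.
function maybePushReasoning(input: unknown[], reasoning?: ReasoningKwargs) {
  if (reasoning?.encrypted_content) {
    input.push({ type: "reasoning", ...reasoning });
  }
}
```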
How

Add a new branch in `convertResponsesDeltaToChatGenerationChunk` (sketched below):
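A minimal sketch of what the branch contributes (the helper name and the event/item shapes below are illustrative assumptions, not the merged code):

```ts
// Sketch only: names and shapes are assumptions, not the merged code.
type ResponsesStreamEvent = {
  type: string;
  item?: {
    type: string;
    encrypted_content?: string;
    status?: string;
    content?: unknown[];
  };
};

// Given a streamed event, return the reasoning fields the new
// response.output_item.done handler adds on top of what
// output_item.added already emitted.
function reasoningKwargsFromDone(event: ResponsesStreamEvent) {
  if (event.type !== "response.output_item.done") return undefined;
  const item = event.item;
  if (!item || item.type !== "reasoning") return undefined;

  const { encrypted_content, status, content } = item;
  // id/type/summary are deliberately omitted: output_item.added and the
  // reasoning_summary_* events already emitted them, and repeating them
  // would double-add summary parts when chunks are merged.
  return {
    ...(encrypted_content != null ? { encrypted_content } : {}),
    ...(status != null ? { status } : {}),
    ...(content != null ? { content } : {}),
  };
}
```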
Only emit fields that aren't already populated by `output_item.added` and the `reasoning_summary_*` events (id/type/summary), so that chunk merging via `_mergeDicts` does not double-add summary parts. `_mergeDicts` recurses into `additional_kwargs.reasoning` and merges `encrypted_content`/`status`/`content` from the `done` chunk on top of the partial `{ id, type, summary }` from the `added` chunk.
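That merge behavior can be seen with plain `AIMessageChunk` concatenation, which is roughly what combining the streamed chunks amounts to (placeholder values):

```ts
import { AIMessageChunk } from "@langchain/core/messages";

// Partial reasoning emitted at output_item.added / reasoning_summary_*:
const fromAdded = new AIMessageChunk({
  content: "",
  additional_kwargs: {
    reasoning: { id: "rs_abc", type: "reasoning", summary: [] },
  },
});

// Fields emitted by the new output_item.done handler:
const fromDone = new AIMessageChunk({
  content: "",
  additional_kwargs: {
    reasoning: { encrypted_content: "gAAAAA...", status: "completed" },
  },
});

// concat() merges additional_kwargs recursively, so the combined chunk's
// reasoning carries id, type, summary, encrypted_content, and status
// without duplicating summary parts.
const merged = fromAdded.concat(fromDone);
```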
Tests

Added a unit test in `src/converters/tests/responses.test.ts` that:

- streams `output_item.added` and `output_item.done` events for a reasoning item with `encrypted_content`;
- asserts that the resulting `additional_kwargs.reasoning` carries `id`, `encrypted_content`, and `status`.

`pnpm --filter @langchain/openai test` runs 245 passing tests after the change (was 244 before). Pre-existing typecheck errors in unrelated test files (webSearch.int.test.ts, around responses.test.ts:2209) were present on `main` before this change and are not introduced here.
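Roughly, the assertion looks like this (`mergedMessage` is a hypothetical stand-in for the chunk obtained by concatenating the converter's output for the two events; values are placeholders):

```ts
import type { AIMessageChunk } from "@langchain/core/messages";

// Hypothetical stand-in for the AIMessageChunk produced by concatenating
// the chunks emitted for the added + done events.
declare const mergedMessage: AIMessageChunk;

// The shape of the assertion the new test makes:
expect(mergedMessage.additional_kwargs.reasoning).toMatchObject({
  id: "rs_abc",
  encrypted_content: "gAAAAA...",
  status: "completed",
});
```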
Local repro

I encountered this in production with `gpt-5.4`: reasoning leaking back into final output text across tool-call rounds. The logs I captured came from a downstream workaround that pulls the full reasoning item out of `response_metadata.output`. After this PR is in, the workaround can be removed; `additional_kwargs.reasoning.encrypted_content` is set directly by the converter.
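For reference, that workaround is essentially of this shape (a sketch, not the exact production code; `restoreReasoning` is a hypothetical helper name):

```ts
import { AIMessage } from "@langchain/core/messages";

// Sketch of the downstream workaround: recover the full reasoning item
// (including encrypted_content) from response_metadata.output and copy it
// back into additional_kwargs.reasoning before the next tool-call round.
function restoreReasoning(message: AIMessage): AIMessage {
  const output = (message.response_metadata?.output ?? []) as Array<{
    type: string;
    encrypted_content?: string;
    [key: string]: unknown;
  }>;
  const reasoningItem = output.find((item) => item.type === "reasoning");
  if (reasoningItem?.encrypted_content) {
    message.additional_kwargs.reasoning = reasoningItem;
  }
  return message;
}
```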