
fix(langchain): prevent llmToolSelectorMiddleware from leaking into message stream #10160

Open

Youngho Kim (JadenKim-dev) wants to merge 6 commits into langchain-ai:main from JadenKim-dev:fix-middleware-stream

Conversation

Contributor

@JadenKim-dev Youngho Kim (JadenKim-dev) commented Feb 25, 2026

Summary

llmToolSelectorMiddleware internally calls structuredModel.invoke() to select relevant tools. When agent.stream() is used with streamMode: "messages", LangGraph injects a StreamMessagesHandler into config.callbacks and stores it in AsyncLocalStorage. Without an explicit config override, this handler is inherited by the internal invoke call, causing the tool selection response ({"tools":["..."]}) to appear as an assistant message in the UI stream.
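
For context, a minimal reproduction sketch of the symptom (import paths, the sample tool, and the middleware options are illustrative assumptions rather than code from this PR):

import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { createAgent, llmToolSelectorMiddleware } from "langchain";

// Illustrative tool; any tool set that triggers the selector will do.
const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
});

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [getWeather],
  middleware: [llmToolSelectorMiddleware()],
});

// streamMode "messages" yields [message, metadata] tuples. Without the fix,
// the middleware's internal selection call inherits the StreamMessagesHandler,
// so a chunk like {"tools":["get_weather"]} surfaces here as an assistant message.
for await (const [message] of await agent.stream(
  { messages: [{ role: "user", content: "What's the weather in Paris?" }] },
  { streamMode: "messages" }
)) {
  console.log(message.content);
}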

Root cause

ensureConfig() in @langchain/core merges the explicit config with the implicit config from AsyncLocalStorage. If config.callbacks is undefined, the StreamMessagesHandler from the parent context is inherited. Passing callbacks: [] (an empty array) breaks this inheritance because a non-undefined value always overrides the implicit one.

LangSmith tracing is unaffected — it is injected via LangChainTracer.getTraceableRunTree() inside _configureSync, not through config.callbacks.
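
The merge behavior can be modeled roughly like this (a simplified sketch, not the actual ensureConfig() implementation in @langchain/core):

import { AsyncLocalStorage } from "node:async_hooks";
import type { RunnableConfig } from "@langchain/core/runnables";

// Implicit config captured for the current run tree (e.g. by agent.stream()).
const storage = new AsyncLocalStorage<RunnableConfig>();

function ensureConfigSketch(explicit: RunnableConfig = {}): RunnableConfig {
  const implicit = storage.getStore() ?? {};
  return {
    ...implicit,
    ...explicit,
    // A non-undefined explicit value always wins; only `undefined` falls back
    // to the callbacks (including StreamMessagesHandler) inherited from the parent.
    callbacks: explicit.callbacks ?? implicit.callbacks,
  };
}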

Fix

Build an explicit config using mergeConfigs:

const baseConfig: RunnableConfig = pickRunnableConfigKeys(request.runtime) ?? {};
const config = mergeConfigs(baseConfig, {
  metadata: { lc_source: "llmToolSelector" },
  callbacks: [],
});
  • callbacks: [] prevents StreamMessagesHandler from being inherited via AsyncLocalStorage
  • pickRunnableConfigKeys carries the parent config (tags, metadata, configurable, etc.) over from runtime
  • lc_source: "llmToolSelector" tags the call for observability, consistent with summarizationMiddleware
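
Downstream, the merged config is what the internal selection call receives, roughly (names other than structuredModel and config are illustrative):

// The structured-output call now carries an explicit (empty) callbacks list,
// so nothing is inherited from the surrounding streaming context.
const selection = await structuredModel.invoke(selectionMessages, config);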

Fixes #10042

Test plan

  • Unit tests: yarn test src/agents/middleware/tests/llmToolSelector.test.ts
  • Integration test verifying no stream leak: OPENAI_API_KEY=... yarn vitest run --mode int src/agents/middleware/tests/llmToolSelector.int.test.ts


changeset-bot Bot commented Feb 25, 2026

🦋 Changeset detected

Latest commit: ff99621

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package

  Name        Type
  langchain   Patch


…essage stream

Pass `{ callbacks: [] }` to the internal structuredModel.invoke() call so
LangGraph's streaming callbacks are not inherited, preventing the tool
selection response from appearing as an assistant message when using
agent.stream() with streamMode "messages".
…erit runtime config

Merge the parent runnable config from runtime so LangSmith tracing and
other callback-based consumers can properly track the internal tool
selection call, while still overriding callbacks with an empty array to
prevent streaming events from leaking to the UI.
@JadenKim-dev Youngho Kim (JadenKim-dev) marked this pull request as draft February 26, 2026 13:41
Previously `callbacks: []` was spread after `config` in the invoke call,
making the override intent implicit. Moving it into `mergeConfigs` as the
second argument makes it clear that the empty array intentionally overrides
any inherited callbacks from AsyncLocalStorage context.
@JadenKim-dev Youngho Kim (JadenKim-dev) marked this pull request as ready for review February 26, 2026 13:50
@JadenKim-dev Youngho Kim (JadenKim-dev) changed the title fix(langchain): prevent llmToolSelectorMiddleware from leaking into message stream fix(agents): prevent llmToolSelectorMiddleware from leaking into message stream Feb 28, 2026
christian-bromann

This comment was marked as outdated.

@christian-bromann Christian Bromann (christian-bromann) changed the title fix(agents): prevent llmToolSelectorMiddleware from leaking into message stream fix(langchain): prevent llmToolSelectorMiddleware from leaking into message stream May 3, 2026
});

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }),
Member

We shouldn't need a real model for this and instead just use a mock as documented in https://docs.langchain.com/oss/javascript/langchain/test/unit-testing

…istChatModel

Replace the llmToolSelector integration test that required a real OpenAI API
call with a unit test using FakeListChatModel, so the streaming isolation
check runs without network access or API keys.
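
For reference, FakeListChatModel replays canned responses in order without any network access, which is what makes the streaming-isolation check deterministic (a minimal usage sketch; the exact test wiring in this PR is not reproduced here):

import { FakeListChatModel } from "@langchain/core/utils/testing";

// Each invoke() returns the next canned response as an AIMessage.
const fakeModel = new FakeListChatModel({
  responses: ['{"tools":["get_weather"]}', "It is sunny."],
});

const selection = await fakeModel.invoke("select tools");
console.log(selection.content); // {"tools":["get_weather"]}

const answer = await fakeModel.invoke("answer the user");
console.log(answer.content); // It is sunny.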
Contributor Author

Youngho Kim (JadenKim-dev) commented May 11, 2026

Christian Bromann (@christian-bromann)
Replaced the integration test with FakeListChatModel so it no longer needs a real API call.



Development

Successfully merging this pull request may close these issues.

JS LLM Tool Selector Middleware: internal structuredModel.invoke() is streamed to UI

2 participants