feat(core): Instrument LangGraph Agent #18114
base: develop
Conversation
size-limit report 📦

node-overhead report 🧳
Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
```ts
 *
 * Wraps the compile() method to:
 * - Create a `gen_ai.create_agent` span when compile() is called
 * - Automatically wrap the invoke() method on the returned compiled graph
```
Suggested change:
```diff
- * - Automatically wrap the invoke() method on the returned compiled graph
+ * - Automatically wrap the invoke() method on the returned compiled graph with `gen_ai.invoke_agent` spans
```
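The wrapping described above can be sketched roughly as follows. This is a minimal stand-in, not the SDK's actual implementation: `startSpan`, the span shape, and the graph interfaces are all simplified stand-ins for Sentry's and LangGraph's real APIs.

```typescript
type Span = { op: string; attributes: Record<string, unknown> };
const recordedSpans: Span[] = [];

// Stand-in for Sentry's span helper: records the span, then runs the callback.
function startSpan<T>(op: string, attributes: Record<string, unknown>, fn: () => T): T {
  recordedSpans.push({ op, attributes });
  return fn();
}

interface CompiledGraph {
  invoke(input: unknown): unknown;
}

interface StateGraphLike {
  compile(options?: { name?: string }): CompiledGraph;
}

// Patch compile() so it emits a create_agent span, and patch the returned
// graph's invoke() so each call emits an invoke_agent span.
function instrumentCompile(graph: StateGraphLike): StateGraphLike {
  const originalCompile = graph.compile.bind(graph);
  graph.compile = (options?: { name?: string }) =>
    startSpan('gen_ai.create_agent', { 'gen_ai.agent.name': options?.name }, () => {
      const compiled = originalCompile(options);
      const originalInvoke = compiled.invoke.bind(compiled);
      compiled.invoke = (input: unknown) =>
        startSpan('gen_ai.invoke_agent', { 'gen_ai.agent.name': options?.name }, () =>
          originalInvoke(input),
        );
      return compiled;
    });
  return graph;
}
```

The design point is that instrumentation happens once at compile() time, so every subsequent invoke() on the compiled graph is traced without further user action.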
```ts
  name: 'invoke_agent',
  attributes: {
    [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: LANGGRAPH_ORIGIN,
    [SEMANTIC_ATTRIBUTE_SENTRY_OP]: 'gen_ai.invoke_agent',
```
l: maybe we should put the span operations in a separate constants file too (like utils/ai/gen-ai-attributes.ts)? since some of them like gen_ai.invoke_agent are emitted from multiple integrations
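A shared constants module of the kind this comment suggests might look like the sketch below. The names and values are illustrative (the `gen_ai.*` ops appear in the quoted diff; the module name and shape are assumptions, not the SDK's actual exports):

```typescript
// Hypothetical shared constants (e.g. in utils/ai/), so operations emitted by
// multiple gen-AI integrations are defined in exactly one place.
const GEN_AI_AGENT_OPS = {
  CREATE_AGENT: 'gen_ai.create_agent',
  INVOKE_AGENT: 'gen_ai.invoke_agent',
} as const;
```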
```ts
const inputMessages = args.length > 0 ? (args[0] as { messages?: LangChainMessage[] }).messages : [];

if (inputMessages && recordInputs) {
  const normalizedMessages = normalizeLangChainMessages(inputMessages);
```
m: Is it possible that this returns a string in this context? because that case is not covered by the truncation (i.e. strings do not get truncated)
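One way to address the string case the reviewer raises is to normalize both input shapes through the same truncation path. This is a hedged sketch, not the SDK's helper: `MAX_CONTENT_LENGTH`, the `user` role default, and the message shape are all illustrative assumptions.

```typescript
type LangChainMessage = { role: string; content: string };

// Illustrative cap; the SDK's actual truncation limit may differ.
const MAX_CONTENT_LENGTH = 1024;

function truncate(value: string): string {
  return value.length > MAX_CONTENT_LENGTH ? value.slice(0, MAX_CONTENT_LENGTH) : value;
}

// Normalize either a plain string or a message list, truncating on both paths
// so the string case is covered too.
function normalizeInput(input: string | LangChainMessage[]): LangChainMessage[] {
  if (typeof input === 'string') {
    return [{ role: 'user', content: truncate(input) }];
  }
  return input.map(message => ({ ...message, content: truncate(message.content) }));
}
```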
```ts
 *
 * Tools are stored in: compiledGraph.builder.nodes.tools.runnable.tools
 */
function extractToolsFromCompiledGraph(compiledGraph: CompiledGraph): unknown[] | null {
```
l: maybe this should be moved to ./utils?
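Wherever the helper ends up living, the core of it is a defensive walk of the path the doc comment names. A minimal sketch, with stand-in types (the real `CompiledGraph` type is LangGraph's, not this):

```typescript
interface CompiledGraphLike {
  builder?: {
    nodes?: Record<string, { runnable?: { tools?: unknown[] } } | undefined>;
  };
}

// Optional chaining makes a missing node at any level yield null rather than throw.
function extractTools(graph: CompiledGraphLike): unknown[] | null {
  const tools = graph.builder?.nodes?.['tools']?.runnable?.tools;
  return Array.isArray(tools) ? tools : null;
}
```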
```ts
/**
 * Set response attributes on the span
 */
function setResponseAttributes(span: Span, inputMessages: LangChainMessage[] | null, result: unknown): void {
```
l: same here
```ts
if (typeof usage.total_tokens === 'number') {
  span.setAttribute(GEN_AI_USAGE_TOTAL_TOKENS_ATTRIBUTE, usage.total_tokens);
}
return; // Found usage_metadata, no need to check fallback
```
Suggested change:
```diff
- return; // Found usage_metadata, no need to check fallback
+ return;
```
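The prefer-then-fall-back logic that the quoted early `return` implements can be sketched like this. The span interface and function name are stand-ins; the `gen_ai.usage.*` attribute keys follow the convention visible in the quoted code.

```typescript
interface SpanLike {
  setAttribute(key: string, value: number): void;
}

interface TokenUsage {
  input_tokens?: number;
  output_tokens?: number;
  total_tokens?: number;
}

// Prefer usage_metadata; consult the fallback only when it is absent, which is
// what the early return in the quoted code achieves.
function setTokenAttributes(span: SpanLike, usageMetadata?: TokenUsage, fallback?: TokenUsage): void {
  const usage = usageMetadata ?? fallback;
  if (!usage) return;
  if (typeof usage.input_tokens === 'number') span.setAttribute('gen_ai.usage.input_tokens', usage.input_tokens);
  if (typeof usage.output_tokens === 'number') span.setAttribute('gen_ai.usage.output_tokens', usage.output_tokens);
  if (typeof usage.total_tokens === 'number') span.setAttribute('gen_ai.usage.total_tokens', usage.total_tokens);
}
```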
```ts
  .addEdge('tools', 'agent')
  .compile({ name: 'tool_agent' });

// Simple invocation - won't call tools since mockLlm returns empty tool_calls
```
m: I think it would be good to also add the scenario where tools are being called and then check that the tools are correctly recorded in gen_ai.response.tool_calls
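The assertion the reviewer asks for would target the attribute-recording step, which might look roughly like the sketch below. All names here are illustrative stand-ins for the real test harness; only the `gen_ai.response.tool_calls` attribute key comes from the comment itself.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

// When the model response carries tool_calls, serialize them onto the span
// attribute so a test can assert on gen_ai.response.tool_calls.
function recordToolCalls(attributes: Record<string, string>, toolCalls: ToolCall[] | undefined): void {
  if (toolCalls && toolCalls.length > 0) {
    attributes['gen_ai.response.tool_calls'] = JSON.stringify(toolCalls);
  }
}
```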
```ts
  .compile({ name: 'weather_assistant' });

// Test: basic invocation
await graph.invoke({
```
l: Does this always follow this list-of-dicts format or can we also input a plain string? If so, might be worth covering this in a test as well
```ts
/**
 * Adds Sentry tracing instrumentation for LangGraph.
 *
 * This integration is enabled by default.
```
Q: How is this controlled? i.e. what would I have to change to disable it by default?
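In the Sentry JS SDK, default integrations can typically be disabled by passing a function as the `integrations` option to `Sentry.init`, which receives the default list and returns a filtered one. The sketch below shows only the filtering step against a stand-in list; the `'LangGraph'` name is an assumption about how this integration registers itself.

```typescript
type Integration = { name: string };

// Return the default integrations minus the one with the given name; in a real
// app this function body would be passed to the `integrations` init option.
function withoutIntegration(defaults: Integration[], name: string): Integration[] {
  return defaults.filter(integration => integration.name !== name);
}
```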
```ts
 * import * as Sentry from '@sentry/node';
 *
 * Sentry.init({
 *   integrations: [Sentry.langgraphIntegration()],
```
l: maybe it would be better if the casing of the integration name is aligned with other integrations? e.g. for langchain we use langChain
This PR adds official support for instrumenting LangGraph StateGraph operations in Node with Sentry tracing, following OpenTelemetry semantic conventions for Generative AI.

Currently supported:
- Node.js - both agent creation and invocation are instrumented in this PR
- ESM and CJS - both module systems are supported

The langgraphIntegration() accepts the following options:

Operations traced: