Replies: 7 comments
-
Hello @Dodero10! I'm Dosu, your friendly bot here to help you troubleshoot bugs, answer questions, and navigate your contributions. I'm ready to assist you while you wait for a human maintainer.

To ensure that the input and output traces of agent steps are properly displayed and captured in Langfuse when using LangGraph, you should explicitly set the output on the trace:

```python
from datetime import UTC, datetime
from typing import Dict, List, cast

from langchain_core.messages import AIMessage

# State, Configuration, load_chat_model, TOOLS, and the langfuse client
# come from the project's own modules.


async def react_agent(state: State) -> Dict[str, List[AIMessage]]:
    configuration = Configuration.from_context()
    model = load_chat_model(configuration.model).bind_tools(TOOLS)
    system_message = configuration.system_prompt.format(
        system_time=datetime.now(tz=UTC).isoformat()
    )
    response = cast(
        AIMessage,
        await model.ainvoke(
            [{"role": "system", "content": system_message}, *state.messages]
        ),
    )
    # Capture the output trace so Langfuse can display it
    langfuse.trace(id=state.trace_id, output=response.content)
    if state.is_last_step and response.tool_calls:
        return {
            "messages": [
                AIMessage(
                    id=response.id,
                    content="Sorry, I could not find an answer to your question in the specified number of steps.",
                )
            ]
        }
    return {"messages": [response]}
```

Make sure you are using Langfuse version 2.57.3 or later, as this version supports capturing input/output data effectively [1]. If you are using the Traceloop OpenLLMetry SDK and experiencing missing input/output data, consider using the Langfuse SDK directly to capture input/output data for LangGraph workflow spans [2].

To continue talking to Dosu, mention @dosu.
-
I'm facing a similar issue, where traces are tracked but the inputs and outputs aren't being visualized properly. This seems linked to #6330 and #7043. Specifically, LangGraph messages appear to use a "type" attribute instead of the "role" attribute required by the ChatML schema, which likely causes the rendering issue. It would significantly improve usability if LangGraph messages could also be displayed in a user-friendly, formatted way, similar to other messages, rather than as raw JSON.
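Until the renderer handles this, one workaround is to normalize messages before logging them. A minimal sketch, assuming dict-shaped messages; the `to_chatml` helper and its role map are illustrative, not part of LangGraph or Langfuse:

```python
# Illustrative helper: rewrite the "type" field LangGraph messages carry
# into the "role" field the ChatML schema (and the trace renderer) expects.
ROLE_MAP = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}


def to_chatml(messages: list[dict]) -> list[dict]:
    return [
        {
            "role": ROLE_MAP.get(m.get("type"), "user"),
            "content": m.get("content", ""),
        }
        for m in messages
    ]
```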
-
+1, this is happening to me when using the OpenAI Agents SDK. Example trace: 4048f8198f58476fab84582e10c66567
-
Any solution yet?
-
+1
-
+1
-
+1. I've hacked something together by translating attributes in custom spans (a sketch is below), but it's ugly. Langfuse, Arize, Braintrust, and others are legitimate processors of agent interaction logs, so LiveKit should deliver data to them cleanly, notwithstanding the development of your own Agent Insights features. Thank you!
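For anyone who wants to replicate that, here is a minimal sketch of the idea: translating attributes in a wrapping OpenTelemetry exporter. The keys in `RENAMES` are placeholders, not the attribute names LiveKit actually emits, and overwriting the span's private attribute field is exactly the ugly part:

```python
from typing import Sequence

from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult

# Placeholder renames -- substitute the attribute keys your tracing SDK
# actually emits for the names your observability backend expects.
RENAMES = {"gen_ai.prompt": "input", "gen_ai.completion": "output"}


class TranslatingExporter(SpanExporter):
    """Wraps a real exporter and copies known attributes under new names
    before the spans are shipped to the backend."""

    def __init__(self, inner: SpanExporter) -> None:
        self._inner = inner

    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        for span in spans:
            attrs = dict(span.attributes or {})
            translated = False
            for old, new in RENAMES.items():
                if old in attrs and new not in attrs:
                    attrs[new] = attrs[old]
                    translated = True
            if translated:
                # ReadableSpan attributes are meant to be read-only, so this
                # swaps in a translated copy via the private field: the hack.
                span._attributes = attrs
        return self._inner.export(spans)

    def shutdown(self) -> None:
        self._inner.shutdown()
```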
-
I'm currently using LangGraph to build and run agents, and integrating Langfuse for tracing and observability. However, I've encountered an issue where the input and output of the agent steps are not being properly displayed or captured in Langfuse during tracing. This makes it difficult to debug or analyze the behavior of the agents effectively.
I've verified that tracing is initialized and Langfuse is recording events, but the structured input/output for each step (e.g., tool calls, thoughts, actions) isn't showing up as expected.
Has anyone else experienced this when using LangGraph with Langfuse? Any suggestions or best practices for making sure input/output is properly tracked in this setup would be appreciated.
Input and Output show metadata

This is my code