Bug: IDE diff acceptance inflates edit tool_call args with full file content, causing massive token waste
What happened?
In Default (manual approval) mode with IDE integration, when the user accepts an edit diff in the IDE, the onConfirm callback in edit.ts overwrites params.old_string and params.new_string with the entire file content. Because invocation.params shares the same object reference as toolCall.request.args in the scheduler, this mutation propagates into the conversation history. On the next LLM API call, the assistant message's tool_calls contain the inflated args, causing a massive increase in prompt tokens.
Observed behavior
Editing a ~2200-line TypeScript file (config.ts, ~71KB) with three edit tool calls:
| Field | LLM's original output | Sent back to LLM in next turn |
| --- | --- | --- |
| Edit 1 old_string | 87 chars | 70,963 chars |
| Edit 2 old_string | 135 chars | 70,963 chars |
| Edit 3 old_string | 185 chars | 70,963 chars |
The new_string fields are similarly inflated (~71K chars each).
Result: prompt_tokens jumped from 19,192 to 112,468 between consecutive API requests — an increase of ~93K tokens caused entirely by the inflated tool call args.
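As a rough cross-check (assuming the common heuristic of ~4 characters per token; the real tokenizer ratio varies, and JSON escaping shifts it further), the inflated args alone account for a jump of this magnitude:

```typescript
// Back-of-the-envelope estimate of the extra prompt tokens added by the
// inflated args (heuristic: ~4 characters per token).
const inflatedFieldChars = 70_963; // each inflated old_string / new_string
const fieldsPerEdit = 2;           // old_string + new_string
const edits = 3;

const inflatedChars = inflatedFieldChars * fieldsPerEdit * edits; // 425,778
const approxTokens = Math.round(inflatedChars / 4);

console.log(approxTokens); // ~106K, the same order as the observed ~93K jump
```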
Reproduction steps
1. Connect to an IDE (VS Code) with the companion extension
2. Set approval mode to Default
3. Have the LLM make any edit tool call
4. Accept the diff in the IDE
5. Continue the conversation (trigger another LLM API call)
6. Observe that the previous turn's edit tool_call now carries the full file content as old_string/new_string
This affects every single edit in Default+IDE mode, not just multi-edit scenarios.
What did you expect to happen?
The conversation history should preserve the LLM's original tool call arguments (small, targeted old_string/new_string), not the inflated full-file-content versions created by the onConfirm callback. The prompt token count should remain stable.
Root cause analysis
In packages/core/src/tools/edit.ts, the onConfirm callback (line 319-330) mutates this.params directly:
```typescript
onConfirm: async (outcome: ToolConfirmationOutcome) => {
  if (ideConfirmation) {
    const result = await ideConfirmation;
    if (result.status === 'accepted' && result.content) {
      this.params.old_string = editData.currentContent ?? ''; // entire file content
      this.params.new_string = result.content; // entire new file content
    }
  }
},
```
The mutation propagates through a shared object reference chain:
1. The LLM response Part objects contain functionCall.args
2. These same Part objects are pushed directly into geminiChat.history (geminiChat.ts L712: consolidatedHistoryParts.push(part) — no copy)
3. The same args object is assigned to request.args in the scheduler (agent-core.ts L750: const args = fc.args)
4. The scheduler creates invocation.params as the same reference (tool.build(reqInfo.args))
So this.history[n].parts[m].functionCall.args === invocation.params === request.args
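The shared-reference identity above can be reproduced in isolation (the type names below are simplified stand-ins for the real ones):

```typescript
// Simplified stand-ins for the real Part/FunctionCall types.
type FunctionCall = { name: string; args: Record<string, string> };
type Part = { functionCall: FunctionCall };

// The LLM response part is pushed into history without a copy.
const part: Part = {
  functionCall: { name: 'edit', args: { old_string: 'small', new_string: 'tiny' } },
};
const history: Part[] = [part];

// The scheduler and the tool invocation keep the same args reference.
const requestArgs = part.functionCall.args; // request.args
const invocationParams = requestArgs;       // invocation.params

// Mutating invocation params (as onConfirm does) silently rewrites history.
invocationParams['old_string'] = '<entire 71KB file content>';

console.log(history[0].functionCall.args['old_string'] === invocationParams['old_string']); // true
```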
When onConfirm mutates this.params.old_string, it mutates all three simultaneously. On the next API call:
1. getHistory() deep-clones the already-mutated history (geminiChat.ts L520)
2. The OpenAI converter stringifies the mutated args: JSON.stringify(part.functionCall.args) (converter.ts L472)
Result: the inflated full-file-content args are serialized into the tool_calls[].function.arguments field
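One way to break the chain (a sketch of a possible mitigation, not the project's actual fix) is to deep-copy the args at the scheduler boundary so the invocation side can no longer reach the history side:

```typescript
// Hypothetical scheduler-boundary copy; `fc` stands in for the functionCall
// object taken from the LLM response.
const fc = {
  name: 'edit',
  args: { old_string: 'const a = 1;', new_string: 'const a = 2;' },
};

// structuredClone (global in Node >= 17) detaches the invocation-side object.
const requestArgs = structuredClone(fc.args);

// An onConfirm-style mutation now stays on the copy.
requestArgs.old_string = '<entire file content>';
console.log(fc.args.old_string); // still 'const a = 1;'
```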
Why Auto Edit mode is unaffected
In AUTO_EDIT/YOLO mode, ideConfirmation is set to undefined (line 302-309), so the onConfirm callback never executes the param-overwriting branch. The original small old_string/new_string values are preserved.
Client information
N/A (this is a code-level bug report based on source analysis and API request log inspection)
Anything else we need to know?
This shares the same root cause (the onConfirm callback overwriting params) as "Bug: Multiple edit tool calls fail in Default mode with IDE integration — only first edit succeeds" #2702, but the symptoms differ: #2702 is about execution failure, while this issue is about token waste.
The inflated args also propagate to hooks (firePostToolUseHook) via toolInput = scheduledCall.request.args.
For large files, the token waste can be extreme. A single edit on a 71KB file adds ~140KB to the conversation history (old_string + new_string). Multiple edits multiply this linearly.
Workaround: Exiting and restarting the process, then reloading the session, restores normal token usage. This is because recordAssistantTurn() (geminiChat.ts L726) serializes the args to disk before tool execution (and before onConfirm mutates them). On reload, buildApiHistoryFromConversation() (sessionService.ts L592) reconstructs the history from the disk snapshot using structuredClone(), which contains the original non-inflated args.
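The restart workaround works because the snapshot is taken before the mutation; in miniature (illustrative names, not the real APIs):

```typescript
// The args are serialized to disk before tool execution and before onConfirm
// mutates them, so the snapshot keeps the small originals.
const args = { old_string: 'small snippet', new_string: 'small replacement' };

// recordAssistantTurn(): pre-mutation snapshot.
const diskSnapshot = JSON.stringify(args);

// Later, onConfirm inflates the live in-memory object.
args.old_string = '<entire file content>';

// buildApiHistoryFromConversation(): history rebuilt from the snapshot.
const restored = structuredClone(JSON.parse(diskSnapshot)) as typeof args;
console.log(restored.old_string); // 'small snippet' (original args survive)
```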