
feat(evals): add behavioral eval for error recovery and self-correction#23361

Closed
ayazhankadessova wants to merge 1 commit into google-gemini:main from ayazhankadessova:feat/eval-error-recovery

Conversation

@ayazhankadessova

Summary

Add a new behavioral eval (error_recovery.eval.ts) that tests the agent's ability to recover from errors through the observe-diagnose-fix-verify loop.

Test cases

1. Type error recovery (USUALLY_PASSES)

  • Workspace has a TypeScript project with a deliberate type mismatch (strings passed to a function expecting numbers)
  • Prompt: "Fix the type error in this project and verify it compiles"
  • Asserts: agent edited a file, ran tsc/build to verify, and the type error is resolved

2. Test failure recovery (USUALLY_PASSES)

  • Workspace has a function with an off-by-one bug (>= instead of >) that causes test failure
  • Prompt: "The tests are failing. Fix the bug and make them pass"
  • Asserts: agent ran the test suite, edited source code (not the test file), and fixed the logic bug
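For illustration, the injected off-by-one bug in the second case could look like the following sketch. This is a hypothetical fixture (the function name and the actual workspace contents in error_recovery.eval.ts may differ); it only shows the kind of `>=`-vs-`>` bug the eval injects:

```typescript
// Hypothetical fixture: return the scores strictly above the threshold.
export function filterAboveThreshold(
  scores: number[],
  threshold: number,
): number[] {
  // Injected bug: `>=` also admits values equal to the threshold, so one
  // extra element leaks through and the test suite fails.
  return scores.filter((s) => s >= threshold); // intended: s > threshold
}
```

The failing test points the agent at this function; the expected recovery is to edit the source (change `>=` to `>`) and re-run the suite, not to relax the test.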

Design decisions

  • Controlled error injection: Known bugs are injected rather than relying on the agent to generate errors, reducing non-determinism
  • Behavioral assertions: Asserts on tool call patterns (did the agent run build/tests? did it edit the right file?) rather than content matching
  • 600s timeout: Matches validation_fidelity.eval.ts for similar multi-step tasks
  • USUALLY_PASSES policy: Appropriate for behavioral evals with inherent LLM variance
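A behavioral assertion of this kind can be sketched as below. The `ToolLog` shape, tool names, and helper are hypothetical stand-ins (the real eval harness types, `evalTest` and `EDIT_TOOL_NAMES`, live in the gemini-cli codebase); the point is that the check inspects which tools were called, not what text the agent produced:

```typescript
// Hypothetical shape of a tool-call log entry; args is a JSON string.
interface ToolLog {
  toolRequest: { name: string; args: string };
}

// Assumed tool names for illustration only.
const EDIT_TOOLS = new Set(['replace', 'write_file']);

// Behavioral check: did the agent both run the test suite and edit a file?
function checkRecoveryBehavior(logs: ToolLog[]): boolean {
  const ranTests = logs.some(
    (log) =>
      log.toolRequest.name === 'run_shell_command' &&
      log.toolRequest.args.includes('vitest'),
  );
  const editedFile = logs.some((log) => EDIT_TOOLS.has(log.toolRequest.name));
  return ranTests && editedFile;
}
```

Because the assertion only constrains the tool-call pattern, it tolerates variation in how the model phrases its diagnosis or structures its edits.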

Fixes #21990

Test plan

  • Eval follows existing patterns from validation_fidelity.eval.ts
  • Uses evalTest, EDIT_TOOL_NAMES, and tool log assertions consistent with codebase conventions
  • Prettier formatting verified
  • Both test cases start as USUALLY_PASSES per eval guidelines

Add error_recovery.eval.ts with 2 test cases that validate the agent's
ability to detect errors, diagnose issues, apply fixes, and verify
corrections:

1. Type error recovery: agent fixes a TypeScript type mismatch and runs
   tsc to verify the project compiles
2. Test failure recovery: agent identifies an off-by-one bug in source
   code (not the test), fixes it, and re-runs the test suite

Both use controlled error injection and assert on behavioral signals
(tool call patterns) rather than content matching.

Fixes google-gemini#21990
@ayazhankadessova ayazhankadessova requested a review from a team as a code owner March 21, 2026 08:47
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the evaluation suite by adding a new behavioral evaluation focused on an agent's error recovery capabilities. It provides structured scenarios where an agent must identify and resolve common programming errors, such as type mismatches and logical bugs, demonstrating its ability to observe, diagnose, fix, and verify solutions. This improves the robustness testing of agents by simulating real-world debugging challenges.

Highlights

  • New Behavioral Evaluation: Introduced error_recovery.eval.ts to assess an agent's ability to diagnose, fix, and verify solutions for errors.
  • Test Cases: Includes two distinct scenarios: recovering from a TypeScript type error and fixing an off-by-one bug causing test failures.
  • Evaluation Design Principles: Employs controlled error injection, behavioral assertions on tool calls, a 600-second timeout, and the USUALLY_PASSES policy to manage LLM variance.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new behavioral evaluation for error recovery, which is a great addition for testing the agent's robustness. The implementation is solid, with two well-defined test cases for type errors and test failures. I've identified two high-severity issues in the test setup: one is a brittle assertion for checking which file was edited, and the other is an incorrect version for the vitest dependency which could cause the test to fail. My review includes suggestions to fix both.

Note: Security Review is unavailable for this PR.

    test: 'vitest run',
  },
  devDependencies: {
    vitest: '^3.0.0',

high

The specified vitest version ^3.0.0 does not exist. This seems to be a typo and will likely cause the test setup to fail during dependency installation. The latest stable version of vitest is 1.x. I suggest changing this to ^1.0.0.

Suggested change
vitest: '^3.0.0',
vitest: '^1.0.0',

Comment on lines +159 to +161
const editedSource = editCalls.some((log) =>
  log.toolRequest.args.includes('src/utils.ts'),
);

high

This check is a bit brittle as it uses String.prototype.includes() on a JSON string. A more robust approach would be to parse the JSON and check the file_path property directly. This avoids potential false positives if the file path string appears in other arguments (like old_string or new_string) and ensures correctness if the path contains characters that get escaped in JSON.

Suggested change
const editedSource = editCalls.some((log) =>
  log.toolRequest.args.includes('src/utils.ts'),
);
const editedSource = editCalls.some((log) => {
  try {
    const args = JSON.parse(log.toolRequest.args) as { file_path?: string };
    return args.file_path === 'src/utils.ts';
  } catch {
    return false;
  }
});
References
  1. The toolRequest.args property is a JSON string, not an object. It must be parsed using JSON.parse() before its properties can be accessed.
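The false positive the reviewer describes can be reproduced with a small sketch (the field values are hypothetical): an edit to the test file whose new text merely mentions the source path satisfies the substring check, while the parse-and-compare check correctly rejects it.

```typescript
// Hypothetical edit-tool args: the agent edited the TEST file, but the
// replacement text happens to mention 'src/utils.ts' in a comment.
const args = JSON.stringify({
  file_path: 'tests/utils.test.ts',
  old_string: 'expect(f(1)).toBe(2);',
  new_string: '// see src/utils.ts\nexpect(f(1)).toBe(2);',
});

// Brittle check: substring match fires even though the source was not edited.
const brittle = args.includes('src/utils.ts');

// Robust check: parse the JSON string and compare the file_path field.
const parsed = JSON.parse(args) as { file_path?: string };
const robust = parsed.file_path === 'src/utils.ts';
```

Here `brittle` is `true` (a false positive) while `robust` is `false`, which is why the suggested change parses `toolRequest.args` before comparing.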

@gemini-cli gemini-cli bot added the area/platform Issues related to Build infra, Release mgmt, Testing, Eval infra, Capacity, Quota mgmt label Mar 21, 2026
@gemini-cli

gemini-cli bot commented Apr 5, 2026

Hi there! Thank you for your interest in contributing to Gemini CLI.

To ensure we maintain high code quality and focus on our prioritized roadmap, we have updated our contribution policy (see Discussion #17383).

We only guarantee review and consideration of pull requests for issues that are explicitly labeled as 'help wanted'. All other community pull requests are subject to closure after 14 days if they do not align with our current focus areas. For this reason, we strongly recommend that contributors only submit pull requests against issues explicitly labeled as 'help wanted'.

This pull request is being closed as it has been open for 14 days without a 'help wanted' designation. We encourage you to find and contribute to existing 'help wanted' issues in our backlog! Thank you for your understanding and for being part of our community!

