
docs: Keras 3 evaluate() and compiled metrics clarification #21506


Open · wants to merge 2 commits into master

Conversation

sonali-kumari1 (Contributor)

This PR improves the documentation for evaluate() when used with compiled metrics in Keras 3. The existing documentation is imprecise because model.metrics_names does not always provide one entry per value returned by evaluate().
colab gist

Fixes: #21487
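
To make the reported mismatch concrete, here is a minimal sketch (not part of this PR) of the behavior the linked issue describes; the exact contents of model.metrics_names may vary across Keras 3 versions:

```python
import numpy as np
import keras

# Toy regression model compiled with two metrics.
inputs = keras.Input(shape=(4,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae", "mse"])

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

results = model.evaluate(x, y, verbose=0)
print(results)              # three values: [loss, mae, mse]
print(model.metrics_names)  # may list only top-level names
                            # (e.g. ['loss', 'compile_metrics']),
                            # hence the apparent length mismatch
```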

gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @sonali-kumari1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the clarity of Keras 3's evaluate() method documentation. The primary goal is to resolve confusion regarding the output of evaluate() when used with compiled metrics, particularly how it relates to the model.metrics_names attribute, ensuring users can correctly interpret the results.

Highlights

  • Documentation Improvement: I've updated the docstring for the evaluate() method in Keras 3 to provide clearer guidance on its return values, especially when compiled metrics are involved.
  • Metrics Clarification: The documentation now explicitly addresses the potential mismatch between the number of values returned by evaluate() and the entries in model.metrics_names when submetrics are present. It clarifies that model.metrics_names may list only top-level names while evaluate() returns a value for each submetric, and that the order of the evaluate() output follows the order of metrics specified in model.compile() (see the sketch below).
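
As a hedged illustration of the ordering rule described above (continuing the sketch in the PR description), the flat list returned with the default return_dict=False can be unpacked positionally, loss first and then the compiled metrics in compile() order:

```python
# Continuing the sketch above: the flat list is ordered as loss first,
# then the metrics in the order they were passed to model.compile().
loss, mae, mse = model.evaluate(x, y, verbose=0)
print(f"loss={loss:.4f}  mae={mae:.4f}  mse={mse:.4f}")
```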

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request improves the documentation for evaluate() to clarify the relationship between its return values and model.metrics_names. The change is helpful. I've suggested a small wording improvement to make the order of the returned values (loss and metrics) even clearer.

Comment on lines 801 to 802
length mismatch. The order of the `evaluate()` output corresponds
to the order of metrics specified during `model.compile()`. You can

Severity: medium

This is a great clarification! To make it even more precise, could we also mention that the loss is the first value in the returned list? The current wording (...corresponds to the order of metrics...) could be interpreted as if only metrics are returned, which might be confusing.

Suggested change
- length mismatch. The order of the `evaluate()` output corresponds
- to the order of metrics specified during `model.compile()`. You can
+ length mismatch. The returned values are ordered as: first the loss, then the metrics from `model.compile()` in order. You can
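
As a related aside (a sketch, not part of the suggested change, continuing the example from the PR description): passing return_dict=True to evaluate() returns a name-to-value mapping, which avoids positional bookkeeping and any reliance on model.metrics_names altogether:

```python
# return_dict=True yields names alongside values, so no positional
# unpacking against metrics_names is needed.
results = model.evaluate(x, y, return_dict=True, verbose=0)
print(results)  # e.g. {'loss': 0.09, 'mae': 0.24, 'mse': 0.09}
```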

codecov-commenter commented Jul 24, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.72%. Comparing base (df481e9) to head (6d198f5).
Report is 15 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21506      +/-   ##
==========================================
- Coverage   82.84%   82.72%   -0.13%     
==========================================
  Files         565      567       +2     
  Lines       55656    56214     +558     
  Branches     8685     8786     +101     
==========================================
+ Hits        46108    46501     +393     
- Misses       7433     7556     +123     
- Partials     2115     2157      +42     
Flag              Coverage      Δ
keras             82.52% <ø>    (-0.13%) ⬇️
keras-jax         63.92% <ø>    (+0.52%) ⬆️
keras-numpy       58.41% <ø>    (-0.18%) ⬇️
keras-openvino    34.56% <ø>    (+0.61%) ⬆️
keras-tensorflow  64.34% <ø>    (+0.46%) ⬆️
keras-torch       63.97% <ø>    (+0.50%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


Successfully merging this pull request may close these issues.

Trainer.evaluate and Trainer.metrics_names do not interact as specified in the docs
4 participants