fix torch module wrapper serialization error #21505


Merged

Conversation

divyashreepathihalli
Collaborator

Fixes #19226

The get_config method of TorchModuleWrapper attempted to decode an arbitrary byte array as a UTF-8 string, which raised a UnicodeDecodeError whenever the bytes were not valid UTF-8.
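The failure mode can be reproduced in plain Python, independent of Keras: arbitrary byte arrays (such as pickled module state) are not guaranteed to be valid UTF-8, so decoding them directly raises an error. A minimal illustration:

```python
# Minimal illustration of the bug's failure mode (no Keras needed):
# arbitrary byte payloads are not guaranteed to be valid UTF-8,
# so a direct .decode("utf-8") can raise UnicodeDecodeError.
raw = b"\x80\x04\x95"  # 0x80 is not a valid UTF-8 start byte

try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print("decode failed:", exc.reason)
```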

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @divyashreepathihalli, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a serialization error that occurred when Keras models wrapping PyTorch modules attempted to serialize bytes objects containing non-UTF-8 characters. The core change involves updating the serialization and deserialization utilities to use Base64 encoding for bytes data, ensuring robust handling. A new comprehensive test case has been added to verify the end-to-end saving and loading functionality for such models.

Highlights

  • Serialization Fix for Bytes: Modified the serialize_keras_object and deserialize_keras_object functions in serialization_lib.py to correctly handle bytes objects during serialization and deserialization. This is achieved by employing Base64 encoding before UTF-8 decoding for serialization and Base64 decoding for deserialization, preventing UnicodeDecodeError when bytes objects contain non-UTF-8 compatible characters.
  • New Test Case for Torch Module Serialization: Added a new test_save_load method to torch_utils_test.py. This test defines a Keras model wrapping a torch.nn.Sequential module, then saves and loads it, asserting that the weights remain consistent. This specifically validates that models containing TorchModuleWrapper instances can be successfully serialized and deserialized, confirming the fix for the reported issue.
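The Base64 approach described above can be sketched roughly as follows. This is a hypothetical simplification for illustration, not the merged serialization_lib.py code; the "__bytes__" marker and function names are assumptions.

```python
import base64

# Hypothetical sketch of the fix's approach: base64-encode the bytes
# so the payload is ASCII-safe, then decode that ASCII to a str that
# can be stored in a JSON config. Reverse the steps on the way back.
def serialize_bytes(value: bytes) -> dict:
    return {
        "class_name": "__bytes__",
        "config": {"value": base64.b64encode(value).decode("utf-8")},
    }

def deserialize_bytes(obj: dict) -> bytes:
    return base64.b64decode(obj["config"]["value"])

# Round-trips cleanly even for bytes that are not valid UTF-8:
blob = b"\x80\x04\x95\x00"
assert deserialize_bytes(serialize_bytes(blob)) == blob
```

Because Base64 output is always ASCII, the intermediate `.decode("utf-8")` can no longer fail, which is what eliminates the UnicodeDecodeError.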
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes a serialization error for TorchModuleWrapper by Base64-encoding byte arrays, preventing a UnicodeDecodeError. The changes are logical and well implemented, and a new test case has been added to verify the fix. The feedback focuses on moving the new import statements to the top of the file for better code style.

@codecov-commenter

codecov-commenter commented Jul 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.72%. Comparing base (129e3d7) to head (0b7c59e).
⚠️ Report is 13 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21505      +/-   ##
==========================================
+ Coverage   78.00%   82.72%   +4.71%     
==========================================
  Files         565      567       +2     
  Lines       55701    56217     +516     
  Branches     8691     8786      +95     
==========================================
+ Hits        43451    46504    +3053     
+ Misses      10212     7556    -2656     
- Partials     2038     2157     +119     
Flag Coverage Δ
keras 82.52% <100.00%> (+4.67%) ⬆️
keras-jax 63.92% <25.00%> (?)
keras-numpy 58.41% <25.00%> (-0.26%) ⬇️
keras-openvino 34.56% <25.00%> (+0.57%) ⬆️
keras-tensorflow 64.34% <25.00%> (+0.44%) ⬆️
keras-torch 63.97% <100.00%> (+0.44%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@google-ml-butler google-ml-butler bot added kokoro:force-run ready to pull Ready to be merged into the codebase labels Jul 24, 2025
@hertschuh
Collaborator

Thank you for the fix!

@google-ml-butler google-ml-butler bot removed the ready to pull Ready to be merged into the codebase label Jul 25, 2025
@divyashreepathihalli divyashreepathihalli merged commit e704b46 into keras-team:master Jul 25, 2025
11 checks passed
@MicheleCattaneo

Thanks a lot for this fix! It's been a longstanding issue for me, and I'm glad to see it resolved.
FYI @dnerini

Development

Successfully merging this pull request may close these issues.

TorchModuleWrapper default get_config creates non serializable objects
6 participants