
Fix deserialization issue with custom_loss function while reloading #20820

Closed

Conversation

Surya2k1
Contributor

Fix a deserialization issue with custom loss functions when reloading models saved in Keras versions earlier than 3.7.

A model with a custom loss function saved in Keras 3.6 fails to reload in a newer version, because from 3.7 onwards the package name is appended to the loss function name during serialization.

One workaround is to pass these custom objects explicitly to load_model. The reason this works is explained below.

When custom_objects is passed explicitly to load_model, each entry is duplicated: the resulting dictionary contains both the plain names and the package-prefixed names, for example:

{'MyLayers': <class '__main__.CustomLayer'>, 'MyLayers>CustomLayer': <class '__main__.CustomLayer'>, 'custom_fn': <function custom_fn at 0x0000029D776D440>, 'custom_fn>custom_fn': <function custom_fn at 0x0000029D776D440>, 'custom_fn>my_loss_fn': <function my_loss_fn at 0x0000029DA6E2C00>, 'my_loss_fn': <function my_loss_fn at 0x0000029DA6E2C00>}

This serves as a workaround for loading a model saved in 3.6 with a custom loss in versions >= 3.7. If that approach is acceptable, the documentation should be updated accordingly.

Alternatively, this PR resolves the issue until Keras 3.6 falls out of use.

Might fix #20806

@codecov-commenter commented Jan 28, 2025

Codecov Report

Attention: Patch coverage is 0% with 3 lines in your changes missing coverage. Please review.

Project coverage is 76.50%. Comparing base (9c8da1f) to head (cb987ec).

Files with missing lines Patch % Lines
keras/src/saving/serialization_lib.py 0.00% 3 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (9c8da1f) and HEAD (cb987ec).

HEAD has 2 uploads less than BASE
Flag BASE (9c8da1f) HEAD (cb987ec)
keras 5 4
keras-torch 1 0
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #20820      +/-   ##
==========================================
- Coverage   82.01%   76.50%   -5.51%     
==========================================
  Files         559      559              
  Lines       52291    52294       +3     
  Branches     8084     8085       +1     
==========================================
- Hits        42884    40009    -2875     
- Misses       7431    10353    +2922     
+ Partials     1976     1932      -44     
Flag Coverage Δ
keras 76.41% <0.00%> (-5.41%) ⬇️
keras-jax 64.26% <0.00%> (-0.01%) ⬇️
keras-numpy 58.99% <0.00%> (-0.01%) ⬇️
keras-openvino 29.83% <0.00%> (-0.01%) ⬇️
keras-tensorflow 64.80% <0.00%> (-0.01%) ⬇️
keras-torch ?

Flags with carried forward coverage won't be shown.


@fchollet
Collaborator

@hertschuh, in light of recent vulnerabilities with custom object loading, could you please review this PR? Thank you.

@hertschuh
Collaborator

@Surya2k1

Thank you for the PR. I believe it does indeed fix #20806 by reverting part of what was done in #20406.

However, the approach is too broad, which is why I wanted to remove it: for instance, when reloading elu you might select gelu or relu by accident.

I'll take over from here and keep you updated.

@hertschuh
Collaborator

Closing in favor of #20824

hertschuh closed this Jan 29, 2025

Successfully merging this pull request may close these issues.

Error deserializing saved model after Keras bump 3.6 -> 3.7
5 participants