System information

OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow version and how it was installed (source or binary): 2.10.0 binary (with pip)
TensorFlow-Addons version and how it was installed (source or binary): 0.19.0 binary (with pip)
Python version: 3.9.7
Is GPU used? (yes/no): no
Describe the bug

When I use ExponentialCyclicalLearningRate and fit my model with a TensorBoard callback, I get the following error:

TypeError: Cannot convert 1.0 to EagerTensor of dtype int64

After a bit of debugging, I found that self.scale_fn(mode_step) fails internally when it tries to compute self.gamma ** x, because x (mode_step) is of type int64. I saw a similar issue in #2593 with a fix that was supposedly about to be merged, but since I'm using the latest version, I guess the fix was never merged.
The failing code is here:

addons/tensorflow_addons/optimizers/cyclical_learning_rate.py
Lines 86 to 102 in b2dafcf

Code to reproduce the issue

Same as #2593

Potential Fix

Change self.scale_fn(mode_step) to self.scale_fn(step_as_dtype), since step_as_dtype is of type float32. That makes this specific line work; I just don't know whether it could break anything else that depends on mode_step.