Astronet Walkthrough Prediction - Restoring From Checkpoint Failed #7
Comments
I figured out why the prediction is wrong: going back a level in the directory didn't actually find the right files, so no trained model was being restored. Now I'm sure the correct files are indeed at the originally specified directory. If so, why would there be problems restoring?
The original error isn't due to the model not being in the given model directory; as you said in your second post, the model is in the right place and the model_dir argument is correct. The issue is a data type mismatch between tf.float32 and tf.float64. In astronet/predict.py you just need to force the features to be floats (np.float32) instead of doubles:

```
115: global_view = preprocess.global_view(time, flux, FLAGS.period).astype(np.float32)
...
120: local_view = preprocess.local_view(time, flux, FLAGS.period, FLAGS.duration).astype(np.float32)
```
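For anyone hitting the same thing, here is a minimal sketch of why the mismatch shows up; this is not the actual Astronet code, and `make_view` is a hypothetical stand-in for `preprocess.global_view` / `preprocess.local_view`. NumPy produces float64 ("double") arrays by default, while the model expects float32 inputs, and the `.astype(np.float32)` cast is what reconciles the two.

```python
import numpy as np

def make_view(time, flux, num_bins=201):
    """Hypothetical stand-in for the preprocessing functions.

    Like the real ones, it returns a NumPy array, and NumPy arithmetic
    on Python floats defaults to float64 ("double").
    """
    return np.interp(np.linspace(time.min(), time.max(), num_bins), time, flux)

time = np.linspace(0.0, 10.0, 1000)
flux = np.sin(time)

view = make_view(time, flux)
print(view.dtype)  # float64 -- does not match a model built with tf.float32 inputs

view = make_view(time, flux).astype(np.float32)
print(view.dtype)  # float32 -- matches the expected input dtype
```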
ritwik12 added a commit to ritwik12/exoplanet-ml that referenced this issue on Feb 24, 2020:
As per @caitlynlee in google-research#7, making predictions gives errors for various tensors such as `expected dtype double does not equal original dtype float`. Forcing the tensors to be floats instead of doubles in `astronet/predict.py` solves the issue.

```
115: global_view = preprocess.global_view(time, flux, FLAGS.period).astype(np.float32)
...
120: local_view = preprocess.local_view(time, flux, FLAGS.period, FLAGS.duration).astype(np.float32)
```
When following the instructions to use a trained Astronet model to generate predictions, the following error occurs:

As the error suggests (buried in there), the directory given for the model isn't lining up with where things are actually stored. I worked around it by changing the path:

Following the demo exactly, the prediction is slightly different on every run and hovers around 0.5 (three trials: 0.5015401002407824, 0.49691700167065095, 0.49994740353818445) instead of the expected 0.9480018. I assume that shouldn't be the case, first because it changes every run and second because it's off from the demo's result by roughly 45 percentage points. Is this an issue with my workaround for the first problem (changing the path of the model), or is this an entirely different issue?
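As a diagnostic note: predictions that hover around 0.5 and change on every run are consistent with no checkpoint actually being restored, so the network runs from fresh random initialization. Below is a minimal sketch for checking what a given model directory resolves to; the path is a placeholder, not taken from the walkthrough.

```python
import tensorflow as tf

# Placeholder path; substitute the directory passed via --model_dir.
model_dir = "/path/to/astronet/model_dir"

# tf.train.latest_checkpoint returns the prefix of the newest checkpoint in
# the directory (a path ending in model.ckpt-<step>), or None if no
# checkpoint files are present -- in which case the model would start from
# random weights instead of the trained parameters.
ckpt = tf.train.latest_checkpoint(model_dir)
print("Latest checkpoint:", ckpt)
```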