I did an initial round of cleanup in #882, but there's a lot of unwanted code that should be purged, and most of the handling should be forwarded to Lux.
- GPU support: currently there is an `adapt` call that copies anything not already on the GPU over to the GPU on every call to the function. IMO this should be removed entirely, and if the user calls a model that is only partially on the GPU it should be an error (similar to Lux). See the sketch after this list.
  - We can preserve the current behavior for scalars.
- `Phi` / `ODEPhi` need to be rewritten as Lux layers; that unblocks the current shortcomings with nested AD (a rough sketch follows the list).
- Annotate the closures with `@closure` to avoid boxing (example below the list).
- Move the Bayesian solvers into an extension (or a subpackage, `BayesianNeuralPDE`)? I am pretty sure the number of users for those is quite small, but their dependencies add a significant load time. A possible skeleton is sketched below the list.
- Reproducibility: all models should take in an `rng` instead of relying on the global RNG (see the snippet below the list).
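To make the GPU expectation concrete, here is a minimal sketch of the intended workflow, assuming Lux's `gpu_device` device-transfer API; the chain and arrays are placeholders, not NeuralPDE's actual internals:

```julia
using Lux, Random
# using LuxCUDA  # load a GPU backend; gpu_device() falls back to CPU otherwise

chain = Chain(Dense(1 => 16, tanh), Dense(16 => 1))  # stand-in for the NNODE chain
ps, st = Lux.setup(Random.default_rng(), chain)

dev = gpu_device()
ps, st = dev(ps), dev(st)   # the user moves parameters/states explicitly...
u0 = dev([1.0f0])           # ...and the initial condition

# With everything already on one device there is no per-call `adapt`;
# a partially-moved model should surface as an error instead of a silent copy.
```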
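A rough sketch of what the `ODEPhi` rewrite could look like as a Lux wrapper layer, assuming Lux ≥ 1 and the usual `u0 + (t - t0) * NN(t)` trial form; names and fields here are illustrative, not the final design:

```julia
using Lux, Random

struct ODEPhiLayer{M <: AbstractLuxLayer, U, T} <: AbstractLuxWrapperLayer{:model}
    model::M
    u0::U
    t0::T
end

# Trial solution u(t) = u0 + (t - t0) * NN(t); parameters/states are owned by
# the wrapped chain, so Lux's (nested) AD machinery sees a plain Lux layer.
function (phi::ODEPhiLayer)(t, ps, st)
    y, st = phi.model(t, ps, st)
    return phi.u0 .+ (t .- phi.t0) .* y, st
end

chain = Chain(Dense(1 => 16, tanh), Dense(16 => 1))
phi = ODEPhiLayer(chain, [1.0f0], 0.0f0)
ps, st = Lux.setup(Random.default_rng(), phi)  # params come from the wrapped chain
u, _ = phi(reshape([0.5f0], 1, 1), ps, st)
```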
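For the boxing point, a small illustration of the pattern using `@closure` from FastClosures.jl; `make_loss` is a made-up example, not code from the package:

```julia
using FastClosures

function make_loss(data)
    total = 0.0
    for d in data
        total += d          # `total` is reassigned, so a naive closure over it gets boxed
    end
    # @closure captures `total` through a `let` block, avoiding the Core.Box
    # and the type instability that comes with it.
    return @closure θ -> sum(abs2, θ) + total
end

loss = make_loss([1.0, 2.0, 3.0])
loss([0.1, 0.2])
```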
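For the Bayesian split, the package-extension route would look roughly like this; the module name, file layout, and exact dependencies are assumptions for illustration:

```julia
# ext/NeuralPDEBayesianExt.jl — registered under [weakdeps]/[extensions] in
# Project.toml so these dependencies only load when the user imports them.
module NeuralPDEBayesianExt

using NeuralPDE
using AdvancedHMC, MCMCChains  # heavy MCMC deps stay out of the base load path

# BNNODE, ahmc_bayesian_pinn_ode, etc. would have their implementations
# (or at least their solve methods) attached here.

end
```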
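And for reproducibility, one way the calling convention could look is simply threading an explicit `AbstractRNG` through setup (StableRNGs is only one option):

```julia
using Lux, Random, StableRNGs

chain = Chain(Dense(1 => 16, tanh), Dense(16 => 1))

rng = StableRNG(1234)            # caller-controlled, reproducible
ps, st = Lux.setup(rng, chain)   # instead of implicitly hitting the global RNG
```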
P.S. Just because I am opening this issue doesn't mean I am taking it upon myself to do all of this 😓
For 1 (GPU support for NNODE): should users provide the model, initial conditions, and parameters as GPU arrays in order to not error? Also, I initially thought GPU already works with NNODE and wanted to confirm it. I was working on #866, which implemented a custom broadcast; is that still needed?