
Making the package more maintainable #900

Open
1 of 6 tasks
avik-pal opened this issue Oct 16, 2024 · 3 comments

@avik-pal
Member

avik-pal commented Oct 16, 2024

I did an initial round of cleanup in #882, but there's a lot of unwanted code that should be purged, and most of the handling should be forwarded to Lux.

  • GPU Support: Currently there is an `adapt` call that copies anything not on the GPU over to the GPU on every call to the function. IMO this should be removed completely; if a user calls a model that is only partially on the GPU, it should be an error (similar to Lux)
    • We can preserve the current behavior for scalars
  • Phi / ODEPhi need to be rewritten as Lux layers; that unblocks all the current shortcomings with nested AD
  • Annotate the closures with `@closure` (from FastClosures.jl) to avoid boxing
  • Remove the logging subpackage with an extension: refactor: remove NeuralPDELogging in-favor of extension #901
  • Move the Bayesian solvers into an extension (or a subpackage, BayesianNeuralPDE)? I am pretty sure the number of users for those is quite small, but those packages add a significant load time
  • Reproducibility: all models should take in an rng instead of relying on the global RNG
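
For the first item, a minimal sketch of the proposed behavior: error when inputs live on different devices instead of silently adapting them. The `devicetype` helper here is an illustrative stand-in for a real device query (e.g. from MLDataDevices.jl), not NeuralPDE's actual API; only the CPU method is defined so the sketch runs without a GPU.

```julia
# Hypothetical sketch: raise an error on mixed devices instead of
# silently adapting everything to the GPU on every call.
devicetype(::Array) = :cpu
devicetype(::Number) = :scalar  # scalars stay allowed anywhere
# devicetype(::CuArray) = :gpu  # would live in a GPU extension

function check_same_device(xs...)
    # collect the device of every non-scalar input
    devs = unique(d for d in map(devicetype, xs) if d !== :scalar)
    length(devs) <= 1 ||
        error("Inputs live on different devices ($devs); move them to one device.")
    return isempty(devs) ? :scalar : first(devs)
end

check_same_device(rand(2), rand(3), 1.0)  # :cpu
```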
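
The boxing problem behind the `@closure` item can be shown in plain Julia: a closure over a variable that is reassigned after capture is stored in a `Core.Box`, which defeats type inference. `FastClosures.@closure` essentially generates the `let`-capture shown manually below.

```julia
# Closure over a variable reassigned after capture: `x` gets boxed.
function boxed()
    x = 1
    f = () -> x
    x = 2   # reassignment after capture forces a Core.Box
    return f
end

# Manual equivalent of what @closure generates: capture a fresh binding
# via `let`, so the closure field is a plain Int.
function unboxed()
    x = 1
    f = let x = x
        () -> x
    end
    return f
end
```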
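
For the reproducibility item, a hedged sketch of the rng-threading pattern; `init_params` is a hypothetical name for illustration, not NeuralPDE's actual API.

```julia
using Random

# Illustrative helper: takes an explicit rng instead of touching the
# global RNG, so results are reproducible given a seeded rng.
init_params(rng::AbstractRNG, dims...) = randn(rng, Float32, dims...)

p1 = init_params(Xoshiro(0), 3)
p2 = init_params(Xoshiro(0), 3)
p1 == p2  # same seed, same parameters
```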

P.S. Just because I am opening this issue doesn't mean I am taking it upon me to do this 😓

@ChrisRackauckas
Member

Agreed with all of these.

@sathvikbhagavan
Member

I can help out a bit.

For 1 - GPU support for NNODE: should users provide the model, initial conditions, and parameters as GPU arrays in order to not error? Also, I initially thought GPU already works with NNODE and wanted to confirm it. I was working on #866, which implemented a custom broadcast; is that still needed?

For 2 - Will that fix using autodiff with NNODE?

@avik-pal
Member Author

  1. Correct. Does GPU not already work with NNODE? I thought I fixed it
  2. Yes, that's the main goal
