N-ODE trajectories cut itself #927
Comments
What figure does this produce? I'm not sure what exactly you're trying to say here. Since it's an ODE the trajectories cannot cross, but of course since it's in 3D space you cannot look at the 1D plot and expect that to be monotonic. You have to look at the 3D plot in order to see if the trajectories cross.
@ChrisRackauckas That was actually something I was aiming to change. The library example creates one N-ODE and takes in the starting values for all samples at once (each row in yTrain is one time series for a different starting value), due to the defined loss calculation. But it would be more sensible for my goal (monotonic, non-intersecting functions between 0 and 1) to redefine the N-ODE such that the input consists of a single starting value. That way, the ODE is one-dimensional and won't cut itself, correct? I was wondering if that might be the solution. Right now, I am basically treating the samples as hidden state components at the starting time, but then I am plotting them in 1D, where they of course may intersect. Thanks for the quick help!
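The distinction above can be illustrated without any neural network: the components of a coupled multi-dimensional ODE, each plotted separately against time, can cross each other even though the full trajectory in state space never intersects itself. A minimal sketch (plain Python, forward Euler, a rotation field standing in for the learned dynamics):

```python
def euler(f, x0, t0, t1, n):
    """Integrate dx/dt = f(t, x) with forward Euler; x is a list of components."""
    dt = (t1 - t0) / n
    xs, x, t = [list(x0)], list(x0), t0
    for _ in range(n):
        x = [xi + dt * fi for xi, fi in zip(x, f(t, x))]
        t += dt
        xs.append(list(x))
    return xs

# A coupled 2D system (a rotation): its solution is (cos t, sin t).
# The 2D trajectory never intersects itself, yet the two component
# curves x1(t) and x2(t), drawn in the same 1D plot, do cross.
rot = lambda t, x: [-x[1], x[0]]
traj = euler(rot, [1.0, 0.0], 0.0, 3.2, 320)

# count sign changes of x1(t) - x2(t): each one is a crossing in the 1D plot
crossings = sum((a[0] - a[1]) * (b[0] - b[1]) < 0 for a, b in zip(traj, traj[1:]))
assert crossings >= 1
```

So the 1D plot crossing is expected for a batched hidden state, and only the scalar-per-sample formulation rules it out.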
That's one way to do it. Or just make the neural network in the neural ODE end with a positive-only transformation, so that
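The positive-only output idea can be sketched in a few lines: if the network's last layer is wrapped in a strictly positive function such as softplus, the learned derivative is always positive, so every solution is strictly increasing. This is a conceptual Python sketch with a hypothetical stand-in for the trained network body, not the actual DiffEqFlux model:

```python
import math

softplus = lambda v: math.log1p(math.exp(v))  # strictly positive for all inputs

# stand-in for the neural network body: any real-valued pre-activation
raw_net = lambda t, x: math.sin(5 * t) - 0.3 * x
# wrapping it in softplus forces dx/dt > 0 everywhere
dxdt = lambda t, x: softplus(raw_net(t, x))

# forward Euler: each step adds dt * (positive value), so x only grows
dt, x, xs = 0.01, 0.0, [0.0]
for k in range(200):
    x += dt * dxdt(k * dt, x)
    xs.append(x)

assert all(b > a for a, b in zip(xs, xs[1:]))  # strictly monotone solution
```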
Describe the bug 🐞
The trajectories of the N-ODE solution for different starting values cut each other.
Expected behavior
To my understanding, N-ODE trajectories for different starting values should never intersect. This is a property that, for example, Tac et al. (https://arxiv.org/abs/2110.03774) use in order to obtain monotonic function behaviour, and it stems from the uniqueness of ODE solutions. In my code, I enforce monotonic growth during the ODE solution procedure by using a strictly positive activation in the final layer of the neural network inside my N-ODE. The idea can e.g. be seen in Chen et al. (https://arxiv.org/abs/2309.13452). Then, to enforce a value range between 0 and 1, I plug the prediction values of the N-ODE (transSpace in the test section of the code) into a logistic sigmoid function, which is then fitted using a normal MSE loss. The library usage here is inspired by the first example in the DiffEqFlux.jl documentation (https://docs.sciml.ai/DiffEqFlux/dev/examples/neural_ode/).
The transSpace output of the N-ODE should therefore give me, at each time step, values that grow with the starting value, if the trajectories do not cut each other. Plugged into the sigmoid function, which is monotonic in its inputs, this should give non-intersecting prediction lines.
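The expected behaviour rests on two facts: scalar ODE trajectories with different initial values preserve their order at every time (uniqueness), and the sigmoid, being strictly increasing, preserves that order. A minimal sketch, with an arbitrary Lipschitz scalar field standing in for the trained network:

```python
import math

def euler_scalar(f, x0, t0, t1, n):
    """Integrate the scalar ODE dx/dt = f(t, x) with forward Euler."""
    dt = (t1 - t0) / n
    xs, x, t = [x0], x0, t0
    for _ in range(n):
        x += dt * f(t, x)
        t += dt
        xs.append(x)
    return xs

sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

# any smooth scalar field works; this one stands in for the neural network
f = lambda t, x: math.sin(3 * t) + 0.5 * math.cos(x)

lo = euler_scalar(f, 0.0, 0.0, 2.0, 400)  # smaller starting value
hi = euler_scalar(f, 0.5, 0.0, 2.0, 400)  # larger starting value

# uniqueness: the two trajectories keep their order at every time step...
assert all(a < b for a, b in zip(lo, hi))
# ...and the sigmoid preserves it, so the prediction lines cannot intersect
assert all(sigmoid(a) < sigmoid(b) for a, b in zip(lo, hi))
```

If such an ordering check fails on the actual transSpace output, the model is not behaving as a family of scalar ODE solutions (e.g. the samples are being mixed as components of one higher-dimensional state).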
Minimal Reproducible Example 👇
The code may need to run a few times in order to reproduce the problem, but n_hidden = 10 should trigger it frequently.
Environment (please complete the following information):
using Pkg; Pkg.status()
using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
versioninfo()