Inconsistent results for fluxes across different platforms #22
I added the data points to my previous test to trigger this bug in a new branch, testing. You can see the workflow results here.
Any ideas on how to approach this, @timlichtenberg, @nichollsh? I would be curious to see which result you get on your computers.
I wonder if this is related to the inconsistencies that Emma found between using ...
It is probably related, as deviations increase with time. The problem here is that the exact same command gives different results, which makes it non-reproducible.
This is the result I get on my desktop computer (Ubuntu 20.04.6 LTS, Linux 5.4.0, Python 3.12.5, Scipy 1.14.1, Numpy 2.1.1)
I get the same result as this ^ on the AOPP cluster (same Ubuntu version but different hardware). On the Kapteyn cluster I get the same values as you, which is good.
Which is also different from the values you gave above. Is the difference coming from the different platforms, or maybe from the Python or SciPy versions?
Wow, this is dreadful... The workflow runs automatically on different Python versions, but I need to look into the exact version numbers of the dependencies.
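For reference, a minimal snippet to record the exact environment on each machine (nothing MORS-specific, just standard-library and package version queries):

```python
# Print the platform and dependency versions relevant to this comparison.
import platform
import sys

import numpy
import scipy

print("OS:     ", platform.platform())
print("Python: ", sys.version.split()[0])
print("NumPy:  ", numpy.__version__)
print("SciPy:  ", scipy.__version__)
```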
This sounds to me like it could stem from the interpolation of the original tracks, although I haven't tested that specifically. Looking at MORS, the way it interpolates is a little odd: it seems to use a bespoke method rather than the more advanced routines in SciPy. I can imagine that this might have some issues in it.
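One way to probe this hypothesis is to cross-check a bespoke interpolation against a standard SciPy routine on the same grid. A minimal sketch, where the track arrays, the quantity, and the linear-interpolation choice are placeholders rather than MORS's actual internals:

```python
import numpy as np
from scipy.interpolate import interp1d

def bespoke_linear(x, xp, fp):
    """Hand-rolled linear interpolation, standing in for the routine under suspicion."""
    i = np.searchsorted(xp, x) - 1
    i = np.clip(i, 0, len(xp) - 2)
    t = (x - xp[i]) / (xp[i + 1] - xp[i])
    return fp[i] + t * (fp[i + 1] - fp[i])

# Hypothetical track data: ages in Myr and some quantity (e.g. Lx) on a coarse grid.
ages = np.logspace(0, 4, 200)
lx = 1e30 * ages**-1.5

query = np.logspace(0.5, 3.5, 1000)
ours = bespoke_linear(query, ages, lx)
ref = interp1d(ages, lx, kind="linear")(query)

# If the two agree to machine precision, the interpolation itself is unlikely
# to be the source of the platform-dependent differences.
print(np.max(np.abs(ours - ref) / ref))
```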
Is this the function responsible? (Line 764 in 0196097)
Can we add a test for this to start with?
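A possible starting point is a regression test that pins the computed track values against stored reference numbers with an explicit tolerance, so any platform drift shows up in CI. A minimal pytest sketch, in which the `mors.Star` constructor arguments, the `Value` call signature, and the reference number are all assumptions to be replaced with real values:

```python
# test_reproducibility.py -- sketch of a cross-platform regression test.
import pytest

mors = pytest.importorskip("mors")

# Placeholder reference value: generated once on a chosen reference machine
# and committed alongside the test.
REFERENCE_LX_AT_1GYR = 1.234e28   # hypothetical number, to be replaced

def test_star_track_is_reproducible():
    star = mors.Star(Mstar=1.0, Omega=10.0)   # assumed constructor signature
    value = star.Value(1000.0, "Lx")          # assumed Value(age_Myr, quantity) call
    # rel=1e-10 would catch the percent-level platform differences reported here
    # while tolerating genuine round-off.
    assert value == pytest.approx(REFERENCE_LX_AT_1GYR, rel=1e-10)
```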
I found that the computation of the time grid is responsible for the reproducibility issue. The default is to compute an adaptive time step with the Rosenbrock scheme (set in ...).
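For intuition on why an adaptive scheme can break reproducibility (this is a generic illustration, not the MORS/Rosenbrock code): floating-point results can legitimately differ between platforms and library versions because of summation order, SIMD, or libm differences, and a step-size controller that compares such a number against a tolerance turns a ~1e-16 discrepancy into a different time grid. A small self-contained example of the first part:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

s_pairwise = float(np.sum(x))   # NumPy's pairwise summation
s_naive = 0.0
for v in x:                     # strict left-to-right accumulation
    s_naive += v

# Same data, different rounding: the difference is typically non-zero.
print(s_pairwise - s_naive)
```

A fixed time grid (for example a log-spaced array over the age range) sidesteps this entirely, since no accept/reject decision depends on the floating-point value of an error estimate.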
Just putting the call sequence here when creating a star object, for future debugging.
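In case it helps to regenerate such a trace, here is a hedged sketch using the standard-library trace module to record which functions get called while the star object is constructed (the `mors.Star` call and its arguments are placeholders):

```python
import trace

import mors  # assumed import name

tracer = trace.Trace(count=False, trace=False, countcallers=True)
tracer.runfunc(mors.Star, Mstar=1.0, Omega=10.0)   # assumed constructor signature

# Writes the observed caller -> callee relationships to stdout.
tracer.results().write_results(show_missing=False, summary=True)
```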
Cool, nice detective work! Just a thought: if rounding errors are the problem, have you considered using exact representations, e.g. the decimal module?
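For reference, this is what that suggestion looks like with Python's built-in decimal module. It gives exact, platform-independent decimal arithmetic at the cost of speed, so it is probably only practical for isolated pieces such as building the time grid; the numbers below are purely illustrative:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # fixed, platform-independent precision

# Summing 0.1 ten times: exact with Decimal, not with binary floats.
t_dec = sum(Decimal("0.1") for _ in range(10))
t_flt = sum(0.1 for _ in range(10))

print(t_dec)   # 1.0
print(t_flt)   # 0.9999999999999999
```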
Will not investigate further, since fixed time stepping is preferred anyway. Will come back to this only if efficiency becomes an issue.
Closed by #23
When generating a star object with the same input, the computed track data differ by a few percent between platforms.
Differences are more pronounced when:
This is, for instance, the result I got on my Mac:
And this is the result I got on the Kapteyn cluster and on the GitHub CI:
The `.Value` method does an interpolation from track data. But doing the interpolation myself gives the same result. So it is really at the stage of the star object creation that things differ.
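A hedged sketch of the kind of check described above, separating the two stages: dump the raw track arrays right after the star object is created and compare those across platforms before even calling `.Value` (the `mors.Star` arguments, the `star.Tracks` attribute, and the key names are assumptions about the API, used only for illustration):

```python
import numpy as np

import mors  # assumed import name

star = mors.Star(Mstar=1.0, Omega=10.0)   # assumed constructor signature

# Stage 1: save the raw track data produced at star creation.
# `star.Tracks["Age"]` and `star.Tracks["Lx"]` are placeholders for wherever
# the package actually stores the evolved track.
np.savez(
    "tracks_this_platform.npz",
    Age=np.asarray(star.Tracks["Age"]),
    Lx=np.asarray(star.Tracks["Lx"]),
)

# Stage 2: interpolate a value the same way on every platform.
print(star.Value(1000.0, "Lx"))   # assumed Value(age_Myr, quantity) call

# Comparing the .npz files from two machines (np.load + np.allclose) then tells
# whether the percent-level differences already exist in the tracks themselves,
# as suspected, or only appear at the interpolation stage.
```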