
Inconsistent results for fluxes across different platforms. #22

Closed
lsoucasse opened this issue Sep 5, 2024 · 14 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@lsoucasse (Member) commented Sep 5, 2024

When generating a star object with the same input, the computed track data differ by a few percent between platforms.

Differences are more noticeable when:

  • looking at late times (5000-10000 Myr)
  • looking at quantities like Leuv and Lx
  • looking at round input values (i.e. Mstar = 1.0 instead of 0.999)

For instance, this is the result I got on my Mac:

>>> star = mors.Star(Mstar=0.9999, Omega=1.0001)
>>> star.Value(1e4, 'Leuv')
3.181539021554143e+28
>>> star = mors.Star(Mstar=1.0, Omega=1.0)
>>> star.Value(1e4, 'Leuv')
3.1992107463029277e+28

And this is the result I got on the Kapteyn cluster and on the GitHub CI:

>>> star = mors.Star(Mstar=0.9999, Omega=1.0001)
>>> star.Value(1e4, 'Leuv')
np.float64(3.1816559317522773e+28)
>>> star = mors.Star(Mstar=1.0, Omega=1.0)
>>> star.Value(1e4, 'Leuv')
np.float64(3.2484555118015116e+28)

.Value interpolates from the track data, but doing the interpolation myself gives the same result, so it is really at the stage of star object creation that things diverge.

@lsoucasse lsoucasse linked a pull request Sep 5, 2024 that will close this issue
@lsoucasse lsoucasse changed the title from "Non consistent results for xuv fluxes accross different platforms." to "Non consistent results for fluxes accross different platforms." Sep 5, 2024
@lsoucasse (Member Author) commented Sep 5, 2024

I extended my previous test with the data points that trigger this bug, on a new branch, testing. You can see the workflow results here.

@lsoucasse lsoucasse added the bug (Something isn't working) and help wanted (Extra attention is needed) labels Sep 5, 2024
@lsoucasse lsoucasse removed a link to a pull request Sep 5, 2024
@lsoucasse (Member Author)

Any ideas on how to approach this, @timlichtenberg, @nichollsh? I would be curious to see which results you get on your computers.

@nichollsh (Contributor)

I wonder if this is related to the inconsistencies that Emma found between using Star() and the .Lxuv() methods of getting different properties.

@lsoucasse (Member Author)

> I wonder if this is related to the inconsistencies that Emma found between using Star() and the .Lxuv() methods of getting different properties.

It is probably related, as deviations increase with time. The problem here is that the exact same command gives different results, which makes it not reproducible.

@nichollsh (Contributor) commented Sep 5, 2024

This is the result I get on my desktop computer (Ubuntu 20.04.6 LTS, Linux 5.4.0, Python 3.12.5, SciPy 1.14.1, NumPy 2.1.1):

>>> star = mors.Star(Mstar=0.9999, Omega=1.0001)
>>> star.Value(1e4, 'Leuv')
np.float64(3.184700102777826e+28)
>>> star = mors.Star(Mstar=1.0, Omega=1.0)
>>> star.Value(1e4, 'Leuv')
np.float64(3.1830231535644867e+28)

I get the same result as this ^ on the AOPP cluster (same Ubuntu version but different hardware).

On the Kapteyn cluster I get the same values as you, which is good.

@nichollsh (Contributor) commented Sep 5, 2024

Which is also different from the values you gave above. Is the difference originating from the different platforms, or perhaps from the Python or SciPy versions?
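To rule versions in or out, each reported value could be accompanied by a small environment summary. A sketch using only the standard library; the package list below is just the suspects mentioned in this thread:

```python
import platform
from importlib import metadata

def env_report(packages=("numpy", "scipy")):
    """Collect platform, Python, and package versions for a reproducibility report."""
    info = {
        "platform": platform.platform(),
        "python": platform.python_version(),
    }
    for pkg in packages:
        try:
            info[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            info[pkg] = "not installed"
    return info

print(env_report())
```

Pasting this dictionary next to each `star.Value` output would let us line up the diverging numbers against the exact dependency versions.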

@lsoucasse (Member Author)

> Which is also different from the values you gave above.

Wow, this is dreadful... The workflow runs automatically on different Python versions, but I need to look into the exact version numbers of the dependencies.

@nichollsh (Contributor) commented Sep 5, 2024

This sounds to me like it could stem from the interpolation of the original tracks, although I haven't tested that specifically. Looking at MORS, the way it interpolates is a little odd: it seems to use a bespoke method rather than the more advanced routines in SciPy. I can imagine that this might have some issues in it.

@stefsmeets (Contributor)

Is this the function responsible?

def _Interpolate1D(Xarray,Yarray,X):

Can we add a test for this to start with?
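A first test could compare the bespoke routine against an independent oracle. The sketch below assumes `_Interpolate1D` performs plain linear interpolation; `linear_interp_reference` is a hypothetical stand-in written here, not the MORS implementation:

```python
def linear_interp_reference(xs, ys, x):
    """Independent pure-Python linear interpolation, usable as a test oracle."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            frac = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + frac * (ys[i + 1] - ys[i])
    raise ValueError("x is outside the grid")

def test_interpolate1d():
    xs = [0.0, 1.0, 2.0, 4.0]
    ys = [0.0, 10.0, 40.0, 40.0]
    # Interior point, grid node, and a flat segment:
    assert linear_interp_reference(xs, ys, 0.5) == 5.0
    assert linear_interp_reference(xs, ys, 1.0) == 10.0
    assert linear_interp_reference(xs, ys, 3.0) == 40.0

test_interpolate1d()
```

In an actual test, the same inputs would be fed to `_Interpolate1D(Xarray, Yarray, X)` and the results compared against this reference within a tolerance.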

@lsoucasse (Member Author)

I found that the computation of the time grid is responsible for the reproducibility issue. The default is to compute an adaptive time step with the Rosenbrock scheme (set in parameters.py), which seems to be sensitive to round-off errors as time evolves. Switching to fixed-time-step methods (forward Euler, RK4) gives reproducible results between platforms, and I will make this the default option. Fixed time stepping is more computationally expensive (20000 points instead of 300 with Rosenbrock), but it seems a decent option since we only consider one star system. Still, it will remain possible to switch back to the adaptive scheme or to set more restrictive options (e.g. a maximum time step) to make it more robust.
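For reference, a fixed-step integrator has no step-size-control branches, so every platform walks exactly the same time grid. A minimal sketch of the RK4 scheme named above (a toy illustration, not the MORS implementation):

```python
import math

def rk4_fixed(f, y0, t0, t1, n):
    """Classical 4th-order Runge-Kutta with n fixed steps.
    The time grid depends only on (t0, t1, n), never on intermediate
    values, which is what makes the result reproducible across platforms."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy problem dy/dt = -y with known solution exp(-t):
approx = rk4_fixed(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
```

An adaptive scheme, by contrast, compares an error estimate against a tolerance at each step; a round-off-level difference in that estimate can flip the accept/reject branch and put two platforms on different time grids from that point on.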

@lsoucasse (Member Author) commented Sep 30, 2024

Just recording here the call sequence when creating a star object, for future debugging:

  • star.py: _LoadEvoTracks calls RE.EvolveRotation
  • rotevo.py: EvolveRotation calls _CreateTracks, which calls phys.ExtendedQuantities
  • physicalmodel.py: ExtendedQuantities calls RotationQuantities, which calls StarEvo.Rstar
  • stellarevo.py: Rstar calls Value, which calls _ValueSingle, which calls _Interpolate1D
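A chain like the one above can also be confirmed empirically with a trace hook instead of reading the source. A toy sketch using the standard library (tracing hypothetical toy functions, not the MORS modules):

```python
import sys

calls = []

def call_tracer(frame, event, arg):
    # Record the name of every Python function entered while tracing is on.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return call_tracer

def inner():
    return 1

def outer():
    return inner()

sys.settrace(call_tracer)
outer()
sys.settrace(None)
print(calls)  # caller appears before callee, e.g. ['outer', 'inner']
```

Pointing the same hook at a `mors.Star(...)` call (and filtering `frame.f_code.co_filename` to the mors package) would print the real chain for future debugging.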

@stefsmeets (Contributor)

Cool, nice detective work! Just a thought: if rounding errors are the problem, have you considered using exact representations, e.g. the decimal package?

https://docs.python.org/3/library/decimal.html

@lsoucasse (Member Author)

> Cool, nice detective work! Just a thought: if rounding errors are the problem, have you considered using exact representations, e.g. the decimal package?
>
> https://docs.python.org/3/library/decimal.html

I will not investigate this further, since fixed time stepping is preferred anyway. I will come back to it only if efficiency becomes an issue.

@nichollsh (Contributor)

Closed by #23

@github-project-automation github-project-automation bot moved this from In Progress to Done in PROTEUS Development Roadmap Sep 30, 2024
Projects
Status: Done