Add n-dimensional support to private rotation conversion functions #376
Conversation
Thanks for this @argerlt! It is a nice generalization of existing functionality.
The conversions are private, not meant to be used by a user. But, with you mentioning them in #374 and opening this PR, I think it is time to make most of them public as methods.
Thanks for making these generalizations! Some comments on function naming and docstrings, mostly. Please say so if anything is unclear or you disagree with something.
qu[1] = -s * np.cos(delta)
qu[2] = -s * np.sin(delta)
qu[3] = -c * np.sin(sigma)
qu[0] = np.array(c * np.cos(sigma), dtype=np.float64)
Is making these arrays necessary?
Yes, or at least something similar is.
Without these arrays, I was getting a problem where the right hand sides of lines 647-650 would produce float32 values that would then be inserted into a float64 array, resulting in incorrect values. This was a jit-only problem, because the .py_func versions of the functions execute correctly (float64 in, float64 out).
I tried several variations, which all gave the incorrect results:
qu[3] = -c * np.sin(sigma)
qu[3] = -c * np.sin(sigma, dtype = np.float64)
qu[3] = np.multiply(c, np.sin(sigma))
etc.
This FEELS weird and wrong, but it was the only method that worked without adding extra variables, which seemed to slow down the computation.
If you or anyone else can think of a more elegant solution, feel free to change it.
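For reference, here is a minimal, self-contained sketch of the pattern under discussion: a trimmed-down eu2qu-style body with the np.array(..., dtype=np.float64) wrapping on the qu[0] line, mirroring the quoted diff above (this is a sketch, not the actual _conversions.py function).

```python
import numpy as np
from numba import njit


@njit
def eu2qu_sketch(eu):
    # Trimmed-down Euler-to-quaternion body; eu is a float64 array of three Euler angles.
    sigma = 0.5 * (eu[0] + eu[2])
    delta = 0.5 * (eu[0] - eu[2])
    c = np.cos(eu[1] / 2)
    s = np.sin(eu[1] / 2)
    qu = np.zeros(4, dtype=np.float64)
    qu[1] = -s * np.cos(delta)
    qu[2] = -s * np.sin(delta)
    qu[3] = -c * np.sin(sigma)
    # The wrapping below is the workaround discussed in this thread: without it,
    # the right-hand side could end up as float32 under jit on the affected setup.
    qu[0] = np.array(c * np.cos(sigma), dtype=np.float64)
    if qu[0] < 0:
        qu = -qu
    return qu
```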
Can you do qu[0] = float(c * np.cos(sigma))? If not, try qu[0] = np.float64(c * np.cos(sigma)).
np.float64 passes correctly
Actually, never mind, np.float64 doesn't work either; I was loading an old pycache file. I think np.array might be the easiest fix for this.
OK, I'll have a look myself.
Huh. Okay, this has me totally confused.
For some reason, every variation of np.float64, float, etc. breaks. However, it ONLY breaks if the value being inserted into qu is less than zero.
I was going to suggest we just approve this and move on, but the underlying implication here is that SOMEWHERE things are getting cast incorrectly, possibly from float64 to float32 and back, which would translate to a big loss in accuracy.
I also tried explicitly using numpy multiply, divide, etc, like this:
sigma = np.multiply(0.5, np.add(eu[0], eu[2]))
delta = np.multiply(0.5, np.subtract(eu[0], eu[2]))
c = np.cos(np.divide(eu[1], 2))
s = np.sin(np.divide(eu[1], 2))
nc = np.multiply(-1, c)
ns = np.multiply(-1, s)
qu = np.zeros(4, dtype=np.float64)
qu[0] = np.multiply(c, np.cos(sigma))
qu[1] = np.multiply(ns, np.cos(delta))
qu[2] = np.multiply(ns, np.sin(delta))
qu[3] = np.multiply(nc, np.sin(sigma))
if qu[0] < 0:
qu = -qu
but that failed as well.
I agree that it's OK to leave as is and move on.
Out of curiosity, however, which OS and Python version are you testing with? All pytest -k test_conversions tests pass without any changes to the original code on my Ubuntu 22.04 with Python 3.9. Are you running other tests than the ones in test_conversions.py? If so, we should add similar tests to catch the behaviour you're describing.
I'm running on a Windows 10 computer with an AMD CPU. Specifically, when testing, I was running pytest orix/tests/quaternion/test_conversions.py from the command prompt.
Since you brought this up, I tried it again, same code, same computer, but using Windows Subsystem for Linux (WSL) to run pytest. It passed without needing the np.array() additions! So I assume this is a Windows-specific problem with numba and jit, which, honestly, makes it even weirder.
Thank you for clarifying. I've encountered surprising effects with Numba in the past, similar to your experience here. This is just a reminder that we should not trust Numba blindly!
@argerlt, do you intend to use these functions outside of orix? If so, we should include them in the API reference and find a convenient way to import them.
Still need to update function names and docstrings, and fix test_conversions.py
Co-authored-by: Håkon Wiik Ånes <[email protected]>
I didn't plan to. I personally cannot think of any situation where I would want to use these conversions where I wouldn't also just use the orix Quaternion class. Again, using scipy as a point of reference, I often have to go from active Euler angles to Rodrigues-Frank vectors. When doing so, I never access those conversions directly. Instead, I just do:
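(The exact snippet is not preserved in this thread; the following is a hypothetical reconstruction based on the Quaternion.from_euler(euler).to_matrix() syntax echoed in the reply below.)

```python
import numpy as np
from orix.quaternion import Quaternion

euler = np.array([[0.5, 1.0, 0.3]])  # some Euler angles, shape (..., 3)
matrix = Quaternion.from_euler(euler).to_matrix()
```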
Which I think is conceptually simple and keeps the API clean.
Alright, assuming you have no further feedback on line 647:
I believe I have addressed all the required changes. Let me know if that isn't the case.
I agree that a syntax like Quaternion.from_euler(euler).to_matrix() etc. is sufficient.
This is shaping up rather nicely; thank you very much for addressing all my review comments! I've just got a few of them left.
Just as a heads-up: I prefer to resolve review comments myself, since this makes it easier to decide whether a change has been addressed.
The most recent push includes all the suggested formatting changes, plus a few I caught on reread. I also realized that if any of the _conversions.py functions were passed data that wasn't float64, they would fail. I added lines to the wrapper functions to convert inputs to float64, as well as test cases that pass float32 data to the checks, just to ensure this doesn't pop back up later. In reference to the eu2qu in _conversions.py, lines 657-671:
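A hypothetical sketch of the float32 regression-test idea mentioned above (this refers to the dtype guard in the wrappers, not to the elided eu2qu reference; the actual tests in test_conversions.py may differ in names and in which functions they call):

```python
import numpy as np
from orix.quaternion import Rotation


def test_from_euler_accepts_float32():
    # Float32 input should give the same quaternions as float64 input once the
    # wrapper functions cast their inputs to float64.
    eu64 = np.random.uniform(0, np.pi, (4, 5, 3))
    eu32 = eu64.astype(np.float32)
    q64 = Rotation.from_euler(eu64)
    q32 = Rotation.from_euler(eu32)
    assert np.allclose(q64.data, q32.data, atol=1e-6)
```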
I'm including the following entry for "Austin Gerlt":

{
    "name": "Austin Gerlt",
    "orcid": "0000-0002-2204-2055",
    "affiliation": "The Ohio State University"
},

The ORCID is here and your affiliation I found here. If you're happy with this, I'll push these changes (and some minor formatting) to your branch before merging.
@hakonanes Yup, works for me. Change what you need.
Sorry @argerlt, it seems I cannot push commits to your branch.
I do, but it might also be a problem with my repo being a fork of a fork: orix --> Mart2Aust_Hackathon --> argerlt/Mart2Aust_Hackathon. On my computer, I just have both repos set as upstreams, but I think GitHub might see it differently. I planned on changing that soon, once I have gathered together the important changes from Mart2Aust_Hackathon. I added you as a collaborator to my repo, though, which I think might fix the issue.
Thank you for contributing these nice generalizations, and for addressing all my review comments, @argerlt!
Merging when checks pass.
Could you briefly comment on https://github.com/pyxem/orix/pull/376/files#r951924756?
Description of the change
By default, Rotation objects can contain n-dimensional arrays of Rotation data. However, _conversions.py is written to only accept 2D arrays, since the functions use jit for acceleration. This adds wrappers to the jit functions to allow input and output of any dimensional array of Rotations.
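As a rough sketch of the wrapper idea, assuming a 2D eu2qu-style kernel (the function names here are hypothetical; the real kernels live in orix/quaternion/_conversions.py):

```python
import numpy as np
from numba import njit


@njit
def _eu2qu_2d(eu):
    # 2D jit kernel: Euler angle triplets of shape (n, 3) -> unit quaternions (n, 4).
    qu = np.zeros((eu.shape[0], 4), dtype=np.float64)
    for i in range(eu.shape[0]):
        sigma = 0.5 * (eu[i, 0] + eu[i, 2])
        delta = 0.5 * (eu[i, 0] - eu[i, 2])
        c = np.cos(eu[i, 1] / 2)
        s = np.sin(eu[i, 1] / 2)
        qu[i, 0] = c * np.cos(sigma)
        qu[i, 1] = -s * np.cos(delta)
        qu[i, 2] = -s * np.sin(delta)
        qu[i, 3] = -c * np.sin(sigma)
        if qu[i, 0] < 0:
            qu[i] = -qu[i]
    return qu


def eu2qu_nd(eu):
    # n-dimensional wrapper: cast to float64, flatten leading dimensions to 2D
    # for the jit kernel, then restore the original shape with a trailing 4.
    eu = np.asarray(eu, dtype=np.float64)
    leading_shape = eu.shape[:-1]
    return _eu2qu_2d(eu.reshape(-1, 3)).reshape(leading_shape + (4,))
```

Calling eu2qu_nd on Euler angles of shape (7, 5, 3), for example, would return quaternions of shape (7, 5, 4).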
Reasons for this:
This makes it possible to add to/from functions for Euler, Rodrigues, homochoric, cubochoric, and axis-angle representations to either Rotations or Quaternions without losing the n-dimensional object support.
All the to/from functions will be able to use the jit-accelerated conversion functions copied directly from the Rowenhorst paper.
Orix contains repeated to/from conversion functions, which adds some bloat and gives more room for errors. Wrapping these functions makes it easier to replace this repeated functionality with the jit-accelerated ones.
This update makes it easier to take care of the following issues:
Purpose for neo_eulerian classes #374
Fix inherited methods in the Orientation class #373
Using scipy's Rotation object #346
Making Orientations a child class of Rotations, not Misorientations #345 and
Accommodating symmetry (which applies to Orientations but not Rotations) in the inheritance structure #254
Progress of the PR
Minimal example of the bug fix or new feature
The test cases verify that the new functions produce identical results to the old 2D versions, but here is some code I used to verify that this new code can, in fact, accept n-dimensional arrays of Rotations and does not slow down execution by a meaningful amount.
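(The snippet itself is not preserved here; the following is a rough reconstruction of the kind of timing check described. The array sizes, shapes, and the use of Rotation.from_euler are assumptions, not the author's original code.)

```python
import time

import numpy as np
from orix.quaternion import Rotation

rng = np.random.default_rng(0)
eu_2d = rng.uniform(0, np.pi, (10_000_000, 3))  # flat 2D input
eu_nd = eu_2d.reshape(100, 1000, 100, 3)        # same data, "weird" n-dimensional shape

# "old way" in the output below timed the pre-PR 2D-only code path; these two
# calls correspond to the "new way" and "new way weird shape" lines.
for label, eu in [("new way", eu_2d), ("new way weird shape", eu_nd)]:
    t0 = time.perf_counter()
    Rotation.from_euler(eu)
    print(f"{label}:{time.perf_counter() - t0:.4f}")
```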
Here's the output I got:
old way:6.9345
new way:6.9778
new way weird shape:6.9356
For reviewers
The reviewer checklist items reference: imports in the corresponding __init__.py, the unreleased section in CHANGELOG.rst, and contributor credits in __credits__ in orix/__init__.py and in .zenodo.json.