Add MPI_F_XXX global variables for Fortran interoperability #27
Conversation
extern MPI_F08_status* MPI_F08_STATUS_IGNORE;
extern MPI_F08_status* MPI_F08_STATUSES_IGNORE;
extern MPI_Fint* MPIX_F_UNWEIGHTED;
extern MPI_Fint* MPIX_F_WEIGHTS_EMPTY;
Even for MPI_F_STATUS_IGNORE and MPI_F08_STATUS_IGNORE, is there any confirmed use case?
Looking at the example of PETSc: it is a C library whose API directly accepts parameters such as MPI_Comm. So to support usage from a Fortran program, it needs MPI_Comm_f2c to accept a communicator from the Fortran side. But in doing so, it has to assume the Fortran program does not use mpi_f08. Is there a mechanism on the C side to tell whether it is being called from Fortran code that uses mpi.mod or mpi_f08.mod? Thus, I think the current picture of even those _f2c/_c2f APIs is broken. Rather than supporting and enlarging this mess, I think a sensible direction is to advise users not to use them. PETSc can work around this by hiding the direct MPI_Comm usage. It is more complex for some usages, but it will be more robust.
EDIT: revisiting the standard, I was mistaken. mpi_f08 needs to pass comm to C via comm%MPI_VAL, which is an integer that can be converted in C via MPI_Comm_f2c. So the interop from the C side is supported via MPI_Fint. Thus handle conversions are OK (although I still think they should be deprecated).
I think it must be brain-dead to design an API that needs to accept MPI_Status directly across languages.
In conclusion, having any Fortran interop constants or routines on the C side is problematic: Fortran does not have a reliable ABI, MPI does not have a single standard Fortran spec (mpi.mod vs mpi_f08.mod), and the Fortran legacy-code issue is not fixable. The interop story on the Fortran side is fine, since we can fix a C ABI.
I think the principle extends similarly to Python and Julia. With a stable C ABI, Python and Julia can work out the interop themselves. But if the C side tries to specify some Python or Julia interop, that will be a huge mess.
There is some worry from Julia about mixing Fortran libraries and the C ABI. If the C ABI does not contain any Fortran stuff, then there will never be any issues.
> But if the C side tries to specify some Python or Julia interop, that will be a huge mess.
Definitely. That's the reason I'm here helping to fix C, rather than shoving my silly Python stuff down everyone's throat by getting Python things into the standard.
However, regarding Fortran, the damage has already been done. In fact, many API details of MPI are just bad choices from 30 years ago, made that way to ease Fortran interoperability. And we are still here, wasting our time arguing about Fortran, and not being able to get things done due to its limitations.
> There is some worry from Julia about mixing Fortran libraries and the C ABI. If the C ABI does not contain any Fortran stuff, then there will never be any issues.
You may be misreading the problem. What the Julia folks want (and the Python folks too, for sure) is to be able to swap MPI implementations at runtime for Fortran libraries. They are effectively pressing for a Fortran ABI.
The primary motivation for these new MPI_F_* constants came from the possibility of using them to produce a Fortran ABI (within the constraint of using the same compiler). Jeff caught on quickly and liked it. Then he spent an afternoon playing with it, and learned the hard way that it's 2024 and Fortran-C interoperability still comes with limitations and platform issues.
At this point, after Jeff's linker experiments, I don't think the new MPI_F_* constants will provide any help towards a Fortran ABI. I really hope I can be proven wrong in the next few days.
IIUC, the only use case left for the constants is tools: intercepting Fortran calls from C and messing around. That's a use case that, despite maybe being valid and worth supporting, I'm not going to care about or advocate for.
If you desire rock-solid, leave out as much complication as you can get away with. If you cover every corner use case, even a rock will behave like tofu :)
Case in point: Python never bothered with a stable ABI, and they endured the Python 2-to-3 breakage. So don't worry about Fortran :)
> Even for MPI_F_STATUS_IGNORE and MPI_F08_STATUS_IGNORE, is there any confirmed use case?
Yes, please see mpi-forum/mpi-issues#837 (comment).
These features were requested by tools developers, e.g., mpi-forum/mpi-issues#837 (comment). I did not invent the need for them.
Comment from @hzhou considering that these additions may be premature: #28 (comment).
These features are trivial to implement and make it possible for tools to write Fortran API interception code in C. If they do not exist, tools must write Fortran code to capture the addresses of all of the sentinels themselves. Some may recall that we added features like mpi-forum/mpi-issues#645 specifically so that third-party languages would not have to write C code in order to interact with MPI, even though they can (and do!) implement it using C today. This is a logically consistent case, only in a different direction from a language perspective. It is a good thing to not force mpi4py or MPI.jl to write C shims to interact with status fields, and it is a good thing for profiling tools implemented in C to not have to write Fortran shims to interact with sentinels.
They are definitely NOT trivial. You are trying to obtain addresses of Fortran objects. That is tied to the Fortran compiler and the Fortran bindings; you can't define it in the C ABI alone. Thus it complicates the scope enormously. Second, I don't see why C tools need to capture Fortran objects. The Fortran binding takes care of it, so C tools don't need to care about Fortran. And if they are Fortran tools, they don't need to touch the C side; the Fortran bindings make sure of that.
Well, the implementation in MPICH looked trivial to me :-) ... Anyway, if this is gone for good, I'm not going to miss it at all. RIP.