TorchForce missing getNumGlobalParameters() method #44

Comments
What do you mean? Those methods are there: `openmm-torch/openmmapi/include/TorchForce.h`, lines 69 to 72 in 76f55e7.
And they are also exposed in the Python wrapper: `openmm-torch/python/openmmtorch.i`, line 36 in 76f55e7.
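For reference, a minimal sketch exercising those accessors from Python (assuming openmm-torch is installed; `model.pt` is just a placeholder name for a serialized TorchScript module):

```python
from openmmtorch import TorchForce

force = TorchForce('model.pt')            # placeholder TorchScript model file
force.addGlobalParameter('lambda', 1.0)   # name, default value
print(force.getNumGlobalParameters())     # -> 1
print(force.getGlobalParameterName(0))    # -> 'lambda'
```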
You're right. I must be running into a different issue downstream. Sorry.
Correction: it seems that …
Since TorchForce isn't part of the main OpenMM package, is there a way to make this consistent within OpenMM, so that we don't have to be hacky with this object and it works seamlessly with repos like openmmtools? @jchodera, I think we might be keen on this functionality.
Unfortunately, no. This is a result of how SWIG works. If a C++ function returns an abstract supertype, and you want it to create a Python object of the appropriate subtype, you have to give it a complete list of all allowed subtypes at build time.
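To make the limitation concrete, a minimal sketch of the symptom (assuming openmm and openmm-torch are installed, with `model.pt` again a placeholder TorchScript module):

```python
from openmm import System
from openmmtorch import TorchForce

system = System()
system.addForce(TorchForce('model.pt'))

force = system.getForce(0)
print(type(force))                    # the generic openmm.openmm.Force proxy
print(isinstance(force, TorchForce))  # False: SWIG only downcasts to the
                                      # subtypes it knew about at build time
```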
Oh no, this is terrible. This is going to cause all manner of problems for Python applications using OpenMM and these new ML options. @peastman: What are our options here? There must be some way to make this work without requiring all users to do crazy and highly surprising non-idiomatic Python gymnastics.
That would require making PyTorch into a dependency of OpenMM. Not going to happen!
That's impossible. It can only handle subclasses that are defined at compile time. If you try to include ones from external plugins, you'll get a compile error.
Reimplementing the Python wrappers from scratch is also not going to happen. That would be a huge task. And why do you think any other wrapper generator wouldn't have the same problem? This is intrinsic in the disconnect between C++ classes and Python classes. How those Python classes were created isn't the issue.
This is the closest to a viable solution, but it wouldn't be robust. Let me demonstrate.
Scroll all the way to the right to see the difference. Until you import the Python module, Python doesn't know about the TorchForce class. If you load an XML file containing a TorchForce and call …
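A sketch of the fragility being demonstrated, assuming `system.xml` holds a serialized System that contains a TorchForce:

```python
from openmm import XmlSerializer

with open('system.xml') as f:
    xml = f.read()

# Without this import the TorchForce serialization proxy is never
# registered, and deserialization fails; importing the Python module
# loads the C++ plugin library as a side effect.
import openmmtorch

system = XmlSerializer.deserialize(xml)
```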
Thanks for the quick response! I don't think these are the blockers they might seem.
PyTorch could be an optional dependency: you have to elect to install it in order to use the TorchForce features. I don't see a downside at all to this approach, since this is exactly the philosophy we've been using since the inception of OpenMM.
I probably didn't explain this well, but I am suggesting adding additional Python code to the SWIG wrappers. In essence, I'm suggesting we can just glue in the Python code you suggested @dominicrufa use, so it's done transparently under the hood at the Python level in our SWIG wrappers.
That's right---you'd have to import the Python module before it would appear in scope. That's still much better than having to craft a bunch of code on your own to do this. There may be a more clever way to figure out which plugins are present: recall that we use the Python plugin discovery features to discover installed ffxml files---we could do something similar to discover which OpenMM plugins are installed in the same environment, and import them to discover all …
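A sketch of what that discovery could look like with setuptools entry points; the group name `openmm.plugins` is purely illustrative, not an existing convention:

```python
import pkg_resources

# Each plugin package would advertise its Python wrapper module under a
# shared entry-point group; loading it brings the SWIG subclasses into scope.
for entry_point in pkg_resources.iter_entry_points(group='openmm.plugins'):
    entry_point.load()
```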
To make that work, we somehow need to bridge the C++ and Python elements of each plugin. When a C++ plugin is loaded, it needs to force the corresponding Python plugin to get loaded as well. I've been trying to work out a clean way of doing it. I think something along these lines might work. Have a static registry of Python module names defined within the C++ library. When a plugin is loaded, it can optionally add the name of a Python module to the registry. Plugins get loaded by the …
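From the Python side, the consumer of such a registry might look roughly like this; `Platform.getPythonModuleNames()` is a hypothetical accessor for the proposed registry, not an existing API:

```python
import importlib
from openmm import Platform

# Load the C++ plugin libraries; each may register a companion Python module.
Platform.loadPluginsFromDirectory(Platform.getDefaultPluginsDirectory())

for name in Platform.getPythonModuleNames():   # hypothetical, not a real API
    try:
        importlib.import_module(name)          # pulls the SWIG subclasses into scope
    except ImportError:
        pass                                   # plugin's Python half isn't installed
```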
Which proposal, specifically? My understanding was that migrating the plugin into the OpenMM branch as part of OpenMM 8 would not require anything special---just gracefully failing with an appropriate exception if someone tried to use TorchForce. The proposal to use Python plugin discovery is clearly more complex, but sounds potentially workable. Given its complexity, though, perhaps it makes more sense to consider first migrating the plugin into OpenMM, if it can be done in a clean way such that it is clearly an optional dependency. This would just need: …
PyTorch would become a compile-time dependency at the very least. That would be a maintenance nightmare. Dependency management in Python is a disaster, and I go to great lengths to keep the number of dependencies for OpenMM to the absolute minimum. Requiring PyTorch even at build time is a non-starter.

If you haven't been paying attention to the chaos in the ML ecosystem, you may think I'm overreacting. I'm not. For several months it has been impossible to create a working Python environment containing current versions of both PyTorch and TensorFlow. PyTorch requires NumPy 1.20 or later; TensorFlow only works with 1.19 or earlier. Try to install both and fire comes out. This has been going on for months with no indication of when it will be fixed. It has required convoluted workarounds for packages like DeepChem or HuggingFace that require both PyTorch and TensorFlow, which in turn has required convoluted workarounds for every downstream package that uses them.

I wish I could say this was an exception, but it isn't. Weird breakages like this happen constantly. The only viable long-term solution is to keep the dependencies for the core OpenMM package to the absolute essentials. That includes build-time dependencies as well as runtime ones.
Oh, I've been paying attention, and it seems not (much) worse than the CUDA nightmares. We're struggling with some of those issues right now as part of the https://github.com/openkinome ecosystem, but things are slowly resolving as more of the conda-forge issues get worked out. Despite the issues you note, it appears to be readily manageable if we wanted to pull the openmm-torch plugin into OpenMM: …
I'm still focused on the issue of "how can we eliminate friction in the OpenMM user experience" here, so this doesn't sound insane to me as a tradeoff for a conda-installable, PyTorch-compatible OpenMM. That doesn't mean this is the best way to do it, but it's important we consider all the alternatives from the user perspective. It sounds like we should continue to explore the plugin idea as well.
You'll have to trust my judgement on this: the plugin approach will be far less work in the long run. "Not much worse than the CUDA nightmares" is a very low bar to beat! And once it's implemented, it will continue to work with no further effort. We won't have to worry about it breaking because of a change to some other package in the future.
It does seem like it would be a broadly useful feature to better support a plugin ecosystem!
I noticed that while the TorchForce supports addGlobalParameter, it doesn't have getNumGlobalParameters or getGlobalParameterName. Is it possible these could be added? Or is there a way to wrap this TorchForce into a CustomCVForce so that I have access to these methods?