Neural network force for subset of atoms #5

Open · alexrd opened this issue Oct 8, 2018 · 4 comments

alexrd commented Oct 8, 2018

Hi! Thanks for posting this plugin; it looks very exciting. We're playing around with a few things, and it looks like the default behavior is to apply the NN force to every atom in the system.
We were thinking about specifying an atom list when initializing the NN force so that it only applies to a subset of atoms. Should we just pass this list to the NeuralNetworkForce initialize function and then on to (one of) the Kernel initialize functions?

Also, are there any problems you foresee here that we're going to run into?

Thanks in advance,
Alex

msultan commented Oct 8, 2018

I think you should be able to specify subsets by applying a boolean_mask in the network's first layer. A more involved alternative route might be to create a non-trainable affine layer with fixed 0 and 1 weights that selects the atomic positions you do or don't want to keep.

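For what it's worth, here is a minimal sketch of the boolean_mask idea in TensorFlow 1.x graph style. The tensor names ('positions', 'energy', 'forces'), the atom counts, and the toy energy function are all assumptions for illustration; the real network would operate on the masked positions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sizes: ~100k atoms in the system, NN force on the first 50.
n_atoms = 100000
subset = np.zeros(n_atoms, dtype=bool)
subset[:50] = True

graph = tf.Graph()
with graph.as_default():
    # All particle positions are fed into this tensor.
    positions = tf.placeholder(tf.float32, shape=[n_atoms, 3], name='positions')
    # Select only the chosen atoms before the network sees anything.
    mask = tf.constant(subset)
    masked = tf.boolean_mask(positions, mask)          # shape [50, 3]
    # Toy stand-in for the real model, acting only on the masked positions.
    energy = tf.reduce_sum(tf.square(masked), name='energy')
    # Atoms outside the mask get zero gradient, hence zero force.
    grad = tf.gradients(energy, positions)[0]
    forces = tf.negative(grad, name='forces')
```
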
alexrd (Author) commented Oct 9, 2018

I hadn't thought of that approach! It would be easier in the short term. But do you think that communication will affect performance at all? If we have ~100k atoms in the system, but we only want an NN force on ~50, will it slow us down to pass around the whole system? Or is this just done by reference?

msultan commented Oct 9, 2018

I am not sure how this is implemented under the hood (@peastman would be a better source for that). I would imagine that if the model is on the same GPU that is running the simulation, the communication overhead should be minimal, if there is any at all.

peastman (Member) commented Oct 9, 2018

So far I haven't done anything to optimize (or even really test!) the performance of this plugin. I won't be at all surprised if the overhead turns out to be quite significant. Let me know what you find.

Ultimately it would be really cool if we could just copy the information around on the GPU so it never has to come back to the CPU. Currently there doesn't seem to be any supported way of doing that, though other people have asked. Hopefully at some point they'll create a mechanism we can use for it.

For the moment, though, all data has to come back to the CPU before it can pass between OpenMM and TensorFlow.

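For anyone who wants to try this end to end, here is a rough sketch of freezing such a graph and attaching it to a System. The `openmmnn` module name and the `NeuralNetworkForce('model.pb')` constructor are written from memory, so treat them as assumptions and check the plugin's README; `graph` is the graph from the sketch above.

```python
import tensorflow as tf
from simtk import openmm
from openmmnn import NeuralNetworkForce  # module/class usage assumed; check the plugin README

# Freeze the graph built in the sketch above so the plugin can load it from a file.
with tf.Session(graph=graph) as session:
    frozen = tf.graph_util.convert_variables_to_constants(
        session, graph.as_graph_def(), ['energy', 'forces'])
tf.train.write_graph(frozen, '.', 'model.pb', as_text=False)

# Stand-in System; in practice this is the System you built for your simulation
# (it should contain the same atoms the graph expects).
system = openmm.System()

# Every step the plugin copies positions back to the CPU, evaluates the graph,
# and copies the resulting energy and forces into OpenMM.
force = NeuralNetworkForce('model.pb')
system.addForce(force)
```
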