In the past I have successfully obtained a speedup from parallel processing of PDB files using C++. The way to go, in my opinion, is as follows:

1. The main thread parses the file.
2. Either:
   a. as the parser loads data into a thread-safe container (e.g. Python's multiprocessing Manager), launch threads that instantiate individual Atom objects concurrently, or
   b. after the parser is done, start multiple threads that instantiate the Atom objects.
3. Check whether Atom objects can be connected (this could also be processed in parallel, but I suspect there would be very little gain, if any).

2a should be faster than 2b, but you need to be careful with deadlocks and thread safety, which can be a pain to debug! In any case, step 2 is where I think you could gain from parallel processing.
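To make option 2a concrete, here is a minimal sketch of the producer/consumer scheme using a thread-safe queue. The `Atom` class and the fixed-width column slices are placeholders for illustration, not any library's actual API, and note that in CPython the GIL limits how much pure-Python per-record work can overlap across threads:

```python
import queue
import threading

class Atom:
    # Hypothetical minimal stand-in for a real Atom class; PDB ATOM
    # records are fixed-width, so fields are pulled out by column slices.
    def __init__(self, record):
        self.name = record[12:16].strip()
        self.x = float(record[30:38])
        self.y = float(record[38:46])
        self.z = float(record[46:54])

SENTINEL = None  # marks the end of the record stream

def producer(lines, q):
    # Step 1: the main thread parses the file, feeding ATOM records
    # into the thread-safe queue as it finds them (option 2a).
    for line in lines:
        if line.startswith("ATOM"):
            q.put(line)
    q.put(SENTINEL)

def consumer(q, out, lock):
    # Worker threads instantiate Atom objects concurrently.
    while True:
        record = q.get()
        if record is SENTINEL:
            q.put(SENTINEL)  # re-post so sibling workers also stop
            break
        atom = Atom(record)
        with lock:
            out.append(atom)

def build_atoms(lines, workers=2):
    q = queue.Queue()
    out, lock = [], threading.Lock()
    threads = [threading.Thread(target=consumer, args=(q, out, lock))
               for _ in range(workers)]
    for t in threads:
        t.start()
    producer(lines, q)  # parse in the main thread while workers consume
    for t in threads:
        t.join()
    return out
```

The sentinel is re-posted by whichever worker sees it first, so every worker shuts down cleanly; forgetting that detail is exactly the kind of deadlock mentioned above.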
My two cents: the PDB and mmCIF parsers could be made 1-2 orders of magnitude faster, although not in pure Python. Then the parallel processing would not be needed.
You may have a look at https://github.com/project-gemmi/mmcif-benchmark
Hi - thanks for the benchmark link. I hadn't seen this before and it will be very useful.
atomium 0.12, currently under development and hopefully out in the next few days, does have large speed increases, though it is still pure Python (see this tweet). Moving to compiled code is a medium-term goal for this library.
The multiprocessing library could speed up parts of the PDB parsing process - especially those parts that just process thousands of independent records.
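As a rough sketch of that idea (the function names are placeholders, not atomium's actual API; the column positions follow the fixed-width PDB ATOM record layout), independent records can simply be mapped across a pool of worker processes:

```python
from multiprocessing import Pool

def parse_record(record):
    # Slice a few fixed-width fields out of one PDB ATOM record
    # (column positions as defined by the PDB format specification).
    return {
        "serial": int(record[6:11]),
        "name": record[12:16].strip(),
        "x": float(record[30:38]),
        "y": float(record[38:46]),
        "z": float(record[46:54]),
    }

def parse_records(records, processes=None):
    # The records are independent of one another, so they can be
    # mapped across worker processes; a large chunksize keeps the
    # per-record pickling overhead low.
    with Pool(processes) as pool:
        return pool.map(parse_record, records, chunksize=5000)
```

With a large chunksize the per-process work dominates the inter-process communication, which is where any multi-process gain would have to come from.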