velveth results depending on OMP_NUM_THREADS and stalling if OMP_THREAD_LIMIT=1 #56
I'm currently working on the velvet Galaxy wrappers: galaxyproject/tools-iuc#4641

We recently changed our CI to use 2 cores instead of 1. So far we had only set OMP_NUM_THREADS, which produced different results after the change to 2 cores; more precisely, the Roadmaps file changed. When setting OMP_THREAD_LIMIT to the same value as OMP_NUM_THREADS (as suggested here: https://www.biostars.org/p/86907/), the results are the same again, but velveth stalls if the value is set to 1. Could you tell us if/how we can resolve this issue? We need to set a hard limit on the number of threads used, since many HPC systems do not allow over-utilization of CPUs.
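For concreteness, a sketch of the kind of invocation involved (the output directory, hash length, and read file below are made-up placeholders):

```sh
# Pin both the requested thread count and the OpenMP hard limit,
# so velveth cannot use more CPUs than the CI/HPC job was allocated.
export OMP_NUM_THREADS=2
export OMP_THREAD_LIMIT=2   # setting this to 1 is what makes velveth stall
velveth out_dir 31 -fastq reads.fastq
```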
Comments

Hello @bernt-matthias, thanks for raising this issue. If I recall correctly (it's been 10 years, after all!), the multithreaded version is not 100% deterministic: the threads each go at their own speed and mix up the ordering of the reads. Do you know at what point velveth stalls when the value is set to 1? Cheers, Daniel
This would also have been my guess. Is it also to be expected that the number of lines in the Roadmaps file may change?
After velveth's initial terminal output, there is 100% CPU load but no further progress (with 2 or more threads, velveth finishes in seconds).
Hello @bernt-matthias, apologies for the slow response (summer leave). In short, the parallelisation that you stalled on is described here: https://github.com/dzerbino/velvet/blob/master/src/splayTable.c#L1237 In it, you can see two OMP parallel sections:
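For illustration, here is a minimal sketch of a two-section producer/consumer of this kind (not Velvet's actual code; the buffer size, loop bound, and elided per-read work are made up). It shows why a hard limit of one thread spins forever: whichever section the lone thread executes first ends up waiting on the other, which can never be scheduled.

```c
/* Simplified sketch, not Velvet's actual code.
 * Compile with: gcc -fopenmp sketch.c */
#include <stdio.h>

#define BUF_SIZE 4
#define N_ITEMS  100

int main(void) {
    volatile int count = 0;   /* reads currently buffered */
    volatile int done  = 0;   /* producer finished? */

    #pragma omp parallel sections
    {
        #pragma omp section   /* producer: pushes reads into the buffer */
        {
            for (int i = 0; i < N_ITEMS; i++) {
                while (count == BUF_SIZE)
                    ;         /* buffer full: spin until the consumer drains it */
                /* ... store the read ... */
                #pragma omp atomic
                count++;
            }
            done = 1;
        }

        #pragma omp section   /* consumer: processes buffered reads */
        {
            while (!done || count > 0) {
                if (count > 0) {
                    /* ... hash / splay-tree work on one read ... */
                    #pragma omp atomic
                    count--;
                }
            }
        }
    }
    puts("finished");
    return 0;
}
```

With OMP_THREAD_LIMIT=1 the runtime has a single thread for both sections, so the producer fills the buffer and then spins at 100% CPU (or, if the consumer happens to run first, it waits for input that never arrives), matching the stall described above.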
In effect, it explicitly requires two threads. If you absolutely need Velvet to run on one thread, then you can simply turn off multithreading entirely by building without OpenMP support:
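A sketch of that build step, assuming the standard Velvet Makefile, where OpenMP support is opt-in:

```sh
# Rebuild Velvet without OpenMP: a plain `make` produces single-threaded
# binaries; multithreaded builds are made with `make 'OPENMP=1'`.
make clean
make
```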
Hope this helps, Daniel