long reads #18
In theory, the longer the reads are, the more accurate the results should become. This is certainly what we have seen for Illumina reads of different lengths. The most crucial setting is … For very long reads (1,000s or 10,000s of bp) the whole pipeline will still work, but mapping speed will decrease, so it could be beneficial to set … Since longer reads usually have more errors than Illumina reads, I would probably set … If you had calibration reads with ground-truth data, you could use those to optimize for the best tradeoff between speed plus memory consumption and accuracy. If you are just interested in accuracy, and speed and memory consumption don't matter, just stick to the default settings.
I am interested in the precision at the strain level … thank you!!
If you want to distinguish reads at the strain level, then mapping accuracy is of course extremely important. I would first try the default settings, which promise the highest accuracy, and see if you can live with the mapping speed and memory consumption. As an added bonus, you can process shorter reads with the same database without having to change anything. If, however, you plan to process many more such datasets in the future, it might be worth optimizing for speed.
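If you do have calibration reads with known origins, strain-level precision and recall are straightforward to compute from the mapping output. Here is a minimal sketch, assuming you can extract a read-to-strain assignment table; the function name and the toy data are hypothetical and not part of MetaCache's API:

```python
# Hypothetical sketch: given ground-truth strain labels for calibration
# reads and the strain each read was assigned to, compute precision
# (correct among classified reads) and recall (correct among all reads).
def strain_precision_recall(truth, predicted):
    # truth: read_id -> true strain; predicted: read_id -> assigned strain.
    # Reads missing from `predicted` are treated as unclassified.
    classified = [r for r in truth if r in predicted]
    correct = sum(1 for r in classified if predicted[r] == truth[r])
    precision = correct / len(classified) if classified else 0.0
    recall = correct / len(truth) if truth else 0.0
    return precision, recall

# Illustrative data only: r4 is unclassified, r2 is misassigned.
truth = {"r1": "strainA", "r2": "strainA", "r3": "strainB", "r4": "strainB"}
predicted = {"r1": "strainA", "r2": "strainB", "r3": "strainB"}
p, r = strain_precision_recall(truth, predicted)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

Comparing these two numbers across candidate settings makes the speed/accuracy tradeoff concrete.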
Just to give a little update on this issue: I recently processed Oxford Nanopore datasets with read lengths varying from 100 up to 21,000 bp. The N50 was relatively low (in the 500s). It turned out that MetaCache's default settings yielded the best results.
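For readers unfamiliar with the metric mentioned above: N50 is the read length L such that reads of length ≥ L contain at least half of all sequenced bases, so a few very long reads do not raise it much if short reads dominate. A small self-contained sketch:

```python
# N50: the length L such that reads of length >= L account for at
# least half of the total bases in the dataset.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Illustrative read set: one 21 kb read among many short reads still
# yields an N50 in the 500s, as in the dataset described above.
reads = [21000] + [500] * 100 + [100] * 50
print(n50(reads))  # 500
```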
Can the use of long genomic reads improve the accuracy?
Thanks!!