Too slow to load a larger vocabulary #65
Comments
If you are still loading from a txt file, I recommend saving and loading a binary file instead. Grab this pull request: #64
A binary file is still slow...
I also hit this issue when reading a 200 MB YAML file. In `opencv/modules/core/src/persistence.cpp`:

```cpp
FileNode FileNode::operator[](int i) const
{
    if(!fs)
        return FileNode();
    CV_Assert( isSeq() );
    int sz = (int)size();
    CV_Assert( 0 <= i && i < sz );
    FileNodeIterator it = begin();
    it += i;  // here OpenCV uses operator+=
    return *it;
}
```

But `FileNodeIterator::operator+=(int)` is not O(1); it advances one element at a time in a loop, also in `opencv/modules/core/src/persistence.cpp`:

```cpp
FileNodeIterator& FileNodeIterator::operator += (int _ofs)
{
    CV_Assert( _ofs >= 0 );
    for( ; _ofs > 0; _ofs-- )
        this->operator ++();
    return *this;
}
```
In my case, running ORB-SLAM on iOS, the bottleneck in loading the vocabulary was the initialization of cv::Mat inside the loop; I optimized it by allocating the memory in one go outside the loop.
Hello, following your method, KITTI sequence 00 keeps detecting a loop closure with frame 0 after frame 500. `stringstream ss(s);`
It took me an hour to load a 50 MB vocabulary. Is there a better way?