cuda.warpPerspective wrong matrix type #173

Open
nbackfisch opened this issue Jan 27, 2017 · 5 comments

Comments

nbackfisch commented Jan 27, 2017

Hopefully this is an easy one, since there was a similar, already-solved issue: #132

The error:
When using the cuda.warpPerspective function, the following error is raised:
You should explicitly call download method for cuda::GpuMat object

I am copying the suggested solution that seemed to work for that other issue:

From my findings, this error is due to incorrect wrapping of the transform matrix passed to OpenCV.
Currently, in file torch-opencv/src/cudawarping.cpp, lines 46, 52 and 55, the transform matrix M is converted to a GPU matrix via M.toGpuMat().
Converting M to a "CPU" matrix via M.toMat() leads to a working version.
(OpenCV only uses the coefficients of that matrix - see the OpenCV source code of warp.cpp.)

@shrubb: since you have/had no GPU to test this, it seems probable to me that this is the cause here as well.
By the way, the CPU version of warpPerspective runs fine.
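
To illustrate what I mean on the OpenCV side, here is a minimal C++ sketch (with a placeholder image, not taken from the binding code): the source and destination are GpuMats, but the 3x3 transform stays an ordinary host cv::Mat whose coefficients OpenCV reads on the CPU.

// Minimal sketch: only the image data lives on the GPU; the transform is a host cv::Mat.
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudawarping.hpp>

int main()
{
    cv::Mat src = cv::Mat::zeros(480, 640, CV_8UC1);   // placeholder 8-bit image
    cv::Mat M = (cv::Mat_<double>(3, 3) <<
                 1, 0, 10,
                 0, 1, 20,
                 0, 0, 1);                              // host matrix, CV_64F

    cv::cuda::GpuMat d_src(src), d_dst;                 // upload the image
    cv::cuda::warpPerspective(d_src, d_dst, M, src.size());

    cv::Mat dst;
    d_dst.download(dst);                                // fetch the result back
    return 0;
}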

Thanks in advance!

nbackfisch changed the title from "cuda.warpPerspective wrong matrix type" to "cuda.warpPerspective wrong matrix type - Easy one?!" Jan 27, 2017
nbackfisch changed the title from "cuda.warpPerspective wrong matrix type - Easy one?!" to "cuda.warpPerspective wrong matrix type" Jan 27, 2017
shrubb added a commit that referenced this issue Jan 27, 2017

shrubb commented Jan 27, 2017

Thanks for your help! It's awesome to have enthusiastic users! :)
I hope this fix removes the bug.

nbackfisch (Author) commented:

:)
Almost: now we end up with another error:
OpenCV Error: Assertion failed (type == CV_32F || type == CV_64F) in invert, file /data/nbackfisch/opencv-3.2.0/modules/core/src/lapack.cpp, line 841

I called cv.cuda.warpPerspective{original:cuda(), nil, trafo:cuda()}

So maybe some incorrect casting?
Also note that I had issues installing LAPACK with cv, if I remember right, so the error could also be on my side.

Well, I need to go now; I have to catch a train and travel 700 km home ;)

shrubb commented Jan 28, 2017

Try keeping M in main memory. Does this line work?

cv.cuda.warpPerspective{original:cuda(), nil, trafo}
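
If I read OpenCV 3.2's cudawarping code correctly, M is inverted on the host before the coefficients are uploaded (unless WARP_INVERSE_MAP is set), and cv::invert only accepts CV_32F or CV_64F, which would explain the lapack.cpp assertion. Roughly (a simplified sketch of my understanding, not the exact OpenCV code):

// Simplified sketch of what the non-inverse path seems to do with M
// (based on my reading of modules/cudawarping/src/warp.cpp in OpenCV 3.2).
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

void prepareCoeffs(const cv::Mat& M, float coeffs[9], int flags)
{
    cv::Mat coeffsMat(3, 3, CV_32F, coeffs);            // coefficients passed to the kernel
    if (flags & cv::WARP_INVERSE_MAP)
        M.convertTo(coeffsMat, coeffsMat.type());
    else
    {
        cv::Mat iM;
        cv::invert(M, iM);                              // asserts M is CV_32F or CV_64F
        iM.convertTo(coeffsMat, coeffsMat.type());
    }
}

So a 3x3 transform kept on the CPU as a torch.FloatTensor or torch.DoubleTensor should end up as CV_32F or CV_64F and get past that assertion.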

apvijay commented Jan 30, 2017

I too end up with the same error:

b2 = cv.cuda.warpPerspective{a1,nil,M:cuda()} -- a1 is CudaTensor
OpenCV Error: Assertion failed (type == CV_32F || type == CV_64F) in invert, file /home/user/opencv-3.2.0/modules/core/src/lapack.cpp, line 841
C++ exception

If M is in main memory, the following error occurs:
b2 = cv.cuda.warpPerspective{a1,nil,M}
OpenCV Error: Gpu API call (invalid configuration argument) in call, file /home/user/opencv-3.2.0/modules/cudawarping/src/cuda/warp.cu, line 260
C++ exception

nbackfisch (Author) commented:

Hi,
now I also ran into the CUDA error:
OpenCV Error: Gpu API call (invalid configuration argument) in call, file /data/nbackfisch/opencv-3.2.0/modules/cudawarping/src/cuda/warp.cu, line 258

Assume we use M:cuda() again. Can you check whether original and trafo are loaded as CV_8U and still need to be converted to CV_32F for some subsequent method (e.g. from OpenCV)? If so, don't forget to cast back to CV_8U ;)
I found this post, where they also face such an assertion failure, and loading the image as CV_8U sounds reasonable to me: http://stackoverflow.com/questions/26470149/c-convert-cvmat-from-cv-8u-to-cv-32f
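
On the OpenCV side, the conversion that post talks about would be something like this (just a sketch of the idea, with a placeholder image; cv::cuda::GpuMat has a matching convertTo if the data needs to stay on the GPU):

// Sketch of the CV_8U <-> CV_32F round trip mentioned above, on a host cv::Mat.
#include <opencv2/core.hpp>

int main()
{
    cv::Mat img8u = cv::Mat::zeros(480, 640, CV_8UC1);  // placeholder CV_8U image

    cv::Mat img32f;
    img8u.convertTo(img32f, CV_32F, 1.0 / 255.0);        // up-cast and scale to [0, 1]

    // ... processing that requires CV_32F would go here ...

    cv::Mat back8u;
    img32f.convertTo(back8u, CV_8U, 255.0);              // cast back to CV_8U
    return 0;
}
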
What do you think?
