
Mask image remains black #6

Closed
beetleskin opened this issue Mar 1, 2013 · 27 comments
@beetleskin

Hi again,

I tried to run the object_recognition_capture with a template but the mask image remains dark and no capturing is performed:

rosrun object_recognition_capture capture -i my_textured_plane -o orc_scan_dataIntensiv.bag -n 12 --preview --seg_z_min 0.0001

The pose estimation seems to work, the coordinate origin is reprojected correctly onto the template plane:
orc_no_mask

I tried different optional parameters but nothing changed. I compiled ecto and wg_perception completely from source in a catkin groovy workspace on Ubuntu 12.04; the data comes from a Kinect.
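For context, the mask under discussion is essentially the set of depth pixels whose height above the detected template plane falls between `--seg_z_min` and some maximum. A rough numpy sketch of that idea (illustrative names only, not the actual ecto capture cell):

```python
import numpy as np

def plane_mask(depth, fx, fy, cx, cy, n, d, z_min=1e-4, z_max=0.5):
    """Mask pixels lying between z_min and z_max above the plane n.x + d = 0.

    depth: HxW depth image in meters (NaN where invalid).
    (fx, fy, cx, cy): pinhole intrinsics; n: unit plane normal; d: offset.
    Illustrative sketch, not the capture pipeline's real API.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a 3D point in the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.dstack([x, y, depth])
    # Signed distance from each point to the plane.
    dist = pts @ n + d
    # NaN depth (holes, shadows) compares False, so it is excluded --
    # which is exactly why an object surrounded by NaNs can end up
    # with an all-black mask.
    return (dist > z_min) & (dist < z_max)
```

An all-NaN neighborhood around the object therefore yields an empty mask even when the pose estimate is fine.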

@vrabaud
Member

vrabaud commented Mar 1, 2013

Thanks! It works here from packages, so:

  • do you see a pose being drawn over the pattern?
  • when drawing matches, do you see anything being drawn?


@beetleskin
Author

Yes, as you can see in the image, the pose is drawn correctly in most frames. The matches also look good. Could this be related to the OpenNI drivers? I built and installed OpenNI and SensorKinect from source.

@vrabaud
Member

vrabaud commented Mar 1, 2013

Ah, my bad, I answered through mail and did not see the image. It looks good. Do you never get anything in the mask? (even when you're far, close, viewing from the top, or using a different object?)

@beetleskin
Author

Nope, nothing ever. Just black.

@vrabaud
Member

vrabaud commented Mar 1, 2013

Ok, this might be related to your other bug. Since you have everything from source, source your setup.sh and then run either of the two scripts in ecto_opencv/samples/rgbd/plane* (one tracks planes, the other segments objects on top).
You should see a color for each plane it finds; please let me know how that goes.
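Those plane scripts detect the dominant planes in the depth data. For readers unfamiliar with the idea, a minimal RANSAC plane fit can be sketched as follows (pure illustration in numpy; ecto_opencv uses its own C++ plane finder):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    """Fit a plane n.x + d = 0 to an Nx3 cloud by RANSAC.

    Illustrative only, not the ecto_opencv implementation.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(iters):
        # Three random points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, skip it
            continue
        n /= norm
        d = -n @ p0
        # Points closer than tol to the candidate plane are inliers.
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

Each colored region in the sample scripts corresponds to one such plane (plus the clusters segmented on top of it).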

@beetleskin
Author

Interesting! The plane_cluster.py crashes:

stfn@stfn-MacBook:~/devn/ros_dai_groovy_catkin$ python src/ecto_kitchen/ecto_opencv/samples/rgbd/plane_cluster.py 
Traceback (most recent call last):
  File "src/ecto_kitchen/ecto_opencv/samples/rgbd/plane_cluster.py", line 37, in <module>
    connections = [ source['depth_raw'] >> depth_to_3d['depth'],
  File "/home/stfn/devn/ros_dai_groovy_catkin/src/ecto_kitchen/ecto/python/ecto/blackbox.py", line 255, in __getitem__
    return self.__impl[key]
ecto.EctoException:            exception_type  EctoException
                 diag_msg  no inputs or outputs found
                cell_name  Source
              tendril_key  depth_raw

Might this be related to the malfunctioning OpenNI driver (I ran into this problem with roboearth)? capture_openni_usb.py yields the same error. plane_sample.py, however, seems to work fine:
plane_sample1
plane_sample2
plane_sample3

@vrabaud
Member

vrabaud commented Mar 4, 2013

The plane cluster is fine, just git pull; I fixed it the other day.
Those results look fine, though.

@vrabaud
Member

vrabaud commented Mar 4, 2013

Ok, I do remember a problem I forgot to fix that seems to fit your data: no pixel of your object is touching the plane; there are only NaNs around it. Let me fix that one at least.

@vrabaud
Member

vrabaud commented Mar 4, 2013

Actually, did you try changing the --seg_radius_crop value? Set it to something large, like 0.5 or 1 meter.
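--seg_radius_crop bounds how far from the pattern origin points are kept before clustering; its effect is roughly this (illustrative numpy, not the option's actual implementation):

```python
import numpy as np

def radius_crop(points, origin, radius=0.5):
    """Keep only points within `radius` meters of the pattern origin.

    points: Nx3 cloud, origin: the pattern pose origin.  A larger
    radius (e.g. 0.5 or 1.0) keeps objects placed further from the
    pattern center.  Illustrative sketch only.
    """
    return points[np.linalg.norm(points - origin, axis=1) < radius]
```

If the object sits outside this radius, every candidate point is cropped away and the mask stays empty, which is why enlarging the value is worth trying.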

@beetleskin
Author

Ok thanks. This is what the clustering looks like:
plane_clusters1

@vrabaud
Member

vrabaud commented Mar 4, 2013

Ok, sorry to be picky, but can you please try with an object whose depth the Kinect can actually measure (this one seems to have glass), like a cardboard orange-juice box?
I agree it is a bug, but I want to narrow it down and make sure it is due to all the NaNs around your object (it also casts a strong shadow here).

@beetleskin
Author

ok, here you go (with yet another ugly template :) )
plane_cluster.py
plane_cluster2
plane_sample.py
plane_sample4

@vrabaud
Member

vrabaud commented Mar 6, 2013

Ok, I was finally able to reproduce the bug, and it was a C++ bug :) Can you please pull the latest capture and try it out?
Some flickering might still happen because of the NaNs, but I am on it.

@beetleskin
Author

Ok, capturing with templates now produces bag files, thanks! However (oh no, issues incoming :) ) ...

  1. Some masks still seem to be empty.
  2. Sometimes the clustering selects the wrong cluster (why not only consider points within a 3D bounding box above the template/dot pattern?).
  3. When running rosrun object_recognition_reconstruction mesh_object --all --visualize --commit, even the correct scans (where the mask fits) are not aligned. Some time ago I read that you only allow z-rotation around the center of the template/dot pattern (lazy Susan). Is this still the case? If yes, why? :) If no, then I guess the template matcher just produces this offset.

I uploaded you the bag file here.

@vrabaud
Member

vrabaud commented Mar 7, 2013

  1. fixed, I think (if no pose was found, clustering was still happening; we were always careful and never hit that)
  2. fixed too (yep, the distance to the plane was not absolute, which can be a problem if you have a plane under your main plane; I never had that configuration)
  3. fixed too, thanks to the above, I think: a cylinder centered at the pose is used; I updated the docs. The pose is determined by the pattern only; you can move the board/camera any way you want.

Fixes are in capture and ecto_opencv.

Thx for your very detailed bug report, that helps make the code more robust, I hope it's all good for you now ! Off to #7 now :)
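The cylinder crop described in point 3 amounts to keeping points whose height above the plane and radial distance from the pose both fall within bounds. A hedged sketch of that behavior (hypothetical helper, not the actual capture code):

```python
import numpy as np

def cylinder_crop(points, pose_origin, axis, radius=0.5, z_min=0.0, z_max=0.4):
    """Keep points inside a cylinder centered at the pattern pose.

    points: Nx3 cloud, pose_origin: pattern pose origin,
    axis: unit plane normal at the pose.  Hypothetical helper
    mirroring the documented behavior, not the capture code itself.
    """
    rel = points - pose_origin
    h = rel @ axis                                        # height above plane
    r = np.linalg.norm(rel - np.outer(h, axis), axis=1)   # radial distance
    return points[(h > z_min) & (h < z_max) & (r < radius)]
```

Because the crop is anchored to the pose rather than the camera, moving the board or the camera does not change which points survive.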

@beetleskin
Author

Hey, thanks for the updates. I just updated the sources ... is it correct that the capture package now depends on household_objects_database? Because I'm stuck installing the dependencies: apparently there is nothing in the ROS repo except household_objects_database_msgs.

@vrabaud
Member

vrabaud commented Mar 16, 2013

tabletop is the only package depending on it, and that is one of the pipelines, so you can safely remove it if you don't want to use that pipeline. The package is out but only in shadow-fixed (not ros); you can get it from here:
https://github.com/ros-interactive-manipulation/household_objects_database
(or wait for the package to make it into ROS). I'll update the docs for it, thx.

@beetleskin
Author

ok thanks, I just removed tabletop. I'll try the fixes soon :)

@beetleskin
Author

Ok, it looks a lot better, but some scans are still not aligned. Is this due to an insufficient template? Can I improve the merging further? Have a look at the mesh after mesh_object: it's something, but not a bin ;)

@beetleskin
Author

Ok I used a better template and scanned very carefully. The model looks ok, so I guess you can close this issue. Capturing with templates works very well now.

Just one thing ... is there a textured version of the mesh somewhere? Do you produce a UV map or something? How do you store the color information of a model when uploading/meshing?

@vrabaud
Member

vrabaud commented Mar 23, 2013

For the non-alignment: we've noticed the Kinect is much worse than the ASUS when hand-held (most likely because the Kinect is not synchronized). That's why we recommend a lazy Susan.
The mesh has no color, as we don't need it for grasping or LINE-MOD. We also use plain meshlab to produce the mesh. If you know of a library/program that can produce a textured mesh from a colored point cloud, please let us know!
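Short of a real UV texture, one crude stopgap is to transfer per-vertex colors from the captured colored cloud to the mesh by nearest-neighbor lookup (a sketch under the assumption that per-vertex color is enough; not part of ORK):

```python
import numpy as np

def color_mesh_vertices(mesh_vertices, cloud_points, cloud_colors):
    """Give each mesh vertex the color of its nearest cloud point.

    mesh_vertices: Mx3, cloud_points: Nx3, cloud_colors: Nx3 RGB.
    Brute force is fine for small capture clouds; a k-d tree would be
    the usual choice for bigger data.  Illustrative only.
    """
    diff = mesh_vertices[:, None, :] - cloud_points[None, :, :]
    nearest = (diff ** 2).sum(axis=2).argmin(axis=1)
    return cloud_colors[nearest]
```

Most viewers (meshlab included) can display such per-vertex colors directly, though the result is much coarser than a proper texture map.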

@vrabaud vrabaud closed this as completed Mar 23, 2013
@beetleskin
Author

So you don't store any color information? What about TOD?

For the texturing, pcl_kinfu_largeScale_texture_output comes to mind, but I don't know if it is of any help; I didn't check exactly what input they use.

@vrabaud
Member

vrabaud commented Mar 26, 2013

Color is not stored right now, but it obviously should be. TOD does not use the mesh (though it should if meshes+textures were of great quality): it just uses the raw 2D input for descriptors and their 3D positions.
Large-scale kinfu is unstable and requires a specific GPU (since it needs real time): we'd like to keep ORK as generic as possible. There has to be a library around that does this, but the result wouldn't be of much use anyway (except for color LINE-MOD, which is coming soon).

@beetleskin
Author

Oh, I don't mean kinfu in general. They have a script which produces a textured mesh output from colored point clouds: http://svn.pointclouds.org/pcl/trunk/gpu/kinfu_large_scale/tools/standalone_texture_mapping.cpp

@vrabaud
Member

vrabaud commented Mar 28, 2013

Thx for the reference, I added an issue for it: #12

@mshb88

mshb88 commented Apr 5, 2013

beetleskin, how did you solve the problem of the mask remaining empty?

@beetleskin
Author

Well, I posted the issue here and Vincent Rabaud solved it ;) Try building wg_perception and ecto from source.
