Grasp Pose Detection (GPD)

1) Overview

This package detects 6-DOF grasp poses for a 2-finger grasp (e.g. a parallel jaw gripper) in 3D point clouds.

Grasp pose detection consists of three steps: sampling a large number of grasp candidates, classifying these candidates as viable grasps or not, and clustering viable grasps which are geometrically similar.
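The three steps above can be sketched in Python (illustrative pseudocode only, not the package's actual C++ API; a random score stands in for the trained network GPD uses to classify candidates):

```python
import random

def sample_candidates(cloud, num_samples):
    """Step 1: draw sample points from the cloud and propose one grasp per point."""
    return [{"center": p, "score": None} for p in random.sample(cloud, num_samples)]

def classify(candidates, threshold=0.5):
    """Step 2: score each candidate and keep the viable ones.
    A dummy random score stands in for GPD's trained network."""
    for c in candidates:
        c["score"] = random.random()
    return [c for c in candidates if c["score"] >= threshold]

def cluster(grasps, radius=0.05):
    """Step 3: merge geometrically similar grasps, keeping one seed per neighborhood."""
    seeds = []
    for g in grasps:
        if all(max(abs(a - b) for a, b in zip(g["center"], s["center"])) > radius
               for s in seeds):
            seeds.append(g)
    return seeds

random.seed(0)
cloud = [(random.random(), random.random(), random.random()) for _ in range(1000)]
viable = classify(sample_candidates(cloud, 100))
final = cluster(viable)
```

Each stage only shrinks the candidate set: many samples in, a few clustered grasps out.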

The reference for this package is: High precision grasp pose detection in dense clutter.

Update: A version of this package that does not require Caffe can be found here.

UR5 demo video

2) Requirements

  1. PCL 1.7 or later
  2. Eigen 3.0 or later
  3. Caffe
  4. ROS Indigo and Ubuntu 14.04 or ROS Kinetic and Ubuntu 16.04

3) Prerequisites

The following instructions work for Ubuntu 14.04 or Ubuntu 16.04. Similar instructions should work for other Linux distributions that support ROS.

  1. Install Caffe (Instructions). Follow the CMake Build instructions. Note for Ubuntu 14.04: due to a conflict between the Boost version required by Caffe (1.55) and the one installed as a dependency with the Debian package for ROS Indigo (1.54), you need to check out an older version of Caffe that works with Boost 1.54. So, when you clone Caffe, use these commands:

    git clone https://github.com/BVLC/caffe.git && cd caffe
    git checkout 923e7e8b6337f610115ae28859408bc392d13136
    
  2. Install ROS. In Ubuntu 14.04, install ROS Indigo (Instructions). In Ubuntu 16.04, install ROS Kinetic (Instructions).

  3. Clone the grasp_pose_generator repository into some folder:

    cd <location_of_your_workspace>
    git clone https://github.com/atenpas/gpg.git
    
  4. Build and install the grasp_pose_generator:

    cd gpg
    mkdir build && cd build
    cmake ..
    make
    sudo make install
    

4) Compiling GPD

  1. Clone this repository.

    cd <location_of_your_workspace/src>
    git clone https://github.com/atenpas/gpd.git
    
  2. Build your catkin workspace.

    cd <location_of_your_workspace>
    catkin_make
    

5) Generate Grasps for a Point Cloud File

Launch the grasp pose detection on an example point cloud:

roslaunch gpd tutorial0.launch

Within the GUI that appears, press r to center the view, and q to quit the GUI and load the next visualization. The output should look similar to the screenshot shown below.

rviz screenshot

6) Tutorials

  1. Detect Grasps With an RGBD camera
  2. Detect Grasps on a Specific Object

7) Parameters

Brief explanations of the parameters are given in launch/classify_candidates_file_15_channels.launch for use with PCD files. For use on a robot, see launch/ur5_15_channels.launch. The two parameters that you typically want to adjust to improve the number of grasps found are workspace and num_samples. The first defines the volume of space in which to search for grasps, as a cuboid with dimensions [minX, maxX, minY, maxY, minZ, maxZ], centered at the origin. The second is the number of samples drawn from the point cloud to detect grasps. You should set the workspace as small as possible and the number of samples as large as possible.
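To illustrate how these two parameters interact (a hypothetical sketch, not GPD code): points outside the workspace cuboid are discarded, and num_samples points are then drawn from what remains, so a tight workspace concentrates the sampling budget on the region of interest.

```python
import random

workspace = [-0.5, 0.5, -0.5, 0.5, -0.1, 0.5]  # [minX, maxX, minY, maxY, minZ, maxZ]
num_samples = 100

def in_workspace(p, ws):
    """True if point p = (x, y, z) lies inside the workspace cuboid."""
    return (ws[0] <= p[0] <= ws[1] and
            ws[2] <= p[1] <= ws[3] and
            ws[4] <= p[2] <= ws[5])

random.seed(1)
# A synthetic cloud spread over a larger volume than the workspace.
cloud = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(5000)]
cropped = [p for p in cloud if in_workspace(p, workspace)]
samples = random.sample(cropped, min(num_samples, len(cropped)))
```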

8) Views

rviz screenshot

You can use this package with a single depth sensor or with two. The package comes with Caffe weight files for both options, located in gpd/caffe/15channels. For a single sensor, use single_view_15_channels.caffemodel; for two depth sensors, use two_views_15_channels_[angle], where [angle] is the angle between the two sensor views, as illustrated in the picture below. In the two-views setting, you need to register the two point clouds together before sending them to GPD.

rviz screenshot
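Registering the two clouds before handing them to GPD can be sketched as follows (a minimal numpy illustration, assuming the extrinsic rotation R and translation t between the two sensors are already known, e.g. from calibration; GPD itself does not perform this registration):

```python
import numpy as np

def register_and_merge(cloud_a, cloud_b, R, t):
    """cloud_a, cloud_b: (N, 3) point arrays; R: 3x3 rotation; t: 3-vector
    translation mapping sensor-B coordinates into sensor-A coordinates.
    Returns the merged cloud expressed in sensor A's frame."""
    cloud_b_in_a = cloud_b @ R.T + t  # apply R to each point, then translate
    return np.vstack([cloud_a, cloud_b_in_a])

# Example: sensor B is rotated 90 degrees about Z and offset 1 m along X.
angle = np.pi / 2
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])
merged = register_and_merge(np.zeros((10, 3)), np.ones((10, 3)), R, t)
```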

To switch between one and two sensor views, change the parameter trained_file in the launch file launch/caffe/ur5_15channels.launch.

9) Input Channels for Neural Network

The package comes with weight files for two different input representations for the neural network that decides whether a grasp is viable: 3 channels or 15 channels. The default is 15 channels. However, you can use 3 channels for faster runtime at a cost in grasp quality. For more details, please see the references below.

10) Citation

If you like this package and use it in your own work, please cite our paper(s):

[1] Andreas ten Pas, Marcus Gualtieri, Kate Saenko, and Robert Platt. Grasp Pose Detection in Point Clouds. The International Journal of Robotics Research, Vol 36, Issue 13-14, pp. 1455 - 1473. October 2017.

[2] Marcus Gualtieri, Andreas ten Pas, Kate Saenko, and Robert Platt. High precision grasp pose detection in dense clutter. IROS 2016, pp. 598-605.

11) Troubleshooting

  • GCC 4.8: The package might not compile with GCC 4.8. This is due to a bug in GCC. Solution: Upgrade to GCC 4.9.

  • During catkin_make, you get this error: [...]/caffe/include/caffe/util/cudnn.hpp:8:34: fatal error: caffe/proto/caffe.pb.h: No such file or directory. Solution (source):

    # In the directory you installed Caffe to
    protoc src/caffe/proto/caffe.proto --cpp_out=.
    mkdir include/caffe/proto
    mv src/caffe/proto/caffe.pb.h include/caffe/proto
    
