Edited by Shehan

Introducing the Human-Baxter Collaboration Framework

This repository contains BRML's Human-Baxter collaboration framework. It was implemented as part of the EIT CPS for Smart Factories project, in collaboration with the Neural Information Processing Group at ELTE, Budapest, Hungary and DFKI Saarbruecken, Germany.

Overview

The Human-Baxter collaboration framework aims to be modular and easy to modify and adapt for collaborative experiments between human collaborators and a Baxter research robot.

The distributed pick-and-place scenario integrates the Baxter robot, the Microsoft Kinect V2 sensor and deep neural networks to detect, pick up and place objects. Three types of experiments are possible:

  • picking and placing an object on a table,
  • handing over an object to a human collaborator and
  • taking over an object from a human collaborator.

Dependencies and Requirements

There are two ways to use the Human-Baxter collaboration framework: you need either the open source Baxter simulator or access to a Baxter research robot. For more information please refer to the installation instructions here.

The Kinect V2 can be interfaced either on Ubuntu/ROS via the iai_kinect2 package (no skeleton tracking) or via a web socket connection to the ELTE Kinect Windows tool running on a Windows machine.
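
For the first option, a minimal sketch of reading color frames from iai_kinect2 with rospy and cv_bridge might look as follows; the topic name /kinect2/qhd/image_color_rect is an assumption based on the package's default naming and may need to be adapted to your setup.

#!/usr/bin/env python
# Minimal sketch: read color frames published by iai_kinect2 on Ubuntu/ROS.
# The topic name below is an assumption, not mandated by this framework.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()


def on_image(msg):
    # Convert the ROS image message into an OpenCV BGR array for processing.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('received %dx%d color frame', frame.shape[1], frame.shape[0])


if __name__ == '__main__':
    rospy.init_node('kinect2_color_listener')
    rospy.Subscriber('/kinect2/qhd/image_color_rect', Image, on_image, queue_size=1)
    rospy.spin()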

The framework builds heavily upon the Baxter SDK and depends on customized versions of baxter_interface and baxter_common from Rethink Robotics. For the simulation in Gazebo, the depth_sensors package is utilized. For image processing we rely on the OpenCV library.
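
To give an impression of the Baxter SDK layer the framework builds upon, here is a minimal, hypothetical sketch that enables the robot and commands one limb and gripper via baxter_interface; the joint angles are placeholder values, not a pose used by this framework.

#!/usr/bin/env python
# Minimal sketch of the baxter_interface API the framework builds upon.
# The joint angles below are placeholders, not taken from this repository.
import rospy
import baxter_interface

rospy.init_node('baxter_sdk_sketch')

# Enable the robot before commanding it.
baxter_interface.RobotEnable().enable()

left_arm = baxter_interface.Limb('left')
left_gripper = baxter_interface.Gripper('left')
left_gripper.calibrate()

# Move to a (placeholder) joint configuration, then close the gripper.
target = {'left_s0': 0.0, 'left_s1': -0.55, 'left_e0': 0.0, 'left_e1': 0.75,
          'left_w0': 0.0, 'left_w1': 1.26, 'left_w2': 0.0}
left_arm.move_to_joint_positions(target)
left_gripper.close()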

The framework has been tested with ROS Indigo on Ubuntu 14.04. For the simulator running in Gazebo, a machine with (any) NVIDIA GPU has proven useful. For object detection using Faster R-CNN, a GPU with at least 2 GB of RAM is required. For object detection using R-FCN and object segmentation using MNC, a GPU with at least 7 GB of RAM is required. We have had good experiences with NVIDIA Quadro K2200 and NVIDIA TITAN X GPUs.
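
A quick way to check whether a machine meets these memory requirements is to query nvidia-smi, for example with a small script like the following (assuming the NVIDIA driver and nvidia-smi are installed):

# Rough check of installed GPU memory against the requirements above.
import subprocess

out = subprocess.check_output(
    ['nvidia-smi', '--query-gpu=name,memory.total', '--format=csv,noheader,nounits'])
for line in out.decode('utf-8').strip().splitlines():
    name, mem_mib = [field.strip() for field in line.split(',')]
    mem_gib = float(mem_mib) / 1024.0
    if mem_gib >= 7:
        verdict = 'sufficient for R-FCN and MNC'
    elif mem_gib >= 2:
        verdict = 'sufficient for Faster R-CNN only'
    else:
        verdict = 'insufficient for the detection networks'
    print('%s: %.1f GiB -- %s' % (name, mem_gib, verdict))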

License

We publish the Human-Baxter collaboration framework under a BSD license, hoping that it might be useful for others as well. The license text can be found in the LICENSE file and can be obtained from the Open Source Initiative.

If you find our Human-Baxter collaboration framework useful in your work, please consider citing it:

@misc{hbcf2016,
    author={Ludersdorfer, Marvin},
    title={{The Human-Baxter collaboration framework}},
    organization={{Biomimetic Robotics and Machine Learning laboratory}},
    address={{fortiss GmbH}},
    year={2015--2016},
    howpublished={\url{https://github.com/BRML/baxter\_pick\_and\_place}},
    note={Accessed November 30, 2016}
}

Installation

The framework is implemented as, and makes use of, several ROS packages that can be installed conveniently as described here.

Usage

How to run the distributed pick-and-place scenario is explained in detail here.

Acknowledgements

We thank Aron Fothi, Mike Olasz, Andras Sarkany and Zoltan Toser from the Neural Information Processing Group at ELTE for their help and many valuable discussions.

Known Limitations and Bugs

  • No Gazebo models of objects to manipulate are included.
  • The external calibration routine gives rather poor results. This is acceptable, since the visual servoing is able to compensate for coarse position estimates (see the illustrative sketch below).
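
To illustrate why coarse position estimates are tolerable, the following sketch shows one step of a simple image-based correction loop: the offset of the detected object from the image center is turned into a small Cartesian adjustment of the end effector. The helper detect_object_center and the gain value are hypothetical placeholders, not this repository's implementation.

# One step of a simple image-based correction loop (illustrative only).
def servo_step(image, detect_object_center, gain=0.0005):
    """Return a small (dx, dy) end-effector correction in meters.

    The correction is proportional to the pixel offset between the detected
    object center and the image center; iterating such steps lets the arm
    converge on the object even if the initial position estimate was coarse.
    """
    height, width = image.shape[:2]
    u, v = detect_object_center(image)          # pixel coordinates of the object
    du, dv = u - width / 2.0, v - height / 2.0  # offset from the image center
    # Map the pixel offset to meters with a fixed gain (camera looking down).
    return -gain * du, -gain * dv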
