MEShufengRen/stacking_test
Tower Stacking

Team Members: Jiarui Li, Qinjie Lin, Shufeng Ren, Patricia Sung, Yuchen Wang

Northwestern University ME 495: Embedded Systems in Robotics (Fall 2018)

Introduction

Objective

The goal of this project is for Baxter to pick up different blocks from the table and stack them on top of each other.

Overview

Basic Introduction

Baxter recognizes all the blocks on the table through image processing. The location, size, and color of each block are passed to the robot, which calculates the goal position of each movement. The robot arm picks up the largest block first and places it at a fixed position. It then repeats the previous step, finding the next smaller block and stacking it on the previous one.

All the blocks were 3D printed, with sizes ranging from 2"x2"x2" up to 3"x3"x2". The blocks are either red or yellow.

Computer Vision

Baxter's right-hand camera has a decent resolution that works well for this project, so there is no need for an external webcam. A node was written to process the raw image data using the OpenCV library. When the user clicks on the camera image view to pick a color, the node fetches the RGB value of the clicked pixel and sets a boundary range around that value (in our case the boundary was ([B-30, G-30, R-30], [B+30, G+30, R+30])). All contours that match the RGB boundary are detected and labeled, then sorted by area from largest to smallest. The node finally publishes a topic containing the center coordinates (in pixel values) of all the contours, sorted by area, and the angle each contour is tilted with respect to the x-axis.
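The color-boundary step can be sketched in NumPy. This is an illustrative reconstruction, not the project's `image_process.py`; the function names are assumptions, and the actual node would run the equivalent of `cv2.inRange` followed by `cv2.findContours` on the resulting mask:

```python
import numpy as np

def color_bounds(pixel_bgr, margin=30):
    """Build the ([B-30, G-30, R-30], [B+30, G+30, R+30]) boundary
    around the clicked pixel, clamped to the valid 0..255 range."""
    p = np.asarray(pixel_bgr, dtype=np.int16)
    lower = np.clip(p - margin, 0, 255).astype(np.uint8)
    upper = np.clip(p + margin, 0, 255).astype(np.uint8)
    return lower, upper

def color_mask(image_bgr, lower, upper):
    """Binary mask of pixels whose B, G and R channels all fall inside
    the boundary (the NumPy equivalent of cv2.inRange)."""
    return np.all((image_bgr >= lower) & (image_bgr <= upper), axis=-1)
```

In the real node, the contours found on this mask would then be sorted largest-first, e.g. with `sorted(contours, key=cv2.contourArea, reverse=True)`, before publishing their centers and tilt angles.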

Translation from Pixel Coordinate to World Coordinate

  • Our group uses Baxter's right-hand camera to capture the blocks' positions on the table. Their positions are calculated from the blocks' pixel coordinates published by the computer vision node, using the pinhole camera model (intrinsic and extrinsic matrices).
  • The intrinsic matrix is obtained from camera calibration using the ROS package camera_calibration. The extrinsic matrix is obtained from the TF tree from camera to base. Since the table is flat, the z coordinate of the blocks can be obtained from the IR range sensor in the left hand.
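The pixel-to-world translation can be sketched with the standard pinhole model. This is an illustrative reconstruction, not the project's `RM_Qua.py`; the function name and argument layout are assumptions:

```python
import numpy as np

def pixel_to_world(u, v, z_cam, K, T_base_cam):
    """Back-project pixel (u, v) to a 3-D point in the robot base frame,
    given the depth z_cam along the camera axis (here obtained from the
    left-hand IR range sensor, since the table is flat).

    K          -- 3x3 intrinsic matrix (from camera_calibration)
    T_base_cam -- 4x4 homogeneous camera-to-base transform (from the TF tree)
    """
    # Ray through the pixel in normalized camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so its z component equals the measured depth.
    p_cam = ray * (z_cam / ray[2])
    # Express the point in the base frame.
    p_base = T_base_cam @ np.append(p_cam, 1.0)
    return p_base[:3]
```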

Robot Control

Baxter was programmed to begin at the same initial position each time. When the list of block positions arrives, the robot arm moves to a certain height above the target block, then comes down to pick it up. After that, the arm moves to the goal position to place the block.

For robot arm control, the Cartesian Path Planner Plug-In in MoveIt was used to generate waypoints between the starting position and the goal position, so that the robot arm does not move in unusual ways and avoids hitting obstacles. The MoveIt planner was then used to generate a plan that moves the arm to the goal position along the waypoints.
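The waypoint idea can be illustrated with a minimal NumPy sketch. This is not MoveIt's Cartesian planner (which additionally solves IK for every waypoint and checks for joint-space jumps); the function name and the 1 cm default step are assumptions:

```python
import numpy as np

def cartesian_waypoints(start, goal, step=0.01):
    """Interpolate straight-line Cartesian waypoints between two
    positions, spaced at most `step` metres apart."""
    start = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    # Number of segments needed to keep each hop under `step`.
    n = max(int(np.ceil(np.linalg.norm(goal - start) / step)), 1)
    return [start + (goal - start) * i / n for i in range(n + 1)]
```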

During the movement, the range sensor in the left hand was used to detect whether the block was successfully attached to Baxter's left hand. If the distance reported by the left-hand sensor was less than 0.1 m, Baxter assumes that the block was not picked successfully. In that case, the height of the next placement is not increased, and Baxter goes back to pick up the block that was not picked successfully last time.
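The pick-success check and retry logic described above can be sketched as pure-Python helpers. The function names and the height bookkeeping are hypothetical; only the 0.1 m threshold and the retry behavior come from the text:

```python
IR_FAIL_THRESHOLD = 0.1  # metres; below this, assume the pick failed

def pick_succeeded(ir_range):
    """Interpret the left-hand IR range reading as a pick-success flag."""
    return ir_range >= IR_FAIL_THRESHOLD

def next_action(ir_range, stack_height, block_height, block_idx):
    """On success, raise the placement height and move to the next block;
    on failure, keep the height unchanged and retry the same block."""
    if pick_succeeded(ir_range):
        return stack_height + block_height, block_idx + 1
    return stack_height, block_idx
```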

Implementation

Following is the workflow of our project (workflow diagram).

Launch

Nodes

Computer Vision Node

  • image_process.py

  • Subscribed Topic:
    /cameras/right_hand_camera/camera_info
    /cameras/right_hand_camera/image
    /Confirm_pos

  • Published Topic: /location

Coordinate Translation Node

  • RM_Qua.py

  • Subscribed Topic: /location
    /robot/range/left_hand_range/state

  • Published Topic: /world_location

Robot Control Node

Starting Task Node

message

float32[] x
float32[] y
float32[] theta

srv

float32 x
float32 y
float32 i
---
float32 flag

Work Process

  1. Nodes Starting (Debugging Mode)
rosrun baxter_interface joint_trajectory_action_server.py
roslaunch stacking_test demo_baxter.launch
rosrun stacking_test IKServer.py
rosrun stacking_test image_process.py
rosrun stacking_test RM_Qua.py
rosrun stacking_test position_sub_client.py

Then click the blocks in the camera image window, making sure only blocks are selected and no noise points are included.

rosrun stacking_test Confirm_Pos.py

To start another task after completion, close the Confirm_Pos node and start it again.

  2. Launch Starting (Working Mode)
rosrun baxter_interface joint_trajectory_action_server.py
roslaunch stacking_test stacking_part1.launch

Then click the blocks in the camera image window, making sure only blocks are selected and no noise points are included.

roslaunch stacking_test stacking_part2.launch

To start another task after completion, close stacking_part2.launch and start it again.

Conclusion

Further Improvements

The team could improve the project by making a Hanoi Tower Game.

We could also further improve our controller so that Baxter could accomplish obstacle avoidance during trajectory movement.
