This repo combines camera intrinsics, hand-eye, and camera-laser extrinsic calibration methods for the Blaser sensor and the UR robot arm.
- Ubuntu 18.04 with ROS Melodic. Comes with Python 2.7 and OpenCV 3.2.
- Ubuntu 20.04 with ROS Noetic. Comes with Python 3.8 and OpenCV 4.2.
- blaser_ros Melodic / blaser_ros Noetic
- ximea_ros_cam (Install with the Blaser Dependencies)
- Universal_Robots_ROS_Driver
- MoveIt Melodic Tutorials / MoveIt Noetic Tutorials
- apriltag_ros
Build the blaser_ros workspace and the Blaser dependencies before proceeding.
Follow the instructions in 0.3 to install the other dependencies.
Download the install script from the repo.
To run the script:
chmod +x install
./install
Run catkin build several times (usually three) until the build succeeds.
The calibration workspace should contain the following repos in src:
- UR_arm_camera_calibration
- Universal_Robots_ROS_Driver
- MoveIt
- ximea_ros_cam
- camera_model
- apriltag
In ~/Calibration_ws/src/ximea_ros_cam/launch/example_cam.launch,
change the serial_no value to the serial number of your XIMEA camera:
<param name="serial_no" type="string" value="your-camera-serial-no" />
and set num_cams_in_bus to 1:
<param name="num_cams_in_bus" type="int" value="1" />
To change the exposure_time and white_balance_mode, add the following lines:
<param name="exposure_time" type="int" value="6000"/>
<param name="white_balance_mode" type="int" value="2"/>
Configure other camera parameters as needed.
- UR robot arm
- Camera holder (Example)
To run the auto-calibration, first set the initial joint positions for the robot arm and the checkerboard origin position. To do that, go to ~/Calibration_ws/src/multi_calibration/cfg/trajectory_planner.yaml
and change homePositions and bedOrigin, e.g. as sketched below.
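As an illustration, the entries might look like this (all values are hypothetical placeholders; keep the key layout your version of the file already uses):

```yaml
# Hypothetical values -- replace with your own setup
homePositions: [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]  # six UR joint angles (rad)
bedOrigin: [0.4, 0.1, 0.02]                           # checkerboard origin (m)
```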
Open four terminals. In the first one, connect to the arm:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ur_robot_driver <robot_type>_bringup.launch robot_ip:=[robot ip]
In the second one, start MoveIt!:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ur5e_moveit_config ur5e_moveit_planning_execution.launch
In the third one, start the XIMEA driver:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ximea_ros_cam example_cam.launch
In the last one, start the auto-calibration:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch directory:="[file position]" camera_intrinsic:=1
The robot arm will then move through the calibration trajectory.
Go to the folder where the images collected in the last step are saved:
cd ~/Calibration_ws/intrinsic
rosrun camera_model Calibration -w 7 -h 10 -s 5 -i ./ -v --camera-model mei  # use the camera_model package to perform calibration
The camera model parameters will be saved to camera_camera_calib.yaml.
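For reference, the output for the mei model typically looks something like the following (all values below are placeholders):

```yaml
%YAML:1.0
---
model_type: MEI
camera_name: camera
image_width: 640
image_height: 480
mirror_parameters:
   xi: 1.0
distortion_parameters:
   k1: -0.1
   k2: 0.05
   p1: 0.0
   p2: 0.0
projection_parameters:
   gamma1: 600.0
   gamma2: 600.0
   u0: 320.0
   v0: 240.0
```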
Run rosrun camera_model Calibration --help to see all the parameters.
Be careful about which camera model you use. For a narrow-FoV camera without heavy distortion, the pinhole model should suffice. For a large-FoV camera with a lot of distortion, use mei or kannala-brandt. Typically you can try all the camera models and choose the one with the smallest reprojection error.
Finally, examine the intrinsic parameters and results:
- Is the reprojection error reasonable? (It should be below 1.0 px.)
- Are fx and fy similar?
- Are cx and cy reasonable? (cx ≈ width / 2, cy ≈ height / 2)
- Are the image height and width correct?
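If you want to script these checks, here is a minimal sketch (all values and tolerances are hypothetical):

```python
# Quick sanity checks on pinhole-style intrinsics (placeholder values)
fx, fy, cx, cy = 600.0, 598.0, 321.5, 239.0  # hypothetical calibration result
width, height = 640, 480                     # hypothetical image size

assert abs(fx - fy) / fx < 0.05, "fx and fy should be similar"
assert abs(cx - width / 2) < 0.1 * width, "cx should be near width / 2"
assert abs(cy - height / 2) < 0.1 * height, "cy should be near height / 2"
```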
Copy the calibration result into ~/Calibration_ws/src/multi_calibration/cfg/ximea_80_calib.yaml.
After finishing the MEI model, we also need to calibrate the camera with the PINHOLE model:
Launch the same four terminals as before, but instead of running auto_calibration in camera_intrinsic mode, run it in camera_rect_intrinsic mode:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch directory:="[file position]" camera_rect_intrinsic:=1
Go to the folder where the images collected in the last step are saved and run the pinhole calibration:
source ~/Calibration_ws/devel/setup.bash
rosrun camera_model Calibration -w 7 -h 10 -s 5 -i ./ -v --camera-model PINHOLE  # use the camera_model package to perform calibration
Update the pinhole calibration result (fx, fy, cx, cy) in the rectCameraMatrix entry of ~/Calibration_ws/src/multi_calibration/cfg/calib_params.yaml.
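A sketch of what that entry might look like, assuming rectCameraMatrix stores the flattened 3x3 pinhole matrix (values are placeholders; match the layout already present in calib_params.yaml):

```yaml
# Placeholder values: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
rectCameraMatrix: [600.0,   0.0, 320.0,
                     0.0, 600.0, 240.0,
                     0.0,   0.0,   1.0]
```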
To check the calibration result, run auto_calibration.launch again with camera_intrinsic:=2, as shown below:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch directory:="[file position]" camera_intrinsic:=2
A preset rqt window will then open, showing both the raw image and the rectified image.
- UR robot arm
- Camera holder (Example)
- April tag holder (Example)
- Hand-eye calibration notebook (.ipynb; not needed after the update)
The key to fiducial marker pose estimation is a correct estimate of the marker's actual size. To guarantee the precision of the printed tag size, editing in Photoshop and printing at 100% scale are recommended.
Note: The tag size should not be measured from the outside of the tag. The tag size is defined as the distance between the detection corners, or equivalently, the length of the edge between the white border and the black border. The following illustration marks the detection corners with red Xs and the tag size with a red arrow for a tag from the 48h12Custom family.
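The measured tag size then goes into the apriltag_ros configuration. A minimal sketch of a standalone-tag entry in tags.yaml (id, name, and size are placeholders; size is in meters):

```yaml
standalone_tags:
  [
    {id: 0, size: 0.0508, name: calib_tag}
  ]
```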
Getting the camera-to-tag transform:
- Start the ximea camera node
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ximea_ros_cam example_cam.launch
- In a new window run the auto_calibration.launch in handeye mode
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch hand_eye:=1
After properly setting up the AprilTag pose estimation pipeline (which includes image undistortion and publishing the updated rectified camera matrix), the camera-to-tag transform should be available to you.
With the camera-to-tag and end-effector-to-tag measurements in hand, enter them into the calibration YAML file to obtain the end-effector-to-camera transform matrix, which is written to the hand-eye result collection file.
After launching auto_calibration.launch in hand-eye mode, an RViz window will open to help you evaluate the tag detection and measurement. The calculation result is also shown in a matplotlib plot, which you can compare against the predicted values for further evaluation.
As a sanity check for the calculation, the tag position derived from your end-effector-to-tag transform and from the end-effector-camera-tag transform chain should be exactly the same; see the sketch below.
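A minimal sketch of that check, assuming 4x4 homogeneous transforms as numpy arrays (all poses below are placeholders):

```python
import numpy as np

# Placeholder poses -- fill in from the driver, hand-eye result, and apriltag_ros
T_base_ee = np.eye(4)   # end-effector pose in the robot base frame
T_ee_cam = np.eye(4)    # hand-eye result: camera pose in the end-effector frame
T_cam_tag = np.eye(4)   # tag pose in the camera frame
T_ee_tag = np.eye(4)    # directly measured tag pose in the end-effector frame

# Tag pose via the end-effector -> camera -> tag chain
T_base_tag_chain = T_base_ee @ T_ee_cam @ T_cam_tag
# Tag pose via the direct end-effector -> tag measurement
T_base_tag_direct = T_base_ee @ T_ee_tag

# The two tag positions should agree up to calibration error
err_mm = 1000 * np.linalg.norm(T_base_tag_chain[:3, 3] - T_base_tag_direct[:3, 3])
print(f"tag position discrepancy: {err_mm:.2f} mm")
```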
The extrinsic parameters between the camera and the laser are the 3D position of the laser plane in the camera reference frame, defined by a plane equation.
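Assuming the standard laser-plane parameterization, the plane can be written as

$$a x + b y + c z + d = 0,$$

where $(x, y, z)$ is a point on the laser plane expressed in the camera frame and $(a, b, c, d)$ are the plane parameters estimated by the calibration.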
We first need to make sure that the laser stripe detection is working. The laser stripe detector performs an HSV filter with five parameters (hue_min, hue_max, sat_min, val_min, and val_ratio), defining a 3D range filter: H in [hue_min, hue_max], S in [sat_min, 255], and V in [max(val_min, image_max_val * val_ratio), 255]. Note that the V range is dynamic and is recomputed for every image.
When using a red laser, where the desired hue range contains the hue = 180 wraparound, set hue_min > hue_max and the program will generate two separate hue ranges: [hue_min, 180] and [0, hue_max].
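A minimal sketch of this filter with the red-hue wraparound, assuming OpenCV's HSV convention (H in [0, 180]); all threshold values are hypothetical:

```python
import cv2

# Hypothetical thresholds -- tune with im_hsv_viewer.py
hue_min, hue_max = 170, 10          # hue_min > hue_max: range wraps around 180
sat_min, val_min, val_ratio = 80, 60, 0.5

def laser_stripe_mask(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # V threshold is dynamic: at least val_ratio of the brightest pixel
    val_lo = max(val_min, int(hsv[:, :, 2].max() * val_ratio))
    if hue_min > hue_max:           # wraparound: two separate hue ranges
        m1 = cv2.inRange(hsv, (hue_min, sat_min, val_lo), (180, 255, 255))
        m2 = cv2.inRange(hsv, (0, sat_min, val_lo), (hue_max, 255, 255))
        return cv2.bitwise_or(m1, m2)
    return cv2.inRange(hsv, (hue_min, sat_min, val_lo), (hue_max, 255, 255))
```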
To set these parameters, first use python scripts/im_saver.py [image_topic] to save a couple of sample images, then use python scripts/im_hsv_viewer.py [image_file] to determine the HSV values of the laser stripe and set appropriate values for the threshold parameters. Load these values into a config file (todo: give example config and dir), which will be used in the calibration later.
To test these parameters, run the laser_stripe_detector node with the sample images and the config file (todo: turn on lsd visualization).
To collect more images for the laser calibration, go to ~/Calibration_ws and run the following:
Open four terminals. In the first one, connect to the arm:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ur_robot_driver <robot_type>_bringup.launch robot_ip:=[robot ip]
In the second one, start MoveIt!:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ur5e_moveit_config ur5e_moveit_planning_execution.launch
In the third one, start the XIMEA driver:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch ximea_ros_cam example_cam.launch
In the last one, start the auto-calibration in laser_cam mode:
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch laser_cam:=1
The robot arm will then move through the calibration trajectory.
After collecting the images, relaunch auto_calibration.launch with laser_cam:=2. The filter program will automatically detect and discard any noisy images, then run the laser calibration program and output the result.
Important: paste the result of the intrinsic calibration into laser_calib.yaml first.
cd ~/Calibration_ws
source devel/setup.bash
roslaunch multi_calibration auto_calibration.launch laser_cam:=2
The laser calibration result is written to calib_results.txt in the cfg folder. When you relaunch auto_calibration.launch with laser_cam:=3, the laser calibration verification program starts; it reads the calibration data from calib_results.txt and shows the following image:
roslaunch multi_calibration auto_calibration.launch laser_cam:=3
Under development.