Authors: Yiting HE, Yuyang ZHANG, Linyi HUANG, Ronghe QIU
This repository contains the REAL Team's source code for the Robothon 2023 Grand Challenge competition.
The robot platform utilized by our team in the Robothon Grand Challenge is illustrated in the figure below.
The hardware system is composed of four primary components:
- UR 5e Collaborative Robot: A six-degree-of-freedom collaborative robot mounted on a fixed table, designed to ensure precise and efficient movement.
- RealSense D455 RGBD Camera: Mounted on the last joint of the robot, this camera supplies crucial RGB information to our system, enabling accurate object recognition and localization.
- Parallel Gripper: A gripper that is mounted on the final joint of the robot and provides versatile manipulation capabilities.
- Self-designed End Effector: An end effector designed specifically for the competition tasks, mounted on the two-finger gripper.
We have divided the software into four separate sub-workspaces, each containing a driver or a module. The primary code developed for the competition is kept in the ws_robot_main workspace to minimize re-compilation and reduce dependencies between the various drivers.
- ws_realsense_driver: Realsense driver
- ws_ur_driver: UR 5e driver
- ws_robot_main: Vision and control related
- ws_vision: YOLOv5
- Ubuntu 20.04
- ROS1 Noetic
- MoveIt 1 Noetic
The following are the third-party modules utilized by the REAL Team:
The following diagram presents an overview of our software framework:
The visual system processes RGB images from the RealSense camera to localize the task board and identify triangular setpoints on the screen. It comprises two subsystems: the Board Localization System and the Screen Detection System.
The Board Localization System proceeds through the following steps (a minimal sketch of the color- and Hough-based steps follows the list):
- Identifies the blue button using HSV color space-based color recognition
- Estimates the location of the red button through neighborhood object search
- Determines the task board rotation angle using the Hough Transform
- Refines the board location with physical feedback
- Locates different task positions using the task board model
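To make the first and third steps concrete, here is a minimal OpenCV sketch of the general pattern; the HSV bounds and Hough parameters are illustrative placeholders, not the tuned values used in the competition:

```python
import cv2
import numpy as np

def find_blue_button(bgr_image):
    # Convert to HSV and threshold a blue hue range (bounds are illustrative).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest blue blob as the button candidate.
    largest = max(contours, key=cv2.contourArea)
    (x, y), _radius = cv2.minEnclosingCircle(largest)
    return int(x), int(y)

def estimate_board_angle(bgr_image):
    # Hough Transform on edges to recover a dominant board edge orientation.
    edges = cv2.Canny(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = lines[0][0]
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
```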
The diagram below illustrates the board localization process:
The Screen Detection System detects triangular shapes and screen positions for use in the slider move task. We employ a YOLOv5x model as the backbone, fine-tuning it with our self-collected dataset.
The diagram below displays the outputs of the detection system:
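For reference, here is a minimal sketch of how such a fine-tuned model can be queried through the standard YOLOv5 `torch.hub` entry point; the weights file name, input image, and confidence threshold are illustrative assumptions:

```python
import torch

# Load a custom fine-tuned YOLOv5 checkpoint (path is illustrative).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.5  # confidence threshold, tune as needed

# Run inference on one RGB frame from the camera.
results = model('frame.jpg')

# Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class id.
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(f'class {int(cls)} at {xyxy} (conf {conf:.2f})')
```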
We use the Real-Time Data Exchange (ur_rtde) library to control the UR robot through its control interface. To accommodate the complexity and diversity of the tasks, we have designed three control methods based on the properties of each specific task (a minimal sketch follows the list):
- Force and Compliance Control for Insertion
- Following Fixed Trajectories
- Cable Wrapping
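As a minimal sketch of the first two methods using the ur_rtde Python API (the robot IP, waypoints, and force parameters below are illustrative placeholders, not our task values):

```python
from rtde_control import RTDEControlInterface
from rtde_receive import RTDEReceiveInterface

# Connect to the UR controller (the IP address is a placeholder).
rtde_c = RTDEControlInterface("192.168.1.10")
rtde_r = RTDEReceiveInterface("192.168.1.10")

# Follow a fixed Cartesian trajectory: each pose is [x, y, z, rx, ry, rz].
waypoints = [
    [0.30, -0.20, 0.25, 0.0, 3.14, 0.0],
    [0.30, -0.10, 0.20, 0.0, 3.14, 0.0],
]
for pose in waypoints:
    rtde_c.moveL(pose, 0.1, 0.3)  # speed [m/s], acceleration [m/s^2]

# Compliant insertion: regulate 10 N along the tool z axis.
task_frame = [0, 0, 0, 0, 0, 0]
selection_vector = [0, 0, 1, 0, 0, 0]      # force control on z only
wrench = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0]   # target wrench [N, Nm]
limits = [2.0, 2.0, 0.1, 1.0, 1.0, 1.0]    # per-axis speed/deviation limits
rtde_c.forceMode(task_frame, selection_vector, wrench, 2, limits)
# ... monitor rtde_r.getActualTCPForce() until contact, then stop ...
rtde_c.forceModeStop()
rtde_c.stopScript()
```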
- Register the server's public key:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
- Add the server to the list of repositories:
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
- Install the libraries:
sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
cd <your project directory>/ws_realsense_driver
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
catkin_make install
echo "source <your install path>/Robothon_REAL/ws_realsense_driver/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
The above steps are adapted from https://github.com/leggedrobotics/realsense-ros-rsl. For the Python wrapper, choose Method 2 (the RealSense™ pip distribution):
pip install pyrealsense2
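A quick way to verify the binding works is the minimal capture sketch below (assuming a connected D455; the stream resolution and frame rate are illustrative):

```python
import pyrealsense2 as rs

# Start a color stream and grab one frame to confirm the camera is visible.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    color = frames.get_color_frame()
    print("Got color frame:", color.get_width(), "x", color.get_height())
finally:
    pipeline.stop()
```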
cd <your install path>/Robothon_REAL/ws_ur_driver
sudo apt update -qq
rosdep update
rosdep install --from-paths src --ignore-src -y
cd <your project directory>/ws_ur_driver
catkin_make
echo "source <your install path>/Robothon_REAL/ws_ur_driver/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
The above steps are adapted from https://github.com/UniversalRobots/Universal_Robots_ROS_Driver.
sudo apt install python3-pip
pip install pyserial
sudo usermod -aG dialout $USER
# reboot so that the dialout group change takes effect
sudo add-apt-repository ppa:sdurobotics/ur-rtde
sudo apt-get update
sudo apt install librtde librtde-dev
pip install --user ur_rtde
The steps above are adapted from https://sdurobotics.gitlab.io/ur_rtde/installation/installation.html.
catkin build
First, download our trained model from
https://drive.google.com/drive/folders/11PZrns0N1Gya7y67oJtZVVB2nY58xLxI?usp=sharing and put the model in the folder:
<your project directory>/ws_vision/ws_vision/src/yolov5_ros/src/yolov5/models/
cd <your project directory>/ws_vision
catkin build
cd <your project directory>
echo "source <your project directory>/ws_realsense_driver/devel/setup.bash" >> ~/.bashrc
echo "source <your project directory>/ws_ur_driver/devel/setup.bash" >> ~.bashrc
echo "source <your project directory>/ws_robot_main/devel/setup.bash" >> ~.bashrc
echo "source <your project directory>/ws_vision/devel/setup.bash" >> ~.bashrc
NOTE: Please run each launch file in a separate terminal.
- Launch the ur_rtde Controller
rosrun robot_main ur_rtde_test.py
After this step, you can connect the robot control panel to ROS.
- Launch the Camera Driver
roslaunch realsense2_camera rs_camera.launch
- Launch Screen and Triangle Detection
roslaunch yolov5_ros yolov5.launch
rosrun robothon2023_vison bbox_subscriber.py
- Run Task Board Localization
rosrun robothon2023_vison localize.py
- Yiting HE, Harryting
- Yuyang ZHANG, Ryan Zhang
- Linyi HUANG, Lucky Huang
- Ronghe QIU, ConnerQiu