UAV Setup Guides (Imagery)
If utilizing a low-cost on-board camera, you will most likely be presented with either a standard USB camera or a Raspberry Pi Camera. It is recommended that you use video_stream_opencv and raspicam_node respectively; however, some alternative options are listed in the Imagery & Cameras - Useful Applications section.
Regardless of your choice, the most convenient way to prepare your subsystem is to start by creating a launch file, as it will allow you to launch a camera interface, your image processing nodes, and any interfacing between them, all in one step.
Firstly, create a new file somewhere easily accessible:
mkdir -p ~/catkin_ws/launch
nano ~/catkin_ws/launch/img_subsystem.launch
Now, copy in the appropriate launch file example (video_stream_opencv or raspicam_node). Save and exit.
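If you just want a starting point, the sketch below shows the general structure such a launch file might take for a USB camera driven by video_stream_opencv (with a raspicam_node alternative commented out). The argument names, launch file names, and the image processing node details are illustrative assumptions and may differ between package versions, so prefer the launch file examples referred to above and check each package's own launch files for the exact interface.
<?xml version="1.0"?>
<launch>
  <!-- Camera driver (Option A): USB camera via video_stream_opencv.
       Argument names here are assumptions - check the package's camera.launch for the real interface. -->
  <include file="$(find video_stream_opencv)/launch/camera.launch">
    <arg name="camera_name" value="camera" />
    <arg name="video_stream_provider" value="0" />
    <arg name="fps" value="30" />
  </include>

  <!-- Camera driver (Option B): Raspberry Pi camera via raspicam_node.
       The launch file name depends on your camera model and resolution. -->
  <!-- <include file="$(find raspicam_node)/launch/camerav2_410x308_30fps.launch" /> -->

  <!-- Your image processing node (package and script names are placeholders) -->
  <!-- <node pkg="my_img_processing" type="image_processor.py" name="egh450_image_processor" output="screen" /> -->
</launch>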
You should now be able to open two terminals:
- In the first, run: roslaunch ~/catkin_ws/launch/img_subsystem.launch
- In the second, run rqt, add an image view plugin, and select the image topic from your camera node.
Performing a camera calibration is critical to ensure that any calculations performed using your imagery are as accurate as possible. For a standard monocular camera, check the Monocular Camera Calibration tutorial.
Note: If you are calibrating a Raspberry Pi camera running raspicam_node, you will need to use a slightly different process, as the raw image stream is disabled by default. Refer to the instructions in the node's README for further information. Remember to set the board size and square size correctly when running those instructions (specifically the --size 8x6 and --square 0.074 parts of the commands in the README)!
Once the camera calibration has been completed, remember to "commit" the final results. This should instruct your camera node to save them internally, such that they are automatically loaded when the camera node starts. Restarting your camera node should give a message that looks like:
[ INFO] [...]: Camera calibration file /home/piros/.ros/camera_info/camera_v2_460x380.yaml loaded!
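Once loaded, the calibration is published alongside the imagery on a camera_info topic, which is what downstream nodes use for any image-based geometry. A minimal sketch of reading it is shown below; the topic name assumes raspicam_node's default namespace, so adjust it to match your camera node (use "rostopic list" to check):
#!/usr/bin/env python
import rospy
import numpy as np
from sensor_msgs.msg import CameraInfo

def callback_info(msg_in):
    # K is the row-major 3x3 intrinsic matrix, D holds the distortion coefficients
    camera_matrix = np.array(msg_in.K).reshape(3, 3)
    dist_coeffs = np.array(msg_in.D)
    rospy.loginfo("Got calibration: fx=%0.1f, fy=%0.1f", camera_matrix[0, 0], camera_matrix[1, 1])

if __name__ == '__main__':
    rospy.init_node('camera_info_check', anonymous=True)
    # Topic name is an assumption - check which camera_info topic your camera node publishes
    sub_info = rospy.Subscriber('/raspicam_node/camera_info', CameraInfo, callback_info)
    rospy.spin()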
A simple Python image processing script is provided below:
#!/usr/bin/env python
import sys
import rospy
import cv2

from std_msgs.msg import String
from sensor_msgs.msg import CompressedImage
from cv_bridge import CvBridge, CvBridgeError

class ImageProcessor():
    def __init__(self):
        # Set up the CV Bridge
        self.bridge = CvBridge()

        # Set up the subscriber (input) and publisher (output)
        self.sub_img = rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage, self.callback_img)
        self.pub_img = rospy.Publisher('~image_raw/compressed', CompressedImage, queue_size=1)

        # Make sure we clean up all our code before exiting
        rospy.on_shutdown(self.shutdown)

    def shutdown(self):
        # Unregister anything that needs it here
        self.sub_img.unregister()

    def callback_img(self, msg_in):
        # Convert ROS image to CV image
        try:
            cv_image = self.bridge.compressed_imgmsg_to_cv2( msg_in, "bgr8" )
        except CvBridgeError as e:
            rospy.logerr(e)
            return

        # ===================
        # Do processing here!
        # ===================
        gray = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
        cv2.circle(gray, (200, 200), 20, (0,255,0), thickness=2)
        # ===================

        # Convert CV image to ROS image and publish
        try:
            self.pub_img.publish( self.bridge.cv2_to_compressed_imgmsg( gray ) )
        except CvBridgeError as e:
            rospy.logerr(e)

if __name__ == '__main__':
    # Initialize
    rospy.init_node('egh450_image_processor', anonymous=True)
    ip = ImageProcessor()
    rospy.loginfo("[IMG] Processing images...")

    # Loop here until quit
    rospy.spin()
Target pose estimation can then be performed using OpenCV's solvePnP, which estimates the position and orientation of a known target from its detected image coordinates, the target's physical dimensions, and the camera calibration.
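A minimal sketch of the idea is shown below; the target dimensions, detected pixel coordinates, and camera intrinsics are all placeholder values that would come from your own target detection and calibration:
#!/usr/bin/env python
import numpy as np
import cv2

# 3D corner locations of the target in its own frame (metres).
# A 200 mm square target is assumed here purely for illustration.
object_points = np.array([[-0.1, -0.1, 0.0],
                          [ 0.1, -0.1, 0.0],
                          [ 0.1,  0.1, 0.0],
                          [-0.1,  0.1, 0.0]], dtype=np.float32)

# Matching 2D pixel coordinates of those corners (placeholder values -
# these would come from your detection step in callback_img())
image_points = np.array([[180.0, 140.0],
                         [280.0, 142.0],
                         [278.0, 238.0],
                         [178.0, 236.0]], dtype=np.float32)

# Camera intrinsics and distortion coefficients (placeholders - use the
# values from your camera_info topic / calibration file instead)
camera_matrix = np.array([[320.0,   0.0, 230.0],
                          [  0.0, 320.0, 190.0],
                          [  0.0,   0.0,   1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)

# Solve for the pose of the target relative to the camera
success, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)

if success:
    # tvec is the target position in the camera frame (metres);
    # rvec is an axis-angle rotation (use cv2.Rodrigues() for a rotation matrix)
    x, y, z = tvec.flatten()
    print("Target at x=%0.2f, y=%0.2f, z=%0.2f (camera frame)" % (x, y, z))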
If you are experiencing lag with your processed images, it is possible that you are experiencing a buffer backlog. To avoid processing older messages, make the following changes to the image processor example above:
class ImageProcessor():
    def __init__(self):
        self.time_finished_processing = rospy.Time(0)
        # ...

    def callback_img(self, msg_in):
        # Only process this image if it arrived after we finished the previous one
        if msg_in.header.stamp > self.time_finished_processing:
            # Convert ROS image to CV image
            try:
                cv_image = self.bridge.compressed_imgmsg_to_cv2( msg_in, "bgr8" )
            except CvBridgeError as e:
                rospy.logerr(e)
                return

            # Rest of processing steps ...

            # Record when processing finished (place this after the last line of the function)
            self.time_finished_processing = rospy.Time.now()
By adding this additional check on the timestamp of the image, the processor should discard images that were received during the processing steps. Thus, you should always end up processing the latest image (within some small margin).
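A complementary option (not part of the example above) is to limit how much rospy buffers in the first place by passing queue_size and buff_size to the subscriber; the buffer size below is just an assumed value that is comfortably larger than a single compressed frame:
# In __init__(), keep only the most recent image and enlarge the receive buffer
# so one compressed frame does not get split across multiple socket reads
self.sub_img = rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage,
                                self.callback_img, queue_size=1, buff_size=2**24)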