UAV Setup Guides (2024)
Build a drone in a day! (Disclaimer: may take more than a day)
Warning! Be careful when handling the Raspberry Pi: avoid placing it on metallic/conductive surfaces, as it will short circuit immediately. Also, be careful with the GPIO ports; any misuse can short circuit them and you will have to provide a replacement. Wait for instructions if you do not feel confident using it safely.
At the time of writing, a basic configuration for a UAV with an on-board computer would use the following pieces of software to operate:
- Operating System (Linux) - ideally some version of Ubuntu to enable ROS
- Robotic Operating System (ROS) - to allow modular and expandable software development
- A flight controller/autopilot bridge - to allow communication with some form of a low-level controller
- Navigation software - to command the flight controller on what to do
- Data collection/processing - whatever else is required for the UAV to complete its task
With this "typical" setup, there are many possible hardware choices. For EGB, the hardware looks like the following:
- Airframe - S300 Frame with a standard quad autopilot-supported configuration
- Autopilot - Pix32 v6 Autopilot (running the PX4 firmware, FMU-V6C, v1.13.2)
- Computer - A Raspberry Pi device (running Ubuntu MATE Focal 20.04)
- Understand what is ROS.
- Understand why we use ROS.
- Take note of what useful resources are available.
- Take note of the general ROS wiki tutorials.
Reference material regarding ROS and initial UAV setup:
- Installing ROS (Noetic).
- Set up your workspace.
- Creating a ROS package.
- Understand how to compile your workspace.
- Learn how to use git in 5 steps.
- Install and plan out development for any additional software:
- Common Setup: Follow the steps listed in the QUTAS Flight Stack guide below.
- Autopilot & Navigation: Follow the steps listed in the Autopilot & Flight Control guide below.
- Imagery Processing:
- See the Image Processing - ROS Interfaces page for information on configuring your camera and logging data.
- Follow the steps listed in the On-Board Image Processing guide below.
- Payload & Deployment: Follow the steps listed in the Payload & Deployment guide below.
- Operator Interfaces: Follow the steps listed in the Operator Interfaces guide below.
Reference material and further information that may assist system development:
- Understand ROS Messages, Topics, and Services (a minimal publisher/subscriber sketch appears at the end of this list):
  - A Message is like a mailed letter, with a specific set of internal variables. For example, the message std_msgs/Header contains the following data variables:
    uint32 seq
    time stamp
    string frame_id
  - A Topic is like a shared mailbox for ROS Nodes to send messages to (publish messages) and receive messages from (subscribe to messages).
  - A Service is like a private mailbox where you can send out a letter (a Service Request), have something happen, then get a return letter with a result (a Service Response).
- Read up about ROS Names:
  - If a node's name is test_node, and its namespace is /robot:
    - Publishing the global topic /foo/bar will create the topic: /foo/bar
    - Publishing the relative topic foo/bar will create the topic: /robot/foo/bar
    - Publishing the private topic ~foo/bar will create the topic: /robot/test_node/foo/bar
  - Using private topics ensures that unrelated topics won't conflict. For example, two different nodes can have the topic name ~feedback, which will be resolved as the separate topics /node_1/feedback and /node_2/feedback.
- Understand how to use roslaunch, mainly how to:
  - Create a launch file and run it (see the example in the uavasr_emulator package). Run it with: roslaunch uavasr_emulator emulator.launch
  - Use the remap tag to change topic names and link nodes together without editing node source code.
  - Use a single launch file to start multiple ROS Nodes.
- Understand how ROS Parameters work:
- Parameter client examples for Python and C++.
- Use Parameters in a ROS Launch file.
- Further reading on Dynamic Reconfigure.
- Read up on rosbag and record data in ROS:
  - Record data with rosbag record /topic/example, with useful flags:
    - -a: record all topics (use care if raw images are being captured)
    - -d: record for a specific duration
    - --node: record all topics from a specific node
  - Replay data with rosbag play filename.bag, with useful flags:
    - -s: start the replay at a specific time in the recording
    - -d: only replay data for a specific duration of time
    - -l: loop the replay after it's finished (can be used with -s and -d to replay a recording segment on loop)
    - --clock: output clock data (use with the use_sim_time parameter to use the timestamps captured in the bag file)
- Visualization and Feedback of data in ROS:
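To tie these concepts together, here is a minimal publisher/subscriber sketch (all names here are illustrative, not from any QUTAS package). Note how the private parameter ~rate resolves under the node's name, while the relative topic chatter resolves in the node's namespace:

#!/usr/bin/env python3
# Minimal sketch: a node that publishes and subscribes on the same topic
# (node name, topic name, and parameter name are all illustrative)
import rospy
from std_msgs.msg import String

def callback(msg_in):
    rospy.loginfo("I heard: %s" % msg_in.data)

if __name__ == '__main__':
    rospy.init_node('demo_node')
    # Private parameter: resolves to /demo_node/rate
    rate_hz = rospy.get_param("~rate", 1.0)
    # Relative topic: resolves to /chatter (in the root namespace)
    pub = rospy.Publisher('chatter', String, queue_size=10)
    sub = rospy.Subscriber('chatter', String, callback)
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        pub.publish(String(data="Hello!"))
        rate.sleep()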
The QUTAS Flight Stack (QFS) comprises the following software:
Before continuing, make sure you have done the following steps:
- Installed ROS correctly and are able to run roscore successfully (refer here).
- Correctly set up and initialized your catkin workspace.
- Installed and configured wstool and rosdep:
sudo apt-get install python3-wstool python3-rosdep
sudo rosdep init
rosdep update
- Set up any additional access programs (i.e. SSH) if remote access is needed.
First we need to prepare the workspace:
cd ~/catkin_ws
wstool init src
Next, we download the QFS definitions:
cd ~/catkin_ws/
QFS_PACKAGE=qfs_noetic
curl https://raw.githubusercontent.com/qutas/info/master/Stack/$QFS_PACKAGE.rosinstall > /tmp/$QFS_PACKAGE.rosinstall
wstool merge -t src /tmp/$QFS_PACKAGE.rosinstall
Once you are happy with the added packages, we update and rebuild the workspace:
cd ~/catkin_ws/
wstool update -t src
Next, make sure you have all the prerequisite packages installed:
rosdep install --from-paths src --ignore-src -r -y
Next, compile your workspace:
catkin_make
source ~/catkin_ws/devel/setup.bash
Finally, if you have added any rqt packages that do not show up in the plugin list, you may need to start rqt once with this command instead:
rqt --force-discover
With the flight stack set up, you should check out some of the links in the package listing for more information. If you are here for the first time, or just want to try out a demonstration, it is recommended that you take a look at one of the simulator packages.
Note: The steps that you have completed will have downloaded and updated the packages to their latest versions, and also compiled the packages. You will not have to repeat these steps when looking at the individual packages, and can simply jump straight to the "how to run" examples.
To perform most of the back-end setup and ROS integration, the QUTAS Lab 450 is used. The main component is launched with the following command:
roslaunch qutas_lab_450 environment.launch
Provided that the OptiTrack server is running and that objects are loaded into the OptiTrack Tracker, the tracking information should automatically be added to ROS.
Some initial steps are required before continuing with the rest of this guide. It is recommended that you take the time to understand the flight controller and how to configure it.
The PX4 firmware is used to perform a majority of the onboard flight control tasks. This guide has been written and tested with Version 1.13.2. Information on configuring PX4 autopilots can be found here.
Install QGroundControl on your local machine using these instructions. This guide has been written and tested with Version 4.1.3.
The general process to set up PX4 for the flight environment is as follows: open QGroundControl, connect your FCU using a USB cable, and flash the latest stable firmware. Once the new firmware is flashed, connect the UAV to QGroundControl and run through the necessary sensor calibration (or recalibration) steps for your UAV and the PX4 firmware.
Then a number of flight controller parameters need to be edited within the QGroundControl software in order to enable OptiTrack based localisation:
EKF2_EV_DELAY = 50 // Default 175
EKF2_AID_MASK = 24 // Enable Bits 3 and 4 (Vision Position and Vision Yaw Fusion) and Disable GPS Fusion
EKF2_HGT_MODE = 3 // Disable Barometric Height and Enable Vision Height
RTL_RETURN_ALT = 2 // These edit the failsafe mode to prevent crashing into the ceiling
RTL_DESCEND_ALT = 2
Finally, configure the flight controller to accept input from the on-board computer using the TELEM3 port (see Pix32 v6 Mini-base Ports, with further details under Onboard):
MAV_1_CONFIG = TELEM 3 // MAV_1_CONFIG is often used to map the TELEM 3 port to the correct settings
MAV_1_FLOW_CTRL = Force off // Sets flow control off, not wired in the current serial cable.
MAV_1_MODE = Onboard // Sets the expected input type to be from an "onboard" system
SER_TEL3_BAUD = 921600 // A baudrate 921600 or higher recommended for on-board applications
Remember to connect your on-board computer to the flight controller using a USB to UART Adaptor!
It is assumed that you have configured your autopilot, installed ROS, and set up a catkin workspace (located at ~/catkin_ws).
Interfacing the autopilot with ROS is the next big step. To do this, the MAVROS software is highly recommended.
Installation Steps
On Ubuntu, MAVROS is supported and available through the official ROS repositories. There are two main packages, "mavros" and "mavros-extras", and it is recommended that both are installed. Run the following commands (reference material):
sudo apt install ros-noetic-mavros ros-noetic-mavros-extras
To finish the installation, a set of geographic datasets may need to be installed (without these, MAVROS may not start):
roscat mavros install_geographiclib_datasets.sh | sudo bash
Make sure to follow the instructions on accessing serial adapters in Linux. Now, with the telemetry connection established, find out the device name on the computer that it is plugged into (it will most likely be something like /dev/ttyUSB0). Note this down, as it will be used in the next step!
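If you are unsure which name the adapter was assigned, a quick way to check on a typical Linux setup is to list the candidate devices, or to watch the kernel log while plugging the adapter in:
ls /dev/ttyUSB* /dev/ttyACM*
dmesg | grep -i tty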
Before continuing, it is recommended that you look at the MAVROS page on the ROS Wiki to learn about each of the different plugins that are available.
Due to the intricacies of using GPS enabled autopilots indoors, better results may be achieved using a ROS-based control stack to perform indoor position control of an autopilot.
The qutas_lab_450, spar, and mavros packages come with a selection of launch files that you can use to connect straight to the UAV. It is recommended you copy these to a local directory so you can edit them freely without having to worry about your setup getting erased when you perform an update. The following commands will create an area to keep your personal launch files, and copy in the template ones.
To get the base position control software ready for configuration, run:
mkdir -p ~/catkin_ws/launch
cd ~/catkin_ws/launch
roscp qutas_lab_450 px4_flight_control.launch ~/catkin_ws/launch/control.launch
roscp mavros px4_pluginlists.yaml ./
roscp mavros px4_config.yaml ./
The three files that have been copied into your launch directory do the following:
- control.launch: the generic launch file that loads spar and mavros (and some other useful nodes!)
- px4_pluginlists.yaml: a whitelist/blacklist of plugins for MAVROS to load (see the list of plugins for more information)
- px4_config.yaml: a configuration file for all plugins
From here, you will need to edit the stock control.launch file to specify your UAV:
nano ~/catkin_ws/launch/control.launch
Then change the following lines to the appropriate values for "uav_name" and the home file directory ("uavusrgX"):
<arg name="uav_name" default="uav" />
Next, you must tell the launch file where to look to find the pluginlists and config files. Open the launch file, and look for the lines:
<arg name="pluginlists_yaml" value="$(find mavros)/launch/px4_pluginlists.yaml" />
<arg name="config_yaml" value="$(find mavros)/launch/px4_config.yaml" />
These must be changed to the directory where your local copies are. For example:
<arg name="pluginlists_yaml" value="/home/USERNAME/catkin_ws/launch/px4_pluginlists.yaml" />
<arg name="config_yaml" value="/home/USERNAME/catkin_ws/launch/px4_config.yaml" />
Secondly, you will need to tell MAVROS what port and baud rate to use. Look for the line:
<arg name="fcu_url" default="/dev/ttyACM0:57600" />
And change it to match your setup. For example:
<arg name="fcu_url" default="/dev/ttyUSB0:115200" />
Or:
<arg name="fcu_url" default="/dev/ttyUSB0:921600" />
With the launch files configured, you can run the control programs with the following command:
roslaunch ~/catkin_ws/launch/control.launch
As MAVROS starts, it will spit out a lot of messages. The last message that will be displayed before the autopilot is connected will say something along the lines of:
MAVROS started. MY ID 1.240, TARGET ID 1.1
If you get more messages after that, then your configuration works! If not, then you may have the serial port or baud rate set incorrectly.
Spar will also output some messages to tell you that it is connected and ready to fly (they have a preceding [Spar] label on them).
It is recommended that before attempting the first flight, you detach the propellers and manually move the UAV around the flight area to ensure the navigation and position control are operating as expected. It may be useful to view the feedback data from the flight controller (/mavros/local_position/pose) in rviz to ensure the UAV thinks it is in the correct location. It may also be good to visualise feedback from spar as well!
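As a quick check without rviz, you can also print a single pose message from the command line and confirm the reported position matches where the UAV actually is:
rostopic echo -n 1 /mavros/local_position/pose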
To get the environment running, in separate terminals, run the following commands (remember to configure Distributed ROS first)...
...On the on-board computer:
Change the following lines to the appropriate values for "vrpn_object_name" inside the environment.launch file:
<arg name="vrpn_object_name" default="cdrone" />
Then you are ready to launch:
roslaunch qutas_lab_450 environment.launch
roslaunch ~/catkin_ws/launch/control.launch
You should see feedback of the system connecting.
...On the GCS:
rosrun spar_node takeoff_and_land
Here is a rough checklist to ensure the system is running as expected:
- The backend (qutas_lab_450 environment.launch) displays that vrpn_client_ros has located and is tracking the UAV.
- MAVROS (~/catkin_ws/launch/control.launch) has successfully connected to the UAV, and it is ready to be controlled: CON: Got HEARTBEAT, connected. FCU: PX4 Autopilot
- Spar has connected to MAVROS and the autopilot (qutas_lab_450 takeoff_test.launch):
[Spar] Received pose...
[Spar] Received mav_state...
- You can also start RVIZ and RQT to check any data displayed looks correct
Once you have verified that the system is running as expected, use the MAVROS GUI in rqt to do the arming procedure:
- Set the flight mode to "OFFBOARD" (and ensure that the autopilot accepts the mode change in the MAVROS terminal).
- Send the "ARM" command. In some cases, you will need to set the flight mode to POSITION to ARM and quickly change it to OFFBOARD to start the mission. There is an ARM timeout that will dis-ARM the UAV if you take too long to do the switch.
You should now see reported actions from mavros confirming arming, and spar confirming take-over of control:
[Spar] Started action server!
[Spar] flight action: Accepted
Your UAV should now perform a takeoff and, with any luck, will fly up and hover at the 1-metre waypoint for a few seconds, then land. You may have to manually disarm the UAV once it has landed.
Once you have performed a successful take-off test, the system should be ready to perform some autonomous navigation. Spar provides two example scripts to test waypoint navigation.
To make copies of them for editing:
cd ~/catkin_ws
roscp spar_node demo_wp ./
roscp spar_node demo_wp_roi ./
The first file is a simple 4-waypoint test. The second file is a 4-waypoint test with a Region of Interest (ROI) interrupt (see comments for more details).
Go through these files, understand them, and edit them as you wish. Note that these files will not perform a take-off on their own; it is possible for you to integrate this functionality yourself (see the spar takeoff script).
To run the files:
cd ~/catkin_ws
./demo_wp
...or:
cd ~/catkin_ws
./demo_wp_roi
Spar acts as a sort of daemon to handle the path-planning and navigation integration with the flight controller. It presents a ROS action that allows high-level control of the mission planning routines (e.g. "take-off", "go to", "stop", and "land").
There are multiple tools provided as part of the spar_node package for command-line use (see the spar_node readme). You can test the operation of these tools in simulation using the UAVASR Emulator and some of the pre-configured examples in the spar package (this assumes you are already familiar with running ROS nodes):
- Terminal #1 (start ROS):
roscore
- Terminal #2 (start simulation -- close and reopen this node to reset the simulation):
roslaunch spar_node spar_uavasr.launch
- Terminal #3 (visualisation -- optional):
rviz
Note: make sure you add a Pose plugin to view the current pose of the UAV. You should set the pose topic to be /uavasr/pose.
You can then run the following in a new terminal to command a take-off:
- Terminal #4:
rosrun spar_node takeoff
After take-off, the drone will be commanded to maintain a position hold at the take-off location. You can get it to move around the room by using the goto command:
- Terminal #4:
rosrun spar_node goto 3 2 1
You should now see the drone move to the location [x=3, y=2, z=1]. You can use the command rosrun spar_node goto --help to see more options for this command. Also, check the spar_node readme for information on the other commands you can run.
You can test your own navigation script with the following minimal example:
#!/usr/bin/env python3
import sys
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from spar_msgs.msg import FlightMotionAction, FlightMotionGoal

def send_wp(spar_client, x, y, z, yaw):
    goal = FlightMotionGoal()
    goal.motion = FlightMotionGoal.MOTION_GOTO
    goal.position.x = x
    goal.position.y = y
    goal.position.z = z
    goal.yaw = yaw
    goal.velocity_vertical = 1.0
    goal.velocity_horizontal = 1.0
    goal.yawrate = 1.0
    goal.wait_for_convergence = True # Wait for our "waypoint" to be reached
    goal.position_radius = 0.2
    goal.yaw_range = 0.2

    spar_client.send_goal(goal)
    # If shutdown is issued, cancel current mission before rospy is shutdown
    rospy.on_shutdown(lambda: spar_client.cancel_goal())
    # Wait for the result of the goal
    spar_client.wait_for_result()

    # Output some feedback for our motion
    return spar_client.get_state() == GoalStatus.SUCCEEDED

def main():
    # Initialise ROS
    rospy.init_node('nav_test', anonymous=True)

    # Get our action namespace (where to send flight commands to)
    action_ns = rospy.get_param("~action_topic", 'spar/flight')
    # Create our action client
    spar_client = actionlib.SimpleActionClient(action_ns, FlightMotionAction)
    rospy.loginfo("Waiting for spar...")
    # This will lock the thread until it is "done"
    spar_client.wait_for_server()

    # Make sure "ros is ok" in case we hit CTRL-C while waiting above
    if not rospy.is_shutdown():
        # Send the flight command
        rospy.loginfo("Sending preset waypoint...")
        success = send_wp(spar_client, -1.0, -1.0, 1.0, 0.0)

        # If we had an error, print and stop sending
        if success:
            rospy.loginfo("Waypoint reached!")
        else:
            rospy.logerr("Waypoint failed!")

if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
Spar is bundled with two 'starting point' examples for creating your own flight guidance system. It is recommended that after testing and understanding the example above, you read through and test these examples:
- demo_wp:
  - Performs a preset '4-waypoint' task with a range of configurable settings (waypoint radius, movement speed, etc.)
  - Terminal #4: rosrun spar_node demo_wp
- demo_wp_roi:
  - Performs a preset '4-waypoint' task, with additional code to perform a diversion and continuation if a geometry_msgs/PoseStamped message is received.
  - Terminal #4: rosrun spar_node demo_wp_roi
If you have followed the previous steps in the "Custom Flight Paths" section of Autopilot & Flight Control, you will be able to start modifying these examples immediately. Either one should provide a suitable starting point for more advanced mission planning. However, be aware that it is slightly complex to monitor an action while also monitoring another input stream (e.g. for a notification to stop at the current location). The second example, demo_wp_roi, implements the additional steps required to do this, and is suitable for more complex tasks such as stopping and resuming a mission.
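To illustrate the general pattern (this is a minimal sketch, not the actual demo_wp_roi code; the topic name "roi" and the goal values are assumptions), the key idea is to send the flight goal without blocking, so a subscriber callback can cancel it when another input arrives:

#!/usr/bin/env python3
# Sketch: interrupt a running flight action when another input arrives
import rospy
import actionlib
from geometry_msgs.msg import PoseStamped
from spar_msgs.msg import FlightMotionAction, FlightMotionGoal

spar_client = None

def callback_roi(msg_in):
    # An ROI arrived: cancel the current motion so a diversion can be flown
    rospy.loginfo("ROI received, cancelling current waypoint...")
    spar_client.cancel_goal()
    # ...a real implementation would now send a goal to fly to the ROI,
    # then resume the remaining waypoints...

if __name__ == '__main__':
    rospy.init_node('roi_monitor')
    spar_client = actionlib.SimpleActionClient('spar/flight', FlightMotionAction)
    spar_client.wait_for_server()
    sub_roi = rospy.Subscriber('roi', PoseStamped, callback_roi)

    # Send the goal without blocking (no wait_for_result()),
    # so the subscriber callback can still interrupt it
    goal = FlightMotionGoal()
    goal.motion = FlightMotionGoal.MOTION_GOTO
    goal.position.x = 1.0
    goal.position.y = 1.0
    goal.position.z = 1.0
    goal.velocity_vertical = 1.0
    goal.velocity_horizontal = 1.0
    goal.yawrate = 1.0
    spar_client.send_goal(goal)
    rospy.spin()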
Breadcrumb handles some of the background work required to perform navigation in a cluttered environment by offering a ROS service that will calculate an A* flight plan.
A minimal example is shown below. Before continuing, make sure you have downloaded breadcrumb and are running it successfully (you will also have to start the uavusr_emulator, or something similar, to provide an occupancy grid for breadcrumb to use).
#!/usr/bin/env python3
import rospy
from breadcrumb.srv import RequestPath
from breadcrumb.srv import RequestPathRequest

if __name__ == '__main__':
    # Setup ROS
    rospy.init_node('path_planner', anonymous=True)

    # Wait to connect with Breadcrumb
    # Code will error if you try to connect to a service
    # that does not exist
    rospy.wait_for_service('/breadcrumb/request_path')
    srvc_bc = rospy.ServiceProxy('/breadcrumb/request_path', RequestPath)

    # Set up a path request for breadcrumb
    req = RequestPathRequest()
    req.start.x = -1.5
    req.start.y = -1.5
    req.start.z = 1.0
    req.end.x = 1.5
    req.end.y = 1.5
    req.end.z = 1.0
    res = srvc_bc(req)

    # Breadcrumb will return a vector of poses if a solution was found
    # If no solution was found (i.e. no solution, or a bad start/end
    # request), then breadcrumb returns an empty vector
    if len(res.path.poses) > 0:
        # Loop through the solution returned from breadcrumb
        for i in range(len(res.path.poses) - 1):
            rospy.loginfo(" [%0.2f;%0.2f;%0.2f]",
                res.path.poses[i].position.x,
                res.path.poses[i].position.y,
                res.path.poses[i].position.z)
    else:
        rospy.logerr("solution not found")
A more in-depth example (for multiple waypoints) is as follows:
#!/usr/bin/env python3
import rospy
from breadcrumb.srv import RequestPath
from breadcrumb.srv import RequestPathRequest

if __name__ == '__main__':
    # Setup ROS
    rospy.init_node('path_planner', anonymous=True)

    # List of waypoints (XYZ-Yaw)
    waypoints = [[-1.0,-1.0,0.0,0.0],
                 [ 1.5,-1.0,0.0,0.0],
                 [ 1.5, 1.5,0.0,0.0]]

    # Wait to connect with Breadcrumb
    # Code will error if you try to connect to a service
    # that does not exist
    rospy.wait_for_service('/breadcrumb/request_path')
    srvc_bc = rospy.ServiceProxy('/breadcrumb/request_path', RequestPath)

    # Loop through the list of waypoints
    for i in range(len(waypoints)-1):
        # Set up a path request for breadcrumb
        req = RequestPathRequest()
        req.start.x = waypoints[i][0]
        req.start.y = waypoints[i][1]
        req.start.z = waypoints[i][2]
        req.end.x = waypoints[i+1][0]
        req.end.y = waypoints[i+1][1]
        req.end.z = waypoints[i+1][2]
        res = srvc_bc(req)

        # Breadcrumb will return a vector of poses if a solution was found
        # If no solution was found (i.e. no solution, or a bad start/end
        # request), then breadcrumb returns an empty vector
        # XXX: You could also use res.path_sparse (see breadcrumb docs)
        if len(res.path.poses) > 0:
            # Print the path to the screen
            rospy.loginfo("Segment %i:", i+1)
            rospy.loginfo("[%0.2f;%0.2f;%0.2f] => [%0.2f;%0.2f;%0.2f]",
                req.start.x,req.start.y,req.start.z,
                req.end.x,req.end.y,req.end.z)

            # Loop through the solution returned from breadcrumb
            for j in range(len(res.path.poses) - 1):
                rospy.loginfo(" [%0.2f;%0.2f;%0.2f]",
                    res.path.poses[j].position.x,
                    res.path.poses[j].position.y,
                    res.path.poses[j].position.z)
        else:
            rospy.logerr("solution not found")
Typical implementations of breadcrumb are as an "off-line path-planner", meaning that the system is run prior to starting the flight component of a mission (rather than "on-line", which would provide planning live during the mission). While either method is possible, it is typically much easier to do as much planning in advance as possible. A common approach is to define a "high-level waypoint list", as in the example above, then use breadcrumb to refine these into a longer "low-level waypoint list". This lower-level list can then be used in the guidance examples above to command the system during flight.
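As a rough sketch of that workflow (assuming the send_wp helper and spar_client from the guidance example earlier, plus the waypoints list, srvc_bc service proxy, and RequestPathRequest from the breadcrumb example above):

# Sketch: refine each high-level segment with breadcrumb, then fly the
# resulting low-level poses with spar (yaw fixed at 0.0 for simplicity)
for i in range(len(waypoints) - 1):
    req = RequestPathRequest()
    req.start.x = waypoints[i][0]
    req.start.y = waypoints[i][1]
    req.start.z = waypoints[i][2]
    req.end.x = waypoints[i+1][0]
    req.end.y = waypoints[i+1][1]
    req.end.z = waypoints[i+1][2]
    res = srvc_bc(req)

    # Fly each pose in the refined segment, in order
    for pose in res.path.poses:
        if not send_wp(spar_client, pose.position.x, pose.position.y, pose.position.z, 0.0):
            rospy.logerr("Could not reach waypoint!")
            break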
This guide explains how to use the concepts of the transformation library TF2. The primary resources used are the TF Broadcaster, and the TF Listener.
First off, download the following 3 scripts to your computer:
Next, make sure you've started roscore and launched the UAVUSR emulator as usual.
Run the Frame Transformations script. You will get a message saying "tf2_broadcaster_frames running.". The purpose of this script is two-fold: firstly, it provides the TF system with a source transformation, namely the transformation from the world to the UAV (named "map" and "uav" respectively); secondly, it outputs a static transformation that represents the transform from the UAV to the camera (named "uav" and "camera" respectively). With this data, a transformation chain is set up, that is, we can determine the transform between any of the 3 previously defined frames. We can use TF to provide us with the location of, say, where the camera is specifically relative to the map.
At this point, it may be good to open rviz, then add in the TF plugin. You should see all 3 frames displayed in the environment.
Next, run the Target Localization script. This script will do nothing immediately, as it is simply waiting for a timestamp to be sent. It subscribes to the topic /emulated_uav/target_found, and is waiting for a timestamp that represents a time when a valid target transformation has been sent to TF.
Finally, run the Target Broadcaster. This script does the following (a minimal sketch of this pattern is shown after the list):
- Sets up the required systems, then runs send_tf_target().
- Generates a timestamp that represents when a target has been found. This is simulated by grabbing the current time.
- Estimates the target's location in the camera frame (target position relative to the camera). This is simulated by putting random values into the transformation.
- The transform is sent to TF.
- The timestamp is published to /emulated_uav/target_found.
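The downloaded script may differ, but the core of this pattern could look something like the following (the offset values here are placeholders, not real estimates):

#!/usr/bin/env python3
# Sketch of a target broadcaster: send a transform to TF, then announce
# the timestamp of the detection
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from std_msgs.msg import Time

def send_tf_target(tfbr, pub_found):
    # Timestamp representing when the target was found (simulated as "now")
    time_found = rospy.Time.now()

    # Transform from the camera to the (simulated) target estimate
    t = TransformStamped()
    t.header.stamp = time_found
    t.header.frame_id = "camera"
    t.child_frame_id = "target"
    t.transform.translation.x = 0.2  # Placeholder estimate
    t.transform.translation.y = 0.1
    t.transform.translation.z = 1.0
    t.transform.rotation.w = 1.0     # No rotation

    # Send the transform to TF, then announce the timestamp
    tfbr.sendTransform(t)
    pub_found.publish(time_found)

if __name__ == '__main__':
    rospy.init_node('tf2_broadcaster_target')
    tfbr = tf2_ros.TransformBroadcaster()
    pub_found = rospy.Publisher('/emulated_uav/target_found', Time, queue_size=10)
    rospy.sleep(rospy.Duration(2.0))  # Give TF and subscribers time to connect
    send_tf_target(tfbr, pub_found)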
This is when the magic happens! Going backwards down the chain now (a minimal sketch of this lookup is shown after the list):
- The timestamp is received by the Target Localization script.
- Target Localization runs the function t = tfBuffer.lookup_transform("map", "target", msg_in.data, rospy.Duration(0.5)), which does the following:
  - From the current TF information (tfBuffer) ...
  - ... look up the transform (lookup_transform) ...
  - ... from the map ("map") ...
  - ... to the target ("target") ...
  - ... at a specific time (msg_in.data) ...
  - ... and allow this function to wait for up to 0.5 seconds for any data required to run this calculation (rospy.Duration(0.5)).
  - The output of this is t, which is data in the form of geometry_msgs/TransformStamped, and represents the pose of the "target" in the "map" frame at that time.
- From here, Target Localization simply extracts the [X,Y,Z] location of the target and dumps it to the screen (but this could be further used to command the UAV to fly above the target).
- Lastly, rviz should briefly display the "target" transformation on the screen, although it will vanish, as it was a one-off measurement that essentially can't be trusted to be correct after some amount of time.
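Again, the downloaded script may differ, but the core of this lookup could be sketched as:

#!/usr/bin/env python3
# Sketch of the target localization lookup
import rospy
import tf2_ros
from std_msgs.msg import Time

tfBuffer = None

def callback_target_found(msg_in):
    try:
        # Look up where the target was in the map frame at the time it was found
        t = tfBuffer.lookup_transform("map", "target", msg_in.data, rospy.Duration(0.5))
        rospy.loginfo("Target at: [%0.2f, %0.2f, %0.2f]",
                      t.transform.translation.x,
                      t.transform.translation.y,
                      t.transform.translation.z)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e:
        rospy.logerr(e)

if __name__ == '__main__':
    rospy.init_node('tf2_listener_target')
    tfBuffer = tf2_ros.Buffer()
    tfln = tf2_ros.TransformListener(tfBuffer)
    sub_found = rospy.Subscriber('/emulated_uav/target_found', Time, callback_target_found)
    rospy.spin()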
The OAK-D Lite Fixed Focus camera uses a standard USB-C 2.0 or 3.0 connection. It is recommended to follow the installation process provided by the manufacturer (depthai-ros). Alternative tools to manipulate videos and images are listed in the Imagery & Cameras - Useful Applications section, but watch out: some of those tools might be outdated (kinetic -> noetic).
Performing a camera calibration is a fairly critical task to ensure that any calculations that are performed using your imagery are as correct as they could possibly be. For a standard monocular camera, check the Monocular Camera Calibration tutorial. [In Progress]
A simple image processing script for Python is provided below: [In Progress]
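In the meantime, a minimal sketch of such a node might look like the following (the topic name and the "processing" step are assumptions; match the topic to your camera driver). It uses the same compressed-image conversion as the buffer-backlog fix further below:

#!/usr/bin/env python3
# Minimal sketch of a ROS image processing node
import rospy
from sensor_msgs.msg import CompressedImage
from cv_bridge import CvBridge, CvBridgeError

class ImageProcessor():
    def __init__(self):
        self.bridge = CvBridge()
        # Topic name is an assumption; check your camera driver's output
        self.sub_img = rospy.Subscriber('/camera/image/compressed', CompressedImage, self.callback_img)

    def callback_img(self, msg_in):
        # Convert ROS image to CV image
        try:
            cv_image = self.bridge.compressed_imgmsg_to_cv2(msg_in, "bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # Example "processing" step: report the image size
        rospy.loginfo("Received image: %dx%d" % (cv_image.shape[1], cv_image.shape[0]))

if __name__ == '__main__':
    rospy.init_node('image_processor', anonymous=True)
    ip = ImageProcessor()
    rospy.spin()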
Target pose estimation using SolvePnP: [In Progress]
If you are experiencing lag with your processed images, it is possible that you are experiencing a buffer backlog. To avoid processing older messages, make the following changes to the image processor example above:
class ImageProcessor():
    def __init__(self):
        self.time_finished_processing = rospy.Time(0)
        # ...

    def callback_img(self, msg_in):
        if msg_in.header.stamp > self.time_finished_processing:
            # Convert ROS image to CV image
            try:
                cv_image = self.bridge.compressed_imgmsg_to_cv2(msg_in, "bgr8")
            except CvBridgeError as e:
                print(e)

            # Rest of processing steps ...

            # Place the following below the last line of code of this function (before changes)
            self.time_finished_processing = rospy.Time.now()
By adding this additional check on the timestamp of the image, the processor should discard images that were received during the processing steps. Thus, you should always end up processing the latest image (within some small margin).
Payload actuation can be achieved in a number of ways, although it is typically convenient to utilise existing on-board computational devices if possible (e.g. the on-board computer or the flight controller) as this reduces the amount of additional components and unnecessary weight.
Interfacing with a Raspberry Pi can be achieved by utilising the GPIO pins directly using a library like RPi.GPIO. To control the servo provided, there are multiple resources online; check this video for example: RPi servo control. A minimal sketch of the idea is shown below.
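This sketch uses the RPi.GPIO PWM interface; the pin number and duty cycles are assumptions, so check your wiring and your servo's datasheet before using it:

# Minimal servo sketch using RPi.GPIO PWM
import time
import RPi.GPIO as GPIO

SERVO_PIN = 12  # Physical pin 12 (BOARD numbering) -- adjust to your wiring

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # 50Hz is typical for hobby servos
pwm.start(7.5)                 # ~7.5% duty cycle is roughly centre
time.sleep(1.0)
pwm.ChangeDutyCycle(5.0)       # Rotate towards one end
time.sleep(1.0)
pwm.ChangeDutyCycle(10.0)      # Rotate towards the other end
time.sleep(1.0)
pwm.stop()
GPIO.cleanup()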
Once you have a working actuator, extending the functions of the GPIO so that it can be interfaced with other subsystems is then the more difficult part. However, ROS can be used to simplify this process.
Note: ROS does not work when run as a super user (e.g. using sudo). Running a ROS script with sudo may cause damage to your software setup! If your system is configured correctly, you should not need to use sudo to access the GPIO and run the following script. If you must use sudo, then your system is not properly configured!
Interfacing with other subsystems can be achieved using a simple ROS node (this example works as a base to join the Python servo control you found online with ROS):
#!/usr/bin/env python3
import rospy
import RPi.GPIO as GPIO
from std_msgs.msg import Bool

sub_a = None
chan_list = [11,12] # Set the channels you want to use (see RPi.GPIO docs!)

def callback_a(msg_in):
    # A bool message contains one field called "data" which can be true or false
    # http://docs.ros.org/melodic/api/std_msgs/html/msg/Bool.html
    # XXX: The following "GPIO.output" should be replaced to meet the needs of your actuators!
    if msg_in.data:
        rospy.loginfo("Setting output high!")
        GPIO.output(chan_list, GPIO.HIGH)
    else:
        rospy.loginfo("Setting output low!")
        GPIO.output(chan_list, GPIO.LOW)

def shutdown():
    # Clean up our ROS subscriber if it was set, avoids error messages in logs
    if sub_a is not None:
        sub_a.unregister()
    # XXX: Could perform some failsafe actions here!
    # Close down our GPIO
    GPIO.cleanup()

if __name__ == '__main__':
    # Setup the ROS backend for this node
    rospy.init_node('actuator_controller', anonymous=True)
    # Setup the GPIO using physical (BOARD) pin numbering
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(chan_list, GPIO.OUT)
    # Setup the subscriber for a single actuator (use additional subscribers for extra actuators)
    sub_a = rospy.Subscriber('/actuator_control/actuator_a', Bool, callback_a)
    # Make sure we clean up all our code before exiting
    rospy.on_shutdown(shutdown)
    # Loop forever
    rospy.spin()
Running that script will allow you to then send commands to the actuator:
rostopic pub /actuator_control/actuator_a std_msgs/Bool '{data: True}'
Many options exist for making audible noises, with a common option to use speech synthesizers like espeak to read out text to the operator.
To install espeak (and the Python bindings):
sudo apt install espeak python-espeak
To test the synthesizer in the command line:
espeak "Hello World!"
Note: If you experience errors with the above command, try running it with:
PA_ALSA_PLUGHW=1 espeak "Hello World!"
To use the python bindings:
from espeak import espeak
espeak.synth("Hello World!")
# This loop is only needed to stop the script exiting
# This can be omitted if your program will not exit
# immediately after the synth
while espeak.is_playing():
    pass
Note: If you experience errors with the above command, try running it with (for example, if you saved the script as "speech.py"):
PA_ALSA_PLUGHW=1 python speech.py
ROS Integration example:
import rospy
from std_msgs.msg import String
from espeak import espeak

def callback(msg_in):
    text = "I heard %s" % msg_in.data
    espeak.synth(text)
    rospy.loginfo(text)

if __name__ == "__main__":
    rospy.init_node('synth')
    sub = rospy.Subscriber("chatter", String, callback)
    try:
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
In rviz, the Marker plugin allows you to set a specific marker in the world display, and is useful for all sorts of feedback to the operator. The Marker plugin listens for a visualization_msgs/Marker message, and will display it accordingly.
To publish a simple marker in the "map" frame of reference to rviz using Python:
#!/usr/bin/env python3
import rospy
from visualization_msgs.msg import Marker

if __name__ == "__main__":
    rospy.init_node('node_name')
    pub = rospy.Publisher("tile_marker", Marker, queue_size=10, latch=True)

    # Set up the message header
    msg_out = Marker()
    msg_out.header.frame_id = "map"
    msg_out.header.stamp = rospy.Time.now()

    # Namespace allows you to manage
    # adding and modifying multiple markers
    msg_out.ns = "my_marker"
    # ID is the ID of this specific marker
    msg_out.id = 0
    # Type can be most primitive shapes
    # and some custom ones (see rviz guide
    # for more information)
    msg_out.type = Marker.CUBE
    # Action is to add / create a new marker
    msg_out.action = Marker.ADD
    # Lifetime set to Duration(0) will make it
    # last forever
    msg_out.lifetime = rospy.Duration(0)
    # Frame Locked will ensure our marker
    # moves around relative to the frame_id
    # if this is applicable
    msg_out.frame_locked = True

    # Place the marker at [1.0,1.0,0.0]
    # with no rotation
    msg_out.pose.position.x = 1.0
    msg_out.pose.position.y = 1.0
    msg_out.pose.position.z = 0.0
    msg_out.pose.orientation.w = 1.0
    msg_out.pose.orientation.x = 0.0
    msg_out.pose.orientation.y = 0.0
    msg_out.pose.orientation.z = 0.0

    # Make a square tile marker with
    # size 0.1x0.1m square and 0.02m high
    msg_out.scale.x = 0.1
    msg_out.scale.y = 0.1
    msg_out.scale.z = 0.02

    # Make the tile a nice opaque blue
    msg_out.color.r = 0.0
    msg_out.color.g = 0.2
    msg_out.color.b = 0.8
    msg_out.color.a = 1.0

    # Publish the marker
    pub.publish(msg_out)

    try:
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
To view the marker, simply run the python node, then in rviz, set the marker topic to /tile_marker.
Instead of placing a marker at a point on the map, you could pick a frame of reference in TF2 and have rviz render the marker as though it was "attached" to that frame.
First, we will inject a "fake" target location into TF2 by adding a static transformation:
rosrun tf2_ros static_transform_publisher 1 1 0 0 0 0 map fake_target
The format for this command is "rosrun tf2_ros static_transform_publisher" to run the node, then "X Y Z Yaw Pitch Roll", where these are the location of the target and its yaw, pitch, and roll rotations (in that order), and finally "map" and "fake_target" are the frame IDs to put into TF2. So, this command would place a frame of reference called "fake_target" at the location (1,1,0) with 0 rotation relative to the "map" frame.
At this point you could check the location of the fake target in rviz using the "TF Plugin" to see how this marker is placed, and to try moving it around by changing the input variables. For a real mission, a real estimate of the target would be used instead of creating a fake target frame of reference (i.e. it would be added to TF by the image processing system).
The second step is to place a marker that is attached to this frame of reference. Run the following Python code as a ROS node:
#!/usr/bin/env python3
import rospy
from visualization_msgs.msg import Marker

if __name__ == "__main__":
    rospy.init_node('node_name')
    pub = rospy.Publisher("tile_marker", Marker, queue_size=10, latch=True)

    # Set up the message header
    msg_out = Marker()
    msg_out.header.frame_id = "fake_target"
    msg_out.header.stamp = rospy.Time.now()

    msg_out.ns = "my_marker"
    msg_out.id = 0
    msg_out.type = Marker.CUBE
    msg_out.action = Marker.ADD
    msg_out.lifetime = rospy.Duration(0)
    msg_out.frame_locked = True

    # Place the marker at the center of the frame of reference with no rotation
    msg_out.pose.position.x = 0.0
    msg_out.pose.position.y = 0.0
    msg_out.pose.position.z = 0.0
    msg_out.pose.orientation.w = 1.0
    msg_out.pose.orientation.x = 0.0
    msg_out.pose.orientation.y = 0.0
    msg_out.pose.orientation.z = 0.0

    msg_out.scale.x = 0.1
    msg_out.scale.y = 0.1
    msg_out.scale.z = 0.02

    msg_out.color.r = 0.0
    msg_out.color.g = 0.2
    msg_out.color.b = 0.8
    msg_out.color.a = 1.0

    # Publish the marker
    pub.publish(msg_out)

    try:
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
Viewing the marker in rviz is then the same as the previous example. Changing the variables in the first command (i.e. moving the frame of reference) will cause the marker to also move without needing to restart the node.