\documentclass[12pt]{article}
\usepackage{sbc-template}
\usepackage{graphicx,url}
\usepackage[utf8]{inputenc}
% UTF-8 encoding is recommended by ShareLaTeX
\sloppy
\title{Team Description Paper of \\
Indian Institute of Technology Kharagpur for IARC}
\author{Aditya Agarwal, Soumyadeep Mukherjee, Manash Pratim Das,\\
Gaurav Gardi, Sairam K, Kumar Ankit, Kumar Krishna Agarwal,\\
Vishnu Sharma}
\begin{document}
\maketitle
\begin{abstract}
This paper reports the current preparation strategy of Aerial Robotics Kharagpur,
participating in IARC Mission 7 (2016). Our main goal is robust indoor
localization in GPS-denied environments, supported by optical flow sensors.
Other features, such as detection of the ground iRobots and differentiation
between the target bots and the patrol bots, are also discussed, followed by a
brief description of the herding AI algorithm to be used.
\end{abstract}
\section{INTRODUCTION}
Our research group, Aerial Robotics Kharagpur, started in January 2015 with IARC
as its prime target, so we began with the problem statement of IARC
Mission 7. The team is organized into two major domains, ``controls''
and ``software''. With the Robot Operating System (ROS) as the base, we developed
our simulation environment in ROS and Gazebo in order to speed up software
development and testing while the hardware gets ready. Since Mission 7 takes place
indoors, we built an AprilTags-based indoor true-value setup. We have a website[18]
and a GitHub organization[19].
\section{SIMULATION}
Gazebo, an open-source robotics simulator, is used to simulate the robot along with
its mathematical, physical and visualization models. It also emulates the environment,
including the physics and the other interacting robots. We have written a ROS plugin for the behaviour
of the iRobots, whereas the quadcopter model
is built on top of hector\_gazebo, supported by JSBSim, ArduCopter, RotorS, MAVROS
and the ardupilot\_sitl\_gazebo plugin.
\begin{center}\includegraphics[scale=0.5]{image23} \\
\textbf{Figure 1. Gazebo Simulator.}\end{center}
\section{OVERALL SYSTEM DESIGN}
\begin{center}\includegraphics{image22} \\
\textbf{Figure 2. Our system design.}\end{center}
\begin{center}\includegraphics[scale=0.4]{ARK-System-flow-tdp} \\
\textbf{Figure 3. Our ROS-based framework.}\end{center}
The diagram above shows the ROS packages, the ROS nodes inside them and the ROS topics they communicate over.
This is the software control part, which runs on the onboard computer.
\section{LOCALIZATION}
For indoor localization of the quadcopter, we use optical flow and grid localization,
based on the px4flow optical flow sensor and the bottom camera respectively. The sensor data is fed into
an EKF/UKF state estimator that outputs the estimated quadcopter pose.
\subsection{Grid Localization}
\subsubsection{Detecting the Grid}
\begin{itemize}
\item The image is acquired from the on-board cameras on the quadcopter.
\item The lines in the image are first isolated by segmentation, implemented using the Canny edge detector; a code sketch of the full detection pipeline follows this list.
\begin{center}\includegraphics{image25} \\
\textbf{Figure 4. Image before Canny is applied.}\end{center}
\begin{center}\includegraphics{image24} \\
\textbf{Figure 5. Image after Canny edge detection is applied.}\end{center}
\item Once Canny is applied, erosion and dilation are performed on the image to remove noise.
\item The lines are identified by using a Hough transform on the image.
\item After the Hough transform is applied and the lines found, the intersection points of these lines are identified as nodes of the grid.
\begin{center}\includegraphics{image27} \\
\textbf{Figure 6. Image after the Hough transform is applied (on the Canny-filtered image).}\end{center}
\end{itemize}
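The following is a minimal OpenCV (Python) sketch of the pipeline above; the Canny thresholds, kernel size and Hough vote count are illustrative placeholders, not our tuned values.
\begin{verbatim}
import cv2
import numpy as np

def detect_grid_nodes(frame):
    """Grid nodes = intersections of Hough lines on a cleaned edge map."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)             # placeholder thresholds
    kernel = np.ones((3, 3), np.uint8)
    # Dilation followed by erosion (closing) stands in for the
    # erosion/dilation cleanup step described above.
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    nodes = []
    if lines is not None:
        lines = lines[:, 0, :]                   # each line is (rho, theta)
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                r1, t1 = lines[i]
                r2, t2 = lines[j]
                A = np.array([[np.cos(t1), np.sin(t1)],
                              [np.cos(t2), np.sin(t2)]])
                if abs(np.linalg.det(A)) < 1e-3:     # near-parallel pair
                    continue
                x, y = np.linalg.solve(A, np.array([r1, r2]))
                nodes.append((int(x), int(y)))
    return nodes
\end{verbatim}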
\subsubsection{Grid Following}
This is achieved by line following; a traversal sketch follows the list below.
\begin{itemize}
\item The destination node is found.
\item If the robot is required to proceed from point $(x_1,y_1)$ to $(x_2,y_2)$, the robot first finds the nodes that lie between these two points.
\item It then traverses the grid by following the lines connecting these nodes, keeping either x coordinate or y coordinate constant while following each line.
\item At each node, commands are sent to the robot to turn left, right or to move forward or backward to traverse the lines as described above.
\end{itemize}
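A minimal sketch of this traversal in grid-node coordinates (the actual motion commands are issued by the controller):
\begin{verbatim}
def plan_grid_path(start, goal):
    """Nodes visited from start to goal, moving along grid lines:
    first along x with y fixed, then along y with x fixed."""
    x1, y1 = start
    x2, y2 = goal
    sx = 1 if x2 >= x1 else -1
    sy = 1 if y2 >= y1 else -1
    path = [(x, y1) for x in range(x1, x2 + sx, sx)]
    path += [(x2, y) for y in range(y1 + sy, y2 + sy, sy)]
    return path

# plan_grid_path((0, 0), (2, 2))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
\end{verbatim}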
\subsection{Optical Flow}
We are using the px4flow optical flow sensor, which is interfaced to the onboard computer through USB. We use px4flow\_node[17] to obtain the optical flow values that are fed to the state estimation package. The quadcopter's position given by optical flow is further corrected by the position given by grid localization.
\section{GROUND ROBOT DETECTION AND TRACKING}
\subsection{Detection}
\subsubsection{Ellipse Detection}
\begin{itemize}
\item The detection of each ground bot will be done with a modified form of the Randomized Hough Transform (RHT), fully described in the reference, to detect ellipses that correspond to the edges of the bots.
\item Two points are selected as the ends of a major axis, a third point on the assumed ellipse is selected at random, and the accumulator vote is cast on the length of the minor axis (see the sketch after this list).
\end{itemize}
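A simplified sketch of the ellipse-fitting core of this procedure; the minor-axis vote follows the standard Xie--Ji geometry, and the accumulator binning is coarsened for brevity.
\begin{verbatim}
import numpy as np

def rht_ellipse(edge_pts, n_iter=5000, seed=0):
    """Randomized Hough transform for ellipses: two random edge points
    fix the major axis; a third point votes for the half-minor axis b."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(edge_pts, dtype=float)
    votes = {}
    for _ in range(n_iter):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        center = (p1 + p2) / 2.0
        a = np.linalg.norm(p1 - p2) / 2.0      # half major axis
        d = np.linalg.norm(p3 - center)        # centre -> third point
        if a < 1e-6 or d >= a:
            continue                           # p3 must lie inside
        f = np.linalg.norm(p3 - p2)            # third point -> vertex
        cos_t = (a * a + d * d - f * f) / (2 * a * d)  # law of cosines
        sin2_t = 1.0 - cos_t * cos_t
        denom = a * a - d * d * cos_t * cos_t
        if sin2_t <= 0 or denom <= 0:
            continue
        b = np.sqrt(a * a * d * d * sin2_t / denom)    # minor-axis vote
        key = (int(center[0]), int(center[1]), int(a), int(b))
        votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get) if votes else None
\end{verbatim}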
\subsubsection{Hog Based SVM Detection}
\begin{itemize}
\item Train a HOG-based classifier using multiple images of the ground robot, then run the SVM detector on the camera feed (sketched below).
\end{itemize}
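A sketch of this idea using scikit-image HOG features and a scikit-learn linear SVM; the library choice, crop size and regularization constant are our assumptions.
\begin{verbatim}
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(crop):
    """9-bin HOG over 8x8-pixel cells, 2x2-cell blocks (common defaults);
    crop is a fixed-size grayscale image of the robot or of background."""
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_detector(pos_crops, neg_crops):
    """Linear SVM over HOG features: robot (1) vs background (0)."""
    X = np.array([hog_features(c) for c in pos_crops + neg_crops])
    y = np.array([1] * len(pos_crops) + [0] * len(neg_crops))
    return LinearSVC(C=0.01).fit(X, y)
\end{verbatim}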
\subsubsection{Template Matching}
\begin{itemize}
\item OpenCV template matching, a technique for finding areas of an image that match (are similar to) a template image (patch); a minimal call is sketched below.
\end{itemize}
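A minimal OpenCV call for this, with an illustrative match threshold:
\begin{verbatim}
import cv2

def find_robot(frame_gray, template_gray, threshold=0.8):
    """Slide the template over the frame and return the best match
    location if its normalized score clears the threshold."""
    scores = cv2.matchTemplate(frame_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc if max_val >= threshold else None
\end{verbatim}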
\subsubsection{YUV Based Segmentation}
\begin{itemize}
\item YUV-based cube segmentation of the ground robots, followed by blob detection and extraction (sketched below).
\end{itemize}
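A sketch of the segmentation and blob-extraction steps in OpenCV (OpenCV 4 API); the YUV bounds are placeholders to be tuned per colour and arena lighting.
\begin{verbatim}
import cv2
import numpy as np

def segment_robot_tops(frame_bgr, lo=(0, 0, 150), hi=(255, 130, 255)):
    """Threshold in YUV and extract blob centroids."""
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
    mask = cv2.inRange(yuv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 100:                   # area gate against speckle
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
\end{verbatim}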
\subsection{Position Estimation}
Once the bots are detected, the noise associated with the dynamic observables of the moving bot will be filtered out using a Kalman filter to enable tracking of the bot. This is achieved by the following steps:
\begin{itemize}
\item The Kalman filter takes in the measured position of the bot (which in this case is the centre of the ellipse detected by the RHT) as well as its velocity from the video feed; a filter-setup sketch follows this list.
\item The position can also be estimated by integrating velocity over time (which incorporates the covariance between the position and velocity). This is called the predicted position.
\item The error associated with each of these quantities is also found by calculating the expected noise in the readings. The error is estimated as a Gaussian function.
\item These two quantities (measured and predicted positions) are compared and the best guess of the bot’s position is made by considering it to be the configuration for which both estimates are most likely after incorporating the associated errors.
\end{itemize}
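A minimal constant-velocity filter setup with OpenCV is sketched below; the noise covariances are placeholders, not our measured values.
\begin{verbatim}
import cv2
import numpy as np

def make_bot_filter(dt=1.0 / 30.0):
    """Constant-velocity model; state [x, y, vx, vy], all four measured
    (position = ellipse centre from the RHT, velocity from the feed)."""
    kf = cv2.KalmanFilter(4, 4)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(4, dtype=np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)      # placeholder
    kf.measurementNoiseCov = 1e-1 * np.eye(4, dtype=np.float32)  # placeholder
    return kf

# Per frame: predicted = kf.predict()
#            best_guess = kf.correct(measurement)  # 4x1 float32 vector
\end{verbatim}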
\subsection{Tracking of Multiple Robots}
\begin{itemize}
\item Having found the most probable position of each bot using the Kalman filter, the next step is to track multiple bots.
\item A cost matrix is created which incorporates the direction in which the bots had been moving, the distance of the updated estimates of positions from the previous estimates, the expected collisions as well as the expected turns.
\item The cost matrix is run through the Hungarian algorithm to associate the updated positions with the previous positions, thus giving an identity to each bot and enabling multiple-bot tracking (see the sketch after this list).
\end{itemize}
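A hedged sketch of the association step, using SciPy's Hungarian solver; \verb|cost_fn| stands in for the combined cost terms listed above.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, cost_fn):
    """Assign each detection to a previous track with the Hungarian
    algorithm; cost_fn combines distance, heading, collision and turn
    penalties as described above."""
    cost = np.array([[cost_fn(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))   # (track index, detection index) pairs
\end{verbatim}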
\section{OBSTACLE AVOIDANCE}
IARC features moving obstacle robots, which must be avoided whenever they lie on the desired trajectory.
\\ \\
Obstacle description: 4 ground robots with cylinders attached on top of them, forming vertical moving obstacles.
\\ \\
\textbf{Avoidance Algorithm Overview:} \\
The obstacle avoidance in IARC does not need to be global, as the obstacles are few in number. So we use eight Sharp IR sensors, attached on all four arms and between pairs of propellers, which measure the distance to the nearest obstacle. If an obstacle is closer than a threshold, several kinds of action can be taken:
\begin{itemize}
\item Wait for Obstacle to Move out of Path
\item Avoid Obstacle by Creating a New Trajectory around the obstacle and come back to desired trajectory.
\end{itemize}
\begin{center}\includegraphics{image26} \\
\textbf{Figure 7. Placement of the IR sensors on the quadcopter.}\end{center}
\subsubsection*{Procedure I:}
\begin{itemize}
\item An interrupt for the IR sensors is triggered as soon as an obstacle comes closer than 40cm to any of them.
\item The direction of the triggered sensor (IR 1--8) is compared with the robot's velocity direction; if they coincide, the robot is halted in place or commanded to follow the line in reverse until the last node is found.
\end{itemize}
\subsubsection*{Procedure II:}
\begin{itemize}
\item The trajectory generation module generates trajectories from node to node, either via the grid lines or directly.
\item This trajectory is recalculated on an obstacle trigger: a visibility graph is created around the obstacle and followed until the next grid node. A sketch of the combined trigger logic follows this list.
\end{itemize}
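A sketch tying the two procedures to the IR trigger; the two actuation helpers are hypothetical placeholders for the behaviours described above.
\begin{verbatim}
THRESHOLD_CM = 40  # Procedure I trigger distance

def halt_or_reverse_to_last_node():
    pass  # hypothetical: Procedure I behaviour

def replan_around_obstacle(direction):
    pass  # hypothetical: Procedure II visibility-graph replan

def on_ir_readings(distances_cm, velocity_dir):
    """distances_cm: the eight IR readings, indexed by the direction
    each sensor faces; velocity_dir: index of the motion direction."""
    for sensor_dir, d in enumerate(distances_cm):
        if d < THRESHOLD_CM:
            if sensor_dir == velocity_dir:          # obstacle dead ahead
                halt_or_reverse_to_last_node()      # Procedure I
            else:
                replan_around_obstacle(sensor_dir)  # Procedure II
\end{verbatim}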
\section{SYSTEM CONTROL}
\subsection{APM}
We use an APM running the ArduCopter-3.3 firmware. ArduCopter is a nearly feature-complete open-source UAV firmware, so our high-level control utilizes the features of ArduCopter to the fullest. Since most of ArduCopter's autonomous features use GPS, we run the APM in manual mode and use RC-override MAVLink messages to send control signals to the quadcopter. In this way the APM believes the quadcopter is being flown by an RC controller, while it is actually controlled autonomously by the onboard computer.
\subsection{MAVROS}
Since the APM communicates over the Micro Air Vehicle Link (MAVLink) protocol and both our ground station and onboard computer use the Robot Operating System (ROS), we use MAVROS for communication between the onboard computer and the APM. MAVROS is an extendable MAVLink communication node for ROS.
\subsection{PID control}
We use a two-layer PID scheme. The low-level control is taken care of by ArduCopter. The high-level controller, which controls position in three dimensions plus yaw, is implemented using MAVROS in a script that runs on the onboard computer. The script changes the ArduCopter mode to ``AltHold'', computes the RC-override signal as the output of the PID controller and sends it to the APM.
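As an illustration, here is a minimal rospy sketch of this high-level loop; the gains, channel ordering and PWM limits are illustrative assumptions, not our tuned configuration.
\begin{verbatim}
import rospy
from mavros_msgs.msg import OverrideRCIn

KP_XY, KP_Z, KP_YAW = 1.0, 1.0, 1.0   # illustrative gains only

def clamp(pwm):
    """Keep the PWM command inside the usual 1100-1900 us range."""
    return max(1100, min(1900, int(pwm)))

def rc_override_from_errors(err_x, err_y, err_z, err_yaw):
    """P-only for brevity; the real controller carries I and D terms."""
    msg = OverrideRCIn()
    msg.channels = [clamp(1500 + KP_XY * err_y),     # roll
                    clamp(1500 + KP_XY * err_x),     # pitch
                    clamp(1500 + KP_Z * err_z),      # throttle
                    clamp(1500 + KP_YAW * err_yaw),  # yaw
                    0, 0, 0, 0]                      # 0 releases a channel
    return msg

rospy.init_node("high_level_pid")
pub = rospy.Publisher("/mavros/rc/override", OverrideRCIn, queue_size=1)
# In the control loop, at e.g. 50 Hz:
#   pub.publish(rc_override_from_errors(ex, ey, ez, eyaw))
\end{verbatim}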
\section{IARC ROBOT DESCRIPTION}
\begin{center}\includegraphics{image41} \\
\textbf{Figure 8. Our quadcopter.}\end{center}
\subsection{Configuration}
\begin{itemize}
\item Odroid XU4 : High Level Controller
\item APM : Low Level Controller
\item ESC : Electronic Speed Controller, 45A OPTO
\item LiPo : 11.1V, 3s, 5000mAh, 20C
\item Motors: 850kv BLDC
\item Propellers : 11” x 4.7”
\item Camera : 30fps, Field of View - 78 degrees, Aspect Ratio - 16:9
\item Receiver : 6 channel PPM
\item Frame : Glass Fibre X frame
\end{itemize}
\section{HERDING ALGORITHM AI}
\subsection{Greedy Method}
\begin{itemize}
\item The aim of this algorithm is to stop the ground bots from crossing a line, while shifting the line ahead once the area behind it is cleared of bots.
\item Before takeoff, the quadcopter will be fed the direction information about the green and red lines, i.e. where the red line lies (east/west/north/south).
\item Once the quadcopter takes off, it will hold altitude at 1.5m (from the ground) and hold position on the nearest node while holding its yaw.
\item Since it knows the general direction of the red line, it turns on the node to a heading aligned with a grid line along that general direction.
\item It then follows the grid to:
\begin{itemize}
\item Detect the position of the red line; call it $L_0$.
\item Track back to the line just ahead of it; call it $L_1$.
\item Move to the end of $L_1$ and set $L_{present} = L_1$.
\end{itemize}
\item The quadcopter will now start making ``search \& clear'' maneuvers. This maneuver is a loop that involves:
\begin{itemize}
\item Moving along line $L_{present}$ to search for ground bots.
\item If ground bots are found, it will mark the location of each one and predict its trajectory.
\item Once the search is complete, it will target the ground bots it encountered and perform a strategic landing on each of them to turn them in the general direction of the green line. This clears the particular section it can view while moving along the line, and ensures that the sections behind $L_{present}$ stay clear.
\item Once the section is clear, it will go to an end of the next line (incrementing $L_{present}$) and continue the loop.
\end{itemize}
\item So we keep clearing the area and pushing the ground bots until they cross the green line.
\end{itemize}
\begin{center}\includegraphics[scale=0.7]{image28} \\
\textbf{Figure 9. Herding algorithm visualization.}\end{center}
Assumption: Our robot is faster than the ground robots.
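Condensed into a hedged Python sketch, where every helper call is a hypothetical placeholder for a behaviour described above:
\begin{verbatim}
def herd(quad, grid):
    """Greedy search-and-clear loop; all quad/grid calls below are
    hypothetical placeholders for the behaviours described above."""
    quad.takeoff(altitude=1.5)           # hold 1.5 m and the nearest node
    quad.hold_nearest_node()
    red = quad.find_red_line(grid)       # L0: the line to defend
    present = quad.line_ahead_of(red)    # L1
    quad.goto_line_end(present)
    while present != grid.green_line:
        bots = quad.sweep_line(present)  # search along L(present)
        for bot in bots:                 # strategic landings turn each bot
            quad.land_on_and_turn(bot)   # toward the green line
        present = quad.next_line(present)   # advance L(present)
        quad.goto_line_end(present)
\end{verbatim}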
\section{EMERGENCY KILL SWITCH}
We have three levels of control over the quadcopter: WiFi software, a standard radio control link and the ``IARC Common Kill Switch''[16] as given on the IARC website.
As required by the competition, we have a safety mechanism, independent of the flight hardware, that acts as a kill switch in case of an emergency and cuts power
to the electronic speed controllers. The RC link can be used for manual override of the quadcopter or to change the ArduCopter mode to LAND; the same can be achieved by MAVROS commands over the WiFi link to the onboard computer.
\section{TRUE VALUE SETUP}
Setting up a ground-truth system is essential for registering ground-truth data. The obtained data can be used to evaluate, test and benchmark algorithms related to localization, mapping and motion planning for quadcopters. The setup should be robust to illumination changes, shadows and arena modifications. Such a setup also supports several further lines of work, such as system tracking, game strategies and easy debugging. \\ \\
Depth perception from conventional stereo vision is based on the triangulation principle, as mentioned in [1, 2, 6, 8, 9]. The 3D position of the quadcopter can be obtained as the intersection of the two rays passing through the camera optical centres and its 2D projections in each image; object disparity is the key cue for position estimation. We use two cameras arranged on a baseline such that their fields of view overlap at the desired object distance. By taking a picture with each camera, we capture the scene from two different viewpoints. A correspondence therefore needs to be established between the image regions that correspond to the same physical feature in space. The depth can then be reconstructed using triangulation, as shown below.
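For a rectified stereo pair with focal length $f$, baseline $b$ and disparity $d = x_l - x_r$ between the left and right projections of the tracked tag, the standard triangulation relations give
\begin{equation}
Z = \frac{f\,b}{d}, \qquad X = \frac{x_l\,Z}{f}, \qquad Y = \frac{y_l\,Z}{f},
\end{equation}
where $(X, Y, Z)$ is the tag position in the left-camera frame.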
We use April Tags[15] placed on the quadcopter to track it.
\begin{center}\includegraphics{image42} \\
\textbf{Figure 10. Left and right camera calibration using a $9\times6$ checkerboard.}\end{center}
\section{TESTING}
\begin{center}\includegraphics[scale=0.7]{ARK-lab-kgp1} \\
\textbf{Figure 11. Sample testing arena.}\end{center}
We have an indoor testing arena with a sample grid floor, as in the IARC arena, and two iRobots. We have a safety harness to
tether the quadcopter while testing various controllers and tuning PIDs. \\
We tested the AI (herding) algorithms on the simulator. We tethered the quadcopter with various allowed degrees of
freedom to test the altitude-hold, yaw-hold, node-hold and grid-following algorithms on the real quadcopter in our arena.
\\
We tested various quadcopter frames, onboard computers, flight control boards, cameras and other hardware components
such as motors, electronic speed controllers and power distribution systems. The following are the combinations we tested.
\\
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
& \includegraphics[scale=0.3]{image01-2} & \includegraphics[scale=0.3]{image08-1} \\
\textbf{Frame} & Wooden X Frame & Plastic X Frame \\
\textbf{Processor} & Raspberry Pi & Odroid XU4 \\
\textbf{Controller} & Pixhawk & Arduino NANO \\
\textbf{IMU} & Pixhawk inbuilt & GY87\\
\textbf{ESC} & 30A BEC & 40A BEC\\
\textbf{Battery} & LiPo:14.8V, 4s, 5000mAh, 20C & LiPo: 11.1V, 3s, 5000mAh, 20C \\
\textbf{Motors} & 1000kv BLDC & 980kv BLDC \\
\textbf{Propellers} & 11” x 4.7” & 10” x 4.5” \\
\textbf{Protocol} & MAVLink & MultiWii\\
\textbf{Payload} & 600 g & 400 g \\
\hline
\end{tabular}
\end{center}
\section{References}
\begin{enumerate}
\item \href{https://books.google.co.in/books?id=4YCThwzeTBQC&redir_esc=y}{"Aerodynamics for Engineering Students". E. L. Houghton and P. W. Carpenter.}
\item \href{https://books.google.co.in/books/about/Fundamentals_of_Aerodynamics.html?id=CaBTAAAAMAAJ}{"Fundamentals of Aerodynamics". John David Anderson.}
\item \href{https://books.google.co.in/books/about/Fundamentals_of_Compressible_Flow.html?id=_NCz9iZIugYC}{"Fundamentals of Compressible Flow". S. M. Yahya.}
\item \href{http://eprints.qut.edu.au/33732/1/cep2009_modelling_and_control_paper_sub_final.pdf}{"Modelling and Control of a Large Quadrotor Robot". P. Pounds, R. Mahony, and P. Corke.}
\item \href{http://ieeexplore.ieee.org/iel5/5076472/5152175/05152390.pdf?arnumber=5152390}{"Design principles of large quadrotors for practical applications". P. Pounds and R. Mahony.}
\item \href{https://www.researchgate.net/publication/220474158_Towards_autonomous_indoor_micro_VTOL}{"Towards autonomous indoor micro VTOL". Samir Bouabdallah, Pierpaolo Murrieri and Roland Siegwart.}
\item \href{http://www.itcon.org/cgi-bin/works/Show?2012_12}{"Usability assessment of drone technology as safety inspection tools". Javier Irizarry, Masoud Gheisari and Bruce N. Walker.}
\item \href{http://www.engadget.com/2011/04/21/t-hawk-uav-enters-fukushima-danger-zone-returns-with-video/}{Honig, Z. (2011) "T-Hawk UAV enters Fukushima danger zone, returns with video." 6:48PM April 21, 2011, retrieved on April 22, 2011.}
\item \href{http://9to5mac.com/2011/06/15/awesome-use-of-an-ipad-and-the-parrot-ar-drone/}{Zibreg. C. (2011) "Awesome use of an iPad and the Parrot AR Drone."}
\item \href{http://robotics.felk.cvut.cz/faiglj/thesis/papers/icr10.pdf}{"Surveillance Planning with Localization Uncertainty for UAVs". Jan Faigl, T. Krajník, V. Vonásek and L. Přeučil.}
\item \href{http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6005280}{"Collocated interaction with flying robots". Wai Shan Ng and Ehud Sharlin}
\item \href{https://www.researchgate.net/publication/220947254_Flying_sports_assistant_External_visual_imagery_representation_for_sports_training}{"Flying sports assistant: external visual imagery representation for sports training." Higuchi, K., Shimada, T., and Rekimoto, J.}
\item \href{http://www.aviationsystemsdivision.arc.nasa.gov/publications/hitl/rtsim/Toms.pdf}{"Simulator aero model implementation." T. S. Alderete, NASA Ames Research Center, Moffett Field, California.}
\item \href{https://www.researchgate.net/publication/228962656_Nonlinear_observer_design_and_sliding_mode_control_of_four_rotor_helicopter}{"Nonlinear observer design and sliding mode control of four rotors helicopter". H. Bouadi and M. Tadjine.}
\item \href{https://april.eecs.umich.edu/papers/details.php?name=olson2010tags}{"AprilTag: A robust and flexible multi-purpose fiducial system". Edwin Olson}
\item \href{http://www.aerialroboticscompetition.org/downloads/killswitch.zip}{The IARC common safety switch}
\item \href{http://wiki.ros.org/px4flow_node}{px4flow\_node}
\item \href{http://quadrotor-iitkgp.github.io/}{Aerial Robotics Kharagpur: Website}
\item \href{https://github.com/quadrotor-IITKgp}{Aerial Robotics Kharagpur: GitHub organization}
\end{enumerate}
\end{document}