The Autonomous Self-Driving Test Vehicle project combines computer vision, perception algorithms, and control systems to achieve autonomous navigation. It integrates perception, localization, path planning, actuation, and real-time hardware communication into a complete self-driving stack.
This project aims to build a self-driving vehicle capable of perceiving its environment, planning paths, and executing actions autonomously. It incorporates several key components:
- Computer Vision and Perception: Advanced computer vision techniques handle road segmentation, obstacle detection, and environment understanding.
- Localization: Sensor and camera data are combined to determine the vehicle's position accurately.
- Path Planning: Path planning algorithms generate trajectories based on environment perception and the destination, creating waypoints for navigation.
- Control Systems: PID controllers provide real-time control for accurate steering, speed regulation, and overall stability (a minimal PID sketch follows this list).
- Arduino Communication: ROS and rosserial enable communication with the Arduino microcontrollers, allowing the vehicle to act on its environment.
- Camera Feed Integration: Real-time feeds from a Logitech USB camera and an Oak-D Lite camera provide the visual data for perception.
- Joystick Control: ROS-enabled joystick control allows manual operation and controlled testing of vehicle functions.
- Deep Learning Framework: TensorFlow integration supports deep learning models for advanced perception tasks and object detection.
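As an illustration of the control component, here is a minimal PID sketch in Python. It is a generic textbook form, not the project's `controller.py`, and the gains are placeholders rather than the values tuned for this vehicle:

```python
# Generic PID controller sketch (illustrative; gains are placeholders,
# not the values tuned for this vehicle).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: map a lane-offset error to a steering correction.
pid = PID(kp=0.8, ki=0.01, kd=0.2)  # placeholder gains
steer_cmd = pid.update(error=0.15, dt=0.05)
```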
- Install necessary ROS packages, including USB camera drivers and joystick support.
- Launch camera nodes to feed video data into the perception pipeline.
- Execute the perception node (`vision.py`) for road segmentation and object detection.
- Generate waypoints and path plans using the planning and control algorithms.
- Utilize PID controllers for precise steering and speed management.
- Communicate with the Arduino microcontrollers for physical actuation (a minimal throttle sketch follows this list).
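A constant-throttle publisher in the style of `throttle.py` might look like the sketch below. The `arduino` topic name comes from the node description later in this README; the `Int16` message type, the publish rate, and the throttle value are assumptions:

```python
#!/usr/bin/env python3
# Sketch of a constant-throttle publisher in the style of throttle.py.
# The "arduino" topic name is from the node description in this README;
# the Int16 message type, the 10 Hz rate, and the value are assumptions.
import rospy
from std_msgs.msg import Int16

def main():
    rospy.init_node("throttle")
    pub = rospy.Publisher("arduino", Int16, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(Int16(data=40))  # constant throttle (placeholder value)
        rate.sleep()

if __name__ == "__main__":
    main()
```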
The project presents two key demonstrations: the first showcases road segmentation and perception through a GIF, while the second displays autonomous vehicle navigation using perception, control, and planning. Demo recordings:

- `run.mp4`
- `Testing_only_on_vision.mp4`
- `vision_testing.mp4`
As autonomous technology advances, this project serves as a foundation for further exploration. Future enhancements may involve advanced machine learning, semantic mapping, obstacle avoidance, and integration with larger-scale autonomous systems.
The Autonomous Self-Driving Test Vehicle project demonstrates the potential of modern robotics and automation.
- `roslaunch usb_cam usb_cam-test.launch` publishes camera frames to the `/usb_cam/image_raw` topic.
- `vision.py` subscribes to `/usb_cam/image_raw` and publishes the error to the `error` topic (sketched below).
- `controller.py` subscribes to `error` and publishes to the `steer` topic.
- `steer.py` subscribes to the `steer_arduino` topic.
- `throttle.py` gives a constant throttle and publishes to the `arduino` topic.
- `serial_node_steer.py` and `serial_node_throttle.py` are used for communication with the Arduino via rosserial.
- `steer_joy.py` is to be used for joystick control only.
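To make the topic graph concrete, here is a minimal sketch of the vision node's role: image in on `/usb_cam/image_raw`, normalized lane-offset error out on `error`. The brightness threshold stands in for the project's actual segmentation model, and `Float32` as the error message type is an assumption:

```python
#!/usr/bin/env python3
# Minimal sketch of the vision node's role: image in, error out.
# The thresholding below is a stand-in for the actual segmentation model,
# and Float32 for the error message type is an assumption.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Float32

bridge = CvBridge()
pub = None

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # crude road mask
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]  # mask centroid (x)
        error = (cx - frame.shape[1] / 2) / (frame.shape[1] / 2)  # normalized offset
        pub.publish(Float32(data=error))

def main():
    global pub
    rospy.init_node("vision")
    pub = rospy.Publisher("error", Float32, queue_size=1)
    rospy.Subscriber("/usb_cam/image_raw", Image, on_image)
    rospy.spin()

if __name__ == "__main__":
    main()
```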
- Download the model weights from here (a loading sketch follows this list).
- Download the Manual Control ROS package from here.
- Download the Autonomous ROS package from here.
- Download the Oak-D Lite ROS package from here.
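Assuming the downloaded weights are a Keras SavedModel (an assumption; adjust to the actual format of the download), loading them for `vision.py` could look like:

```python
# Sketch of loading the downloaded weights for vision.py.
# The path and the Keras SavedModel format are assumptions; if the model
# uses tensorflow-addons layers, they may need registering as custom objects.
import tensorflow as tf

model = tf.keras.models.load_model("model_weights")  # hypothetical path
model.summary()
```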
1. Run `sudo apt-get install ros-noetic-usb-cam` to install the camera node for the video feed.
2. Run `sudo apt-get install ros-noetic-joy` to install the joystick package.
3. Run `rosrun joy joy_node` to start the joystick node (a joystick steering sketch follows this list).
4. Run `rostopic echo joy` to see the topic output.
5. Run `rosrun usb_cam usb_cam_node` for the Logitech USB camera feed.
6. Run `roslaunch depthai_examples mobile_publisher.launch` for the Oak-D Lite camera feed.
7. Run `python3 -m pip install tensorflow` and `pip install tensorflow-addons` to install TensorFlow for `vision.py`.
8. Run `sudo apt-get install ros-noetic-rosserial-arduino` to install rosserial for communication with the Arduino.
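With the joystick package from points 2–4 installed, a `steer_joy.py`-style node might map one gamepad axis to a steering command as sketched below. The axis index, the 0–180 scaling, and the `Int16` output on `steer` are assumptions, not taken from the project:

```python
#!/usr/bin/env python3
# Sketch of joystick steering in the style of steer_joy.py: reads the "joy"
# topic published by joy_node and republishes one axis as a steering command.
# The axis index, 0-180 scaling, and Int16 output on "steer" are assumptions.
import rospy
from sensor_msgs.msg import Joy
from std_msgs.msg import Int16

pub = None

def on_joy(msg):
    # Map a stick axis (-1..1 on many gamepads) to a 0..180 servo-style angle.
    angle = int((msg.axes[0] + 1.0) * 90)
    pub.publish(Int16(data=angle))

def main():
    global pub
    rospy.init_node("steer_joy")
    pub = rospy.Publisher("steer", Int16, queue_size=1)
    rospy.Subscriber("joy", Joy, on_joy)
    rospy.spin()

if __name__ == "__main__":
    main()
```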
- Follow the steps from here.
- Follow every step up to Docker (excluded), then go to 'Executing a file'.
- There are 8 packages; do not use `catkin_build_isolated`, use `catkin build` instead.
- After the build, simply run the launch file given at point 6 of the setup steps above.
- Multiple topics will appear, and the desired image will be published on one of them; the snippet below helps identify it.
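A quick way to find the image topic is `rostopic list`, or the small Python check below, which filters the published topics for image messages:

```python
# Quick check: list the topics the Oak-D Lite launch is publishing,
# keeping only image topics so the desired one can be echoed/visualized.
import rospy

rospy.init_node("topic_probe", anonymous=True)
for name, msg_type in rospy.get_published_topics():
    if "Image" in msg_type:
        print(name, msg_type)
```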