We're building a model of an autonomous car using the ROS framework. The idea is to have a fully functional car at 1:10 scale that we can use for experiments, development, and testing of Computer Vision and Machine Learning algorithms.
Please check the documentation: https://autonomous-car-model.readthedocs.io/en/latest
Attention: clone the repositories listed in download.repos before installing the dependencies. You can do this manually or with vcstool.
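A download.repos file follows the vcstool format: a mapping from a target path to a repository type, URL, and version, which `vcs import < download.repos` turns into clone operations. The sketch below illustrates that format and what the manual alternative amounts to; the repository name and URL are placeholders, not the project's real entries.

```python
# Hypothetical sketch of what `vcs import < download.repos` does.
# The entry below is a placeholder -- the real list lives in the
# project's download.repos file.
repositories = {
    "ros_deep_learning": {
        "type": "git",
        "url": "https://github.com/example/ros_deep_learning.git",
        "version": "master",
    },
}

def clone_commands(repos):
    """Build the git commands equivalent to importing the repos file."""
    cmds = []
    for path, info in sorted(repos.items()):
        cmds.append(
            "git clone --branch {version} {url} {path}".format(path=path, **info)
        )
    return cmds

for cmd in clone_commands(repositories):
    print(cmd)
```

Running the commands by hand and running vcstool should leave the workspace in the same state; vcstool is just less error-prone when the list grows.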
You'll be able to:
- Manually steer your car using the nipple.js joystick
- Access the camera stream (the settings are hardcoded for the Raspberry Pi camera; using a different camera may require some modifications).
- Access ultrasonic sensor readings.
- Access accelerometer and gyroscope readings.
- Visualize the model in rviz and interact with its joints.
- (optional) Connect the sensors, motor, and servo to the pins specified in the launch file (or modify the file accordingly). You can run the code on the Jetson without any hardware connected to it.
- From anywhere in the system, run:
roslaunch car_bringup start_all.launch
- You can control the car with the web GUI: modify the IP address in the source code and open the HTML file in your browser. Note: the file can be opened on any machine that can reach your Jetson at that IP address.
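Once start_all.launch is up, the sensor readings are plain ROS topics that any node can subscribe to. The sketch below is a minimal rospy listener for an ultrasonic sensor; the topic name "/ultrasonic/front" is an assumption, so check `rostopic list` on the Jetson for the actual names published by this project.

```python
# Minimal sketch: reading an ultrasonic range topic with rospy.
# "/ultrasonic/front" and the sensor_msgs/Range type are assumptions,
# not the project's confirmed interface.

def describe_range(range_m, min_m=0.02, max_m=4.0):
    """Render a sensor_msgs/Range distance (metres) as a log line."""
    if range_m < min_m or range_m > max_m:
        return "out of range"
    return "obstacle at %.2f m" % range_m

def listen():
    import rospy
    from sensor_msgs.msg import Range

    rospy.init_node("ultrasonic_listener")
    rospy.Subscriber(
        "/ultrasonic/front",
        Range,
        lambda msg: rospy.loginfo(
            describe_range(msg.range, msg.min_range, msg.max_range)
        ),
    )
    rospy.spin()

# listen()  # uncomment on a machine with ROS installed
```

The IMU topics (accelerometer and gyroscope) can be read the same way with sensor_msgs/Imu instead of Range.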
Web GUI for manual steering and viewing sensor data
We're just learning ROS, so in its current state the organisation of the code is not ideal, and some solutions may be suboptimal or temporary.
Please refer to Issues for details.
- Install Lidar, run SLAM
- Add interface for CARLA simulator
- Implement path planning
- Implement image segmentation, drivable area detection
- Design environment-specific autonomous logic.
- ...