I have rewritten this library fully in Python and it is under active development again. The new repo is here:
https://github.com/gregorianrants/composed-robot?tab=readme-ov-file
Of note, the speed control of the motors has been broken out into a Python package that can be used independently and that also loads the firmware onto the Build HAT. Other aspects of the codebase have also been broken out into reusable packages.
An obstacle-avoiding Raspberry Pi robot with a distributed software architecture. At the moment this README just gives an overview of what the hardware and software do; more information will be added soon on both how the software works and how it was designed.
There are parts of this project that aren't completed, and this will probably always be the case. This is a passion project: something I plan to continuously experiment with, refactor, and add to. Although much is in an uncompleted state, there is also much that runs and works brilliantly.
This library works in conjunction with three other libraries. This GitHub repository contains the code that runs on the robot; however, the full system involves code running at other network locations, and there is also code shared between the libraries. The other libraries that make up the full system are listed below.
- Desktop Library – runs on a remote machine; used for controlling the robot, viewing video from the camera, and running nodes that require intensive processing, to free up the processor on the Raspberry Pi.
- Shared Library – code that is required on both the Pi and the desktop.
- Web Client – for controlling the robot over the internet and viewing video captured by the camera; also used for object recognition with TensorFlow.js.
The current implementation of the web client was written for a previous implementation of the on-robot software and needs to be reintegrated; this is a small job though.
Here is a video of the robot navigating a complex human environment.
- Chassis – LEGO Technic
- Motors – LEGO Technic large angular motors
- Sensors – 5 ultrasonic distance sensors and a Raspberry Pi camera
The robot uses a kinematic model and the speed-control module so that it can be commanded with a translational velocity and a rotational velocity.
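To make this concrete, here is a minimal sketch of a differential-drive kinematic model of the kind described; the wheel radius, track width, and function name are illustrative assumptions, not values from the actual codebase.

```python
# Hypothetical sketch: map a body-frame command (v, w) to wheel speeds for a
# differential-drive robot. Geometry values below are examples, not the
# robot's measured dimensions.
WHEEL_RADIUS = 0.0275  # metres (example value)
TRACK_WIDTH = 0.15     # metres between the two drive wheels (example value)

def body_to_wheel_speeds(v, w):
    """Convert translational velocity v (m/s) and rotational velocity w
    (rad/s) into (left, right) wheel angular speeds in rad/s."""
    v_left = v - w * TRACK_WIDTH / 2   # inner wheel slows on a left turn
    v_right = v + w * TRACK_WIDTH / 2  # outer wheel speeds up
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# e.g. drive forward at 0.2 m/s while rotating left at 0.5 rad/s
left, right = body_to_wheel_speeds(0.2, 0.5)
```

The speed-control package mentioned above would then be responsible for holding each motor at its target speed.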
Multiple nodes run in separate processes, both on the robot and off the robot, across the network.
Publisher nodes register their node_name and topic with a manager node. Subscriber nodes that are interested in listening to a topic and/or node_name get the address that the topic/node is publishing on from the manager node; the nodes then communicate directly with each other in peer-to-peer fashion using events. A sketch of this discovery pattern is given below.
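Here is a minimal sketch of that discovery pattern, written with ZeroMQ purely for illustration; the actual transport, message format, addresses, and node names used by the project may differ, and the two halves below would run in separate processes.

```python
import zmq

ctx = zmq.Context()

# --- publisher process: register with the manager, then publish directly ---
manager = ctx.socket(zmq.REQ)
manager.connect("tcp://manager-host:5550")      # hypothetical manager address
pub = ctx.socket(zmq.PUB)
port = pub.bind_to_random_port("tcp://*")       # publish on any free port
manager.send_json({"op": "register", "node_name": "distance-sensors",
                   "topic": "distances", "address": f"tcp://robot-host:{port}"})
manager.recv_json()                             # wait for the manager's ack
pub.send_json({"topic": "distances", "values": [0.42, 0.61, 1.10, 0.95, 0.33]})

# --- subscriber process: look up the address, then connect peer-to-peer ---
lookup = ctx.socket(zmq.REQ)
lookup.connect("tcp://manager-host:5550")
lookup.send_json({"op": "lookup", "topic": "distances"})
address = lookup.recv_json()["address"]
sub = ctx.socket(zmq.SUB)
sub.connect(address)                            # direct link to the publisher
sub.setsockopt_string(zmq.SUBSCRIBE, "")        # accept every message
message = sub.recv_json()
```

Once the lookup has happened, the manager is out of the loop entirely, which is what makes the node-to-node traffic peer-to-peer.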
Nodes can be written in any language; currently there are nodes implemented in both Python and Node.js.
Nodes can be composed in diverse ways to create behaviors. npm scripts are used to launch suites of nodes and run behaviors.
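As an illustration of launching a suite of nodes, a package.json might contain scripts along these lines; the script name, node filenames, and use of the concurrently package are invented for the example and are not the project's actual configuration.

```json
{
  "scripts": {
    "behavior:avoid-obstacles": "concurrently \"python nodes/distance_sensors.py\" \"python nodes/potential_fields.py\" \"python nodes/motor_driver.py\""
  }
}
```

Running `npm run behavior:avoid-obstacles` would then start the whole suite of nodes that makes up one behavior.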
There are currently two main behaviors. Although they do the same thing, they use completely different methods, and they form a base on which to build more complex behaviors, which will need to use both types of obstacle avoidance together.
The first gives very robust navigation of complex human environments; I haven't yet seen it fail to avoid an obstacle. It uses the 5 ultrasonic distance sensors and a potential-fields approach (sketched below); more information on how this is achieved will be added soon.
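For readers unfamiliar with potential fields, here is a minimal sketch of the idea: a constant attractive force pulls the robot forward while each sensor reading contributes a repulsive force, and the robot steers toward the resultant. The sensor angles, gains, and the mapping to (v, w) are illustrative assumptions, not this project's tuned values.

```python
import math

SENSOR_ANGLES = [-60, -30, 0, 30, 60]  # degrees; positive angles to the robot's left

def potential_field(distances, goal_gain=0.3, repulse_gain=0.05):
    """Combine a forward attractive force with a repulsive force from each
    sensor reading, and steer toward the resultant force."""
    fx, fy = goal_gain, 0.0                  # constant pull straight ahead
    for angle_deg, d in zip(SENSOR_ANGLES, distances):
        if d <= 0:
            continue                         # skip invalid readings
        theta = math.radians(angle_deg)
        strength = repulse_gain / (d * d)    # repulsion grows as obstacles close in
        fx -= strength * math.cos(theta)     # push away from the obstacle
        fy -= strength * math.sin(theta)
    v = fx                                   # translational velocity command
    w = math.atan2(fy, fx)                   # rotate toward the resultant force
    return v, w

# obstacles nearer on the right (negative angles) steer the robot left
v, w = potential_field([0.6, 0.8, 1.5, 2.0, 2.0])
```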
The second behavior is currently in a primitive state; it works in simple environments and will be improved.
Tracking the robot's position using odometry and displaying it on a map on the desktop. Odometry will be used for the short-term position, which will then be corrected intermittently using computer vision and markers at known positions; a sketch of the odometry update is given below.
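For context, here is a minimal sketch of a differential-drive odometry update, assuming per-wheel travelled distances are available from the motors' encoders; the geometry value is an example, and the midpoint integration shown is one common choice. Drift accumulates with this method, which is exactly why the marker-based correction above is planned.

```python
import math

TRACK_WIDTH = 0.15  # metres between the drive wheels (example value)

def update_pose(x, y, theta, d_left, d_right):
    """Advance the pose (x, y, theta) given the distance each wheel has
    travelled since the last update (metres)."""
    d_centre = (d_left + d_right) / 2           # distance moved by the midpoint
    d_theta = (d_right - d_left) / TRACK_WIDTH  # change in heading (radians)
    # integrate along the average heading over the step
    x += d_centre * math.cos(theta + d_theta / 2)
    y += d_centre * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta
```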
Expanding the README to include full documentation and explanations of design decisions, and of how the various components of the robot were designed and implemented.