This project is a fork of this GitHub repository.
VMobi is an edge computing solution created to help visually impaired people with daily challenges, such as walking down the street while avoiding dangers along the way, finding specific objects, etc.
The project is built to run on a Raspberry Pi 4 Model B: it captures images from a webcam and processes them with the MobileNet SSD v2 neural network, accelerated by a Google Coral TPU connected to a USB 3.0 port. It also uses transfer learning networks based on the same neural network, which can be trained in this Google Colab notebook.
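As an illustration of this pipeline, here is a minimal Python sketch of a single detection pass using the tflite_runtime interpreter with the Edge TPU delegate. The model filename below is the stock Coral example model and the output ordering assumes the standard TFLite SSD postprocess op; neither is taken from this repository:

```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load an SSD model compiled for the Edge TPU. The filename is the stock
# Coral example model, used here as a placeholder for this project's model.
interpreter = Interpreter(
    model_path="mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

cap = cv2.VideoCapture(0)  # first USB webcam
ok, frame = cap.read()
if ok:
    # Resize the frame to the network input size and run one inference.
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(resized, axis=0))
    interpreter.invoke()
    # Standard output ordering for the TFLite SSD postprocess op:
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    class_ids = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    for box, cls, score in zip(boxes, class_ids, scores):
        if score > 0.5:
            print(f"class {int(cls)} at {box} ({score:.2f})")
cap.release()
```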
This project is being developed by a team at Insper, São Paulo, Brasil, alongside with Prof. Kamal Sarkar at UTRGV, Texas, USA.
VMobi implements two modes:
- Safari Mode: the default mode, which runs continuously and only stops when Query Mode is called. It uses the MobileNet SSD v2 network to detect possible dangers along the way and alerts the user when one is found.
- Query Mode: started by a user query; the software processes the webcam frames with the model corresponding to the chosen object/category and alerts the user if it finds a match, returning to Safari Mode once it finishes.
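The control flow between the two modes can be pictured with the following self-contained Python sketch; every function in it is a hypothetical stand-in, not this repository's actual API:

```python
# Minimal sketch of the two-mode control flow described above.
# All function bodies are hypothetical stand-ins.

def detect_dangers(frame):
    """Stand-in for MobileNet SSD v2 inference used in Safari Mode."""
    return []

def find_object(frame, model_dir):
    """Stand-in for the transfer-learned model used in Query Mode."""
    return None

def query_requested():
    """Stand-in for the GPIO button check that triggers Query Mode."""
    return False

def announce(result):
    """Stand-in for the audio alert played through the earphones."""
    print(result)

mode = "safari"
for frame in range(3):          # stand-in for the webcam frame loop
    if mode == "safari":
        dangers = detect_dangers(frame)
        if dangers:
            announce(dangers)   # warn the user about obstacles
        if query_requested():
            mode = "query"      # user asked for a specific object
    else:
        match = find_object(frame, "Keys_model")
        if match:
            announce(match)
            mode = "safari"     # return to Safari Mode after a match
```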
- Raspberry Pi 4 Model B
- Webcam (USB)
- Earphones/headphones (USB)
- Google Coral TPU (USB)
- GPIO hardware:
  - Breadboard
  - 1 push button
  - At least 2 jumper wires
The main hardware is the Raspberry Pi 4 Model B; all other components connect to it.
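As a rough illustration of the GPIO side, here is a minimal Python sketch that polls a push button with the RPi.GPIO library. The pin number and the wiring (button between the pin and ground, internal pull-up enabled) are assumptions; adapt them to your circuit:

```python
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17  # hypothetical BCM pin; use whichever pin you wired

GPIO.setmode(GPIO.BCM)
# Assumes the button sits between the pin and ground, so we enable the
# internal pull-up and read LOW when it is pressed.
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # button pressed
            print("Button pressed - this is where Query Mode would start")
            time.sleep(0.3)                     # crude debounce
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```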
This project was designed to use the hardware listed above, with Raspbian as the operating system.
First, download the Raspbian OS image on any computer: https://downloads.raspberrypi.org/raspios_armhf/images/raspios_armhf-2021-05-28/2021-05-07-raspios-buster-armhf.zip
Insert the SD card into the computer, unzip the downloaded file, and run the following command in the directory containing the extracted image file (here /dev/mmcblk0 is the SD card device node; double-check yours with lsblk before writing, since dd overwrites the target device):
$ sudo time dd if=2021-05-07-raspios-buster-armhf.img of=/dev/mmcblk0 bs=4M conv=sync,noerror status=progress
The OS is now written to the SD card. You can eject it and insert it into the Raspberry Pi 4.
On the Raspberry Pi 4:
Make sure you have Python version >= 3.7:
$ which python3
$ python3 --version
Both commands above should print a valid python3 version. If they return an error or nothing, install python3 by running:
$ sudo apt update
$ sudo apt install python3-pip
Then, run:
$ git clone https://github.com/pfeinsper/VMobi-objetc-detection-raspberry-pi
$ cd VMobi-objetc-detection-raspberry-pi
$ sudo chmod +x install.sh
Then, making sure the Google Coral TPU is not yet connected, run:
$ sudo su
$ ./install.sh
Note: at the end of the script, your Raspberry Pi should reboot.
To use other models, download the .tflite file and move it into a new folder in the project's root directory named '{Model Name}_model' (replace {Model Name} with your model's name).
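For example, a hypothetical transfer-learned model for finding keys could be laid out like this (the .tflite filename inside the folder is just an illustration):

```
VMobi-objetc-detection-raspberry-pi/
└── Keys_model/
    └── keys_edgetpu.tflite
```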
The project is built to run as root, so first run:
$ sudo su
Now that you have a root shell, connect the Google Coral TPU to a USB 3.0 port and then, from the project's root directory, run:
$ python3 main.py --modeldir={Name of the Model Directory} --edgetpu
Note: you can also set the video resolution by appending '--resolution={Resolution Value}' to the previous command. By default, the resolution is 1280x720.
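For example, assuming a model directory named Keys_model:
$ python3 main.py --modeldir=Keys_model --resolution=640x480 --edgetpu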