Mpb-1

Multipurpose Household Bot

One bot that does it all!
Explore the demo here »

Report Bug · Request Feature

Table of contents

  1. Idea Behind The Project
  2. About The Project
  3. Navigation
  4. Hardware
  5. Features
  6. Getting Started
  7. Usage
  8. Roadmap
  9. Contributing
  10. Acknowledgments

Idea behind the project

The basic idea behind this bot is to create a multi-purpose household bot with enough features to justify its price for an average household. Most household bots today perform only a single task such as cleaning or entertainment, yet without significantly increasing the hardware (and therefore the cost) we can add many more features. The bot helps with day-to-day activities ranging from baby care and cleaning to security and much more. Small, innovative ideas such as detachable components and DIY tutorials increase its usability and reduce its cost. We also make it modifiable and programmable by users, so they can experiment with it, learn, and add new features. The core idea is to add innovative features with limited hardware that solve small day-to-day problems.

This is a boon for working parents who are looking for a reliable way to balance parenting and work. We know that parents want the best for their child and are ready to spend any amount of money to secure an amazing future for them, but with our bot we provide all the necessary help at the best quality while taking care of their pockets!

(back to top)

About the project

We built our own vacuum-cleaning household bot entirely from scratch using SolidWorks and ran various tests, such as airflow simulation and stress analysis, to verify its functionality in the real world. For navigation we used a differential drive along with the ROS Navigation Stack and implemented three behaviours: Autonomous Mapping, Autonomous Navigation, and Autonomous Optimal Complete Coverage. We also implemented ResNet and YOLOv3 algorithms for Baby Monitoring, Threat Detection, and Face Recognition.

Built With

The technologies used while building and testing the project are:

(back to top)

Getting Started

How to set up the project

Prerequisites

Before getting started, make sure your system meets the following requirements:

Installation

Now that you're ready with the prerequisites, set up the project using the following steps:

cd catkin_ws/src
git clone https://github.com/harshmahesheka/Multi-Purpose-HouseHold-Bot
catkin build ./

To launch the bot in the household environment, run

roslaunch rbot house.launch 

(back to top)

Hardware

The CAD model of the bot was created using SolidWorks. A URDF file was then created from the model, taking into account the motion along all the links to be controlled and simulated using ROS. The bot's vacuum system is based on a centrifugal pump. A centrifugal pump is a machine that imparts energy to a fluid; this energy can make the fluid flow or rise to a higher level. It consists of two basic parts: the rotary element (impeller) and the stationary element (casing).

bot

We also conducted the following tests to ensure the bot functions correctly:

a. Airflow (CFD, computational fluid dynamics): simulated the trajectories of air and dust particles inside the bot's vacuum system and around the fan to verify that it works properly, that dust is collected correctly, and to calculate the suction force.

b. Stress analysis (FEA, finite element analysis): ensured that the main body and wheels do not deform or break under the weight of the internal components, or when the bot accidentally collides with an object, a wall, or a person.

(back to top)

Navigation

ros

We used a differential drive along with the ROS Navigation Stack to autonomously navigate the bot through the house. We developed four modes for its navigation:

a. Teleop Controlled: you can directly control the bot's motion by publishing on the cmd_vel topic, as sketched below.
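
A minimal sketch of commanding the base from Python, assuming the bot listens on the standard /cmd_vel topic with geometry_msgs/Twist messages (the usual interface for a ROS differential-drive base); the velocity values are only examples:

```python
#!/usr/bin/env python
# Minimal teleop sketch: publish a constant velocity command on /cmd_vel.
# Assumes the standard Twist interface; the values below are placeholders.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('teleop_sketch')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # forward speed in m/s
cmd.angular.z = 0.5  # turning rate in rad/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```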

b. Autonomous Mapping: used when a new bot arrives at a home; the bot autonomously maps the whole house by seeking out regions that are not yet mapped. We used explore_lite along with the ROS Navigation Stack for this. To launch this mode, run

roslaunch rbot mapping.launch

c. Autonomous Navigation: lets you send the bot anywhere in the generated map while avoiding both static and dynamic obstacles. We used the ROS Navigation Stack with teb_local_planner as the local planner. To launch this mode, run the command below; a goal can also be sent programmatically, as sketched after it.

roslaunch rbot navigation.launch
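
Since this mode is built on the ROS Navigation Stack, a goal can also be sent through the standard move_base action interface. A minimal sketch, assuming navigation.launch brings up the usual move_base node; the frame name and goal coordinates are placeholders:

```python
#!/usr/bin/env python
# Minimal sketch: send one navigation goal to move_base via actionlib.
# Assumes the standard move_base node from the ROS Navigation Stack is running.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal_sketch')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'        # plan in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0          # example goal position
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0       # no rotation at the goal

client.send_goal(goal)
client.wait_for_result()
```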

d. Autonomous Complete Coverage: at night you can simply run this mode and the bot will autonomously vacuum your whole house following an optimal complete-coverage path. We used full_coverage_path_planner as the global planner along with teb_local_planner as the local planner. To launch this mode, run

roslaunch rbot complete_coverage.launch

Find the demo of Navigation implementation here

(back to top)

Features

AI/ML Dependencies and Packages Required:

  • OpenCV
  • CV_bridge
  • Matplotlib
  • Numpy
  • TensorFlow-Keras (required for face recognition)

Face Recognition

  • We use a custom-built AI model that combines a pretrained ResNet model with the OpenCV Haar cascade function to develop the face recognition algorithm.

  • The weights of the model were pretrained; they convert a face image into a vector of length 128. The vectors are then compared with each other using a Siamese network architecture (a minimal sketch of this comparison follows this list).

  • The OpenCV Haar cascade function first detects all the faces in a given frame, and the detected faces are then compared with those already stored in the system's database.

  • If the faces match, i.e. the distance between embeddings is within a threshold value, the bot recognises the person and greets them. If not, the bot treats them as an intruder and triggers an alarm through the system.

  • You can see the code here
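
A minimal sketch of the detect-embed-compare pipeline described above. The names embedding_model (the pretrained 128-dimensional embedder), known_embeddings (the stored database of face vectors), the crop size, and the distance threshold are placeholders for illustration, not this repository's actual API:

```python
# Minimal face-recognition sketch: Haar cascade detection + embedding comparison.
# `embedding_model`, `known_embeddings`, the crop size and the threshold are
# placeholders for illustration only.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

THRESHOLD = 0.6  # example distance below which two faces count as the same person

def recognise(frame, embedding_model, known_embeddings):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(frame[y:y + h, x:x + w], (96, 96)) / 255.0
        vec = embedding_model.predict(crop[np.newaxis])[0]  # 128-d embedding
        # Euclidean distance against every stored face; the closest match wins.
        name, dist = min(((n, np.linalg.norm(vec - ref))
                          for n, ref in known_embeddings.items()),
                         key=lambda item: item[1])
        if dist < THRESHOLD:
            print('Recognised %s, greeting them' % name)
        else:
            print('Unknown face, triggering the intruder alarm')
```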

ros2

Find the demo of Face Detection algorithm implementation here

Baby Threat Detection

  • The same algorithms used above are applied here.
  • The objective is to protect the baby from an outsider or intruder when the parents are not at home.
  • The algorithm first recognises all the faces in the frame; if both the baby and an intruder are detected in the same frame, it triggers an alarm (a sketch of this trigger logic follows this list).
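
A minimal sketch of the trigger logic, assuming the face-recognition step returns a list of labels for the current frame with unmatched faces marked 'intruder'; sound_alarm stands in for the bot's actual alert mechanism:

```python
# Minimal threat-check sketch; `labels` comes from the face-recognition step
# and `sound_alarm` is a placeholder for the bot's actual alert mechanism.
def sound_alarm():
    print('ALERT: intruder detected near the baby!')

def check_threat(labels):
    # Trigger only when the baby and an unknown person appear in the same frame.
    if 'baby' in labels and 'intruder' in labels:
        sound_alarm()
```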

ros3

Find the demo of Threat Detection for baby here

Baby Following

  • As we know, babies crawl around all the time.
  • They may swallow things or end up near dangerous objects like knives, scissors, etc.
  • This feature makes the bot follow the baby around and trigger an alarm or send the parents a message when the baby is near a harmful object.
  • Here we use a custom-built algorithm with pretrained YOLOv3 weights to perform object detection (a minimal sketch follows this list).
  • The YOLOv3 (You Only Look Once) model is an object detection architecture that uses a 106-layer network comprising multiple convolution, residual, and 1×1 kernel layers to perform the object detection task.
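
A minimal sketch of running a pretrained YOLOv3 detector with OpenCV's DNN module; the yolov3.cfg, yolov3.weights and coco.names files are the standard public Darknet files, downloaded separately, and the paths and confidence threshold are placeholders:

```python
# Minimal YOLOv3 sketch using OpenCV's DNN module with the public pretrained
# Darknet files (paths and threshold are placeholders for illustration).
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
classes = open('coco.names').read().splitlines()
output_layers = net.getUnconnectedOutLayersNames()

def detect_objects(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detected = []
    for output in net.forward(output_layers):
        for row in output:
            scores = row[5:]                  # per-class confidences
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold:
                detected.append(classes[class_id])
    return detected  # e.g. ['person', 'knife'] would warrant alerting the parents
```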

ros3

To launch this mode, run the following commands:

cd catkin_ws/src/Multi-Purpose-HouseHold-Bot/yolo/src
rosrun yolo cmd_vel_robot.py

Find the demo of baby following here

(back to top)

Usage

From cleaning the house to taking care of their baby by monitoring and following it, our bot is just what a working parent needs for their child. You can even video call from any remote location using our bot! Communication, security, hygiene, child’s social growth, you name it and we’ve got it covered with our bot that is truly multipurpose.

The project is open source and you are free to modify it as per your needs.

(back to top)

Roadmap

  • Built our very own bot design completely from scratch
  • Testing of Airflow (CFD-Computational Fluid Dynamics)
  • Testing of Stress analysis (FEA-Finite Element Analysis)
  • Implemented custom-written YOLOv3 and ResNet algorithms
  • Created algorithms for navigation and complete-coverage vacuuming for the bot
  • Hardware of the bot (awaiting funding for our innovative idea)

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b branchName)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin branchName)
  5. Open a Pull Request

(back to top)