Hello! Here's a short story about how we trained YOLOv4, YOLOv5, and Mask R-CNN models and benchmarked them on a dataset of Avocados (🥑) and Cavas (🐍).
Avocados contain more fat than any other fruit 🥑
Let's kick off by describing the two types of datasets we had for training: one for the YOLO family of models and one for Mask R-CNN, since it requires not just bounding boxes (as the YOLO models do) but actual segmentation masks. Pictures of our lovely plushies were taken all around the University of Innopolis (dormitories included) 🗺️.
For training YOLOv4 and YOLOv5 it was enough to annotate bounding boxes with 🥑 and 🐍 on Roboflow and use the download link in the Colab notebook to retrieve the dataset. Its labelled version looks like this:
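By the way, the download cell mentioned above boils down to a couple of lines like these. This is just a sketch: the export URL and key below are placeholders, the real link is generated for you in Roboflow's export dialog.

```python
# Sketch of the Colab cell that pulls the dataset from Roboflow.
# The link is a placeholder; paste the one Roboflow generates for you.
!curl -L "https://app.roboflow.com/ds/PLACEHOLDER?key=YOUR_KEY" > roboflow.zip
!unzip -o roboflow.zip -d dataset
!rm roboflow.zip
```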
For this model we needed an instance segmentation labelling interface, and we gladly used Label Studio, as it is free and supports collaborative labelling, which made the work twice as fast. Labelled avocados 🥑 look somewhat like this:
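Label Studio can export annotations in COCO format, which is what Mask R-CNN tooling expects. Here's a hedged sketch of loading such an export and turning the polygons into binary masks; the file name is a placeholder, not our actual export.

```python
# A minimal sketch, assuming a COCO-format export from Label Studio.
# "annotations.json" is a placeholder name for that export file.
from pycocotools.coco import COCO

coco = COCO("annotations.json")
img_info = coco.loadImgs(coco.getImgIds())[0]   # first image's metadata
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))

# Each annotation (a 🥑 or 🐍 polygon) becomes an HxW binary mask
masks = [coco.annToMask(ann) for ann in anns]
labels = [ann["category_id"] for ann in anns]
print(f"{img_info['file_name']}: {len(masks)} instance masks")
```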
Very sequential and straightforward. Only minor changes were made to the original notebooks 👍.
Legend says that the metric values are provided in the notebooks 🤓
No words needed. Let's just look at how the trained models detected those cuties:
Nothing special needed. Just download the notebook and run all cells; it should work fine 🤔.
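For the curious, the heart of that notebook is the darknet build-and-train sequence. Roughly like this; the config and data file names follow the tutorial's conventions and are placeholders here, not our exact cells:

```python
# Rough sketch of the YOLOv4 Colab flow (AlexeyAB's darknet).
!git clone https://github.com/AlexeyAB/darknet
%cd darknet
# GPU=1 / CUDNN=1 need to be enabled in the Makefile before building
!make
# obj.data and the .cfg come with the Roboflow export; yolov4.conv.137
# is the pretrained backbone the tutorial downloads separately
!./darknet detector train data/obj.data cfg/custom-yolov4-detector.cfg yolov4.conv.137 -dont_show
```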
Yep, nothing special needed here either. All the links and downloads are provided in the ipynb 🤙.
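The training cell itself is the standard YOLOv5 one, something like the sketch below; the image size, batch size, and epoch count here are illustrative, the real values live in the notebook:

```python
# Rough sketch of the YOLOv5 training cell; hyperparameters are illustrative.
!git clone https://github.com/ultralytics/yolov5
%cd yolov5
!pip install -r requirements.txt
# data.yaml comes with the Roboflow export; the path here is a placeholder
!python train.py --img 416 --batch 16 --epochs 100 \
    --data ../dataset/data.yaml --weights yolov5s.pt
```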
Things get a bit tricky here: we didn't train this one on the Roboflow-labelled dataset but on the Label Studio one, so to get the notebook working you need to have that dataset on your Google Drive. That's it 🙃.
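Mounting the Drive is a single cell; the dataset path below is hypothetical, just point it at wherever you uploaded the Label Studio export:

```python
# Mount Google Drive in Colab so the notebook can see the dataset.
from google.colab import drive
drive.mount("/content/drive")

# Hypothetical location; adjust to where you put the exported dataset
DATA_ROOT = "/content/drive/MyDrive/avocado_cava_dataset"
```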
All models were trained following the official Colab tutorials from Roboflow and PyTorch: