Improving object detection using YOLO v4 for the deep-weeding-bot project. Object detection is customized for corn vs. weed detection on actual footage from a robot traversing corn fields.
See the Medium article for the mAP and project cost analysis I did: link
Libraries used:
Tutorials used:
- Notebook: yolo_v4.ipynb
- Data augmentation code: yolo_setup
- mAP and project cost analysis: Medium
Objective: create an object detection algorithm that distinguishes corn from weeds on real-life video data collected in the field.
1. Distinguish corn from weeds
- it is sufficient if weeds can be distinguished from "other" entities the majority of the time
2. Detect weeds with high recall
- the robot will only target weeds for action, so it is more critical to achieve high recall, i.e. to correctly detect as many of the actual weeds (positives) in the dataset as possible
The use case simulates a robot deployed in a corn field to kill the majority of weeds while minimizing damage to the corn crops. Thus, the main objective is to "play it safe" and target weeds with high recall, rather than act on corn detections.
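For reference, recall for the "weed" class reduces to the standard definition, TP / (TP + FN). A minimal sketch in Python; the counts are illustrative, not results from this project:

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual weeds in the dataset that the model detected."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only: 90 of 100 actual weeds detected -> recall = 0.9
print(recall(true_positives=90, false_negatives=10))
```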
Approach: train a YOLO model on custom data, i.e. actual video footage of the robot traversing a corn field with weeds in its path. The footage is separated into images, and the corn and weed objects in those images are labelled for training.
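As a minimal sketch, training on the two custom classes could look like the notebook cell below. It assumes the AlexeyAB darknet fork (the toolchain most YOLOv4 tutorials build on); the paths and filenames are illustrative, not necessarily this repo's:

```python
# data/obj.data points to the train/valid image lists and the names file
# ("corn", "weed"); cfg/yolov4-custom.cfg is the stock config edited to
# classes=2 and filters=(2+5)*3=21 in the layers before each [yolo] block;
# yolov4.conv.137 holds the pretrained convolutional weights.
# -map periodically reports mAP on the validation set while training.
!./darknet detector train data/obj.data cfg/yolov4-custom.cfg yolov4.conv.137 -map
```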
- Video from a robot-mounted GoPro camera traversing actual corn fields with weeds growing in the paths
- Corn and weeds manually labelled in the images extracted from the footage, using labelImg
- Data augmentation to distort images and expand the dataset; see the sketch below
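A minimal sketch of this kind of augmentation, assuming YOLO-format label files (each row: class x_center y_center width height, normalized to [0, 1]); the OpenCV-based function below is illustrative, not the repo's actual yolo_setup code:

```python
import cv2
import numpy as np

def augment(image_path: str, label_path: str):
    """Yield (image, labels) variants of one labelled frame."""
    img = cv2.imread(image_path)
    labels = np.loadtxt(label_path, ndmin=2)  # one row per bounding box

    yield img, labels  # original

    # Horizontal flip: mirror the image and the normalized x_center column.
    flipped_labels = labels.copy()
    flipped_labels[:, 1] = 1.0 - flipped_labels[:, 1]
    yield cv2.flip(img, 1), flipped_labels

    # Brightness shift: photometric distortion leaves the boxes unchanged.
    yield cv2.convertScaleAbs(img, alpha=1.0, beta=40), labels
```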