@AyushExel sounds like a good research direction! VOC can return some pretty high mAP numbers in comparison to COCO, so it's hard to say whether 69.1% is good or not. We have a public VOC project here showing about 86% and 88% mAP@0.5 (VOC metric) with YOLOv5s and YOLOv5s6. Their 'computation cost (ops)' doesn't seem to align well with our own YOLOv3-tiny measurement of 13.2 GFLOPS at 640x640 either; not sure where the discrepancy lies, maybe inference-size assumptions: https://github.com/ultralytics/yolov3#pretrained-checkpoints

Note for VOC our default train script is shown in the Google Colab notebook here:

```python
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']):  # zip(batch_size, model)
  !python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
```
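One likely source of the 'computation cost (ops)' discrepancy mentioned above is the assumed inference resolution: for a fully-convolutional detector, FLOPs scale roughly with the input area. A quick back-of-the-envelope sketch (the 13.2 GFLOPs @ 640x640 figure is from the comment above; the other resolutions are illustrative assumptions):

```python
def scale_gflops(gflops_at_ref: float, ref_size: int, new_size: int) -> float:
    """Approximate GFLOPs of a fully-convolutional model at a new square
    input resolution, assuming FLOPs scale with input area (side squared)."""
    return gflops_at_ref * (new_size / ref_size) ** 2

# YOLOv3-tiny measured at 13.2 GFLOPs @ 640x640 (figure from the comment above)
for size in (320, 416, 512):
    print(f"{size}x{size}: ~{scale_gflops(13.2, 640, size):.1f} GFLOPs")
```

So a paper quoting ops at 416x416 would report well under half the 640x640 number for the same model, which could explain why the figures don't line up.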
-
Experiment in brief
I'm trying to test an implementation of the YOLO Nano model. Here is my approach:
Motivation for this experiment:
The numbers reported in the YOLO Nano research paper are quite striking. A 4MB model with 69.1% mAP on VOC 2007 would be a major improvement in terms of robustness for edge devices and online computation.
@glenn-jocher Are there any steps that I should keep in mind when running this study? For example, to make this reproducible, is it recommended to set or modify any random seeds?
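For context on the seed question, a minimal sketch of what seeding a PyTorch training run usually involves (this is a generic illustration, not the project's own seeding code; the `set_seeds` helper name is my own):

```python
import random

import numpy as np
import torch


def set_seeds(seed: int = 0) -> None:
    """Seed Python, NumPy and PyTorch RNGs for a more reproducible run.

    Note: full determinism on GPU also requires deterministic cuDNN
    settings, which can slow training down, and some CUDA ops remain
    nondeterministic regardless.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with fixed seeds, multi-worker data loading and GPU kernel scheduling can introduce small run-to-run variation, so averaging over a few runs is usually the safer way to compare mAP numbers.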