I am confused about how you define "wrong bounding boxes", since YOLOv5 can generate multiple bounding boxes per object.

Given:
True positives: 88.6%
False positives: 9.6%
False negatives: 11.4%

how can I calculate the percentages for wrong bounding boxes and misclassifications separately? I manually checked the misclassification rate on my validation dataset and it came out as 0.64%, so does that mean I have 8.96% wrong bounding boxes? To confirm this, I computed the IoU overlap between the annotations of my validation dataset and the detections generated by YOLOv5, and the mean IoU came out to be 0.995. Why is there such a big difference?
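For reference, this is roughly how I am trying to split the errors per image. It is a minimal sketch, not my exact evaluation code: the greedy best-IoU matching, the `iou_thr=0.5` threshold, and the `[x1, y1, x2, y2]` box format are all assumptions on my side.

```python
import numpy as np

def iou_matrix(gt_boxes, det_boxes):
    """Pairwise IoU between ground-truth and detected boxes in [x1, y1, x2, y2] format."""
    gt = np.asarray(gt_boxes, dtype=float)    # shape (G, 4)
    det = np.asarray(det_boxes, dtype=float)  # shape (D, 4)
    # Intersection rectangle coordinates, broadcast to shape (G, D)
    x1 = np.maximum(gt[:, None, 0], det[None, :, 0])
    y1 = np.maximum(gt[:, None, 1], det[None, :, 1])
    x2 = np.minimum(gt[:, None, 2], det[None, :, 2])
    y2 = np.minimum(gt[:, None, 3], det[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_gt = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    area_det = (det[:, 2] - det[:, 0]) * (det[:, 3] - det[:, 1])
    union = area_gt[:, None] + area_det[None, :] - inter
    return inter / np.clip(union, 1e-9, None)

def split_errors(gt_boxes, gt_classes, det_boxes, det_classes, iou_thr=0.5):
    """Match each detection to its best ground truth and bucket the outcome.

    Returns counts of true positives, misclassifications (good box, wrong
    class) and bad boxes (duplicates or poorly localized detections).
    NOTE: iou_thr=0.5 is an assumed threshold, not taken from YOLOv5 defaults.
    """
    counts = {"tp": 0, "misclassified": 0, "bad_box": 0}
    if len(gt_boxes) == 0 or len(det_boxes) == 0:
        counts["bad_box"] = len(det_boxes)  # every detection is spurious
        return counts
    ious = iou_matrix(gt_boxes, det_boxes)
    used = set()  # each ground truth may be claimed by at most one detection
    for d in range(len(det_boxes)):
        g = int(np.argmax(ious[:, d]))
        if ious[g, d] >= iou_thr and g not in used:
            used.add(g)
            if det_classes[d] == gt_classes[g]:
                counts["tp"] += 1             # right place, right class
            else:
                counts["misclassified"] += 1  # right place, wrong class
        else:
            counts["bad_box"] += 1            # duplicate or badly localized box
    return counts
```

I accumulate these counts over all validation images and divide `misclassified` and `bad_box` by the total number of detections to get the two percentages separately. One thing I notice: the mean IoU of 0.995 is computed only over the matched pairs, so it would not reflect the unmatched false-positive boxes at all. Is that the right way to think about it?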