mAP is too low but trained ckpt detects objects well #27
Comments
I have found the reason. My dataset has 4 labels, so I should construct a corresponding dictionary variable that stores the AP of every label in order to compute mAP.
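For anyone hitting the same issue, here is a minimal, self-contained sketch of that idea. The `voc_ap` helper and the dummy precision/recall curves are illustrative assumptions, not the repo's actual evaluation code:

```python
import numpy as np

def voc_ap(recall, precision):
    """VOC-style AP: area under the precision-recall curve,
    after making precision monotonically decreasing."""
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Envelope: precision at each recall is the max precision to its right.
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum rectangle areas where recall changes.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])

# One AP entry per class label. With 4 labels the dictionary must hold
# exactly 4 entries; averaging over a dict that is missing classes (or
# padded with extra zero entries) drags the mean down.
pr_curves = {  # dummy precision/recall curves, for illustration only
    1: (np.array([0.5, 1.0]), np.array([1.0, 0.8])),
    2: (np.array([0.5, 1.0]), np.array([0.9, 0.7])),
    3: (np.array([0.5, 1.0]), np.array([1.0, 0.9])),
    4: (np.array([0.5, 1.0]), np.array([0.8, 0.6])),
}
ap_per_class = {label: voc_ap(r, p) for label, (r, p) in pr_curves.items()}
mAP = sum(ap_per_class.values()) / len(ap_per_class)
print(ap_per_class, mAP)
```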
Hi @Janezzliu, glad to learn you've got the issue fixed. Nice debugging!
@LevinJ , for all the poor souls out there still trying to figure out why there is such a big gap in evaluation:
I am leaving this comment here since it's the most recent one regarding mAP. I also haven't tested your code, but I see that the script there is the same, so I am going to assume that the error persists. I hope this helps.
Hi @bnbhehe, thanks for sharing your findings! Can you elaborate a bit on "The tfrecords have the difficults clamped to 0, therefore the ground truths are wrong. All ground truths are labeled as non difficults where they shouldn't, because they are sorted out in the bboxes_matching method."? Which lines of code clamp the difficults attribute of the training/evaluation samples to 0? I tried checking the code, but was not able to find them.
In the evaluation script you can notice this data corruption if you try to evaluate with difficults by switching the corresponding flag.
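For context on why clamped difficult flags distort the numbers, here is a rough numpy sketch of the usual VOC matching rule. It is illustrative only, not the repo's bboxes_matching implementation: detections matched to a difficult ground truth are ignored, and difficult ground truths are excluded from the recall denominator, so clamping every flag to 0 silently turns missed hard boxes into false negatives.

```python
import numpy as np

def match_detections(det_scores, det_ious, gt_difficult, iou_thr=0.5):
    """Hypothetical VOC-style matching for one class in one image.
    det_ious[i, j] = IoU between detection i and ground truth j."""
    order = np.argsort(-det_scores)            # process high scores first
    gt_used = np.zeros(len(gt_difficult), dtype=bool)
    tp, fp = [], []
    for i in order:
        j = int(np.argmax(det_ious[i]))        # best-overlapping ground truth
        if det_ious[i, j] >= iou_thr and gt_difficult[j]:
            continue                           # difficult match: neither TP nor FP
        if det_ious[i, j] >= iou_thr and not gt_used[j]:
            gt_used[j] = True
            tp.append(1); fp.append(0)         # first match of a real GT box
        else:
            tp.append(0); fp.append(1)         # duplicate or low-IoU detection
    # Difficult ground truths do not count toward the recall denominator.
    n_gt = int(np.sum(~np.asarray(gt_difficult, dtype=bool)))
    return np.array(tp), np.array(fp), n_gt

# One detection, two GT boxes; the second GT is difficult and undetected.
scores = np.array([0.9])
ious = np.array([[0.8, 0.1]])
print(match_detections(scores, ious, [False, True]))   # n_gt=1 -> recall 1.0
print(match_detections(scores, ious, [False, False]))  # clamped: n_gt=2 -> recall 0.5
```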
Hi @bnbhehe, I'm not sure if it's an environment setup related issue, but I checked the annotation file and there is indeed a difficult field. What are your thoughts on this?
It should return the element, but it's NoneType and the check will fail. Only calling .text gives a value, but if the label is not present it will crash. I used Python 3.5 and didn't get the behavior I wanted from this XML parser, so I rewrote process_image in the tfrecords script with xmltodict. I would suggest you check the parser's behavior yourself, and I would also write a small decoding script to see the number of difficult annotations present. Please reply with the best mAP you get once this is fixed.
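A minimal sketch of one plausible version of this ElementTree pitfall (the XML snippet and surrounding code are illustrative, not the repo's exact process_image): an Element with no child elements is falsy, so a bare truthiness test skips the tag even when it exists.

```python
import xml.etree.ElementTree as ET

xml_snippet = """
<annotation>
  <object>
    <name>dog</name>
    <difficult>1</difficult>
  </object>
</annotation>
"""
root = ET.fromstring(xml_snippet)
obj = root.find('object')

# Buggy pattern: <difficult>1</difficult> has no child elements, so the
# Element is falsy and the code silently falls back to difficult = 0.
difficult = 0
if obj.find('difficult'):              # wrong truthiness test
    difficult = int(obj.find('difficult').text)
print('buggy:', difficult)             # -> 0, despite <difficult>1</difficult>

# Correct pattern: compare against None explicitly.
node = obj.find('difficult')
difficult = int(node.text) if node is not None else 0
print('fixed:', difficult)             # -> 1
```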
Hi @bnbhehe, I checked the code a bit more closely and agree that you are correct: currently all bounding boxes are mistakenly labelled as non-difficult. This is because the check on the difficult field in the XML parsing always fails, so every box falls back to difficult = 0. As for evaluating the model on the non-difficult ground truth labels, I am currently quite tied up with other stuff, and might do it when I have some time :) By the way, can you tell me where you found that "The original caffe implementation achieves 0.69 mAP on evaluation with difficult ground truths."? Thanks.
@Janezzliu I got the same problem. I was wondering in which file to construct the dictionary.
Hi @LevinJ, I applied SSD_tensorflow_VOC to my own dataset. I first train the SSD-specific weights with self.max_number_of_steps = 10000, then train the VGG16 and SSD-specific weights with self.max_number_of_steps = 900000. The first stage has finished and the second stage has reached step 60000. My loss is around 1.8, training mAP is 0.18, and testing mAP is 0.17. However, when I use the trained ckpt to detect objects in testing pictures, it does well! So I went through your code and the website https://sanchom.wordpress.com/tag/average-precision/ to learn how mAP is computed, but I didn't find anything wrong. I'm quite confused: the testing results with the trained ckpt don't match a mAP of 0.17.
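For reference, the precision/recall accumulation that feeds the AP computation described on that page looks roughly like this. The tp, fp, and n_gt values are illustrative dummies:

```python
import numpy as np

# Suppose tp and fp are 0/1 arrays over all detections of one class,
# sorted by descending confidence, and n_gt is the number of
# non-difficult ground truths for that class.
tp = np.array([1, 1, 0, 1, 0])
fp = 1 - tp
n_gt = 4

tp_cum = np.cumsum(tp)
fp_cum = np.cumsum(fp)
recall = tp_cum / n_gt                  # fraction of GT recovered so far
precision = tp_cum / (tp_cum + fp_cum)  # fraction of detections correct so far
print(recall)     # [0.25 0.5  0.5  0.75 0.75]
print(precision)  # [1.   1.   0.667 0.75 0.6 ]
```

Note how an inflated n_gt (for example, from difficult boxes wrongly counted as regular ground truths) shrinks every recall value, which directly lowers the reported AP even when detection quality is good.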