Preprocessing annotations problem #16
@edgark31 confirmed he has the same problem, so the issue is upstream. The reason this happens is simple. Take a look at the top left corner of the image: the bboxes were generated by processing the GT segmentation of this image. However, if we do this directly on the semantic segmentation mask of the myelin, this is bound to happen. Both of your preprocessing functions extract the "individual" axons using the code at Lines 89 to 102 in e889db7.
But if we do this on the myelin semantic segmentation mask, the touching myelin sheaths obviously get grouped together. Look at what happens when I select these 4 myelin regions in the image (using diagonal neighbors): we can see that these regions are bundled together, and this is actually 100% consistent with the bboxes. The simplest fix would be to use the axon mask instead of the myelin mask for the preprocessing, because the axons are naturally disjoint objects and this grouping would never happen. For @edgark31, the preprocessing function uses the myelin mask; it does read the axon mask but does nothing with it: axon-detection/src/preprocessing.py Lines 112 to 134 in e889db7
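Here is a minimal sketch of the grouping effect, assuming connected-component labeling with skimage.measure.label (the actual call in preprocessing.py may differ, and the toy masks below are made up for illustration):

```python
# Minimal sketch (not the actual preprocessing.py code): touching myelin
# sheaths form a single connected component, so one bbox covers several
# axons, while the disjoint axon mask yields one bbox per axon.
import numpy as np
from skimage.measure import label, regionprops

# Toy masks: two axons whose myelin rings touch.
axon_mask = np.zeros((12, 20), dtype=np.uint8)
axon_mask[4:8, 3:7] = 1      # axon 1
axon_mask[4:8, 12:16] = 1    # axon 2

myelin_mask = np.zeros_like(axon_mask)
myelin_mask[2:10, 1:9] = 1   # myelin around axon 1
myelin_mask[2:10, 9:18] = 1  # myelin around axon 2 (touches the first one)
myelin_mask[axon_mask == 1] = 0

def bboxes(mask, connectivity=2):  # connectivity=2 -> diagonal neighbors
    return [r.bbox for r in regionprops(label(mask, connectivity=connectivity))]

print("myelin bboxes:", bboxes(myelin_mask))  # 1 bbox spanning both axons
print("axon bboxes:  ", bboxes(axon_mask))    # 2 separate bboxes
```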
For COCO, I'm not entirely sure why this is happening, because in that preprocessing function the axon mask is used. However, this does not make sense, because your bboxes are the same as Edgar's (and we can see it visually as well in your visualization). @MurielleMardenli200, are you using a different implementation than what is on master by any chance? axon-detection/src/preprocessing.py Lines 249 to 265 in e889db7
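As a hypothetical sanity check (this is not the repo's code; the file name and annotation keys are assumptions beyond the standard COCO layout), one could regenerate bboxes from both masks and compare them with what ended up in the annotation file. COCO stores bboxes as [x_min, y_min, width, height]:

```python
# Hypothetical check: regenerate bboxes from the axon and myelin masks and
# see which set matches the saved COCO annotations.
import json
import numpy as np
from skimage.measure import label, regionprops

def coco_bboxes_from_mask(mask, connectivity=2):
    boxes = []
    for region in regionprops(label(mask, connectivity=connectivity)):
        min_row, min_col, max_row, max_col = region.bbox
        # Convert (min_row, min_col, max_row, max_col) to COCO [x, y, w, h].
        boxes.append([float(min_col), float(min_row),
                      float(max_col - min_col), float(max_row - min_row)])
    return boxes

# File name below is an assumption for illustration only.
with open("annotations_val.json") as f:
    coco = json.load(f)
saved = sorted(ann["bbox"] for ann in coco["annotations"])

# axon_mask / myelin_mask would be loaded from the GT segmentation files:
# print(sorted(coco_bboxes_from_mask(axon_mask)) == saved)
# print(sorted(coco_bboxes_from_mask(myelin_mask)) == saved)
```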
Ahh I see @MurielleMardenli200, you are working on
Description of problem
The current method preprocess_data_coco in the preprocessing file, which creates the COCO annotations for the RetinaNet model (see here), does not create the right bounding box dimensions.

Example
This example shows the ground truth bounding boxes generated from the validation annotation file. The problem is that some axons are grouped together in certain bounding boxes.
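For reference, a minimal way to overlay the generated bounding boxes on an image and spot grouped axons, using matplotlib (the file names and image_id below are assumptions; the original visualization script is not shown in this issue):

```python
# Hypothetical visualization helper: draw the COCO bboxes from the validation
# annotation file on top of the corresponding image.
import json
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from skimage.io import imread

def show_coco_bboxes(image_path, annotation_path, image_id):
    image = imread(image_path)
    with open(annotation_path) as f:
        coco = json.load(f)

    fig, ax = plt.subplots()
    ax.imshow(image, cmap="gray")
    for ann in coco["annotations"]:
        if ann["image_id"] != image_id:
            continue
        x, y, w, h = ann["bbox"]  # COCO format: [x_min, y_min, width, height]
        ax.add_patch(patches.Rectangle((x, y), w, h,
                                       fill=False, edgecolor="red", linewidth=1))
    plt.show()

# Example call with assumed file names:
# show_coco_bboxes("val_image_0.png", "annotations_val.json", image_id=0)
```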