Focal loss value is very low #2
Hi, for Q1: what label should we set for the background, positive or negative? Where should we choose the bounding boxes, and how many should we assign to the background? We can reason about this simply from the meaning of the loss function. Q2: Yes, you are right. My result also used sum() instead. The positive-example term only considers the losses of the positive anchors. Q3: Use reduce_sum() instead. |
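To make the "sum, then normalize by positive anchors" point concrete, here is a minimal pure-Python sketch of the focal loss term as described in the RetinaNet paper. The function name and the per-anchor label convention (1 = positive, 0 = background, -1 = ignore) are assumptions for illustration, not the repository's actual code:

```python
import math

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss per anchor, summed and then normalized by the
    number of positive anchors (not averaged over all terms).

    probs:  predicted foreground probability per anchor, in (0, 1)
    labels: 1 for a positive anchor, 0 for background, -1 to ignore
    """
    total, num_pos = 0.0, 0
    for p, y in zip(probs, labels):
        if y == -1:              # ignored anchors contribute nothing
            continue
        if y == 1:
            num_pos += 1
            pt, a = p, alpha     # pt is the probability of the true class
        else:
            pt, a = 1.0 - p, 1.0 - alpha
        total += -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
    return total / max(num_pos, 1)   # normalize by positives only
```

Note the final division: it is by the count of positive anchors, not by the total number of loss terms, which is exactly the difference between sum()-based and mean()-based normalization discussed in this thread.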
Thanks for your response. One more question:
In the first case I am getting around 9000 positive anchor boxes, and in the second case I am getting 200 positive anchor boxes. Which one do you think is correct? |
It depends which network you choose. For example, if you implement SSD, you may need to search for anchor boxes on multi-layer features using a max-IoU threshold. If you implement the Faster R-CNN series or YOLO, you just need to compute max-IoU on the last-level feature map. Yes, you need to compute all anchor boxes and then check their max IoU. |
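The max-IoU matching described above can be sketched as follows. This is a toy pure-Python illustration with assumed thresholds (0.5 positive, 0.4 background, the gap ignored, as in RetinaNet); the function names are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def label_anchors(anchors, gt_boxes, pos_thr=0.5, neg_thr=0.4):
    """Max-IoU matching: for each anchor, take its best IoU over all
    ground-truth boxes; >= pos_thr -> positive (1), < neg_thr ->
    background (0), in between -> ignored (-1)."""
    labels = []
    for a in anchors:
        best = max(iou(a, g) for g in gt_boxes)
        if best >= pos_thr:
            labels.append(1)
        elif best < neg_thr:
            labels.append(0)
        else:
            labels.append(-1)
    return labels
```

With these thresholds, the count of positive anchors depends strongly on anchor density and image content, which is why the 9000-vs-200 numbers above can both be plausible on different setups.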
Thanks, man, for your time. I'm using RetinaNet.
Regards,
Prakash V
|
You are welcome. I'm curious about your final training result: did it show a significant improvement? |
Yup, everything is working now as desired. Will be releasing the blog post soon. Thanks for your response, man. Really helped me :) |
Hi,
I have implemented your code in PyTorch and it worked properly, but I have the following concerns.
My pseudocode works like this:
cls_targets = [batch_size, anchor_boxes, classes] # classes is 21 (voc_labels + background), shape [16, 67995, 21]
cls_preds = [batch_size, anchor_boxes] # label values range from -1 to 20, shape [67995, 21]
Now I remove all the anchor boxes labeled -1 (ignore boxes):
cls_targets = [batch_size * valid_anchor_boxes, classes] # [54933, 21]
cls_preds = [batch_size * valid_anchor_boxes, classes] # [54933, 21], a one-hot encoded vector
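The "remove all the anchor boxes labeled -1" step above can be sketched like this. This is a minimal pure-Python illustration with hypothetical names; in a real PyTorch implementation the same thing would be done with a boolean mask:

```python
def filter_ignored(cls_targets, cls_preds):
    """Drop anchors labeled -1 (ignore) before computing the loss.

    cls_targets: per-anchor integer labels in the range -1..num_classes-1
    cls_preds:   per-anchor score vectors (one list of scores per anchor)
    """
    keep = [i for i, t in enumerate(cls_targets) if t >= 0]
    return ([cls_targets[i] for i in keep],
            [cls_preds[i] for i in keep])
```

Only the surviving anchors (positives and background) should enter the focal loss; the ignored ones must contribute nothing to either the sum or the normalizer.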
Now, I followed your code and implemented focal loss as-is, but my loss values are coming out very small: at random initialization the loss is about 0.12, and it quickly drops to 0.0012 and smaller.
Is there something I am missing?
Question 1:
I am still not quite sure whether I should use 0 as my background class, and how normalization is done when focal loss is applied.
Question 2:
I see you have taken mean(), but the paper says we should sum and normalize by the number of positive anchors. Does "positive anchors" mean only the positive anchor boxes, or all valid anchor boxes?
Question 3:
The graphs you presented show the loss starting from 6.45... and decreasing, but mine starts from 0.12 and quickly drops to small decimals.
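The scale gap in Question 3 is consistent with the mean()-vs-sum() issue: with ~55k anchors and 21 classes, mean() divides the total loss by over a million mostly-easy-negative terms, while the paper divides by only the few hundred positives. A toy calculation with assumed per-term loss values (the 0.001 and 2.0 figures are made up purely to illustrate the scale difference):

```python
# Hypothetical numbers: shapes from this thread, per-term losses assumed.
num_anchors, num_classes, num_pos = 54933, 21, 200
easy_neg_loss = 0.001   # assumed average focal term for an easy negative
pos_loss = 2.0          # assumed average focal term for a positive

total = ((num_anchors * num_classes - num_pos) * easy_neg_loss
         + num_pos * pos_loss)

mean_normalized = total / (num_anchors * num_classes)  # ~0.0013
pos_normalized = total / num_pos                       # ~7.8
print(mean_normalized, pos_normalized)
```

The same per-term losses land near 0.001 under mean() and near 7 under sum()/num_pos, which matches the 0.12-vs-6.45 discrepancy the questioner observes.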