train loss #6
Comments
Yes, it's normal, since the confidence loss (i.e., Line 114 in df9a529
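The referenced Line 114 isn't quoted in this thread, so as a rough, non-authoritative illustration only: in a YOLO-style detector the confidence (objectness) term is typically a binary cross-entropy summed over every anchor position, most of which are background, while the xy/wh/angle terms only cover the few matched objects, so its raw magnitude is naturally much larger. A minimal sketch under that assumption, with hypothetical tensors `pred_conf` and `obj_mask` (not this repo's actual code):

```python
# Minimal sketch of a YOLO-style confidence loss (illustrative, not the repo's Line 114).
# pred_conf: raw objectness logits for every anchor position, shape (B, A, H, W)
# obj_mask:  1.0 where an anchor is matched to a ground-truth object, else 0.0
import torch
import torch.nn.functional as F

def confidence_loss(pred_conf, obj_mask, noobj_weight=0.5):
    # BCE is summed over *all* anchor positions (objects and background alike),
    # so its raw value scales with the number of grid cells and can dwarf the
    # per-object xy/wh/angle terms, especially early in training.
    bce = F.binary_cross_entropy_with_logits(pred_conf, obj_mask, reduction="none")
    obj_term = (bce * obj_mask).sum()
    noobj_term = (bce * (1.0 - obj_mask)).sum() * noobj_weight
    return obj_term + noobj_term
```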
Would you like to share your train.log? Thanks!
Sorry, I don't have access to the training logs. Did you try to visualize the predicted bounding boxes using the trained weights?
Not yet.
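For anyone who wants to do that sanity check, here is a minimal sketch of overlaying predicted rotated boxes on an image; it assumes each detection is a hypothetical tuple (cx, cy, w, h, angle_rad, conf) in pixel coordinates and is not this repo's actual API:

```python
# Sketch: draw rotated predicted boxes on an image to sanity-check trained weights.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def rotated_box_corners(cx, cy, w, h, angle):
    """Return the 4 corners of a box rotated by `angle` (radians) about its center."""
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    dx = np.array([-w / 2, w / 2, w / 2, -w / 2])
    dy = np.array([-h / 2, -h / 2, h / 2, h / 2])
    xs = cx + dx * cos_a - dy * sin_a
    ys = cy + dx * sin_a + dy * cos_a
    return np.stack([xs, ys], axis=1)

def show_detections(image, detections, conf_thres=0.3):
    """Draw detections above `conf_thres` on top of `image` (H x W x 3 array)."""
    fig, ax = plt.subplots()
    ax.imshow(image)
    for cx, cy, w, h, angle, conf in detections:
        if conf < conf_thres:
            continue
        corners = rotated_box_corners(cx, cy, w, h, angle)
        ax.add_patch(Polygon(corners, closed=True, fill=False,
                             edgecolor="lime", linewidth=1.5))
        ax.text(cx, cy, f"{conf:.2f}", color="yellow", fontsize=8, ha="center")
    ax.set_axis_off()
    plt.show()
```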
Total time: 18:52:17.857143, iter: 0:00:07.628324, epoch: 1:59:21.331554
[Iteration 8904] [learning rate 0.001] [Total loss 150.54] [img size 672]
level_21 total 6 objects: xy/gt 1.189, wh/gt 0.078, angle/gt 0.475, conf 28.056
level_42 total 14 objects: xy/gt 1.211, wh/gt 0.041, angle/gt 0.049, conf 42.867
level_84 total 7 objects: xy/gt 1.332, wh/gt 0.062, angle/gt 0.135, conf 40.987
Max GPU memory usage: 3.0111474990844727 GigaBytes
The conf loss is still high at iteration 8904. Is this normal?