train loss #6

Open
ZackGoing opened this issue Aug 16, 2020 · 4 comments

@ZackGoing

Total time: 18:52:17.857143, iter: 0:00:07.628324, epoch: 1:59:21.331554
[Iteration 8904] [learning rate 0.001] [Total loss 150.54] [img size 672]
level_21 total 6 objects: xy/gt 1.189, wh/gt 0.078, angle/gt 0.475, conf 28.056
level_42 total 14 objects: xy/gt 1.211, wh/gt 0.041, angle/gt 0.049, conf 42.867
level_84 total 7 objects: xy/gt 1.332, wh/gt 0.062, angle/gt 0.135, conf 40.987
Max GPU memory usage: 3.0111474990844727 GigaBytes

The conf loss is still high at iteration 8904. Is this normal?

@duanzhiihao
Owner

Yes, it's normal, since the confidence loss (i.e., self.loss4obj in the code) is the summation of the conf. losses of all anchor boxes. There are thousands of anchor boxes in each image, so the confidence loss is usually very large.

self.loss4obj = nn.BCELoss(reduction='sum')
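
A minimal sketch of the effect (not code from this repo; the grid sizes 21/42/84 are taken from the log above for img size 672, and the 3-anchors-per-cell count is an assumption for illustration): even when every background prediction is nearly correct, summing the per-anchor BCE over all anchors gives a number in the hundreds.

```python
import torch
import torch.nn as nn

loss4obj = nn.BCELoss(reduction='sum')

# Grid sizes 21, 42, 84 come from the log above; 3 anchors per cell is assumed.
num_anchors = 3 * (21 * 21 + 42 * 42 + 84 * 84)   # 27,783 anchor boxes per image

pred_conf = torch.full((num_anchors,), 0.01)      # nearly-correct background scores
target = torch.zeros(num_anchors)                 # almost every anchor is background

total = loss4obj(pred_conf, target)
print(total.item())                  # ~279: large, even though each anchor's loss is ~0.01
print((total / num_anchors).item())  # per-anchor average is tiny
```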

@ZackGoing
Author

Would you like to share your train.log? Thanks.

@duanzhiihao
Owner

Sorry, I don't have access to the training logs.

Did you try to visualize the predicted bounding boxes using the trained weights?

@ZackGoing
Author

Not yet.
