Is this a typo in face-keypoint/train.py? #3
Comments
If I remember correctly, I think the goal was just to monitor the loss at the last layer (highest resolution).
Thanks. Is there any reason that multi-stage loss is not being used here? I recall that both OpenPose and CPM use multi-stage loss for backprop.
Hi, I trained the facenet using the script you provided on the LS3D-W dataset. The MSE and NEG training losses are quite low, around 4e-4 within the first few iterations, and the validation loss is almost the same. Is anything going wrong? Thanks.
I haven't looked at face keypoints in a while, so I am not entirely sure if that is what I used to get in terms of loss. However, I don't think it is a problem.
Hi,
Thanks for sharing the code! When I went through the training code, I got confused at https://github.com/e-lab/pytorch-demos/blob/master/face-keypoint/train.py#L133 and https://github.com/e-lab/pytorch-demos/blob/master/face-keypoint/train.py#L186. Should the total_loss assignment be moved into the preceding for loop? Otherwise the loss is not accumulated across the loop's iterations. Thanks!
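To make the accumulation concern concrete, here is a minimal sketch of the control-flow difference being asked about. This is not the actual train.py code; stage_losses is a hypothetical stand-in for the per-stage losses a CPM/OpenPose-style multi-stage model would produce, using plain floats instead of tensors:

```python
# Hypothetical per-stage losses from a three-stage model (placeholder values).
stage_losses = [0.9, 0.5, 0.3]

# Pattern A: total_loss is set after the loop body finishes, so only the
# loss from the final stage (highest resolution) is kept.
for loss in stage_losses:
    pass
total_loss_last_only = loss  # 0.3 -- last stage only

# Pattern B: accumulating inside the loop sums the loss over every stage,
# which is what multi-stage supervision (as in CPM/OpenPose) would need.
total_loss_all_stages = 0.0
for loss in stage_losses:
    total_loss_all_stages += loss  # 0.9 + 0.5 + 0.3 = 1.7
```

With real PyTorch tensors the same distinction holds: backpropagating through the Pattern A value supervises only the last stage, while the Pattern B sum sends gradients to every stage's output.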