Reproduce the training result #30
Hello, I have recently been trying to reproduce the experiments. May I ask whether you ran your experiments directly with the model shared by the author? My copy of the file is corrupted, and because my background is weak I have not yet managed to reproduce the experiment. I would like to ask you some questions, and I am happy to pay a fee. I hope you can help me; thank you very much.
Did you align the images? And yes, you can load the pre-trained model (the .tar file) directly.
Thank you for your reply. The face images were aligned with RetinaFace before training, and the pretrained model was loaded directly with torch.load().
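For anyone else stuck on the loading step: a .pth.tar checkpoint is opened with torch.load(), and if it was saved from an nn.DataParallel model its keys may carry a "module." prefix that has to be stripped before load_state_dict(). The key-remapping part is pure Python and is sketched below; the checkpoint structure (a "state_dict" entry, a "module." prefix) is an assumption about this particular file, not something confirmed by the thread.

```python
def strip_prefix(state_dict, prefix="module."):
    """Remove a wrapper prefix (e.g. from nn.DataParallel) from checkpoint keys."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# In practice the dict would come from something like:
#   ckpt = torch.load("ijba_res18_native.pth.tar", map_location="cpu")
#   state_dict = ckpt.get("state_dict", ckpt)   # some checkpoints nest it
#   model.load_state_dict(strip_prefix(state_dict))
demo = {"module.conv1.weight": 0, "fc.bias": 1}   # toy stand-in for a state dict
print(strip_prefix(demo))  # {'conv1.weight': 0, 'fc.bias': 1}
```

If keys still mismatch after stripping, load_state_dict(..., strict=False) reports which keys are missing or unexpected, which is a quick way to diagnose a corrupted or mismatched checkpoint.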
Hi, thank you for your excellent work!
I am trying to reproduce the training results on the full FER_Plus dataset using the code (train_attention_rank_loss.py) and the pretrained resnet18 model (ijba_res18_native.pth.tar) you provided. I also use the fixed cropping strategy from your paper (full image + 5 regions). Aside from the learning rate, all of the training settings are unchanged.
However, I could not reach the accuracy reported in the paper. Most of the time, the validation and testing accuracy are only around 85% and 83%, respectively, even though the model fits the training set well.
Do you have any special preparation for setting up the training, or any particular hyperparameter configuration? Your suggestions would be highly appreciated!
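As a reference point for the "full image + 5 regions" fixed cropping strategy mentioned above, the sketch below generates six crop boxes from an image size. The specific region fractions here are illustrative placeholders only; the exact regions used in the paper are not stated in this thread.

```python
def fixed_crops(w, h):
    """Return six (left, top, right, bottom) boxes: the full image plus five
    fixed regions. The fractions are illustrative placeholders, not the
    paper's exact crop coordinates."""
    full = (0, 0, w, h)
    regions = [
        (0, 0, w // 2, h // 2),                    # top-left quadrant
        (w // 2, 0, w, h // 2),                    # top-right quadrant
        (0, h // 2, w // 2, h),                    # bottom-left quadrant
        (w // 2, h // 2, w, h),                    # bottom-right quadrant
        (w // 4, h // 4, 3 * w // 4, 3 * h // 4),  # center region
    ]
    return [full] + regions

# Each box can be fed to PIL's Image.crop(box) before resizing for the network.
for box in fixed_crops(96, 112):
    print(box)
```

If reproduction accuracy lags the paper by a couple of points, mismatched crop regions (or mismatched alignment before cropping) are a plausible culprit worth ruling out.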