About the results of this model #2
You can try the checkpoint I provided.
I tried the checkpoint you provided, and it achieves the result reported in the paper. But I reran the project several times and still can't reach the best model (more than a one percent drop). Would you recheck that the hyperparameter settings shown on GitHub are exactly the same as those used for the provided best model? Thanks.
The parameters are the same. The performance could be affected by the GPUs you used.
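One common source of the variance being discussed here is that fixing a random seed only pins down the random draws themselves; on GPU, nondeterministic kernels (e.g. in cuDNN) can still change results between runs. The point that a fixed seed makes sampling repeatable can be sketched with the standard library alone (the function name below is hypothetical, not from this repo; the actual project would additionally need `torch.manual_seed` and related settings):

```python
import random

def sample_with_seed(seed, n=5):
    # A private RNG seeded explicitly, so repeated calls with the same
    # seed reproduce the exact same sequence of draws.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed -> identical draws; different seeds -> (almost surely) different.
assert sample_with_seed(42) == sample_with_seed(42)
assert sample_with_seed(42) != sample_with_seed(7)
```

In a PyTorch training run, the analogous step would be seeding `random`, NumPy, and Torch together; even then, GPU-side nondeterminism can leave a small gap between runs, which is consistent with the one-percent drops reported in this thread.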
I was also unable to reproduce the result; I only get 54.3 on the MultiWOZ 2.1 dataset.
I get 55.04 on 2.1, heh.
Thanks for your interest. Please check the training records here: https://github.com/smartyfh/DST-STAR/blob/main/out-bert/exp/exp.txt |
@smartyfh |
The reported results, including the joint goal accuracy, are averaged over multiple random seeds.
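Averaging over seeds, as described above, can be sketched as follows. The seed values and scores here are purely illustrative placeholders (loosely echoing numbers mentioned in this thread), not results from the paper or the released checkpoint:

```python
from statistics import mean, stdev

# Hypothetical joint goal accuracy (%) from runs with different random seeds.
runs = {42: 55.79, 43: 55.04, 44: 54.30}

avg = mean(runs.values())      # the figure one would report
spread = stdev(runs.values())  # run-to-run variation across seeds

print(f"JGA: {avg:.2f} +/- {spread:.2f} over {len(runs)} seeds")
```

Reporting the mean with a spread makes it clearer whether a single-seed reproduction like 54.3 or 55.04 falls inside the expected seed-to-seed variation.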
I got 55.79% on MultiWOZ 2.1 with random seed 42, and I only changed batch_size to 8; other params are the same as the author's. I don't know why...
Hi, thank you for the feedback. Batch size can indeed affect model performance; it is good to know that the model works when setting the batch size to 8.
Hi, I reran this project with the same experimental settings, but the joint accuracy only reaches around 55% on the MultiWOZ 2.1 dataset. Is a specific version of transformers required, or something else?