A problem about training #1
Hello, I ran the training script for the BERT-BERT architecture from the examples, but the simplification results it produces are poor. Is this a program error?

Comments
For how many epochs did you train your model?
About 10-15 epochs. Strangely, the system cannot output meaningful sentences. I also tried the BERT-GPT2 architecture, which does better than BERT-BERT, but the SARI score is still not good.
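If it helps to rule out the metric itself, here is a minimal sketch of scoring outputs with SARI via the EASSE library (`pip install easse`). EASSE is not part of this repo, and the file names are hypothetical placeholders for your own test set and model outputs:

```python
# Minimal sketch: corpus-level SARI with EASSE.
# File paths below are hypothetical; point them at your own data.
from easse.sari import corpus_sari

orig_sents = open("test.src").read().splitlines()        # complex inputs
sys_sents = open("model_output.txt").read().splitlines() # model outputs
ref_sents = open("test.ref").read().splitlines()         # one reference per line

score = corpus_sari(
    orig_sents=orig_sents,
    sys_sents=sys_sents,
    refs_sents=[ref_sents],  # list of reference lists (one list per reference set)
)
print(f"SARI: {score:.2f}")
```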
If your SARI is not good, the output sentences won't make much sense. I got a good SARI in 10-15 epochs. Are you using the WikiLarge dataset or the smaller one given in the repo? I would also suggest you inspect the input sentences and labels coming out of the data generator to verify that they are correct.
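One cheap sanity check along those lines is to round-trip a few source/target pairs through the tokenizer and eyeball them. This sketch assumes a Hugging Face BERT tokenizer and hypothetical file names under `dataset/`; substitute whatever datagen.py actually reads:

```python
# Minimal sketch: verify that source/target pairs stay aligned and that
# tokenize/decode round-trips produce sensible text.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

with open("dataset/train.src") as fs, open("dataset/train.dst") as ft:
    for i, (src, tgt) in enumerate(zip(fs, ft)):
        if i >= 3:  # a handful of examples is enough to spot misalignment
            break
        ids = tokenizer.encode(src.strip(), max_length=64, truncation=True)
        print("SRC :", src.strip())
        print("DEC :", tokenizer.decode(ids, skip_special_tokens=True))
        print("TGT :", tgt.strip())
        print("---")
```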
Thanks for the help. I downloaded the code from GitHub and ran it following the instructions there. I set the number of epochs to 15 and used the training set in the dataset folder. I also checked the datagen.py and utils.py code, and it looks correct to me.
Check your loss: is it still decreasing at 15 epochs, or has it plateaued? If it is still decreasing, you can try training further.
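You can also let Keras detect the plateau instead of eyeballing it. A minimal sketch using standard Keras callbacks follows; `model`, `train_gen`, and `val_gen` are stand-ins for whatever the repo's training script builds:

```python
# Minimal sketch: monitor validation loss and react to plateaus.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

callbacks = [
    # Halve the learning rate when val_loss stalls for 2 epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    # Stop once val_loss has not improved for 5 epochs, keeping best weights.
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
]

# model/train_gen/val_gen assumed from the repo's training script.
history = model.fit(train_gen, validation_data=val_gen,
                    epochs=50, callbacks=callbacks)
print(history.history["loss"])  # per-epoch training loss
```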