Fine-tuning #5

Open
ArijRB opened this issue Mar 30, 2021 · 2 comments


ArijRB commented Mar 30, 2021

Hello,
Thank you for sharing your code.
Have you tried fine-tuning the BERT-like model during training?
If I understood correctly, you don't use the tags when using the BERT embeddings. Did you try using both?

Also, there are two minor changes needed for the char_lstm.py file to work:
At line 42, replace n_embed with n_word_embed, and return None together with embed to avoid changing the training code.
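Concretely, a minimal sketch of what the fixed spot could look like; everything here except `n_word_embed` and the two changed lines is assumed for illustration, not copied from the repo's char_lstm.py:

```python
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, n_chars, n_char_embed, n_word_embed):
        super().__init__()
        self.embed = nn.Embedding(n_chars, n_char_embed)
        # fix 1: size the LSTM from n_word_embed, not the undefined n_embed
        self.lstm = nn.LSTM(n_char_embed, n_word_embed // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, chars):
        # chars: [batch, max_word_len] character indices, one word per row
        x = self.embed(chars)
        _, (h, _) = self.lstm(x)
        # concatenate final forward/backward states -> [batch, n_word_embed]
        embed = torch.cat((h[0], h[1]), dim=-1)
        # fix 2: return None together with embed, so a training loop that
        # unpacks two values keeps working without modification
        return None, embed
```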

Thank you in advance.

@MustafaCeyhan

Hi ArijRB,

What exactly do you mean by "return None with embed to avoid changing the training code."?


LuceleneL commented Feb 9, 2024

I ran into the same issue while trying to run training.

I did fix the variable at line 42 of char_lstm.py, changing n_embed to n_word_embed, but I didn't understand what "return None with embed" means. Can anyone elaborate a little more on it?

Thanks in advance.
