Maximum Vocabulary Size #42
Comments
Yes, that might be the reason. But restricting based on frequency ends up being a lot more difficult to implement, since you have to rewrite all the examples: word IDs change when you remove a word.
I think we could take a pass over the dataset (only the lines used for training) to count the frequencies of the words, and then keep removing the least frequent words until we hit the vocabulary size. I think @chenb67 has already done it in the PR. We would not have to rewrite the examples if the dataset's vocabulary size is already equal to the target vocabulary size; otherwise we would have to!
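A minimal sketch of that counting pass, assuming a `trainingLines` table, a `tokenize` helper, and a `maxVocabSize` limit (all placeholders, not part of neuralconvo's actual API):

```lua
-- Count word frequencies over the training lines, keep only the
-- `maxVocabSize` most frequent words, and build a word -> ID mapping
-- so examples can be (re)encoded against the reduced vocabulary.
local counts = {}
for _, line in ipairs(trainingLines) do
  for _, word in ipairs(tokenize(line)) do
    counts[word] = (counts[word] or 0) + 1
  end
end

-- Sort words by decreasing frequency.
local words = {}
for word, _ in pairs(counts) do
  table.insert(words, word)
end
table.sort(words, function(a, b) return counts[a] > counts[b] end)

-- Assign IDs to the most frequent words; everything else will map to <unknown>.
local word2id = { ["<unknown>"] = 1 }
for i = 1, math.min(maxVocabSize, #words) do
  word2id[words[i]] = i + 1
end
```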
Hey guys, I have a fork that does that: TorchNeuralConvo. And that's basically how I did it (order the vocab by count and then replace). There are some tricks, though, if you want to stay within the LuaJIT memory limits while still loading huge files.
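One way to keep memory low during the counting pass, just as a sketch (the file path is a placeholder, and this is not necessarily how the TorchNeuralConvo fork handles the LuaJIT limit), is to stream the corpus line by line instead of loading it whole:

```lua
-- Stream the corpus with io.lines so the whole file is never held in memory.
local counts = {}
for line in io.lines("data/corpus.txt") do
  for word in line:gmatch("%S+") do
    counts[word] = (counts[word] or 0) + 1
  end
end
```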
Hi @macournoyer
We are currently replacing words with "unknown" once the number of unique words we have encountered reaches the vocab size.
I think we might get better results if we replace words on the basis of their frequency in the corpus rather than their order of occurrence: keep the most frequent words and replace the rarest ones with "unknown" until we are within the vocab size. What do you think?
This might be one reason for the inferior results when we restrict the vocabulary.
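A rough sketch of the proposed behavior at encoding time, assuming the frequency-based `word2id` mapping from the earlier sketch (the `encode` name is illustrative only): any word that did not make it into the vocabulary is replaced by the `<unknown>` token instead of being assigned a fresh ID.

```lua
-- Encode a sentence into word IDs, mapping out-of-vocabulary words to <unknown>.
local function encode(sentence)
  local ids = {}
  for word in sentence:gmatch("%S+") do
    table.insert(ids, word2id[word] or word2id["<unknown>"])
  end
  return ids
end
```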