Add support for pruning embeddings, where N embeddings are retained. Words for which embeddings are removed are mapped to their nearest neighbor.

This should provide more or less the same functionality as pruning in spaCy: https://spacy.io/api/vocab#prune_vectors

I encourage some investigation here. Some ideas:
1. The most basic version could simply retain the embeddings of the N most frequent words and map every remaining word to its nearest neighbor among the N retained embeddings.
2. Select the retained vectors such that the similarities to the pruned vectors are maximized. The challenge here is making this tractable.
3. An approach similar to quantization, where k-means clustering is performed with N clusters. The embedding matrix is then replaced by the cluster centroid matrix, and each word maps to the cluster it is in. (This could reuse the k-means implementation from reductive, which is already a dependency of finalfusion.)
I would focus on (1) and (3) first.
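A minimal sketch of option (1), assuming the rows of the embedding matrix are sorted by descending word frequency. The function names are illustrative, not part of finalfusion's API; it just uses ndarray directly:

```rust
use ndarray::{s, Array2, ArrayView1, Axis};

/// Illustrative only: keep the first `n` rows of `embeds` (assumed to be
/// sorted by descending word frequency) and map every pruned row to its
/// most similar retained row by cosine similarity. Returns the retained
/// matrix and a remapping table from old row index to new row index.
fn prune_embeddings(embeds: &Array2<f32>, n: usize) -> (Array2<f32>, Vec<usize>) {
    assert!(n > 0 && n <= embeds.nrows());

    let retained = embeds.slice(s![..n, ..]).to_owned();

    // Retained rows map to themselves.
    let mut remap: Vec<usize> = (0..n).collect();

    // Each pruned row maps to the nearest retained row.
    for row in embeds.slice(s![n.., ..]).axis_iter(Axis(0)) {
        let nearest = retained
            .axis_iter(Axis(0))
            .enumerate()
            .map(|(idx, cand)| (idx, cosine(row, cand)))
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .map(|(idx, _)| idx)
            .unwrap();
        remap.push(nearest);
    }

    (retained, remap)
}

fn cosine(a: ArrayView1<f32>, b: ArrayView1<f32>) -> f32 {
    a.dot(&b) / (a.dot(&a).sqrt() * b.dot(&b).sqrt())
}
```

Option (3) would have the same shape: replace `retained` with the centroid matrix produced by k-means (e.g. via reductive) and map each word to the centroid of its cluster.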
Benefits:
- Compresses the embedding matrix.
- Faster than quantized embedding matrices, because simple lookups are used.
- Could later be applied to @sebpuetz's non-hashed subword n-grams as well.
- Could perhaps be combined with quantization for even better compression.
Somewhat related: mapping all untrained subword embeddings to a NULL vector could also be done if we get some indirection for lookups (which would be introduced by all of the above options). The subword embeddings could be filtered by going through the vocabulary items, extracting their corresponding subword indices, and recording which indices never appear. Those that never appear could be mapped to the same vector (whatever that should be...) or removed without replacement.
In some cases this would massively reduce model size.
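A rough sketch of that filtering pass, with `subword_indices` standing in for however the subword vocab maps a word to its n-gram indices (an assumption for illustration, not the actual finalfusion API):

```rust
/// Illustrative only: mark which subword rows are ever hit by an
/// in-vocabulary word. `subword_indices` is a stand-in for whatever the
/// subword vocab exposes to map a word to its n-gram indices.
fn used_subword_rows(
    words: &[String],
    n_subword_rows: usize,
    subword_indices: impl Fn(&str) -> Vec<usize>,
) -> Vec<bool> {
    let mut used = vec![false; n_subword_rows];
    for word in words {
        for idx in subword_indices(word) {
            used[idx] = true;
        }
    }
    used
}

/// Map every unused subword row to a single shared index (for example a
/// NULL/zero row), so that the unused rows can be dropped from the matrix.
fn remap_unused(used: &[bool], null_index: usize) -> Vec<usize> {
    used.iter()
        .enumerate()
        .map(|(idx, &is_used)| if is_used { idx } else { null_index })
        .collect()
}
```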