Reduce memory usage on loading embedding from txt #191
The original implementation of `read_txt_embeddings` uses a lot of memory. For example, to load an embedding txt file with a vocabulary of 2,000,000 words and 300 embedding dimensions: the `vectors` list of float64 rows takes 8 bytes × 300 × 2,000,000 = 4.8 GB, `np.concatenate` allocates another 4.8 GB copy, and the float32 tensor built via `torch.from_numpy` takes a further 2.4 GB, around 12 GB in total. By knowing `vocab_size` in advance and preallocating a single array with dtype `np.float32`, the memory requirement can be reduced to around 2.4 GB instead of 12 GB.
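A minimal sketch of the idea (not the PR's actual diff): it assumes a fastText-style `.vec` file whose first line holds `<vocab_size> <dim>`, so the full matrix can be preallocated; the return signature here is illustrative.

```python
import numpy as np
import torch


def read_txt_embeddings(path):
    """Load fastText-style text embeddings into a single float32 matrix.

    The header line "<vocab_size> <dim>" lets us preallocate one float32
    array and fill it row by row, instead of accumulating per-word float64
    arrays and concatenating them at the end.
    """
    word2id = {}
    with open(path, "r", encoding="utf-8") as f:
        # fastText .vec files start with "<vocab_size> <dim>".
        vocab_size, dim = map(int, f.readline().split())
        # One allocation: 2,000,000 * 300 * 4 bytes ~= 2.4 GB, versus ~12 GB
        # for the list-of-float64-rows + np.concatenate + dtype conversion.
        vectors = np.empty((vocab_size, dim), dtype=np.float32)
        for i, line in enumerate(f):
            word, vect = line.rstrip().split(" ", 1)
            word2id[word] = i
            # Parse the row straight into float32 (no float64 intermediate).
            vectors[i] = np.array(vect.split(), dtype=np.float32)
    id2word = {i: w for w, i in word2id.items()}
    # torch.from_numpy shares the array's memory, so no extra copy is made.
    embeddings = torch.from_numpy(vectors)
    return id2word, word2id, embeddings
```

Since the array is already float32, the final `torch.from_numpy` call is essentially free: the resulting tensor shares the array's storage rather than allocating a converted copy.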