When indexing a big dataset, it is too easy to trigger a "too many open files" error. The error is thrown during indexing and is most likely caused by grenad and the milli extractors, which generate a lot of temporary files. The dataset I was using is 33M lines of JSON, about 14GiB, sent in a single batch. The error can also be triggered by changing the settings after having indexed a lot of documents, as this forces a re-indexation of the full dataset.
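As a side note, one way to spot the problem early is to look at the process's open-file limit before starting a large batch. Below is a minimal, hypothetical sketch (not part of milli) that reads RLIMIT_NOFILE via the `libc` crate; the 4096 threshold is an arbitrary assumption for illustration only.

```rust
use std::io;

/// Returns the (soft, hard) open-file limits of the current process.
fn open_file_limit() -> io::Result<(u64, u64)> {
    let mut lim = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    // SAFETY: `lim` is a valid, writable rlimit struct.
    let ret = unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut lim) };
    if ret != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok((lim.rlim_cur as u64, lim.rlim_max as u64))
}

fn main() -> io::Result<()> {
    let (soft, hard) = open_file_limit()?;
    println!("open file limit: soft = {soft}, hard = {hard}");
    // Hypothetical threshold: grenad can create many temporary sorter/merger
    // files per extractor, so a low soft limit is risky for big batches.
    if soft < 4096 {
        eprintln!("warning: soft limit ({soft}) may be too low for large indexing batches");
    }
    Ok(())
}
```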
We designed this new indexation system with @ManyTheFish to reduce the amount of RAM the engine uses and therefore reduce the number of crashes (killed by the OS) witnessed by our users. We did a good job, even though we can do better (see #3037).
I want to revisit the exploration of our extractors done by @loiclec in meilisearch/milli#656. This refactoring should bring more efficient RAM usage to the extractors without using too much memory, speed up the indexation process, and reduce the number of created files.
Regarding meilisearch/milli#656, part of the reason that I did not continue developing it (besides time) is that it would increase the amount of memory used during indexing (up to the defined limit).
In theory, it is not a problem as we still stay within the memory usage limit. But in practice, it reduces our margin of error. So we need to be very confident about memory management both inside the data extractors and in any code that can run in parallel with indexing (most importantly search queries, I guess).
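To make the concern concrete, here is a hedged sketch of the budgeting problem described above. The function names and numbers are hypothetical and not milli's API: the point is that if extractors are allowed to buffer more in RAM, each one's budget has to be derived from the single global limit, otherwise running N extractors in parallel multiplies the memory used and eats the margin left for concurrent work such as search.

```rust
/// Splits one global memory limit between parallel extractors,
/// keeping a reserve for code running alongside indexing.
fn per_extractor_budget(
    global_limit_bytes: usize,
    parallel_extractors: usize,
    search_reserve_bytes: usize,
) -> usize {
    let available = global_limit_bytes.saturating_sub(search_reserve_bytes);
    available / parallel_extractors.max(1)
}

fn main() {
    // Example: 2 GiB global limit, 8 extractor threads, 256 MiB reserved.
    let budget = per_extractor_budget(2 << 30, 8, 256 << 20);
    println!("each extractor may buffer up to {} MiB", budget >> 20);
}
```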