How to speed up insert process? #212
Comments
Try to use a GPU instead; the speed will go way up. The insert process of LightRAG is much faster than that of GraphRAG, based on my actual testing.
@JavieHush Can you elaborate on that a bit more?
Facing the same issue. Could you describe how to achieve this?
Guys :) I'm not quite sure about the situation you've encountered. My detailed situation is as follows.

**Suggestions**

The insert process depends heavily on the LLM/embedding models (it uses the LLM to extract entities & relations, and the embedding model to index). This requires a significant amount of computing resources. If you run the models locally, a GPU-accelerated setup is recommended; CPU-only inference will be much slower.

**About my situation**

We use a local Ollama service to power the framework, on a workstation with 8 × Tesla P100 GPUs.

**Evaluation**

I used a fake fairy tale (2k tokens, generated by GPT-4o, which means no LLM knows this story) to test LightRAG & GraphRAG. The insert process of LightRAG took 2–3 min, while it took more than 15 min for GraphRAG.
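For anyone trying to reproduce a GPU-backed local setup, here is a minimal sketch of wiring LightRAG to an Ollama service, following the helper names shown in the LightRAG README of that period (the model names, host, and embedding dimension below are assumptions, not values from this thread, and the helper names may differ across versions):

```python
# Hypothetical sketch: pointing LightRAG at a local Ollama service.
# ollama_model_complete / ollama_embedding / EmbeddingFunc follow the
# LightRAG README; model names and dimensions are assumptions.
from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

rag = LightRAG(
    working_dir="./mydir",
    llm_model_func=ollama_model_complete,        # chat completions via Ollama
    llm_model_name="qwen2.5:7b",                 # any model your GPU can hold
    llm_model_kwargs={"host": "http://localhost:11434"},
    embedding_func=EmbeddingFunc(
        embedding_dim=768,                       # must match the embed model
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)
```

Once Ollama itself is configured to use the GPU, both the extraction and embedding calls issued by `insert` benefit automatically.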
@JavieHush That is why I got confused: in my situation I am not running the LLM locally but rather using APIs, so I wondered what you meant by using a GPU.
btw, how long did it take you to finish the insert process? It should be much faster using an API than a local model service 🤔
@JavieHush I used a different document, which ended up with 3k entities. I used 6.1 million GPT-4o mini tokens and around 1 million embedding tokens (which is very cheap), so around $1 in total.
@JavieHush I'm running locally with Ollama; can you explain how to make use of the GPU while indexing?

```python
import os
import logging

import pdfplumber
from lightrag import LightRAG, QueryParam

######## Environment="OLLAMA_KEEP_ALIVE=-1"

WORKING_DIR = "./mydir"

logging.basicConfig(format="%(levelname)s:%(message)s", level=logging.INFO)

if not os.path.exists(WORKING_DIR):
    os.mkdir(WORKING_DIR)

rag = LightRAG(
    working_dir=WORKING_DIR,
    # ... LLM/embedding configuration ...
)

# Extract the full text of the PDF, then insert it.
pdf_path = "../CompaniesAct2013.pdf"
pdf_text = ""
with pdfplumber.open(pdf_path) as pdf:
    for page in pdf.pages:
        pdf_text += page.extract_text() or ""

rag.insert(pdf_text)

print(rag.query("What are the top themes in this story?", param=QueryParam(mode="naive")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="local")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="global")))
print(rag.query("What are the top themes in this story?", param=QueryParam(mode="hybrid")))
```
First of all, you must make sure your GPU supports accelerated model inference. Are you using an Nvidia card, or something else? GPU acceleration should be configured in the Ollama settings; please refer to "Run ollama with docker-compose and using gpu".
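For reference, the GPU part of such a Compose setup typically looks like the fragment below. This is a hedged sketch assuming an NVIDIA card and the NVIDIA Container Toolkit installed on the host, not a snippet from the linked guide:

```yaml
# Hypothetical docker-compose.yml fragment: exposing NVIDIA GPUs to Ollama.
# Requires the NVIDIA Container Toolkit on the host; adapt to your setup.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```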
I have been able to offload the insert process onto a free cloud service (streamlit.io) and provide insert, query, visualize, and download buttons on a LightRAG GUI. This does not exactly speed up insert, but it does offload compute in case you are constrained by local device resources.
Can we do parallel inserts to the RAG? Has anyone tried?
Actually, take a look at the Jina example; it inserts docs concurrently. However, I didn't investigate the entity race issue much. My suggestion is: don't set the concurrency too high, unless you know exactly what you're doing.
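As a sketch of what bounded-concurrency inserts can look like, assuming your LightRAG version exposes the async `ainsert` method (recent versions do); the semaphore bound of 4 is an arbitrary, deliberately low choice, per the advice above:

```python
# Sketch: insert several documents concurrently with a bounded semaphore.
# Assumes rag.ainsert exists (the async variant of insert). Keep the bound
# low: concurrent extraction can race on shared entities in the graph.
import asyncio

MAX_CONCURRENCY = 4  # arbitrary, deliberately low

async def insert_all(rag, docs: list[str]) -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)

    async def insert_one(doc: str) -> None:
        async with sem:
            await rag.ainsert(doc)

    await asyncio.gather(*(insert_one(d) for d in docs))

# usage: asyncio.run(insert_all(rag, ["first document ...", "second document ..."]))
```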
The insert process is quite slow for a small document. I tried to change the `llm_model_max_async` value, but the speed never changes. I also saw that the insert process is only using a single core of my CPU. Is there any way to speed up the process? Maybe by using multiple threads or processes?
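For context, `llm_model_max_async` caps the number of concurrent LLM calls during insert, so raising it only helps if your backend can actually serve that many requests in parallel; single-core CPU usage is expected, since insert is LLM-bound rather than CPU-bound. A minimal sketch (the value 8 is an arbitrary example, not a recommendation):

```python
# Sketch: raising the cap on concurrent LLM calls during insert.
# Only helps when the backend (API or local server) serves requests
# in parallel; insert is LLM-bound, not CPU-bound.
from lightrag import LightRAG

rag = LightRAG(
    working_dir="./mydir",
    llm_model_max_async=8,  # arbitrary example value
    # ... llm_model_func / embedding_func as configured elsewhere
)
```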