[Issue]: Failed to create final entities while using local vLLM Server and local Embedding Model. #1405
Comments
You may need to use a model with more parameters. I tried llama3.1-70b and it didn't work; deepseek-chat finally worked for me.
Well, that might work when using an API key, but we must use a local LLM and embedding model due to restrictions from the project we're working on.
Is this problem solved? |
It also failed for me with Qwen2.5 14B.
Is this problem solved?
Routing to #657 |
Do you need to file an issue?
Describe the issue
GraphRAG failed to create final entities while using a local LLM and a local embedding model.
The embedding model is served by a FastChat (fschat) server, and the LLM is Qwen2.5-7B, served by a vLLM OpenAI-compatible server.
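For reference, a setup like the one described (vLLM serving the chat model, FastChat serving embeddings, both behind OpenAI-compatible endpoints) would typically be wired into GraphRAG's `settings.yaml` roughly as follows. This is a hedged sketch, not the reporter's actual config: the ports, model names, and the dummy API key are assumptions, since local OpenAI-compatible servers usually ignore the key but GraphRAG still expects one.

```yaml
# Hypothetical settings.yaml fragment for local servers.
# api_base URLs and model names below are illustrative assumptions.
llm:
  type: openai_chat
  api_key: dummy-key                 # local servers typically ignore this
  model: Qwen2.5-7B-Instruct        # must match the model name vLLM serves
  api_base: http://localhost:8000/v1 # vLLM OpenAI-compatible endpoint

embeddings:
  llm:
    type: openai_embedding
    api_key: dummy-key
    model: your-embedding-model      # must match the FastChat-served model name
    api_base: http://localhost:8001/v1 # FastChat OpenAI-compatible endpoint
```

A common pitfall with this kind of setup is a mismatch between the `model` value in the config and the name the local server registers, which can surface downstream as failures in entity-creation steps rather than as an obvious connection error.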
Steps to reproduce
GraphRAG Config Used
Logs and screenshots
indexing-engine.log
Additional Information