I am trying to run the `lightrag_ollama_demo.py` file from the examples folder in the GitHub repository. I keep getting an error where Ollama hits an internal server error and stops midway through entity extraction. I have tried `llama3.2:1b`, `tinyllama`, `phi`, and `qwen2.5:0.5b` as LLMs, with `nomic-embed-text`, `mxbai-embed-large`, and `snowflake-arctic-embed:22m` as embedding models. I have tried different combinations of LLM and embedding model, but I get the same error with all of them. With Qwen it did work a few times, but other times I got the error again. I saw that others have hit this error too; some suggested changing `OLLAMA_KV_CACHE_TYPE` to `q8_0`, and others said the error was fixed by recent changes. I tried setting the KV cache type to `q8_0` with `launchctl setenv OLLAMA_KV_CACHE_TYPE q8_0` in my terminal, but even that didn't work. And I pulled all the recent changes only the day before yesterday, but I am still getting this error.
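One thing I am not sure about is whether the `launchctl setenv` value actually reaches the Ollama server process, since it only applies to processes launched after it is set. A way to sidestep `launchctl` entirely (assuming the server can be started from a terminal, and assuming KV cache quantization also needs flash attention enabled, which I haven't confirmed for my version) would be something like:

```sh
# Quit the Ollama menu-bar app first so port 11434 is free, then start the
# server from the shell with the variables set inline so they definitely
# reach it. OLLAMA_FLASH_ATTENTION=1 is an assumption on my part: KV cache
# quantization is described as requiring flash attention, so I set both.
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```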
Here is my Ollama log if that helps -