Replies: 2 comments
-
I'm having this exact same issue. My workaround was to downgrade to 0.1.73, at the cost of GGUF support. I'm currently running the chat example from the examples/low_level_api folder with python3 Chat.py.
The error occurred right before the model would normally load. Edit: this issue seems to affect at least GGUF and GGML models.
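For reference, a minimal sketch of pinning that older version, assuming a pip-based install rather than a source checkout:

pip install llama-cpp-python==0.1.73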
-
Pull request #680 should be the fix; it should be merged any time now.
-
Hello, please help me figure out the cause of this error with the low-level API. The high-level API works fine.
My steps:
git clone [email protected]:abetlen/llama-cpp-python.git
cd llama-cpp-python
git submodule update --init --recursive
python3 setup.py develop
…
Finished processing dependencies for llama-cpp-python==0.1.83
And I get this error:
llama-cpp-python > python3 examples/low_level_api/low_level_api_chat_cpp.py -m pathto/ggml-model-Q4_0.bin
seed = 1693947851
Traceback (most recent call last):
File "/Users/vadimmakarov/Documents/Work_home/llama/llama-cpp-python/examples/low_level_api/low_level_api_chat_cpp.py", line 567, in
with LLaMAInteract(params) as m:
File "/Users/vadimmakarov/Documents/Work_home/llama/llama-cpp-python/examples/low_level_api/low_level_api_chat_cpp.py", line 69, in init
self.ctx = llama_cpp.llama_init_from_file(self.params.model.encode("utf8"), self.lparams)
AttributeError: module 'llama_cpp' has no attribute 'llama_init_from_file'
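For context, a likely cause (this is an assumption based on llama.cpp API changes around this time): llama_init_from_file was removed upstream, and the bindings replaced it with a two-step load. A minimal sketch of the newer calls, assuming the post-removal API exposed by the llama_cpp module of that era:

import llama_cpp

# Sketch, assuming the two-step API that replaced llama_init_from_file:
# load the model first, then create a context from it.
lparams = llama_cpp.llama_context_default_params()
model = llama_cpp.llama_load_model_from_file(b"pathto/ggml-model-Q4_0.bin", lparams)
ctx = llama_cpp.llama_new_context_with_model(model, lparams)

# ... use ctx, then free in reverse order of creation:
llama_cpp.llama_free(ctx)
llama_cpp.llama_free_model(model)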
P.S.
directory state:
llama-cpp-python > git show --summary
commit 1833726 (HEAD -> main, origin/main, origin/HEAD)
Merge: 186626d bf08d1b
Author: Andrei Betlen [email protected]
Date: Fri Sep 1 14:26:16 2023 -0400
llama-cpp-python > git submodule status
69fdbb9 vendor/llama.cpp (b1147-1-g69fdbb9)