How to build llama-cpp-python with Metal support? #339
-
I follow this: https://github.com/abetlen/llama-cpp-python#development
However, when loading the model in text-generation-webui with n-gpu-layers set to 1, I get the following errors:
Any idea what is wrong and how to fix it?
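For context, here is a minimal sketch of the Metal build that the project's README documented around this version. This assumes `CMAKE_ARGS="-DLLAMA_METAL=on"` and `FORCE_CMAKE=1` are still the supported switches for your release; check the README linked above if the build system has changed.

```shell
# Hedged sketch: rebuild llama-cpp-python from source with Metal enabled.
# CMAKE_ARGS / FORCE_CMAKE come from the llama-cpp-python README of this era;
# --no-cache-dir avoids pip reusing a previously built (CPU-only) wheel.
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --no-cache-dir llama-cpp-python
```

If the Metal backend is compiled in, loading a model with a nonzero `n_gpu_layers` should print Metal initialization messages in the llama.cpp startup log.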
-
ok, found this topic: #317
-
OK, I officially give up... I tried every possible permutation and cannot get llama-cpp-python (v0.1.59) to build with or without GPU support on macOS (M2). Here are my test reports:
I can, however, get llama-cpp-python (v0.1.59) to install via standard pip, albeit without Metal GPU support. I used a fresh conda env with Python 3.9.16, tried placing ggml-metal.metal next to the python executable, etc.