Current state of Llama vs. GPT4All on an M1 Mac in terms of speed? #327
Unanswered · gavtography asked this question in Q&A · 1 comment
Just looking for the fastest way to run an LLM on an M1 Mac with Python bindings; honest opinions welcome.
I've already set up my program with GPT4All, but I've heard others say there are faster ways on an M1 Mac.
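One way to answer a speed question like this empirically is to time each backend yourself. Below is a minimal, hedged sketch of a throughput harness: it assumes only that whatever binding you test (GPT4All, llama.cpp's Python bindings, etc.) can be wrapped in a callable that takes a prompt and returns generated tokens. The `fake_generate` stub is purely illustrative and stands in for a real binding's generate call.

```python
import time

def tokens_per_second(generate, prompt, n_runs=3):
    """Rough throughput estimate: average tokens/sec over n_runs calls.

    `generate` is any callable that accepts a prompt string and returns
    a sequence of tokens. Wrapping each binding this way (an assumption,
    not a specific API) lets you compare them on identical prompts.
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Hypothetical stand-in backend; swap in a real binding's generate call.
def fake_generate(prompt):
    time.sleep(0.01)          # simulate inference latency
    return prompt.split() * 10

rate = tokens_per_second(fake_generate, "hello world")
print(f"{rate:.1f} tokens/sec")
```

Running the same harness against each candidate binding, with the same model and prompt, gives a like-for-like number rather than anecdotes.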