Local LLM #71
Comments
Also wondering.
I have confirmed that models from Hugging Face currently cannot be used. See the following code: zerox/py_zerox/pyzerox/models/modellitellm.py, lines 63 to 66 at aa3d881. It can be seen there that if you are using a Hugging Face model, you will hit that code path, so apparently all Hugging Face models will cause an error.
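For context, a minimal sketch of the kind of model validation that produces this failure, assuming it is a vision-capability lookup through litellm (the function name and exception here are placeholders, not the exact contents of modellitellm.py):

```python
import litellm

def validate_vision_model(model: str) -> None:
    # Sketch only: ask litellm whether the model ID is a known vision
    # model. Most Hugging Face model IDs are absent from litellm's
    # capability map, so the lookup returns False and the model is
    # rejected before any inference happens.
    if not litellm.supports_vision(model=model):
        raise ValueError(
            f"{model} is not recognized as a vision-capable model by litellm"
        )
```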
It seems that the only option for running a local model is Ollama, which has been shown to work; see #106.

Edited: Sorry for mentioning that. I had not noticed that the results in #106 are complete nonsense, and I can now reproduce the same nonsense results with that model. I have decided to give up on this package; it is difficult to get working. I have succeeded using
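For anyone trying the Ollama route, this is roughly how a local model can be passed to py-zerox (a minimal sketch; the model tag, endpoint, and file path are placeholder assumptions, and the output-quality problem reported above still applies):

```python
import asyncio
from pyzerox import zerox

async def main():
    # Point py-zerox at a local Ollama vision model via litellm's
    # "ollama/" provider prefix. Adjust the model tag, api_base, and
    # file path to match your local setup.
    result = await zerox(
        file_path="document.pdf",
        model="ollama/llama3.2-vision",
        api_base="http://localhost:11434",  # forwarded to litellm
    )
    print(result)

asyncio.run(main())
```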
Is it possible to use local models, or are there any plans for that? For example, using models from Hugging Face like meta-llama/Llama-3.2-11B-Vision.
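For reference, the kind of call being asked about would look roughly like this through litellm's Hugging Face provider prefix (an illustrative sketch only; whether this particular model works that way is not confirmed, and the comments above report that py-zerox's model validation currently rejects such IDs):

```python
import litellm

# Illustrative only: route a Hugging Face-hosted vision model through
# litellm's "huggingface/" provider prefix (typically requires a
# Hugging Face API token in the environment).
response = litellm.completion(
    model="huggingface/meta-llama/Llama-3.2-11B-Vision",
    messages=[{"role": "user", "content": "Describe this page."}],
)
print(response.choices[0].message.content)
```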