Update readme and add application notes #168 #178
Conversation
Okay, revised the readmes based on your suggestion. Also spent some time studying how the model naming convention currently works in the field and how it's defined in llama.cpp. There are likely still issues with the "Llamafile Naming Convention" section, but everything else should hopefully be addressed now.
If we settle on …, then at least according to https://github.com/ggerganov/ggml/blob/master/docs/gguf.md you can get the ggml_type (e.g. …).
Balloob, founder of Home Assistant, on what he would require an LLM container to do:
Is this achievable by adding key-values to the GGUF? And maybe accessible via something like …
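For context, reading key-values back out of a GGUF is straightforward with the `gguf` Python package that ships alongside llama.cpp. A minimal sketch; any llamafile-specific key names an application would look for here are hypothetical and would still need to be agreed upon:

```python
# Minimal sketch: enumerate the metadata key-values stored in a GGUF
# file using the `gguf` package from llama.cpp (pip install gguf).
# "model.gguf" is a placeholder path.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")

# reader.fields maps each key name to a ReaderField describing its
# type(s) and raw data, so an application can discover custom keys.
for name, field in reader.fields.items():
    print(name, field.types)
```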
Having a recommended way to easily constrain output to JSON would help in the application notes.
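As one possible shape for such a recommendation: the HTTP server that llamafile inherits from llama.cpp accepts a GBNF grammar string in its /completion payload, which can be used to force JSON-shaped output. A rough sketch, assuming a llamafile is already serving on the default localhost:8080 and using a deliberately tiny toy grammar:

```python
# Sketch: constrain llamafile server output with a GBNF grammar.
# Assumes a llamafile is already running its built-in server on
# localhost:8080 (the llama.cpp default). The grammar is a toy example.
import json
import urllib.request

# Minimal GBNF grammar forcing a {"answer": "..."} JSON object.
grammar = r'''
root ::= "{" ws "\"answer\"" ws ":" ws string ws "}"
string ::= "\"" [^"]* "\""
ws ::= [ \t\n]*
'''

payload = json.dumps({
    "prompt": "Is the sky blue? Reply as JSON.",
    "n_predict": 64,
    "grammar": grammar,
}).encode()

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```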
Force-pushed from 8f310c4 to 131432e
Just did a rebase to keep this PR up to date with main.
While rebasing ggerganov/llama.cpp#4858, I decided to review my naming convention proposal and noticed that Mixtral has a new naming approach for their models, like …. I've added the new addition to both the llama.cpp default filename PR and also updated the readme notes in this repo's PR.
Force-pushed from 622924c to 9cf7363
ggerganov/llama.cpp#7165 is now merged in, so …
Added recommended path convention for installation as well as application notes. This commit is based on jart's recommendation regarding llamafile convention. This is her quote that this is based on:

> I want to enable people to integrate with llamafile any way they like. In terms of recommendations and guidance, I've been following TheBloke's naming convention when publishing llamafiles to Hugging Face https://huggingface.co/jartine I also always use the llamafile tag. So what I'd recommend applications do, is iterate all the files tagged llamafile on Hugging Face to present those as choices to the user for LLMs. Be sure to display which user is publishing them, and sort by heart count. Then, when you download them, feel free to put them in ~/.llamafile. Then, to show the users which models are installed, you just look for ~/.llamafile/*.llamafile.
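As a rough sketch of that flow using the huggingface_hub client (the tag filter, sort field, and install directory below follow the quote above, but treat the exact parameters as assumptions rather than a settled convention):

```python
# Sketch of the recommended flow: list Hugging Face models tagged
# "llamafile", most-hearted first, then show what is already installed
# under ~/.llamafile.
from pathlib import Path

from huggingface_hub import HfApi

api = HfApi()
# Models tagged "llamafile", sorted by likes; display the publisher
# (the part of the id before the slash) alongside the heart count.
for model in api.list_models(filter="llamafile", sort="likes", direction=-1, limit=10):
    print(f"{model.id}  (likes: {model.likes})")

# Installed models: anything matching ~/.llamafile/*.llamafile
install_dir = Path.home() / ".llamafile"
for f in sorted(install_dir.glob("*.llamafile")):
    print("installed:", f.name)
```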
Force-pushed from 9503aea to 3206f27
Rebased to be on top of the latest changes and squashed all the other fixup commits. Did another review to make sure the doc matches the now-merged change to llama.cpp's convert.py.
Use https://github.com/ggerganov/ggml/blob/master/docs/gguf.md#gguf-naming-convention as the canonical reference.
Updated to use https://github.com/ggerganov/ggml/blob/master/docs/gguf.md#gguf-naming-convention as the canonical reference for the llamafile filename convention. On a side note, what generates …?
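To make the convention concrete, here is an illustrative parser assuming a `<Model>-<Version>-<ExpertsCount>x<Parameters>-<EncodingScheme>` filename shape such as `Mixtral-v0.1-8x7B-Q2_K.gguf`; the linked gguf.md section remains the authoritative definition, so treat this pattern as a sketch only:

```python
# Sketch: split a GGUF/llamafile filename into naming-convention fields.
# The pattern is an assumption based on the shape discussed above;
# consult the linked gguf.md section for the authoritative grammar.
import re

PATTERN = re.compile(
    r"^(?P<model>.+?)"
    r"-(?P<version>v\d+(?:\.\d+)*)"
    r"-(?:(?P<experts>\d+)x)?(?P<parameters>\d+(?:\.\d+)?[KMBT])"
    r"-(?P<encoding>[A-Za-z0-9_]+)"
    r"\.(?:gguf|llamafile)$"
)

m = PATTERN.match("Mixtral-v0.1-8x7B-Q2_K.gguf")
if m:
    print(m.groupdict())
    # {'model': 'Mixtral', 'version': 'v0.1', 'experts': '8',
    #  'parameters': '7B', 'encoding': 'Q2_K'}
```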
Issue Ticket: #168