Update to b4273, fix segfault on Metal Macs #12
Conversation
LGTM, with a suggestion for the build number.
One question for a later build: do you think we could take advantage of GGML_CPU_ALL_VARIANTS?
  {% set gguf_version = "0.10.0"%}
- {% set build_number = 0 %}
+ {% set build_number = 1 %}
I don't see 4273 on ai-staging; it seems this could be reset to 0.
Yeah, I incremented it to fix Prefect; fixing.
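For context on the reset: conda-build treats build_number as a counter for rebuilds of the same version, so it goes back to 0 whenever the version itself changes. A minimal sketch of the pattern (illustrative meta.yaml, not the actual recipe in this repo):

```yaml
# Sketch of the conda recipe convention under discussion (names illustrative).
{% set version = "0.0.4273" %}
{% set build_number = 0 %}   # reset to 0 on every version bump; increment only to rebuild the same version

package:
  name: llama.cpp
  version: {{ version }}

build:
  number: {{ build_number }}
```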
@cbouss I haven't dug into the GGML_CPU_ALL_VARIANTS stuff yet; I'll do that after this update.
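For reference, GGML_CPU_ALL_VARIANTS is a llama.cpp CMake option that builds several CPU backend variants (e.g. different x86-64 instruction-set levels) and picks the best one at runtime; it requires the backends to be built as dynamically loadable libraries via GGML_BACKEND_DL. A sketch of what enabling it could look like in a build script, assuming a llama.cpp revision where both flags are available:

```bash
# Sketch only: build llama.cpp with runtime-dispatched CPU variants.
# GGML_BACKEND_DL=ON builds backends as loadable shared libraries, which
# GGML_CPU_ALL_VARIANTS=ON needs in order to select a CPU variant at runtime.
cmake -B build \
    -DGGML_BACKEND_DL=ON \
    -DGGML_CPU_ALL_VARIANTS=ON \
    -DCMAKE_BUILD_TYPE=Release
cmake --build build --parallel
```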
llama.cpp 0.0.4273
gguf 0.10.0
llama.cpp-tools 0.0.4273
Destination channel: ai-staging