forked from ggerganov/llama.cpp
Commit: Merge branch 'master' into test-build
Showing 31 changed files with 628 additions and 443 deletions.
This file was deleted.
@@ -0,0 +1,73 @@
name: Bug (compilation)
description: Something goes wrong when trying to compile llama.cpp.
title: "Compile bug: "
labels: ["bug-unconfirmed", "compilation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the compilation of llama.cpp fails.
        Before opening an issue, please confirm that the compilation still fails with `-DGGML_CCACHE=OFF`.
        If the compilation succeeds with ccache disabled, you should be able to fix the issue permanently
        by clearing `~/.cache/ccache` (on Linux).
  - type: textarea
    id: commit
    attributes:
      label: Git commit
      description: Which commit are you trying to compile?
      placeholder: |
        $ git rev-parse HEAD
        84a07a17b1b08cf2b9747c633a2372782848a27f
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific compile flags, we would very much appreciate that information.
      placeholder: >
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present in an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
@@ -0,0 +1,98 @@
name: Bug (model use)
description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
title: "Eval bug: "
labels: ["bug-unconfirmed", "model evaluation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the model evaluation results
        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
        The `llama-cli` binary can be used for simple and reproducible model inference.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $ ./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: hardware
    attributes:
      label: Hardware
      description: Which CPUs/GPUs are you using?
      placeholder: >
        e.g. Ryzen 5950X + 2x RTX 4090
    validations:
      required: true
  - type: textarea
    id: model
    attributes:
      label: Model
      description: >
        Which model at which quantization were you using when you encountered the bug?
        If you downloaded a GGUF file from Hugging Face, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
    validations:
      required: false
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
        we would very much appreciate that information.
      placeholder: >
        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
        When I use -ngl 0 it works correctly.
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present in an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
@@ -0,0 +1,78 @@
name: Bug (misc.)
description: Something is not working the way it should (and it's not covered by any of the above cases).
title: "Misc. bug: "
labels: ["bug-unconfirmed"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for miscellaneous bugs that don't fit into any other category.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $ ./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: module
    attributes:
      label: Which llama.cpp modules do you know to be affected?
      multiple: true
      options:
        - libllama (core library)
        - llama-cli
        - llama-server
        - llama-bench
        - llama-quantize
        - Python/Bash scripts
        - Other (Please specify in the next section)
    validations:
      required: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present in an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
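The three templates added in this commit all follow the same GitHub issue-forms schema: top-level `name`/`description`/`title`/`labels` keys plus a `body` list of typed blocks, where each block carries `attributes` and optional `validations`. The sketch below illustrates that shared structure with a minimal, hand-written validator; it is not part of this commit, GitHub performs its own validation, and the trimmed dict stands in for the parsed YAML of the "Bug (misc.)" template.

```python
# Illustrative structural check for GitHub issue-form templates.
# Hypothetical helper, not part of llama.cpp; the dict below is a
# trimmed, hand-parsed version of the "Bug (misc.)" template above.
template = {
    "name": "Bug (misc.)",
    "labels": ["bug-unconfirmed"],
    "body": [
        {"type": "markdown",
         "attributes": {"value": "Thanks for taking the time to fill out this bug report!"}},
        {"type": "textarea",
         "id": "version",
         "attributes": {"label": "Name and Version"},
         "validations": {"required": True}},
        {"type": "dropdown",
         "id": "operating-system",
         "attributes": {"label": "Which operating systems do you know to be affected?",
                        "multiple": True,
                        "options": ["Linux", "Mac", "Windows", "BSD"]},
         "validations": {"required": True}},
    ],
}

# Block types accepted by GitHub's issue-forms schema.
KNOWN_TYPES = {"markdown", "textarea", "input", "dropdown", "checkboxes"}

def check_template(tpl):
    """Return a list of problems; an empty list means the structure looks valid."""
    problems = []
    for key in ("name", "body"):
        if key not in tpl:
            problems.append(f"missing top-level key {key!r}")
    for i, block in enumerate(tpl.get("body", [])):
        if block.get("type") not in KNOWN_TYPES:
            problems.append(f"body[{i}]: unknown type {block.get('type')!r}")
        if "attributes" not in block:
            problems.append(f"body[{i}]: missing 'attributes'")
        if block.get("type") == "dropdown" and not block.get("attributes", {}).get("options"):
            problems.append(f"body[{i}]: dropdown needs a non-empty 'options' list")
    return problems

print(check_template(template))  # → []
```

Note how the check encodes the one constraint all three templates respect: every `dropdown` (operating systems, backends, modules) supplies a non-empty `options` list, while `validations.required` remains optional per block.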
This file was deleted.
.github/ISSUE_TEMPLATE/05-enhancement.yml → .github/ISSUE_TEMPLATE/020-enhancement.yml (2 changes: 1 addition & 1 deletion)