Improve progress bar
Set the default width to the terminal width. Also fixed a small bug around the
default n_gpu_layers value.

Signed-off-by: Eric Curtin <[email protected]>
ericcurtin committed Dec 17, 2024
1 parent 08ea539 commit 89a2e35
Showing 3 changed files with 281 additions and 129 deletions.
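
The commit sizes the progress bar to the current terminal width by default. Below is a minimal sketch of how terminal width can be queried on POSIX systems; the use of TIOCGWINSZ, the 80-column fallback, and the function name `get_terminal_width` are assumptions for illustration, not the code from this commit.

```cpp
// Illustrative sketch only -- not the code from this commit.
// Query the terminal width via TIOCGWINSZ, falling back to 80 columns
// when stdout is not a terminal (e.g. output is piped to a file).
#include <sys/ioctl.h>
#include <unistd.h>

#include <cstdio>

static int get_terminal_width() {
    winsize ws{};
    if (isatty(STDOUT_FILENO) && ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0) {
        return ws.ws_col;
    }
    return 80; // assumed fallback width when no terminal is attached
}

int main() {
    // A progress bar would size itself to this value instead of a hard-coded width.
    std::printf("terminal width: %d columns\n", get_terminal_width());
    return 0;
}
```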
4 changes: 2 additions & 2 deletions README.md
@@ -411,7 +411,7 @@ To learn more about model quantization, [read this documentation](examples/quant

</details>

-[^1]: [examples/perplexity/README.md](examples/perplexity/README.md)
+[^1]: [examples/perplexity/README.md](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md)
[^2]: [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity)

## [`llama-bench`](example/bench)
@@ -448,7 +448,7 @@ To learn more about model quantization, [read this documentation](examples/quant
</details>
-[^3]: [https://github.com/containers/ramalama](RamaLama)
+[^3]: [RamaLama](https://github.com/containers/ramalama)
## [`llama-simple`](examples/simple)
12 changes: 7 additions & 5 deletions examples/run/README.md
@@ -4,7 +4,7 @@ The purpose of this example is to demonstrate a minimal usage of llama.cpp for r

```bash
llama-run granite-code
-...
+```

```bash
llama-run -h
@@ -18,7 +18,9 @@ Options:
-c, --context-size <value>
Context size (default: 2048)
-n, --ngl <value>
-Number of GPU layers (default: 0)
+Number of GPU layers (default: 999)
+-v, --verbose, --log-verbose
+Set verbosity level to infinity (i.e. log all messages, useful for debugging)
-h, --help
Show help message

@@ -42,6 +44,6 @@ Examples:
llama-run https://example.com/some-file1.gguf
llama-run some-file2.gguf
llama-run file://some-file3.gguf
-llama-run --ngl 99 some-file4.gguf
-llama-run --ngl 99 some-file5.gguf Hello World
-...
+llama-run --ngl 999 some-file4.gguf
+llama-run --ngl 999 some-file5.gguf Hello World
+```
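
In the help text above, the --ngl default changes from 0 to 999, which in practice means "offload every layer the model has to the GPU" unless the user passes an explicit value. A minimal sketch of how such a default might be wired into option parsing follows; the `RunOptions` and `parse_args` names are hypothetical and this is not the code from this commit.

```cpp
// Illustrative sketch only -- hypothetical option parsing, not this commit's code.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

struct RunOptions {
    // 999 exceeds the layer count of current models, so the effective behaviour
    // is "offload every layer to the GPU" unless the user overrides it.
    int n_gpu_layers = 999;
    std::string model;
};

static RunOptions parse_args(int argc, char ** argv) {
    RunOptions opts;
    for (int i = 1; i < argc; ++i) {
        if ((std::strcmp(argv[i], "-n") == 0 || std::strcmp(argv[i], "--ngl") == 0) && i + 1 < argc) {
            opts.n_gpu_layers = std::atoi(argv[++i]); // an explicit value wins over the default
        } else {
            opts.model = argv[i]; // anything else is treated as the model argument
        }
    }
    return opts;
}

int main(int argc, char ** argv) {
    RunOptions opts = parse_args(argc, argv);
    std::printf("n_gpu_layers=%d model=%s\n", opts.n_gpu_layers, opts.model.c_str());
    return 0;
}
```

With a default like this, a user who wants CPU-only inference can still pass --ngl 0 explicitly.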