
Enable use of the rebar feature to upload buffers to the device. #9251

Merged 1 commit into ggerganov:master on Sep 28, 2024

Conversation

mtavenrath (Contributor)

Instead of copying host -> host staging -> device, one can use the ReBAR (Resizable BAR) feature to copy host -> device directly, skipping the extra latency and the second memcpy, which triples memory bandwidth consumption.
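To illustrate the idea (a minimal sketch, not the PR's actual code): a ReBAR-enabled GPU exposes a memory type that is both DEVICE_LOCAL and HOST_VISIBLE, so the host can map it and write tensor data over the BAR with a single memcpy, with no staging buffer and no vkCmdCopyBuffer:

```cpp
// Sketch of the ReBAR upload path (illustrative, not the PR's code).
#include <vulkan/vulkan.h>
#include <cstring>

// Find a memory type that is both device-local and host-visible (the ReBAR
// heap). Returns UINT32_MAX if the device does not expose one.
uint32_t find_rebar_memory_type(VkPhysicalDevice phys, uint32_t type_bits) {
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys, &props);
    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        if ((type_bits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & wanted) == wanted) {
            return i;
        }
    }
    return UINT32_MAX;
}

// Upload with one memcpy host -> device, instead of
// host -> staging (memcpy) -> device (vkCmdCopyBuffer).
void upload_direct(VkDevice dev, VkDeviceMemory rebar_mem,
                   VkDeviceSize offset, const void * src, size_t size) {
    void * dst = nullptr;
    vkMapMemory(dev, rebar_mem, offset, size, 0, &dst);
    memcpy(dst, src, size);          // writes go straight over the BAR
    vkUnmapMemory(dev, rebar_mem);   // coherent memory: no explicit flush
}
```

Without ReBAR, the host-visible device-local heap is typically only 256 MiB, so the staging path remains the fallback; in practice the mapping would also be kept persistent rather than re-mapped per upload.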

github-actions bot added the Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Aug 30, 2024
@mtavenrath (Contributor, Author)

Benchmark with Mistral-Nemo-Instruct-2407.Q5_K.gguf

master (baseline)

| model                          |       size |     params | backend    | ngl |          nkvo | mmap |          test |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------------: | ---: | ------------: | ---------------: |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    431.11 ± 3.66 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    438.10 ± 0.52 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    434.48 ± 1.57 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     30.55 ± 0.75 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     31.73 ± 0.11 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     31.62 ± 0.13 |

rebar (this PR): tg128 improves from ~31.3 to ~34.8 t/s (about 11%); pp512 is unchanged within noise

| model                          |       size |     params | backend    | ngl |          nkvo | mmap |          test |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------------: | ---: | ------------: | ---------------: |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    435.98 ± 1.63 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    433.19 ± 1.01 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         pp512 |    435.82 ± 0.89 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     35.07 ± 0.14 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     34.89 ± 0.20 |
| llama 13B Q5_K - Medium        |   8.12 GiB |    12.25 B | Vulkan     | 100 |             1 |    0 |         tg128 |     34.36 ± 0.19 |

mofosyne added the Review Complexity: Low label (trivial changes to code that most beginner devs, or those who want a break, can tackle, e.g. a UI fix) on Aug 30, 2024
0cc4m assigned and unassigned themselves on Sep 8, 2024
0cc4m self-requested a review on September 8, 2024
0cc4m (Collaborator) left a comment:

@mtavenrath I'm sorry that it took me so long to get to the review.

Thank you for the contribution; this makes a significant difference in specific cases. Looks good to me.

0cc4m merged commit 89f9944 into ggerganov:master on Sep 28, 2024. 52 checks passed.
matiaslin pushed a commit to matiaslin/llama.cpp that referenced this pull request Sep 28, 2024
@mtavenrath (Contributor, Author)

@0cc4m No worries, this was just the prelude to my current work with Win32 I/O rings to get amazingly fast load times. On Windows I've been able to read ~45 GB/s from 4x NVMe with a Win32 LVM RAID into CPU memory.

The open question is whether reading into GPU memory exposed through ReBAR achieves similar read performance. If so, the next question is how we can expose the CPU pointers of the Vulkan ReBAR tensors to llama.cpp. If that is not possible, the question that arises next is whether ggml should support file I/O itself to hit the fastest possible path.
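For reference, a rough sketch of that read path, assuming the documented Windows 11 I/O ring API in ioringapi.h (queue sizes and error handling are illustrative; the destination could be a pointer returned by vkMapMemory on a ReBAR allocation):

```cpp
// Sketch: queue a direct file read into a caller-provided buffer using the
// Windows 11 I/O ring API. Illustrative only; a real loader keeps many
// reads in flight to saturate the drives.
#include <windows.h>
#include <ioringapi.h>

bool read_into_buffer(HANDLE file, void * dst, UINT32 size, UINT64 file_offset) {
    HIORING ring = nullptr;
    IORING_CREATE_FLAGS flags = { IORING_CREATE_REQUIRED_FLAGS_NONE,
                                  IORING_CREATE_ADVISORY_FLAGS_NONE };
    if (FAILED(CreateIoRing(IORING_VERSION_3, flags, 8, 16, &ring))) {
        return false;
    }
    // Queue one read; dst may point into a mapped ReBAR allocation.
    if (FAILED(BuildIoRingReadFile(ring,
                                   IoRingHandleRefFromHandle(file),
                                   IoRingBufferRefFromPointer(dst),
                                   size, file_offset,
                                   /*userData=*/0, IOSQE_FLAGS_NONE))) {
        CloseIoRing(ring);
        return false;
    }
    UINT32 submitted = 0;
    // Submit and block until the single completion arrives.
    HRESULT hr = SubmitIoRing(ring, /*waitOperations=*/1, INFINITE, &submitted);
    IORING_CQE cqe;
    bool ok = SUCCEEDED(hr) && SUCCEEDED(PopIoRingCompletion(ring, &cqe))
              && SUCCEEDED(cqe.ResultCode);
    CloseIoRing(ring);
    return ok;
}
```

To approach the quoted ~45 GB/s from 4x NVMe, the file would additionally be opened with FILE_FLAG_NO_BUFFERING and many reads kept in flight; this sketch submits a single read and waits for it.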

@slaren (Collaborator) commented Sep 30, 2024:

You could probably make the buffer type a host buffer (return true from the is_host function of the buffer type interface) and set tensor::data to the CPU pointer. llama.cpp can load data directly into the tensor if it is allocated in a host buffer, and it also makes the tensor data directly accessible to the CPU backend without intermediate copies (but that's not always faster). You would probably need to make other changes to the Vulkan backend to find the device pointer where it is needed (which you can probably calculate from the CPU pointer).
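A rough sketch of that suggestion (the names below are illustrative, not the actual ggml-backend interface): the buffer keeps both the mapped CPU base and the Vulkan allocation, reports itself as a host buffer, and recovers a tensor's device offset from its CPU pointer:

```cpp
// Illustrative sketch of slaren's suggestion; the real ggml-backend
// interface differs, and all names here are hypothetical.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstddef>

struct rebar_buffer {
    void *         cpu_base;   // pointer returned by vkMapMemory
    VkDeviceMemory device_mem; // the ReBAR allocation backing the buffer
    size_t         size;
};

// Reported through the buffer type's is_host callback: tensor data can then
// be a CPU pointer inside the mapping, so llama.cpp loads file data straight
// into it and the CPU backend can touch it without intermediate copies.
bool rebar_buffer_is_host(const rebar_buffer *) { return true; }

// Where the Vulkan backend needs a (memory, offset) pair for a tensor,
// recover the offset from the CPU pointer stored in the tensor.
VkDeviceSize rebar_device_offset(const rebar_buffer * buf, const void * tensor_data) {
    return (VkDeviceSize)((const uint8_t *) tensor_data -
                          (const uint8_t *) buf->cpu_base);
}
```

One caveat, likely behind the "not always faster" remark above: CPU reads through a write-combined BAR mapping are much slower than writes, so exposing the pointer helps loading but can hurt operations where the CPU backend reads the data back.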
