CUDA: remove unnecessary warp reduce in FA (ggml/1032)
* kqmax_new_j is the same in every thread within the warp after the operation at line 199, so this reduce can be omitted

* the same problem exists in vec32 (fattn-vec-f32.cuh)

---------

Co-authored-by: ZhaoXiaoYu <[email protected]>
2 people authored and ggerganov committed Dec 3, 2024
1 parent efb6ae9 commit e9e661b
Showing 2 changed files with 0 additions and 2 deletions.
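
For context, ggml's warp-level max reduction is a butterfly shuffle that leaves the result in every lane of the warp, not just lane 0. A minimal standalone sketch of such a reduction (illustrative only; the real warp_reduce_max lives in ggml-cuda's common header and may differ in detail):

// Illustrative butterfly max-reduction, modeled on ggml's warp_reduce_max.
#include <cuda_runtime.h>

#define WARP_SIZE 32

static __device__ __forceinline__ float warp_reduce_max(float x) {
    // XOR-shuffle butterfly: each step exchanges values across a halving
    // stride; after the last step every lane holds the max over all 32 lanes.
#pragma unroll
    for (int offset = WARP_SIZE / 2; offset > 0; offset >>= 1) {
        x = fmaxf(x, __shfl_xor_sync(0xffffffff, x, offset, WARP_SIZE));
    }
    return x;
}

Because one pass already broadcasts the warp-wide maximum to all 32 lanes, calling the reduction a second time on the same value returns its input unchanged. The two deletions below drop exactly that redundant second call, keeping only the threadIdx.x == 0 guard that publishes the value to shared memory.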
1 change: 0 additions & 1 deletion ggml/src/ggml-cuda/fattn-vec-f16.cuh
@@ -220,7 +220,6 @@ static __global__ void flash_attn_vec_ext_f16(
     for (int j = 0; j < ncols; ++j) {
         half kqmax_new_j = ncols == 1 ? kqmax_new : kqmax_new_arr[j];
 
-        kqmax_new_j = warp_reduce_max(kqmax_new_j);
         if (threadIdx.x == 0) {
             kqmax_shared[j][threadIdx.y] = kqmax_new_j;
         }
1 change: 0 additions & 1 deletion ggml/src/ggml-cuda/fattn-vec-f32.cuh
@@ -206,7 +206,6 @@ static __global__ void flash_attn_vec_ext_f32(
     for (int j = 0; j < ncols; ++j) {
         float kqmax_new_j = kqmax_new_arr[j];
 
-        kqmax_new_j = warp_reduce_max(kqmax_new_j);
         if (threadIdx.x == 0) {
             kqmax_shared[j][threadIdx.y] = kqmax_new_j;
         }
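
The uniformity claim behind these deletions is easy to check in isolation. The following hypothetical test (not part of the commit; kernel and variable names are invented for illustration) gives each lane a distinct value, reduces once, and counts lanes that disagree with lane 0. It should print 0:

// Hypothetical standalone check: after a single butterfly reduction every
// lane holds the same value, so reducing a second time would be a no-op.
#include <cstdio>
#include <cuda_runtime.h>

static __device__ __forceinline__ float warp_reduce_max(float x) {
#pragma unroll
    for (int offset = 16; offset > 0; offset >>= 1) {
        x = fmaxf(x, __shfl_xor_sync(0xffffffff, x, offset, 32));
    }
    return x;
}

__global__ void check_uniform_after_reduce(int *mismatches) {
    float x = (float) threadIdx.x;   // distinct value per lane
    x = warp_reduce_max(x);          // single reduction, as before the removed line
    // Compare this lane's result against lane 0's broadcast copy.
    float lane0 = __shfl_sync(0xffffffff, x, 0);
    if (x != lane0) {
        atomicAdd(mismatches, 1);
    }
}

int main() {
    int *d_mismatches = nullptr;
    cudaMalloc(&d_mismatches, sizeof(int));
    cudaMemset(d_mismatches, 0, sizeof(int));
    check_uniform_after_reduce<<<1, 32>>>(d_mismatches);
    int h_mismatches = -1;
    cudaMemcpy(&h_mismatches, d_mismatches, sizeof(int), cudaMemcpyDeviceToHost);
    printf("mismatching lanes: %d\n", h_mismatches);   // expected: 0
    cudaFree(d_mismatches);
    return 0;
}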
