
Commit

Llama.cpp fix type of values for CLblast
Nexesenex committed Jul 11, 2024
1 parent a87a059 commit 54edc4a
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions llama.cpp
@@ -6053,15 +6053,15 @@ static bool llm_load_tensors(
     model.n_gpu_layers = n_gpu_layers;

     const int n_layer = hparams.n_layer;
-    const int i_gpu_start = std::max((int) hparams.n_layer - n_gpu_layers, (int) 0);
+    int i_gpu_start = std::max((int) hparams.n_layer - n_gpu_layers, (int) 0);
     bool use_mmap_buffer = true;

 #if defined(GGML_USE_CLBLAST)
     if(clblast_offload_fallback_mode)
     {
         printf("\nOpenCL GPU Offload Fallback...");
         clblast_offload_fallback_layers = n_gpu_layers;
-        i_gpu_start = std::max((int64_t) hparams.n_layer, (int64_t) 0);
+        i_gpu_start = std::max((int) hparams.n_layer, (int) 0);
     }
 #endif
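
In short, the commit makes the offload start index consistently an int: i_gpu_start loses its const qualifier so the CLBlast fallback branch can overwrite it, and the fallback assignment now casts to (int) rather than (int64_t), so std::max deduces the same type as the variable it feeds and no narrowing conversion is needed. Below is a minimal standalone sketch of that pattern; the variable names are stand-ins chosen for illustration, not the actual llama.cpp code.

    #include <algorithm>
    #include <cstdio>

    int main() {
        // Stand-ins for hparams.n_layer and the requested GPU layer count.
        int n_layer      = 32;
        int n_gpu_layers = 40;   // more layers requested than the model has

        // Normal path: both std::max arguments are cast to int, so the
        // deduced type matches the int variable and nothing narrows.
        int i_gpu_start = std::max((int) n_layer - n_gpu_layers, (int) 0);

        // Fallback path: start past the last layer so no layer is offloaded.
        // The reassignment is only possible because i_gpu_start is not const,
        // and the (int) casts keep the result the same type as the variable;
        // the earlier (int64_t) casts produced a wider value that had to
        // narrow on assignment.
        i_gpu_start = std::max((int) n_layer, (int) 0);

        printf("i_gpu_start = %d\n", i_gpu_start);
        return 0;
    }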

