llama : add support for larger Granite Code Models (20B, 34B) (ggerganov#7324)

Tie the weights for ARCH_STARCODER to support the larger Granite code models.
Partially addresses ggerganov/issues/7116

A few things still remain to be fixed.
Currently requires `--override-kv tokenizer.ggml.add_bos_token=bool:false`
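
Until that is fixed, a run against one of the converted Granite GGUFs would look something like the following; the model filename and prompt are placeholders, and the override string is the one quoted above:

    ./main -m granite-34b-code-base.gguf \
        --override-kv tokenizer.ggml.add_bos_token=bool:false \
        -p "def quicksort(arr):"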
sroecker authored and Nexesenex committed May 18, 2024
1 parent 53a1b30 commit 79d0166
Showing 1 changed file with 8 additions and 1 deletion.
9 changes: 8 additions & 1 deletion llama.cpp
@@ -5269,7 +5269,14 @@ static bool llm_load_tensors(
                 {
                     model.output_norm   = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
                     model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"),   {n_embd});
-                    model.output        = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
+                    model.output        = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, false);
+                    if (!model.output) {
+                        // needs to be on GPU
+                        model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
+                        ml.n_created--; // artificial tensor
+                        ml.size_data += ggml_nbytes(model.output);
+                    }
+
                 }

                 for (int i = 0; i < n_layer; ++i) {
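
The fallback above is weight tying: when the GGUF file carries no separate LLM_TENSOR_OUTPUT, the token-embedding matrix is reused as the output projection, and the n_created decrement keeps the loader's tensor accounting consistent, since the same tensor now backs two model fields but exists only once in the file. A minimal standalone sketch of the tying idea, with simplified stand-in types rather than the real llama.cpp API:

    #include <cstddef>
    #include <vector>

    // Weight tying in miniature: a single [n_vocab x n_embd] matrix serves
    // both as the input embedding table and as the output projection, which
    // is why a checkpoint with tied weights can omit output.weight entirely.
    struct tied_lm_head {
        int n_vocab = 0;
        int n_embd  = 0;
        std::vector<float> embd; // row v holds the embedding of token v

        // input side: token id -> pointer to its embedding row
        const float * embed(int token) const {
            return embd.data() + (size_t) token * n_embd;
        }

        // output side: hidden state -> one logit per token, computed
        // against the same rows used for the embedding lookups
        std::vector<float> logits(const std::vector<float> & hidden) const {
            std::vector<float> out(n_vocab, 0.0f);
            for (int v = 0; v < n_vocab; ++v) {
                const float * row = embed(v);
                for (int i = 0; i < n_embd; ++i) {
                    out[v] += row[i] * hidden[i];
                }
            }
            return out;
        }
    };

Tying saves a full n_vocab x n_embd output matrix, at the cost of forcing the input and output token representations to share weights.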
