
Commit

fix output counting comment
iamlemec committed Jun 24, 2024
1 parent 4236ccc commit 940b1e8
Showing 1 changed file with 1 addition and 2 deletions.
llama.cpp: 1 addition & 2 deletions
@@ -12618,8 +12618,7 @@ static int llama_decode_internal(
     std::vector<llama_seq_id *> seq_id_arr;
     std::vector<std::vector<llama_seq_id>> seq_id;

-    // this indicates we are doing pooling on an embedding model. non-embedding models always
-    // use "output_ids" so we need to preserve all outputs in that case (somewhat inefficiently)
+    // this indicates we are doing pooled embedding, so we ignore batch.logits and output all tokens
     bool embed_pooled = cparams.embeddings && cparams.pooling_type != LLAMA_POOLING_TYPE_NONE;

     // count outputs
