llama : fix typo in llama_tensor_get_type comment [no ci] (ggerganov#…
danbev authored and arthw committed Nov 15, 2024
1 parent f246163 commit da89547
Showing 1 changed file with 1 addition and 1 deletion.
src/llama.cpp (1 addition, 1 deletion)
@@ -15308,7 +15308,7 @@ static ggml_type llama_tensor_get_type(quantize_state_internal & qs, ggml_type n
     const int n_expert = std::max(1, (int)qs.model.hparams.n_expert);
     auto layer_info = [n_expert] (int i_layer, int n_layer, const char * name) {
         if (n_expert > 1) {
-            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but iccasionally randomly
+            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but occasionally randomly
             // sprinkled in the model. Hence, simply dividing i_ffn_down by n_expert does not work
             // for getting the current layer as I initially thought, and we need to resort to parsing the
             // tensor name.
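For context, the comment being corrected explains why the quantizer recovers the layer index from the tensor name rather than from the tensor's position. A minimal sketch of what such parsing can look like, assuming llama.cpp's "blk.<layer>." tensor naming convention; the standalone helper below is illustrative, not the exact code at this spot in src/llama.cpp:

    #include <cstdio>

    // Illustrative sketch: recover the layer index from a tensor name such as
    // "blk.17.ffn_down.weight". Because MoE experts are not laid out
    // consecutively, arithmetic on the tensor's position (e.g. dividing
    // i_ffn_down by n_expert) is unreliable; the name carries the layer number.
    static int layer_from_tensor_name(const char * name) {
        int i_layer = -1;
        // Matches the assumed "blk.<layer>." prefix; returns -1 otherwise.
        if (std::sscanf(name, "blk.%d.", &i_layer) != 1) {
            return -1;
        }
        return i_layer;
    }

    // e.g. layer_from_tensor_name("blk.17.ffn_down.weight") yields 17,
    // regardless of which expert the tensor belongs to.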
