Commit

clarify comment
ngxson committed Aug 28, 2024
1 parent f3a3033 commit fa0c2bd
Showing 1 changed file with 3 additions and 1 deletion.
convert_lora_to_gguf.py: 4 changes (3 additions & 1 deletion)
@@ -364,7 +364,9 @@ def get_tensors(self) -> Iterator[tuple[str, Tensor]]:

 def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None) -> Iterable[tuple[str, Tensor]]:
     dest = list(super().modify_tensors(data_torch, name, bid))
-    # for now, we cannot convert archs that use the same tensor for tok_embd and output
+    # some archs may have the same tensor for lm_head and output (tie word embeddings)
+    # in this case, adapters targeting lm_head will fail when using llama-export-lora
+    # therefore, we ignore them for now
     # see: https://github.com/ggerganov/llama.cpp/issues/9065
     if name == "lm_head.weight" and len(dest) == 0:
         raise ValueError("lm_head is present in adapter, but is ignored in base model")
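
For context on the new comment: with tied word embeddings, the output projection (lm_head) reuses the token-embedding tensor rather than storing a separate weight, so the converted base model carries no standalone output tensor for an adapter's lm_head delta to merge into. Below is a minimal PyTorch sketch of the tying itself; TinyLM and all names in it are illustrative assumptions, not code from llama.cpp:

# Minimal illustrative sketch of "tie word embeddings" (hypothetical TinyLM,
# not llama.cpp code): lm_head and the token embedding share one tensor.
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int = 32, dim: int = 8):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, dim)       # tok_embd
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)   # output
        # Tying: lm_head.weight is the very same Parameter as embed_tokens.weight,
        # so the base model stores no separate output weight for an adapter to target.
        self.lm_head.weight = self.embed_tokens.weight

model = TinyLM()
# One storage behind both names; skipping the tied output tensor at conversion
# time leaves an adapter's lm_head target with no base tensor to merge into.
assert model.lm_head.weight.data_ptr() == model.embed_tokens.weight.data_ptr()

This is also why the converter raises an error instead of silently dropping the adapter's lm_head tensor: the adapter targets a weight the converted base model does not expose.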
