Add support for Chameleon #8543
Conversation
I have uploaded GGUFs to test this PR with here.
will this ever get added :(
I think it would still be a good addition. I've resolved all conflicts with master now, so it should be ready to merge.
Thank you @nopperl, looks like it got merged!
Squashed commit message:

* convert chameleon hf to gguf
* add chameleon tokenizer tests
* fix lint
* implement chameleon graph
* add swin norm param
* return qk norm weights and biases to original format
* implement swin norm
* suppress image token output
* rem tabs
* add comment to conversion
* fix ci
* check for k norm separately
* adapt to new lora implementation
* fix layer input for swin norm
* move swin_norm in gguf writer
* add comment regarding special token regex in chameleon pre-tokenizer
* Update src/llama.cpp (Co-authored-by: compilade <[email protected]>)
* fix punctuation regex in chameleon pre-tokenizer (@compilade) (Co-authored-by: compilade <[email protected]>)
* fix lint
* trigger ci

Co-authored-by: compilade <[email protected]>
@nopperl any plans to tackle image->text and text->image?
@MasterScrat currently no plans, sorry for the late reply. AFAIK multimodal support would require a refactor of llama.cpp (see #8010). I'd love to work on it, but don't have the time right now.
This PR adds support for the Chameleon model. For now, the implementation only supports text->text inference and serves as a base for implementing the (more interesting) image->text, text->image and interleaved pipelines. However, those will probably require changes to the CLI and the internal architecture, so I suggest doing them in a separate PR.
Chameleon is based on the Llama-2 architecture; the main changes, as reflected in the commit history above, are:
- query/key normalization (the qk-norm weights and biases are returned to their original format during conversion)
- an optional swin-norm layer-normalization placement, used by Chameleon-30B
- a Chameleon-specific pre-tokenizer with its own special-token and punctuation regexes
- image tokens in the vocabulary, whose logits are suppressed for text-only inference (see Note 1)
Note 1: in order to enable text->text inference, the image token logits are suppressed, similar to the HF implementation (a minimal illustrative sketch follows these notes). This suppression needs to be removed when support for images is added.
Note 2: I implemented swin-norm, but I haven't tested it yet, as it is only used by Chameleon-30B.
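For illustration only (this is not the PR's actual C++ code): one way to suppress image-token logits is to set them to negative infinity before sampling, so the softmax assigns them zero probability. The token-id range below is a hypothetical placeholder, not taken from the Chameleon vocabulary.

```python
import numpy as np

# Hypothetical placeholder: the real image-token id range depends on the
# Chameleon vocabulary and is not taken from this PR.
IMAGE_TOKEN_IDS = np.arange(4, 8196)

def suppress_image_tokens(logits: np.ndarray) -> np.ndarray:
    """Return a copy of the logits with image-token entries set to -inf,
    so sampling can never pick an image token."""
    out = logits.copy()
    out[IMAGE_TOKEN_IDS] = -np.inf
    return out
```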
To test it:
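The original commands are not preserved on this page; a minimal sketch of the flow, assuming a local HF checkpoint and the repo's convert_hf_to_gguf.py script and llama-cli binary (paths and prompt are placeholders):

```python
import subprocess

hf_dir = "models/chameleon-7b"       # placeholder: local HF checkpoint directory
gguf_path = "chameleon-7b-f16.gguf"  # placeholder: converted GGUF output path

# Convert the HF checkpoint to GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", hf_dir, "--outfile", gguf_path],
    check=True,
)

# Run text->text inference with llama-cli.
subprocess.run(
    ["./llama-cli", "-m", gguf_path, "-p", "Tell me a joke.", "-n", "64"],
    check=True,
)
```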
Output:
Reference (requires transformers>=4.43.0.dev0):
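The reference script itself is not preserved here; a sketch of text-only generation with the HF implementation, assuming the facebook/chameleon-7b checkpoint and the ChameleonProcessor / ChameleonForConditionalGeneration classes added in transformers 4.43, with a placeholder prompt matching the llama-cli sketch above:

```python
import torch
from transformers import ChameleonForConditionalGeneration, ChameleonProcessor

model_id = "facebook/chameleon-7b"  # assumed checkpoint id
processor = ChameleonProcessor.from_pretrained(model_id)
model = ChameleonForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Tell me a joke."  # placeholder prompt
inputs = processor(text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```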
Reference output:
Partially addresses #7995.