

llama: Add special tokens in hf_converter for RWKV v6 #9428

Merged 1 commit into ggml-org:master on Sep 12, 2024

Conversation

MollySophia (Collaborator)

The initial implementation didn't add special tokens when converting RWKV v6 models.
It's not really an urgent problem, but it's better to fix it, as discussed in #9315.

This should also close #9315.
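For context, the gist of the fix is that the converter's RWKV v6 path now writes special-token metadata into the GGUF file instead of skipping it. A minimal sketch of that pattern using gguf-py's SpecialVocab helper follows; the file names, architecture string, and writer setup below are illustrative placeholders, not the exact diff from this PR:

```python
# Sketch: attaching special-token metadata (BOS/EOS/etc.) to a GGUF file
# with gguf-py's SpecialVocab helper. A real converter (convert_hf_to_gguf.py)
# also writes the vocabulary and tensors; this shows only the special-token step.
from pathlib import Path
import gguf

model_dir = Path("path/to/rwkv6-hf-model")        # hypothetical HF model directory
writer = gguf.GGUFWriter("rwkv6.gguf", "rwkv6")   # placeholder output path / arch name

# SpecialVocab reads the tokenizer config files in the model directory
# (e.g. tokenizer_config.json / special_tokens_map.json) and records the
# special token ids in the GGUF metadata. RWKV's tokenizer has no BPE
# merges, hence load_merges=False.
special_vocab = gguf.SpecialVocab(model_dir, load_merges=False)
special_vocab.add_to_gguf(writer)
```

Without this step, loaders see no EOS/EOT metadata in the converted model, which matches the symptom reported in #9315.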

@MollySophia (Collaborator, Author)

BTW, I'm sorry for not being able to solve the problems all at once :P

github-actions bot added the python (python script changes) label on Sep 11, 2024
ggerganov merged commit 39f852f into ggml-org:master on Sep 12, 2024
9 checks passed
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Feb 25, 2025
Labels
python (python script changes)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Bug: RWKV 6 Finch 3B+ models crash llama.cpp with CPU backend
3 participants