
Fix GLM4 alignment issue #2723

Merged · 2 commits into huggingface:main · Jan 20, 2025
Conversation

guoqingbao (Contributor)

This PR primarily addresses the generation issue in GLM4, specifically the missing rope_ratio in the construction of the cosine-sine cache. It also enables the configuration to be loaded from a JSON file and allows weights to be loaded from a local path, via a new utility function named hub_load_local_safetensors. Finally, it resolves a compilation failure caused by the hf-hub crate update in #2691, which now depends on a newer version of the tokio package.

Tested with:

```
cargo run --release --example glm4 --features cuda -- --weight-path /home/weights/glm-4-9b-chat/ --prompt "Please talk about deep learning."
```

It now generates better answers (aligned with the official results).
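For context, the rope_ratio fix can be sketched as follows. In the reference GLM4 implementation, rope_ratio scales the RoPE base before the inverse frequencies (and hence the cosine-sine cache) are computed; omitting it produces wrong rotary frequencies. This is a minimal illustrative sketch, not candle's actual code, and the helper name `inv_freqs` is hypothetical:

```rust
// Hypothetical helper illustrating the fix: GLM4 multiplies the RoPE
// base by `rope_ratio` before building the cosine-sine cache.
fn inv_freqs(head_dim: usize, base: f32, rope_ratio: f32) -> Vec<f32> {
    let scaled_base = base * rope_ratio; // the scaling that was missing
    (0..head_dim)
        .step_by(2)
        .map(|i| 1.0 / scaled_base.powf(i as f32 / head_dim as f32))
        .collect()
}

fn main() {
    // With rope_ratio = 1.0 this reduces to standard RoPE.
    let plain = inv_freqs(64, 10_000.0, 1.0);
    // An illustrative ratio > 1 enlarges the base, shrinking all
    // non-trivial frequencies.
    let scaled = inv_freqs(64, 10_000.0, 500.0);
    assert_eq!(plain[0], 1.0); // i = 0 => base^0 = 1
    assert!(scaled[1] < plain[1]);
}
```

The cosine-sine cache is then built from these inverse frequencies, so the missing factor propagates into every attention layer's rotary embedding.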

@LaurentMazare LaurentMazare merged commit e4c3a71 into huggingface:main Jan 20, 2025
10 checks passed
LaurentMazare (Collaborator)

Thanks, nice to have that fixed and in line with the official implementation!
