
lm-eval evaluation error: when running test_see_lm with the environment given by requirements, the reported error is not clear #16

Open
gxy0727 opened this issue Nov 1, 2024 · 4 comments

Comments

@gxy0727

gxy0727 commented Nov 1, 2024

[screenshot of the error message]

@rickyang1114
Collaborator

Hello, may I know if you set HF_ENDPOINT=https://hf-mirror.com? Doing so will result in errors during lm-eval's evaluation.
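
As a quick check (a minimal sketch; adjust for your shell), you can print the variable to confirm whether it is set in the current session:

echo $HF_ENDPOINT

If this prints https://hf-mirror.com, the mirror is active in that shell.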

@gxy0727
Author

gxy0727 commented Nov 2, 2024

I've already set up the mirror:
export HF_ENDPOINT=https://hf-mirror.com
and logged in to Hugging Face when evaluating the model:
huggingface-cli login
But an error is still reported.

@rickyang1114
Collaborator

rickyang1114 commented Nov 2, 2024

The root cause of the error is indeed setting export HF_ENDPOINT=https://hf-mirror.com. You may try running unset HF_ENDPOINT and using a VPN, which should resolve the issue.

This error occurs because https://hf-mirror.com restricts access to the Spaces section of HuggingFace to manage traffic. However, lm-eval requires access to this section to download evaluation scripts.
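
As a sketch of the workaround (the lm_eval invocation is only illustrative; substitute your own model and tasks):

unset HF_ENDPOINT
huggingface-cli login
lm_eval --model hf --model_args pretrained=<your-model> --tasks <your-task>

With HF_ENDPOINT unset, lm-eval downloads the evaluation scripts from huggingface.co directly, so a VPN or other direct network access may be required.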

@rickyang1114
Collaborator

[screenshot]
