Regarding ASR testing #40
Comments
Hello, I think it's probably not an issue with the prompt; each prompt was seen many times during training.
Hello, have you reproduced the results successfully? My reproduced performance on LibriSpeech test-clean is also a WER of around 15 with the following config:

```json
{
    "do_sample": false,
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "repetition_penalty": 1.0,
    "num_beams": 5
}
```
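Since `do_sample` is false and `num_beams` is greater than 1, this config specifies deterministic beam search, so repeated runs should yield identical transcripts. A minimal sketch of loading and sanity-checking the config (the keys mirror standard Hugging Face `generate()` arguments; the check itself is illustrative, not part of the original setup):

```python
import json

# The decoding config quoted above.
config_json = """
{
    "do_sample": false,
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "repetition_penalty": 1.0,
    "num_beams": 5
}
"""
cfg = json.loads(config_json)

# With do_sample=False and num_beams > 1, decoding is deterministic beam
# search: any run-to-run WER variation must come from the prompt or data,
# not from sampling randomness.
is_deterministic_beam = (not cfg["do_sample"]) and cfg["num_beams"] > 1
print("deterministic beam search:", is_deterministic_beam)
```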
Hello, thank you very much for your work. I would like to reproduce the ASR performance of the AnyGPT base model on LibriSpeech test-clean. I noticed that your paper reports a WER of 8.5, but my test result was 14.5 (using the command format `speech|text|{speech file path}`). I am therefore wondering whether this gap is caused by randomly selecting a prompt for ASR at each inference.

If possible, could you share the code you used for calculating WER (I computed it with jiwer, using composed transforms), as well as the text transcripts produced by the model's ASR? Looking forward to your reply.
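A several-point WER gap can also come from differences in text normalization or in the metric implementation itself, so it helps to pin down exactly what is being computed. The metric is just word-level Levenshtein distance divided by the number of reference words; a minimal self-contained sketch (jiwer does this internally, plus configurable normalization such as lowercasing and punctuation removal):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed as word-level Levenshtein distance after lowercasing."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("would" -> "was") plus one deletion ("be") over 6
# reference words gives WER = 2/6.
print(word_error_rate("he hoped there would be stew", "he hoped there was stew"))
```

Note that whether punctuation and casing are stripped before scoring can easily shift LibriSpeech WER by a few points, which is one reason sharing the exact scoring code matters.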