Whisper Fine-tuning Recipe on Aishell1 #1466
Conversation
On an 8xA100 (80GB) machine, it takes about 15 minutes per epoch for the large-v2/v3 models. Since I always set max_duration as large as possible, I think people could fine-tune on 24GB cards with DeepSpeed and a smaller batch size.
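As a sketch of the kind of setup this comment suggests, a DeepSpeed config with ZeRO stage 1 and a reduced micro-batch might look like the fragment below. The specific values are illustrative assumptions, not the PR's actual settings:

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 1 }
}
```

Gradient accumulation keeps the effective batch size up while the per-GPU micro-batch (and thus activation memory) stays small enough for a 24GB card.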
LGTM! Thank you!
It throws the above error while using it. Could you make your checkpoint compatible with whisper? For instance, you need to save
Sorry, I added this conversion script: https://huggingface.co/yuekai/icefall_asr_aishell_whisper/blob/main/convert.sh. I have uploaded a converted medium model. Would you mind trying it again? https://huggingface.co/yuekai/icefall_asr_aishell_whisper/blob/main/exp_medium/whisper-medium-aishell1-epoch-10-avg-4.pt
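A minimal sketch of what such a conversion has to do (the helper name and the `dims` values below are illustrative assumptions, not taken from the actual convert.sh): openai-whisper expects a checkpoint of the form `{"dims": ..., "model_state_dict": ...}`, so a bare fine-tuned state dict must be re-wrapped, and any `module.` prefix left over from a DDP-wrapped model must be stripped.

```python
def to_whisper_checkpoint(state_dict, dims):
    """Wrap a fine-tuned state dict in the layout openai-whisper loads.

    whisper.load_model() reads checkpoint["dims"] and
    checkpoint["model_state_dict"], so a bare state dict (as saved by a
    training loop) is not loadable as-is.  Keys saved from a DDP-wrapped
    model carry a "module." prefix that must also be removed.
    """
    cleaned = {k.removeprefix("module."): v for k, v in state_dict.items()}
    return {"dims": dims, "model_state_dict": cleaned}

# Dummy values for illustration; a real script would torch.load / torch.save.
raw = {"module.encoder.conv1.weight": [0.0],
       "decoder.token_embedding.weight": [1.0]}
dims = {"n_mels": 80, "n_vocab": 51865}  # assumed subset of Whisper's dims
ckpt = to_whisper_checkpoint(raw, dims)
print(sorted(ckpt["model_state_dict"]))
```

The same rewrapping applies to averaged checkpoints: average the raw state dicts first, then wrap the result once.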
Thanks! It works perfectly. I have converted it to sherpa-onnx with k2-fsa/sherpa-onnx#565. You can find the converted model at
You can also try it in the following Hugging Face space
This PR supports fine-tuning Whisper models on the AISHELL-1 dataset.