Use Whisper / Whisper.cpp for voice recognition #416
Comments
I agree, integrating Whisper would be nice. I have already played around with some options for that: something like https://github.com/jianfch/stable-ts or a Python wrapper for https://github.com/ggerganov/whisper.cpp (ggerganov/whisper.cpp#9 or https://github.com/o4dev/whispercpp.py) would be needed to get word-level timestamps. Note: I think we should add this but not replace Vosk with it, as Vosk has much lower inference times and is therefore especially useful on slower machines.
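As a rough illustration (a sketch, not tested here), something like the following stable-ts usage could yield word-level timestamps; the model size and audio path are placeholders, and the exact result attributes depend on the stable-ts version:

```python
# Sketch: word-level timestamps via stable-ts (https://github.com/jianfch/stable-ts).
# Assumes the stable_whisper package is installed; model size and file path are placeholders.
import stable_whisper

model = stable_whisper.load_model("base")      # load an OpenAI Whisper model
result = model.transcribe("interview.wav")     # transcribe with refined timestamps

# Walk over segments and their word-level timings
for segment in result.segments:
    for word in segment.words:
        print(f"{word.start:6.2f} {word.end:6.2f}  {word.word}")
```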
Some more thoughts on this: how do we do this in a performant and cross-platform way? whisper.cpp would be one option, but it would also be cool to use something like TVM for general-purpose GPU support and Core ML for Apple's accelerators. Is there a ready-made abstraction over these, or would we need to invent something new (and would that be too much work, etc.)?
I recently used whisperX to transcribe some interviews. I believe the
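For reference, a rough sketch of how whisperX is typically used to get word-level alignment, assuming its load_model / load_align_model / align API; model name, device, and file path are placeholders:

```python
# Sketch: transcription + word-level alignment with whisperX (https://github.com/m-bain/whisperX).
# Assumes the whisperx package is installed; model name, device, and paths are placeholders.
import whisperx

device = "cpu"                               # or "cuda" if a GPU is available
model = whisperx.load_model("base", device)  # load a Whisper model

audio = whisperx.load_audio("interview.wav")
result = model.transcribe(audio)

# Align the transcript to obtain word-level timestamps
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)

for segment in aligned["segments"]:
    for word in segment.get("words", []):
        print(word.get("start"), word.get("end"), word["word"])
```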
We are currently working on something similar for the transcribee project. I'm not sure we have the time right now to integrate it into audapolis, but once we have a working solution for transcribee, it should be relatively simple to integrate it into audapolis (however, we might run into some problems with packaging this in a reliable cross-platform way).
@pajowu What is the mission of transcribee? How is it different from audapolis? Please add a README. :)
While with audapolis the focus was on editing multimedia and transcription was only a by-product, transcribee focuses fully on transcription. We only started working on it last week and will add a proper README soon. Until then, you can have a look at the project description on the Prototype Fund website.
@pajowu Nice. I'm very interested as an ML engineer and podcaster. Please add some "help wanted" and "good first issue" tickets soon; I'd love to contribute.
Would really second this, as the Vosk transcriptions are objectively awful compared to Whisper; having more accurate transcriptions would make audapolis much more useful!
It would be nice to use Whisper instead of Vosk for the speech recognition on the server part, as it currently seems to outperform other models in terms of transcription quality.
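As a purely hypothetical sketch of what this could look like on the server side (not audapolis's actual server code), assuming the openai-whisper package and its word_timestamps option; the function name and the Vosk-style output shape are illustrative only:

```python
# Hypothetical sketch: Whisper-based transcription producing Vosk-style word timings.
# Assumes the openai-whisper package; function name and output format are illustrative only.
import whisper

def transcribe_with_whisper(audio_path: str, model_size: str = "base") -> list[dict]:
    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path, word_timestamps=True)

    # Flatten Whisper's segments into a Vosk-like list of {word, start, end} entries
    words = []
    for segment in result["segments"]:
        for w in segment.get("words", []):
            words.append({"word": w["word"].strip(), "start": w["start"], "end": w["end"]})
    return words

if __name__ == "__main__":
    for entry in transcribe_with_whisper("sample.wav"):
        print(entry)
```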