
Use Whisper / Whisper.cpp for voice recognition #416

Open
MathiasSchindler opened this issue Jan 26, 2023 · 8 comments
Labels
enhancement New feature or request

Comments

@MathiasSchindler

It would be nice to use Whisper instead of Vosk for the speech recognition on the server side, as it currently seems to outperform other models in terms of speech-recognition quality.

@pajowu
Member

pajowu commented Jan 26, 2023

I agree, integrating Whisper would be nice. I already played around with options for that a bit; something like https://github.com/jianfch/stable-ts or a Python wrapper for https://github.com/ggerganov/whisper.cpp (ggerganov/whisper.cpp#9 or https://github.com/o4dev/whispercpp.py) would be needed to get word-level timestamps. A rough sketch of what that could look like is below.

Note: I think we should add this, but not replace Vosk with it, as Vosk has much lower inference times and is therefore especially useful on slower machines.
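
A minimal sketch of the stable-ts route, assuming a recent stable-ts release (the attribute names have shifted between versions, and `interview.wav` is just a placeholder path):

```python
# Rough sketch: word-level timestamps via stable-ts (a wrapper around
# openai-whisper).  Install with: pip install stable-ts
import stable_whisper

model = stable_whisper.load_model("base")   # model size trades accuracy for speed
result = model.transcribe("interview.wav")  # placeholder path

# stable-ts attaches per-word timing information to every segment
for segment in result.segments:
    for word in segment.words:
        print(f"{word.start:6.2f}s - {word.end:6.2f}s  {word.word}")
```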

@anuejn
Member

anuejn commented Jan 26, 2023

Some more thoughts on this: how do we do this in a performant and cross-platform way? Sure, whisper.cpp would be one option, but it would also be cool to use something like TVM for general-purpose GPU support and Core ML for Apple platform accelerators. Is there any ready-made abstraction over these, or would we need to invent something new (and would that be too much work, etc.)?

@anuejn added the enhancement label on Feb 12, 2023
@clstaudt

clstaudt commented Mar 6, 2023

I recently used whisperX to transcribe some interviews. I believe the large model, and perhaps even the medium model, would perform significantly better than the current transcription. Inference times are a factor though: with GPU support I was able to transcribe at 7x speed with the large model.
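
For reference, the whisperX flow is roughly the following (a sketch based on the whisperX README; the API may have changed since, and the file path is a placeholder):

```python
# Rough sketch of the whisperX flow: transcribe with Whisper, then
# force-align to get word-level timestamps.  Install with: pip install whisperx
import whisperx

device = "cuda"  # GPU is what enables the ~7x speed mentioned above
model = whisperx.load_model("large", device)
result = model.transcribe("interview.mp3")  # placeholder path

# Load a language-specific alignment model and refine the segment timestamps
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
aligned = whisperx.align(result["segments"], align_model, metadata,
                         "interview.mp3", device)

print(aligned["word_segments"])  # per-word start/end times
```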

@pajowu
Member

pajowu commented Mar 6, 2023

We are currently working on something similar for the transcribee project. I'm not sure we have the time right now to integrate it into audapolis, but once we have a working solution for transcribee, it should be relatively simple to port it over (however, we might run into some problems with packaging this in a reliable, cross-platform way).

@clstaudt

clstaudt commented Mar 6, 2023

@pajowu What is the mission of transcribee? How is it different from audapolis? Please add a Readme. :)

@pajowu
Member

pajowu commented Mar 6, 2023

While with audapolis the focus was on editing multimedia and transcription was only a by-product, transcribee focuses fully on transcription. We only started working on it last week and will add a proper README soon. Until then, you can have a look at the project description on the Prototype Fund website.

@clstaudt

clstaudt commented Mar 6, 2023

@pajowu Nice. I'm very interested as an ML engineer and podcaster. Please add some "help wanted" and "good first issue" tickets soon, I'd love to contribute.

@lentemoore

Would really second this, as the Vosk transcriptions are objectively awful compared to Whisper. More accurate transcriptions would make audapolis much more useful!
