Is your feature request related to a problem? Please describe.
I'm always frustrated when the model is so slow; not everyone is using the fastest GPU. I want to recommend this app to people I know. One of them is a blind person who owns a Windows laptop with 2 GB of VRAM on the graphics card. They are willing to learn new things, as they are a software engineer as well, but they have a hard time using their PC because they can't read the screen. With this feature they could transcribe videos and learn better through text readers.
Describe the solution you'd like
I just want you to keep the app the same and add support for running the whisper.cpp model on the CPU, with the user having the choice to download a whisper.cpp model from this repo: https://github.com/bilalazhar72/whisper.cpp/tree/master/models
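For illustration, here is a minimal sketch of what the CPU path could look like. This is not the app's actual code; it assumes whisper.cpp has been cloned and its `main` binary built with `make`, and it uses the model download script and sample WAV that ship with that repo:

```python
import subprocess

# Rough sketch, not existing app code: fetch a small ggml model with the
# helper script that ships with whisper.cpp, then run CPU-only inference
# on the bundled 16 kHz sample file. Assumes `main` was built with `make`.
subprocess.run(["bash", "./models/download-ggml-model.sh", "base.en"], check=True)
subprocess.run(["./main", "-m", "models/ggml-base.en.bin", "-f", "samples/jfk.wav"], check=True)
```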
Describe alternatives you've considered
They show it being used on an iPhone 13 and doing a good job; the model seems to be very fast and accurate. There are currently no alternatives for using this app on Windows. Since you already have everything set up, I would just like it to have two features: (1) load a model version, let the user speak, and output the text (even long text), and (2) load any model size and accept an MP3 file to transcribe. As whisper.cpp shows, this implementation makes sure the model can be run on any hardware, and people will start using this app for that reason as well.
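As a rough sketch of the second feature (file transcription): `transcribe_mp3` below is a hypothetical helper, not existing app code, and the ffmpeg step is there because whisper.cpp expects 16 kHz mono WAV input:

```python
import subprocess
from pathlib import Path

def transcribe_mp3(mp3_path: str, model: str = "models/ggml-base.en.bin") -> str:
    """Hypothetical helper: convert an MP3 to 16 kHz mono WAV, then
    transcribe it on the CPU with whisper.cpp's `main` binary."""
    wav_path = str(Path(mp3_path).with_suffix(".wav"))
    # ffmpeg handles the conversion; -ar 16000 -ac 1 match whisper.cpp's expected input
    subprocess.run(["ffmpeg", "-y", "-i", mp3_path, "-ar", "16000", "-ac", "1", wav_path],
                   check=True)
    # -otxt writes the transcript next to the audio as <wav_path>.txt
    subprocess.run(["./main", "-m", model, "-f", wav_path, "-otxt"], check=True)
    return Path(wav_path + ".txt").read_text()
```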
Additional context
Here is the video of it running on an iPhone 13 and the model working flawlessly. I am not good at coding since I am a beginner, but it seems like everything is linked to one file and can be run easily as well.
Hey, thanks for the long and detailed post, and sorry for the very late reply.
I've seen whisper.cpp but never actually got to try it; I'll be sure to try adding this in the future. Right now I'm trying to integrate whisper_timestamped and stable-ts and fix the bugs, while also improving the app experience in terms of UI and performance.