Replies: 3 comments 1 reply
-
IMO the experimental parameters are completely out of scope for now.
-
Since the tokenization code used for the initial prompts is largely a copy-paste from llama.cpp, I'm wondering whether we can reuse it directly. I don't think introducing a dependency of whisper.cpp on llama.cpp would be a terrible pain.
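For context, a minimal sketch (assuming the current whisper.h API) of how an initial prompt is tokenized through the public API today; `whisper_tokenize` is the entry point into that copy-pasted tokenizer code, and the model path is just a placeholder. If we reused llama.cpp instead, this is roughly the call site that would change.

```cpp
// Sketch: tokenize an initial prompt with whisper.cpp's own tokenizer
// (the code behind whisper_tokenize is the part copied from llama.cpp).
#include <cstdio>
#include <vector>

#include "whisper.h"

int main() {
    // placeholder model path
    struct whisper_context * ctx = whisper_init_from_file_with_params(
        "ggml-base.en.bin", whisper_context_default_params());
    if (!ctx) {
        return 1;
    }

    const char * prompt = "Glossary: whisper.cpp, llama.cpp, GGML";

    std::vector<whisper_token> tokens(256);
    const int n = whisper_tokenize(ctx, prompt, tokens.data(), (int) tokens.size());
    if (n < 0) {
        fprintf(stderr, "prompt did not fit into %zu tokens\n", tokens.size());
    } else {
        printf("prompt tokenized into %d tokens\n", n);
    }

    whisper_free(ctx);
    return 0;
}
```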
-
Callbacks can be useful for the local implementation, but we have no way of transferring them through the API as params. The functionality that the callbacks cover will have to go through the progress callback of sessions.
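To make that concrete, here is a sketch of how whisper.cpp's native `progress_callback` could be bridged into a session-level progress notification. The `Session` type and its `report_progress` method are hypothetical placeholders for our sessions API; the whisper.cpp fields and callback signature are the real ones from whisper.h.

```cpp
// Sketch: bridge whisper.cpp's progress_callback into a session-level
// progress notification. Session / report_progress are hypothetical
// placeholders for our sessions API.
#include <cstdio>

#include "whisper.h"

struct Session {
    // Hypothetical: however the sessions API actually pushes progress to clients.
    void report_progress(int percent) {
        printf("session progress: %d%%\n", percent);
    }
};

static void on_whisper_progress(struct whisper_context * /*ctx*/,
                                struct whisper_state *   /*state*/,
                                int progress, void * user_data) {
    // whisper.cpp reports progress as an integer percentage.
    static_cast<Session *>(user_data)->report_progress(progress);
}

void run_inference(struct whisper_context * ctx, Session & session,
                   const float * samples, int n_samples) {
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    // The raw callback is never exposed as an API param; it only feeds the
    // session's progress reporting.
    params.progress_callback           = on_whisper_progress;
    params.progress_callback_user_data = &session;

    whisper_full(ctx, params, samples, n_samples);
}
```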
-
All params for the inference can be checked here:

Whisper params needed for the MVP:

Whisper params not needed for the MVP:
- params to enable/disable prints on stdout of the internal library during decoding. I don't think they will be of any use to the clients, so we should hide them (see the params sketch after this list)
- suppress_regex - desc
- common decoding params:
- fallback parameters - they seem to be meant for fine tuning
- grammar -> GBNF grammar to guide decoding. At some point we might need it.
- encoder_begin_callback -> called each time before the encoder starts
- Experimental in whisper
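As a rough illustration of the split above, a sketch of a `whisper_full_params` setup for the MVP, assuming we simply hide the library's stdout prints and leave the non-MVP knobs at whisper.cpp's defaults; the field names are taken from whisper.h.

```cpp
// Sketch: whisper_full_params for the MVP - hide the internal library's
// stdout prints and leave the non-MVP knobs at their defaults.
#include "whisper.h"

whisper_full_params make_mvp_params() {
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    // prints on stdout of the internal library during decoding - hidden from clients
    params.print_special    = false;
    params.print_progress   = false;
    params.print_realtime   = false;
    params.print_timestamps = false;

    // Left at defaults, i.e. not exposed in the MVP:
    // params.suppress_regex                                  - regex of tokens to suppress
    // params.temperature_inc / entropy_thold / logprob_thold - fallback fine tuning
    // params.grammar_rules / n_grammar_rules                 - GBNF grammar to guide decoding
    // params.encoder_begin_callback                          - called before the encoder starts

    return params;
}
```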