[Model] Whisper model implementation #11280
Can we remove the EOS token in the input processor? Maybe we can improve this later via the merged multi-modal processor, since I assume HF can handle this automatically.
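For illustration, a minimal sketch of what stripping a trailing EOS could look like; the helper name and the idea of operating directly on `prompt_token_ids` are assumptions, not vLLM's actual input-processor API:

```python
# Hypothetical helper, not vLLM's actual API: drop a trailing EOS that
# the tokenizer appended to the (decoder) prompt before further processing.
def strip_trailing_eos(prompt_token_ids: list[int], eos_token_id: int) -> list[int]:
    if prompt_token_ids and prompt_token_ids[-1] == eos_token_id:
        return prompt_token_ids[:-1]
    return prompt_token_ids
```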
Seems that `<|startoftranscript|>` is also a special token which is added by the tokenizer by default: it outputs `'<|startoftranscript|><|notimestamps|><|startoftranscript|><|endoftext|>'`, which means that we need to remove a `<|startoftranscript|>` to get the original prompt as well. Perhaps we need to construct user prompts with non-special tokens.
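A sketch of the behavior being described, assuming `openai/whisper-small` with its default tokenizer config (the exact prefix tokens depend on the checkpoint and tokenizer settings):

```python
from transformers import WhisperTokenizer

tok = WhisperTokenizer.from_pretrained("openai/whisper-small")

# By default the tokenizer wraps the text in its own special tokens.
ids = tok("<|startoftranscript|>").input_ids
print(tok.decode(ids))
# expected: '<|startoftranscript|><|notimestamps|><|startoftranscript|><|endoftext|>'

# add_special_tokens=False would skip the prefix/suffix entirely.
ids = tok("<|startoftranscript|>", add_special_tokens=False).input_ids
print(tok.decode(ids))
# expected: '<|startoftranscript|>'
```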
Yes, there are other special tokens that are added as well. There are also some other challenges:

1. `<|notimestamps|>` is 50363 for whisper-small but 50364 for whisper-large-v3. This makes it awkward to try to detect these special tokens from inside the input processor.
2. There are two cases to handle:
   a. The user passed in a text `prompt`, which is tokenized in vLLM. We need to strip special tokens.
   b. The user passed in `prompt_token_ids`, which should not be stripped.

Basically, I'm not sure how to reliably remove special tokens after the prompt is already encoded. It seems error-prone and brittle against third-party behavior in huggingface. I understand that having the tokenizer not add special tokens in the first place involves more changes inside vLLM, but IMO it's the most reliable solution, and it is more robust to future changes in vLLM and huggingface.
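The ID mismatch mentioned in point 1 is easy to confirm with the HF tokenizers (a quick check, assuming access to both checkpoints):

```python
from transformers import WhisperTokenizer

for name in ("openai/whisper-small", "openai/whisper-large-v3"):
    tok = WhisperTokenizer.from_pretrained(name)
    print(name, tok.convert_tokens_to_ids("<|notimestamps|>"))
# openai/whisper-small    -> 50363
# openai/whisper-large-v3 -> 50364
```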
Is there a way to determine this without model type information?
I am not sure about generalizing this from a single example. In the long term, it may be better to allow the model definition to specify exactly the mapping between input fields and where they go (e.g. encoder vs. decoder).
Agreed with @aurickq. Long-term, it is probably best to either (1) have the model definition specify whether to map the input text prompt to the encoder, or (2) add a default behavior only for multi-modal models with cross-attention, wherein the text prompt is always routed to the decoder and the non-text modality is always mapped to the encoder.
(I worked on adding encoder/decoder cross-attention support to v0.)
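A rough sketch of what option (1) might look like; the class, field names, and helper below are all hypothetical, not an existing vLLM interface:

```python
from typing import Dict, Literal

InputTarget = Literal["encoder", "decoder"]

class WhisperInputSpec:
    """Hypothetical: the model definition declares where each input field goes."""
    input_mapping: Dict[str, InputTarget] = {
        "audio": "encoder",   # mel features are always routed to the encoder
        "prompt": "decoder",  # the text prompt is always routed to the decoder
    }

def route_inputs(inputs: dict, mapping: Dict[str, InputTarget]) -> dict:
    """Split a flat input dict into encoder/decoder sub-dicts per the mapping."""
    routed: dict = {"encoder": {}, "decoder": {}}
    for field, value in inputs.items():
        routed[mapping[field]][field] = value
    return routed
```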