[Misc] Avoid misleading warning messages #10438
Conversation
Signed-off-by: Jee Jee Li <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
I think we can only inherit from
Sounds great, I will handle it asap
Signed-off-by: Jee Jee Li <[email protected]>
@DarkLight1337 I have handled it, please review it again, thanks
Signed-off-by: Jee Jee Li <[email protected]> Signed-off-by: Manjul Mohan <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]> Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]> Signed-off-by: rickyx <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]> Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
For QWenLMHeadModel and ChatGLMForCausalLM, where the multimodal model and the LLM share the same model class, incorrect warning messages are output when enabling LoRA for the LLM. Additionally, this PR fixes a small typo in QWenLMHeadModel.
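The issue described above can be sketched as follows. This is a minimal illustration, not vLLM's actual code: the function name, parameters, and warning text are hypothetical. The point is that when one model class serves both a text-only LLM and a multimodal variant, a LoRA-related warning must be gated on the runtime mode rather than on the class name alone, otherwise text-only users see a warning that does not apply to them.

```python
import warnings


# Hypothetical sketch; names are illustrative and not part of vLLM's API.
def maybe_warn_lora_unsupported(model_cls_name: str, is_multimodal: bool) -> None:
    """Warn about limited LoRA support only when the multimodal path is active.

    Classes like QWenLMHeadModel and ChatGLMForCausalLM back both the
    text-only LLM and the multimodal variant, so keying the warning on the
    class name alone produces misleading messages for text-only usage.
    """
    if is_multimodal:
        warnings.warn(
            f"{model_cls_name}: LoRA support for the multimodal variant "
            "is limited; only the language backbone is adapted.")


# Text-only usage: no warning should be emitted.
maybe_warn_lora_unsupported("QWenLMHeadModel", is_multimodal=False)
```

With this gating, enabling LoRA on the text-only model stays silent, while the multimodal path still surfaces the warning.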
ping @DarkLight1337