[Model] Add BNB support to Llava and Pixtral-HF #10795

Merged
1 commit merged into vllm-project:main on Dec 2, 2024

Conversation

Collaborator

@Isotr0py Isotr0py commented Nov 30, 2024

FIX #9967

  • Add BNB support to the Llava and Pixtral-HF models (see the usage sketch below)

Signed-off-by: Isotr0py <[email protected]>
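
As a rough illustration (not code from this PR), this is how the added support could be exercised once merged, assuming vLLM's standard bitsandbytes flags (`quantization="bitsandbytes"` together with `load_format="bitsandbytes"`); the prompt and sampling settings are arbitrary placeholders:

```python
# Minimal sketch: loading a LLaVA checkpoint with in-flight bitsandbytes
# quantization through the vLLM Python API. The flags below are assumptions
# based on vLLM's documented bitsandbytes usage; verify them against your
# installed vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
)

# CLI equivalent (assumption: the same flags apply to `vllm serve`):
#   vllm serve llava-hf/llava-1.5-7b-hf \
#       --quantization bitsandbytes --load-format bitsandbytes

outputs = llm.generate(
    ["USER: Describe what a vision-language model does.\nASSISTANT:"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```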

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of these by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

@DarkLight1337 DarkLight1337 requested a review from mgoin November 30, 2024 16:31
Collaborator

@jeejeelee jeejeelee left a comment

Thank you for your contribution and detailed explanation.

@jeejeelee jeejeelee enabled auto-merge (squash) December 1, 2024 23:58
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Dec 1, 2024
@jeejeelee jeejeelee merged commit b18c9bb into vllm-project:main Dec 2, 2024
62 checks passed
@Isotr0py Isotr0py deleted the llava-bnb branch December 2, 2024 04:48
afeldman-nm pushed a commit to neuralmagic/vllm that referenced this pull request Dec 2, 2024
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
BKitor pushed a commit to BKitor/vllm that referenced this pull request Dec 30, 2024
Labels
ready (ONLY add when PR is ready to merge/full CI is needed)
Development

Successfully merging this pull request may close these issues.

[Usage]: How to use llava-hf/llava-1.5-7b-hf with bitsandbytes quantization in vllm serve?
3 participants