
[V1] Simplify vision block hash for prefix caching by removing offset from hash #11646

Merged 2 commits into vllm-project:main on Dec 31, 2024

Conversation

Collaborator

@heheda12345 heheda12345 commented Dec 31, 2024

Reuse the example in #11187

Take a series of 3 blocks as an example: T0,T1,P00,P01 | P02,P03,P04,T2 | T3,P10,P11,P12, where Ti is the i-th text token and Pxy is the y-th placeholder token of the x-th image, so this prompt has 2 images (P0 and P1). Assuming the image hashes of P0 and P1 are aaa and bbb, respectively, and mm_positions=[(offset=2, length=5), (offset=9, length=3)], the hashes of the 3 blocks in #11187 are as follows:

# (Parent hash,
#  token ID w. placeholders,
#  image hash, start)
hash0 = hash(None, T0,T1,P00,P01, (aaa,0))
hash1 = hash(hash0, P02,P03,P04,T2, (aaa,2))
hash2 = hash(hash1, T3,P10,P11,P12, (bbb,0))

But offset is redundant and the following hash format is enough:

# (Parent hash,
#  token ID w. placeholders,
#  image hash)
hash0 = hash(None, T0,T1,P00,P01, aaa)
hash1 = hash(hash0, P02,P03,P04,T2, aaa)
hash2 = hash(hash1, T3,P10,P11,P12, bbb)
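
The simplified scheme can be sketched in Python. This is an illustrative model only, not vLLM's actual implementation: `block_hash` and the token id values are hypothetical, and `IMG` stands in for the shared image-placeholder token id.

```python
# Illustrative model of the simplified block hash (not vLLM's actual code).
# IMG is a hypothetical placeholder token id; every image uses the same
# placeholder token, so the image content hash is what tells images apart.
IMG = 32000

def block_hash(parent, token_ids, image_hashes):
    """Hash a block from (parent hash, token ids, overlapping image hashes)."""
    return hash((parent, tuple(token_ids), tuple(image_hashes)))

# The 3-block example: T0,T1,P00,P01 | P02,P03,P04,T2 | T3,P10,P11,P12
hash0 = block_hash(None,  [0, 1, IMG, IMG],   ["aaa"])
hash1 = block_hash(hash0, [IMG, IMG, IMG, 2], ["aaa"])
hash2 = block_hash(hash1, [3, IMG, IMG, IMG], ["bbb"])
```

Chaining through the parent hash means each block's hash already encodes everything before it, which is the key to why the offset can be dropped.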

Simple proof:
We need to distinguish the above example from the following cases:

  1. The same image but a different offset, e.g., T0,T1,T2,P00 | P01,P02,P03,P04 | T3,P10,P11,P12. Here hash0 becomes hash(None, T0,T1,T2,P00, aaa), which is different because the token sequence differs.
  2. The same offset but a different image (say P2 with hash ccc), e.g., T0,T1,P20,P21 | P22,P23,P24,T2 | T3,P10,P11,P12. Here hash0 becomes hash(None, T0,T1,P20,P21, ccc), which is different because the image hash differs.
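
Under the same illustrative model (a hypothetical `block_hash` over (parent, token ids, image hashes), with `IMG` as the shared placeholder token id), both cases can be checked directly:

```python
IMG = 32000  # hypothetical shared image-placeholder token id

def block_hash(parent, token_ids, image_hashes):
    return hash((parent, tuple(token_ids), tuple(image_hashes)))

# Original first block: T0,T1,P00,P01 with image hash aaa
base = block_hash(None, [0, 1, IMG, IMG], ["aaa"])
# Case 1: same image, different offset -> T0,T1,T2,P00 (token ids differ)
case1 = block_hash(None, [0, 1, 2, IMG], ["aaa"])
# Case 2: same offset, different image -> T0,T1,P20,P21 (image hash ccc differs)
case2 = block_hash(None, [0, 1, IMG, IMG], ["ccc"])
```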

This hash format is almost as fast as the previous one, but avoids the confusion about why the offset needs to be part of the hash.

Benchmark result on H100 with the script in #11187 (https://gist.github.com/comaniac/ea26df17fdffa533cf53d53b8455bc31), running each commit 3 times:

VLLM_USE_V1=1 VLLM_ENABLE_V1_MULTIPROCESSING=1 python3 mmmu_bench.py --model llava-hf/llava-v1.6-mistral-7b-hf --num-prompts 500  --image-hit-rate 0.3
Without this PR:
Throughput: 15.84 req/s
Throughput: 16.09 req/s
Throughput: 16.00 req/s

With this PR:
Throughput: 16.11 req/s
Throughput: 15.94 req/s
Throughput: 15.84 req/s

Note that the following format, which adds the image hash only to the first block of each image, is also correct, but is a little slower (15.50 req/s). The code is in 539e84c:

# (Parent hash,
#  token ID w. placeholders,
#  image hash)
hash0 = hash(None, T0,T1,P00,P01, aaa)
hash1 = hash(hash0, P02,P03,P04,T2, )
hash2 = hash(hash1, T3,P10,P11,P12, bbb)
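
In the same illustrative model (hypothetical `block_hash` and `IMG`, not vLLM's actual code), this variant passes an empty image-hash tuple for continuation blocks; correctness is preserved because hash1 still chains to hash0, which already encodes aaa:

```python
IMG = 32000  # hypothetical shared image-placeholder token id

def block_hash(parent, token_ids, starting_image_hashes):
    # Only images *starting* in this block contribute their content hash.
    return hash((parent, tuple(token_ids), tuple(starting_image_hashes)))

hash0 = block_hash(None,  [0, 1, IMG, IMG],   ["aaa"])  # image aaa starts here
hash1 = block_hash(hash0, [IMG, IMG, IMG, 2], [])       # continuation: no image hash
hash2 = block_hash(hash1, [3, IMG, IMG, IMG], ["bbb"])  # image bbb starts here
```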

Signed-off-by: Chen Zhang <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

Collaborator

@comaniac comaniac left a comment


Thanks!

@comaniac comaniac added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 31, 2024
@comaniac comaniac enabled auto-merge (squash) December 31, 2024 05:19
@comaniac comaniac merged commit 8c3230d into vllm-project:main Dec 31, 2024
66 checks passed
Member

ywang96 commented Dec 31, 2024

I guess we previously overlooked the fact that the offset information can already be inferred from the token sequence and the hash of the previous block. Thanks for this simplification!

bjmsong pushed a commit to bjmsong/vllm that referenced this pull request Jan 2, 2025