[Core] Reduce TTFT with concurrent partial prefills #10235
base: main
Conversation
assert len(sampling_metadata.seq_groups) \
    == len(maybe_deferred_sample_results) \
    == len(prompt_logprobs) \
    == len(sample_logprobs)
I think the assertions currently would only happen if:
- All running sequences are chunked continuous prefills
- No decode sequences in the scheduler batch.
Sweet, we will try to add some unit tests to cover that case and see if we can make it crash. Thanks!
vllm/config.py
@@ -1085,6 +1085,7 @@ def __init__(self,
                 max_num_batched_tokens: Optional[int],
                 max_num_seqs: int,
                 max_model_len: int,
                 num_prefill_slots: int = 1,
Is this actually the "maximum number of prefill sequences in a batch"? If so, could we name it something more informative, like max_num_batched_prefill_seqs?
It's technically only the number of partial prefills allowed in a batch. You could still have, say, 100 sequence groups with 5 prompt tokens each all scheduled in a single step here. How about max_num_partial_prefills?
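To illustrate the distinction with a hedged toy example (not code from this PR; `num_partial_prefills` is a made-up helper): only prompts that cannot finish prefilling within their per-step token allowance count as partial prefills, so many short prompts can still all be scheduled in one step.

```python
# Toy illustration (not vLLM code): the limit counts prompts that will be left
# partially prefilled, not the total number of prefill sequences in the batch.

def num_partial_prefills(prompt_lens: list[int], per_prefill_budget: int) -> int:
    """A prompt is a *partial* prefill only if it can't finish in one step."""
    return sum(1 for n in prompt_lens if n > per_prefill_budget)


# With a 512-token batch budget and max_num_partial_prefills=4, each partial
# prefill would get a 128-token chunk. 100 tiny 5-token prompts each finish
# well within that, so none are partial prefills, even though 100 prefill
# sequences are scheduled together.
print(num_partial_prefills([5] * 100, per_prefill_budget=128))          # -> 0

# Two 10k-token prompts with the same per-prefill budget are both partial,
# and would count against the limit.
print(num_partial_prefills([10_000, 10_000], per_prefill_budget=128))   # -> 2
```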
vllm/core/scheduler.py
# Requests with more than (4% max context length) tokens to prefill
# are "big".
Why this definition and threshold?
The entire goal here is to not allow decode to be starved by the prefill phase blocking on long requests; see this part of the PR description:

A single very large prompt will block all other prompts from prefilling for many iterations. This can eventually starve decoding: for example, a 130k token prompt with --max-num-batched-tokens=512 will take about 250 iterations to prefill, in which time the currently decoding sequences may all finish. Send a few of these requests at once and very quickly nothing will be decoding.

Just allowing concurrent partial prefills doesn't solve the problem by itself, because multiple long requests could still block up the prefill. So what we do is only allow a single long request to prefill at a time, and allow smaller requests to be pulled from the waiting queue instead of more long ones.
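To make that concrete, here is a rough, hedged sketch of the gating idea; names such as `SketchSchedulerConfig`, `is_long`, and `can_start_prefill` are illustrative and not the actual scheduler code in this PR.

```python
# Hedged sketch of the long-prefill gating described above. The config fields
# mirror this PR's flags, but the logic is illustrative only.

from dataclasses import dataclass


@dataclass
class SketchSchedulerConfig:
    max_model_len: int = 131072
    long_prefill_threshold: float = 0.04   # "long" = more than 4% of the context length
    max_long_partial_prefills: int = 1     # at most one long prefill at a time


def is_long(num_prompt_tokens_remaining: int, cfg: SketchSchedulerConfig) -> bool:
    """A request is considered long if its remaining prefill exceeds the threshold."""
    return (num_prompt_tokens_remaining
            > cfg.long_prefill_threshold * cfg.max_model_len)


def can_start_prefill(num_prompt_tokens_remaining: int,
                      num_long_prefills_in_flight: int,
                      cfg: SketchSchedulerConfig) -> bool:
    """Short requests are always eligible; long ones are capped."""
    if not is_long(num_prompt_tokens_remaining, cfg):
        return True
    return num_long_prefills_in_flight < cfg.max_long_partial_prefills


cfg = SketchSchedulerConfig()
print(is_long(130_000, cfg))               # True: the 130k-token prompt is "long"
print(can_start_prefill(130_000, 1, cfg))  # False: a long prefill is already running
print(can_start_prefill(500, 1, cfg))      # True: short requests still get pulled in
```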
@pytest.mark.parametrize("model", ["facebook/opt-125m"])
@pytest.mark.parametrize("max_num_partial_prefills", [2, 4, 8])
def test_chunked_prefill_with_actual_engine(model: str,
cc @rickyyx here's what we tried to do to test that the sampler doesn't throw any assertions: we put multiple prompts into an engine and manually step it forward while they are all partially prefilled.
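For reference, here is a condensed, hedged sketch of that test idea; the real test in this PR differs, and passing `max_num_partial_prefills` through `EngineArgs` is an assumption based on this PR rather than an existing vLLM argument.

```python
# Hedged sketch: feed several prompts into a real engine with chunked prefill
# enabled and step it manually so multiple prompts are partially prefilled at
# once, checking that sampling never trips an assertion.

import pytest
from vllm import EngineArgs, LLMEngine, SamplingParams


@pytest.mark.parametrize("max_num_partial_prefills", [2, 4, 8])
def test_multiple_partial_prefills(max_num_partial_prefills: int):
    engine_args = EngineArgs(
        model="facebook/opt-125m",
        enable_chunked_prefill=True,
        max_num_batched_tokens=32,  # a tiny budget forces chunking
        max_num_partial_prefills=max_num_partial_prefills,  # assumed new arg from this PR
    )
    engine = LLMEngine.from_engine_args(engine_args)

    long_prompt = "hello " * 200
    for i in range(max_num_partial_prefills):
        engine.add_request(str(i), long_prompt, SamplingParams(max_tokens=4))

    # Step until everything finishes; the sampler assertions would fire here
    # if logprob bookkeeping broke while all requests were mid-prefill.
    while engine.has_unfinished_requests():
        engine.step()
```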
seq_group_meta, out = schedule_and_update_computed_tokens(scheduler)
# large req gets 63 tokens (minus 1 for decode)
assert seq_group_meta[0].token_chunk_size == 63
assert seq_group_meta[1].token_chunk_size == 1  # decode
Not sure if this is a bug, but at this stage request#3 should be decoding, but it didn't get any budget. Request#2 got budget for 1 decode token, and request#0 got the remaining budget for prefilling 63 tokens. Is that expected?

Based on this comment:

# Update new running requests.
# By default, vLLM scheduler prioritizes prefills.
# Once chunked prefill is enabled,
# the policy is changed to prioritize decode requests.

vLLM should have prioritized decode requests and given request#2 and request#3 a budget of 1 token each, and request#0 the remaining 62?
This does happen in the next iteration, though.
This pull request has merge conflicts that must be resolved before it can be merged.
Whoops, fixed that!
This pull request has merge conflicts that must be resolved before it can be merged.
Ugh, this is a tough merge conflict.
@@ -1755,6 +1935,8 @@ def _get_num_new_uncached_and_cached_tokens(
            budget,
            self._get_prompt_limit(seq_group),
            num_uncached_new_tokens,
            self.partial_prefill_budget_lookup_list,
            partial_prefill_metadata,
nit: I think _chunk_new_tokens_to_schedule should not be static if this many of its arguments are instance attributes. But this doesn't have to block the PR.
This pull request has merge conflicts that must be resolved before it can be merged.
This is definitely a great improvement for workloads that might have long prefill requests. Also, great job on the merge (I think I had a PR that also touched very similar components).

W.r.t. the PR itself, I mainly have one question:
- What's the overhead of recomputing the partial prefill metadata on each scheduler step? If it's not negligible, could we avoid it? I feel the information needed for the partial prefill metadata could be accumulated as a very limited amount of internal scheduler state.

I am still a bit concerned about the potential consequences of having multiple concurrent prefills in the system, but if nothing breaks, then we are good. I will let folks with more knowledge of the other parts of the system chime in.

cc @comaniac
self.partial_prefill_budget_lookup_list = [0] * (
    self.scheduler_config.max_num_partial_prefills + 1)
self.partial_prefill_budget_lookup_list[0] = (
    scheduler_config.max_num_batched_tokens)
Just so I am understanding it correctly: partial_prefill_budget_lookup_list is a list mapping the number of partial prefills to the token budget for each partial prefill?

We only need to compute one integer division when we schedule prefills, right? That seems like a small overhead compared with what we already compute for the partial prefill metadata on every scheduler step.

If we want to keep this, I think moving it into the metadata class might be better?
> partial_prefill_budget_lookup_list is a list mapping the number of partial prefills to the token budget for each partial prefill?

Yup! We calculate this once up front when the scheduler is created so that we don't have to do a division for every sequence in every scheduling step. A list access measures slightly faster than an integer division, at least on the machines we have running in IBM Cloud.

> If we want to keep this, I think moving it into the metadata class might be better?

Yeah, we could do that. I just wasn't quite sure how to make that properly static and cached on the metadata class, since we wouldn't want to re-create this list on every scheduler step.
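For readers following along, here is a minimal hedged sketch of how the lookup list is built and used; the first two assignments mirror the diff above, while the loop filling the remaining entries and the usage example are inferred for illustration.

```python
# Sketch of the budget lookup list: precompute max_num_batched_tokens // k for
# every possible number of partial prefills k, so the scheduler indexes a list
# instead of dividing per sequence per step. Index 0 is a guard entry that
# hands out the full budget.

max_num_batched_tokens = 2048
max_num_partial_prefills = 4

partial_prefill_budget_lookup_list = [0] * (max_num_partial_prefills + 1)
partial_prefill_budget_lookup_list[0] = max_num_batched_tokens
for i in range(1, max_num_partial_prefills + 1):
    partial_prefill_budget_lookup_list[i] = max_num_batched_tokens // i

# With 3 prefills in flight, each gets roughly a third of the token budget.
num_schedulable_prefills = 3
per_prefill_budget = partial_prefill_budget_lookup_list[num_schedulable_prefills]
print(per_prefill_budget)  # 682
```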
I spent a few minutes looking into this and I'm not sure there's a clean way to add a class attribute to a @dataclass, unfortunately. I'd rather not make the code even more hacky than it is!

What do you think between either:
- leaving it as is, or
- doing the division inline, like scheduler_config.max_num_batched_tokens // partial_prefill_metadata.schedulable_prefills?
@rickyyx good question! I kinda already answered why we went this way in this comment, but another reason we need to do some amount of up-front processing is that we have to first peek through the waiting queue at the beginning of each step to see if there are prefills we can schedule, so that we can then properly budget the tokens when we schedule the running queue. So, we could keep info around in the scheduler state about the number of prefills currently running, but we still have to peek at the waiting queue each step. As for exactly how long that extra pass takes, we can grab some measurements.
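To make that up-front pass concrete, here is a rough sketch of the idea; the class and field names below are illustrative and not necessarily the ones used in this PR.

```python
# Illustrative-only sketch of the per-step "peek": count how many prefills
# (and how many long ones) could run this step by looking at both the running
# and waiting queues before any tokens are budgeted.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class PartialPrefillMetadataSketch:
    schedulable_prefills: int
    long_prefills: int


def compute_partial_prefill_metadata(
    running_prefill_tokens: Iterable[int],   # remaining prompt tokens, running queue
    waiting_prompt_tokens: Iterable[int],    # prompt tokens, waiting queue
    max_num_partial_prefills: int,
    long_token_threshold: int,
) -> PartialPrefillMetadataSketch:
    prefills = 0
    long_prefills = 0
    for tokens in list(running_prefill_tokens) + list(waiting_prompt_tokens):
        if prefills >= max_num_partial_prefills:
            break
        if tokens > long_token_threshold:
            if long_prefills >= 1:      # e.g. max_long_partial_prefills == 1
                continue                # skip extra long requests, keep looking
            long_prefills += 1
        prefills += 1
    return PartialPrefillMetadataSketch(prefills, long_prefills)


meta = compute_partial_prefill_metadata(
    running_prefill_tokens=[120_000],
    waiting_prompt_tokens=[30, 45, 90_000],
    max_num_partial_prefills=4,
    long_token_threshold=5_000,
)
print(meta)  # 3 schedulable prefills, only 1 of them long
```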
Thanks for the response! Ah, I see the dependency between the number of waiting requests and the running-queue scheduling now. If possible, I think profiling that per-step pass would be great.
Related recent issue that I think would be improved by this: #11286
maybe related to #10774
@rickyyx Here are some numbers from running the serving benchmark and measuring the time of the partial prefill metadata computation:

This took at most 27 nanoseconds, so I don't think we need to worry about this overhead. What do you think?
Oh nice, this is rather negligible. Thanks for collecting that - do you know roughly how many elements are in the queue? Or what's the load when serving?
@rickyyx It was the "medium case" from the PR description at 12 qps, but I didn't grab any logs from the server about the size of the queues while it was running. I can run the other cases and do it at higher qps to verify it doesn't get out of hand.
@rickyyx Some more numbers, with the "mixed case" under 24qps load:
With many requests running and a large waiting queue, it maxes out at 100 nanoseconds. So it's a little worse, but still far below a millisecond, which should be fine here.
Replaces #10061, as inspired by @njhill and @comaniac's comments. Co-authored by @prashantgupta24
Context: our customers running large multi-tenanted SaaS deployments of vLLM have a problem where high volumes of small-prompt requests are usually processed smoothly, but quickly pile up in a giant queue when a small number of large-prompt requests are submitted. We see the decoding throughput drop to zero on multiple replicas when this happens.
The current chunked prefill implementation only allows a single sequence to be partially prefilled at a time. This has a few limitations:

- A single very large prompt will block all other prompts from prefilling for many iterations. This can eventually starve decoding: for example, a 130k token prompt with --max-num-batched-tokens=512 will take about 250 iterations to prefill, in which time the currently decoding sequences may all finish. Send a few of these requests at once and very quickly nothing will be decoding.

This PR implements:

- --max-num-partial-prefills=N to allow up to N sequences to be partially prefilled concurrently.
- --max-long-partial-prefills=N to set the limit on the number of long sequences that can be concurrently prefilled. This defaults to 1 sequence.
- --long-prefill-threshold=x% to set a percentage of the context length that determines which sequences are considered "long". This defaults to 4%.

This is implemented in the v0 scheduler. We're aware that the v1 implementation is underway and will later become the default, but we need a fix for our customers soon, and we hope that what we discover here may help inform a different, better solution in the v1 scheduler.
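As an illustration of how these options compose with the existing chunked prefill settings, here is a hedged configuration sketch; the new option names come from this PR's description, and their exact spelling and plumbing through `EngineArgs` are assumptions.

```python
# Hedged configuration sketch: enable_chunked_prefill and max_num_batched_tokens
# are existing vLLM options; max_num_partial_prefills, max_long_partial_prefills,
# and long_prefill_threshold are the options proposed in this PR, so treat their
# names and types here as illustrative assumptions.

from vllm import EngineArgs, LLMEngine

engine_args = EngineArgs(
    model="facebook/opt-125m",
    enable_chunked_prefill=True,
    max_num_batched_tokens=512,
    max_num_partial_prefills=4,    # allow 4 prompts to prefill concurrently (new)
    max_long_partial_prefills=1,   # only 1 "long" prompt may prefill at a time (new)
    long_prefill_threshold=0.04,   # "long" = more than 4% of the max context length (new)
)
engine = LLMEngine.from_engine_args(engine_args)
```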
To test this we created three scenarios, a “medium request” case, a “large request” case, and a “mixed” case.
For the medium request case, we created a subset of the sharegpt dataset with 900 small requests (<50 prompt characters) and 100 of the largest requests (typically between 10k and 20k prompt characters, which we call "medium" sized). We modified the benchmark_serving.py test to not filter out any of the small or large requests, and ran it with this dataset. What we expect to find is similar throughput compared to the main branch, but much lower TTFT on the small requests. Since 10% of the requests are larger than the rest, we should see better TTFT at p90 and below, with comparable TTFT above p90.

For the large request case, we took 990 of the smallest requests from the sharegpt dataset, and then took 10 of the largest requests and duplicated the prompts until they were around 100k characters in length. We ran this in the same way as the medium request case, and here we expect to see smaller TTFT across the board, since the small requests will no longer be blocked from prefilling by the few very large requests.

For the mixed case, we used 850 "small" and 140 "medium" requests, as well as 10 "large" requests where we duplicated the prompts up to 200k characters.
All tests were run on a single 80GB A100, with the command:
We ran the tests against the main branch (commit 874f551b3626321f6bf9a902b8fd9fc1fa7c7f2e), as well as this PR with the new optimization both disabled (--max-num-partial-prefills=1) and enabled (--max-num-partial-prefills=4).

The results are shown here:
The TTFT improvements are very easy to see: in the medium case we cut the p90 TTFT in half, and in the large case we cut it nearly 30x. In both cases we did not measure a throughput drop when run with --max-num-partial-prefills=1, and the throughput drop with --max-num-partial-prefills=4 is minimal.

Surprisingly, along with the massive TTFT improvements in the "mixed" test case, we also see a 4% throughput improvement (3506 tokens/s, up from 3368 tokens/s). Based on the fact that ITL still looks a little slower, it seems that the throughput is higher simply because more requests were able to be successfully scheduled at the same time.
cc @rickyyx