[V1] Fix non-cudagraph op name (vllm-project#10166)
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: OmerD <[email protected]>
WoosukKwon authored and omer-dayan committed Nov 10, 2024
1 parent a65ac40 commit a477e40
Showing 1 changed file with 1 addition and 1 deletion: vllm/v1/worker/gpu_model_runner.py
@@ -411,7 +411,7 @@ def load_model(self) -> None:
         set_compilation_config(
             CompilationConfig(
                 use_cudagraph=True,
-                non_cudagraph_ops=["vllm.unified_flash_attention"],
+                non_cudagraph_ops=["vllm.unified_v1_flash_attention"],
                 use_inductor=True,
             ))
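For context, the `load_model` configuration as it stands after this fix is sketched below. The import paths are an assumption for vllm at this commit's vintage (they have moved in later releases), and the comment on the renamed op reflects the apparent intent of the fix rather than documented behavior:

```python
# Sketch of the corrected V1 compilation setup in gpu_model_runner.py.
# NOTE: import locations are an assumption for this era of vllm and
# changed in later releases.
from vllm.plugins import set_compilation_config
from vllm.compilation.config import CompilationConfig

set_compilation_config(
    CompilationConfig(
        use_cudagraph=True,
        # The entry must name the custom op registered by the V1 attention
        # path ("vllm.unified_v1_flash_attention"), not the V0 name
        # ("vllm.unified_flash_attention"); with the wrong name the graph
        # would not be split around attention, so the attention op would
        # end up inside CUDA graph capture instead of running eagerly.
        non_cudagraph_ops=["vllm.unified_v1_flash_attention"],
        use_inductor=True,
    ))
```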
