
Multi models rebase #22792

Triggered via pull request November 26, 2024 21:18
Status: Failure
Total duration: 28s
Artifacts

mypy.yaml

on: pull_request
Matrix: mypy

Annotations

30 errors
mypy (3.9): vllm/entrypoints/openai/serving_completion.py#L102
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.9): vllm/entrypoints/openai/serving_completion.py#L147
"generate" of "EngineClient" gets multiple values for keyword argument "lora_request" [misc]
mypy (3.9): vllm/entrypoints/openai/serving_completion.py#L151
Argument 4 to "generate" of "EngineClient" has incompatible type "str"; expected "Optional[LoRARequest]" [arg-type]
mypy (3.9): vllm/entrypoints/openai/serving_chat.py#L126
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.9): vllm/entrypoints/openai/mm_api_server.py#L133
Argument 1 to "is_unsupported_config" of "MMLLMEngineClient" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.9): vllm/entrypoints/openai/mm_api_server.py#L141
Argument "engine_args" to "from_engine_args" of "AsyncLLM" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.9)
Process completed with exit code 1.
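The first two error patterns above can be reproduced in miniature. This is a hypothetical sketch, not vLLM code: `EngineClientSketch`, its methods, and the argument values are all stand-ins. It shows (a) the `[attr-defined]` case, where a caller uses a method name the class does not define, and (b) the `[misc]` case, where a positional argument already fills `lora_request`, so passing it again by keyword gives it "multiple values".

```python
# Hypothetical stand-in for EngineClient: only get_tokenizer exists,
# so a call to get_tokenizer_mm is flagged [attr-defined] by mypy
# (and would raise AttributeError at runtime).
class EngineClientSketch:
    def get_tokenizer(self) -> str:
        return "tokenizer"

    def generate(self, prompt, params, request_id, lora_request=None):
        # Echo the arguments so callers can see what was bound where.
        return (prompt, request_id, lora_request)


client = EngineClientSketch()

# [attr-defined]: no such method; mypy suggests the near-miss get_tokenizer.
# client.get_tokenizer_mm()

# [misc]: the fourth positional value already binds lora_request, so the
# keyword repeats it -- "gets multiple values for keyword argument".
# client.generate("p", None, "id", "adapter", lora_request=None)  # TypeError

# A clean call binds lora_request exactly once, by keyword:
result = client.generate("p", None, "id", lora_request=None)
print(result)
```

The fix in the real code is the same shape: call the method that actually exists on `EngineClient`, and pass `lora_request` once, as an `Optional[LoRARequest]` rather than a `str`.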
mypy (3.11): vllm/entrypoints/openai/serving_completion.py#L102
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.11): vllm/entrypoints/openai/serving_completion.py#L147
"generate" of "EngineClient" gets multiple values for keyword argument "lora_request" [misc]
mypy (3.11): vllm/entrypoints/openai/serving_completion.py#L151
Argument 4 to "generate" of "EngineClient" has incompatible type "str"; expected "LoRARequest | None" [arg-type]
mypy (3.11): vllm/entrypoints/openai/serving_chat.py#L126
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.11): vllm/entrypoints/openai/mm_api_server.py#L133
Argument 1 to "is_unsupported_config" of "MMLLMEngineClient" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.11): vllm/entrypoints/openai/mm_api_server.py#L141
Argument "engine_args" to "from_engine_args" of "AsyncLLM" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.11)
Process completed with exit code 1.
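The remaining `[arg-type]` errors come from nominal typing: `vllm.engine.mm_arg_utils.AsyncEngineArgs` and `vllm.engine.arg_utils.AsyncEngineArgs` share a name but are unrelated classes, so mypy rejects one where the other is annotated. A hypothetical sketch (all names here are stand-ins for the two modules' classes):

```python
# Two same-named but unrelated classes: distinct nominal types to mypy.
class ArgsA:  # stands in for vllm.engine.arg_utils.AsyncEngineArgs
    pass


class ArgsB:  # stands in for vllm.engine.mm_arg_utils.AsyncEngineArgs
    pass


def from_engine_args(engine_args: ArgsA) -> str:
    # Annotated to accept ArgsA only; passing ArgsB is [arg-type] under
    # mypy even if it happens to work at runtime.
    return type(engine_args).__name__


# One way to satisfy the checker: make the multimodal args a subtype of
# the base args, so it is accepted wherever ArgsA is expected.
class ArgsBCompat(ArgsA):
    pass


print(from_engine_args(ArgsBCompat()))
```

Whether the real fix is subclassing, widening the annotation to a `Union`/protocol, or unifying the two `AsyncEngineArgs` classes depends on the PR's design; the sketch only illustrates why mypy reports the mismatch.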
mypy (3.10)
The job was canceled because "_3_9" failed.
mypy (3.10): vllm/entrypoints/openai/serving_completion.py#L102
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.10): vllm/entrypoints/openai/serving_completion.py#L147
"generate" of "EngineClient" gets multiple values for keyword argument "lora_request" [misc]
mypy (3.10): vllm/entrypoints/openai/serving_completion.py#L151
Argument 4 to "generate" of "EngineClient" has incompatible type "str"; expected "LoRARequest | None" [arg-type]
mypy (3.10): vllm/entrypoints/openai/serving_chat.py#L126
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.10): vllm/entrypoints/openai/mm_api_server.py#L133
Argument 1 to "is_unsupported_config" of "MMLLMEngineClient" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.10): vllm/entrypoints/openai/mm_api_server.py#L141
Argument "engine_args" to "from_engine_args" of "AsyncLLM" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.10)
Process completed with exit code 1.
mypy (3.12)
The job was canceled because "_3_9" failed.
mypy (3.12): vllm/entrypoints/openai/serving_completion.py#L102
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.12): vllm/entrypoints/openai/serving_completion.py#L147
"generate" of "EngineClient" gets multiple values for keyword argument "lora_request" [misc]
mypy (3.12): vllm/entrypoints/openai/serving_completion.py#L151
Argument 4 to "generate" of "EngineClient" has incompatible type "str"; expected "LoRARequest | None" [arg-type]
mypy (3.12): vllm/entrypoints/openai/serving_chat.py#L126
"EngineClient" has no attribute "get_tokenizer_mm"; maybe "get_tokenizer"? [attr-defined]
mypy (3.12): vllm/entrypoints/openai/mm_api_server.py#L133
Argument 1 to "is_unsupported_config" of "MMLLMEngineClient" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.12): vllm/entrypoints/openai/mm_api_server.py#L141
Argument "engine_args" to "from_engine_args" of "AsyncLLM" has incompatible type "vllm.engine.mm_arg_utils.AsyncEngineArgs"; expected "vllm.engine.arg_utils.AsyncEngineArgs" [arg-type]
mypy (3.12)
The operation was canceled.