Due to the following check in vLLM 0.6.2, we cannot serve LoRA adapters on a per-request basis, since the engine does not support LoRA and multimodal models simultaneously:
if self.lora_config:
    assert supports_lora(self.model), "Model does not support LoRA"
    assert not supports_multimodal(
        self.model
    ), "To be tested: Multi-modal model with LoRA settings."
Do you have a plan to upgrade the vLLM version for Aria?
Hi @thanhnguyentung95
You can try upgrading vLLM in your local environment and continue development first. There are several indirect dependencies (like PyTorch, transformers), so I'll need some extra time to test the upgrade thoroughly to ensure it doesn't break any existing Aria functionality.
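If you go that route, a quick smoke test after the upgrade can confirm that Aria still loads and generates before testing LoRA. The snippet below is only a sketch; the upgrade command, model id, and dtype are assumptions about your local setup.

    # Quick smoke test after a local upgrade (e.g. `pip install -U vllm`),
    # to verify Aria still loads and generates. Settings here are assumptions.
    from vllm import LLM, SamplingParams

    llm = LLM(model="rhymes-ai/Aria", trust_remote_code=True, dtype="bfloat16")
    out = llm.generate(["Hello, Aria!"], SamplingParams(max_tokens=16))
    print(out[0].outputs[0].text)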