Commit bc80d1e
fix
xiangw2 committed Nov 27, 2024
1 parent 9ffd7a8 commit bc80d1e
Showing 1 changed file with 2 additions and 1 deletion.
3 changes: 2 additions & 1 deletion vllm/model_executor/models/telechat2.py
@@ -39,7 +39,8 @@ def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
         vllm_config.model_config.hf_config.mlp_bias = True
         super().__init__(vllm_config=vllm_config, prefix=prefix)
         # 2. Remove the bias from the qkv_proj and gate_up_proj based on config
-        # FIXME: Handle qkv_bias etc
+        # Telechat2's gate_up_proj and qkv_proj don't have bias
+        # see: https://github.com/vllm-project/vllm/pull/10311#issuecomment-2490297566
         for layer in self.layers:
             layer.self_attn.qkv_proj.bias = layer.mlp.gate_up_proj.bias = None
             layer.self_attn.qkv_proj.skip_bias_add = True
