
merge err: scales and qzeros dimension mismatch #15

Open

sscheng216 opened this issue Nov 29, 2023 · 4 comments

@sscheng216

Dear Sir,

Thanks for sharing this great work.
I followed the instructions in readme.md, using GPTQ-for-LLaMa
to quantize the llama-2-7b-chat model, and then applied LoRA fine-tuning, but I get an error in merge.py due to a dimension mismatch between scales and qzeros:

[screenshot: merge.py traceback showing the scales/qzeros dimension mismatch]

Did I miss something?

@yuhuixu1993 (Owner)

@sscheng216 Hi, you may check the dtype of the zeros. You need to convert the zeros from the packed (coded) format into fp16.
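
For anyone asking how below: a minimal sketch of that conversion, assuming the 4-bit packing layout used by GPTQ-for-LLaMa (each int32 in qzeros packs 32 // bits zero points) and the common GPTQ convention of storing zero - 1. The function name unpack_qzeros_to_fp16 is made up for illustration; adjust the offset handling if your checkpoint packs the zeros differently.

```python
import torch

def unpack_qzeros_to_fp16(qzeros: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Unpack GPTQ-style packed zero points (int32) into fp16.

    qzeros: int32 tensor of shape [n_groups, out_features * bits // 32];
            each int32 packs 32 // bits zero points.
    Returns an fp16 tensor of shape [n_groups, out_features] that lines
    up with `scales`.
    """
    assert qzeros.dtype == torch.int32
    # Bit offsets of the packed fields inside each int32: 0, bits, 2*bits, ...
    shifts = torch.arange(0, 32, bits, device=qzeros.device, dtype=torch.int32)
    mask = (1 << bits) - 1
    # Shift every packed word by each offset, keep the low `bits` bits.
    unpacked = torch.bitwise_right_shift(qzeros.unsqueeze(-1), shifts) & mask
    unpacked = unpacked.reshape(qzeros.shape[0], -1)
    # Many GPTQ-for-LLaMa checkpoints store (zero - 1), so add 1 back before
    # casting. Drop the +1 if your quantizer packs the zeros directly.
    return (unpacked + 1).to(torch.float16)
```

After this, the unpacked zeros have the same shape as scales, which is exactly what the dimension check in merge.py is comparing.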

@StiphyJay

Hi, how do you convert the zeros from the coded format into fp16?

@xiangxiangGao1996

Hi, I also want to ask how to convert the zeros from the coded format into fp16.

@YuanzeSun

Hi, I also ran into this problem. Has anyone solved it? Thank you!
