Personally, I would like to fine-tune Hugging Face models that have already been quantized to int4, but currently your code requires GPTQ group-wise quantization before QA-LoRA can be used for fine-tuning, which seems to limit some of the capabilities of your project.
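To make the request concrete, here is a minimal sketch of the loading path being asked for, assuming the bitsandbytes int4 route in `transformers`; the model id and config values are illustrative placeholders, not part of the QA-LoRA codebase:

```python
# Hypothetical sketch: loading a Hugging Face model quantized to int4
# via bitsandbytes -- the path this request asks QA-LoRA to support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # int4 weights, as in the request
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                 # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Today, QA-LoRA instead expects a checkpoint produced by GPTQ group-wise
# quantization (e.g. via auto-gptq) before fine-tuning can begin.
```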
Thanks for your suggestion. I will consider it in the future.