Training Process #40

Open
z7r7y7 opened this issue Nov 2, 2024 · 0 comments

Comments


z7r7y7 commented Nov 2, 2024

Hello, thank you very much for your project and algorithm contributions! I have some questions about fine-tuning and quantizing the LLaMA model with the QA-LoRA algorithm:
1. Do I need to quantize the pre-trained weights downloaded from Hugging Face with AutoGPTQ before training, given the error I get when loading the pre-trained weights directly: `FileNotFoundError: [Errno 2] No such file or directory: '/home/ud202481521/llm_model/llama-2-7b-hf/quantize_config.json'`? (A sketch of the workflow I am attempting follows after this list.)

2. If quantization is required first, can QA-LoRA then continue fine-tuning on top of the quantized weights? When I quantize the pre-trained weights with AutoGPTQ and then train with QA-LoRA on those weights, I get the message: `QuantLinear with cuda backend not support trainable mode yet, Switch to the pytorch backend`.

Additionally, I encountered the error: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn` (see the P.S. below for my guess about the cause).
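
For reference, here is a minimal sketch of the two-step workflow I am attempting, assuming a recent `auto-gptq` release in which `from_quantized` accepts `trainable=True`; the output directory, calibration text, and quantization settings are placeholders of mine, not values from this repo:

```python
# Minimal sketch of what I am trying; placeholder paths/settings, not from the repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_dir = "/home/ud202481521/llm_model/llama-2-7b-hf"
quantized_dir = "./llama-2-7b-gptq-4bit"  # hypothetical output directory

tokenizer = AutoTokenizer.from_pretrained(pretrained_dir, use_fast=True)

# Step 1: quantize the FP16 checkpoint with AutoGPTQ. save_quantized() writes
# the quantize_config.json that the FileNotFoundError in question 1 refers to.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(pretrained_dir, quantize_config)
calibration = [tokenizer("A short calibration sentence.", return_tensors="pt")]
model.quantize(calibration)
model.save_quantized(quantized_dir)

# Step 2: reload the quantized checkpoint in trainable mode for fine-tuning.
# My understanding is that trainable=True forces the fall-back from the fused
# CUDA kernels to the PyTorch backend, which would explain the warning in
# question 2.
model = AutoGPTQForCausalLM.from_quantized(
    quantized_dir,
    device="cuda:0",
    use_triton=False,
    trainable=True,
)
```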

I might be misunderstanding how to use the algorithm properly. Your help would be greatly appreciated.
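
P.S. My current guess about the `RuntimeError` is that no tensor in the computation graph requires gradients, i.e. the LoRA adapters were never attached and every quantized base weight is frozen. Below is a hedged sketch of the adapter setup I would try with `peft`; the `target_modules` names and LoRA hyperparameters are my assumptions, not values from this repo:

```python
# Hedged sketch: attach LoRA adapters so at least some parameters require
# gradients; with everything frozen, loss.backward() raises
# "element 0 of tensors does not require grad and does not have a grad_fn".
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# `model` is the trainable quantized model from the previous sketch;
# model.model is the underlying transformers module that peft expects.
peft_model = get_peft_model(model.model, lora_config)
peft_model.print_trainable_parameters()  # should show a small trainable fraction
```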
