feat: add lora fine tuning for llama 3.2 #958
Conversation
Review thread on use_case_examples/lora_finetuning/data_finetune/raw_cml_1.7.0_examples.txt (resolved)
Force-pushed from d2a25cf to 372307f
Thanks for your PR.
Some comments:
- If we want to go with LoRA, maybe we should add it to the forbidden list; I stopped spamming you with my LoRA comments, lol.
- The new LoRA API is very cool.
- The GPT2 and Llama notebooks follow the same logic and share the same utility functions; maybe we could create a utils file for them.
- In the GPT2 notebook, I think you don't use the full potential of the new LoRA API, or maybe you wanted to highlight what's happening behind the scenes and I did not get it.
- In the three notebooks, I think it's not clear to the reader whether FHE is used only for the inference or for the adapters as well; maybe you should explicitly state it in the introduction or the conclusion.
I think they already share a few functions through the utils file. GPT2 uses the previous API version without the LoraTrainer, so it is a bit more complicated but also more flexible.
Yes, I kept GPT2 without the LoraTrainer to show that one could use their own training method, but it implies defining the hybrid model / remote layers and so on (roughly as sketched below).
I will add a sentence at the beginning to make sure what we do here is clear.
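For context, here is a rough sketch of the two styles being discussed. The peft calls are the standard Hugging Face PEFT API; the LoraTrainer usage in the comments is an assumption about the new API's workflow, not the notebooks' exact code.

```python
# Sketch only: peft calls are standard; the LoraTrainer part in the comments
# below is an assumed illustration of the new API, not the notebook code.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT2 fused attention projection
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the LoRA adapters are trainable

# New-API style (Llama notebook): a trainer wraps the peft model and handles
# the hybrid model / remote layers itself. Names and arguments here are
# hypothetical placeholders for that workflow:
#
#   trainer = LoraTrainer(peft_model, optimizer=..., loss_fn=...)
#   trainer.compile(calibration_inputs)
#   trainer.train(train_dataloader)
#
# Previous-API style (GPT2 notebook): keep your own training loop and define
# the hybrid model / remote layers explicitly, which is more verbose but also
# more flexible.
```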
Force-pushed from 8d227cc to 333c46d
Thanks for the changes.
It would be nice to specify whether the weights are encrypted too.
Please fix the GPT2 notebook convergence.
- Fix wrong unpacking of inputs in LoraTraining and add a check
- Add optimizer step in the GPT2 notebook
- Fix a typo in the Llama notebook
- Update version in requirements
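To make the first two fixes concrete, here is a minimal, hypothetical training-step sketch (illustrative names, not the actual LoraTraining code): the batch is unpacked by key rather than by position, and the optimizer step follows the backward pass, without which the loss never decreases.

```python
# Minimal sketch of the fixed pattern: explicit unpacking of the batch inputs
# plus an optimizer step after backward. Names here are illustrative only.
def train_epoch(model, dataloader, optimizer, device="cpu"):
    model.train()
    for batch in dataloader:
        # Unpack by key instead of relying on positional order, and fail fast
        # if the expected fields are missing.
        assert "input_ids" in batch and "labels" in batch, "unexpected batch format"
        input_ids = batch["input_ids"].to(device)
        labels = batch["labels"].to(device)

        outputs = model(input_ids=input_ids, labels=labels)
        loss = outputs.loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # the missing step behind the GPT2 convergence issue
```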
Coverage passed ✅