I have read your article about Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA), which I found quite insightful. One thing I still could not grasp is a practical understanding of how LoRA handles the updated weight matrix.

My idea: it would be great for this book (and the article) if you could illustrate the effect of LoRA applied to a matrix, before and after decomposition to a specific rank. As an inspiration, this might be helpful: Visualization of the matrix multiplications in singular value decomposition.

Apart from that, I think it would also be nice to update the graphic about PCA projection so that the data points in the plot on the left are 3-dimensional and use the same size, labels, scale, and number of ticks as the plot on the right.
Thanks for the valuable suggestions! In particular, I like the first graphic idea. I think that plotting this with some actual values (from a small toy model) would make a nice bonus section or standalone article.
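For what it's worth, a minimal sketch of such a toy example could look like this (assuming NumPy; the dimensions, rank, and random values are made up for illustration). Note that the SVD here is only used to construct a reference rank-r update for the "before and after decomposition" comparison; LoRA itself learns A and B during finetuning rather than computing them from a decomposition:

```python
import numpy as np

rng = np.random.default_rng(42)

d = 6  # toy dimension; real LLM weight matrices are much larger
r = 2  # rank of the low-rank update

W = rng.normal(size=(d, d))        # pretrained weight (frozen in LoRA)
delta_W = rng.normal(size=(d, d))  # hypothetical full finetuning update

# Best rank-r approximation of delta_W via truncated SVD,
# giving the B @ A factorization that LoRA parameterizes directly.
U, S, Vt = np.linalg.svd(delta_W)
B = U[:, :r] * S[:r]  # shape (d, r), columns scaled by singular values
A = Vt[:r, :]         # shape (r, d)
delta_W_r = B @ A     # rank-r reconstruction of the update

print("rank of delta_W:  ", np.linalg.matrix_rank(delta_W))
print("rank of B @ A:    ", np.linalg.matrix_rank(delta_W_r))
print("approximation err:", np.linalg.norm(delta_W - delta_W_r))

# Updated weights before and after the low-rank decomposition:
W_full = W + delta_W    # full update
W_lora = W + delta_W_r  # rank-r update, as a LoRA adapter would apply it
```

Heatmaps of `delta_W`, `B`, `A`, and `B @ A` side by side would then give exactly the before/after picture suggested above, along with the parameter count dropping from d*d to 2*d*r.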