[Quantization/Parameter] WIP: Replace parameter subclasses with raw nn.Parameter with additional attributes #11622
base: main
Conversation
…to add functionalities
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
As discussed, please fix the format. |
@dsikka can you please take a look? |
Please keep the docstrings and typing added for all the original parameters
This pull request has merge conflicts that must be resolved before it can be merged. |
Will re-review soon. |
@dsikka upon reflection, I find this previous code:

```python
weight_g_idx = RowvLLMParameter(data=torch.empty(
    input_size_per_partition,
    dtype=torch.int32,
),
    input_dim=0,
    weight_loader=weight_loader)
```

intended:

```python
weight_g_idx = nn.Parameter(torch.empty(
    input_size_per_partition,
    dtype=torch.int32,
))
weight_g_idx.vllm_parameter = RowvLLMParameter(data=weight_g_idx,
                                               input_dim=0,
                                               weight_loader=weight_loader)
```

This way, only the definition changes. Does this sound better to you? |
This looks better. |
@dsikka All quantization class inheritances are kept, except that BasevLLMParameter no longer inherits from nn.Parameter. Here is the PR implemented in this new way (it has already been tested), which makes it convenient to compare the two implementations. Please review it to see whether this approach is clearer. |
FIX: issue-10612, pull-10609
Problem:
Parameter subclasses are not fully compatible with the following:
torch.compile: using parameter subclasses causes compatibility issues when models are compiled with torch.compile.
offloadedTensor: parameter subclasses also do not compose well with tensor subclasses.
Solution:
Remove all parameter subclasses and instead attach the necessary attributes and functions directly to raw nn.Parameter objects, so they retain the behavior required for quantization parameters. This mainly involves rewriting the code that defines and inherits parameter subclasses, and it requires minimal modification to the parts of the code that consume these parameters.
Example Code Changes:
Original Definition:
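(The original code block did not survive extraction. Below is a minimal sketch of the subclass-based pattern this PR removes, using a deliberately simplified, hypothetical version of RowvLLMParameter; the real class in vLLM carries more machinery.)

```python
import torch
from torch import nn


# Hypothetical simplified version of the subclass-based definition:
# RowvLLMParameter inherits from nn.Parameter and carries loading metadata.
class RowvLLMParameter(nn.Parameter):
    def __new__(cls, data, input_dim, weight_loader):
        # nn.Parameter construction happens in __new__ (it is a Tensor subclass)
        return super().__new__(cls, data, requires_grad=False)

    def __init__(self, data, input_dim, weight_loader):
        self.input_dim = input_dim
        self.weight_loader = weight_loader


weight_g_idx = RowvLLMParameter(
    data=torch.empty(16, dtype=torch.int32),
    input_dim=0,
    weight_loader=lambda param, loaded: param.data.copy_(loaded),
)
# The parameter is a subclass instance, not a plain nn.Parameter —
# this is exactly what trips up torch.compile and tensor-subclass wrappers.
assert isinstance(weight_g_idx, RowvLLMParameter)
```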
New Definition:
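(The new code block is likewise missing from the page. Under the same assumptions, a sketch of the replacement pattern: a raw nn.Parameter with the same metadata attached as plain Python attributes.)

```python
import torch
from torch import nn

# Raw nn.Parameter; the quantization metadata is attached as ordinary
# attributes instead of being carried by a subclass.
weight_g_idx = nn.Parameter(
    torch.empty(16, dtype=torch.int32), requires_grad=False)
weight_g_idx.input_dim = 0
weight_g_idx.weight_loader = lambda param, loaded: param.data.copy_(loaded)

# The parameter is now a plain nn.Parameter, so torch.compile and
# tensor-subclass wrappers see no custom Tensor subclass.
assert type(weight_g_idx) is nn.Parameter
```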
Unchanged Call Sites:
The parts of the code that call these parameter subclasses do not need to be modified. For example:
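(The call-site example from the PR description is also missing. A hedged sketch of why call sites stay unchanged: both the subclass version and the raw-attribute version expose the same attribute names, so the consuming code reads them identically. The names weight_loader and loaded_weight here follow the snippets above and are illustrative.)

```python
import torch
from torch import nn

# Build the parameter in the new, attribute-based style.
weight_g_idx = nn.Parameter(
    torch.empty(4, dtype=torch.int32), requires_grad=False)
weight_g_idx.input_dim = 0
weight_g_idx.weight_loader = lambda param, loaded: param.data.copy_(loaded)

# Call site: identical for subclass-based and attribute-based parameters,
# since both expose .weight_loader and .input_dim.
loaded_weight = torch.arange(4, dtype=torch.int32)
weight_g_idx.weight_loader(weight_g_idx, loaded_weight)
assert torch.equal(weight_g_idx.data, loaded_weight)
```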
Verified Tests: