example: int4 weight decompression #2193
base: main
Conversation
The file name of the example int4_weight_decompression_cmnts.cpp doesn't seem good. What is cmnts?
Removed int4_weight_decompression_cmnts.cpp and added int4_weight_decompression.cpp.
Commits:
- minor changes
- make changes based on review
- remove file int4_weight_decompression_cmnts.cpp
- …upakroyintel/oneDNN into add_int4_decompression_example
@rupakroyintel, please make sure commits in your branch comply with contributing guidelines and do not contain merge commits. @theComputeKid, @mgouicem, looks like…
// - Matrices A and B
// Outputs:
// - Matrix C
void ref_compute_matmul_f32(int64_t M, int64_t N, int64_t K, int64_t G,
I would suggest dropping fp32 reference comparison from this example as it does not add value when explaining int4 quantization.
// Compares the results of reference matrix multiplication and oneDNN weights
// decompression.
void compare_ref_and_weights_decompression(engine::kind engine_kind) {
It would be great to follow the structure and flow of the int8 decompression example (weights_decompression_matmul) and add additional information about the specifics of int4 data storage. If you remember, the case that triggered the request for this example was related to feeding prepacked weights to oneDNN and dealing with groups and zero-points.
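For reference, a minimal sketch of what an int4 counterpart of weights_decompression_matmul might configure, assuming oneDNN 3.5+ (s4/u4 data types and grouped scales/zero-points); the function name, data types, and group size are illustrative, not taken from this PR:

```cpp
#include "dnnl.hpp"
using namespace dnnl;

// Illustrative sketch: build a matmul primitive descriptor that decompresses
// int4 weights with per-group f16 scales and int4 zero-points.
matmul::primitive_desc make_int4_decompression_matmul_pd(const engine &eng,
        memory::dim M, memory::dim N, memory::dim K, memory::dim G) {
    // Activations and destination stay in floating point (f16 here).
    memory::desc src_md({M, K}, memory::data_type::f16, memory::format_tag::ab);
    memory::desc dst_md({M, N}, memory::data_type::f16, memory::format_tag::ab);
    // Weights are a logical {K, N} int4 tensor; oneDNN stores two s4 values per byte.
    memory::desc wei_md({K, N}, memory::data_type::s4, memory::format_tag::ab);

    primitive_attr attr;
    // One scale / zero-point per group of G consecutive K values, per output column.
    // The mask covers both weight dimensions; groups {G, 1} means grouping along K only.
    attr.set_scales(DNNL_ARG_WEIGHTS, (1 << 0) + (1 << 1), {G, 1},
            memory::data_type::f16);
    attr.set_zero_points(DNNL_ARG_WEIGHTS, (1 << 0) + (1 << 1), {G, 1},
            memory::data_type::s4);

    return matmul::primitive_desc(eng, src_md, wei_md, dst_md, attr);
}
```

The key deltas from the int8 example would be the s4 weight and zero-point data types and the fact that zero-points also carry a group shape.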
Previously in the Teams PyTorch channel, Dmitry provided the following detailed advice:
...
Secondly, using this group of 8 along N in the case of PT is not required. Group size says how many consecutive points of the tensor that zero points are applied to should share a single zero-point value. It has nothing to do with how PT packs its zero points. Thirdly, the most important question is HOW these zero points are stored in memory. There was a recent story where an IPEX engineer tried to enable oneDNN's int4 and failed to do so because the weights were transposed (because of that 8xPack thing), and all that should have been done was to transpose them again to match oneDNN's API. I would assume this story should follow the same pattern: before calling the oneDNN API, it's highly likely those zero points should be transposed and only then passed as an int4 object into the library to get correct results.
@dzarukin What do you suggest? It seems that it's better to pass an int4 object to oneDNN rather than to prepack 8*int4 and pass an int32 object.
oneDNN developed an API to work with int4 memory objects directly. This hasn't happened in PyTorch yet; their implementation has a pre-packing detail. The example should probably demonstrate how to translate the "packed 8 int4 values as a single int value" convention into oneDNN's language and what operations should be done in terms of memory (necessary transpositions and/or reorders).
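To make that translation concrete, here is a hedged sketch in plain C++ (not an oneDNN API) of undoing the framework-side packing, assuming each int32 holds eight int4 values laid out along N with the lowest 4 bits first; the helper name and packing order are assumptions:

```cpp
#include <cstdint>
#include <vector>

// Unpack framework-style weights/zero-points, where each int32 carries 8 int4
// values along N, into a plain nibble stream suitable for backing an oneDNN
// s4/u4 memory object (two 4-bit values per byte, low nibble first).
std::vector<uint8_t> unpack_int32_to_int4(
        const std::vector<int32_t> &packed, int64_t K, int64_t N_packed) {
    const int64_t N = N_packed * 8; // logical number of columns
    std::vector<uint8_t> out((K * N + 1) / 2, 0);
    for (int64_t k = 0; k < K; ++k)
        for (int64_t np = 0; np < N_packed; ++np) {
            const uint32_t word = static_cast<uint32_t>(packed[k * N_packed + np]);
            for (int j = 0; j < 8; ++j) {
                const uint8_t nibble = (word >> (4 * j)) & 0xF;
                const int64_t idx = k * N + np * 8 + j; // row-major [K, N]
                if (idx % 2 == 0)
                    out[idx / 2] |= nibble;      // low nibble
                else
                    out[idx / 2] |= nibble << 4; // high nibble
            }
        }
    return out;
}
```

Depending on how the framework actually stores its tensors, a transpose to the {K, N} weights / {K/G, N} zero-points layouts that oneDNN expects may still be needed before (or while) filling the s4 memory objects, as Dmitry's comment above suggests.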
Let me see what happens in the CI jobs. I checked out the branch and ran it locally, and it properly catches the first improper message.
@vpirogov @dzarukin We tried translating packed 8 int4 values into a single int value. However, it looks like the zero-points attribute wei:per_ocic:s4:32x8 is not supported. Here is the output from benchdnn:
@rupakroyintel, oneDNN knows nothing about the 8-int4 values packing, which is an implementation detail external to it. The zero-point group API is not designed for it. From oneDNN's perspective you need to think about each value independently and use a single dimension in groups. The observed benchdnn output is expected.
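In other words (an illustrative fragment, with the mask and group size taken from the discussion above rather than from a committed file): the groups argument describes quantization granularity over the logical {K, N} weights, so the external 8x packing must not appear in it:

```cpp
#include "dnnl.hpp"
using namespace dnnl;

primitive_attr attr;
// Not supported: encoding the framework's 8-wide int32 packing as a group along N.
// attr.set_zero_points(DNNL_ARG_WEIGHTS, (1 << 0) + (1 << 1), {32, 8},
//         memory::data_type::s4); // benchdnn: wei:per_ocic:s4:32x8, rejected
// Expected: group along a single dimension (K); every logical column gets its
// own zero-point.
attr.set_zero_points(DNNL_ARG_WEIGHTS, (1 << 0) + (1 << 1), {32, 1},
        memory::data_type::s4);
```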
Description
oneDNN supports INT4 AutoGPTQ and AWQ quantization features. This example demonstrates oneDNN's MatMul INT4 weights decompression support and shows how to configure the APIs for the AutoGPTQ and AWQ quantization features. The request originally came from the IPEX team: "AWQ (activation-aware quantization) is very popular in the community and we need to support it. We need the oneDNN INT4 GEMM API to support the input packing approach below. The weights are packed in the N direction, [K, N/8]; zero points are packed in both K and N, [K/G, N/8]; scales are in the K direction, [K/G, N]. The input data types of weights and zero points are int32 and scales are fp16."
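A hedged sketch of how that packing maps onto oneDNN objects once the 8x int32 packing has been undone (function and variable names are illustrative; oneDNN 3.5+ assumed, and the primitive descriptor is built with the grouped-attribute setup sketched earlier in the conversation):

```cpp
#include "dnnl.hpp"
using namespace dnnl;

// Illustrative fragment: execute a matmul whose primitive descriptor `pd` was
// created with int4 weight-decompression attributes (grouped f16 scales and
// s4 zero-points on DNNL_ARG_WEIGHTS).
void run_int4_decompression_matmul(const engine &eng,
        const matmul::primitive_desc &pd, memory &src_m, memory &dst_m,
        memory::dim K, memory::dim N, memory::dim G) {
    // Logical shapes after undoing the external 8x int32 packing:
    //   weights     {K, N}    s4
    //   zero points {K/G, N}  s4
    //   scales      {K/G, N}  f16
    memory wei_m({{K, N}, memory::data_type::s4, memory::format_tag::ab}, eng);
    memory zp_m({{K / G, N}, memory::data_type::s4, memory::format_tag::ab}, eng);
    memory sc_m({{K / G, N}, memory::data_type::f16, memory::format_tag::ab}, eng);
    // ... fill wei_m / zp_m / sc_m from the unpacked framework buffers ...

    stream s(eng);
    matmul(pd).execute(s,
            {{DNNL_ARG_SRC, src_m}, {DNNL_ARG_WEIGHTS, wei_m},
                    {DNNL_ARG_DST, dst_m},
                    {DNNL_ARG_ATTR_SCALES | DNNL_ARG_WEIGHTS, sc_m},
                    {DNNL_ARG_ATTR_ZERO_POINTS | DNNL_ARG_WEIGHTS, zp_m}});
    s.wait();
}
```

Scales and zero-points are supplied at execution time through the DNNL_ARG_ATTR_* arguments rather than being baked into the weights memory, which is what lets the same int4 weights be decompressed with per-group parameters.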
Checklist
General
- Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
Performance improvements
New features
Bug fixes
RFC PR