As far as I understand, the oneMKL open source project provides interfaces to the Intel MKL product, with support for multiple hardware backends. I was checking the documentation for both of them to see which data type precisions the matrix multiplication function gemm supports. I found that the Intel MKL product supports the int8_t data type, but the open source project only supports floating point precisions. Is integer precision missing from the documentation, or is it not supported? And is it going to be supported soon?
@FatmaElbadry2 Thanks for your interest and your question. As you have noticed, a GEMM API with the int8_t data type has not been added to the oneMKL open source interfaces project yet. Typically, the Intel oneMKL product moves ahead of the oneMKL open source interfaces project. If this is a feature you are interested in, can you provide more information (priority, timeline, etc.)?
It may be worth pointing out that gemm_bias does support integer precisions, so it can be used as a plain integer gemm by setting the offsets ao, bo, and co to zero. See also #466, which proposes adding batch gemm support with some integer type support.