Is there a way to convert/legalize mhlo.convolution to linalg.convolution?
I see /mlir-hlo/tests/hlo-legalize-to-lhlo.mlir covering a) the conversion from "mhlo.convolution" to "lmhlo.convolution", and /mlir-hlo/tests/lhlo-legalize-to-linalg.mlir covering b) the further conversion from "lmhlo.convolution" to "linalg.conv".
The problem with conversion b) is that the resulting linalg.conv is not placed inside a linalg.generic region, as the IR below shows:
#map = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>
module {
  func @conv(%arg0: memref<3x5x5x3xf32>, %arg1: memref<2x2x3x4xf32>, %arg2: memref<3x5x5x4xf32>) {
    %c0 = constant 0 : index
    %0 = alloc() : memref<3x5x5x4xf32>
    linalg.conv(%arg0, %arg1, %0) {dilations = [1, 2], padding = dense<[[0, 1], [0, 1]]> : tensor<2x2xi64>, strides = [2, 1]} : memref<3x5x5x3xf32>, memref<2x2x3x4xf32>, memref<3x5x5x4xf32>
    linalg.conv(%arg0, %arg1, %0) {dilations = [1, 1], strides = [2, 1]} : memref<3x5x5x3xf32>, memref<2x2x3x4xf32>, memref<3x5x5x4xf32>
    linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel", "parallel", "parallel"]} ins(%0 : memref<3x5x5x4xf32>) outs(%arg2 : memref<3x5x5x4xf32>) {
    ^bb0(%arg3: f32, %arg4: f32):  // no predecessors
      linalg.yield %arg3 : f32
    }
    "lmhlo.terminator"() : () -> ()
  }
}
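The trailing linalg.generic above only copies %0 into %arg2. For reference, here is a hand-written sketch of the pointwise ReLU consumer I would like to fuse with that convolution; it is not the output of any pass, and the function name, shapes, and the use of std cmpf/select for max(x, 0) are my own assumptions.

#map = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>
// Hypothetical sketch: ReLU as a pointwise linalg.generic over the conv result.
func @relu(%conv_out: memref<3x5x5x4xf32>, %result: memref<3x5x5x4xf32>) {
  %zero = constant 0.0 : f32
  linalg.generic {indexing_maps = [#map, #map],
                  iterator_types = ["parallel", "parallel", "parallel", "parallel"]}
      ins(%conv_out : memref<3x5x5x4xf32>) outs(%result : memref<3x5x5x4xf32>) {
  ^bb0(%v: f32, %out: f32):
    // max(x, 0) written with std cmpf + select.
    %gt = cmpf "ogt", %v, %zero : f32
    %relu = select %gt, %v, %zero : f32
    linalg.yield %relu : f32
  }
  return
}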
I also see the linalg-fusion-for-tensor-ops pass, which can fuse pointwise mhlo operators into a single region, e.g. mhlo.add/mhlo.multiply ending up as std addf/mulf inside one linalg.generic op. Starting from:
func @float_add(%lhs: tensor<2x2xf32>, %rhs: tensor<2x2xf32>) -> tensor<2x2xf32> {
  %0 = "mhlo.add"(%lhs, %rhs) : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
  %1 = "mhlo.multiply"(%lhs, %0) : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
  return %1 : tensor<2x2xf32>
}
lowering to linalg on tensors initially gives two separate linalg.generic ops:
#map = affine_map<(d0, d1) -> (d0, d1)>
module {
  func @float_add(%arg0: tensor<2x2xf32>, %arg1: tensor<2x2xf32>) -> tensor<2x2xf32> {
    %0 = linalg.init_tensor [2, 2] : tensor<2x2xf32>
    %1 = linalg.generic {indexing_maps = [#map, #map, #map], iterator_types = ["parallel", "parallel"]} ins(%arg0, %arg1 : tensor<2x2xf32>, tensor<2x2xf32>) outs(%0 : tensor<2x2xf32>) {
    ^bb0(%arg2: f32, %arg3: f32, %arg4: f32):  // no predecessors
      %4 = addf %arg2, %arg3 : f32
      linalg.yield %4 : f32
    } -> tensor<2x2xf32>
    %2 = linalg.init_tensor [2, 2] : tensor<2x2xf32>
    %3 = linalg.generic {indexing_maps = [#map, #map, #map], iterator_types = ["parallel", "parallel"]} ins(%arg0, %1 : tensor<2x2xf32>, tensor<2x2xf32>) outs(%2 : tensor<2x2xf32>) {
    ^bb0(%arg2: f32, %arg3: f32, %arg4: f32):  // no predecessors
      %4 = mulf %arg2, %arg3 : f32
      linalg.yield %4 : f32
    } -> tensor<2x2xf32>
    return %3 : tensor<2x2xf32>
  }
}
and after linalg-fusion-for-tensor-ops the two generics are fused into a single region:
#map = affine_map<(d0, d1) -> (d0, d1)>
module {
  func @float_add(%arg0: tensor<2x2xf32>, %arg1: tensor<2x2xf32>) -> tensor<2x2xf32> {
    %0 = linalg.init_tensor [2, 2] : tensor<2x2xf32>
    %1 = linalg.generic {indexing_maps = [#map, #map, #map], iterator_types = ["parallel", "parallel"]} ins(%arg0, %arg1 : tensor<2x2xf32>, tensor<2x2xf32>) outs(%0 : tensor<2x2xf32>) {
    ^bb0(%arg2: f32, %arg3: f32, %arg4: f32):  // no predecessors
      %2 = addf %arg2, %arg3 : f32
      %3 = mulf %arg2, %2 : f32
      linalg.yield %3 : f32
    } -> tensor<2x2xf32>
    return %1 : tensor<2x2xf32>
  }
}
I would like to fuse a linalg Conv and a ReLU inside a single region in the same way. How do I get IR where the Conv itself ends up inside the block of a linalg.generic (something like the sketch below)?
Can the methodology used for the pointwise operators be extended to Conv, MatMul, etc.?
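For illustration, here is a hand-written sketch of the convolution itself expressed as a linalg.generic with reduction iterators, which is roughly the kind of IR I am after. It is not produced by any existing pass; the shapes, indexing maps, and names are my own assumptions, using the no-padding, stride-1 case so that a 3x5x5x3 NHWC input and a 2x2x3x4 HWCF filter give a 3x4x4x4 output.

// Iteration space (n, oh, ow, f, kh, kw, c): output dims are parallel, filter dims are reductions.
#conv_input  = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh + kh, ow + kw, c)>
#conv_filter = affine_map<(n, oh, ow, f, kh, kw, c) -> (kh, kw, c, f)>
#conv_output = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh, ow, f)>
func @conv_as_generic(%input: memref<3x5x5x3xf32>, %filter: memref<2x2x3x4xf32>,
                      %output: memref<3x4x4x4xf32>) {
  // A real version would first zero-fill %output before accumulating into it.
  linalg.generic {indexing_maps = [#conv_input, #conv_filter, #conv_output],
                  iterator_types = ["parallel", "parallel", "parallel", "parallel",
                                    "reduction", "reduction", "reduction"]}
      ins(%input, %filter : memref<3x5x5x3xf32>, memref<2x2x3x4xf32>)
      outs(%output : memref<3x4x4x4xf32>) {
  ^bb0(%in: f32, %flt: f32, %acc: f32):
    // The multiply/accumulate now lives inside a region, so a pointwise consumer
    // (e.g. the ReLU sketched earlier) has a block to fuse with, keeping in mind
    // that the ReLU must only see the fully reduced value.
    %prod = mulf %in, %flt : f32
    %sum = addf %acc, %prod : f32
    linalg.yield %sum : f32
  }
  return
}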