
[rfc][mlir][gpu] Add an operation to rotate two subgroup matrices #139047

Open · wants to merge 1 commit into main

Conversation

@Hsiangkai Hsiangkai (Contributor) commented May 8, 2025

The `gpu.subgroup_mma_rotate` operation rotates data between two subgroup matrices.

This operation takes two subgroup matrices of the same type. `offset` selects the starting element in the first subgroup matrix: the result is built from the elements of the first matrix starting at `offset`, followed by the first `offset` elements of the second matrix. The result type is the same as the operand types. For example, if a 4x4 subgroup matrix holds 16 elements TA0 to TA15 and the second matrix holds TB0 to TB15, then with an offset of 1 the result is the 4x4 subgroup matrix TA1 to TA15 followed by TB0.

Example:

```mlir
%0 = gpu.subgroup_mma_rotate %mma0, %mma1, %c4 :
     !gpu.mma_matrix<4x4xf32, "AOp">, !gpu.mma_matrix<4x4xf32, "AOp">, i32
     -> !gpu.mma_matrix<4x4xf32, "AOp">
```
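
As a minimal sketch of the element selection described above (plain C++, not part of the patch): each operand is treated as a flattened array of `rows * cols` elements, and the result takes the tail of the first operand starting at `offset` followed by the first `offset` elements of the second operand. The function name `rotateConcat` and the use of `std::vector<float>` are illustrative assumptions, not the actual lowering.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative model of the described semantics, not the MLIR lowering:
// the result is A[offset .. n-1] followed by B[0 .. offset-1].
std::vector<float> rotateConcat(const std::vector<float> &a,
                                const std::vector<float> &b, int64_t offset) {
  assert(a.size() == b.size() && "operands must have the same shape");
  int64_t n = static_cast<int64_t>(a.size());
  assert(offset >= 0 && offset < n && "offset must be in [0, rows * cols - 1]");
  std::vector<float> result(n);
  for (int64_t i = 0; i < n; ++i)
    result[i] = (offset + i < n) ? a[offset + i] : b[offset + i - n];
  return result;
}
```

With the `%c4` offset used in the MLIR example above, the result would be TA4 to TA15 followed by TB0 to TB3.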

RFC: https://discourse.llvm.org/t/rfc-add-gpu-operations-to-permute-data-in-2-loaded-mma-matrix/86148?u=hsiangkai

@llvmbot llvmbot (Member) commented May 8, 2025

@llvm/pr-subscribers-mlir

Author: Hsiangkai Wang (Hsiangkai)

Changes

The `gpu.subgroup_mma_rotate` operation rotates data between two subgroup matrices.

This operation takes two subgroup matrices of the same type. `offset` selects the starting element in the first subgroup matrix: the result is built from the elements of the first matrix starting at `offset`, followed by the first `offset` elements of the second matrix. The result type is the same as the operand types. For example, if a 4x4 subgroup matrix holds 16 elements TA0 to TA15 and the second matrix holds TB0 to TB15, then with an offset of 1 the result is the 4x4 subgroup matrix TA1 to TA15 followed by TB0.

Example:

```mlir
%0 = gpu.subgroup_mma_rotate %mma0, %mma1, %c4 :
     !gpu.mma_matrix<4x4xf32, "AOp">, !gpu.mma_matrix<4x4xf32, "AOp">, i32
     -> !gpu.mma_matrix<4x4xf32, "AOp">
```

Full diff: https://github.com/llvm/llvm-project/pull/139047.diff

2 Files Affected:

  • (modified) mlir/include/mlir/Dialect/GPU/IR/GPUOps.td (+38)
  • (modified) mlir/lib/Dialect/GPU/IR/GPUDialect.cpp (+31)
diff --git a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
index 68095b7bf5c59..79610d8380c16 100644
--- a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
+++ b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
@@ -1998,6 +1998,44 @@ def GPU_SubgroupMmaElementwiseOp : GPU_Op<"subgroup_mma_elementwise",
   }];
 }
 
+def GPU_SubgroupMmaRotateOp
+    : GPU_Op<"subgroup_mma_rotate", [Pure, AllTypesMatch<["opA", "opB", "res"]>]> {
+  let summary = "Construct a new mma_matrix by permuting two mma_matrices";
+
+  let description = [{
+    The `gpu.subgroup_mma_rotate` operation rotates data between 2 subgroup
+    matrices.
+
+    This operation takes 2 subgroup matrices with the same type. Use `offset` as
+    the starting position of the first subgroup matrix and append the beginning
+    `offset` of elements in the second subgroup matrix to the end of the result.
+    The result type is the same as the operands. For example, there are 16
+    elements, TA0 to TA15, in a 4x4 subgroup matrix and TB0 to TB15 in the
+    second matrix. When offset is 1, it will use TA1 to TA15 plus TB0 to
+    construct 4x4 result subgroup matrix.
+
+    Example:
+
+    ```mlir
+     %0 = gpu.subgroup_mma_rotate %mma0, %mma1, %c4 :
+          !gpu.mma_matrix<4x4xf32, "AOp">, !gpu.mma_matrix<4x4xf32, "AOp">, i32
+          -> !gpu.mma_matrix<4x4xf32, "AOp">
+    ```
+  }];
+
+  let arguments = (ins Arg<MMAMatrixOf<[SI8, UI8, F16, F32]>>:$opA,
+                       Arg<MMAMatrixOf<[SI8, UI8, F16, F32]>>:$opB,
+                       I32:$offset
+                  );
+
+  let results = (outs GPU_MMAMatrix : $res);
+
+  let assemblyFormat = [{
+    $opA`,` $opB`,` $offset attr-dict `:` type($opA)`,` type($opB)`,` type($offset) `->` type($res)
+  }];
+  let hasVerifier = 1;
+}
+
 //
 // Operation on sparse matrices, called from the host
 // (currently lowers to cuSparse for CUDA only, no ROCM lowering).
diff --git a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
index f20126618060a..b4bcb89965668 100644
--- a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
+++ b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
@@ -1961,6 +1961,37 @@ LogicalResult SubgroupMmaComputeOp::verify() {
   return success();
 }
 
+//===----------------------------------------------------------------------===//
+// GPU_SubgroupMmaRotateOp
+//===----------------------------------------------------------------------===//
+
+LogicalResult SubgroupMmaRotateOp::verify() {
+  auto resultType = dyn_cast<MMAMatrixType>(getResult().getType());
+  if (!resultType)
+    return emitOpError("result must be a gpu.mma_matrix type");
+
+  ArrayRef<int64_t> shape = resultType.getShape();
+  int64_t rows = shape[0];
+  int64_t cols = shape[1];
+  int64_t maxOffset = rows * cols - 1;
+
+  auto offsetValue = getOffset().getDefiningOp<arith::ConstantOp>();
+  if (!offsetValue)
+    return emitOpError("offset must be a constant integer");
+
+  auto offsetAttr = dyn_cast<IntegerAttr>(offsetValue.getValue());
+  if (!offsetAttr)
+    return emitOpError("offset must be an integer attribute");
+
+  int64_t offset = offsetAttr.getInt();
+  if (offset < 0 || offset > maxOffset)
+    return emitOpError() << "offset " << offset
+                         << " is out of bounds for matrix shape " << rows << "x"
+                         << cols;
+
+  return success();
+}
+
 LogicalResult MemcpyOp::fold(FoldAdaptor adaptor,
                              SmallVectorImpl<::mlir::OpFoldResult> &results) {
   return memref::foldMemRefCast(*this);

@llvmbot llvmbot (Member) commented May 8, 2025

@llvm/pr-subscribers-mlir-gpu
