Apply the trait NoMemoryEffect to most ReadOnly ops #3891

Open
wants to merge 8 commits into base: main
Update torch-ods for new op
zjgarvey committed Dec 9, 2024
commit 76787e82599fd895209646ee2e3bb9a947233382
15 changes: 10 additions & 5 deletions include/torch-mlir/Dialect/Torch/IR/GeneratedTorchOps.td
@@ -6858,7 +6858,8 @@ def Torch_AtenConv3dOp : Torch_Op<"aten.conv3d", [
 def Torch_AtenConv3dPaddingOp : Torch_Op<"aten.conv3d.padding", [
     AllowsTypeRefinement,
     HasValueSemantics,
-    ReadOnly
+    ReadOnly,
+    NoMemoryEffect
   ]> {
   let summary = "Generated op for `aten::conv3d.padding : (Tensor, Tensor, Tensor?, int[], str, int[], int) -> (Tensor)`";
   let arguments = (ins
@@ -6917,7 +6918,8 @@ def Torch_AtenConv2dOp : Torch_Op<"aten.conv2d", [
 def Torch_AtenConv2dPaddingOp : Torch_Op<"aten.conv2d.padding", [
     AllowsTypeRefinement,
     HasValueSemantics,
-    ReadOnly
+    ReadOnly,
+    NoMemoryEffect
   ]> {
   let summary = "Generated op for `aten::conv2d.padding : (Tensor, Tensor, Tensor?, int[], str, int[], int) -> (Tensor)`";
   let arguments = (ins
@@ -6976,7 +6978,8 @@ def Torch_AtenConv1dOp : Torch_Op<"aten.conv1d", [
 def Torch_AtenConv1dPaddingOp : Torch_Op<"aten.conv1d.padding", [
     AllowsTypeRefinement,
     HasValueSemantics,
-    ReadOnly
+    ReadOnly,
+    NoMemoryEffect
   ]> {
   let summary = "Generated op for `aten::conv1d.padding : (Tensor, Tensor, Tensor?, int[], str, int[], int) -> (Tensor)`";
   let arguments = (ins
@@ -13829,7 +13832,8 @@ def Torch_AtenFftFftOp : Torch_Op<"aten.fft_fft", [
 def Torch_AtenFftRfftOp : Torch_Op<"aten.fft_rfft", [
     AllowsTypeRefinement,
     HasValueSemantics,
-    ReadOnly
+    ReadOnly,
+    NoMemoryEffect
  ]> {
   let summary = "Generated op for `aten::fft_rfft : (Tensor, int?, int, str?) -> (Tensor)`";
   let arguments = (ins
@@ -16641,7 +16645,8 @@ def Torch_AtenAddFloatIntOp : Torch_Op<"aten.add.float_int", [
 def Torch_AtenMulFloatIntOp : Torch_Op<"aten.mul.float_int", [
     AllowsTypeRefinement,
     HasValueSemantics,
-    ReadOnly
+    ReadOnly,
+    NoMemoryEffect
  ]> {
   let summary = "Generated op for `aten::mul.float_int : (float, int) -> (float)`";
   let arguments = (ins
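For context on what this trait buys (not part of the generated file): once an op declares `NoMemoryEffect`, MLIR's generic dead-code elimination is allowed to erase it whenever its results go unused, which `ReadOnly` alone does not guarantee. A hedged sketch using one of the ops touched above; the exact IR below is illustrative, not copied from a real test:

```mlir
// Hypothetical IR: %r is computed but never used.
%r = torch.aten.mul.float_int %f, %i : !torch.float, !torch.int -> !torch.float
// With only ReadOnly, DCE cannot prove the op is free of side effects
// and must keep it. With NoMemoryEffect declared in the ODS definition,
// a pass that runs DCE (e.g. -canonicalize) may delete this op outright.
```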