
Match the inverse of m_AddOverflow #147215


Open

wants to merge 2 commits into main

Conversation

AZero13
Contributor

@AZero13 commented Jul 6, 2025

Basically, this matches cases where there is a check for overflow but the condition is inverted.

This is also really cool because it lets us fold usubs with constants better.

How? Well, for unsigned values, the underflow check for subtracting a number is the inverse of the overflow check for adding its negative, so matching the inverted pattern covers those subtractions too.

This relationship is baked into the silicon of ARM chips, which implement subtraction as a single add of the complement.

I didn't even write code based on this property: I just negated the icmps and LLVM's existing folds figured out the rest!
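For concreteness, here is a small standalone check of that relation (illustrative only, not part of this patch; the constant 42 and the sample values are arbitrary): for a nonzero unsigned constant c, the underflow check for a - c is the exact inverse of the overflow check for a + (-c).

```cpp
// Illustrative only (not from the patch): for a nonzero unsigned c,
// "a - c underflows" is exactly the inverse of "a + (-c) overflows".
#include <cassert>
#include <cstdint>
#include <initializer_list>

int main() {
  const uint32_t c = 42; // any nonzero constant
  for (uint32_t a : {0u, 1u, 41u, 42u, 43u, 0xFFFFFFFFu}) {
    bool sub_underflows = a < c;              // flag of usub.with.overflow(a, c)
    bool add_overflows = (a + (0u - c)) < a;  // flag of uadd.with.overflow(a, -c)
    assert(sub_underflows == !add_overflows);
  }
  return 0;
}
```

That inversion is why negating the icmps is enough for the existing uadd.with.overflow folds to pick up the usub-with-constant cases.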

Revert "Catch inverse of uaddoverflow"

This reverts commit c5c8e2b.

Reapply "Catch inverse of uaddoverflow"

This reverts commit bcee466cdc0cdd99ab075a041ba07205aefec175.

@AZero13 requested a review from nikic as a code owner July 6, 2025 20:57
@llvmbot added backend:AArch64 llvm:codegen llvm:instcombine llvm:ir llvm:transforms labels Jul 6, 2025
@llvmbot
Member

llvmbot commented Jul 6, 2025

@llvm/pr-subscribers-backend-x86
@llvm/pr-subscribers-llvm-transforms
@llvm/pr-subscribers-llvm-ir

@llvm/pr-subscribers-backend-aarch64

Author: AZero13 (AZero13)

Changes

Basically, this matches cases where there is a check for overflow but the condition is inverted.

This is also really cool because it lets us fold usubs with constants better.

How? Well, for unsigned values, the underflow check for subtracting a number is the inverse of the overflow check for adding its negative, so matching the inverted pattern covers those subtractions too.

This relationship is baked into the silicon of ARM chips, which implement subtraction as a single add of the complement.

I didn't even write code based on this property: I just negated the icmps and LLVM's existing folds figured out the rest!


Patch is 81.36 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/147215.diff

8 Files Affected:

  • (modified) llvm/include/llvm/IR/PatternMatch.h (+74)
  • (modified) llvm/lib/CodeGen/CodeGenPrepare.cpp (+56-9)
  • (modified) llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp (+20-1)
  • (modified) llvm/test/CodeGen/AArch64/cgp-usubo.ll (+3-5)
  • (modified) llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll (+6-6)
  • (modified) llvm/test/Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll (+353-6)
  • (modified) llvm/test/Transforms/CodeGenPrepare/X86/optimizeSelect-DT.ll (+3-2)
  • (modified) llvm/test/Transforms/CodeGenPrepare/X86/overflow-intrinsics.ll (+397-27)
diff --git a/llvm/include/llvm/IR/PatternMatch.h b/llvm/include/llvm/IR/PatternMatch.h
index ed9b83d5d4361..36e88fa78d687 100644
--- a/llvm/include/llvm/IR/PatternMatch.h
+++ b/llvm/include/llvm/IR/PatternMatch.h
@@ -2595,6 +2595,70 @@ struct UAddWithOverflow_match {
   }
 };
 
+//===----------------------------------------------------------------------===//
+// Matchers for overflow check patterns: e.g. (a + b) u< a, (a ^ -1) <u b
+// Note that S might be matched to other instructions than AddInst.
+//
+
+template <typename LHS_t, typename RHS_t, typename Sum_t>
+struct UAddWithOverflowInv_match {
+  LHS_t L;
+  RHS_t R;
+  Sum_t S;
+
+  UAddWithOverflowInv_match(const LHS_t &L, const RHS_t &R, const Sum_t &S)
+      : L(L), R(R), S(S) {}
+
+  template <typename OpTy> bool match(OpTy *V) const {
+    Value *ICmpLHS, *ICmpRHS;
+    CmpPredicate Pred;
+    if (!m_ICmp(Pred, m_Value(ICmpLHS), m_Value(ICmpRHS)).match(V))
+      return false;
+
+    Value *AddLHS, *AddRHS;
+    auto AddExpr = m_Add(m_Value(AddLHS), m_Value(AddRHS));
+
+    // (a + b) u>= a, (a + b) u>= b
+    if (Pred == ICmpInst::ICMP_UGE)
+      if (AddExpr.match(ICmpLHS) && (ICmpRHS == AddLHS || ICmpRHS == AddRHS))
+        return L.match(AddLHS) && R.match(AddRHS) && S.match(ICmpLHS);
+
+    // a <=u (a + b), b <=u (a + b)
+    if (Pred == ICmpInst::ICMP_ULE)
+      if (AddExpr.match(ICmpRHS) && (ICmpLHS == AddLHS || ICmpLHS == AddRHS))
+        return L.match(AddLHS) && R.match(AddRHS) && S.match(ICmpRHS);
+
+    Value *Op1;
+    auto XorExpr = m_OneUse(m_Not(m_Value(Op1)));
+    // (~a) >= u b
+    if (Pred == ICmpInst::ICMP_UGE) {
+      if (XorExpr.match(ICmpLHS))
+        return L.match(Op1) && R.match(ICmpRHS) && S.match(ICmpLHS);
+    }
+    //  b <= u (~a)
+    if (Pred == ICmpInst::ICMP_ULE) {
+      if (XorExpr.match(ICmpRHS))
+        return L.match(Op1) && R.match(ICmpLHS) && S.match(ICmpRHS);
+    }
+
+    // Match special-case for increment-by-1.
+    if (Pred == ICmpInst::ICMP_NE) {
+      // (a + 1) != 0
+      // (1 + a) != 0
+      if (AddExpr.match(ICmpLHS) && m_ZeroInt().match(ICmpRHS) &&
+          (m_One().match(AddLHS) || m_One().match(AddRHS)))
+        return L.match(AddLHS) && R.match(AddRHS) && S.match(ICmpLHS);
+      // 0 != (a + 1)
+      // 0 != (1 + a)
+      if (m_ZeroInt().match(ICmpLHS) && AddExpr.match(ICmpRHS) &&
+          (m_One().match(AddLHS) || m_One().match(AddRHS)))
+        return L.match(AddLHS) && R.match(AddRHS) && S.match(ICmpRHS);
+    }
+
+    return false;
+  }
+};
+
 /// Match an icmp instruction checking for unsigned overflow on addition.
 ///
 /// S is matched to the addition whose result is being checked for overflow, and
@@ -2605,6 +2669,16 @@ m_UAddWithOverflow(const LHS_t &L, const RHS_t &R, const Sum_t &S) {
   return UAddWithOverflow_match<LHS_t, RHS_t, Sum_t>(L, R, S);
 }
 
+/// Match an icmp instruction checking for unsigned overflow on addition, but with the opposite check.
+///
+/// S is matched to the addition whose result is being checked for overflow, and
+/// L and R are matched to the LHS and RHS of S.
+template <typename LHS_t, typename RHS_t, typename Sum_t>
+UAddWithOverflowInv_match<LHS_t, RHS_t, Sum_t>
+m_UAddWithOverflowInv(const LHS_t &L, const RHS_t &R, const Sum_t &S) {
+  return UAddWithOverflowInv_match<LHS_t, RHS_t, Sum_t>(L, R, S);
+}
+
 template <typename Opnd_t> struct Argument_match {
   unsigned OpI;
   Opnd_t Val;
diff --git a/llvm/lib/CodeGen/CodeGenPrepare.cpp b/llvm/lib/CodeGen/CodeGenPrepare.cpp
index 9bbb89e37865d..6f6fb39269de5 100644
--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp
@@ -470,7 +470,8 @@ class CodeGenPrepare {
 
   bool tryToSinkFreeOperands(Instruction *I);
   bool replaceMathCmpWithIntrinsic(BinaryOperator *BO, Value *Arg0, Value *Arg1,
-                                   CmpInst *Cmp, Intrinsic::ID IID);
+                                   CmpInst *Cmp, Intrinsic::ID IID,
+                                   bool NegateOverflow = false);
   bool optimizeCmp(CmpInst *Cmp, ModifyDT &ModifiedDT);
   bool optimizeURem(Instruction *Rem);
   bool combineToUSubWithOverflow(CmpInst *Cmp, ModifyDT &ModifiedDT);
@@ -1552,7 +1553,8 @@ static bool isIVIncrement(const Value *V, const LoopInfo *LI) {
 bool CodeGenPrepare::replaceMathCmpWithIntrinsic(BinaryOperator *BO,
                                                  Value *Arg0, Value *Arg1,
                                                  CmpInst *Cmp,
-                                                 Intrinsic::ID IID) {
+                                                 Intrinsic::ID IID,
+                                                 bool NegateOverflow) {
   auto IsReplacableIVIncrement = [this, &Cmp](BinaryOperator *BO) {
     if (!isIVIncrement(BO, LI))
       return false;
@@ -1624,6 +1626,8 @@ bool CodeGenPrepare::replaceMathCmpWithIntrinsic(BinaryOperator *BO,
     assert(BO->hasOneUse() &&
            "Patterns with XOr should use the BO only in the compare");
   Value *OV = Builder.CreateExtractValue(MathOV, 1, "ov");
+  if (NegateOverflow)
+    OV = Builder.CreateNot(OV, "not");
   replaceAllUsesWith(Cmp, OV, FreshBBs, IsHugeFunc);
   Cmp->eraseFromParent();
   BO->eraseFromParent();
@@ -1660,6 +1664,38 @@ static bool matchUAddWithOverflowConstantEdgeCases(CmpInst *Cmp,
   return false;
 }
 
+/// Match special-case patterns that check for unsigned add overflow but inverts
+/// the add check
+static bool
+matchUAddWithOverflowConstantEdgeCasesInverted(CmpInst *Cmp,
+                                               BinaryOperator *&Add) {
+  // Add = add A, 1; Cmp = icmp ne A,-1 (overflow if A is max val)
+  // Add = add A,-1; Cmp = icmp eq A, 0 (overflow if A is non-zero)
+  Value *A = Cmp->getOperand(0), *B = Cmp->getOperand(1);
+
+  // We are not expecting non-canonical/degenerate code. Just bail out.
+  if (isa<Constant>(A))
+    return false;
+
+  ICmpInst::Predicate Pred = Cmp->getPredicate();
+  if (Pred == ICmpInst::ICMP_NE && match(B, m_AllOnes()))
+    B = ConstantInt::get(B->getType(), 1);
+  else if (Pred == ICmpInst::ICMP_EQ && match(B, m_ZeroInt()))
+    B = Constant::getAllOnesValue(B->getType());
+  else
+    return false;
+
+  // Check the users of the variable operand of the compare looking for an add
+  // with the adjusted constant.
+  for (User *U : A->users()) {
+    if (match(U, m_Add(m_Specific(A), m_Specific(B)))) {
+      Add = cast<BinaryOperator>(U);
+      return true;
+    }
+  }
+  return false;
+}
+
 /// Try to combine the compare into a call to the llvm.uadd.with.overflow
 /// intrinsic. Return true if any changes were made.
 bool CodeGenPrepare::combineToUAddWithOverflow(CmpInst *Cmp,
@@ -1667,13 +1703,24 @@ bool CodeGenPrepare::combineToUAddWithOverflow(CmpInst *Cmp,
   bool EdgeCase = false;
   Value *A, *B;
   BinaryOperator *Add;
+  bool Negate = false;
   if (!match(Cmp, m_UAddWithOverflow(m_Value(A), m_Value(B), m_BinOp(Add)))) {
-    if (!matchUAddWithOverflowConstantEdgeCases(Cmp, Add))
-      return false;
-    // Set A and B in case we match matchUAddWithOverflowConstantEdgeCases.
-    A = Add->getOperand(0);
-    B = Add->getOperand(1);
-    EdgeCase = true;
+    if (matchUAddWithOverflowConstantEdgeCases(Cmp, Add)) {
+      // Set A and B in case we match matchUAddWithOverflowConstantEdgeCases.
+      A = Add->getOperand(0);
+      B = Add->getOperand(1);
+      EdgeCase = true;
+    } else {
+      Negate = true;
+      if (!match(Cmp,
+                 m_UAddWithOverflowInv(m_Value(A), m_Value(B), m_BinOp(Add)))) {
+        if (!matchUAddWithOverflowConstantEdgeCasesInverted(Cmp, Add))
+          return false;
+        A = Add->getOperand(0);
+        B = Add->getOperand(1);
+        EdgeCase = true;
+      }
+    }
   }
 
   if (!TLI->shouldFormOverflowOp(ISD::UADDO,
@@ -1688,7 +1735,7 @@ bool CodeGenPrepare::combineToUAddWithOverflow(CmpInst *Cmp,
     return false;
 
   if (!replaceMathCmpWithIntrinsic(Add, A, B, Cmp,
-                                   Intrinsic::uadd_with_overflow))
+                                   Intrinsic::uadd_with_overflow, Negate))
     return false;
 
   // Reset callers - do not crash by iterating over a dead instruction.
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index c6f317a668cfe..662ef154f0683 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -7821,7 +7821,7 @@ Instruction *InstCombinerImpl::visitICmpInst(ICmpInst &I) {
         isa<IntegerType>(X->getType())) {
       Value *Result;
       Constant *Overflow;
-      // m_UAddWithOverflow can match patterns that do not include  an explicit
+      // m_UAddWithOverflow can match patterns that do not include an explicit
       // "add" instruction, so check the opcode of the matched op.
       if (AddI->getOpcode() == Instruction::Add &&
           OptimizeOverflowCheck(Instruction::Add, /*Signed*/ false, X, Y, *AddI,
@@ -7832,6 +7832,25 @@ Instruction *InstCombinerImpl::visitICmpInst(ICmpInst &I) {
       }
     }
 
+    if (match(&I, m_UAddWithOverflowInv(m_Value(X), m_Value(Y),
+                                     m_Instruction(AddI))) &&
+        isa<IntegerType>(X->getType())) {
+      Value *Result;
+      Constant *Overflow;
+      // m_UAddWithOverflowInv can match patterns that do not include an explicit
+      // "add" instruction, so check the opcode of the matched op.
+      if (AddI->getOpcode() == Instruction::Add &&
+          OptimizeOverflowCheck(Instruction::Add, /*Signed*/ false, X, Y, *AddI,
+                                Result, Overflow)) {
+        Overflow = Overflow->isNullValue()
+                       ? ConstantInt::getTrue(Overflow->getType())
+                       : ConstantInt::getFalse(Overflow->getType());
+        replaceInstUsesWith(*AddI, Result);
+        eraseInstFromFunction(*AddI);
+        return replaceInstUsesWith(I, Overflow);
+      }
+    }
+
     // (zext X) * (zext Y)  --> llvm.umul.with.overflow.
     if (match(Op0, m_NUWMul(m_ZExt(m_Value(X)), m_ZExt(m_Value(Y)))) &&
         match(Op1, m_APInt(C))) {
diff --git a/llvm/test/CodeGen/AArch64/cgp-usubo.ll b/llvm/test/CodeGen/AArch64/cgp-usubo.ll
index d307107fc07ee..e49e8e86561c7 100644
--- a/llvm/test/CodeGen/AArch64/cgp-usubo.ll
+++ b/llvm/test/CodeGen/AArch64/cgp-usubo.ll
@@ -108,11 +108,9 @@ define i1 @usubo_ugt_constant_op1_i8(i8 %x, ptr %p) nounwind {
 define i1 @usubo_eq_constant1_op1_i32(i32 %x, ptr %p) nounwind {
 ; CHECK-LABEL: usubo_eq_constant1_op1_i32:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    cmp w0, #0
-; CHECK-NEXT:    sub w9, w0, #1
-; CHECK-NEXT:    cset w8, eq
-; CHECK-NEXT:    str w9, [x1]
-; CHECK-NEXT:    mov w0, w8
+; CHECK-NEXT:    subs w8, w0, #1
+; CHECK-NEXT:    cset w0, lo
+; CHECK-NEXT:    str w8, [x1]
 ; CHECK-NEXT:    ret
   %s = add i32 %x, -1
   %ov = icmp eq i32 %x, 0
diff --git a/llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll b/llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll
index b7817ebe59b9b..eb7c2151973ee 100644
--- a/llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll
+++ b/llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll
@@ -196,12 +196,12 @@ define i1 @test_conditional2(i32 %a, i32 %b, ptr %c) {
 ; CHECK-NEXT:    mov w22, #2 ; =0x2
 ; CHECK-NEXT:  LBB3_6: ; %for.cond
 ; CHECK-NEXT:    ; =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    cbz w22, LBB3_9
+; CHECK-NEXT:    subs w22, w22, #1
+; CHECK-NEXT:    b.lo LBB3_9
 ; CHECK-NEXT:  ; %bb.7: ; %for.body
 ; CHECK-NEXT:    ; in Loop: Header=BB3_6 Depth=1
-; CHECK-NEXT:    sub w22, w22, #1
-; CHECK-NEXT:    orr w9, w21, w20
 ; CHECK-NEXT:    ldr w10, [x19, w22, sxtw #2]
+; CHECK-NEXT:    orr w9, w21, w20
 ; CHECK-NEXT:    cmp w9, w10
 ; CHECK-NEXT:    b.eq LBB3_6
 ; CHECK-NEXT:  ; %bb.8: ; %if.then
@@ -238,12 +238,12 @@ define i1 @test_conditional2(i32 %a, i32 %b, ptr %c) {
 ; OUTLINE-ATOMICS-NEXT:    cset w8, eq
 ; OUTLINE-ATOMICS-NEXT:  LBB3_1: ; %for.cond
 ; OUTLINE-ATOMICS-NEXT:    ; =>This Inner Loop Header: Depth=1
-; OUTLINE-ATOMICS-NEXT:    cbz w22, LBB3_4
+; OUTLINE-ATOMICS-NEXT:    subs w22, w22, #1
+; OUTLINE-ATOMICS-NEXT:    b.lo LBB3_4
 ; OUTLINE-ATOMICS-NEXT:  ; %bb.2: ; %for.body
 ; OUTLINE-ATOMICS-NEXT:    ; in Loop: Header=BB3_1 Depth=1
-; OUTLINE-ATOMICS-NEXT:    sub w22, w22, #1
-; OUTLINE-ATOMICS-NEXT:    orr w9, w21, w20
 ; OUTLINE-ATOMICS-NEXT:    ldr w10, [x19, w22, sxtw #2]
+; OUTLINE-ATOMICS-NEXT:    orr w9, w21, w20
 ; OUTLINE-ATOMICS-NEXT:    cmp w9, w10
 ; OUTLINE-ATOMICS-NEXT:    b.eq LBB3_1
 ; OUTLINE-ATOMICS-NEXT:  ; %bb.3: ; %if.then
diff --git a/llvm/test/Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll b/llvm/test/Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll
index f72679f55e114..2b76df898a3aa 100644
--- a/llvm/test/Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll
+++ b/llvm/test/Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll
@@ -15,6 +15,16 @@ define i64 @uaddo1_overflow_used(i64 %a, i64 %b) nounwind ssp {
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo1_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG14:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG14]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG14]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META9:![0-9]+]], !DIExpression(), [[DBG14]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META11:![0-9]+]], !DIExpression(), [[META15:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG16:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META13:![0-9]+]], !DIExpression(), [[DBG16]])
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG17:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ult i64 %add, %a
@@ -28,8 +38,19 @@ define i64 @uaddo1_math_overflow_used(i64 %a, i64 %b, ptr %res) nounwind ssp {
 ; CHECK-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
-; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]]
+; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo1_math_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG23:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG23]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG23]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META20:![0-9]+]], !DIExpression(), [[DBG23]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META21:![0-9]+]], !DIExpression(), [[META24:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG25:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META22:![0-9]+]], !DIExpression(), [[DBG25]])
+; DEBUG-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8, !dbg [[DBG26:![0-9]+]]
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG27:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ult i64 %add, %a
@@ -45,6 +66,16 @@ define i64 @uaddo2_overflow_used(i64 %a, i64 %b) nounwind ssp {
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo2_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG33:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG33]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG33]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META30:![0-9]+]], !DIExpression(), [[DBG33]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META31:![0-9]+]], !DIExpression(), [[META34:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG35:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META32:![0-9]+]], !DIExpression(), [[DBG35]])
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG36:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ult i64 %add, %b
@@ -58,8 +89,19 @@ define i64 @uaddo2_math_overflow_used(i64 %a, i64 %b, ptr %res) nounwind ssp {
 ; CHECK-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
-; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]]
+; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo2_math_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG42:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG42]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG42]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META39:![0-9]+]], !DIExpression(), [[DBG42]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META40:![0-9]+]], !DIExpression(), [[META43:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG44:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META41:![0-9]+]], !DIExpression(), [[DBG44]])
+; DEBUG-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8, !dbg [[DBG45:![0-9]+]]
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG46:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ult i64 %add, %b
@@ -75,6 +117,16 @@ define i64 @uaddo3_overflow_used(i64 %a, i64 %b) nounwind ssp {
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo3_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG52:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG52]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG52]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META49:![0-9]+]], !DIExpression(), [[DBG52]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META50:![0-9]+]], !DIExpression(), [[META53:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG54:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META51:![0-9]+]], !DIExpression(), [[DBG54]])
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG55:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ugt i64 %b, %add
@@ -88,8 +140,19 @@ define i64 @uaddo3_math_overflow_used(i64 %a, i64 %b, ptr %res) nounwind ssp {
 ; CHECK-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
 ; CHECK-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42
-; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]]
+; CHECK-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8
 ; CHECK-NEXT:    ret i64 [[Q]]
+;
+; DEBUG-LABEL: @uaddo3_math_overflow_used(
+; DEBUG-NEXT:    [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[B:%.*]], i64 [[A:%.*]]), !dbg [[DBG61:![0-9]+]]
+; DEBUG-NEXT:    [[MATH:%.*]] = extractvalue { i64, i1 } [[TMP1]], 0, !dbg [[DBG61]]
+; DEBUG-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1, !dbg [[DBG61]]
+; DEBUG-NEXT:      #dbg_value(i64 [[MATH]], [[META58:![0-9]+]], !DIExpression(), [[DBG61]])
+; DEBUG-NEXT:      #dbg_value(i1 [[OV]], [[META59:![0-9]+]], !DIExpression(), [[META62:![0-9]+]])
+; DEBUG-NEXT:    [[Q:%.*]] = select i1 [[OV]], i64 [[B]], i64 42, !dbg [[DBG63:![0-9]+]]
+; DEBUG-NEXT:      #dbg_value(i64 [[Q]], [[META60:![0-9]+]], !DIExpression(), [[DBG63]])
+; DEBUG-NEXT:    store i64 [[MATH]], ptr [[RES:%.*]], align 8, !dbg [[DBG64:![0-9]+]]
+; DEBUG-NEXT:    ret i64 [[Q]], !dbg [[DBG65:![0-9]+]]
 ;
   %add = add i64 %b, %a
   %cmp = icmp ugt i64 %b, %add
@@ -106,6 +169,15 @@ define i64 @uaddo6_xor(i64 %a, i64 %b) {
 ; CHECK-NEXT:    [[OV:%.*]] = extractvalue { i64, i1 } [[TMP1]], 1
...
[truncated]


github-actions bot commented Jul 6, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

Collaborator

@RKSimon left a comment


Please review the CI failures:

2025-07-06T21:45:22.6845431Z Failed Tests (8):
2025-07-06T21:45:22.6845769Z   LLVM :: CodeGen/AArch64/lack-of-signed-truncation-check.ll
2025-07-06T21:45:22.6846217Z   LLVM :: CodeGen/AMDGPU/promote-constOffset-to-imm.ll
2025-07-06T21:45:22.6846611Z   LLVM :: CodeGen/Hexagon/loop-balign.ll
2025-07-06T21:45:22.6846941Z   LLVM :: CodeGen/PowerPC/loop-align.ll
2025-07-06T21:45:22.6847300Z   LLVM :: CodeGen/RISCV/overflow-intrinsics.ll
2025-07-06T21:45:22.6847755Z   LLVM :: Transforms/CodeGenPrepare/AArch64/overflow-intrinsics.ll
2025-07-06T21:45:22.6848356Z   LLVM :: Transforms/CodeGenPrepare/X86/optimizeSelect-DT.ll
2025-07-06T21:45:22.6848899Z   LLVM :: Transforms/CodeGenPrepare/X86/overflow-intrinsics.ll
2025-07-06T21:45:22.6849197Z 

//

template <typename LHS_t, typename RHS_t, typename Sum_t>
struct UAddWithOverflowInv_match {
Contributor


Can we handle this by adding an Invert template parameter to UAddWithOverflow which just inverts the Pred at the start and otherwise shares the code?
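A rough sketch of that suggestion (hypothetical, not code from this PR): keep one matcher, add an Invert template parameter, and flip the predicate once before the existing matching logic runs.

```cpp
// Hypothetical sketch of the suggested refactor, not the code in this patch:
// one matcher handles both forms; when Invert is set the predicate is flipped
// up front and the existing (a + b) u< a / (a ^ -1) u< b / (a + 1) == 0 logic
// is shared unchanged.
template <typename LHS_t, typename RHS_t, typename Sum_t, bool Invert = false>
struct UAddWithOverflow_match {
  LHS_t L;
  RHS_t R;
  Sum_t S;

  UAddWithOverflow_match(const LHS_t &L, const RHS_t &R, const Sum_t &S)
      : L(L), R(R), S(S) {}

  template <typename OpTy> bool match(OpTy *V) const {
    Value *ICmpLHS, *ICmpRHS;
    CmpPredicate Pred;
    if (!m_ICmp(Pred, m_Value(ICmpLHS), m_Value(ICmpRHS)).match(V))
      return false;

    if (Invert)
      Pred = CmpInst::getInversePredicate(Pred);

    // ... the existing body of UAddWithOverflow_match continues here ...
    return false;
  }
};

// m_UAddWithOverflowInv would then be a thin wrapper over the shared matcher.
template <typename LHS_t, typename RHS_t, typename Sum_t>
UAddWithOverflow_match<LHS_t, RHS_t, Sum_t, /*Invert=*/true>
m_UAddWithOverflowInv(const LHS_t &L, const RHS_t &R, const Sum_t &S) {
  return UAddWithOverflow_match<LHS_t, RHS_t, Sum_t, /*Invert=*/true>(L, R, S);
}
```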

; CHECK-NEXT: xorl %eax, %eax
; CHECK-NEXT: cmpq %rdi, %rsi
; CHECK-NEXT: addq %rdi, %rsi
Contributor


Regression.

eraseInstFromFunction(*AddI);
return replaceInstUsesWith(I, Overflow);
}
}
Contributor


This PR needs to be split into smaller parts. You can introduce the matcher and use it in one place with adequate test coverage. For example, you are adding code to InstCombine here, but there is not a single changed InstCombine test.

Labels
backend:AArch64 backend:X86 llvm:codegen llvm:instcombine llvm:ir llvm:transforms

4 participants