[AArch64][GlobalISel] Legalize narrow scalar ops again.

Since r279760, we've been marking as legal operations on narrow integer
types that have wider legal equivalents (for instance, G_ADD s8).
Compared to legalizing these operations, this reduced the number of
extends/truncates required, but it was always a weird legalization
decision made at selection time.

So far, we haven't been able to formalize it in a way that permits the
selector generated from SelectionDAG patterns to be sufficient.

Using a wide instruction (say, s64) when a narrower instruction exists
(s32) would introduce register class incompatibilities (when one narrow
generic instruction is selected to the wider variant, but another is
selected to the narrower variant).

It's also impractical to limit which narrow operations are matched for
which instruction, as restricting "narrow selection" to ranges of types
clashes with potentially incompatible instruction predicates.

Concerns were also raised regarding MIPS64's sign-extended register
assumptions, as well as wrapping behavior.
See discussions in https://reviews.llvm.org/D26878.

Instead, legalize the operations.
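
For an s8 G_ADD, the WidenScalar action produces roughly the following
sequence (MIR sketch; vreg numbering is illustrative):

```
%2(s8) = G_ADD %0, %1
;; is widened into:
%3(s32) = G_ANYEXT %0(s8)
%4(s32) = G_ANYEXT %1(s8)
%5(s32) = G_ADD %3, %4
%2(s8) = G_TRUNC %5(s32)
```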

Should we ever revert to selecting these narrow operations, we should
try to represent this more accurately: for instance, by separating
a "concrete" type on operations, and an "underlying" type on vregs, we
could move the "this narrow-looking op is really legal" decision to the
legalizer, and let the selector use the "underlying" vreg type only,
which would be guaranteed to map to a register class.

In any case, we should eventually mitigate:
- the performance impact by selecting no-op extract/truncates to COPYs
  (which we currently do), and the COPYs to register reuses (which we
  don't do yet).
- the compile-time impact by optimizing away extract/truncate sequences
  in the legalizer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292827 91177308-0d34-0410-b5e6-96231b3b80d8
Ahmed Bougacha 2017-01-23 21:10:05 +00:00
parent 5aa636d3fa
commit 868b1fe194
12 changed files with 43 additions and 456 deletions


@ -358,41 +358,6 @@ existing patterns (as any pattern we can select is by definition legal).
Expanding that to describe legalization actions is a much larger but
potentially useful project.
.. _milegalizer-scalar-narrow:
Scalar narrow types
^^^^^^^^^^^^^^^^^^^
In the AArch64 port, we currently mark as legal operations on narrow integer
types that have a legal equivalent in a wider type.
For example, this:
%2(GPR,s8) = G_ADD %0, %1
is selected to a 32-bit instruction:
%2(GPR32) = ADDWrr %0, %1
This avoids unnecessarily legalizing operations that can be seen as legal:
8-bit additions are supported, but happen to have a 32-bit result with the high
24 bits undefined.
``TODO``:
This has implications regarding vreg classes (as narrow values can now be
represented by wider vregs) and should be investigated further.
``TODO``:
In particular, s1 comparison results can be represented as wider values in
different ways.
SelectionDAG has the notion of BooleanContents, which allows targets to choose
what true and false are when in a larger register:
* ``ZeroOrOne`` --- if only 0 and 1 are valid bools, even in a larger register.
* ``ZeroOrMinusOne`` --- if -1 is true (common for vector instructions,
where compares produce -1).
* ``Undefined`` --- if only the low bit is relevant in determining truth.
.. _milegalizer-non-power-of-2:
Non-power of 2 types


@ -119,7 +119,7 @@ static bool unsupportedBinOp(const MachineInstr &I,
}
/// Select the AArch64 opcode for the basic binary operation \p GenericOpc
/// (such as G_OR or G_ADD), appropriate for the register bank \p RegBankID
/// (such as G_OR or G_SDIV), appropriate for the register bank \p RegBankID
/// and of size \p OpSize.
/// \returns \p GenericOpc if the combination is unsupported.
static unsigned selectBinaryOp(unsigned GenericOpc, unsigned RegBankID,
@ -140,9 +140,6 @@ static unsigned selectBinaryOp(unsigned GenericOpc, unsigned RegBankID,
return AArch64::EORWrr;
case TargetOpcode::G_AND:
return AArch64::ANDWrr;
case TargetOpcode::G_ADD:
assert(OpSize != 32 && "s32 G_ADD should have been selected");
return AArch64::ADDWrr;
case TargetOpcode::G_SUB:
return AArch64::SUBWrr;
case TargetOpcode::G_SHL:
@ -759,7 +756,6 @@ bool AArch64InstructionSelector::select(MachineInstr &I) const {
case TargetOpcode::G_ASHR:
case TargetOpcode::G_SDIV:
case TargetOpcode::G_UDIV:
case TargetOpcode::G_ADD:
case TargetOpcode::G_SUB:
case TargetOpcode::G_GEP: {
// Reject the various things we don't support yet.


@ -39,8 +39,11 @@ AArch64LegalizerInfo::AArch64LegalizerInfo() {
for (auto BinOp : {G_ADD, G_SUB, G_MUL, G_AND, G_OR, G_XOR, G_SHL}) {
// These operations naturally get the right answer when used on
// GPR32, even if the actual type is narrower.
for (auto Ty : {s1, s8, s16, s32, s64, v2s32, v4s32, v2s64})
for (auto Ty : {s32, s64, v2s32, v4s32, v2s64})
setAction({BinOp, Ty}, Legal);
for (auto Ty : {s1, s8, s16})
setAction({BinOp, Ty}, WidenScalar);
}
setAction({G_GEP, p0}, Legal);
@ -148,6 +151,7 @@ AArch64LegalizerInfo::AArch64LegalizerInfo() {
setAction({G_UITOFP, 1, Ty}, Legal);
}
for (auto Ty : { s1, s8, s16 }) {
// FIXME: These should be widened on types smaller than s32.
setAction({G_FPTOSI, 0, Ty}, Legal);
setAction({G_FPTOUI, 0, Ty}, Legal);
setAction({G_SITOFP, 1, Ty}, WidenScalar);


@ -8,34 +8,22 @@
--- |
target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
define void @add_s8_gpr() { ret void }
define void @add_s16_gpr() { ret void }
define void @add_s32_gpr() { ret void }
define void @add_s64_gpr() { ret void }
define void @sub_s8_gpr() { ret void }
define void @sub_s16_gpr() { ret void }
define void @sub_s32_gpr() { ret void }
define void @sub_s64_gpr() { ret void }
define void @or_s1_gpr() { ret void }
define void @or_s16_gpr() { ret void }
define void @or_s32_gpr() { ret void }
define void @or_s64_gpr() { ret void }
define void @or_v2s32_fpr() { ret void }
define void @xor_s8_gpr() { ret void }
define void @xor_s16_gpr() { ret void }
define void @xor_s32_gpr() { ret void }
define void @xor_s64_gpr() { ret void }
define void @and_s8_gpr() { ret void }
define void @and_s16_gpr() { ret void }
define void @and_s32_gpr() { ret void }
define void @and_s64_gpr() { ret void }
define void @shl_s8_gpr() { ret void }
define void @shl_s16_gpr() { ret void }
define void @shl_s32_gpr() { ret void }
define void @shl_s64_gpr() { ret void }
@ -45,8 +33,6 @@
define void @ashr_s32_gpr() { ret void }
define void @ashr_s64_gpr() { ret void }
define void @mul_s8_gpr() { ret void }
define void @mul_s16_gpr() { ret void }
define void @mul_s32_gpr() { ret void }
define void @mul_s64_gpr() { ret void }
@ -156,62 +142,6 @@
define void @select() { ret void }
...
---
# CHECK-LABEL: name: add_s8_gpr
name: add_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ADDWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_ADD %0, %1
...
---
# CHECK-LABEL: name: add_s16_gpr
name: add_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ADDWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_ADD %0, %1
...
---
# Check that we select a 32-bit GPR G_ADD into ADDWrr on GPR32.
# Also check that we constrain the register class of the COPY to GPR32.
@ -271,62 +201,6 @@ body: |
%2(s64) = G_ADD %0, %1
...
---
# CHECK-LABEL: name: sub_s8_gpr
name: sub_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = SUBWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_SUB %0, %1
...
---
# CHECK-LABEL: name: sub_s16_gpr
name: sub_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = SUBWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_SUB %0, %1
...
---
# Same as add_s32_gpr, for G_SUB operations.
# CHECK-LABEL: name: sub_s32_gpr
@ -385,62 +259,6 @@ body: |
%2(s64) = G_SUB %0, %1
...
---
# CHECK-LABEL: name: or_s1_gpr
name: or_s1_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ORRWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s1) = COPY %w0
%1(s1) = COPY %w1
%2(s1) = G_OR %0, %1
...
---
# CHECK-LABEL: name: or_s16_gpr
name: or_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ORRWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_OR %0, %1
...
---
# Same as add_s32_gpr, for G_OR operations.
# CHECK-LABEL: name: or_s32_gpr
@ -530,62 +348,6 @@ body: |
%2(<2 x s32>) = G_OR %0, %1
...
---
# CHECK-LABEL: name: xor_s8_gpr
name: xor_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = EORWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_XOR %0, %1
...
---
# CHECK-LABEL: name: xor_s16_gpr
name: xor_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = EORWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_XOR %0, %1
...
---
# Same as add_s32_gpr, for G_XOR operations.
# CHECK-LABEL: name: xor_s32_gpr
@ -644,62 +406,6 @@ body: |
%2(s64) = G_XOR %0, %1
...
---
# CHECK-LABEL: name: and_s8_gpr
name: and_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ANDWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_AND %0, %1
...
---
# CHECK-LABEL: name: and_s16_gpr
name: and_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = ANDWrr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_AND %0, %1
...
---
# Same as add_s32_gpr, for G_AND operations.
# CHECK-LABEL: name: and_s32_gpr
@ -758,62 +464,6 @@ body: |
%2(s64) = G_AND %0, %1
...
---
# CHECK-LABEL: name: shl_s8_gpr
name: shl_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = LSLVWr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_SHL %0, %1
...
---
# CHECK-LABEL: name: shl_s16_gpr
name: shl_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = LSLVWr %0, %1
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_SHL %0, %1
...
---
# Same as add_s32_gpr, for G_SHL operations.
# CHECK-LABEL: name: shl_s32_gpr
@ -988,62 +638,6 @@ body: |
%2(s64) = G_ASHR %0, %1
...
---
# CHECK-LABEL: name: mul_s8_gpr
name: mul_s8_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = MADDWrrr %0, %1, %wzr
body: |
bb.0:
liveins: %w0, %w1
%0(s8) = COPY %w0
%1(s8) = COPY %w1
%2(s8) = G_MUL %0, %1
...
---
# CHECK-LABEL: name: mul_s16_gpr
name: mul_s16_gpr
legalized: true
regBankSelected: true
# CHECK: registers:
# CHECK-NEXT: - { id: 0, class: gpr32 }
# CHECK-NEXT: - { id: 1, class: gpr32 }
# CHECK-NEXT: - { id: 2, class: gpr32 }
registers:
- { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
# CHECK: body:
# CHECK: %0 = COPY %w0
# CHECK: %1 = COPY %w1
# CHECK: %2 = MADDWrrr %0, %1, %wzr
body: |
bb.0:
liveins: %w0, %w1
%0(s16) = COPY %w0
%1(s16) = COPY %w1
%2(s16) = G_MUL %0, %1
...
---
# Check that we select s32 GPR G_MUL. This is trickier than other binops because
# there is only MADDWrrr, and we have to use the WZR physreg.


@ -69,7 +69,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_add_small
; CHECK: [[RES:%.*]](s8) = G_ADD %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_ADD [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1


@ -22,7 +22,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_and_small
; CHECK: %4(s8) = G_AND %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_AND [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1


@ -22,7 +22,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_mul_small
; CHECK: %4(s8) = G_MUL %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_MUL [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1


@ -22,7 +22,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_or_small
; CHECK: %4(s8) = G_OR %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_OR [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1


@ -45,8 +45,15 @@ body: |
; CHECK: [[RHS32:%[0-9]+]](s32) = G_SEXT %7
; CHECK: [[QUOT32:%[0-9]+]](s32) = G_SDIV [[LHS32]], [[RHS32]]
; CHECK: [[QUOT:%[0-9]+]](s8) = G_TRUNC [[QUOT32]]
; CHECK: [[PROD:%[0-9]+]](s8) = G_MUL [[QUOT]], %7
; CHECK: [[RES:%[0-9]+]](s8) = G_SUB %6, [[PROD]]
; CHECK: [[QUOT32_2:%.*]](s32) = G_ANYEXT [[QUOT]](s8)
; CHECK: [[RHS32_2:%.*]](s32) = G_ANYEXT %7(s8)
; CHECK: [[PROD32:%.*]](s32) = G_MUL [[QUOT32_2]], [[RHS32_2]]
; CHECK: [[PROD:%.*]](s8) = G_TRUNC [[PROD32]](s32)
; CHECK: [[LHS32_2:%.*]](s32) = G_ANYEXT %6(s8)
; CHECK: [[PROD32_2:%.*]](s32) = G_ANYEXT [[PROD]](s8)
; CHECK: [[RES:%[0-9]+]](s32) = G_SUB [[LHS32_2]], [[PROD32_2]]
%6(s8) = G_TRUNC %0
%7(s8) = G_TRUNC %1
%8(s8) = G_SREM %6, %7


@ -39,6 +39,9 @@ body: |
; CHECK: %5(s8) = G_TRUNC [[RES32]]
%5(s8) = G_LSHR %2, %3
; CHECK: %6(s8) = G_SHL %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_SHL [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%6(s8) = G_SHL %2, %3
...


@ -22,7 +22,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_sub_small
; CHECK: [[RES:%.*]](s8) = G_SUB %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_SUB [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1


@ -22,7 +22,10 @@ body: |
bb.0.entry:
liveins: %x0, %x1, %x2, %x3
; CHECK-LABEL: name: test_scalar_xor_small
; CHECK: %4(s8) = G_XOR %2, %3
; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
; CHECK: [[RES32:%.*]](s32) = G_XOR [[OP0]], [[OP1]]
; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
%0(s64) = COPY %x0
%1(s64) = COPY %x1