Simplify ZERO_EXTEND_VECTOR_INREG if the extended bits are not required.
Matches what we already do for ZERO_EXTEND.
Reapplies rL363850 but now with legality checks added at rL364290
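As a standalone illustration of why this is safe (not the LLVM code itself, and with hypothetical lane values): if no consumer reads the zero-filled upper bits of a widened lane, the lane is equivalent whether it was zero-extended or any-extended.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint16_t lane = 0xBEEF;                  // one source lane of a v8i16
  uint32_t zext = (uint32_t)lane;          // ZERO_EXTEND_VECTOR_INREG of that lane
  uint32_t aext = 0xDEAD0000u | lane;      // ANY_EXTEND: upper bits arbitrary
  uint32_t demanded = 0x0000FFFFu;         // consumer only reads the low 16 bits
  assert((zext & demanded) == (aext & demanded)); // identical on all demanded bits
}
```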
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364303 91177308-0d34-0410-b5e6-96231b3b80d8
This should not cause any visible change in output, but it's
more efficient because we were producing non-canonical 'sub x, 1'
and 'setcc ugt x, 0'. As mentioned in the TODO, we should also
be handling the inverse predicate.
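For reference, both rewrites are pure equivalences on unsigned values; a hedged exhaustive check at 8 bits (illustration only, not the DAG code):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned v = 0; v < 256; ++v) {
    uint8_t x = (uint8_t)v;
    // setcc ugt x, 0 is equivalent to the canonical setcc ne x, 0.
    assert((x > 0) == (x != 0));
    // sub x, 1 produces the same bits as the canonical add x, -1 (wrapping).
    assert((uint8_t)(x - 1) == (uint8_t)(x + 0xFF));
  }
}
```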
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364302 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.
Matches what we already do for SIGN_EXTEND.
Reapplies rL363802 but now with legality checks added at rL364290
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364299 91177308-0d34-0410-b5e6-96231b3b80d8
As part of the fix for rL364264 + rL364272 - limit the *_EXTEND conversion to !TLO.LegalOperations || isOperationLegal cases.
We'll improve X86 legality in future commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364290 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This addresses the regression that is being exposed by D50222 in `test/CodeGen/X86/jump_sign.ll`
The missing fold, at least partially, looks trivial:
https://rise4fun.com/Alive/Zsln
i.e. if we are comparing a `urem`-by-non-power-of-two against zero,
and the `urem` is of something that may have at most a single bit set (or no bits set at all),
then the `urem` is not needed.
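A standalone check of that reasoning (not the actual combine): a value with at most one bit set is zero or a power of two, and a power of two is never divisible by a non-power-of-two C > 1, so `(x urem C) == 0` collapses to `x == 0`. Exhaustive over 8-bit values:

```cpp
#include <cassert>
#include <cstdint>

static bool atMostOneBit(uint8_t x) { return (x & (x - 1)) == 0; }
static bool isPowerOfTwo(uint8_t c) { return c != 0 && (c & (c - 1)) == 0; }

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned c = 2; c < 256; ++c)
      if (atMostOneBit((uint8_t)x) && !isPowerOfTwo((uint8_t)c))
        assert(((x % c) == 0) == (x == 0)); // the urem is not needed
}
```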
Reviewers: RKSimon, craig.topper, xbolva00, spatel
Reviewed By: xbolva00, spatel
Subscribers: xbolva00, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63390
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364286 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts the following patches.
"[TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support"
We can end up with an any_extend_vector_inreg with a 256 bit result type
and a 128 bit input type. This is allowed by the ISD opcode, but the
generic operation legalizer is only able to expand cases where the
total vector width is the same.
The X86 backend creates these mismatched cases for zext_vec_inreg/sext_vec_inreg.
The SimplifyDemandedBits changes are allowing those nodes to become
aext_vec_inreg. For the zext/sext cases, the X86 backend has Custom
handling and never lets them get to the generic legalizer. We need to do the same
for aext_vec_inreg.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@364264 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify ZERO_EXTEND_VECTOR_INREG if the extended bits are not required.
Matches what we already do for ZERO_EXTEND.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363850 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.
Matches what we already do for SIGN_EXTEND.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363802 91177308-0d34-0410-b5e6-96231b3b80d8
Match SIGN_EXTEND + ZERO_EXTEND handling - will be adding ANY_EXTEND_VECTOR_INREG support in a future patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363716 91177308-0d34-0410-b5e6-96231b3b80d8
Aside from adding consistent demanded elts handling, which was a trivial change, the other differences in functionality will be added in later patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363713 91177308-0d34-0410-b5e6-96231b3b80d8
Aside from adding consistent demanded elts handling, which was a trivial change, the other differences in functionality will be added in later patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363710 91177308-0d34-0410-b5e6-96231b3b80d8
Also fold ANY_EXTEND_VECTOR_INREG -> BITCAST if we only need the bottom element.
Fixes temporary regression introduced in rL363693.
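A hedged little-endian illustration of why the fold is sound (hypothetical lane data): the bottom lane of an in-reg extend carries the same low source bits that a plain bitcast of the source exposes, and under ANY_EXTEND the remaining bits of that lane are unspecified anyway.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  uint16_t src[4] = {0x1234, 0x5678, 0x9ABC, 0xDEF0};  // a v4i16 source
  // ANY_EXTEND_VECTOR_INREG to v2i32: lane 0 = ext(src[0]), upper 16 bits undefined.
  uint32_t extLane0 = 0xAAAA0000u | src[0];            // arbitrary upper bits
  // BITCAST of the same 64 bits to v2i32 (little-endian lane layout).
  uint32_t cast[2];
  std::memcpy(cast, src, sizeof(cast));
  // The defined (low 16) bits of the bottom element match the bitcast.
  assert((extLane0 & 0xFFFF) == (cast[0] & 0xFFFF));
}
```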
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363694 91177308-0d34-0410-b5e6-96231b3b80d8
As discussed on D62910, we need to check whether particular types of memory access are allowed, not just their alignment/address-space.
This NFC patch adds a MachineMemOperand::Flags argument to allowsMemoryAccess and allowsMisalignedMemoryAccesses, and wires up calls to pass the relevant flags to them.
If people are happy with this approach I can then update X86TargetLowering::allowsMisalignedMemoryAccesses to handle misaligned NT load/stores.
Differential Revision: https://reviews.llvm.org/D63075
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363179 91177308-0d34-0410-b5e6-96231b3b80d8
Most parts of LLVM don't care whether the byval type is derived from an
explicit Attribute or from the parameter's pointee type, so it makes
sense for the main access function to just return the right value.
The very few users who do care (only BitcodeReader so far) can find out
how it's specified by accessing the Attribute directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@362642 91177308-0d34-0410-b5e6-96231b3b80d8
When we switch to opaque pointer types we will need some way to describe
how many bytes a 'byval' parameter should occupy on the stack. This adds
a (for now) optional extra type parameter.
If present, the type must match the pointee type of the argument.
The original commit did not remap byval types when linking modules, which broke
LTO. This version fixes that.
Note to front-end maintainers: if this causes test failures, it's probably
because, after this change, the "byval" attribute is printed after attributes
without any parameter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@362128 91177308-0d34-0410-b5e6-96231b3b80d8
When we switch to opaque pointer types we will need some way to describe
how many bytes a 'byval' parameter should occupy on the stack. This adds
a (for now) optional extra type parameter.
If present, the type must match the pointee type of the argument.
Note to front-end maintainers: if this causes test failures, it's probably
because, after this change, the "byval" attribute is printed after attributes
without any parameter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@362012 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds the overridable TargetLowering::getTargetConstantFromLoad function, which allows targets to return any constant value loaded by a LoadSDNode - only X86 makes use of this so far, but everything should be in place for other targets.
computeKnownBits then uses this function to improve codegen, notably vector code after legalization.
A future commit will do the same for ComputeNumSignBits but computeKnownBits sees the bigger benefit.
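A hedged standalone sketch of the hook pattern being described, using stand-in types rather than the real LLVM classes: the base implementation reports no known constant, and a target overrides it for loads it can see through.

```cpp
// Stand-in types for illustration only; not the real LLVM declarations.
struct Constant {};
struct LoadSDNode {};

struct TargetLoweringBase {
  virtual ~TargetLoweringBase() = default;
  // Default: the target cannot tell which constant this load produces.
  virtual const Constant *getTargetConstantFromLoad(LoadSDNode *) const {
    return nullptr;
  }
};

struct X86LikeLowering : TargetLoweringBase {
  const Constant *getTargetConstantFromLoad(LoadSDNode *LD) const override {
    // A real target would inspect LD for a constant-pool address here.
    (void)LD;
    return nullptr;
  }
};

int main() { X86LikeLowering TL; (void)TL; }
```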
This required a couple of fixes:
* SimplifyDemandedBits must early-out for getTargetConstantFromLoad cases to prevent infinite loops of constant regeneration (similar to what we already do for BUILD_VECTOR).
* Fix a DAGCombiner::visitTRUNCATE issue as we had trunc(shl(v8i32),v8i16) <-> shl(trunc(v8i16),v8i32) infinite loops after legalization on AVX512 targets.
Differential Revision: https://reviews.llvm.org/D61887
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@361620 91177308-0d34-0410-b5e6-96231b3b80d8
Add an intrinsic that takes 2 signed integers, with their scale provided
as the third argument, and performs fixed point multiplication on them. The
result is saturated and clamped between the largest and smallest representable
values of the first 2 operands.
This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
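A hedged standalone model of the semantics just described (the rounding direction for the dropped fractional bits is an implementation detail here; this sketch rounds toward negative infinity):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Model of a saturating signed fixed-point multiply for i32 operands.
int32_t smul_fix_sat32(int32_t a, int32_t b, unsigned scale) {
  int64_t prod = (int64_t)a * (int64_t)b;   // exact 64-bit product
  int64_t res = prod >> scale;              // drop 'scale' fractional bits
  res = std::min<int64_t>(res, INT32_MAX);  // clamp to the operand type's range
  res = std::max<int64_t>(res, INT32_MIN);
  return (int32_t)res;
}

int main() {
  // 1.5 * 1.5 = 2.25 in Q16.16 (scale 16).
  assert(smul_fix_sat32(0x18000, 0x18000, 16) == 0x24000);
  // An overflowing product saturates to the largest representable value.
  assert(smul_fix_sat32(INT32_MAX, INT32_MAX, 0) == INT32_MAX);
}
```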
Differential Revision: https://reviews.llvm.org/D55720
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@361289 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The endianness used in the calling convention does not always match the
endianness of the target on all architectures, namely AVR.
When an argument is too large to be legalised by the architecture and is
split for the ABI, a new hook TargetLoweringInfo::shouldSplitFunctionArgumentsAsLittleEndian
is queried to find the endianness in which function arguments must be
laid out.
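A hedged illustration of the underlying problem (not the hook itself, and with a hypothetical argument value): when an argument is split into pieces for the ABI, the lowering must pick which half becomes piece 0, and that order is a calling-convention property rather than a plain memory-endianness one.

```cpp
#include <cstdint>

int main() {
  uint32_t arg = 0xAABBCCDDu;
  // Little-endian argument order: piece 0 is the low half.
  uint16_t le[2] = { (uint16_t)arg, (uint16_t)(arg >> 16) }; // 0xCCDD, 0xAABB
  // Big-endian argument order: piece 0 is the high half.
  uint16_t be[2] = { (uint16_t)(arg >> 16), (uint16_t)arg }; // 0xAABB, 0xCCDD
  (void)le; (void)be;
}
```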
This approach was recommended by Eli Friedman.
Originally reported in https://github.com/avr-rust/rust/issues/129.
Patch by Carl Peto.
Reviewers: bogner, t.p.northover, RKSimon, niravd, efriedma
Reviewed By: efriedma
Subscribers: JDevlieghere, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62003
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@361222 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes issue reported by aemerson on D57348. Vector op legalization
support is added for uaddo, usubo, saddo and ssubo (umulo and smulo
were already supported). As usual, by extracting TargetLowering methods
and calling them from vector op legalization.
Vector op legalization doesn't really deal with multiple result nodes,
so I'm explicitly performing a recursive legalization call on the
result value that is not being legalized.
There are some existing test changes because expansion happens
earlier, so we don't get a DAG combiner run in between anymore.
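A hedged sketch of the kind of expansion involved, shown for uaddo on plain integers (the actual TargetLowering code operates on SDValues): the sum wraps, and unsigned overflow occurred exactly when the wrapped sum is smaller than an operand.

```cpp
#include <cassert>
#include <cstdint>

// uaddo: wrapping add plus an overflow flag (sum = a + b; ov = sum ult a).
void uaddo(uint32_t a, uint32_t b, uint32_t &sum, bool &ov) {
  sum = a + b;
  ov = sum < a;
}

int main() {
  uint32_t s; bool ov;
  uaddo(0xFFFFFFFFu, 1u, s, ov);
  assert(s == 0 && ov);   // wrapped, overflow set
  uaddo(2u, 3u, s, ov);
  assert(s == 5 && !ov);  // no overflow
}
```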
Differential Revision: https://reviews.llvm.org/D61692
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@361166 91177308-0d34-0410-b5e6-96231b3b80d8
This follows the pattern of the existing isCommutativeBinOp().
x86 shows improvements from vector narrowing for the min/max opcodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@360639 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
X86TargetLowering::LowerAsmOperandForConstraint had better support than
TargetLowering::LowerAsmOperandForConstraint for arbitrary-depth
getelementptr expressions for "i", "n", and "s" extended inline assembly
constraints. Hoist its support from the derived class into the base
class.
Link: https://github.com/ClangBuiltLinux/linux/issues/469
Reviewers: echristo, t.p.northover
Reviewed By: t.p.northover
Subscribers: t.p.northover, E5ten, kees, jyknight, nemanjai, javed.absar, eraman, hiraditya, jsji, llvm-commits, void, craig.topper, nathanchance, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61560
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@360604 91177308-0d34-0410-b5e6-96231b3b80d8
I've included a new fix in X86RegisterInfo to prevent PR41619 without
reintroducing r359392. We might be able to improve that in the base class
implementation of shouldRewriteCopySrc somehow. But this hopefully enables
forward progress on SimplifyDemandedBits improvements for now.
Original commit message:
This patch adds support for BigBitWidth -> SmallBitWidth bitcasts, splitting the DemandedBits/Elts accordingly.
The AMDGPU backend needed an extra (srl (and x, c1 << c2), c2) -> (and (srl x, c2), c1) combine to encourage BFE creation; I investigated putting this in DAGCombine
but it caused a lot of noise on other targets - some improvements, some regressions.
The X86 changes are all definite wins.
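The quoted combine rests on a plain bit identity - masking then shifting right equals shifting right then masking with the unshifted constant. A hedged exhaustive check at 8 bits (illustration only):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned x = 0; x < 256; ++x)
    for (unsigned c1 = 0; c1 < 256; ++c1)
      for (unsigned c2 = 0; c2 < 8; ++c2) {
        uint8_t lhs = (uint8_t)((x & (uint8_t)(c1 << c2)) >> c2); // srl (and x, c1 << c2), c2
        uint8_t rhs = (uint8_t)((x >> c2) & c1);                  // and (srl x, c2), c1
        assert(lhs == rhs);
      }
}
```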
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@360552 91177308-0d34-0410-b5e6-96231b3b80d8
Reverts "[X86] Remove (V)MOV64toSDrr/m and (V)MOVDI2SSrr/m. Use 128-bit result MOVD/MOVQ and COPY_TO_REGCLASS instead"
Reverts "[TargetLowering][AMDGPU][X86] Improve SimplifyDemandedBits bitcast handling"
Eric Christopher and Jorge Gorbe Moya reported some issues with these patches to me off list.
Removing the CodeGenOnly instructions has changed how fneg is handled during fast-isel with sse/sse2. We're now emitting fsub -0.0, x instead of
moving to the integer domain (in a GPR), XORing the sign bit, and then moving back to xmm. This is because the fast-isel table no longer
contains an entry for (f32/f64 bitcast (i32/i64)), so the target-independent fneg code fails. The use of fsub changes the behavior for NaNs with
respect to -O2 codegen, which will always use a pxor. NOTE: We still have a difference for double with -m32, since the move to GPR doesn't work
there. I'll file a separate PR for that and add test cases.
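A hedged sketch of the two lowerings being contrasted: negation via an explicit sign-bit XOR in the integer domain always flips just the sign bit, even for NaNs, whereas fsub -0.0, x goes through the FP unit and need not treat a NaN's sign/payload the same way.

```cpp
#include <cstdint>
#include <cstring>

// fneg via the integer domain: flip only the sign bit (what a pxor does).
float fneg_bitwise(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  bits ^= 0x80000000u;
  std::memcpy(&x, &bits, sizeof(bits));
  return x;
}

// fneg via arithmetic: behavior for NaN payloads depends on the FP unit.
float fneg_fsub(float x) { return -0.0f - x; }

int main() {
  // On ordinary values the two agree; NaNs are where they can diverge.
  float a = 1.5f;
  return fneg_bitwise(a) == fneg_fsub(a) ? 0 : 1;
}
```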
Since removing the CodeGenOnly instructions was fixing PR41619, I'm reverting r358887, which exposed that PR - though I wouldn't be surprised
if that bug can still be hit independently of it.
This should hopefully get Google back to green. I'll work with Simon and other X86 folks to figure out how to move forward again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@360066 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for BigBitWidth -> SmallBitWidth bitcasts, splitting the DemandedBits/Elts accordingly.
The AMDGPU backend needed an extra (srl (and x, c1 << c2), c2) -> (and (srl x, c2), c1) combine to encourage BFE creation; I investigated putting this in DAGCombine but it caused a lot of noise on other targets - some improvements, some regressions.
The X86 changes are all definite wins.
Differential Revision: https://reviews.llvm.org/D60462
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@358887 91177308-0d34-0410-b5e6-96231b3b80d8
If the upper bits of the SHL result aren't used, we might be able to use a narrower shift. For example, on X86 this can turn a 64-bit shift into a 32-bit shift, enabling a smaller encoding.
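A hedged check of the underlying identity: when only the low 32 bits of a 64-bit shl are demanded, truncating first gives the same bits, so the shift can be done at the narrower width.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint64_t samples[] = {0, 1, 0x00000000FFFFFFFFull, 0xDEADBEEFCAFEF00Dull};
  for (uint64_t x : samples)
    for (unsigned s = 0; s < 32; ++s)
      // Low 32 bits of the wide shift == the narrow shift of the truncation.
      assert((uint32_t)(x << s) == (uint32_t)((uint32_t)x << s));
}
```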
Differential Revision: https://reviews.llvm.org/D60358
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@358257 91177308-0d34-0410-b5e6-96231b3b80d8
An older version of this could return false, but now that it always succeeds we can just inline and simplify it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@357999 91177308-0d34-0410-b5e6-96231b3b80d8
When bitcasting from a source op to a larger bitwidth op, split the demanded bits and OR them on top of one another and demand those merged bits in the SimplifyDemandedBits call on the source op.
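A hedged sketch of that bookkeeping, assuming a little-endian lane layout and hypothetical masks: a 64-bit demanded mask on the wide value is split into 32-bit chunks, the chunks are OR'd into one mask, and that merged mask is what must be known about every narrow source element.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  uint64_t demanded = 0x00FF000000000F0Full; // bits the user reads from the i64
  uint32_t merged = (uint32_t)demanded | (uint32_t)(demanded >> 32); // OR the halves

  // Two v2i32 sources that agree only on the merged bits per element...
  uint32_t a[2] = {0x12340F0Fu, 0x00FF5678u};
  uint32_t b[2] = {0xAB340F0Fu, 0x11FF5678u};
  assert((a[0] & merged) == (b[0] & merged) && (a[1] & merged) == (b[1] & merged));

  // ...produce bitcast results that agree on every demanded bit.
  uint64_t wa, wb;
  std::memcpy(&wa, a, sizeof(wa));
  std::memcpy(&wb, b, sizeof(wb));
  assert((wa & demanded) == (wb & demanded));
}
```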
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@357992 91177308-0d34-0410-b5e6-96231b3b80d8
Be more selective in the SimplifyDemandedBits -> SimplifyDemandedVectorElts bitcast call based on the demanded elts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@357942 91177308-0d34-0410-b5e6-96231b3b80d8
This helps us relax the extension of a lot of scalar elements before they are inserted into a vector.
It exposes an issue in DAGCombiner::convertBuildVecZextToZext as some/all of the zero-extensions may be relaxed to ANY_EXTEND, so we need to handle that case to avoid a couple of AVX2 VPMOVZX test regressions.
Once this is in it should be easier to fix a number of remaining failures to fold loads into VBROADCAST nodes.
Differential Revision: https://reviews.llvm.org/D59484
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356989 91177308-0d34-0410-b5e6-96231b3b80d8