This patch replaces ORs of masks built with getHighBits/getLowBits etc. with setLowBits/setHighBits/setBitsFrom.
In a few of the places we weren't ORing, since the KnownZero/KnownOne vectors were already initialized to zero. We exploit this in most places already; there were just some that were inconsistent.
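A minimal sketch of the swap, with an invented call-site name (KnownZero stands in for whichever known-bits value is being updated):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void markLowBitsKnownZero(APInt &KnownZero, unsigned NumBits) {
    // Before: materialize a mask and OR it in.
    //   KnownZero |= APInt::getLowBitsSet(KnownZero.getBitWidth(), NumBits);
    // After: mutate in place, avoiding the temporary APInt.
    KnownZero.setLowBits(NumBits);
  }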
Differential Revision: https://reviews.llvm.org/D30965
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297860 91177308-0d34-0410-b5e6-96231b3b80d8
Create nodes for smulwb and smulwt and move their selection from
DAGToDAG to DAG combine. smlawb and smlawt can then be selected
using tablegen. Added some helper functions to detect shift patterns
as well as a wrapper around SimplifyDemandedBits. Added a couple of
extra tests.
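A hedged sketch of what one such shift-pattern helper might look like (the name and exact predicate are assumed, not the actual ARM code):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Match (sra X, 16): the sign-extended top half of X, which is the
  // form the smulwt-style patterns look for.
  static bool isSRA16(SDValue Op) {
    if (Op.getOpcode() != ISD::SRA)
      return false;
    auto *ShAmt = dyn_cast<ConstantSDNode>(Op.getOperand(1));
    return ShAmt && ShAmt->getZExtValue() == 16;
  }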
Differential Revision: https://reviews.llvm.org/D30708
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297716 91177308-0d34-0410-b5e6-96231b3b80d8
The bug was introduced with:
https://reviews.llvm.org/rL294863
...and manifests as a selection failure in x86, but that's actually
another bug. This fix prevents wrong codegen with -0.0, but in the
more common case when we have NSZ and NNAN (-ffast-math), we should
still be able to fold this setcc/compare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294924 91177308-0d34-0410-b5e6-96231b3b80d8
I don't know if anything other than x86 vectors is affected by this change, but this may allow
us to remove target-specific intrinsics for blendv* (vector selects). The simplification arises
from the fact that blendv* instructions only use the sign-bit when deciding which vector element
to choose for the destination vector. The mechanism to fold VSELECT into SHRUNKBLEND nodes already
exists in x86 lowering; this demanded bits change just enables the transform to fire more often.
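The key fact in APInt terms (a sketch, not the actual x86 code):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // Only the sign bit of each condition element is demanded, so the mask
  // handed to SimplifyDemandedBits is a single high bit per element.
  APInt blendvDemandedMask(unsigned EltBits) {
    APInt Demanded(EltBits, 0);
    Demanded.setSignBit();  // e.g. 0x80000000 for 32-bit elements
    return Demanded;
  }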
The original motivation starts with a bug for DSE of masked stores that seems completely unrelated,
but I've explained the likely steps in this series here:
https://llvm.org/bugs/show_bug.cgi?id=11210
Differential Revision: https://reviews.llvm.org/D29687
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294863 91177308-0d34-0410-b5e6-96231b3b80d8
If a vector index is out of bounds, the result is an undefined value,
but it is not undefined behavior. Change the legalization for indexing
the vector on the stack so that an out-of-bounds index does not create
an out-of-bounds memory access.
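A hedged sketch of the clamping idea (not the exact legalizer code; helper name invented):

  #include "llvm/CodeGen/SelectionDAG.h"
  #include "llvm/Support/MathExtras.h"
  using namespace llvm;

  // Force the index into [0, NumElts-1] before it addresses the stack
  // temporary, so an out-of-bounds index reads a valid (but
  // undefined-value) element instead of stray stack memory.
  static SDValue clampVectorIndex(SelectionDAG &DAG, const SDLoc &DL,
                                  SDValue Idx, EVT IdxVT, unsigned NumElts) {
    if (isPowerOf2_32(NumElts))  // common case: a cheap AND suffices
      return DAG.getNode(ISD::AND, DL, IdxVT, Idx,
                         DAG.getConstant(NumElts - 1, DL, IdxVT));
    return DAG.getNode(ISD::UMIN, DL, IdxVT, Idx,
                       DAG.getConstant(NumElts - 1, DL, IdxVT));
  }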
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291604 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Most targets set the action for these nodes to Expand even though there
isn't actually any code for them in ExpandNode. Instead, targets simply
relied on the fact that no code generates these nodes as long as the
nodes aren't legal or custom.
However, generating these nodes can be useful e.g. for divide-by-constant
in wider integer types.
Expand of [US]MUL_LOHI will use MULH[US] when legal or custom, and
a sequence of half-width multiplications otherwise. Promote uses a wider
multiply.
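For reference, the half-width sequence in ordinary C++ (the legalizer emits roughly this arithmetic as MUL/ADD/SRL/OR nodes):

  #include <cstdint>

  // 64x64 -> 128-bit unsigned multiply built from four 32x32 -> 64-bit
  // pieces, mirroring the kind of sequence the expansion produces.
  void umul_lohi64(uint64_t A, uint64_t B, uint64_t &Lo, uint64_t &Hi) {
    uint64_t ALo = A & 0xffffffffu, AHi = A >> 32;
    uint64_t BLo = B & 0xffffffffu, BHi = B >> 32;
    uint64_t LoLo = ALo * BLo, LoHi = ALo * BHi;
    uint64_t HiLo = AHi * BLo, HiHi = AHi * BHi;
    // Sum the cross terms at bit 32, keeping the carry for the high half.
    uint64_t Mid = (LoLo >> 32) + (LoHi & 0xffffffffu) + (HiLo & 0xffffffffu);
    Lo = (Mid << 32) | (LoLo & 0xffffffffu);
    Hi = HiHi + (LoHi >> 32) + (HiLo >> 32) + (Mid >> 32);
  }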
This patch intends to not change the generated code, but indirect effects
are possible since expansions/promotions that were previously done in
DAGCombine may now be done in LegalizeDAG.
See D24822 for a change that actually uses the new expansion.
Reviewers: spatel, bkramer, venkatra, efriedma, hfinkel, ast, nadav, tstellarAMD
Subscribers: arsenm, jyknight, nemanjai, wdng, nhaehnle, llvm-commits
Differential Revision: https://reviews.llvm.org/D24956
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289050 91177308-0d34-0410-b5e6-96231b3b80d8
We treat bitwise 'not' as a special operation and try not to reduce its all-ones mask.
Presumably, this is because a 'not' may be cheaper than a generic 'xor' or it may get
folded into another logic op if the target has those. However, if we can remove a logic
instruction by changing the xor's constant mask value, that should always be a win.
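A worked example of the win in plain C++ (masks invented; "demanded bits" is modeled by only the low 16 bits of the result mattering):

  #include <cstdint>

  // With only the low 16 bits demanded, the all-ones 'not' mask can
  // shrink to the demanded bits, which makes the inner AND redundant.
  uint32_t before(uint32_t x) { return ~(x & 0xffffu); }  // xor mask = all ones
  uint32_t after(uint32_t x)  { return x ^ 0xffffu; }     // AND removed
  // before(x) and after(x) agree on the low 16 bits for every x.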
Note that the IR version of SimplifyDemandedBits() does not currently treat 'not' as a special
case (although that's marked with a FIXME). So if you run this IR through -instcombine,
you should get the same end result. I'm hoping to add a different backend transform that
will expose this problem though, so I need to solve this first.
Differential Revision: https://reviews.llvm.org/D27356
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288676 91177308-0d34-0410-b5e6-96231b3b80d8
Implemented widening (v2f32) and splitting (v16f64).
On splitting, I use "popcnt" to calculate the memory increment.
More type legalization work will come in the next patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287761 91177308-0d34-0410-b5e6-96231b3b80d8
Fixed an issue with boolean-result matching for vector uses of TargetLowering::isConstTrueVal / TargetLowering::isConstFalseVal.
The comment said we shouldn't handle constant splat vectors with undef elements, but the actual code was returning false if the build vector contained no undef elements.
This patch now ignores the number of undefs (getConstantSplatNode will return null if the build vector is all undefs).
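A hedged sketch of the corrected logic (helper name invented; not the exact code):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Ignore how many elements are undef; only an all-undef build vector,
  // for which getConstantSplatNode returns null, has no splat value.
  static ConstantSDNode *getSplatIgnoringUndefs(SDValue N) {
    auto *BV = dyn_cast<BuildVectorSDNode>(N.getNode());
    return BV ? BV->getConstantSplatNode() : nullptr;
  }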
The change has also unearthed a couple of missed opportunities in AVX512 comparison code that will need to be addressed.
Differential Revision: https://reviews.llvm.org/D26031
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286238 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The main purpose of this new helper is to enable simplifying operations that
have multiple uses. SimplifyDemandedBits does not handle multiple uses
currently, and this new function makes it possible to optimize:
  and   v1, v0, 0xffffff
  mul24 v2, v1, v1   ; multiply ignoring the high 8 bits
To:
  mul24 v2, v0, v0
Where before this would not be optimized, because v1 has multiple uses.
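The arithmetic behind the fold, as a plain C++ model of the mul24 semantics described above:

  #include <cstdint>

  // mul24 reads only the low 24 bits of each operand, so pre-masking an
  // operand cannot change the result.
  uint32_t mul24(uint32_t A, uint32_t B) {
    return (A & 0xffffffu) * (B & 0xffffffu);
  }
  // mul24(V0 & 0xffffff, V0 & 0xffffff) == mul24(V0, V0) for all V0, so
  // this use can bypass the AND even though v1 has other users.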
Reviewers: bogner, arsenm
Subscribers: nhaehnle, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D24964
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284266 91177308-0d34-0410-b5e6-96231b3b80d8
Add the ability for computeKnownBits and SimplifyDemandedBits to extract the known zero/one bits from BUILD_VECTOR, returning the known bits that are shared by every vector element.
This is an initial step towards determining the sign bits of a vector (PR29079).
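A hedged model of the extraction in plain APInt terms (not the DAG code itself):

  #include "llvm/ADT/APInt.h"
  #include <vector>
  using namespace llvm;

  // The bits known for the whole vector are those every element agrees on.
  void knownBitsOfBuildVector(const std::vector<APInt> &Elts,
                              APInt &KnownZero, APInt &KnownOne) {
    unsigned BW = Elts.front().getBitWidth();
    KnownZero = KnownOne = APInt::getAllOnesValue(BW);
    for (const APInt &E : Elts) {
      KnownOne &= E;    // keep bits set in every element
      KnownZero &= ~E;  // keep bits clear in every element
    }
  }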
Differential Revision: https://reviews.llvm.org/D24253
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280927 91177308-0d34-0410-b5e6-96231b3b80d8
We used to combine sext(setcc x, y, cc) to (select (setcc x, y, cc), -1, 0).
Instead, we should combine to (select (setcc x, y, cc), T, 0), where the value
of T is 1 or -1, depending on the type of the setcc and on getBooleanContents()
for that type if it is not i1.
This fixes PR28504.
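A hedged fragment of the idea (TLI, DAG, DL, VT, and SetCCVT assumed from the surrounding combine):

  // T is the sign extension of the setcc's "true" value: -1 for i1 and
  // for ZeroOrNegativeOne targets, 1 otherwise.
  bool NegOne =
      SetCCVT == MVT::i1 ||
      TLI.getBooleanContents(SetCCVT) ==
          TargetLowering::ZeroOrNegativeOneBooleanContent;
  SDValue T = NegOne ? DAG.getAllOnesConstant(DL, VT)
                     : DAG.getConstant(1, DL, VT);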
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277371 91177308-0d34-0410-b5e6-96231b3b80d8
The condition expression (a >> n) & 1 is converted to a "bt a, n" instruction. It works on all Intel targets.
But on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1).
I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
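For reference, the C-level shape of the pattern (illustrative only):

  #include <cstdint>

  // Expected to select to "bt a, n" plus a flag read on x86.
  bool testBit(uint64_t a, unsigned n) {
    return (a >> n) & 1;
  }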
Differential Revision: https://reviews.llvm.org/D22354
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275950 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Instead of separate bool arguments, getLoad and friends now take a single flags arg (a bitset).
Also add a default 0 alignment, and change the order of arguments so the
alignment comes before the flags.
This greatly simplifies many callsites, and fixes a bug in
AMDGPUISelLowering, wherein the order of the args to getLoad was
inverted. It also greatly simplifies the process of adding another flag
to getLoad.
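The shape of the change, with invented names rather than LLVM's actual signatures:

  #include <cstdint>

  // Several easy-to-transpose bool parameters collapse into one flags
  // bitset, with alignment defaulted and placed before the flags.
  enum LoadFlags : uint32_t {
    LFNone = 0,
    LFVolatile = 1 << 0,
    LFNonTemporal = 1 << 1,
    LFInvariant = 1 << 2,
  };

  // Before: bool soup; callers could silently swap arguments.
  void getLoadOld(bool IsVolatile, bool IsNonTemporal, bool IsInvariant,
                  unsigned Alignment);
  // After: one bitset; adding a new flag doesn't touch every callsite.
  void getLoadNew(unsigned Alignment = 0, uint32_t Flags = LFNone);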
Reviewers: chandlerc, tstellarAMD
Subscribers: jholewinski, arsenm, jyknight, dsanders, nemanjai, llvm-commits
Differential Revision: http://reviews.llvm.org/D22249
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275592 91177308-0d34-0410-b5e6-96231b3b80d8
I think this converts all the simple cases that really just care about
the generated code being position independent or not. The remaining
uses are a bit more complicated and are checking things like "is this
a library or executable" or "can this symbol be preempted".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274055 91177308-0d34-0410-b5e6-96231b3b80d8
The setCallee function will set the number of fixed arguments based
on the size of the argument list. The FixedArgs parameter was often
explicitly set to 0, leading to an inconsistent value for non-vararg
functions.
Differential Revision: http://reviews.llvm.org/D20376
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273403 91177308-0d34-0410-b5e6-96231b3b80d8
This used to be free; copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272512 91177308-0d34-0410-b5e6-96231b3b80d8
There are at least two places (DAGCombiner, X86ISelLowering) where this could be used instead
of ad hoc, watered-down code that tries to match a power-of-2 pattern.
Differential Revision: http://reviews.llvm.org/D20439
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270073 91177308-0d34-0410-b5e6-96231b3b80d8
After looking at D19087 again, it occurred to me that we can do better. If we consolidate
the valueHasExactlyOneBitSet() transforms, we won't incur extra overhead from calling it a
second time, and we can shrink SimplifySetCC() a bit. No functional change intended.
Differential Revision: http://reviews.llvm.org/D20050
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268932 91177308-0d34-0410-b5e6-96231b3b80d8
This expansion is not reached when the store is custom lowered. Move it
to a utility function so targets can easily expand unaligned accesses
when custom lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267029 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The motivation for this new function is to move an invalid assumption
about the relationship between the names of register definitions in
tablegen files and their assembly names into TargetRegisterInfo, so that
we can begin working on fixing this assumption.
The current problem is that if you have a register definition in
TableGen like:
  def MYReg0 : Register<"r0", 0>;
The function TargetLowering::getRegForInlineAsmConstraint() derives the
assembly name from the tablegen name: "MYReg0" rather than the given
assembly name "r0". This works because, on most targets, the tablegen
name and the assembly name are case-insensitive matches for each other
(e.g. def EAX : X86Reg<"eax", ...>).
getRegAsmName() will allow targets to override this default assumption and
return the correct assembly name.
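A hedged sketch of a target override (the class and table names are invented):

  // Report the assembly name recorded in the .td file instead of the
  // TableGen def name. MyRegAsmNames is assumed to be generated from
  // the <"r0", ...> strings.
  StringRef MyTargetRegisterInfo::getRegAsmName(unsigned Reg) const {
    return StringRef(MyRegAsmNames[Reg]);
  }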
Reviewers: echristo, hfinkel
Subscribers: SamWot, echristo, hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D15614
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265955 91177308-0d34-0410-b5e6-96231b3b80d8
A ``swifterror`` attribute can be applied to a function parameter or an
AllocaInst.
This commit does not include any target-specific change. The target-specific
optimization will come as a follow-up patch.
Differential Revision: http://reviews.llvm.org/D18092
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265189 91177308-0d34-0410-b5e6-96231b3b80d8