Avoid unnecessary spills of byval arguments of device functions to
local space at the SASS level, and the conversion of their pointers to
the generic address space that follows. Instead, make a local copy in
IR, provide a way to access the arguments directly, and let LLVM
optimize the copy away when possible.
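A rough sketch of the local-copy idea at the IR level (made-up function and
type names; the address-space details are omitted):
```
%struct.S = type { i32, i32 }

; Instead of accessing the byval argument through its incoming pointer
; (which can force a spill to local space and a conversion to the generic
; address space), make an explicit local copy in IR and use that; LLVM can
; optimize the copy away when it is not needed.
define i32 @dev_func(%struct.S* byval %in) {
  %local = alloca %struct.S
  %tmp = load %struct.S, %struct.S* %in
  store %struct.S %tmp, %struct.S* %local
  %p = getelementptr %struct.S, %struct.S* %local, i32 0, i32 1
  %v = load i32, i32* %p
  ret i32 %v
}
```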
Differential Revision: https://reviews.llvm.org/D21421
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276153 91177308-0d34-0410-b5e6-96231b3b80d8
In D12090, the ExprValueMap was added to reuse existing values during SCEV expansion.
However, constant folding and sext/zext distribution can still make such reuse difficult.
A simplified case: suppose we know S1 expands to V1 in ExprValueMap, and
S1 = S2 + C_a
S3 = S2 + C_b
where C_a and C_b are different SCEVConstants. Then we'd like to expand S3 as
V1 - C_a + C_b instead of expanding S2 literally. This is helpful when S2 is a
complex SCEV expression that has no entry in ExprValueMap, which usually happens
because S3 was generated from S1 by constant folding.
In order to do that, we represent ExprValueMap as a mapping from SCEV to
ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a} into the
ExprValueMap when we create SCEV for V1. When S3 is expanded, it will first
expand S2 to V1 - C_a because of S2->{V1, C_a} in the map, then expand S3 to
V1 - C_a + C_b.
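With illustrative constants C_a = 2 and C_b = 5, the expanded IR would look
roughly like:
```
; S2 -> {%v1, 2} lets us recover S2 from the existing value %v1 ...
%s2 = sub i64 %v1, 2
; ... and S3 = S2 + 5 is then built on top of that.
%s3 = add i64 %s2, 5
```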
Differential Revision: https://reviews.llvm.org/D21313
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276136 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: In r275989 we enabled the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))`. Here we add more test cases to ensure this folding works for all logical operations `and`/`or`/`xor`.
Reviewers: grosser
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22561
Contributed-by: Matthias Reisinger
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276105 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and encouraging vectorization on targets where serialized use of POPCNT would be much better.
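For reference, the operation being costed is a call to the vector ctpop
intrinsic, e.g. (illustrative vector width):
```
declare <4 x i32> @llvm.ctpop.v4i32(<4 x i32>)

define <4 x i32> @vec_popcnt(<4 x i32> %a) {
  %c = call <4 x i32> @llvm.ctpop.v4i32(<4 x i32> %a)
  ret <4 x i32> %c
}
```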
Differential Revision: https://reviews.llvm.org/D22456
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276104 91177308-0d34-0410-b5e6-96231b3b80d8
Retry r275776 (no changes, we suspect the issue was with another commit).
The current logic for handling inline asm operands in DAGToDAGISel interprets
the operands by looking for constants, which should represent the flags
describing the kind of operand we're dealing with (immediate, memory, register
def, etc.). The operands representing actual data are skipped only if they are
non-const, with the exception of immediate operands, which are skipped explicitly
when a flag describing an immediate is found.
The oversight is that memory operands may be const too (e.g. for device drivers
reading a fixed address), so we should explicitly skip the operand following a
flag describing a memory operand. If we don't, we risk interpreting that
constant as a flag, which is definitely not intended.
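For example, a memory operand whose address is a plain constant, as in the
sketch below (made-up asm string and address), shows up as a constant in the
DAG even though it is data rather than a flag:
```
define void @read_mmio() {
  ; The indirect "*m" operand is a constant address (e.g. a memory-mapped register).
  call void asm sideeffect "ldr r0, $0", "*m,~{r0}"(i32* inttoptr (i32 1073741824 to i32*))
  ret void
}
```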
Fixes PR26038
Differential Revision: https://reviews.llvm.org/D22103
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276101 91177308-0d34-0410-b5e6-96231b3b80d8
Inference of the 'returned' attribute was fixed in r276008, so let's try
turning the backend support back on.
This reverts commit r275677.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276081 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores. It unwittingly assumed that the loads/stores
appeared in the BB in address order. If they didn't, it would do the
wrong thing.
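A sketch of the kind of input that exposes this (made-up example): the two
stores appear in the block in the reverse of their address order, with a
potentially aliasing access in between:
```
define void @f(i32* %p, i32* %q) {
  %p0 = getelementptr i32, i32* %p, i64 0
  %p1 = getelementptr i32, i32* %p, i64 1
  store i32 1, i32* %p1        ; higher address comes first in the BB
  %v = load i32, i32* %q       ; may alias either store
  store i32 2, i32* %p0        ; lower address comes second
  ret void
}
```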
Reviewers: asbirlea, tstellarAMD
Subscribers: arsenm, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22535
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276072 91177308-0d34-0410-b5e6-96231b3b80d8
Reverting this commit for now as it seems to be causing failures on
test-suite tests on the clang-ppc64le-linux-lnt bot.
This reverts commit r276044.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276068 91177308-0d34-0410-b5e6-96231b3b80d8
Revert "[LoopSimplify] Update LCSSA after separating nested loops."
This reverts commit r275891.
Revert "[LCSSA] Post-process PHI-nodes created by SSAUpdate when constructing LCSSA form."
This reverts commit r275883.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276064 91177308-0d34-0410-b5e6-96231b3b80d8
We just always set PreserveLCSSA to true since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.
Also port LoopInfo verifier pass to test LoopUnrollPass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276063 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*. Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we would still
insert at the last instruction in Chain.
This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error. As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.
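Roughly, with made-up IR: if getVectorizablePrefixEndIdx cuts the chain after
the second store, the combined store has to be emitted there rather than at the
last element of the original Chain:
```
define void @g(i32* %p, i32* %q) {
  %p0 = getelementptr i32, i32* %p, i64 0
  %p1 = getelementptr i32, i32* %p, i64 1
  %p2 = getelementptr i32, i32* %p, i64 2
  store i32 0, i32* %p0        ; Chain[0]
  store i32 1, i32* %p1        ; Chain[1]: end of the vectorizable prefix,
                               ; the <2 x i32> store belongs here ...
  %v = load i32, i32* %q       ; possibly-aliasing load that ends the prefix
  store i32 2, i32* %p2        ; Chain[2]: ... not down here
  ret void
}
```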
Reviewers: asbirlea, tstellarAMD
Subscribers: mzolotukhin, arsenm, llvm-commits
Differential Revision: https://reviews.llvm.org/D22534
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276056 91177308-0d34-0410-b5e6-96231b3b80d8
If 2.5 ulp of error is acceptable, denormals are not required, and the
operation isn't a reciprocal (which is already handled), replace the
fdiv with a faster lowering.
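Assuming the accuracy requirement is conveyed through !fpmath metadata on the
fdiv, the relevant input looks like (sketch):
```
define float @div(float %x, float %y) {
  %d = fdiv float %x, %y, !fpmath !0
  ret float %d
}

!0 = !{float 2.500000e+00}
```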
Simplify the lowering tests by using per-function
subtarget features.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276051 91177308-0d34-0410-b5e6-96231b3b80d8
This is a variant of scavengeRegister() that works for
enterBasicBlockEnd()/backward(). The benefit of the backward mode is
that it is not affected by incomplete kill flags.
This patch also changes
PrologEpilogInserter::doScavengeFrameVirtualRegs() to use the register
scavenger in backwards mode.
Differential Revision: http://reviews.llvm.org/D21885
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276044 91177308-0d34-0410-b5e6-96231b3b80d8
The pattern may look more obviously like a sext if written as:
```
define i32 @g(i16 %x) {
  %zext = zext i16 %x to i32
  %xor = xor i32 %zext, 32768
  %add = add i32 %xor, -32768
  ret i32 %add
}
```
We already have that fold in visitAdd().
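For reference, that form folds down to a plain sign extension (illustrative
function name):
```
define i32 @g_folded(i16 %x) {
  %sext = sext i16 %x to i32
  ret i32 %sext
}
```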
Differential Revision: https://reviews.llvm.org/D22477
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276035 91177308-0d34-0410-b5e6-96231b3b80d8
This step builds on Lang Hames' work to change Archive::child_iterator
for better interoperation with Error/Expected. Building on that, it is now
possible to return an error message when the size field of an archive
contains non-decimal characters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276025 91177308-0d34-0410-b5e6-96231b3b80d8
r274801 did not go far enough to allow gcov+tsan to cooperate. With this
commit it's possible to run the following code without false positives:
std::thread T1(fib), T2(fib);
T1.join(); T2.join();
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276015 91177308-0d34-0410-b5e6-96231b3b80d8
There's not much functional change, but it really is an architectural feature
(on v6T2, v7A, v7R and v7EM) rather than something each CPU implements
individually.
The main functional change is the default behaviour you get when specifying
only "-triple".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276013 91177308-0d34-0410-b5e6-96231b3b80d8
Also verify that we never try to set the size of a vreg associated
with a register class.
Report an error when we encounter that in MIR. Fix a testcase that
hit that error and had a size for no reason.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276012 91177308-0d34-0410-b5e6-96231b3b80d8
We skipped over ReturnInsts which didn't return an argument, which would
lead us to incorrectly conclude that an argument returned by another
ReturnInst was 'returned'.
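A sketch of the problematic shape: only one of the return sites returns the
argument, so %x must not be inferred as 'returned':
```
define i32 @pick(i1 %c, i32 %x) {
entry:
  br i1 %c, label %a, label %b
a:
  ret i32 %x
b:
  ret i32 0
}
```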
This reverts commit r275756.
This fixes PR28610.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276008 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions. The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:
> Transform (zext (or (icmp), (icmp))) to (or (zext (icmp), (zext icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.
Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions, which is mirrored in the responsible check:
`if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`
This check seems to rule out more cases than necessary because:
- the reverse transformation is obviously done for `or` instructions only
- and also not every `zext icmp` pair is necessarily the result of this reverse transformation
Therefore we now remove this check and replace it by a more fine-grained one in `shouldOptimizeCast()` that now rejects only those `logic(zext(icmp), zext(icmp))` that could be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now able to also simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).
As an example, consider the following IR snippet
```
%1 = icmp sgt i64 %a, %b
%2 = zext i1 %1 to i8
%3 = icmp slt i64 %a, %c
%4 = zext i1 %3 to i8
%5 = and i8 %2, %4
```
which would now be transformed to
```
%1 = icmp sgt i64 %a, %b
%2 = icmp slt i64 %a, %c
%3 = and i1 %1, %2
%4 = zext i1 %3 to i8
```
This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. As shown above, this may introduce `zext` operations, casting between `i1` and `i8`, which could, for example, hinder ScalarEvolution and Polly on certain code.
Reviewers: grosser, vtjnash, majnemer
Subscribers: majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22511
Contributed-by: Matthias Reisinger
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275989 91177308-0d34-0410-b5e6-96231b3b80d8
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR.
It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NaN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding, which converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).
This patch changes both scalar and packed versions back to using x86-specific builtins.
It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
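For the packed case, the difference is between the generic conversion and the
target builtin, roughly (sketch):
```
declare <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float>)

define <4 x i32> @trunc_convert(<4 x float> %a) {
  ; Generic IR leaves INF/NaN/out-of-range inputs undefined, so the constant
  ; folder may turn them into zero or undef:
  ;   %r = fptosi <4 x float> %a to <4 x i32>
  ; The x86 builtin guarantees 0x80000000 for those inputs:
  %r = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %a)
  ret <4 x i32> %r
}
```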
A companion clang patch is at D22105
Differential Revision: https://reviews.llvm.org/D22106
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275981 91177308-0d34-0410-b5e6-96231b3b80d8
Recommitting after r274347 was reverted. This patch introduces some
classes to refactor the 3 and 4 register Thumb2 multiplication
instruction descriptions, plus improved tests for some of those
instructions.
Differential Revision: https://reviews.llvm.org/D21929
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275979 91177308-0d34-0410-b5e6-96231b3b80d8
The standard local dynamic model for TLS on ARM systems needs two
relocations:
- R_ARM_TLS_LDM32 (module idx)
- R_ARM_TLS_LDO32 (offset of object from origin of module TLS block)
In GNU style assembler we use symbol(tlsldm) and symbol(tlsldo) to
produce these relocations.
llvm-mc for ARM supports symbol(tlsldo) but does not support symbol(tlsldm).
This patch wires up the existing symbol(tlsldm) to R_ARM_TLS_LDM32.
TLS for ARM is defined in Addenda to, and Errata in, the ABI for the
ARM Architecture.
Differential Revision: https://reviews.llvm.org/D22461
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275977 91177308-0d34-0410-b5e6-96231b3b80d8
As discussed on PR27654, this patch fixes the triples of a lot of AArch64 tests and enables lit tests on Windows.
This will hopefully help stop cases where Windows developers break the AArch64 target.
Differential Revision: https://reviews.llvm.org/D22191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275973 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
N32 and N64 follow the standard ELF conventions (.L) whereas O32 uses its own
($).
This fixes the majority of object differences between -fintegrated-as and
-fno-integrated-as.
Reviewers: sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: https://reviews.llvm.org/D22412
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275967 91177308-0d34-0410-b5e6-96231b3b80d8
The condition expression (a >> n) & 1 is converted to the "bt a, n" instruction. This works on all Intel targets,
but on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1).
I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
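At the IR level the two equivalent shapes are roughly (sketch; the second is
the form the BT pattern now also matches):
```
define i1 @bt_and(i32 %a, i32 %n) {
  %sh = lshr i32 %a, %n
  %bit = and i32 %sh, 1
  %r = icmp ne i32 %bit, 0
  ret i1 %r
}

define i1 @bt_trunc(i32 %a, i32 %n) {
  %sh = lshr i32 %a, %n
  %r = trunc i32 %sh to i1
  ret i1 %r
}
```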
Differential Revision: https://reviews.llvm.org/D22354
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275950 91177308-0d34-0410-b5e6-96231b3b80d8
This patch updates MemorySSA's use-optimizing walker to be more
accurate and, in some cases, faster.
Essentially, this changed our core walking algorithm from a
cache-as-you-go DFS to an iteratively expanded DFS, with all of the
caching happening at the end. Said expansion happens when we hit a Phi,
P; we'll try to do the smallest amount of work possible to see if
optimizing above that Phi is legal in the first place. If so, we'll
expand the search to see if we can optimize to the next phi, etc.
An iteratively expanded DFS lets us potentially quit earlier (because we
don't assume that we can optimize above all phis) than our old walker.
Additionally, because we don't cache as we go, we can now optimize above
loops.
As an added bonus, this patch adds a ton of verification (if
EXPENSIVE_CHECKS are enabled), so finding bugs is easier.
Differential Revision: https://reviews.llvm.org/D21777
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275940 91177308-0d34-0410-b5e6-96231b3b80d8