At -O0 there can be unreachable BBs, which breaks the assumption that all
BBs have an auxiliary data structure. In this patch, we add another
interface, findBBInfo(), so that nullptr can be returned for unreachable
BBs (and the callers can ignore those BBs).
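A minimal sketch of the two interfaces, assuming a DenseMap-based info table (the map, the opaque BBInfo type, and the free-function shape are illustrative, not the exact PGO instrumentation code):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/BasicBlock.h"
#include <cassert>
using namespace llvm;

struct BBInfo;                                     // per-BB auxiliary data (opaque here)
DenseMap<const BasicBlock *, BBInfo *> BBInfoMap;  // hypothetical info table

// New interface: may return nullptr for an unreachable BB at -O0.
BBInfo *findBBInfo(const BasicBlock *BB) {
  auto It = BBInfoMap.find(BB);
  return It == BBInfoMap.end() ? nullptr : It->second;
}

// Old interface: assumes every BB has auxiliary data.
BBInfo &getBBInfo(const BasicBlock *BB) {
  BBInfo *Info = findBBInfo(BB);
  assert(Info && "unreachable BB has no auxiliary data");
  return *Info;
}
```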
This fixes the bug reported at https://llvm.org/bugs/show_bug.cgi?id=31209
Differential Revision: https://reviews.llvm.org/D27280
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288528 91177308-0d34-0410-b5e6-96231b3b80d8
Add assembler support for all atomic instructions that weren't already
supported. Some of those could be used to implement codegen for 128-bit
atomic operations, but this isn't done here yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288526 91177308-0d34-0410-b5e6-96231b3b80d8
Add assembler support for instructions manipulating the FPC.
Also add codegen support via the GCC compatibility builtins:
__builtin_s390_sfpc
__builtin_s390_efpc
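For reference, a small usage sketch of these builtins (compile for SystemZ, e.g. clang --target=s390x-linux-gnu; the save/restore pattern is illustrative):
```
// Save the current floating-point control register, then restore it.
unsigned int saveFPC() {
  return __builtin_s390_efpc();  // extract the FPC
}

void restoreFPC(unsigned int Fpc) {
  __builtin_s390_sfpc(Fpc);      // set the FPC
}
```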
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288525 91177308-0d34-0410-b5e6-96231b3b80d8
Move setting of hasSideEffects out of SystemZInstrFormats.td,
to allow use of the format classes for instructions where this
flag shouldn't be set. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288524 91177308-0d34-0410-b5e6-96231b3b80d8
Demonstrate a missed opportunity for the urem -> and combine for non-uniform constant divisors whose elements are powers of 2 or zero
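For reference, the scalar identity behind the combine (a hedged illustration, not the DAG-combine code; it applies elementwise to vector constants, and zero-divisor lanes are free to fold too since urem by zero is undefined):
```
#include <cassert>

// x urem d == x & (d - 1) whenever d is a power of 2.
unsigned uremPow2(unsigned X, unsigned D) {
  assert(D != 0 && (D & (D - 1)) == 0 && "divisor must be a power of 2");
  return X & (D - 1);  // equivalent to X % D
}
```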
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288510 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288497, as it broke the AArch64 build of Compiler-RT's
builtins (twice: once in r288412 and once in r288497). We should investigate
this offline.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288508 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When X = 0 and Y = inf, the original code produces inf, but the transformed
code produces nan. So this transform (and its relatives) should only be
used when the no-infs-fp-math flag is explicitly enabled.
Also disable the transform using fmad (intermediate rounding) when unsafe-math
is not enabled, since it can reduce the precision of the result; consider this
example with binary floating-point numbers with two bits of mantissa:
x = 1.01
y = 111
x * (y + 1) = 1.01 * 1000 = 1010 (this is the exact result; no rounding occurs at any step)
x * y + x = 1000.11 + 1.01 =r 1000 + 1.01 = 1001.01 =r 1000 (with rounding towards zero)
The example relies on rounding towards zero at least in the second step.
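To make the example executable, here is a toy model of rounding towards zero with two fraction bits (illustrative only; it just reproduces the arithmetic above):
```
#include <cmath>
#include <cstdio>

// Truncate X to a binary float of the form 1.xx * 2^e (two bits of mantissa).
double roundTo2Bits(double X) {
  int E;
  double M = std::frexp(X, &E);  // X = M * 2^E with M in [0.5, 1)
  return std::ldexp(std::trunc(std::ldexp(M, 3)), E - 3);
}

int main() {
  double X = 1.25, Y = 7.0;                            // 1.01b and 111b
  double Exact = roundTo2Bits(X * (Y + 1));            // 1010b = 10, exact
  double Mad = roundTo2Bits(roundTo2Bits(X * Y) + X);  // rounds at each step
  std::printf("%g vs %g\n", Exact, Mad);               // prints "10 vs 8"
}
```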
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98578
Reviewers: RKSimon, tstellarAMD, spatel, arsenm
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D26602
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288506 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to vectorize trees that start at insertelement instructions,
function tryToVectorizeList() uses a vectorization factor calculated as
MinVecRegSize/ScalarTypeSize. But sometimes this does not work, as the
tree cost for this fixed vectorization factor is too high.
This patch tries to improve the situation: it tries vectorization
factors from max(PowerOf2Floor(NumberOfVectorizedValues),
MinVecRegSize/ScalarTypeSize) down to MinVecRegSize/ScalarTypeSize and
chooses the best one, as sketched below.
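A hedged sketch of that search (names follow the commit message; getTreeCost is a hypothetical stand-in for the SLP cost query, not the real signature):
```
#include "llvm/Support/MathExtras.h"
#include <algorithm>
#include <climits>

int getTreeCost(unsigned VF);  // hypothetical stand-in for the SLP cost query

unsigned chooseBestVF(unsigned NumberOfVectorizedValues,
                      unsigned MinVecRegSize, unsigned ScalarTypeSize) {
  unsigned MinVF = MinVecRegSize / ScalarTypeSize;  // assumed >= 1
  unsigned MaxVF = std::max<unsigned>(
      llvm::PowerOf2Floor(NumberOfVectorizedValues), MinVF);
  unsigned BestVF = 0;
  int BestCost = INT_MAX;
  for (unsigned VF = MaxVF; VF >= MinVF; VF /= 2) {  // halve each step
    int Cost = getTreeCost(VF);
    if (Cost < BestCost) {
      BestCost = Cost;
      BestVF = VF;
    }
  }
  return BestVF;
}
```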
Differential Revision: https://reviews.llvm.org/D27215
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288497 91177308-0d34-0410-b5e6-96231b3b80d8
getTargetConstantBitsFromNode currently only extracts constant pool vector data, but it will need to be generalized to support broadcast and scalar constant pool data as well.
Converted constant bit extraction and bitset splitting to helper lambda functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288496 91177308-0d34-0410-b5e6-96231b3b80d8
Now that PointerType is no longer a SequentialType, all SequentialTypes
have an associated number of elements, so we can move that information to
the base class, allowing for a number of simplifications.
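Illustrative of the kind of simplification this enables (using the SequentialType API of this era of LLVM; treat it as a sketch):
```
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// One accessor now covers arrays and vectors alike, with no need to
// dispatch on ArrayType vs VectorType first.
uint64_t numElements(SequentialType *ST) {
  return ST->getNumElements();  // hoisted into the SequentialType base class
}
```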
Differential Revision: https://reviews.llvm.org/D27122
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288464 91177308-0d34-0410-b5e6-96231b3b80d8
As proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/106640.html
This is for a couple of reasons:
- Values of type PointerType are unlike the other SequentialTypes (arrays
and vectors) in that they do not hold values of the element type. By moving
PointerType we can unify certain aspects of how the other SequentialTypes
are handled.
- PointerType will have no place in the SequentialType hierarchy once
pointee types are removed, so this is a necessary step towards removing
pointee types.
Differential Revision: https://reviews.llvm.org/D26595
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288462 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, expose whether the current type is an array or a struct; if an
array, what the upper bound is; and if a struct, the struct type itself.
This is in preparation for a later change which will make PointerType
derive from Type rather than SequentialType.
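A hedged sketch of the consuming pattern this enables in gep_type_iterator users (accessor names approximate for this era):
```
#include "llvm/IR/GetElementPtrTypeIterator.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Walk a GEP's indices without asking for the "current type" directly.
void walkIndices(GetElementPtrInst *GEP) {
  for (auto GTI = gep_type_begin(GEP), E = gep_type_end(GEP); GTI != E;
       ++GTI) {
    if (StructType *STy = GTI.getStructTypeOrNull()) {
      (void)STy;  // struct case: the operand selects a field of STy
    } else {
      // array/vector case: the operand is an index with a known upper bound
    }
  }
}
```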
Differential Revision: https://reviews.llvm.org/D26594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288458 91177308-0d34-0410-b5e6-96231b3b80d8
In r266692, we made it possible to emit linkage names for just inlined
functions, putting the attribute on the abstract origin. Make sure we
don't mistakenly assume the linkage name was already emitted on a
declaration.
Differential Revision: http://reviews.llvm.org/D27320
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288450 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
We were doing an optimization in the ThinLTO backends of importing
constant unnamed_addr globals unconditionally as a local copy (regardless
of whether the thin link decided to import them). This should be done in
the thin link instead, so that resulting exported references are marked
and promoted appropriately, but will need a summary enhancement to mark
these variables as constant unnamed_addr.
The function import logic during the thin link was trying to handle
this proactively, by conservatively marking all values referenced in
the initializer lists of exported global variables as also exported.
However, this only handled values referenced directly from the
initializer list of an exported global variable. If the value is itself
a constant unnamed_addr variable, we could end up exporting its
references as well. This caused multiple issues. The first is that the
transitively exported references weren't promoted; that could have been
addressed by walking the references recursively, instead of just adding
the first level of initializer list references to the ExportList
directly. Secondly, some references could not be promoted/renamed at all
(e.g. they had a section or other constraint).
Remove this optimization and the associated handling in the function
import backend. SPEC measurements indicate we weren't getting much
from it in any case.
Fixes PR31052.
Reviewers: mehdi_amini
Subscribers: krasin, llvm-commits
Differential Revision: https://reviews.llvm.org/D26880
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288446 91177308-0d34-0410-b5e6-96231b3b80d8
Since the spill is for the whole wave, these
don't have the swizzling problems that vector stores do,
and a single 4-byte allocation is enough to spill a 64-element
register. This should reduce the number of spill instructions and
put all the spills for a register in the same cacheline.
This should save allocated private size, but for now it doesn't:
the extra slots are allocated for each component but never used,
because the frame layout is essentially finalized before frame
indices are replaced. To always use the scalar store path,
this should probably be moved into processFunctionBeforeFrameFinalized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288445 91177308-0d34-0410-b5e6-96231b3b80d8
This SmallVector is using up 128 bytes on the stack every time despite
almost always being empty[1], and since this function can recurse quite
deeply that adds up to a lot of overhead. We've seen this run afoul of
ulimits in some cases with ASAN on.
Replacing the SmallVector with a std::vector trades an occasional heap
allocation for vastly less stack usage.
[1]: I gathered some stats on an internal test suite and the vector
was non-empty in only 45,000 of 10,000,000 calls to this function.
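A rough illustration of the per-frame stack cost (sizes vary by implementation and platform; the element type and inline capacity here are made up):
```
#include "llvm/ADT/SmallVector.h"
#include <cstdio>
#include <vector>

int main() {
  // A SmallVector embeds its inline buffer in the object, so every
  // recursive frame pays for it even when the vector stays empty.
  std::printf("SmallVector<int, 32>: %zu bytes\n",
              sizeof(llvm::SmallVector<int, 32>));
  // An empty std::vector is typically just three pointers on the stack
  // and only touches the heap in the rare non-empty case.
  std::printf("std::vector<int>:     %zu bytes\n", sizeof(std::vector<int>));
}
```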
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288441 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Make AArch64InstrInfo::foldMemoryOperandImpl more general by folding all
full COPYs between register classes of the same size that are either
spilled or refilled.
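A hedged sketch of the qualifying check (not the actual AArch64InstrInfo code; assumes virtual registers and omits the rewriting itself):
```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Is this a full COPY whose source and destination register classes have
// the same size, making the cross-class spill/refill fold safe?
bool isFoldableFullCopy(const MachineInstr &MI, const MachineRegisterInfo &MRI,
                        const TargetRegisterInfo &TRI) {
  if (!MI.isFullCopy())
    return false;
  const TargetRegisterClass *DstRC = MRI.getRegClass(MI.getOperand(0).getReg());
  const TargetRegisterClass *SrcRC = MRI.getRegClass(MI.getOperand(1).getReg());
  return TRI.getRegSizeInBits(*DstRC) == TRI.getRegSizeInBits(*SrcRC);
}
```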
Reviewers: MatzeB, qcolombet
Subscribers: aemerson, rengolin, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D27271
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288439 91177308-0d34-0410-b5e6-96231b3b80d8
Move the cast<MCSymbolELF> inside emitELFSize, so that:
- it's done in one place instead of at each call
- it's more consistent with similar functions like EmitCOFFSafeSEH
- ambiguity between cast<> and dyn_cast<> is avoided (which also
eliminates an unnecessary dyn_cast call)
This also makes it easier to experiment with using ".size" directives on
non-ELF targets.
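Illustrative call-site shape after the change (signature approximate for this era of the MC API):
```
#include "llvm/MC/MCExpr.h"
#include "llvm/MC/MCStreamer.h"
#include "llvm/MC/MCSymbol.h"
using namespace llvm;

// Callers now pass a plain MCSymbol; the cast<MCSymbolELF> happens once
// inside emitELFSize instead of at every call site.
void emitSizeDirective(MCStreamer &S, MCSymbol *Sym, const MCExpr *Size) {
  S.emitELFSize(Sym, Size);  // emits ".size Sym, Size"
}
```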
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288437 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch fixes the comparison of a 64-bit atomic with its expected value in the CMP_SWAP_64 expansion.
Currently, the low words are compared with CMP, while the high words are compared with SBC. SBC expects the carry flag to be set if CMP detects a difference. However, CMP might leave the carry unset for unequal arguments if the first one is greater than or equal to the second, which might cause the comparison logic to detect a false equality.
Example of the broken C++ code:
```
#include <atomic>
int main() {
  std::atomic<long long> at(2);
  long long ll = 1;
  std::atomic_compare_exchange_strong(&at, &ll, 3);  // should fail: 2 != 1
}
```
Even though the atomic `at` and the expected value `ll` are not equal and `atomic_compare_exchange_strong` returns `false`, `at` is changed to 3.
The patch replaces SBC with CMPEQ.
Reviewers: t.p.northover
Subscribers: aemerson, rengolin, llvm-commits, asl
Differential Revision: https://reviews.llvm.org/D27315
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288433 91177308-0d34-0410-b5e6-96231b3b80d8
This change fixes a regression in r279537 and
makes getRawSubclassData behave like r279536.
Without this change, the fp128-g.ll test case would enter an
infinite loop involving SoftenFloatRes_LOAD.
Differential Revision: http://reviews.llvm.org/D26942
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288420 91177308-0d34-0410-b5e6-96231b3b80d8
This time the issue is fortunately just a simple mistake rather than a horrible
design spectre. I thought SUBS/SBCS provided sufficient NZCV flags for
comparing two 64-bit values, but they don't.
The fix is slightly clunkier in AArch64 because we can't use conditional
execution to emit a pair of CMPs. Traditionally an "icmp ne i128" would map to
an EOR/EOR/ORR/CBNZ, but that uses more registers so it's easier to go with a
CSET/CINC/CBNZ combination. Slightly less efficient, but this is -O0 anyway.
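A minimal way to observe the new sequence (illustrative; compile with clang --target=aarch64-linux-gnu -O0 -S):
```
extern void taken();

// The i128 compare-and-branch is where the CSET/CINC/CBNZ-style
// sequence shows up at -O0.
void branchOnNe(__int128 a, __int128 b) {
  if (a != b)
    taken();
}
```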
Thanks to Anton Korobeynikov for pointing out the issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288418 91177308-0d34-0410-b5e6-96231b3b80d8