The fix committed in r288851 doesn't cover all the cases.
In particular, if we have an instruction with side effects
that has a non-dbg use not depending on the bits, we still
perform RAUW, destroying the dbg.value's first argument.
Prevent metadata from being replaced here to avoid the issue.
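A minimal IR sketch of the intended behavior (the %add use is made up for
illustration): the zero replacement still reaches ordinary uses, but the
metadata use inside the dbg.value is now skipped:
  %call = tail call i32 (...) @g() #4, !dbg !15
  %add = add i32 %call, 1
  tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17
  ->
  %call = tail call i32 (...) @g() #4, !dbg !15
  %add = add i32 0, 1
  tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17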
Differential Revision: https://reviews.llvm.org/D27534
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288987 91177308-0d34-0410-b5e6-96231b3b80d8
This patch attempts to scalarize the operand expressions of predicated
instructions if they were conditionally executed in the original loop. After
scalarization, the expressions will be sunk inside the blocks created for the
predicated instructions. The transformation essentially performs
un-if-conversion on the operands.
The cost model has been updated to determine if scalarization is profitable. It
compares the cost of a vectorized instruction, assuming it will be
if-converted, to the cost of the scalarized instruction, assuming that the
instructions corresponding to each vector lane will be sunk inside a predicated
block, possibly avoiding execution. If it's more profitable to scalarize the
entire expression tree feeding the predicated instruction, the expression will
be scalarized; otherwise, it will be vectorized. We only consider the cost of
the entire expression to accurately estimate the cost of the required
insertelement and extractelement instructions.
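As a hedged illustration (block and value names are invented), the operand
expression feeding a predicated udiv might be scalarized and sunk like this
for lane 0:
  pred.udiv.if:                   ; predicated block for lane 0
    %x.0 = extractelement <2 x i32> %wide.load, i32 0
    %op.0 = add i32 %x.0, 42      ; operand expression, scalarized and sunk here
    %d.0 = udiv i32 100, %op.0    ; the predicated instruction itself
    br label %pred.udiv.continue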
Differential Revision: https://reviews.llvm.org/D26083
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288909 91177308-0d34-0410-b5e6-96231b3b80d8
In the case of a fully redundant load LI dominated by an equivalent load V, GVN
should always preserve the original debug location of V. Otherwise, we risk
introducing incorrect stepping.
If V has debug info, then clearly it should not be modified. If V has a null
debugloc, then it is still potentially incorrect to propagate LI's debugloc
because LI may not post-dominate V.
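For illustration (a hedged sketch, not taken from the patch):
  %V = load i32, i32* %p, !dbg !10    ; dominating load; !10 must be preserved
  ...
  %LI = load i32, i32* %p, !dbg !20   ; fully redundant; its uses are replaced
                                      ; by %V, but !20 is not copied onto %V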
Differential Revision: https://reviews.llvm.org/D27468
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288903 91177308-0d34-0410-b5e6-96231b3b80d8
When a function F is inlined, InlineFunction extends the debug location of every
instruction inlined from F by adding an InlinedAt.
However, if an instruction has a 'null' debug location, InlineFunction would
propagate the callsite debug location to it. This behavior has existed since
revision 210459.
Revision 210459 was originally committed specifically to workaround the lack of
debug information for instructions inlined from intrinsic functions (which are
usually declared with attributes `__always_inline__, __nodebug__`).
The problem with revision 210459 is that it doesn't make any sort of distinction
between instructions inlined from a 'nodebug' function and instructions which
are inlined from a function built with debug info. This issue may lead to
incorrect stepping in the debugger.
This patch works under the assumption that a nodebug function does not have a
DISubprogram. When a function F is inlined into another function G,
InlineFunction checks if F has debug info associated with it.
For nodebug functions, the InlineFunction logic is unchanged (i.e. it would
still propagate the callsite debugloc to the inlined instructions). Otherwise,
InlineFunction no longer propagates the callsite debug location.
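Roughly, the distinction keys off the presence of a DISubprogram attachment
(a hedged sketch):
  define i32 @f() !dbg !6 {   ; built with debug info: inlined instructions keep
    ...                       ; their own locations, extended with an InlinedAt
  }
  define i32 @g() {           ; nodebug, no DISubprogram: inlined instructions
    ...                       ; still receive the callsite debug location
  }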
Differential Revision: https://reviews.llvm.org/D27462
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288895 91177308-0d34-0410-b5e6-96231b3b80d8
Requesting metadata for a global is a relatively expensive operation as it
involves a map lookup, but it's one that we need to do relatively frequently in
this pass to collect the list of type metadata nodes associated with a global.
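For reference, the metadata in question is type metadata attached to globals,
e.g. (an illustrative sketch):
  @vtable = constant [3 x i8*] [...], !type !0, !type !1
  !0 = !{i64 16, !"_ZTS1A"}
  !1 = !{i64 16, !"_ZTS1B"}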
This change improves the performance of type metadata queries by prebuilding
data structures that keep the global together with its list of type metadata,
and changing the pass to use that data structure wherever we were previously
passing global references around.
This change also eliminates some O(N^2) behavior by collecting the list of
globals associated with each type identifier during the first pass over the
list of globals rather than visiting each global to compute that list every
time we add a new type identifier.
Reduces pass runtime on a module containing Chrome's vtables from over 60s
to 0.9s.
Differential Revision: https://reviews.llvm.org/D27484
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288859 91177308-0d34-0410-b5e6-96231b3b80d8
BDCE has two phases:
1. It asks SimplifyDemandedBits if all the bits of an instruction are dead, and if so,
replaces all its uses with the constant zero.
2. Then, it asks SimplifyDemandedBits again if the instruction is really dead
(no side effects, etc.) and if so, eliminates it.
Now, in 1) if all the bits of an instruction are dead, we may end up replacing a dbg use:
%call = tail call i32 (...) @g() #4, !dbg !15
tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17
->
%call = tail call i32 (...) @g() #4, !dbg !15
tail call void @llvm.dbg.value(metadata i32 0, i64 0, metadata !8, metadata !16), !dbg !17
but not eliminating the call because it may have arbitrary side effects.
In other words, we lose some debug information.
This patch fixes the problem by making sure that BDCE does nothing with the
instruction if it has side effects and no non-dbg uses.
Differential Revision: https://reviews.llvm.org/D27471
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288851 91177308-0d34-0410-b5e6-96231b3b80d8
There are two cases handled here:
1) a branch on undef
2) a switch with an undef condition.
Both cases are currently handled by ResolvedUndefsIn. If we have
a branch on undef, we force its value to false (which is trivially
foldable). If we have a switch on undef, we force to the first
constant (which is also foldable).
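Concretely (a hedged sketch):
  br i1 undef, label %true.bb, label %false.bb
  ; -> value forced to false, folds to: br label %false.bb
  switch i32 undef, label %default.bb [ i32 0, label %case0.bb ]
  ; -> condition forced to the first case constant, folds to: br label %case0.bb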
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288725 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce a dedicated DW_OP_LLVM_fragment operation so we can stop using
DW_OP_bit_piece with the wrong semantics.
The entire back story can be found here:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161114/405934.html
The gist is that in LLVM we've been misinterpreting DW_OP_bit_piece's
offset field to mean the offset into the source variable rather than
the offset into the location at the top of the DWARF expression stack. In
order to be able to fix this in a subsequent patch, this patch
introduces a dedicated DW_OP_LLVM_fragment operation with the
semantics that we used to apply to DW_OP_bit_piece, which is what we
actually need while inside of LLVM. This patch is complete with a
bitcode upgrade for expressions using the old format. It does not yet
fix the DWARF backend to use DW_OP_bit_piece correctly.
Implementation note: We discussed several options for implementing
this, including reserving a dedicated field in DIExpression for the
fragment size and offset, but using a custom operator at the end of
the expression works just fine and is more efficient because we then
only pay for it when we need it.
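For example, a fragment covering 8 bits at bit offset 16 of a variable, which
we used to write with DW_OP_bit_piece, is now written as (the operands are the
bit offset and bit size):
  !DIExpression(DW_OP_bit_piece, 16, 8)      ; old form, with the misinterpreted semantics
  !DIExpression(DW_OP_LLVM_fragment, 16, 8)  ; new dedicated operation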
Differential Revision: https://reviews.llvm.org/D27361
rdar://problem/29335809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288683 91177308-0d34-0410-b5e6-96231b3b80d8
For -O0 there might be unreachable BBs, which breaks the assumption that all the
BBs have an auxiliary data structure. In this patch, we add another interface
called findBBInfo() so that a nullptr can be returned for the unreachable BBs
(and the callers can ignore those BBs).
This fixes the bug reported at
https://llvm.org/bugs/show_bug.cgi?id=31209
Differential Revision: https://reviews.llvm.org/D27280
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288528 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288497, as it broke the AArch64 build of Compiler-RT's
builtins (twice: once in r288412 and once in r288497). We should investigate
this offline.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288508 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to vectorize trees that start at insertelement instructions,
the function tryToVectorizeList() uses a vectorization factor calculated as
MinVecRegSize/ScalarTypeSize. But sometimes this does not work, as the tree
cost for this fixed vectorization factor is too high.
The patch tries to improve the situation. It tries different vectorization
factors, from max(PowerOf2Floor(NumberOfVectorizedValues),
MinVecRegSize/ScalarTypeSize) down to MinVecRegSize/ScalarTypeSize, and
tries to choose the best one.
Differential Revision: https://reviews.llvm.org/D27215
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288497 91177308-0d34-0410-b5e6-96231b3b80d8
Now that PointerType is no longer a SequentialType, all SequentialTypes
have an associated number of elements, so we can move that information to
the base class, allowing for a number of simplifications.
Differential Revision: https://reviews.llvm.org/D27122
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288464 91177308-0d34-0410-b5e6-96231b3b80d8
As proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/106640.html
This is for a couple of reasons:
- Values of type PointerType are unlike the other SequentialTypes (arrays
and vectors) in that they do not hold values of the element type. By moving
PointerType we can unify certain aspects of how the other SequentialTypes
are handled.
- PointerType will have no place in the SequentialType hierarchy once
pointee types are removed, so this is a necessary step towards removing
pointee types.
Differential Revision: https://reviews.llvm.org/D26595
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288462 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, expose whether the current type is an array or a struct; if an array,
what the upper bound is; and if a struct, the struct type itself. This is
in preparation for a later change which will make PointerType derive from
Type rather than SequentialType.
Differential Revision: https://reviews.llvm.org/D26594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288458 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
We were doing an optimization in the ThinLTO backends of importing
constant unnamed_addr globals unconditionally as a local copy (regardless
of whether the thin link decided to import them). This should be done in
the thin link instead, so that resulting exported references are marked
and promoted appropriately, but will need a summary enhancement to mark
these variables as constant unnamed_addr.
The function import logic during the thin link was trying to handle
this proactively, by conservatively marking all values referenced in
the initializer lists of exported global variables as also exported.
However, this only handled values referenced directly from the
initializer list of an exported global variable. If the value is itself
a constant unnamed_addr variable, we could end up exporting its
references as well. This caused multiple issues. The first is that the
transitively exported references weren't promoted. Secondly, some could
not be promoted/renamed (e.g. they had a section or other constraint).
Fixing this properly would require walking the initializer references
recursively, instead of just adding the first level of initializer list
references to the ExportList directly.
Remove this optimization and the associated handling in the function
import backend. SPEC measurements indicate we weren't getting much
from it in any case.
Fixes PR31052.
Reviewers: mehdi_amini
Subscribers: krasin, llvm-commits
Differential Revision: https://reviews.llvm.org/D26880
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288446 91177308-0d34-0410-b5e6-96231b3b80d8
The instcombine code which folds loads and stores into their use types can trip up if the use is a bitcast to a type which we can't directly load or store in the IR. In principle, such types shouldn't exist, but in practice they do today. This is a workaround to avoid a bug while we work towards the long term goal.
Differential Revision: https://reviews.llvm.org/D24365
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288415 91177308-0d34-0410-b5e6-96231b3b80d8
This just extracts out the transfer rules for constant ranges into a single shared point. As it happens, neither bit of code actually overlaps in terms of the handled operators, but with this change that could easily be tweaked in the future.
I also want to have this separated out to make it easier to experiment with an eager value info implementation and possibly a ValueTracking-like fixed-depth recursion peephole version. There's no reason all four of these can't share a common implementation, which reduces the chances of bugs.
Differential Revision: https://reviews.llvm.org/D27294
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288413 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to vectorize trees that start at insertelement instructions,
the function tryToVectorizeList() uses a vectorization factor calculated as
MinVecRegSize/ScalarTypeSize. But sometimes this does not work, as the tree
cost for this fixed vectorization factor is too high.
The patch tries to improve the situation. It tries different vectorization
factors, from max(PowerOf2Floor(NumberOfVectorizedValues),
MinVecRegSize/ScalarTypeSize) down to MinVecRegSize/ScalarTypeSize, and
tries to choose the best one.
Differential Revision: https://reviews.llvm.org/D27215
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288412 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, when the cost of scalar operations is evaluated, the vector type is
used. The patch fixes this issue and also fixes the evaluation of the cost of
vector operations.
Several tests showed that the vector cost model is too optimistic. It
allowed vectorization of 8 or fewer add/fadd operations, though the scalar
code is faster. Only for 16 or more operations does the vector code
provide better performance.
Differential Revision: https://reviews.llvm.org/D26277
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288398 91177308-0d34-0410-b5e6-96231b3b80d8
This class represents a symbol table built from in-memory IR. It provides
access to GlobalValues and should only be used if such access is required
(e.g. in the LTO implementation). We will eventually change IRObjectFile
to read from a bitcode symbol table rather than using ModuleSymbolTable,
so it would not be able to expose the module.
Differential Revision: https://reviews.llvm.org/D27073
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288319 91177308-0d34-0410-b5e6-96231b3b80d8
If LoopInfo is available during GVN, BasicAA will use it. However
MergeBlockIntoPredecessor does not update LI as it merges blocks.
This didn't use to cause problems because LI was freed before
GVN/BasicAA. Now with OptimizationRemarkEmitter, the lifetime of LI is
extended so LI needs to be kept up-to-date during GVN.
Differential Revision: https://reviews.llvm.org/D27288
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288307 91177308-0d34-0410-b5e6-96231b3b80d8
This implements PGO-driven loop peeling.
The basic idea is that when the average dynamic trip-count of a loop is known,
based on PGO, to be low, we can expect a performance win by peeling off the
first several iterations of that loop.
Unlike unrolling based on a known trip count, or a trip count multiple, this
doesn't save us the conditional check and branch on each iteration. However,
it does allow us to simplify the straight-line code we get (constant-folding,
etc.). This is important given that we know that we will usually only hit this
code, and not the actual loop.
This is currently disabled by default.
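A hedged sketch of the shape of the peeled code, for a loop whose profile says
it almost always runs a single iteration (all names invented):
  entry:
    %c0 = icmp sgt i32 %n, 0
    br i1 %c0, label %peel, label %exit
  peel:                          ; peeled first iteration: straight-line code
    call void @body(i32 0)       ; that constant folding etc. can simplify
    %c1 = icmp sgt i32 %n, 1
    br i1 %c1, label %loop, label %exit
  loop:                          ; remainder loop, rarely entered
    %i = phi i32 [ 1, %peel ], [ %inc, %loop ]
    call void @body(i32 %i)
    %inc = add nsw i32 %i, 1
    %c2 = icmp slt i32 %inc, %n
    br i1 %c2, label %loop, label %exit
  exit:
    ret void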
Differential Revision: https://reviews.llvm.org/D25963
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288274 91177308-0d34-0410-b5e6-96231b3b80d8
We had a limited version of this for scalar 'and'; this expands
the transform to 'or' and 'xor' and allows vector types too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288273 91177308-0d34-0410-b5e6-96231b3b80d8
Michel Dänzer reported that r288051, "[StructurizeCFG] Use range-based
for loops", introduced a bug into rebuildSSA, wherein we were iterating
over an instruction's use list while modifying it, without taking care
to do this correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288200 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288046.
Trying to see if the revert fixes a compiler crash during a stage2 LTO
build with a GVN backtrace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288179 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288047.
Trying to see if the revert fixes a compiler crash during a stage2 LTO
build with a GVN backtrace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288178 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288090.
Trying to see if the revert fixes a compiler crash during a stage2 LTO
build with a GVN backtrace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288177 91177308-0d34-0410-b5e6-96231b3b80d8
The flag was introduced because the optimization controlled by the flag initially caused regressions. All the regressions were fixed some time ago and the flag has been false for quite a while.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288154 91177308-0d34-0410-b5e6-96231b3b80d8