Summary: LICM may hoist instructions to the preheader speculatively. Before code generation, we need to sink the hoisted instructions back into the loop if that is beneficial. This pass is the reverse of LICM: it looks at instructions in the preheader and sinks them to basic blocks inside the loop body whose block frequency is smaller than the preheader's frequency.
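As a hypothetical C sketch of the shape this targets (illustrative only, not a test from this patch):

  void f(int *a, int n, int rare_flag) {
    for (int i = 0; i < n; i++) {
      int t = n * n;    // loop-invariant; LICM hoists this into the preheader
      if (rare_flag)    // profile data says this branch is cold
        a[i] = t;       // ...but t is only used here
      a[i] += i;
    }
  }

When the cold block's frequency is lower than the preheader's, sinking the hoisted computation of t into the cold block reduces its dynamic execution count.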
Reviewers: hfinkel, davidxl, chandlerc
Subscribers: anna, modocache, mgorny, beanz, reames, dberlin, chandlerc, mcrosier, junbuml, sanjoy, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D22778
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285308 91177308-0d34-0410-b5e6-96231b3b80d8
When we have a loop with a known upper bound on the number of iterations, and
furthermore know that the number of iterations will be either exactly that
upper bound or zero, then we can fully unroll up to that upper bound, keeping
only the first loop test to check for the zero-iteration case.
Most of the work here is in plumbing this 'max-or-zero' information from the
part of scalar evolution where it's detected through to loop unrolling. I've
also gone for the safe default of 'false' everywhere except howManyLessThans,
which could probably be improved.
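As an illustrative C sketch (not taken from the patch's tests) of the kind of loop this enables unrolling for:

  void f(int *a, unsigned n) {
    // Suppose analysis proves n is either 0 or exactly 8: the body can
    // then be unrolled 8 times behind a single zero-trip-count test,
    // with no per-iteration exit checks.
    for (unsigned i = 0; i < n; i++)
      a[i]++;
  }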
Differential Revision: https://reviews.llvm.org/D25682
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284818 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This allows us to mark when uses have been optimized.
This lets us avoid rewalking (i.e., when people call getClobberingAccess on
everything), and also enables us to later relax the requirement of use
optimization during updates, at less cost.
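The caching idea, reduced to a generic C++ sketch (not MemorySSA's actual data structures or API):

  #include <unordered_map>

  struct Access { Access *Defining = nullptr; bool IsClobber = false; };

  struct Walker {
    std::unordered_map<const Access *, Access *> OptimizedUses;

    // Once a use has been optimized, later queries return the cached
    // clobber instead of rewalking the chain of defining accesses.
    Access *getClobbering(Access *Use) {
      auto It = OptimizedUses.find(Use);
      if (It != OptimizedUses.end())
        return It->second;
      Access *A = Use->Defining;
      while (A && !A->IsClobber)
        A = A->Defining;
      return OptimizedUses[Use] = A;
    }
  };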
Reviewers: george.burgess.iv
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25172
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284771 91177308-0d34-0410-b5e6-96231b3b80d8
All of these existed because MSVC 2013 was unable to synthesize default
move ctors. We recently dropped support for it, so all that error-prone
boilerplate can go.
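The removed boilerplate looked roughly like this (hypothetical class, not one from the tree):

  #include <utility>
  #include <vector>

  struct Foo {
    std::vector<int> Data;
    Foo() = default;
    // Hand-written only because MSVC 2013 could not synthesize it; with
    // newer compilers the implicitly defaulted move constructor suffices.
    Foo(Foo &&Other) : Data(std::move(Other.Data)) {}
  };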
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284721 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This pass shrink-wraps a condition around some library calls where the call
result is not used. For example:
sqrt(val);
is transformed to
if (val < 0)
  sqrt(val);
Even if the result of a library call is not used, the compiler cannot
safely delete the call because the function can set errno on error
conditions.
Note that in many functions the error condition depends solely on the
incoming parameter. In this optimization, we generate the condition
under which errno can be set and use it to shrink-wrap the call. Since
the chances of hitting the error condition are low, the runtime call is
effectively eliminated.
These partially dead calls are usually the result of the C++ abstraction
penalty exposed by inlining. This optimization fires 108 times in 19
C/C++ programs in SPEC2006.
Reviewers: hfinkel, mehdi_amini, davidxl
Subscribers: modocache, mgorny, mehdi_amini, xur, llvm-commits, beanz
Differential Revision: https://reviews.llvm.org/D24414
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284542 91177308-0d34-0410-b5e6-96231b3b80d8
This CL didn't actually address the test case in PR30499, and clang
still crashes.
Also revert dependent change "Memory-SSA cleanup of clobbers interface, NFC"
Reverts r283965 and r283967.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284093 91177308-0d34-0410-b5e6-96231b3b80d8
Reapply r284044 after revert in r284051. Krzysztof fixed the error in r284049.
The original summary:
This patch tries to fully unroll loops containing a break statement, like this:
for (int i = 0; i < 8; i++) {
  if (a[i] == value) {
    found = true;
    break;
  }
}
GCC can fully unroll such loops, but currently LLVM cannot because LLVM only
supports loops having exact constant trip counts.
The upper bound of the trip count can be obtained from calling
ScalarEvolution::getMaxBackedgeTakenCount(). Part of the patch is the
refactoring work in SCEV to prevent duplicating code.
Using the upper bound is enabled under the same circumstances as runtime
unrolling, since both are used to unroll loops without knowing the exact
constant trip count.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284053 91177308-0d34-0410-b5e6-96231b3b80d8
This patch tries to fully unroll loops containing a break statement, like this:
for (int i = 0; i < 8; i++) {
  if (a[i] == value) {
    found = true;
    break;
  }
}
GCC can fully unroll such loops, but currently LLVM cannot because LLVM only
supports loops having exact constant trip counts.
The upper bound of the trip count can be obtained from calling
ScalarEvolution::getMaxBackedgeTakenCount(). Part of the patch is the
refactoring work in SCEV to prevent duplicating code.
Using the upper bound is enabled under the same circumstances as runtime
unrolling, since both are used to unroll loops without knowing the exact
constant trip count.
Differential Revision: https://reviews.llvm.org/D24790
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284044 91177308-0d34-0410-b5e6-96231b3b80d8
This is a refreshed version of a patch that was reverted: it fixes
the problems reported in both PR30216 and PR30499, and
contains all the test-cases from both bugs.
To hoist stores past loads, we used to search for potentially
conflicting loads on the hoisting path by following a MemorySSA
def-def link from the store to be hoisted to the previous
defining memory access, and from there following the def-use
chains to all the uses that occur on the hoisting path. The
problem is that the def-def link may point to a store that does
not alias the store to be hoisted, so the loads that are walked
may not alias the store to be hoisted either, and, as in the
testcase of PR30216, the loads that do alias the store to be
hoisted may not be visited at all.
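For intuition, here is a C sketch of the store-past-load situation (illustrative only, not the PR30216 testcase):

  int f(int c, int *p, int *q) {
    int v = 0;
    if (c) {
      v = *q;     // load on the hoisting path
      *p = 1;     // store that GVNHoist wants to hoist...
    } else {
      *p = 1;     // ...because both branches perform it
    }
    // Hoisting "*p = 1" into the common predecessor moves it above the
    // load of *q, which is only legal if p and q cannot alias.
    return v;
  }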
The current patch visits all loads on the path from the store to
be hoisted to the hoisting position and uses alias analysis
to ask whether the store may alias each load. I was not able to
use the MemorySSA functionality to ask whether the load and
store clobber each other: I'm not sure which function to call, so I
used a call to AA->isNoAlias().
Store-past-store hoisting still works as before, using a MemorySSA
query: I added an extra test to pr30216.ll to make sure store
past store does not regress.
Tested on x86_64-linux with check and a test-suite run.
Differential Revision: https://reviews.llvm.org/D25476
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283965 91177308-0d34-0410-b5e6-96231b3b80d8
The additional fix is:
When adding debug information to a lowered phi node in mem2reg,
check that we have a valid insertion point after the phi for adding
the debug information.
This change addresses the issue in PR30468, where a lowered phi was
added before a catchswitch and no debug information should be added
after the phi in this case.
Differential Revision: https://reviews.llvm.org/D24797
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282155 91177308-0d34-0410-b5e6-96231b3b80d8
The routines llvm::ConvertDebugDeclareToDebugValue() always returned
a true value that was never checked at the call sites; change their
return type to void.
This NFC cleanup was approved in the review https://reviews.llvm.org/D23715
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281964 91177308-0d34-0410-b5e6-96231b3b80d8
The comments on SplitBlockAndInsertIfThenElse say the SplitBefore instruction will stay in the old block.
But according to the implementation (which splits the block at SplitBefore using splitBasicBlock), SplitBefore is actually moved to the new block.
This patch fixes the comments.
Patch by Zhe Yu Wu.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281939 91177308-0d34-0410-b5e6-96231b3b80d8
When phi nodes are created in the -mem2reg phase, the @llvm.dbg.declare
entries are converted to @llvm.dbg.value entries at the places where the
store instructions existed. However, no entry is created to describe
the resulting value of the phi node.
The effect of this is especially noticeable in for loops that have a
constant for the initial value; the loop control variable's location
would be described as the initial constant value in the loop body once
the -mem2reg optimization phase was run.
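A C illustration of the symptom (hypothetical example, not a test from the patch):

  int sum(int *a, int n) {
    int s = 0;
    // Without a dbg.value describing the phi that -mem2reg creates for
    // "i", a debugger stepping through the body keeps reporting i as
    // its initial constant 0 on every iteration.
    for (int i = 0; i < n; i++)
      s += a[i];
    return s;
  }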
This change adds the creation of the @llvm.dbg.value entries to describe
variables whose location is the result of a phi node created in -mem2reg.
Also, when the phi node is finally lowered to a machine instruction, it
is important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Differential Revision: https://reviews.llvm.org/D23715
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281895 91177308-0d34-0410-b5e6-96231b3b80d8
Refactor replaceDominatedUsesWith to have a flag to control whether to replace uses in BB itself.
Summary: This is in preparation for the LoopSink pass, which calls replaceDominatedUsesWith to update uses after sinking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280949 91177308-0d34-0410-b5e6-96231b3b80d8
r280425 | dehao | 2016-09-01 16:15:50 -0700 (Thu, 01 Sep 2016) | 9 lines
Refactor LICM pass in preparation for LoopSink pass.
Summary: The LoopSink pass uses some common functions from LICM. This patch refactors the LICM code to make it usable by the LoopSink pass (https://reviews.llvm.org/D22778).
r280429 | dehao | 2016-09-01 16:31:25 -0700 (Thu, 01 Sep 2016) | 9 lines
Refactor LICM to expose canSinkOrHoistInst to LoopSink pass.
Summary: The LoopSink pass shares the canSinkOrHoistInst functionality with the LICM pass. This patch exposes that function in preparation for https://reviews.llvm.org/D22778
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280453 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: This is in preparation for the LoopSink pass, which calls replaceDominatedUsesWith to update uses after sinking.
Reviewers: chandlerc, davidxl, danielcdh
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24170
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280427 91177308-0d34-0410-b5e6-96231b3b80d8
Remove all the dead code around ilist_*sentinel_traits. This is a
follow-up to gutting them as part of r279314 (originally r278974),
staged to prevent broken builds in sub-projects.
Uses were removed from clang in r279457 and lld in r279458.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279473 91177308-0d34-0410-b5e6-96231b3b80d8
Share code for the (mostly problematic) embedded sentinel traits.
- Move the LLVM_NO_SANITIZE("object-size") attribute to
ilist_half_embedded_sentinel_traits and ilist_embedded_sentinel_traits
(previously it was spread throughout the duplicated code).
- Add an ilist_full_embedded_sentinel_traits which has no UB (but has
the downside of storing the complete node).
- Replace all the custom sentinel traits in LLVM with a declaration of
ilist_sentinel_traits that inherits from one of the embedded sentinel
traits classes.
There are still custom sentinel traits in other LLVM subprojects. I'll
remove those in a follow-up.
Nothing at all should be changing here, this is just rearranging code.
Note that the final goal here is to remove the sentinel traits
altogether, settling on the memory layout of
ilist_half_embedded_sentinel_traits without the UB. This intermediate
step moves the logic into ilist.h.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278513 91177308-0d34-0410-b5e6-96231b3b80d8
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256, which
requires touching every transformation and analysis, to be factored out
cleanly.
Thanks to David for the suggestion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278077 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Ensure that the MemorySSA object never changes address when using the
new pass manager since the walkers contained by MemorySSA cache pointers
to it at construction time. This is achieved by wrapping the
MemorySSAAnalysis result in a unique_ptr. Also add some asserts that
check for this bug.
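The general pattern, as a minimal C++ sketch (not the actual LLVM classes): owning the object through a unique_ptr keeps its address stable even when the analysis result itself is moved around in the pass manager's cache.

  #include <cassert>
  #include <memory>

  struct Object { Object *Self = this; };  // internal pointers back into the object

  struct AnalysisResult {
    std::unique_ptr<Object> Obj = std::make_unique<Object>();
  };

  int main() {
    AnalysisResult R1;
    Object *Before = R1.Obj.get();
    AnalysisResult R2 = std::move(R1);     // the result moves...
    assert(R2.Obj.get() == Before);        // ...but the Object's address does not
    assert(R2.Obj->Self == R2.Obj.get());
    return 0;
  }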
Reviewers: george.burgess.iv, dberlin
Subscribers: mcrosier, hfinkel, chandlerc, silvas, llvm-commits
Differential Revision: https://reviews.llvm.org/D23171
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278028 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The correctness fix here is that when we CSE a load with another load,
we need to combine the metadata on the two loads. This matches the
behavior of other passes, like instcombine and GVN.
There's also a minor optimization improvement here: for load PRE, the
aliasing metadata on the inserted load should be the same as the
metadata on the original load. Not sure why the old code was throwing
it away.
Issue found by inspection.
Differential Revision: http://reviews.llvm.org/D21460
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277977 91177308-0d34-0410-b5e6-96231b3b80d8