Port the always inliner to the new pass manager in a much more
minimal and boring form than the old pass manager's version.
This pass does the very minimal amount of work necessary to inline
functions declared as always-inline. It doesn't support a wide array of
things that the legacy pass manager did support, but is also ... about
20 lines of code. So it has that going for it. Notably, things this
doesn't support:
- Array alloca merging
- To support the above, bottom-up inlining with careful history
tracking and call graph updates
- DCE of the functions that become dead after this inlining.
- Inlining through call instructions with the always_inline attribute.
Instead, it focuses on inlining functions with that attribute.
The first I've omitted because I'm hoping to just turn it off for the
primary pass manager. If that doesn't pan out, I can add it here but it
will be reasonably expensive to do so.
The second should really be handled by running global-dce after the
inliner. I don't want to re-implement the non-trivial logic necessary to
do comdat-correct DCE of functions. This means the -O0 pipeline will
have to be at least 'always-inline,global-dce', but that seems
reasonable to me. If others are seriously worried about this I'd like to
hear about it and understand why. Again, this is all solvable by
factoring that logic into a utility and calling it here, but I'd like to
wait to do that until there is a clear reason why the existing
pass-based factoring won't work.
The final point is a serious one. I can fairly easily add support for
this, but it seems both costly and a confusing construct for the use
case of the always inliner running at -O0. This attribute can of course
still impact the normal inliner easily (although I find that
a questionable re-use of the same attribute). I've started a discussion
to sort out what semantics we want here and based on that can figure out
if it makes sense to have this complexity at -O0 or not.
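Purely for illustration (a hypothetical helper, not code from this
patch), the omitted call-site check would look roughly like:

  #include "llvm/IR/CallSite.h"

  using namespace llvm;

  // Hypothetical helper: a call site can carry always_inline itself
  // even when the callee function does not. This pass deliberately
  // skips this case and only looks at the function attribute.
  static bool callSiteIsAlwaysInline(CallSite CS) {
    return CS.hasFnAttr(Attribute::AlwaysInline);
  }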
One other advantage of this design is that it should be quite a bit
faster due to checking for whether the function is a viable candidate
for inlining exactly once per function instead of doing it for each call
site.
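As an illustrative skeleton of that shape (a sketch only, not the
committed pass; all names here are mine), the candidate check runs once
per function, and only the call sites of surviving candidates are
walked:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/Utils/Cloning.h"

  using namespace llvm;

  static bool inlineAlwaysInlineFunctions(Module &M) {
    bool Changed = false;
    for (Function &F : M) {
      // One viability check per function, not one per call site.
      if (F.isDeclaration() || !F.hasFnAttribute(Attribute::AlwaysInline))
        continue;
      // Collect the direct call sites first so inlining doesn't
      // invalidate the user iteration.
      SmallVector<CallInst *, 8> Calls;
      for (User *U : F.users())
        if (auto *CI = dyn_cast<CallInst>(U))
          if (CI->getCalledFunction() == &F)
            Calls.push_back(CI);
      for (CallInst *CI : Calls) {
        InlineFunctionInfo IFI;
        Changed |= InlineFunction(CI, IFI);
      }
    }
    return Changed;
  }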
Anyways, hopefully this is a reasonable starting point for this pass.
Differential Revision: https://reviews.llvm.org/D23299
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278896 91177308-0d34-0410-b5e6-96231b3b80d8
When comparing a User* to a BasicBlock::iterator in
passingValueIsAlwaysUndefined, don't dereference the iterator in case it
is end().
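A sketch of the pattern being fixed (names are illustrative, not the
actual code): dereferencing the iterator forces evaluation of `*I`,
which is undefined behavior when the iterator is end(), so the guard
has to come first:

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"

  using namespace llvm;

  // Check for end() before dereferencing the iterator to compare
  // against the user.
  static bool iteratorPointsAt(BasicBlock::iterator I, BasicBlock &BB,
                               const Instruction *TheUser) {
    return I != BB.end() && &*I == TheUser;
  }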
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278872 91177308-0d34-0410-b5e6-96231b3b80d8
Clearing out the AssumptionCache can cause us to rescan the entire
function for assumes. If there are many loops, then we are scanning
over the entire function many times.
Instead of clearing out the AssumptionCache, register all cloned
assumes.
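A rough sketch of the shape of the fix (helper and names are mine, not
the committed code): map each original llvm.assume through the clone's
value map and register it, rather than clearing the cache and forcing
a full-function rescan:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/ValueMapper.h"

  using namespace llvm;

  static void registerClonedAssumptions(ArrayRef<CallInst *> OrigAssumes,
                                        ValueToValueMapTy &VMap,
                                        AssumptionCache &AC) {
    for (CallInst *OldAssume : OrigAssumes) {
      // Find the cloned copy of this assume, if it survived cloning.
      Value *Mapped = VMap.lookup(OldAssume);
      if (auto *NewAssume = dyn_cast_or_null<CallInst>(Mapped))
        AC.registerAssumption(NewAssume);
    }
  }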
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278854 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r278660.
It causes a downstream assertion failure in InstCombine on shuffle
instructions. Comes up in __mm_swizzle_epi32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278672 91177308-0d34-0410-b5e6-96231b3b80d8
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
  if (a)
    return *b += 3;
  else
    return *b += 4;

=>

  %z = load i32, i32* %y
  %.sink = select i1 %a, i32 3, i32 4
  %b = add i32 %z, %.sink
  store i32 %b, i32* %y
  ret i32 %b
When this works for switches it'll be even more powerful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278660 91177308-0d34-0410-b5e6-96231b3b80d8
If the result of the find is only used to compare against end(), just
use is_contained instead.
No functionality change is intended.
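For illustration (values hypothetical), the shape of the cleanup:

  #include "llvm/ADT/STLExtras.h"
  #include <vector>

  // Before: std::find(V.begin(), V.end(), X) != V.end();
  // After: llvm::is_contained states the intent directly.
  bool contains(const std::vector<int> &V, int X) {
    return llvm::is_contained(V, X);
  }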
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278469 91177308-0d34-0410-b5e6-96231b3b80d8
If the result of the find is only used to compare against end(), just
use is_contained instead.
No functionality change is intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278433 91177308-0d34-0410-b5e6-96231b3b80d8
We are seeing r276077 drastically increase compile time for our larger
benchmarks in PGO profile generation builds (both clang-based and
IR-based mode) -- it can be 20x slower than without the patch (e.g.
from 30 seconds to 780 seconds).
The increased time is all in the LCSSA pass. The problematic code is in
PostProcessPHIs after use rewriting. Note that the InsertedPhis from the
SSAUpdater keep accumulating (they are never cleared). Since the
inserted PHIs are added to the candidates for each rewrite, the earlier
ones are repeatedly added. Later, when adding the new PHIs to the
work-list, we don't check for duplicates either. This can result in an
extremely long work-list containing tons of duplicated PHIs.
This patch fixes the issue by hoisting the code out of the loop.
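As a generic sketch of that fix pattern (names are illustrative, not
the actual LCSSA code): consume the accumulated InsertedPhis once,
after the rewrite loop, and skip duplicates while building the
work-list:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  static void buildWorklist(ArrayRef<PHINode *> InsertedPhis,
                            SmallVectorImpl<PHINode *> &Worklist) {
    SmallPtrSet<PHINode *, 16> Seen;
    for (PHINode *PN : InsertedPhis)
      if (Seen.insert(PN).second) // each PHI is queued exactly once
        Worklist.push_back(PN);
  }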
Differential Revision: http://reviews.llvm.org/D23344
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278250 91177308-0d34-0410-b5e6-96231b3b80d8
Hal pointed out that the semantics of our intrinsic and the libc
call are slightly different. Add a comment while I'm here to
explain why we can't emit an intrinsic. Thanks Hal!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278200 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This hopefully fixes PR28825. The problem now was that a value from the
original loop was used in a subloop, which became a sibling after separation.
While a subloop doesn't need an lcssa phi node, a sibling does, and that's
where we broke LCSSA. The most natural way to fix this now is to simply call
formLCSSA on the original loop: it'll do what we've been doing before,
plus cover the situation described above.
I think we don't need to run formLCSSARecursively here, and we have an assert
to verify this (I've tried testing it on LLVM testsuite + SPECs). I'd be happy
to be corrected here though.
I also changed a run line in the test from '-lcssa -loop-unroll' to
'-lcssa -loop-simplify -indvars', because it exercises LCSSA
preservation to the same extent, but makes fewer unrelated
transformations on the CFG, which makes it easier to verify.
Reviewers: chandlerc, sanjoy, silvas
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23288
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278173 91177308-0d34-0410-b5e6-96231b3b80d8
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.
Thanks to David for the suggestion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278077 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Ensure that the MemorySSA object never changes address when using the
new pass manager since the walkers contained by MemorySSA cache pointers
to it at construction time. This is achieved by wrapping the
MemorySSAAnalysis result in a unique_ptr. Also add some asserts that
check for this bug.
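A minimal illustration of the idea (stub types, not LLVM code): the
movable analysis result owns the object behind a unique_ptr, so moving
the result does not change the object's address and cached pointers
into it stay valid:

  #include <memory>
  #include <utility>

  struct MemorySSAStub {};

  struct Result {
    std::unique_ptr<MemorySSAStub> MSSA;
    Result() : MSSA(new MemorySSAStub()) {}
    MemorySSAStub &getMSSA() { return *MSSA; }
  };

  int main() {
    Result R1;
    MemorySSAStub *Addr = &R1.getMSSA();
    Result R2 = std::move(R1); // the pass manager may move results
    return Addr == &R2.getMSSA() ? 0 : 1; // address is unchanged
  }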
Reviewers: george.burgess.iv, dberlin
Subscribers: mcrosier, hfinkel, chandlerc, silvas, llvm-commits
Differential Revision: https://reviews.llvm.org/D23171
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278028 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
In the use optimizer, we need to keep track of whether the lower bound
still
dominates us or else we may decide a lower bound is still valid when it
is not due to intervening pushes/pops. Fixes PR28880 (and probably a
bunch of other things).
Reviewers: george.burgess.iv
Subscribers: MatzeB, llvm-commits, sebpop
Differential Revision: https://reviews.llvm.org/D23237
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277978 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The correctness fix here is that when we CSE a load with another load,
we need to combine the metadata of the two loads. This matches the
behavior of other passes, like instcombine and GVN.
There's also a minor optimization improvement here: for load PRE, the
aliasing metadata on the inserted load should be the same as the
metadata on the original load. Not sure why the old code was throwing
it away.
Issue found by inspection.
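A sketch of the combining step (the helper and its names are mine;
combineMetadata is the existing utility in
llvm/Transforms/Utils/Local.h): when one load replaces another,
intersect their metadata so only facts that hold for both survive,
e.g. conflicting !range metadata gets dropped rather than kept wrongly:

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/Transforms/Utils/Local.h"

  using namespace llvm;

  static void replaceDuplicateLoad(LoadInst *Earlier, LoadInst *Later) {
    unsigned KnownIDs[] = {LLVMContext::MD_tbaa,
                           LLVMContext::MD_alias_scope,
                           LLVMContext::MD_noalias,
                           LLVMContext::MD_range,
                           LLVMContext::MD_invariant_load};
    // Keep only metadata valid for both loads on the surviving one.
    combineMetadata(Earlier, Later, KnownIDs);
    Later->replaceAllUsesWith(Earlier);
    Later->eraseFromParent();
  }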
Differential Revision: http://reviews.llvm.org/D21460
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277977 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r277901. Reapply the commit as it looks like it has
nothing to do with the bot failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277946 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Originally, the plan was to use the custom worklist because it let us
do some block popping and because we don't actually need a visited set.
The custom worklist we have here is slightly broken, and it's not worth
fixing versus just using depth_first_iterator, since we aren't going to
go the route we originally planned.
Fixes PR28874
Reviewers: george.burgess.iv
Subscribers: llvm-commits, gberry
Differential Revision: https://reviews.llvm.org/D23187
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277880 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes PR28825. The problem was that we only checked if a value from
a created inner loop is used in the outer loop, and fixed LCSSA for
them. But we failed to fix up LCSSA for values used in exits of the
outer
loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277877 91177308-0d34-0410-b5e6-96231b3b80d8
This generated IR based on the order of evaluation, which is different
between GCC and Clang. As a result, you get bootstrap miscompares
if you compare a Clang built with GCC-built Clang vs. a Clang built
with Clang-built Clang. Diagnosing that made my head hurt.
This also reverts commit r277337, which "fixed" the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277820 91177308-0d34-0410-b5e6-96231b3b80d8
This reinstates r277611 + r277614 and reverts r277642. A cast_or_null
should have been a dyn_cast_or_null.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277691 91177308-0d34-0410-b5e6-96231b3b80d8
Not a correctness issue, but it would be nice if we didn't have to
recompute our block numbering (worst-case) every time we move MSSA.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277652 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r277611 and the followup r277614.
Bootstrap builds and chromium builds are crashing during inlining after
this change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277642 91177308-0d34-0410-b5e6-96231b3b80d8
Didn't want to fold this in with r277640, since it touches bits that
aren't entirely related to r277640.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277641 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up to r277637. It teaches MemorySSA that invariant
loads (and loads of provably constant memory) are always liveOnEntry.
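An illustrative form of the check (not the committed code; the helper
name is mine): a load tagged !invariant.load, or one reading provably
constant memory, can never observe a store, so MemorySSA can treat it
as liveOnEntry:

  #include "llvm/Analysis/AliasAnalysis.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"

  using namespace llvm;

  static bool isAlwaysLiveOnEntry(const LoadInst *LI, AliasAnalysis &AA) {
    return LI->getMetadata(LLVMContext::MD_invariant_load) != nullptr ||
           AA.pointsToConstantMemory(MemoryLocation::get(LI));
  }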
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277640 91177308-0d34-0410-b5e6-96231b3b80d8
This patch makes MemorySSA recognize atomic/volatile loads, and makes
MSSA treat said loads specially. This allows us to be a bit more
aggressive in some cases.
Administrative note: Revision was LGTM'ed by reames in person.
Additionally, this doesn't include the `invariant.load` recognition in
the differential revision, because I feel it's better to commit that
separately. Will commit soon.
Differential Revision: https://reviews.llvm.org/D16875
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277637 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible for the value map to not have an entry for some value
that has already been removed.
I don't have a testcase; this is fallout from a buildbot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277614 91177308-0d34-0410-b5e6-96231b3b80d8
We were able to figure out that the result of a call is some constant.
While propagating that fact, we added the constant to the value map.
This is problematic because it results in us losing the call site when
processing the value map.
This fixes PR28802.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277611 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bug where we'd sometimes cache overly-conservative results
with our walker. This bug was made more obvious by r277480, which makes
our cache far more spotty than it was. Test case is llvm-unit, because
we're likely going to use CachingWalker only for def optimization in the
future.
The bug stems from a place where the walker assumed that
`DefNode.Last` was a valid target to cache to when failing to optimize
phis. This is sometimes incorrect if we have a cache hit. The fix is to
use the thing we *can* assume is a valid target to cache to. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277559 91177308-0d34-0410-b5e6-96231b3b80d8