Commit Graph

15838 Commits

Ivan Krasin
2c83cdacf1 WholeProgramDevirt: print remarks with devirtualized method names.
Summary:
Chrome on Linux uses WholeProgramDevirt for speedups, and it's
important to detect regressions on both sides: in the toolchain,
if fewer methods get devirtualized after an update, and in Chrome,
if an innocent-looking change causes many hot methods to become
virtual again.

The need to track devirtualized methods is not Chrome-specific,
but it's probably the only user of the pass at this time.

Reviewers: kcc

Differential Revision: https://reviews.llvm.org/D23219

llvm-svn: 277856
2016-08-05 19:45:16 +00:00
David Callahan
066bd5eb0e [ADCE] Refactoring for new functionality (NFC)
Summary:
This is another refactoring to break up the one function into three logical component functions.
Another non-functional change before we start adding features.

Reviewers: nadav, mehdi_amini, majnemer

Subscribers: twoh, freik, llvm-commits

Differential Revision: https://reviews.llvm.org/D23102

llvm-svn: 277855
2016-08-05 19:38:11 +00:00
Dehao Chen
359e5901cf Do not assign new discriminator for all intrinsics.
Summary: We do not care about intrinsic calls when assigning discriminators.

Reviewers: davidxl, dnovillo

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23212

llvm-svn: 277843
2016-08-05 17:56:49 +00:00
Benjamin Kramer
e0adb168f1 [SimplifyCFG] Make range reduction code deterministic.
This code generated IR that depended on the order of evaluation, which
differs between GCC and Clang. As a result, you get bootstrap miscompares
if you compare a Clang built with a GCC-built Clang against a Clang built
with a Clang-built Clang. Diagnosing that made my head hurt.
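The pitfall in miniature (a hypothetical sketch, not the SimplifyCFG code; all names here are made up): C++ leaves the evaluation order of function arguments unspecified, so GCC and Clang may run them in different orders, and a pass whose output depends on such an order emits different, equally "correct", IR under the two host compilers.

    #include <cstdio>

    static int counter = 0;
    static int f() { return ++counter * 10; }
    static int g() { return ++counter; }
    static int combine(int a, int b) { return a * 100 + b; }

    int main() {
      // The order in which f() and g() are evaluated is unspecified:
      // this prints 1002 if f() runs first, 2001 if g() does.
      std::printf("%d\n", combine(f(), g()));
    }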

This also reverts commit r277337, which "fixed" the test case.

llvm-svn: 277820
2016-08-05 14:55:02 +00:00
Nicolai Haehnle
7286dd419e [InstCombine] try to fold (select C, (sext A), B) into logical ops
Summary:
Turn (select C, (sext A), B) into (sext (select C, A, B')) when A is i1 and
B is a compatible constant, also for zext instead of sext. This will then be
further folded into logical operations.

The transformation would be valid for non-i1 types as well, but other parts of
InstCombine prefer to have sext from non-i1 as an operand of select.

Motivated by the shader compiler frontend in Mesa for AMDGPU, which emits i32
for boolean operations. With this change, the boolean logic is fully
recovered.
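As a concrete instance with B = 0 (whose i1 source constant B' is false), here is the fold written out in C++ and checked over all i1 inputs; a sketch of the semantics, not the InstCombine code:

    #include <cassert>
    #include <cstdint>

    // select C, (sext A), 0 -- sext of an i1 yields 0 or -1.
    static int32_t before(bool C, bool A) { return C ? (A ? -1 : 0) : 0; }
    // sext (select C, A, false), i.e. sext(C & A) -- purely logical ops now.
    static int32_t after(bool C, bool A) { return (C && A) ? -1 : 0; }

    int main() {
      for (int c = 0; c <= 1; ++c)
        for (int a = 0; a <= 1; ++a)
          assert(before(c != 0, a != 0) == after(c != 0, a != 0));
    }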

Reviewers: majnemer, spatel, tstellarAMD

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22747

llvm-svn: 277801
2016-08-05 08:22:29 +00:00
Justin Bogner
443b814563 InstCombine: Clean up some trailing whitespace. NFC
llvm-svn: 277793
2016-08-05 01:09:48 +00:00
Justin Bogner
a9ca8f6cb7 InstCombine: Replace some never-null pointers with references. NFC
llvm-svn: 277792
2016-08-05 01:06:44 +00:00
Sebastian Pop
908ec888ad GVN-hoist: enable by default
llvm-svn: 277786
2016-08-04 23:49:07 +00:00
Sebastian Pop
16f65162d1 GVN-hoist: fix early exit logic
The patch splits a complex && if condition into logic that is easier to read
and understand. The wrong early-exit condition was letting some instructions
whose operands were not all available pass through when HoistingGeps was true.

Differential Revision: https://reviews.llvm.org/D23174

llvm-svn: 277785
2016-08-04 23:49:05 +00:00
Justin Bogner
c1fadbec04 IR: Provide an IRBuilder Inserter that calls a callback after insertion
Add a generalized IRBuilderCallbackInserter, which is just given a
callback to execute after insertion. This can be used to get rid of
the custom inserter in InstCombine, which will in turn allow me to add
target specific InstCombineCalls API for intrinsics without horrible
layering violations.
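A sketch of how the new inserter can be used (the template parameters of IRBuilder have shifted across LLVM versions, so treat the exact spelling as an assumption): every instruction created through the builder is also pushed onto a worklist.

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static void buildWithCallback(BasicBlock *BB,
                                  SmallVectorImpl<Instruction *> &Worklist) {
      IRBuilder<ConstantFolder, IRBuilderCallbackInserter> Builder(
          BB->getContext(), ConstantFolder(),
          IRBuilderCallbackInserter(
              [&Worklist](Instruction *I) { Worklist.push_back(I); }));
      Builder.SetInsertPoint(BB);
      // Anything created through Builder now also runs the callback.
      Builder.CreateUnreachable();
    }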

llvm-svn: 277784
2016-08-04 23:41:01 +00:00
Michael Kuperstein
7044572182 [LV, X86] Be more optimistic about vectorizing shifts.
Shifts with a uniform but non-constant count were considered very expensive to
vectorize, because the splat of the uniform count and the shift would tend to
appear in different blocks. That made the splat invisible to ISel, and we'd
scalarize the shift at codegen time.

Since r201655, CodeGenPrepare sinks those splats to be next to their use, and we
are able to select the appropriate vector shifts. This updates the cost model
to take this into account by making shifts by a uniform count cheap again.
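For reference, a "uniform but non-constant" shift count looks like this in C++: s is loop-invariant but unknown at compile time, so vectorizing requires splatting s into a vector, and since r201655 CodeGenPrepare sinks that splat next to the shift so ISel can pick a vector shift instead of scalarizing.

    #include <cstddef>
    #include <cstdint>

    void shiftAll(uint32_t *a, size_t n, uint32_t s) {
      for (size_t i = 0; i < n; ++i)
        a[i] <<= s; // same count for every element of the vectorized loop
    }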

Differential Revision: https://reviews.llvm.org/D23049

llvm-svn: 277782
2016-08-04 22:48:03 +00:00
Sanjay Patel
d4b53a013e [InstCombine] use m_APInt to allow icmp eq (mul X, C1), C2 folds for splat constant vectors
This concludes the splat vector enhancements for foldICmpEqualityWithConstant().
Other commits in this series:
https://reviews.llvm.org/rL277762
https://reviews.llvm.org/rL277752
https://reviews.llvm.org/rL277738
https://reviews.llvm.org/rL277731
https://reviews.llvm.org/rL277659
https://reviews.llvm.org/rL277638
https://reviews.llvm.org/rL277629
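The mechanical change shared by this series is matching with m_APInt, which binds the value of a scalar integer constant or of a splat vector constant, so one code path covers both forms. A simplified sketch (not the actual foldICmpEqualityWithConstant code):

    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/PatternMatch.h"
    using namespace llvm;
    using namespace llvm::PatternMatch;

    // Matches `icmp eq (mul X, C1), C2` whether the constants are scalars
    // or splat vectors such as <i32 3, i32 3, i32 3, i32 3>.
    static bool isMulEqConst(Value *V, Value *&X, const APInt *&C1,
                             const APInt *&C2) {
      ICmpInst::Predicate Pred;
      return match(V, m_ICmp(Pred, m_Mul(m_Value(X), m_APInt(C1)),
                             m_APInt(C2))) &&
             Pred == ICmpInst::ICMP_EQ;
    }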

llvm-svn: 277779
2016-08-04 22:19:27 +00:00
Matt Arsenault
256e602a26 GVNHoist: Don't hoist convergent calls
llvm-svn: 277767
2016-08-04 20:52:57 +00:00
David Majnemer
cac831b368 [coroutines] Part 4[ab]: Coroutine Devirtualization: Lower coro.resume and coro.destroy.
This is the fourth patch in the coroutine series. The CoroEarly pass now lowers coro.resume
and coro.destroy intrinsics by replacing them with an indirect call to an address
returned by the coro.subfn.addr intrinsic. This is done so that CGPassManager recognizes
devirtualization when CoroElide replaces a call to coro.subfn.addr with an appropriate
function address.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22998

llvm-svn: 277765
2016-08-04 20:30:07 +00:00
Sanjay Patel
465c30dcbd [InstCombine] use m_APInt to allow icmp eq (and X, C1), C2 folds for splat constant vectors
llvm-svn: 277762
2016-08-04 20:05:02 +00:00
Sanjay Patel
f15750bb28 [InstCombine] use m_APInt to allow icmp eq (or X, C1), C2 folds for splat constant vectors
llvm-svn: 277752
2016-08-04 19:12:12 +00:00
Sanjay Patel
9e469c7feb [InstCombine] use m_APInt to allow icmp eq (op X, Y), C folds for splat constant vectors
I'm removing a misplaced pair of more specific folds from InstCombine in this patch as well,
so we know where those folds are happening in InstSimplify.

llvm-svn: 277738
2016-08-04 17:48:04 +00:00
Alina Sbirlea
6b31c219ab LoadStoreVectorizer: Remove TargetBaseAlign. Keep alignment for stack adjustments.
Summary:
TargetBaseAlign is no longer required since LSV checks whether the target allows misaligned accesses.
A constant defining a base alignment is still needed for stack accesses, where the alignment can be adjusted.

The previous patch (D22936) was reverted because tests were failing. This patch also fixes the cause of those failures:
- x86 failing tests either did not have the right target, or the right alignment.
- NVPTX failing tests did not have the right alignment.
- AMDGPU failing test (merge-stores) should allow vectorization with the given alignment but the target info
  considers <3xi32> a non-standard type and gives up early. This patch removes the condition and only checks
  for a maximum size allowed and relies on the next condition checking for %4 for correctness.
  This should be revisited to include 3xi32 as an MVT type (on arsenm's non-immediate todo list).

Note that querying sizeInBits for an MVT is undefined (it leads to an assertion failure),
so we need to create an EVT; hence the interface change in allowsMisaligned to include the Context.

Reviewers: arsenm, jlebar, tstellarAMD

Subscribers: jholewinski, arsenm, mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23068

llvm-svn: 277735
2016-08-04 16:38:44 +00:00
Sanjay Patel
3cb5010af3 [InstCombine] use m_APInt to allow icmp eq (sub C1, X), C2 folds for splat constant vectors
llvm-svn: 277731
2016-08-04 15:19:25 +00:00
Amaury Sechet
f98549c3e0 Add popcount(n) == bitsize(n) -> n == -1 transformation.
Summary: As per title.
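The equivalence behind the transform, checked exhaustively for 8-bit values (using the GCC/Clang popcount builtin):

    #include <cassert>
    #include <cstdint>

    // popcount(n) == bitsize(n) holds exactly when every bit of n is set,
    // i.e. when n == -1.
    int main() {
      for (int i = 0; i < 256; ++i) {
        uint8_t n = static_cast<uint8_t>(i);
        bool allBitsSet = __builtin_popcount(n) == 8; // popcount == bitsize
        bool isMinusOne = n == static_cast<uint8_t>(-1);
        assert(allBitsSet == isMinusOne);
      }
    }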

Reviewers: majnemer, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23139

llvm-svn: 277694
2016-08-04 05:27:20 +00:00
David Majnemer
0f8f039a03 Forgot the dyn_cast_or_null intended for r277691.
llvm-svn: 277693
2016-08-04 04:47:18 +00:00
David Majnemer
3351460081 Reinstate "[CloneFunction] Don't remove side effecting calls"
This reinstates r277611 + r277614 and reverts r277642.  A cast_or_null
should have been a dyn_cast_or_null.
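The distinction that caused the crash, in isolation (an illustrative sketch, not the CloneFunction code): cast_or_null asserts when the value is present but of the wrong type, while dyn_cast_or_null returns null in that case.

    #include "llvm/IR/Instructions.h"
    #include "llvm/Support/Casting.h"
    using namespace llvm;

    static CallInst *asCall(Value *V) {
      // cast_or_null<CallInst>(V) would assert if V were, say, a StoreInst;
      // dyn_cast_or_null yields nullptr both for null V and for non-calls.
      return dyn_cast_or_null<CallInst>(V);
    }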

llvm-svn: 277691
2016-08-04 04:24:02 +00:00
Bruno Cardoso Lopes
7a82064117 Revert "GVN-hoist: enable by default" & "Make GVN Hoisting obey optnone/bisect."
This reverts commits r277685 & r277688. r277685 broke compiler-rt
compilation http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/23335
and r277688 is a follow-up to it.

llvm-svn: 277690
2016-08-04 04:16:24 +00:00
Sebastian Pop
4efd363a4c GVN-hoist: enable by default
As we have addressed all of the compilation-time problems with GVN-hoist
(https://llvm.org/bugs/show_bug.cgi?id=28670),
this patch turns GVN-hoist back on by default.

Differential Revision: https://reviews.llvm.org/D23136

llvm-svn: 277685
2016-08-04 01:59:42 +00:00
Sanjay Patel
073234d16a [InstCombine] use m_APInt to allow icmp eq (add X, C1), C2 folds for splat constant vectors
llvm-svn: 277659
2016-08-03 22:08:44 +00:00
George Burgess IV
859253a79c [MSSA] Fix a bug in MemorySSA's move ctor.
Not a correctness issue, but it would be nice if we didn't have to
recompute our block numbering (worst-case) every time we move MSSA.

llvm-svn: 277652
2016-08-03 21:07:52 +00:00
Sebastian Pop
b6bf4ff13e GVN-hoist: limit the length of dependent instructions
Limit the number of times the while(1) loop is executed. With this restriction
the number of hoisted instructions does not change in a significant way on the
test-suite.

Differential Revision: https://reviews.llvm.org/D23028

llvm-svn: 277651
2016-08-03 20:54:38 +00:00
Sebastian Pop
ab400702fd GVN-hoist: compute DFS numbers once
With this patch we compute the DFS numbers of instructions only once and update
them during the code generation when an instruction gets hoisted.

Differential Revision: https://reviews.llvm.org/D23021

llvm-svn: 277650
2016-08-03 20:54:36 +00:00
Sebastian Pop
ed49a58ae9 GVN-hoist: compute MSSA once per function (PR28670)
With this patch we compute the MemorySSA once and update it in the code generator.

Differential Revision: https://reviews.llvm.org/D22966

llvm-svn: 277649
2016-08-03 20:54:33 +00:00
Reid Kleckner
6f4f80c273 Revert "[CloneFunction] Don't remove side effecting calls"
This reverts commit r277611 and the followup r277614.

Bootstrap builds and chromium builds are crashing during inlining after
this change.

llvm-svn: 277642
2016-08-03 20:01:01 +00:00
George Burgess IV
46c38fa458 [MSSA] clang-format. NFC.
Didn't want to fold this in with r277640, since it touches bits that
aren't entirely related to r277640.

llvm-svn: 277641
2016-08-03 19:59:11 +00:00
George Burgess IV
556bbef48d [MSSA] Add special handling for invariant/constant loads.
This is a follow-up to r277637. It teaches MemorySSA that invariant
loads (and loads of provably constant memory) are always liveOnEntry.

llvm-svn: 277640
2016-08-03 19:57:02 +00:00
Sanjay Patel
b18bd3fefa [InstCombine] use m_APInt to allow icmp eq (srem X, C1), C2 folds for splat constant vectors
llvm-svn: 277638
2016-08-03 19:48:40 +00:00
George Burgess IV
e244b0eafa [MSSA] Add logic for special handling of atomics/volatiles.
This patch makes MemorySSA recognize atomic/volatile loads, and makes
MSSA treat said loads specially. This allows us to be a bit more
aggressive in some cases.

Administrative note: Revision was LGTM'ed by reames in person.
Additionally, this doesn't include the `invariant.load` recognition in
the differential revision, because I feel it's better to commit that
separately. Will commit soon.

Differential Revision: https://reviews.llvm.org/D16875

llvm-svn: 277637
2016-08-03 19:39:54 +00:00
Tobias Grosser
1b0721730e [InstCombine] Refactor optimization of zext(or(icmp, icmp)) to enable more aggressive cast-folding
Summary:
InstCombine unfolds expressions of the form `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` so that, in a later iteration of InstCombine, the exposed `zext(icmp)` instructions can be optimized. We now perform this unfolding and the subsequent `zext(icmp)` optimization together. Since the unfolding no longer happens separately, we also re-enable the folding of `logic(cast(icmp), cast(icmp))` expressions to `cast(logic(icmp, icmp))`, which had been disabled due to its interference with the unfolding transformation.

Tested via `make check` and `lnt`.

Background
==========

For a better understanding of how it came to this change, we summarize its history below. In commit r275989 we already tried to enable the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))`, which had to be reverted in r276106 because it could lead to an endless loop in InstCombine (also see http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160718/374347.html).

The root of this problem is that `visitZExt()` in InstCombineCasts.cpp also contains the reverse of the above folding transformation: it unfolds `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` in order to expose `zext(icmp)` operations that can then be eliminated by subsequent iterations of InstCombine. However, before these `zext(icmp)` operations could be eliminated, the folding from r275989 could kick in and cause InstCombine to switch endlessly back and forth between the folding and the unfolding transformation.

This is why we now combine the `zext`-unfolding and the elimination of the exposed `zext(icmp)` into a single step: it lets us keep the cast-folding of `logic(cast(icmp), cast(icmp))` without entering an endless loop again.
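The two shapes InstCombine was flipping between, written out in C++ for an i32 result (a hedged illustration of the semantics, not the IR itself):

    #include <cassert>
    #include <cstdint>

    // Unfolded form: or(zext(icmp), zext(icmp)).
    static uint32_t unfolded(int a, int b) {
      return static_cast<uint32_t>(a == 0) | static_cast<uint32_t>(b > 5);
    }
    // Folded form: zext(or(icmp, icmp)).
    static uint32_t folded(int a, int b) {
      return static_cast<uint32_t>((a == 0) | (b > 5));
    }

    int main() {
      for (int a = -2; a <= 2; ++a)
        for (int b = 3; b <= 8; ++b)
          assert(unfolded(a, b) == folded(a, b));
    }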

Details on the submitted changes
================================

- In `visitZExt()` we combine the unfolding and optimization of `zext` instructions.
- In `transformZExtICmp()` we have to use `Builder->CreateIntCast()` instead of `CastInst::CreateIntegerCast()` to make sure that the new `CastInst` is inserted in a `BasicBlock`. The new calls to `transformZExtICmp()` that we introduce in `visitZExt()` would otherwise cause the corresponding assertions to be triggered (in our case this happened, for example, with lnt for the MultiSource/Applications/sqlite3 and SingleSource/Regression/C++/EH/recursive-throw tests). The subsequent usage of `replaceInstUsesWith()` is necessary to ensure that the new `CastInst` replaces the `ZExtInst` accordingly.
- In InstCombineAndOrXor.cpp we again allow the folding of casts on `icmp` instructions.
- The instruction order in the optimized IR for the zext-or-icmp.ll test case is different with the introduced changes.
- The test cases in zext.ll have been adapted from the reverted commits r275989 and r276105.

Reviewers: grosser, majnemer, spatel

Subscribers: eli.friedman, majnemer, llvm-commits

Differential Revision: https://reviews.llvm.org/D22864

Contributed-by: Matthias Reisinger <d412vv1n@gmail.com>
llvm-svn: 277635
2016-08-03 19:30:35 +00:00
Sanjay Patel
bdf8805e8c [InstCombine] use m_APInt to allow icmp (binop X, Y), C folds with constant splat vectors
This removes the restriction for the icmp constant, but as noted by the FIXME comments, 
we still need to change individual checks for binop operand constants.

llvm-svn: 277629
2016-08-03 18:59:03 +00:00
David Majnemer
4c6a38add6 [CloneFunction] Don't crash if the value map doesn't hold something
It is possible for the value map to not have an entry for some value
that has already been removed.

I don't have a testcase; this is fall-out from a buildbot.

llvm-svn: 277614
2016-08-03 17:37:10 +00:00
Sanjay Patel
eaab16c374 use local variables; NFC
llvm-svn: 277612
2016-08-03 17:23:08 +00:00
David Majnemer
629ccdfd98 [CloneFunction] Don't remove side effecting calls
We were able to figure out that the result of a call is some constant.
While propagating that fact, we added the constant to the value map.
This is problematic because it results in us losing the call site when
processing the value map.

This fixes PR28802.

llvm-svn: 277611
2016-08-03 17:12:47 +00:00
Renato Golin
02cd526c3c Revert "Teach CorrelatedValuePropagation to mark adds as no wrap"
This reverts commit r277592, trying to fix the AArch64 42VMA buildbot.

llvm-svn: 277607
2016-08-03 16:20:48 +00:00
Gil Rapaport
6f90b5aa48 [Loop Vectorizer] Move store-predication into its own function, remove obsolete comment (NFC)
Differential Revision: https://reviews.llvm.org/D23013

llvm-svn: 277595
2016-08-03 13:23:43 +00:00
Artur Pilipenko
2cd454f768 Teach CorrelatedValuePropagation to mark adds as no wrap
Use LVI to prove that adds do not wrap. The change is motivated by bug https://llvm.org/bugs/show_bug.cgi?id=28620 and is the first step towards fixing that problem.
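The idea in miniature (a hedged sketch, not the actual LVI/CVP code): if analysis proves operand ranges [aLo, aHi] and [bLo, bHi] for an i32 add, and the extreme sums still fit in i32, the add can be marked nsw.

    #include <cstdint>
    #include <limits>

    bool addCanBeNSW(int64_t aLo, int64_t aHi, int64_t bLo, int64_t bHi) {
      const int64_t Min = std::numeric_limits<int32_t>::min();
      const int64_t Max = std::numeric_limits<int32_t>::max();
      // Sums are evaluated in i64, so they cannot themselves overflow here.
      return aLo + bLo >= Min && aHi + bHi <= Max;
    }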

Reviewed By: sanjoy

Differential Revision: http://reviews.llvm.org/D23059

llvm-svn: 277592
2016-08-03 13:11:39 +00:00
David Callahan
2e2197d231 [ADCE] Refactor anticipating new functionality (NFC)
Summary:
This is the first refactoring before adding new functionality.
Add a class wrapper for the functions and a container for
the state associated with the transformation.

No functional change

Reviewers: majnemer, nadav, mehdi_amini

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23065

llvm-svn: 277565
2016-08-03 04:28:39 +00:00
George Burgess IV
591266dd81 [MSSA] Fix a caching bug.
This fixes a bug where we'd sometimes cache overly-conservative results
with our walker. This bug was made more obvious by r277480, which makes
our cache far more spotty than it was. Test case is llvm-unit, because
we're likely going to use CachingWalker only for def optimization in the
future.

The bug stems from a place where the walker assumed that
`DefNode.Last` was a valid target to cache to when failing to optimize
phis. This is sometimes incorrect if we have a cache hit. The fix is to
use the thing we *can* assume is a valid target to cache to. :)

llvm-svn: 277559
2016-08-03 01:22:19 +00:00
Chandler Carruth
dc1947bc2e [Inliner] clang-format various parts of the inliner prior to changes
here. NFC.

llvm-svn: 277557
2016-08-03 01:02:31 +00:00
Ivan Krasin
2b69153c7d Add -lowertypetests-bitsets-level to control bitsets generation.
Summary:
Sometimes, bitsets could get really large (>300k entries) and
we might want to drop a check, as it would have too much cost.

Adding a flag to control how much penalty we are willing to pay
for bitsets.
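A sketch of how such a knob is typically declared with LLVM's command-line library (the flag name comes from the commit title; the description and default value here are assumptions):

    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    static cl::opt<unsigned> BitsetsLevel(
        "lowertypetests-bitsets-level",
        cl::desc("How much penalty to accept when generating bitset checks"),
        cl::init(1), cl::Hidden);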

Reviewers: kcc

Differential Revision: https://reviews.llvm.org/D23088

llvm-svn: 277556
2016-08-03 00:59:38 +00:00
Daniel Berlin
f7c2acc123 Support for lifetime begin/end markers in the MemorySSA use optimizer
Summary: Depends on D23072

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23076

llvm-svn: 277553
2016-08-03 00:01:46 +00:00
Sanjay Patel
18b8942e30 [InstCombine] replace dyn_casts with matches; NFCI
Clean-up before changing this to allow folds for vectors.

llvm-svn: 277538
2016-08-02 22:38:33 +00:00
Piotr Padlewski
4260831e08 Imported statistics types changes
Reviewers: tejohnson, eraman

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22980

llvm-svn: 277534
2016-08-02 22:18:47 +00:00
Daniel Berlin
fa1f2ddd00 Move to having a single real instructionClobbersQuery
Summary: We really want to move towards MemoryLocOrCall (or fix AA) everywhere, but for now, this lets us have a single instructionClobbersQuery.

Reviewers: george.burgess.iv

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23072

llvm-svn: 277530
2016-08-02 21:57:52 +00:00