Commit Graph

Lang Hames
9793d69437 [ADT] Add a 'find_as' operation to DenseSet.
This operation is analogous to its counterpart in DenseMap: It allows lookup
via cheap-to-construct keys (provided that getHashValue and isEqual are
implemented for the cheap key-type in the DenseMapInfo specialization).
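
A minimal sketch of the pattern (the Name type and its DenseMapInfo specialization below are hypothetical illustrations, not part of LLVM): the specialization provides getHashValue and isEqual overloads for a cheap StringRef lookup key, so find_as can probe the set without constructing a Name.

    #include "llvm/ADT/DenseSet.h"
    #include "llvm/ADT/StringRef.h"
    #include <string>

    // Hypothetical owning key type.
    struct Name { std::string Str; };

    namespace llvm {
    template <> struct DenseMapInfo<Name> {
      // Sentinel keys; assumes real names never start with a NUL byte.
      static Name getEmptyKey() { return {std::string("\0E", 2)}; }
      static Name getTombstoneKey() { return {std::string("\0T", 2)}; }
      static unsigned getHashValue(const Name &N) {
        return static_cast<unsigned>(hash_value(StringRef(N.Str)));
      }
      static unsigned getHashValue(StringRef S) { // cheap-key overload
        return static_cast<unsigned>(hash_value(S));
      }
      static bool isEqual(const Name &L, const Name &R) { return L.Str == R.Str; }
      static bool isEqual(StringRef L, const Name &R) { // cheap-key overload
        return L == StringRef(R.Str);
      }
    };
    } // namespace llvm

    // Look up by StringRef without materializing a Name (and its std::string).
    bool contains(llvm::DenseSet<Name> &Set, llvm::StringRef S) {
      return Set.find_as(S) != Set.end();
    }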

Thanks to Chandler for the review.

llvm-svn: 220168
2014-10-19 19:36:33 +00:00
Chandler Carruth
4f93ec40bc Do a better and more complete job of preserving metadata when combining
loads.

This handles many more cases than just the AA metadata, some of them
suggested by Hal in his review of the AA metadata handling patch. I've
tried to test this behavior where tractable to do so.

I'll point out that I have specifically *not* included a test for
debuginfo because it was going to require 2 or 3 times as much work to
craft some input which would survive the "helpful" stripping of debug
info metadata that doesn't match the desired schema. This is another
good example of why the current state of write-ability for our debug
info metadata is unacceptable. I spent over 30 minutes trying to conjure
some test case that would survive, even copying from other debug info
tests, but it always failed to survive with no explanation of why or how
I might fix it. =[

llvm-svn: 220165
2014-10-19 10:46:46 +00:00
Chandler Carruth
9c87bbcef7 Move previously dead code to handle computing the known bits of an alias
up to where it actually works as intended. The problem is that
a GlobalAlias isa GlobalValue and so the prior block handled all of the
cases.

This allows us to constant fold based on the actual constant expression
in the global alias. As an example, see the last function in the newly
added test case which explicitly aligns an unaligned pointer using
constant expression math. Without this change, we fail to see that and
fold an alignment test to zero.
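
The flavor of constant-expression math involved, as a standalone sketch (the 32-byte boundary is illustrative, not the test's actual constant): once known-bits analysis can see through the alias to an expression like this, a test such as (p & 31) folds to zero.

    #include <cassert>
    #include <cstdint>

    // Round an address up to a 32-byte boundary; the result always has
    // its low five bits clear, so an alignment test on it is a constant.
    uint64_t alignUp32(uint64_t Addr) { return (Addr + 31) & ~uint64_t(31); }

    int main() {
      for (uint64_t A : {0ull, 1ull, 31ull, 32ull, 1000ull})
        assert((alignUp32(A) & 31) == 0);
      return 0;
    }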

llvm-svn: 220164
2014-10-19 09:06:56 +00:00
David Majnemer
6e9cd62474 InstCombine: (sub (or A B) (xor A B)) --> (and A B)
The following implements the transformation:
(sub (or A B) (xor A B)) --> (and A B).
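
A quick sanity check of the identity (a standalone snippet, not part of the patch): (A | B) sets every bit present in either operand, and (A ^ B) drops the bits common to both, so the difference is exactly the common bits, (A & B).

    #include <cassert>
    #include <cstdint>

    int main() {
      const uint32_t Vals[] = {0, 1, 7, 0x12345678u, 0xDEADBEEFu, ~0u};
      for (uint32_t A : Vals)
        for (uint32_t B : Vals)
          // No borrow can occur: (A^B) and (A&B) have disjoint set bits,
          // and (A|B) == (A^B) + (A&B).
          assert(((A | B) - (A ^ B)) == (A & B));
      return 0;
    }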

Patch by Ankur Garg!

Differential Revision: http://reviews.llvm.org/D5719

llvm-svn: 220163
2014-10-19 08:32:32 +00:00
David Majnemer
4c15df3adb InstCombine: Optimize icmp eq/ne (shl Const2, A), Const1
The following implements the optimization for sequences of the form:
icmp eq/ne (shl Const2, A), Const1

Such sequences can be transformed to:
icmp eq/ne A, (TrailingZeros(Const1) - TrailingZeros(Const2))

This handles only the equality operators for now. Other operators need
to be handled.
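
A worked instance of the fold for constants where it applies, i.e. where Const1 is Const2 shifted left (here 24 == 3 << 3); this hedged snippet uses the GCC/Clang __builtin_ctz intrinsic to stand in for TrailingZeros:

    #include <cassert>
    #include <cstdint>

    int main() {
      const uint32_t Const2 = 3, Const1 = 24;
      // ctz(24) - ctz(3) == 3 - 0 == 3: the fold compares A against this.
      const uint32_t FoldedAmt = __builtin_ctz(Const1) - __builtin_ctz(Const2);
      for (uint32_t A = 0; A < 30; ++A)
        assert(((Const2 << A) == Const1) == (A == FoldedAmt));
      return 0;
    }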

Patch by Ankur Garg!

llvm-svn: 220162
2014-10-19 08:23:08 +00:00
Chandler Carruth
b7ea0ce700 Fix a long-standing miscompile in the load analysis that was uncovered
by my refactoring of this code.

The method isSafeToLoadUnconditionally assumes that the load will
proceed with the preferred type alignment. Given that, it has to ensure
that the alloca or global is at least that aligned. It has always done
this historically when a datalayout is present, but has never checked it
when the datalayout is absent. When I refactored the code in r220156,
I exposed this path when datalayout was present and that turned the
latent bug into a patent bug.

This fixes the issue by just removing the special case which allows
folding things without datalayout. This isn't worth the complexity of
trying to tease apart when it is or isn't safe without actually knowing
the preferred alignment.
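
A hedged restatement of the resulting rule in stand-in code (not LLVM's actual implementation): a speculative load is only known safe if the base object is at least as aligned as the load's preferred type alignment, and without a DataLayout that preferred alignment is unknown, so the answer must be "no".

    #include <cstdint>

    bool safeToLoadUnconditionally(bool HaveDataLayout, uint64_t BaseAlign,
                                   uint64_t PreferredAlign) {
      if (!HaveDataLayout)
        return false; // the removed special case used to fold here anyway
      return BaseAlign >= PreferredAlign;
    }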

llvm-svn: 220161
2014-10-19 08:17:50 +00:00
Chandler Carruth
318612525a Switch how the datalayout availability test is handled in this code to
make much more sense and in theory be more correct.

If you trace the code alllll the way back to when it was first
introduced, the comments make it slightly more clear what was going on
here. At that time, the only way Base != V was if DL (then TD) was
non-null. As a consequence, if DL *was* null, that meant we were loading
directly from the alloca or global found above the test. After
refactoring, this has become at least terribly subtle and potentially
incorrect. There are many forms of pointer manipulation that can be
traversed without DataLayout, and some of them would in fact change the
size of the object being loaded vs. allocated.

Rather than this subtlety, I've hoisted the actual 'return true' bits
into the code which actually found an alloca or global and based them on
the loaded pointer being that alloca or global. This is both more clear
and safer. I've also added comments about exactly why this set of
predicates is used.

I've also corrected a misleading comment about globals -- if overridden
they may not just have a different size, they may be null and completely
unsafe to load from!

Hopefully this confuses the next reader a bit less. I don't have any
test cases or anything, the patch is motivated strictly to improve the
readability of the code.

llvm-svn: 220156
2014-10-19 00:42:16 +00:00
Bob Wilson
113df60c0c Use triple predicate functions instead of checking values directly. NFC.
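
An illustrative before/after of the kind of change this describes (not the actual diff from this commit):

    #include "llvm/ADT/Triple.h"

    bool isDarwin(const llvm::Triple &T) {
      // before: T.getOS() == llvm::Triple::Darwin ||
      //         T.getOS() == llvm::Triple::MacOSX || ...
      return T.isOSDarwin(); // after: one predicate covers all Darwin OSes
    }
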
llvm-svn: 220155
2014-10-19 00:39:30 +00:00
Chandler Carruth
4768aadcc2 Rename 'TD' to 'DL' in this function as the argument is now a DataLayout
argument.

llvm-svn: 220151
2014-10-18 23:47:22 +00:00
Chandler Carruth
480012ecff Fix the other comment to use modern doxygen style and be a bit more
direct. Notably, comment on the fact that the loaded type is significant
in that it determines how wide of an access must be safe.

llvm-svn: 220150
2014-10-18 23:46:17 +00:00
Chandler Carruth
5cac61e244 More formatting cleanup brought to you by clang-format.
llvm-svn: 220149
2014-10-18 23:41:25 +00:00
Chandler Carruth
693361f496 Clean up doxygen syntax and reword comments to flow better, have a brief
section, and not have unfinished sentence fragments.

llvm-svn: 220147
2014-10-18 23:31:55 +00:00
Chandler Carruth
908f1ee18d Clean up the formatting and trailing whitespace of a routine before
editing it.

llvm-svn: 220146
2014-10-18 23:19:03 +00:00
Lang Hames
7e26130bd2 [PBQP] Move register-allocation specific PBQP code into RegAllocPBQP.h.
Just clean-up - no functional change.

llvm-svn: 220145
2014-10-18 22:23:55 +00:00
Lang Hames
86d69f67b6 [PBQP] Replace the interference-constraints algorithm with a faster version
loosely based on linear scan.

On x86-64 this is good for a ~2% drop in compile time on the nightly test suite.

llvm-svn: 220143
2014-10-18 17:26:07 +00:00
Chandler Carruth
472e6c1ad4 Preserve AA metadata when combining (cast (load (...))) -> (load (cast
(...))).

llvm-svn: 220141
2014-10-18 11:00:12 +00:00
Chandler Carruth
b95f276385 [InstCombine] Do an about-face on how LLVM canonicalizes (cast (load
...)) and (load (cast ...)): canonicalize toward the former.

Historically, we've tried to load using the type of the *pointer*, and
tried to match that type as closely as possible removing as many pointer
casts as we could and trading them for bitcasts of the loaded value.
This is deeply and fundamentally wrong.

Repeat after me: memory does not have a type! This was a hard lesson for
me to learn working on SROA.

There is only one thing that should actually drive the type used for
a pointer, and that is the type which we need to use to load from that
pointer. Matching up pointer types to the loaded value types is very
useful because it minimizes the physical size of the IR required for
no-op casts. Similarly, the only thing that should drive the type used
for a loaded value is *how that value is used*! Again, this minimizes
casts. And in fact, the *only* thing motivating types in any part of
LLVM's IR are the types used by the operations in the IR. We should
match them as closely as possible.

I've ended up removing some tests here as they were testing bugs or
behavior that is no longer present. Mostly though, this is just cleanup
to let the tests continue to function as intended.

The only fallout I've found so far from this change was SROA and I have
fixed it to not be impeded by the different type of load. If you find
more places where this change causes optimizations not to fire, those
too are likely bugs where we are assuming that the type of pointers is
"significant" for optimization purposes.

llvm-svn: 220138
2014-10-18 06:36:22 +00:00
Chandler Carruth
3f24e95e6a Remove a test that was ported from the old llvm-gcc frontend test suite.
This test is pretty awesome. It is claiming to test devirtualization.
However, the code in question is not in fact devirtualized by LLVM. If
you take the original C++ test case and run it through Clang at -O3 we
fail to devirtualize it completely. It also isn't a sufficiently focused
test case.

The *reason* we fail to devirtualize it isn't because of any missing
instcombine though. Instead, it is because we fail to emit an available
externally vtable, and thus the vtable is just an external, completely
opaque symbol. If I cause the vtable to be emitted, we successfully
devirtualize things.

Anyways, I'm just removing it because it is providing negative value at
this point: it isn't representative of the output of Clang really, LLVM
isn't doing the transform it claims to be testing, LLVM's failure to do
the transform isn't actually an LLVM bug at all and we shouldn't be
testing for it here, and finally the test is written in such a way that
it will trivially pass even when the point of the test is failing.

llvm-svn: 220137
2014-10-18 06:36:18 +00:00
Nick Kledzik
00ab9f637f [llvm-objdump] don't test timestamp dump as that is time zone dependent
llvm-svn: 220123
2014-10-18 02:28:01 +00:00
Nick Kledzik
c1ca67e9a3 [llvm-objdump] enhance test case for mach-o -private-headers
llvm-svn: 220120
2014-10-18 01:50:55 +00:00
Nick Kledzik
2baf7b8a8e [llvm-objdump] Fix mach-o binding decompression error
llvm-svn: 220119
2014-10-18 01:21:02 +00:00
Chandler Carruth
3111128783 [SROA] Change how SROA does vector-based promotion of allocas to handle
cases where the alloca type, the load types, and the store types used
all disagree.

Previously, the only way that vector-based promotion occurred was if the
alloca type was a vector type. This was one of the *very* few remaining
uses of the alloca's type to guide SROA/mem2reg left in LLVM. It turns
out it was a bad idea.

The alloca type can change very easily based on the mixture of types
loaded and stored to that alloca. We shouldn't be relying on it as
a signal for very much. Instead, the source of truth should be loads and
stores. We should canonicalize the loads and stores as much as possible
and then rely on them exclusively in SROA.

When looking at loads and stores, we may find many different candidate
vector types. This change will let SROA try all of them to find a vector
type which is a viable way to promote the entire alloca to a vector
register.
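
A hedged sketch of that selection loop with stand-in types (not SROA's real interfaces): collect the vector types observed in loads and stores of the alloca, then pick the first candidate that every access can be rewritten to use.

    #include <vector>

    struct CandidateVecTy { unsigned NumElts, EltBits; }; // stand-in type

    template <typename AccessT, typename ViableFn>
    const CandidateVecTy *
    selectVectorType(const std::vector<CandidateVecTy> &Candidates,
                     const std::vector<AccessT> &Accesses, ViableFn ViableFor) {
      for (const CandidateVecTy &Ty : Candidates) {
        bool OK = true;
        for (const AccessT &A : Accesses)
          OK &= ViableFor(Ty, A); // every load/store must fit this type
        if (OK)
          return &Ty;
      }
      return nullptr; // no viable vector type: fall back to scalar promotion
    }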

With this change, it becomes possible to do better canonicalization and
optimization of loads and stores without breaking SROA in random ways,
and that should allow fixing a core source of performance loss in hot
numerical loops such as those in Eigen.

llvm-svn: 220116
2014-10-18 00:44:02 +00:00
Aaron Watry
ea88a00c76 R600/SI: Add global atomicrmw xchg
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220110
2014-10-17 23:33:03 +00:00
Aaron Watry
d852df8bcb R600/SI: Add global atomicrmw xor
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220109
2014-10-17 23:33:01 +00:00
Aaron Watry
5f5371bf76 R600/SI: Add global atomicrmw or
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220108
2014-10-17 23:32:59 +00:00
Aaron Watry
d94b6445ca R600/SI: Add global atomicrmw min/umin
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220107
2014-10-17 23:32:57 +00:00
Aaron Watry
0c7ec3cff8 R600/SI: Add global atomicrmw max/umax
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220106
2014-10-17 23:32:56 +00:00
Aaron Watry
f93e43a3b6 R600/SI: Add global atomicrmw and
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220105
2014-10-17 23:32:54 +00:00
Aaron Watry
ad528e135c R600/SI: Add global atomicrmw sub
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220104
2014-10-17 23:32:52 +00:00
Aaron Watry
20f3932334 R600/SI: Fix/add tests for atomicrmw add
The previous tests claimed to test constant offsets in the function name,
but the tests weren't actually testing them.

Clone the tests, and do testing of all combinations of the following:
1) with/without constant pointer offset
2) 32/64-bit addressing modes
3) Usage and non-usage of the return value from the atomicrmw

Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220103
2014-10-17 23:32:50 +00:00
Aaron Watry
33006bfc61 R600: Rename atomic_load global tests to atomic_add
The function name now matches what it's actually testing.

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>
llvm-svn: 220102
2014-10-17 23:32:49 +00:00
Evgeniy Stepanov
b8d9dfd62d [msan] Fix handling of byval arguments with large alignment.
MSan param-tls slots are 8-byte aligned. This change clips
alignment of memcpy into param-tls to 8.
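
A sketch of the clamp being described, as a hypothetical helper rather than MSan's actual code: param-tls slots are only 8-byte aligned, so a copy into them must not claim the argument's larger alignment.

    #include <algorithm>
    #include <cstdint>

    constexpr uint64_t kParamTLSAlign = 8; // param-tls slot alignment

    uint64_t paramTLSCopyAlign(uint64_t ByValArgAlign) {
      return std::min(ByValArgAlign, kParamTLSAlign);
    }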

llvm-svn: 220101
2014-10-17 23:29:44 +00:00
Pete Cooper
bedb6f3c4b Check for dynamic alloca's when selecting lifetime intrinsics.
TL;DR: Indexing a map with [] creates entries for missing keys.

The long version:

When selecting lifetime intrinsics, we index the *static* alloca map with the AllocaInst we find for that lifetime.  Trouble is, we don't first check to see if this is a dynamic alloca.

On the attached example, this causes a dynamic alloca to create an entry in the static map, and returns 0 (the default) as the frame index for that lifetime. 0 was used for the frame index of the stack protector which, given that it now has a lifetime, is coloured and merged with other stack slots.

PEI would later trigger an assert because it expects the stack protector to not be dead.

This fix ensures that we only get frame indices for static allocas, i.e., those in the map. Dynamic ones are effectively dropped, which is suboptimal, but at least isn't completely broken.

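The pitfall in miniature, with std::map as a stand-in for the static alloca map: operator[] default-constructs a value for a missing key, here silently yielding frame index 0, while find() leaves the map untouched.

    #include <cassert>
    #include <map>

    int main() {
      std::map<const void *, int> FrameIndexMap;
      const void *DynamicAlloca = &FrameIndexMap; // hypothetical key

      int Bogus = FrameIndexMap[DynamicAlloca]; // inserts {key, 0}!
      assert(Bogus == 0 && FrameIndexMap.size() == 1);

      FrameIndexMap.clear();
      auto It = FrameIndexMap.find(DynamicAlloca); // no insertion
      assert(It == FrameIndexMap.end() && FrameIndexMap.empty());
      return 0;
    }
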
rdar://problem/18672951

llvm-svn: 220099
2014-10-17 22:59:33 +00:00
Bill Schmidt
2a091228e8 [PowerPC] Disable +vsx RUN line for fma.ll due to inconsistency on other builders
llvm-svn: 220094
2014-10-17 21:32:22 +00:00
Rafael Espindola
eb9335bdab Revert "TRE: make TRE a bit more aggressive"
This reverts commit r219899.

This also updates byval-tail-call.ll to make it clear what was breaking.
Adding r219899 again will cause the load/store to disappear.

llvm-svn: 220093
2014-10-17 21:25:48 +00:00
Bill Schmidt
4290f0e275 [PowerPC] Change assert to better form
llvm-svn: 220092
2014-10-17 21:19:59 +00:00
Matt Arsenault
8a07439402 R600/SI: Remove redundant setting of instruction bits
These are all set on the instruction base classes.

llvm-svn: 220091
2014-10-17 21:13:11 +00:00
Bill Schmidt
3e4d52c10c [PowerPC] Change liveness testing in VSX FMA mutation pass
With VSX enabled, LLVM crashes when compiling
test/CodeGen/PowerPC/fma.ll.  I traced this to the liveness test
that's revised in this patch. The interval test is designed to only
work for virtual registers, but in this case the AddendSrcReg is
physical. Since there is already a walk of the MIs between the
AddendMI and the FMA, I added a check for def/kill of the AddendSrcReg
in that loop.  At Hal Finkel's request, I converted the liveness test
to an assert restricted to virtual registers.

I've changed the fma.ll test to have VSX and non-VSX variants so we
can test both kinds of multiply-adds.

llvm-svn: 220090
2014-10-17 21:02:44 +00:00
Peter Collingbourne
55ac2d4e1f Disable ccache for go tests.
Should fix llvm-clang-lld-x86_64-debian-fast bot.

llvm-svn: 220071
2014-10-17 18:32:36 +00:00
Matt Arsenault
58e20fe65d Fix typo
llvm-svn: 220068
2014-10-17 18:02:31 +00:00
Matt Arsenault
b82c7f40b2 R600/SI: Also check for FPImm literal constants
llvm-svn: 220067
2014-10-17 18:00:50 +00:00
Matt Arsenault
9be7600d7c R600/SI: Allow commuting with source modifiers
llvm-svn: 220066
2014-10-17 18:00:48 +00:00
Matt Arsenault
2d0382bc47 R600/SI: Simplify code with hasModifiersSet
llvm-svn: 220065
2014-10-17 18:00:45 +00:00
Matt Arsenault
9c6d84b515 R600/SI: Fix general commuting breaking src mods
The generic code trying to use findCommutedOpIndices won't
understand that it needs to swap the modifier operands also,
so it should fail if they are set.

llvm-svn: 220064
2014-10-17 18:00:43 +00:00
Matt Arsenault
c43bde1e40 R600/SI: Cleanup code with ChangeToFPImmediate
llvm-svn: 220063
2014-10-17 18:00:41 +00:00
Matt Arsenault
69078d03ff R600/SI: Allow commuting fp immediates
llvm-svn: 220062
2014-10-17 18:00:39 +00:00
Matt Arsenault
082244cff4 R600/SI: Use early return instead of checking condition twice
Any commutable instruction will have at least src1.

llvm-svn: 220061
2014-10-17 18:00:37 +00:00
Peter Collingbourne
30dcbffe24 We also need to catch OSError here.
llvm-svn: 220058
2014-10-17 17:46:46 +00:00
Matt Arsenault
0b41e1680b R600/SI: Use complex pattern for MUBUF load patterns.
This eliminates a use of the SI_ADDR64_RSRC pseudo.

llvm-svn: 220057
2014-10-17 17:43:00 +00:00
Matt Arsenault
3ce8aba254 R600/SI: Remove SI_BUFFER_RSRC pseudo
Just use REG_SEQUENCE directly, so there are fewer
instructions to deal with later.

llvm-svn: 220056
2014-10-17 17:42:56 +00:00