Commit Graph

4057 Commits

Author SHA1 Message Date
Duncan Sands
5ff30e70f8 Just mark the sign bit as known zero, rather than also marking other
irrelevant bits of the LHS as known zero.  Fixes PR12541.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155818 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-30 11:56:58 +00:00
Bill Wendling
bfbab99b58 Second attempt at PR12573:
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If
the pass is *sure* that it thinks it knows what it's doing, then it may go ahead
and specify that the landing pad can have its critical edge split. The loop
unswitch pass is one of these passes. It will split all critical edges going
from a loop to a landing pad outside the loop. Doing so retains important
loop analysis information, such as loop-simplify form.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155817 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-30 10:44:54 +00:00
Rafael Espindola
9719cf329b Make sure HoistInsertPosition finds a position that is dominated by all
inputs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155809 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-30 03:53:06 +00:00
Hal Finkel
e32e5440d6 Don't vectorize target-specific types (ppc_fp128, x86_fp80, etc.).
Target-specific types should not be vectorized. As a practical matter,
these types are already register matched (at least in the x86 case),
and codegen does not always work correctly (at least in the ppc case,
and this is not worth fixing because ppc_fp128 is currently broken and
will probably go away soon).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155729 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-27 19:34:00 +00:00
Dan Gohman
03e091f0b5 Reapply r155682, making constant folding more consistent, with a fix to work
properly with how the code handles all-undef PHI nodes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155721 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-27 17:50:22 +00:00
NAKAMURA Takumi
d213ee7643 Revert r155682, "Use ConstantExpr::getExtractElement when constant-folding vectors"
It broke the stage2 build. stage1/clang sometimes crashed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155699 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-27 07:59:20 +00:00
Dan Gohman
97b44f9b80 Use ConstantExpr::getExtractElement when constant-folding vectors
instead of getAggregateElement. This has the advantage of being
more consistent and allowing higher-level constant folding to
proceed even if an inner extract element cannot be folded.

Make ConstantFoldInstruction call ConstantFoldConstantExpression
on the instruction's operands, making it more consistent with 
ConstantFoldConstantExpression itself. This makes sure that
ConstantExprs get TargetData-aware folding before being handed
off as operands for further folding.

This causes more expressions to be folded, but due to a known
shortcoming in constant folding, this currently has the side effect
of stripping a few more nuw and inbounds flags in the non-targetdata
side of constant-fold-gep.ll. This is mostly harmless.

This fixes rdar://11324230.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155682 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-27 00:54:36 +00:00
Chad Rosier
c1fc5e4464 Add instcombine patterns for the following transformations:
(x & y) | (x ^ y) -> x | y 
 (x & y) + (x ^ y) -> x | y 

Patch by Manman Ren.
rdar://10770603
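
These identities hold because x & y and x ^ y never share a set bit, so OR and ADD agree on their sum. As a small standalone check (not part of the patch), they can be verified exhaustively for narrow operands:

  #include <cassert>
  #include <cstdint>

  int main() {
    // Exhaustively check both identities over all 8-bit operand pairs.
    for (uint32_t x = 0; x < 256; ++x) {
      for (uint32_t y = 0; y < 256; ++y) {
        assert(((x & y) | (x ^ y)) == (x | y));
        // x & y and x ^ y never share a set bit, so '+' cannot carry and
        // therefore agrees with '|'.
        assert(((x & y) + (x ^ y)) == (x | y));
      }
    }
    return 0;
  }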


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155674 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-26 23:29:14 +00:00
Chandler Carruth
464bda3a16 Teach the reassociate pass to fold chains of multiplies with repeated
elements to minimize the number of multiplies required to compute the
final result. This uses a heuristic to attempt to form near-optimal
binary exponentiation-style multiply chains. While there are some cases
it misses, it seems to do at least a decent job on a very diverse range of
inputs.

Initial benchmarks show no interesting regressions, and an 8%
improvement on SPASS. Let me know if any other interesting results (in
either direction) crop up!

Credit to Richard Smith for the core algorithm, and helping code the
patch itself.
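
For reference only (this is not the pass's code), the multiply chains the heuristic aims for correspond to square-and-multiply exponentiation, which needs roughly 2*log2(n) multiplies instead of the n-1 of the naive chain. A minimal standalone sketch:

  #include <cstdio>

  // Square-and-multiply exponentiation: roughly 2*log2(n) multiplies
  // instead of the n-1 multiplies of the naive chain x*x*...*x.
  double pow_by_squaring(double x, unsigned n) {
    double result = 1.0;
    while (n != 0) {
      if (n & 1)
        result *= x; // fold in this power of the running base
      x *= x;        // square the running base
      n >>= 1;
    }
    return result;
  }

  int main() {
    std::printf("%f\n", pow_by_squaring(2.0, 13)); // prints 8192.000000
    return 0;
  }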

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155616 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-26 05:30:30 +00:00
Chandler Carruth
3ef91f575e Actually delete now-empty file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155532 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-25 02:30:00 +00:00
Lang Hames
87aac6a877 Reverting r155468. Chris and Chandler have convinced me that it's dangerous and
in poor taste.

Talking through some alternate solutions with Chandler.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155530 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-25 02:16:54 +00:00
Nadav Rotem
80c1ea6f9b ConstantFoldSelectInstruction swapped the operands of the select.
Fixes PR12592. Patch by Matt Pharr.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155480 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-24 20:18:49 +00:00
Lang Hames
1d9e68dab1 Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. This fixes
<rdar://problem/11291436>.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155468 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-24 18:58:36 +00:00
Chandler Carruth
7362ac7f8c Fix a crash on valid (if UB) bitcode that is produced for some global
constants in C++11 mode. I have no idea why it required such particular
circumstances to get here, the code seems clearly to rely upon unchecked
assumptions.

Specifically, when we decide to form an index into a struct type, we may
have gone through (at least one) zero-length array indexing round, which
would have left the offset un-adjusted, and thus not necessarily valid
for use when indexing the struct type.

This is just a canonicalization step, so the correct thing is to refuse
to canonicalize nonsensical GEPs of this form. Implemented, and test
case added.

Fixes PR12642. Pair debugged and coded with Richard Smith. =] I credit
him with most of the debugging, and preventing me from writing the wrong
code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155466 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-24 18:42:47 +00:00
Jakob Stoklund Olesen
72847f3057 Reapply r155136 after fixing PR12599.
Original commit message:

Defer some shl transforms to DAGCombine.

The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.
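
As a standalone sanity check (not part of the patch), the first deferred form can be verified for unsigned values, where -1 << C corresponds to ~0u << C:

  #include <cassert>
  #include <cstdint>

  int main() {
    // For unsigned values, (X >> C) << C clears the low C bits, which is
    // exactly the bit-mask form; -1 << C corresponds to ~0u << C here.
    for (uint32_t x = 0; x < (1u << 16); ++x)
      for (unsigned c = 0; c < 16; ++c)
        assert(((x >> c) << c) == (x & (~0u << c)));
    return 0;
  }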

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155362 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-23 17:39:52 +00:00
Chandler Carruth
a3e3481c57 Tidy up this test more:
1) Make the checked assertions a bit more precise. We really want the
   canonical forms coming out of reassociate to be exactly what is
   expected.
2) Remove other passes, and switch the test to actually directly check
   that reassociate makes the important transforms and
   canonicalizations.
3) Fold in a related test case now that we're using FileCheck. Make the
   same tidying changes to it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155311 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-22 10:11:26 +00:00
Chandler Carruth
71f8bc37f2 FileCheck-ize a test, and tidy it up a touch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155310 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-22 10:11:23 +00:00
Jakob Stoklund Olesen
eece9dc81c Revert r155136 "Defer some shl transforms to DAGCombine."
While the patch was perfect and defect free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.

I'll put the patch back in after fixing the SelectionDAG problem.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155181 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-20 00:38:45 +00:00
Dan Gohman
8b74e5afda Avoid a bug in the path count computation, preventing an infinite
loop repeatedly making the same change. This is for rdar://11256239.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155160 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-19 21:50:46 +00:00
Jakob Stoklund Olesen
0d5fcae6cd Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155136 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-19 16:46:26 +00:00
Jakob Stoklund Olesen
0d77b9c29c Extract the broken part of XFAILed test into its own file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155081 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-19 00:20:38 +00:00
Jakob Stoklund Olesen
f5782e2d60 FileCheckize
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155010 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-18 17:01:26 +00:00
Jakob Stoklund Olesen
377bf1acb9 Nobody likes shifty instructions, but that was a bit strong.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@155009 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-18 16:44:44 +00:00
Joe Groff
41c3e9a326 FileCheckify, un-XFAIL SimplifyLibCalls/floor test
Fixes build on MSVC

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154970 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-18 00:36:07 +00:00
Joe Groff
d15c581100 Move win32 SimplifyLibcall test under Transforms
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154967 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-18 00:07:45 +00:00
Chandler Carruth
9e67db4af1 Flip the new block-placement pass to be on by default.
This is mostly to test the waters. I'd like to get results from FNT
build bots and other bots running on non-x86 platforms.

This feature has been pretty heavily tested over the last few months by
me, and it fixes several of the execution time regressions caused by the
inlining work by preventing inlining decisions from radically impacting
block layout.

I've seen very large improvements in yacr2 and ackermann benchmarks,
along with the expected noise across all of the benchmark suite whenever
code layout changes. I've analyzed all of the regressions and fixed
them, or found them to be impossible to fix. See my email to llvmdev for
more details.

I'd like for this to be in 3.1 as it complements the inliner changes,
but if any failures are showing up or anyone has concerns, it is just
a flag flip and so can be easily turned off.

I'm switching it on tonight to try and get at least one run through
various folks' performance suites in case SPEC or something else has
serious issues with it. I'll watch bots and revert if anything shows up.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154816 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-16 13:49:17 +00:00
Hal Finkel
bba23ed672 Fix an error in BBVectorize important for vectorizing pointer types.
When vectorizing pointer types it is important to realize that potential
pairs cannot be connected via the address pointer argument of a load or store.
This is because even after vectorization, the address is still a scalar because
the address of the higher half of the pair is implicit from the address of the
lower half (it need not be, and should not be, explicitly computed).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154735 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-14 07:32:50 +00:00
Hal Finkel
f3f5a1e6f7 Enhance BBVectorize to more-properly handle pointer values and vectorize GEPs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154734 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-14 07:32:43 +00:00
Hal Finkel
fc3665c875 Add support to BBVectorize for vectorizing selects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154700 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 20:45:45 +00:00
Dan Gohman
4423477548 Consider ObjC runtime calls objc_storeWeak and others which make a copy of
their argument as "escape" points for objc_retainBlock optimization.
This fixes rdar://11229925.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154682 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 18:28:58 +00:00
Dan Gohman
6c189ecbe6 Use the new Use-aware dominates method to apply the objc runtime
library return value optimization for phi uses. Even when the
phi itself is not dominated, the specific use may be dominated.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154647 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 01:08:28 +00:00
Dan Gohman
511568dd1f Don't move objc_autorelease calls past autorelease pool boundaries when
optimizing autorelease calls on phi nodes with null operands.
This fixes rdar://11207070.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154642 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13 00:59:57 +00:00
Andrew Trick
d9fc1ce809 Fix PR12513: Loop unrolling breaks with indirect branches.
Take this opportunity to generalize the indirectbr bailout logic for
loop transformations. CFG transformations will never get indirectbr
right, and there's no point trying.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154386 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-10 05:14:42 +00:00
Chandler Carruth
2450eca960 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154285 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-08 14:36:56 +00:00
Chandler Carruth
c0d18b6696 Fix ValueTracking to conclude that debug intrinsics are safe to
speculate. Without this, loop rotate (among many other places) would
suddenly stop working in the presence of debug info. I found this
looking at loop rotate, and have augmented its tests with a reduction
out of a very hot loop in yacr2 where failing to do this rotation costs
sometimes more than 10% in runtime performance, perturbing numerous
downstream optimizations.

This should have no impact on performance without debug info, but the
change in performance when debug info is enabled can be extreme. As
a consequence (and this is how I got to this yak), any profiling of
performance problems should be treated with deep suspicion -- they may
have been wildly inaccurate if debug info was enabled for profiling. =/
Just a heads up.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154263 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-07 19:22:18 +00:00
Chandler Carruth
9ceebb7e92 Sink the collection of return instructions until after *all*
simplification has been performed. This is a bit less efficient
(requires another ilist walk of the basic blocks) but shouldn't matter
in practice. More importantly, it's just too much work to keep track of
all the various ways the return instructions can be mutated while
simplifying them. This fixes yet another crasher, reported by Daniel
Dunbar.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154179 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 17:21:31 +00:00
Chandler Carruth
be2df1675d Tweak this test to ensure the inliner did indeed fire. Thanks to Richard
Smith for pointing this out in review.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154178 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 17:21:28 +00:00
Chandler Carruth
c0a7a1280c Actually finish this sentence in the comment the way I intended. Thanks
Matt for pointing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154158 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 01:19:38 +00:00
Chandler Carruth
6bbab86af9 Sink the return instruction collection until after we're done deleting
dead code, including dead return instructions in some cases. Otherwise,
we end up having a bogus pointer to a return instruction that blows up
much further down the road.

It turns out that this pattern is both simpler to code, easier to update
in the face of enhancements to the inliner cleanup, and likely cheaper
given that it won't add dead instructions to the list.

Thanks to John Regehr's numerous test cases for teasing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154157 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 01:11:52 +00:00
Dan Gohman
036ebfd874 Fix accidentally inverted logic from r152803, and make the
testcase slightly less trivial. This fixes rdar://11171718.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154118 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-05 20:27:21 +00:00
Hongbin Zheng
99c8a5a64a Add testcase for r154007, when a function has the optsize attribute,
the loop should be unrolled according to the value of OptSizeUnrollThreshold.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154014 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04 13:24:40 +00:00
Rafael Espindola
26c8dcc692 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04 12:51:34 +00:00
Stepan Dyatkovskiy
aad9c3f17a Fast fix for PR12343:
http://llvm.org/bugs/show_bug.cgi?id=12343

We have no trivial way to split edges that come from an indirect branch. It can be done with some tricks, but those need further discussion, and it remains dangerous because indirect branches are hard to control.

The fix forbids this case for unswitching.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153879 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-02 17:16:45 +00:00
Chandler Carruth
48ec3b50e7 Add some more testing to cover the remaining two cases where
always-inlining is disabled: recursive functions and indirectbr.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153833 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:36:17 +00:00
Chandler Carruth
6052eef8bd Fix a pretty scary bug I introduced into the always inliner with
a single missing character. Somehow, this had gone untested. I've added
tests for returns-twice logic specifically with the always-inliner that
would have caught this, and fixed the bug.

Thanks to Matt for the careful review and spotting this!!! =D

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153832 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:21:05 +00:00
Chandler Carruth
0b42f9dd2f Replace four tiny tests with various uses of grep and not with a single
test and FileCheck.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153831 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 10:11:17 +00:00
Chandler Carruth
f2286b0152 Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
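
A heavily simplified, hypothetical sketch of the breadth-first worklist walk described above (the names, types, and cost model here are illustrative, not LLVM's inline cost code):

  #include <deque>
  #include <set>
  #include <vector>

  // Hypothetical stand-ins; not LLVM's types.
  struct Inst { int cost; };
  struct Block {
    std::vector<Inst> insts;
    std::vector<Block *> succs; // successors not yet proven dead
  };

  // Returns true if the estimated cost of inlining this call site stays
  // under the threshold. The real code also re-maps operands through the
  // simplification mapping and tries to fold each instruction first.
  bool analyzeCallSite(Block *entry, int threshold) {
    std::deque<Block *> worklist;   // FIFO => breadth-first order
    std::set<Block *> visited;
    int cost = 0;

    worklist.push_back(entry);
    visited.insert(entry);
    while (!worklist.empty()) {
      Block *bb = worklist.front(); // pop from the *front* of the list
      worklist.pop_front();

      for (const Inst &inst : bb->insts) {
        cost += inst.cost;
        if (cost > threshold)
          return false;             // bound the analysis early
      }
      for (Block *succ : bb->succs) // only potentially-live successors
        if (visited.insert(succ).second)
          worklist.push_back(succ);
    }
    return true;
  }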

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 12:42:41 +00:00
Chandler Carruth
426d5715b1 Clean up the naming in this test. Someone pointed this out in review at
one point, and I forgot to go back and clean it up. Sorry about that. =/

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153801 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 10:38:48 +00:00
Chandler Carruth
c3e955927f FileCheck-ize this test, and generally tidy it up prior to changing
things around.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153799 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 09:22:33 +00:00
Hal Finkel
6173ed95da Correctly vectorize powi.
The powi intrinsic requires special handling because it always takes a single
integer power regardless of the result type. As a result, we can vectorize
only if the powers are equal. Fixes PR12364.
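
For illustration only (this is not BBVectorize itself), a paired form of powi can carry just one integer power for both lanes, mirroring the intrinsic's single power operand, so two scalar calls can only fuse when their powers match. A minimal sketch with hypothetical helper names:

  #include <cmath>
  #include <cstdio>

  // Hypothetical scalar stand-in for llvm.powi: x to one integer power.
  static double powi_scalar(double x, int power) { return std::pow(x, power); }

  // A paired ("vectorized") form carries a single power for both lanes, so
  // two scalar calls can only be fused when their powers are equal.
  static void powi_pair(const double in[2], int power, double out[2]) {
    out[0] = powi_scalar(in[0], power);
    out[1] = powi_scalar(in[1], power);
  }

  int main() {
    double in[2] = {2.0, 3.0}, out[2];
    powi_pair(in, 4, out);                  // both lanes share power 4
    std::printf("%f %f\n", out[0], out[1]); // 16.000000 81.000000
    return 0;
  }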

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153797 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31 03:38:40 +00:00