Commit Graph

6443 Commits

Dan Gohman
82f46afcdd Delete some unneeded casts.
llvm-svn: 96429
2010-02-17 00:42:19 +00:00
Dan Gohman
493a1fcbe0 Don't attempt to divide INT_MIN by -1; consider such cases to
have overflowed.

llvm-svn: 96428
2010-02-17 00:41:53 +00:00
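
The overflow this commit guards against is the one case, besides dividing by zero, where two's-complement signed division is undefined. A minimal C++ sketch of such a guard (safe_sdiv is a hypothetical name, not from the commit):

#include <climits>
#include <optional>

// Two's-complement signed division is undefined in exactly two
// cases: dividing by zero, and INT_MIN / -1 (the mathematical
// result, -INT_MIN, is not representable).
std::optional<int> safe_sdiv(int a, int b) {
  if (b == 0)
    return std::nullopt;             // division by zero
  if (a == INT_MIN && b == -1)
    return std::nullopt;             // treat as overflow, per the commit
  return a / b;
}
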
Bob Wilson
86dded571f Rename SuccessorNumber to GetSuccessorNumber.
llvm-svn: 96387
2010-02-16 21:06:42 +00:00
Dan Gohman
e2da5181db Refactor rewriting for PHI nodes into a separate function.
llvm-svn: 96382
2010-02-16 20:25:07 +00:00
Bob Wilson
a866c660db Split critical edges as needed for load PRE.
llvm-svn: 96378
2010-02-16 19:51:59 +00:00
Bob Wilson
1115365fcf Refactor to share code to find the position of a basic block successor in the
terminator's list of successors.

llvm-svn: 96377
2010-02-16 19:49:17 +00:00
Dan Gohman
815d966a5a Fix whitespace.
llvm-svn: 96372
2010-02-16 19:42:34 +00:00
Duncan Sands
1b33dd3c83 There are two ways of checking for a given type, for example isa<PointerType>(T)
and T->isPointerTy().  Convert most instances of the first form to the second form.
Requested by Chris.

llvm-svn: 96344
2010-02-16 11:11:14 +00:00
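
For reference, the two spellings side by side; a minimal sketch assuming modern include paths (in 2010 these headers lived directly under llvm/):

#include "llvm/IR/DerivedTypes.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

// The two equivalent forms; the commit converts most uses of the
// first form into the second.
static bool isPointerBothWays(const Type *Ty) {
  bool ViaIsa = isa<PointerType>(Ty);     // first form
  bool ViaPredicate = Ty->isPointerTy();  // second form, now preferred
  return ViaIsa && ViaPredicate;
}
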
Dan Gohman
d19ecedc40 Split the main for-each-use loop again, this time for GenerateTruncates,
as it also peeks at which registers are being used by other uses. This
makes LSR less sensitive to use-list order.

llvm-svn: 96308
2010-02-16 01:42:53 +00:00
Chris Lattner
a8505609fe fix PR6305 by handling BlockAddress in a helper function
called by jump threading.

llvm-svn: 96263
2010-02-15 20:47:49 +00:00
Duncan Sands
2acaf3609c Uniformize the names of type predicates: rather than having isFloatTy and
isInteger, we now have isFloatTy and isIntegerTy.  Requested by Chris!

llvm-svn: 96223
2010-02-15 16:12:20 +00:00
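
A small sketch of the naming scheme after this change (illustrative, assuming a modern include path):

#include "llvm/IR/Type.h"

// Before this commit the predicates were inconsistently named:
// isFloatTy() but isInteger(). Both now carry the Ty suffix.
static bool isFloatOrInteger(const llvm::Type *T) {
  return T->isFloatTy() || T->isIntegerTy();
}
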
Dan Gohman
c954fc3333 Fix whitespace.
llvm-svn: 96179
2010-02-14 18:51:39 +00:00
Dan Gohman
900be69752 Fix a comment.
llvm-svn: 96178
2010-02-14 18:51:20 +00:00
Dan Gohman
de32b2fac0 When complicated expressions are broken down into subexpressions
with multiplication by constants distributed through, occasionally
those subexpressions can include both x and -x. For now, if this
condition is discovered within LSR, just prune such cases away,
as they won't be profitable. This fixes a "zero allocated in a
base register" assertion failure.

llvm-svn: 96177
2010-02-14 18:50:49 +00:00
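
A rough sketch of what pruning such cancelling pairs might look like; hasCancellingRegs is a hypothetical helper, not the commit's actual code. SCEV expressions are uniqued, so pointer equality suffices:

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
using namespace llvm;

// Return true if the register list contains both x and -x for some
// expression x; such formulae are pruned as unprofitable.
static bool hasCancellingRegs(const SmallVectorImpl<const SCEV *> &Regs,
                              ScalarEvolution &SE) {
  SmallPtrSet<const SCEV *, 16> Seen;
  for (const SCEV *S : Regs)
    Seen.insert(S);
  for (const SCEV *S : Regs)
    if (Seen.count(SE.getNegativeSCEV(S)))
      return true;   // both S and -S present
  return false;
}
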
Dan Gohman
dbffcb6c92 Actually, this code doesn't have to be quite so conservative in
the no-TLI case. But it should still default to declining the
transformation.

llvm-svn: 96152
2010-02-14 03:21:49 +00:00
Dan Gohman
3b065f43af Don't attempt aggressive post-inc uses if TargetLowering is not available,
because profitability can't be sufficiently approximated.

llvm-svn: 96148
2010-02-14 02:45:21 +00:00
John McCall
5089836939 Make LSR not crash if invoked without target lowering info, e.g. if invoked
from opt.

llvm-svn: 96135
2010-02-13 23:40:16 +00:00
Eric Christopher
96f3c4222f Fix a problem where bitcasted operands gave us odd offsets,
since the bitcasted pointer and the offset pointer are going to be
different types for the GEP versus the base object.


llvm-svn: 96134
2010-02-13 23:38:01 +00:00
Chris Lattner
23368e55d7 remove dead code.
llvm-svn: 96109
2010-02-13 19:07:06 +00:00
Chris Lattner
0874dc6dab Split some code out to a helper function (FindReusablePredBB)
and add a doxygen comment.

Cache the phi entry to avoid doing tons of 
PHINode::getBasicBlockIndex calls in the common case.

On my insane testcase from re2c, this speeds up CGP from
617.4s to 7.9s (78x).

llvm-svn: 96083
2010-02-13 05:35:08 +00:00
Chris Lattner
9c797fdf1d Speed up codegen prepare from 3.58s to 0.488s.
llvm-svn: 96081
2010-02-13 05:01:14 +00:00
Chris Lattner
975da20868 PHINode::getBasicBlockIndex is O(n) in the number of inputs
to a PHI, avoid it in the common case where the BB occurs
in the same index for multiple phis.  This speeds up CGP on
an insane testcase from 8.35 to 3.58s.

llvm-svn: 96080
2010-02-13 04:24:19 +00:00
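
The caching idea, sketched with illustrative names: phis in one block usually list their incoming blocks in the same order, so the index found for one phi is tried first on the next before falling back to the O(n) lookup:

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Look up BB's operand index in PN, trying a cached guess first.
// getBasicBlockIndex is O(#incoming); the guess makes the common
// case O(1) when consecutive phis list their blocks identically.
static int getIndexCached(PHINode *PN, BasicBlock *BB, int &CachedIdx) {
  if (CachedIdx >= 0 && (unsigned)CachedIdx < PN->getNumIncomingValues() &&
      PN->getIncomingBlock(CachedIdx) == BB)
    return CachedIdx;                      // cache hit: O(1)
  CachedIdx = PN->getBasicBlockIndex(BB);  // slow path: O(n)
  return CachedIdx;
}
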
Chris Lattner
1a067d53b0 iterate over preds using PHI information when available instead of
using pred_begin/end.  It is much faster.

llvm-svn: 96079
2010-02-13 04:15:26 +00:00
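
The trick here is that a phi node at the top of a block already enumerates the block's predecessors, so reading its incoming blocks avoids the use-list walk behind pred_begin/pred_end. A minimal sketch, with illustrative names:

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

template <typename Fn>
static void forEachPred(BasicBlock *BB, Fn Visit) {
  // If BB starts with a phi, its incoming blocks are exactly BB's
  // predecessors, and reading them is cheaper than walking the
  // use list that pred_begin/pred_end traverses.
  if (PHINode *PN = dyn_cast<PHINode>(&BB->front())) {
    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
      Visit(PN->getIncomingBlock(i));
    return;
  }
  for (BasicBlock *Pred : predecessors(BB))   // fallback: no phi
    Visit(Pred);
}
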
Chris Lattner
9be0b06335 speed up CGP a bit by scanning predecessors through phi operands
instead of with pred_begin/end.

llvm-svn: 96078
2010-02-13 04:04:42 +00:00
Dan Gohman
25722fd3fc Fix a pruning heuristic which implicitly assumed that SmallPtrSet is
deterministically sorted.

llvm-svn: 96071
2010-02-13 02:06:02 +00:00
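
The underlying pitfall: SmallPtrSet iterates in pointer order, which varies from run to run, so a heuristic that keeps "the first N" elements of the set is nondeterministic. One common remedy, sketched with an assumed sort key:

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Value.h"
#include <algorithm>
using namespace llvm;

// Copy the set into a vector and sort by a stable key before
// applying any order-sensitive heuristic. The value name is used
// here for illustration; ties among unnamed values stay arbitrary,
// so a real pass would need a stronger key.
static void stableOrder(const SmallPtrSetImpl<Value *> &Set,
                        SmallVectorImpl<Value *> &Out) {
  Out.append(Set.begin(), Set.end());
  std::sort(Out.begin(), Out.end(), [](Value *A, Value *B) {
    return A->getName() < B->getName();
  });
}
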
Jakob Stoklund Olesen
8ce1b3d280 Enable the inlinehint attribute in the Inliner.
Functions explicitly marked inline will get an inlining threshold slightly
more aggressive than the default for -O3. This means that -O3 builds are
mostly unaffected while -Os builds will be a bit bigger and faster.

The difference depends entirely on how many 'inline's are sprinkled on the
source.

In the CINT2006 suite, only these tests are significantly affected under -Os:

               Size   Time
471.omnetpp   +1.63% -1.85%
473.astar     +4.01% -6.02%
483.xalancbmk +4.60%  0.00%

Note that 483.xalancbmk runs too quickly to give useful timing results.

llvm-svn: 96066
2010-02-13 01:51:53 +00:00
Dan Gohman
cbf5771473 Reapply 95979, a compile-time speedup, now that the bug it exposed is fixed.
llvm-svn: 96005
2010-02-12 19:35:25 +00:00
Dan Gohman
006952d39d Fix this code to avoid dereferencing an end() iterator in
offset distributions it doesn't expect.

llvm-svn: 96002
2010-02-12 19:20:37 +00:00
Chris Lattner
ecb203898a 1. modernize the constantmerge pass, using densemap/smallvector.
2. don't bother trying to merge globals in non-default sections,
   doing so is quite dubious at best anyway.
3. fix a bug reported by Arnaud de Grandmaison where we'd try to
   merge two globals in different address spaces.

llvm-svn: 95995
2010-02-12 18:17:23 +00:00
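
A sketch of the screening that items 2 and 3 describe; these helpers are hypothetical, not the pass's actual code:

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/GlobalVariable.h"
using namespace llvm;

// Item 2: globals in non-default sections are not merge candidates.
static bool isMergeCandidate(const GlobalVariable *GV) {
  return !GV->hasSection();
}

// Item 3: never merge two globals from different address spaces.
static bool sameAddressSpace(const GlobalVariable *A,
                             const GlobalVariable *B) {
  return A->getType()->getAddressSpace() ==
         B->getType()->getAddressSpace();
}
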
Daniel Dunbar
43ad3dfe00 Revert "Reverse the order for collecting the parts of an addrec. The order", it
is breaking llvm-gcc bootstrap.

llvm-svn: 95988
2010-02-12 17:27:08 +00:00
Dan Gohman
2f46f79492 Reverse the order for collecting the parts of an addrec. The order
doesn't matter, except that ScalarEvolution tends to need less time
to fold the results this way.

llvm-svn: 95979
2010-02-12 11:08:26 +00:00
Dan Gohman
c40eb525ad Reapply the new LoopStrengthReduction code, with compile time and
bug fixes, and with improved heuristics for analyzing foreign-loop
addrecs.

This change also flattens IVUsers, eliminating the stride-oriented
groupings, which makes it easier to work with.

llvm-svn: 95975
2010-02-12 10:34:29 +00:00
Eric Christopher
2e0201ee18 Make sure that ConstantExpr offsets also aren't off of extern
symbols.

Thanks to Duncan Sands for the testcase!

llvm-svn: 95877
2010-02-11 17:44:04 +00:00
Chris Lattner
a59eb7c09c Rename ValueRequiresCast to ShouldOptimizeCast, to better reflect
what it does.  Enhance it to return false for vector sign
extensions from vector comparisons, declining to optimize the
idiom used to get a splatted vector from a vector comparison.

Doing this breaks vector-casts.ll, so add some compensating
transformations to handle the important cases it covers without
depending on this canonicalization.

This fixes rdar://7434900, a serious pessimization of vector compares.

llvm-svn: 95855
2010-02-11 06:26:33 +00:00
Chris Lattner
a087e6e82f Make DSE only scan blocks that are reachable from the entry
block.  Other blocks may have pointer cycles that will crash
basicaa and other alias analyses.  In any case, there is no
point wasting cycles optimizing dead blocks.  This fixes 
rdar://7635088

llvm-svn: 95852
2010-02-11 05:11:54 +00:00
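
The reachability computation this depends on is a standard worklist walk from the entry block; a minimal sketch with illustrative names:

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
using namespace llvm;

// Collect every block reachable from the entry; blocks outside
// this set may contain pointer cycles and are skipped entirely.
static void collectReachable(Function &F,
                             SmallPtrSetImpl<BasicBlock *> &Reachable) {
  SmallVector<BasicBlock *, 16> Worklist;
  Worklist.push_back(&F.getEntryBlock());
  Reachable.insert(&F.getEntryBlock());
  while (!Worklist.empty()) {
    BasicBlock *BB = Worklist.pop_back_val();
    for (BasicBlock *Succ : successors(BB))
      if (Reachable.insert(Succ).second)   // first visit only
        Worklist.push_back(Succ);
  }
}
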
Chris Lattner
733ffcdb1f Make jump threading honor x|undef -> true and x&undef -> false,
instead of considering x|undef -> x, which may not be true.

llvm-svn: 95850
2010-02-11 04:40:44 +00:00
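
Choosing the constant (true for or, false for and) is always a legal refinement of undef, whereas folding to x would claim the undef operand equals x. A conceptual sketch, using a hypothetical helper:

#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// When a branch condition is 'or X, undef' or 'and X, undef',
// jump threading may refine undef to a constant: true for or
// (true | x == true), false for and (false & x == false).
static Constant *foldUndefCondition(BinaryOperator *BO) {
  if (!isa<UndefValue>(BO->getOperand(0)) &&
      !isa<UndefValue>(BO->getOperand(1)))
    return nullptr;
  if (BO->getOpcode() == Instruction::Or)
    return ConstantInt::getTrue(BO->getContext());
  if (BO->getOpcode() == Instruction::And)
    return ConstantInt::getFalse(BO->getContext());
  return nullptr;
}
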
Eric Christopher
9516309f55 Add ConstantExpr handling to Intrinsic::objectsize lowering.
Update testcase accordingly now that we can optimize another
section.

llvm-svn: 95846
2010-02-11 01:48:54 +00:00
Devang Patel
e87a8a944d Ignore dbg info intrinsics.
llvm-svn: 95828
2010-02-11 00:20:49 +00:00
Devang Patel
43cc7530be Strip new llvm.dbg.value intrinsic.
llvm-svn: 95807
2010-02-10 21:19:56 +00:00
Dan Gohman
92b6122204 Fix "the the" and similar typos.
llvm-svn: 95781
2010-02-10 16:03:48 +00:00
Eric Christopher
6691a59247 Move Intrinsic::objectsize lowering back to InstCombineCalls and
enable constant 0 offset lowering.

llvm-svn: 95691
2010-02-09 21:24:27 +00:00
Eric Christopher
871cf7bce2 Pull these back out; they're a little too aggressive and
time-consuming for a simple optimization.

llvm-svn: 95671
2010-02-09 17:29:18 +00:00
Chris Lattner
f6d9c1f90c simplify this code, duh.
llvm-svn: 95643
2010-02-09 01:14:06 +00:00
Chris Lattner
26b712379f fix PR6193, only considering sign extensions *from i1* for this
xform.

llvm-svn: 95642
2010-02-09 01:12:41 +00:00
Eric Christopher
1ff7f162e2 Add file in here too.
llvm-svn: 95641
2010-02-09 01:11:03 +00:00
Eric Christopher
428b385575 Add a new pass to do llvm.objectsize lowering using SCEV.
Initial skeleton and SCEVUnknown lowering implemented;
the rest should come relatively quickly.  Move testcase
to new directory.

Move pass to right before SimplifyLibCalls, which is moved
down a bit so we can take advantage of a few opts.

llvm-svn: 95628
2010-02-09 00:35:38 +00:00
Chris Lattner
9635fe03ca fix some problems handling large vectors reported in PR6230
llvm-svn: 95616
2010-02-08 23:56:03 +00:00
Jakob Stoklund Olesen
83ebc265b3 Reintroduce the InlineHint function attribute.
This time it's for real! I am going to hook this up in the frontends as well.

The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.

We need some experiments to determine if that is the right thing to do.

llvm-svn: 95466
2010-02-06 01:16:28 +00:00
Jakob Stoklund Olesen
7b4c60adae Don't unroll loops containing function calls.
llvm-svn: 95454
2010-02-05 23:21:31 +00:00
Jakob Stoklund Olesen
670458b3be Teach SimplifyCFG about magic pointer constants.
Weird code sometimes uses pointer constants other than null. This patch
teaches SimplifyCFG to build switch instructions in those cases.

Code like this:

void f(const char *x) {
  if (!x)
    puts("null");
  else if ((uintptr_t)x == 1)
    puts("one");
  else if (x == (char*)2 || x == (char*)3)
    puts("two");
  else if ((intptr_t)x == 4)
    puts("four");
  else
    puts(x);
}

Now becomes a switch:

define void @f(i8* %x) nounwind ssp {
entry:
  %magicptr23 = ptrtoint i8* %x to i64            ; <i64> [#uses=1]
  switch i64 %magicptr23, label %if.else16 [
    i64 0, label %if.then
    i64 1, label %if.then2
    i64 2, label %if.then9
    i64 3, label %if.then9
    i64 4, label %if.then14
  ]

Note that LLVM's own DenseMap uses magic pointers.

llvm-svn: 95439
2010-02-05 22:03:18 +00:00