506 Commits

Author SHA1 Message Date
Chris Lattner
66d1a6ad69 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner
9ee72a47c2 Implement rdar://6295824 and PR6724 with two tiny changes
that can have a big effect :).  The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call.  In this case, it will rerun all the passes it 
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.

The second patch is to add *all* call sites to the 
DevirtualizedCalls list the inliner uses.  This list is
about to get renamed, but the gist of this is that the 
inliner now reconsiders *all* inlined call sites as candidates
for further inlining.  The intuition is that in cases 
like this:

f() { g(1); }     g(int x) { h(x); }

We analyze this bottom up, and may decide that it isn't 
profitable to inline H into G.  Next step, we decide that it is
profitable to inline G into F, and do so, which means that F 
now calls H.  Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).

In my spot checks, this doesn't have a big impact on code.  For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308) and 176.gcc actually shrank by 0.3% (from 1525612
to 1520964 bytes).  252.eon never iterated in the SCC Passmgr, and
176.gcc iterated at most once.

llvm-svn: 102823
2010-05-01 01:15:56 +00:00
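
To make the bottom-up scenario above concrete, here is a minimal LLVM IR sketch (the functions @f, @g and @h are hypothetical, mirroring the pseudocode in the message):

define i32 @h(i32 %x) {
entry:
  %r = mul i32 %x, 2
  ret i32 %r
}

define i32 @g(i32 %x) {
entry:
  %r = call i32 @h(i32 %x)
  ret i32 %r
}

define i32 @f() {
entry:
  ; after @g is inlined here, @f contains "call i32 @h(i32 1)"; with
  ; all inlined call sites on the list, that new call is reconsidered,
  ; and the constant argument may now make it profitable to inline.
  %r = call i32 @g(i32 1)
  ret i32 %r
}
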
Chris Lattner
a61306d68f switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Chris Lattner
5d87e1be44 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner
5edbd34cff refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
Chris Lattner
707b6f87ba when inlining something like this:
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}

The inliner is turning the indirect call to %__f into a direct
call to F2.  Make the call graph more precise when this happens.

The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.

llvm-svn: 102131
2010-04-22 21:31:00 +00:00
Chris Lattner
efb9e5bf75 eliminate dead #include.
llvm-svn: 102119
2010-04-22 20:41:10 +00:00
Eric Christopher
e78496e5f1 Revert 101465, it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Gabor Greif
e7d6812008 reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Gabor Greif
cd116e8c6a back out r101423 and r101397, they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Gabor Greif
2e18d34d80 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Gabor Greif
6022150477 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif
428ca23bbd rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Mon P Wang
484bbe6aa9 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
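
A rough before/after sketch of a single memcpy call under this change (the %dst, %src and %len values are hypothetical; the fourth argument is the alignment and the trailing i1 is the new isVolatile flag):

declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

; before: no address-space mangling and no volatile flag
call void @llvm.memcpy.i32(i8* %dst, i8* %src, i32 %len, i32 4)

; after: the intrinsic name is mangled on the pointer address spaces
; (p0i8 = i8* in address space 0) and takes an i1 isVolatile flag
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 %len, i32 4, i1 false)
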
Mon P Wang
0ccf050ca3 Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang
a01350755e Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Bob Wilson
aae933cc81 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang
9351ea594a Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Eric Christopher
e293604548 Temporarily revert this, it's causing an issue with an internal project.
llvm-svn: 99451
2010-03-24 23:35:21 +00:00
Chris Lattner
a9eb6a6987 add some accessors to callsite/callinst/invokeinst to check
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't.  This fixes PR6682.

llvm-svn: 99341
2010-03-23 22:59:07 +00:00
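
A small IR sketch of what this enables (function names are hypothetical): the noinline attribute sits on the call site rather than on the callee, and the inliner now respects it there.

define i32 @callee(i32 %x) {        ; the callee itself is not noinline
entry:
  ret i32 %x
}

define i32 @caller(i32 %x) {
entry:
  ; only this particular call site is marked noinline, so the inliner
  ; refuses to inline @callee here but may still inline it at other,
  ; unmarked call sites.
  %r = call i32 @callee(i32 %x) noinline
  ret i32 %r
}
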
Devang Patel
3b08c33f33 Remove dead debug info intrinsics.
 Intrinsic::dbg_stoppoint
 Intrinsic::dbg_region_start
 Intrinsic::dbg_region_end
 Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Devang Patel
5c983cb2ab Implement support to debug inlined functions.
llvm-svn: 86748
2009-11-10 23:06:00 +00:00
Chris Lattner
aecb9a4040 Fix a pretty serious misfeature of the inliner: if it inlines a function
with multiple return values it inserts a PHI to merge them all together.
However, if the return values are all the same, it ends up with a pointless
PHI, and this pointless PHI happens to block SRoA in
at least one silly C++ example written by Doug, and probably others.  This 
fixes rdar://7339069.

llvm-svn: 85206
2009-10-27 05:39:41 +00:00
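
A sketch of the pointless PHI described above (function, block and value names are hypothetical): if every inlined return block feeds the same value into the merge block, the PHI adds nothing.

define i8* @caller.after.inlining(i1 %c, i8* %p) {
entry:
  br i1 %c, label %inlined.ret1, label %inlined.ret2

inlined.ret1:
  br label %merge

inlined.ret2:
  br label %merge

merge:
  ; both inlined return paths produce %p, so this PHI is pointless;
  ; with this fix the inliner avoids creating it in this case and
  ; uses %p directly, which no longer gets in the way of SRoA.
  %retval = phi i8* [ %p, %inlined.ret1 ], [ %p, %inlined.ret2 ]
  ret i8* %retval
}
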
Chris Lattner
7f32b72975 Simplify some code (first hunk) and fix PR5208 (second hunk) by
updating the callgraph when introducing a call.

llvm-svn: 84310
2009-10-17 05:39:39 +00:00
Duncan Sands
2400ad7236 Introduce and use convenience methods for getting pointer types
where the element is of a basic builtin type.  For example, to get
an i8* use getInt8PtrTy.

llvm-svn: 83379
2009-10-06 15:40:36 +00:00
Nick Lewycky
d21c325f09 Instruction::clone does not need to take an LLVMContext&. Remove that and
update all the callers.

llvm-svn: 82889
2009-09-27 07:38:41 +00:00
Eric Christopher
6acdc39b86 Fix comment.
llvm-svn: 81138
2009-09-06 22:20:54 +00:00
Chris Lattner
e4173b39bd remove a bunch of explicit code previously needed to update the
callgraph.  This is now dead because RAUW does the job.

llvm-svn: 80703
2009-09-01 18:44:06 +00:00
Chris Lattner
53698d7fd7 Change CallGraphNode to maintain its Function as an AssertingVH
for sanity.  This didn't turn up any bugs.

Change CallGraphNode to maintain its "callsite" information in the 
call edges list as a WeakVH instead of as an instruction*.  This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again.  This fixes the class of problem indicated
by PR4029 and PR3601.

llvm-svn: 80663
2009-09-01 06:31:31 +00:00
Devang Patel
fbaeda732e Reapply 79977.
Use MDNodes to encode debug info in llvm IR.

llvm-svn: 80406
2009-08-28 23:24:31 +00:00
Chris Lattner
becb2e9534 enhance InlineFunction to be able to optionally return
the list of static allocas that it inlined.

llvm-svn: 80203
2009-08-27 04:20:52 +00:00
Chris Lattner
7fbde3dedf smallvectorize the list of returns built by CloneAndPruneFunctionInto.
llvm-svn: 80202
2009-08-27 04:02:30 +00:00
Chris Lattner
71e787d93b reduce indentation, factor some stuff out to a static helper function,
and other code cleanups.  No functionality change.

llvm-svn: 80199
2009-08-27 03:51:50 +00:00
Devang Patel
10c075a316 Revert 79977. It causes llvm-gcc bootstrap failures on some platforms.
llvm-svn: 80073
2009-08-26 05:01:18 +00:00
Devang Patel
7d42bfab6c Update DebugInfo interface to use metadata, instead of specially named llvm.dbg.... global variables, to encode debugging information in llvm IR. This is mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)

llvm-svn: 79977
2009-08-25 05:24:07 +00:00
Owen Anderson
9df206d02d Push LLVMContexts through the IntegerType APIs.
llvm-svn: 78948
2009-08-13 21:58:54 +00:00
Owen Anderson
93ccaf5c60 Move more code back to 2.5 APIs.
llvm-svn: 77635
2009-07-30 23:03:37 +00:00
Owen Anderson
881d928f9b Move types back to the 2.5 API.
llvm-svn: 77516
2009-07-29 22:17:13 +00:00
Owen Anderson
0ce2151b36 Move ConstantExpr to 2.5 API.
llvm-svn: 77494
2009-07-29 18:55:55 +00:00
Owen Anderson
cc33e89571 Revert the ConstantInt constructors back to their 2.5 forms where possible, thanks to contexts-on-types. More to come.
llvm-svn: 77011
2009-07-24 23:12:02 +00:00
Owen Anderson
cc287b28c9 Get rid of the Pass+Context magic.
llvm-svn: 76702
2009-07-22 00:24:57 +00:00
Owen Anderson
4483fbda5e Revert yesterday's change by removing the LLVMContext parameter to AllocaInst and MallocInst.
llvm-svn: 75863
2009-07-15 23:53:25 +00:00
Owen Anderson
8c85061ee6 Move EVER MORE stuff over to LLVMContext.
llvm-svn: 75703
2009-07-14 23:09:55 +00:00
Owen Anderson
49226b1075 This started as a small change, I swear. Unfortunately, lots of things call the [I|F]CmpInst constructors. Who knew!?
llvm-svn: 75200
2009-07-09 23:48:35 +00:00
Owen Anderson
977aa11bc6 More LLVMContext-ification.
llvm-svn: 74807
2009-07-05 22:41:43 +00:00
Eli Friedman
a280375b23 PR4123: don't crash when inlining a call which uses its own result.
llvm-svn: 71199
2009-05-08 00:22:04 +00:00
Devang Patel
7323064183 While inlining, clone llvm.dbg.func.start intrinsic and adjust
llvm.dbg.region.end intrinsic. This nested llvm.dbg.func.start/llvm.dbg.region.end pair now enables DW_TAG_inlined_subroutine support in the code generator.

llvm-svn: 69118
2009-04-15 00:17:06 +00:00
Devang Patel
7fa69ef109 Update call graph after inlining invoke.
Patch by Jay Foad.

llvm-svn: 68120
2009-03-31 17:36:12 +00:00
Dale Johannesen
8ca4d24371 Revert unintended commit.
llvm-svn: 66001
2009-03-04 02:09:48 +00:00
Dale Johannesen
e184480072 Always skip ptr-to-ptr bitcasts when counting,
per Chris' suggestion.  Slightly faster.

llvm-svn: 65999
2009-03-04 01:53:05 +00:00