369 Commits

Author SHA1 Message Date
Chris Lattner
ba962825a4 when eliding a byval copy due to inlining a readonly function, we have
to make sure that the reused alloca has sufficient alignment.

llvm-svn: 122236
2010-12-20 08:10:40 +00:00
Chris Lattner
c0a48df9f9 pull byval processing out to its own helper function.
llvm-svn: 122235
2010-12-20 07:57:41 +00:00
Chris Lattner
029952c844 fix PR8769, a miscompilation by the inliner when inlining a function with a byval
argument.  The generated alloca has to have at least the alignment of the
byval; if it does not, the client may be making assumptions that the new alloca
won't satisfy.

llvm-svn: 122234
2010-12-20 07:45:28 +00:00
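
To make the byval-alignment hazard behind the two fixes above concrete, here is a hypothetical IR sketch (era-appropriate typed-pointer syntax, not the actual PR8769 test case).  The byval parameter promises 16-byte alignment, so the copy the inliner materializes while inlining @callee must itself be a 16-byte-aligned alloca; and when the copy is elided because the callee is readonly, the reused pointer must provide that alignment too.

%struct.S = type { [16 x i8] }

declare void @use(i8*)

define void @callee(%struct.S* byval align 16 %p) {
entry:
  %raw = bitcast %struct.S* %p to i8*
  call void @use(i8* %raw)
  ret void
}

define void @caller(%struct.S* %s) {
entry:
  ; Inlining @callee here introduces a copy of %s.  That copy must be
  ;   %s.copy = alloca %struct.S, align 16
  ; because a less-aligned alloca would break the alignment assumptions
  ; the callee's body is entitled to make.
  call void @callee(%struct.S* byval align 16 %s)
  ret void
}
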
Chris Lattner
21587c9f65 improve comment
llvm-svn: 120994
2010-12-06 07:43:04 +00:00
Benjamin Kramer
9141603779 Simplify code. No change in functionality.
llvm-svn: 119908
2010-11-20 18:43:35 +00:00
Duncan Sands
c84443a206 Have InlineFunction use SimplifyInstruction rather than
hasConstantValue.  I was leery of using SimplifyInstruction
while the IR was still in a half-baked state, which is the
reason for delaying the simplification until the IR is fully
cooked.

llvm-svn: 119494
2010-11-17 11:16:23 +00:00
Rafael Espindola
aa5448f520 Be more consistent in using ValueToValueMapTy.
llvm-svn: 116387
2010-10-13 01:36:30 +00:00
Dan Gohman
8088d5e31d Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Gabor Greif
394841cea8 simplify: CallSite::get --> CallSite constructor
llvm-svn: 109506
2010-07-27 15:02:37 +00:00
Gabor Greif
14afc1e2d9 use callsite to obtain all arguments
llvm-svn: 106728
2010-06-24 09:56:43 +00:00
Devang Patel
0c927a80fe Use ValueMap instead of DenseMap.
The ValueMapper used by the various cloning utilities also maps MDNodes.

llvm-svn: 106706
2010-06-24 00:33:28 +00:00
Devang Patel
373b10cbaa Cosmetic change.
Do not use "ValueMap" as a name for a local variable or an argument.

llvm-svn: 106698
2010-06-23 23:55:51 +00:00
Duncan Sands
da677f56f2 Fix PR7272: when inlining through a callsite with byval arguments,
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared.

llvm-svn: 105255
2010-05-31 21:00:26 +00:00
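
A hypothetical IR sketch of the PR7272 situation (typed-pointer syntax of that era, not the actual test case): before inlining, the 'tail' marker inside @callee is legal; after @callee is inlined and %p is rewritten to point at a fresh alloca in the caller, the marker has to be dropped.

declare void @consume(i8*)

define void @callee(i8* byval %p) {
entry:
  tail call void @consume(i8* %p)
  ret void
}

define void @caller(i8* %q) {
entry:
  ; Inlining @callee introduces something like
  ;   %p.copy = alloca i8
  ; and the inlined call becomes
  ;   call void @consume(i8* %p.copy)   ; 'tail' flag cleared
  ; Keeping 'tail' would allow @consume to run after %p.copy's slot is
  ; considered dead, which is the kind of miscompile PR7272 describes.
  call void @callee(i8* byval %q)
  ret void
}
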
Chris Lattner
66d1a6ad69 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner
9ee72a47c2 Implement rdar://6295824 and PR6724 with two tiny changes
that can have a big effect :).  The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call.  In this case, it will rerun all the passes it 
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.

The second patch is to add *all* call sites to the 
DevirtualizedCalls list the inliner uses.  This list is
about to get renamed, but the gist of this is that the
inliner now reconsiders *all* inlined call sites as candidates
for further inlining.  The intuition is that in cases
like this:

f() { g(1); }     g(int x) { h(x); }

We analyze this bottom up, and may decide that it isn't 
profitable to inline H into G.  Next step, we decide that it is
profitable to inline G into F, and do so, which means that F 
now calls H.  Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).

In my spot checks, this doesn't have a big impact on code size.  For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308 bytes) and 176.gcc actually shrank by 0.3% (from 1525612
to 1520964 bytes).  252.eon never iterated in the SCC passmgr, and
176.gcc iterated at most once.

llvm-svn: 102823
2010-05-01 01:15:56 +00:00
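
The f/g/h scenario above, restated as a hypothetical IR sketch: once @g is inlined into @f, the surviving call site passes a constant, which can make inlining @h newly profitable even though inlining @h into @g was not.

declare void @sink(i32)

define void @h(i32 %x) {
entry:
  %y = mul i32 %x, 7
  call void @sink(i32 %y)
  ret void
}

define void @g(i32 %x) {
entry:
  call void @h(i32 %x)
  ret void
}

define void @f() {
entry:
  ; After inlining @g, this becomes "call void @h(i32 1)": the constant
  ; argument would let the multiply fold away if @h were inlined here,
  ; which is why the inliner now reconsiders such call sites.
  call void @g(i32 1)
  ret void
}
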
Chris Lattner
a61306d68f switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Chris Lattner
5d87e1be44 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner
5edbd34cff refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
Chris Lattner
707b6f87ba when inlining something like this:
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}

The inliner is turning the indirect call to %__f into a direct
call to F2.  Make the call graph more precise when this happens.

The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.

llvm-svn: 102131
2010-04-22 21:31:00 +00:00
Chris Lattner
efb9e5bf75 eliminate dead #include.
llvm-svn: 102119
2010-04-22 20:41:10 +00:00
Eric Christopher
e78496e5f1 Revert 101465; it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Gabor Greif
e7d6812008 reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Gabor Greif
cd116e8c6a back out r101423 and r101397; they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Gabor Greif
2e18d34d80 reapply r101364 (backed out in r101368)
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Gabor Greif
6022150477 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif
428ca23bbd rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Mon P Wang
484bbe6aa9 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
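
For reference, a hypothetical call using the new form of the intrinsic described above (era-appropriate syntax: alignment is still the fourth argument, and the new i1 isVolatile flag comes last).

declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1) nounwind

define void @copy16(i8* %dst, i8* %src) {
entry:
  ; copy 16 bytes with 4-byte alignment, not volatile
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 16, i32 4, i1 false)
  ret void
}
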
Mon P Wang
0ccf050ca3 Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang
a01350755e Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Bob Wilson
aae933cc81 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang
9351ea594a Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Eric Christopher
e293604548 Temporarily revert this; it's causing an issue with an internal project.
llvm-svn: 99451
2010-03-24 23:35:21 +00:00
Chris Lattner
a9eb6a6987 add some accessors to callsite/callinst/invokeinst to check
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't.  This fixes PR6682.

llvm-svn: 99341
2010-03-23 22:59:07 +00:00
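
A minimal hypothetical example of the new behaviour: the noinline attribute on the call site alone is now enough to stop the inliner, even though @callee itself carries no such attribute.

define internal void @callee() {
entry:
  ret void
}

define void @caller() {
entry:
  ; Blocked from inlining by the call-site attribute, not by the callee.
  call void @callee() noinline
  ret void
}
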
Devang Patel
3b08c33f33 Remove dead debug info intrinsics:
  Intrinsic::dbg_stoppoint
  Intrinsic::dbg_region_start
  Intrinsic::dbg_region_end
  Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Devang Patel
5c983cb2ab Implement support to debug inlined functions.
llvm-svn: 86748
2009-11-10 23:06:00 +00:00
Chris Lattner
aecb9a4040 Fix a pretty serious misfeature of the inliner: if it inlines a function
with multiple return values, it inserts a PHI to merge them all together.
However, if the return values are all the same, it ends up with a pointless
PHI, and this pointless PHI happens to block SRoA in at least a silly C++
example written by Doug, and probably others.  This
fixes rdar://7339069.

llvm-svn: 85206
2009-10-27 05:39:41 +00:00
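
A hypothetical sketch of the misfeature: inlining a callee whose returns all produce the same value used to leave behind a PHI whose incoming values are identical; the fix uses the single value directly instead.

define i32 @callee(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  ret i32 0
b:
  ret i32 0
}

define i32 @caller(i1 %c) {
entry:
  ; Inlining @callee used to create, at the return merge point,
  ;   %r = phi i32 [ 0, %a ], [ 0, %b ]
  ; a pointless PHI that downstream passes (SRoA in the original report)
  ; could trip over.
  %r = call i32 @callee(i1 %c)
  ret i32 %r
}
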
Chris Lattner
7f32b72975 Simplify some code (first hunk) and fix PR5208 (second hunk) by
updating the callgraph when introducing a call.

llvm-svn: 84310
2009-10-17 05:39:39 +00:00
Duncan Sands
2400ad7236 Introduce and use convenience methods for getting pointer types
where the element is of a basic builtin type.  For example, to get
an i8*, use getInt8PtrTy.

llvm-svn: 83379
2009-10-06 15:40:36 +00:00
Nick Lewycky
d21c325f09 Instruction::clone does not need to take an LLVMContext&. Remove that and
update all the callers.

llvm-svn: 82889
2009-09-27 07:38:41 +00:00
Eric Christopher
6acdc39b86 Fix comment.
llvm-svn: 81138
2009-09-06 22:20:54 +00:00
Chris Lattner
e4173b39bd remove a bunch of explicit code previously needed to update the
callgraph.  This is now dead because RAUW does the job.

llvm-svn: 80703
2009-09-01 18:44:06 +00:00
Chris Lattner
53698d7fd7 Change CallGraphNode to maintain its Function as an AssertingVH
for sanity.  This didn't turn up any bugs.

Change CallGraphNode to maintain its "callsite" information in the 
call edges list as a WeakVH instead of as an Instruction*.  This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again.  This fixes the class of problem indicated
by PR4029 and PR3601.

llvm-svn: 80663
2009-09-01 06:31:31 +00:00
Devang Patel
fbaeda732e Reapply 79977.
Use MDNodes to encode debug info in llvm IR.

llvm-svn: 80406
2009-08-28 23:24:31 +00:00
Chris Lattner
becb2e9534 enhance InlineFunction to be able to optionally return
the list of static allocas that it inlined.

llvm-svn: 80203
2009-08-27 04:20:52 +00:00
Chris Lattner
7fbde3dedf smallvectorize the list of returns built by CloneAndPruneFunctionInto.
llvm-svn: 80202
2009-08-27 04:02:30 +00:00
Chris Lattner
71e787d93b reduce indentation, factor some stuff out to a static helper function,
and other code cleanups.  No functionality change.

llvm-svn: 80199
2009-08-27 03:51:50 +00:00
Devang Patel
10c075a316 Revert 79977. It causes llvm-gcc bootstrap failures on some platforms.
llvm-svn: 80073
2009-08-26 05:01:18 +00:00
Devang Patel
7d42bfab6c Update DebugInfo interface to use metadata, instead of special named llvm.dbg.... global variables, to encode debugging information in llvm IR. This is mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)

llvm-svn: 79977
2009-08-25 05:24:07 +00:00
Owen Anderson
9df206d02d Push LLVMContexts through the IntegerType APIs.
llvm-svn: 78948
2009-08-13 21:58:54 +00:00
Owen Anderson
93ccaf5c60 Move more code back to 2.5 APIs.
llvm-svn: 77635
2009-07-30 23:03:37 +00:00