Extend LinkModules to pass a ValueMaterializer to RemapInstruction and friends to lazily create Functions for lazily linked globals. This is a big win when linking small modules with large (mostly unused) library modules.
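For reference, a rough sketch of the hook (the subclass and its behavior are
hypothetical; only the ValueMaterializer / materializeValueFor names follow
ValueMapper.h):

  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/Utils/ValueMapper.h"
  using namespace llvm;

  // Illustrative materializer: declare the destination Function only when the
  // mapper actually asks for the corresponding source Function.
  class LazyFunctionMaterializer : public ValueMaterializer {
    Module &DstM;
  public:
    explicit LazyFunctionMaterializer(Module &M) : DstM(M) {}
    virtual Value *materializeValueFor(Value *V) {
      if (Function *SF = dyn_cast<Function>(V))
        return Function::Create(SF->getFunctionType(), SF->getLinkage(),
                                SF->getName(), &DstM);
      return 0; // everything else takes the normal mapping path
    }
  };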
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182776 91177308-0d34-0410-b5e6-96231b3b80d8
Other passes, PPC counter-loop formation for example, also need to add loop
preheaders outside of the regular loop simplification pass. This makes
InsertPreheaderForLoop a global function so that it can be used by other
passes.
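A minimal usage sketch (the header location and the surrounding pass are
assumptions, not part of this change):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Transforms/Utils/LoopUtils.h" // assumed home of the declaration
  using namespace llvm;

  // Inside some loop transformation, with a Loop *L and the enclosing Pass *P:
  static BasicBlock *ensurePreheader(Loop *L, Pass *P) {
    if (BasicBlock *Preheader = L->getLoopPreheader())
      return Preheader;                  // already in simplified form
    return InsertPreheaderForLoop(L, P); // split a dedicated preheader block
  }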
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182299 91177308-0d34-0410-b5e6-96231b3b80d8
the things, and renames it to CBindingWrapping.h. I also moved
CBindingWrapping.h into Support/.
This new file just contains the macros for defining different wrap/unwrap
methods.
The calls to those macros, as well as any custom wrap/unwrap definitions
(like for array of Values for example), are put into corresponding C++
headers.
Doing this required some #include surgery, since some .cpp files relied
on the fact that including Wrap.h implicitly caused the inclusion of a
bunch of other things.
This also now means that the C++ headers will include their corresponding
C API headers; for example Value.h must include llvm-c/Core.h. I think
this is harmless, since the C API headers contain just external function
declarations and some C types, so I don't believe there should be any
nasty dependency issues here.
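For instance, in a hedged reconstruction of the pattern (not a verbatim quote
of the headers), a C++ header now includes its C counterpart and expands the
macro itself:

  // Sketch of what llvm/IR/Value.h ends up doing:
  #include "llvm-c/Core.h"
  #include "llvm/Support/CBindingWrapping.h"

  // Expands to inline wrap(Value*) / unwrap(LLVMValueRef) conversions plus the
  // isa<>-checked unwrap<SubClass>() variant.
  DEFINE_ISA_CONVERSION_FUNCTIONS(Value, LLVMValueRef)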
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180881 91177308-0d34-0410-b5e6-96231b3b80d8
This resurrects r179957, but adds code that makes sure we don't touch
atomic/volatile stores:
This transformation will transform a conditional store with a preceding
unconditional store to the same location:
  a[i] = X
  may-alias with a[i] load
  if (cond)
    a[i] = Y

into an unconditional store:

  a[i] = X
  may-alias with a[i] load
  tmp = cond ? Y : X;
  a[i] = tmp
We assume that on average the cost of a mispredicted branch is going to be
higher than the cost of a second store to the same location, and that the
secondary benefits of creating a bigger basic block for other optimizations to
work on outweigh the potential case where the branch would be correctly predicted
and the cost of executing the second store would be noticeably reflected in
performance.
hmmer's execution time improves by 30% on an iMac12,2 on ref data sets. With
this change we are on par with gcc's performance (gcc also performs this
transformation). There was a 1.2% performance improvement on an ARM Swift chip.
Other tests in the test-suite+external seem to be mostly uninfluenced in my
experiments:
This optimization was triggered on 41 tests such that the executable was
different before/after the patch. Only 1 of those tests (dealII) showed a
reproducible slowdown (about 0.4%). Given that hmmer benefits so much I
believe this to be a fair trade-off.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180731 91177308-0d34-0410-b5e6-96231b3b80d8
Since we can't guarantee that the original dbg.declare intrinsic
is removed by LowerDbgDeclare(), we need to make sure that we are
not inserting the same dbg.value intrinsic over and over.
This removes tons of redundant DIEs when compiling optimized code.
rdar://problem/13056109
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180615 91177308-0d34-0410-b5e6-96231b3b80d8
debug location. This solves a problem where the range of an inlined
subroutine is emitted wrongly.
Patch by Manman Ren.
Fixes rdar://problem/12415623
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180140 91177308-0d34-0410-b5e6-96231b3b80d8
There is a temptation to make this transform dependent on target information, as
it is not going to be beneficial on all (sub)targets. Therefore, we should
probably do this in MI Early-Ifconversion.
This reverts commit r179957. Original commit message:
"SimplifyCFG: If convert single conditional stores
This transformation will transform a conditional store with a preceding
unconditional store to the same location:
  a[i] = X
  may-alias with a[i] load
  if (cond)
    a[i] = Y

into an unconditional store:

  a[i] = X
  may-alias with a[i] load
  tmp = cond ? Y : X;
  a[i] = tmp
We assume that on average the cost of a mispredicted branch is going to be
higher than the cost of a second store to the same location, and that the
secondary benefits of creating a bigger basic block for other optimizations to
work on outweigh the potential case where the branch would be correctly predicted
and the cost of executing the second store would be noticeably reflected in
performance.
hmmer's execution time improves by 30% on an iMac12,2 on ref data sets. With
this change we are on par with gcc's performance (gcc also performs this
transformation). There was a 1.2% performance improvement on an ARM Swift chip.
Other tests in the test-suite+external seem to be mostly uninfluenced in my
experiments:
This optimization was triggered on 41 tests such that the executable was
different before/after the patch. Only 1 of those tests (dealII) showed a
reproducible slowdown (about 0.4%). Given that hmmer benefits so much I
believe this to be a fair trade-off.
I am going to watch performance numbers across the buildbots and will revert
this if anything unexpected comes up."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179980 91177308-0d34-0410-b5e6-96231b3b80d8
This transformation will transform a conditional store with a preceding
unconditional store to the same location:
  a[i] = X
  may-alias with a[i] load
  if (cond)
    a[i] = Y

into an unconditional store:

  a[i] = X
  may-alias with a[i] load
  tmp = cond ? Y : X;
  a[i] = tmp
We assume that on average the cost of a mispredicted branch is going to be
higher than the cost of a second store to the same location, and that the
secondary benefits of creating a bigger basic block for other optimizations to
work on outweigh the potential case where the branch would be correctly predicted
and the cost of executing the second store would be noticeably reflected in
performance.
hmmer's execution time improves by 30% on an iMac12,2 on ref data sets. With
this change we are on par with gcc's performance (gcc also performs this
transformation). There was a 1.2% performance improvement on an ARM Swift chip.
Other tests in the test-suite+external seem to be mostly uninfluenced in my
experiments:
This optimization was triggered on 41 tests such that the executable was
different before/after the patch. Only 1 of those tests (dealII) showed a
reproducible slowdown (about 0.4%). Given that hmmer benefits so much I
believe this to be a fair trade-off.
I am going to watch performance numbers across the buildbots and will revert
this if anything unexpected comes up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179957 91177308-0d34-0410-b5e6-96231b3b80d8
If a switch instruction has a case for every possible value of its type,
with the same successor, SimplifyCFG would replace it with an icmp ult,
but the computation of the bound overflows in that case, which inverts
the test.
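To make the wraparound concrete (a standalone illustration with hypothetical
numbers, not the SimplifyCFG code itself):

  #include <cassert>
  #include <cstdint>

  int main() {
    // An i2 switch covering all four values 0..3 with one successor: the
    // exclusive upper bound on the case range is 4, which does not fit in
    // 2 bits.
    const unsigned BitWidth = 2;
    uint64_t NumCases = 4;
    uint64_t Bound = NumCases & ((1ULL << BitWidth) - 1); // truncate to i2
    // 4 wrapped around to 0, so an "x u< Bound" range check is false for
    // every input, sending control the wrong way.
    assert(Bound == 0);
    return 0;
  }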
Patch by Jed Davis!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179587 91177308-0d34-0410-b5e6-96231b3b80d8
rather than checking if the source and destination have the same number of
arguments and copying the attributes over directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179169 91177308-0d34-0410-b5e6-96231b3b80d8
How did this ever work?
Basically, if you have a function that's inlined into the caller, it may not
have any 'call' instructions, but any 'resume' instructions it may have should
still be forwarded to the outer (caller's) landing pad. This requires that all
of the 'landingpad' instructions in the callee have their clauses merged with
the caller's outer 'landingpad' instruction (hence the bit of ugly code in the
`forwardResume' method).
Testcase in a follow-up commit to the test-suite repository.
<rdar://problem/13360379> & PR15555
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177680 91177308-0d34-0410-b5e6-96231b3b80d8
Nadav reported a performance regression due to the work I did to
merge the library call simplifier into instcombine [1]. The issue
is that a new LibCallSimplifier object is being created whenever
InstCombiner::runOnFunction is called. Every time a LibCallSimplifier
object is used to optimize a call it creates a hash table to map from
a function name to an object that optimizes functions of that name.
For short-lived LibCallSimplifier instances this is quite inefficient,
especially in cases where no calls are actually simplified.
This patch fixes the issue by dropping the hash table and implementing
an explicit lookup function to correlate the function name to the object
that optimizes functions of that name. This avoids the cost of always
building and destroying the hash table in cases where the LibCallSimplifier
object is short-lived and avoids the cost of building the table when no
simplifications are actually performed.
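The shape of the change, as a standalone sketch (the optimizer classes and
names here are stand-ins, not the real LibCallSimplifier types):

  #include <string>

  struct CallOptimizer { virtual ~CallOptimizer() {} }; // stand-in base class
  struct StrLenOpt : CallOptimizer {};
  struct MemCpyOpt : CallOptimizer {};

  // Explicit name-to-optimizer correlation: nothing is built up front, so a
  // short-lived simplifier that never simplifies a call pays nothing, and
  // there is no hash table to tear down afterwards.
  static CallOptimizer *lookupOptimizer(const std::string &Name) {
    static StrLenOpt StrLen;
    static MemCpyOpt MemCpy;
    if (Name == "strlen") return &StrLen;
    if (Name == "memcpy") return &MemCpy;
    return 0; // unknown callee: nothing to simplify
  }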
On a benchmark containing 100,000 calls where none of them are simplified
I noticed a 30% speedup. On a benchmark containing 100,000 calls where
all of them are simplified I noticed an 8% speedup.
[1] http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130304/167639.html
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176840 91177308-0d34-0410-b5e6-96231b3b80d8
An invoke may require a table entry; for instance, when the function it calls
is expected to throw.
<rdar://problem/13360379>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176827 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes rdar://13349374.
Volatile loads and stores need to be preserved even if the language
standard says they are undefined. "volatile" in this context means "get
out of the way, compiler, let my platform handle it".
Additionally, this is the only way I know of with LLVM to write to the
first page (when hardware allows) without dropping to assembly.
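For illustration (undefined behavior as far as the language standard is
concerned, but exactly the kind of store this change keeps intact):

  // A volatile store to a fixed low address; the compiler must emit it
  // rather than deleting it or treating the path as unreachable.
  void poke_first_page() {
    *(volatile int *)0x4 = 0; // somewhere in the first page
  }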
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176599 91177308-0d34-0410-b5e6-96231b3b80d8
* Only apply the divide-bypass optimization when not optimizing for size.
* Fixed a bug caused by generating the 0-value constant with type Int32;
  the dividend's type is now used to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
  64-bit divides when the operand values are small enough.
* Added lit tests for the 64-bit divide bypass.
Patch by Tyler Nowicki!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176442 91177308-0d34-0410-b5e6-96231b3b80d8
enhancement done the trivial way: by extending inputs and truncating outputs,
which is adequate for targets with little or no support for integer arithmetic
on integer types narrower than 32 bits.
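A sketch of that strategy with IRBuilder (a hypothetical helper, not the code
in this change):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Widen narrow (e.g. i8/i16) operands to i32, do the arithmetic there, and
  // truncate the result back to the original narrow type.
  static Value *promoteAdd(IRBuilder<> &B, Value *LHS, Value *RHS) {
    Type *NarrowTy = LHS->getType();
    Type *WideTy = B.getInt32Ty();
    Value *L = B.CreateZExt(LHS, WideTy);
    Value *R = B.CreateZExt(RHS, WideTy);
    Value *Add = B.CreateAdd(L, R);
    return B.CreateTrunc(Add, NarrowTy);
  }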
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176139 91177308-0d34-0410-b5e6-96231b3b80d8
The 'nobuiltin' attribute is applied to call sites to indicate that LLVM should
not treat the callee function as a built-in function. I.e., it shouldn't try to
replace that function with different code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175835 91177308-0d34-0410-b5e6-96231b3b80d8
isn't using the default calling convention. However, if the transformation is
from a call to inline IR, then the calling convention doesn't matter.
rdar://13157990
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174724 91177308-0d34-0410-b5e6-96231b3b80d8
This is a re-worked version of r174048.
Given source IR:
call void @llvm.dbg.declare(metadata !{i32* %argc.addr}, metadata !14), !dbg !15
we used to generate
call void @llvm.dbg.declare(metadata !27, metadata !28), !dbg !29
!27 = metadata !{null}
With this patch, we will correctly generate
call void @llvm.dbg.declare(metadata !{i32* %argc.addr}, metadata !27), !dbg !28
Looking up %argc.addr in the ValueMap will return null; since %argc.addr is
already correctly set up, we can use identity mapping.
rdar://problem/13089880
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174093 91177308-0d34-0410-b5e6-96231b3b80d8
Given source IR:
call void @llvm.dbg.declare(metadata !{i32* %argc.addr}, metadata !14), !dbg !15
we used to generate
call void @llvm.dbg.declare(metadata !27, metadata !28), !dbg !29
!27 = metadata !{null}
With this patch, we will correctly generate
call void @llvm.dbg.declare(metadata !{i32* %argc.addr}, metadata !27), !dbg !28
Looking up %argc.addr in the ValueMap will return null; since %argc.addr is
already correctly set up, we can use identity mapping.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173946 91177308-0d34-0410-b5e6-96231b3b80d8
loops over instructions in the basic block or the use-def list of the
value, neither of which is really efficient when repeatedly querying
about values in the same basic block.
What's more, we already know that the CondBB is small, and so we can do
a much more efficient test by counting the uses in CondBB, and seeing if
those account for all of the uses.
Finally, we shouldn't blanket-fail on any such instruction; instead we
should conservatively assume that those instructions are part of the
cost.
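A rough sketch of the cheaper test (a hypothetical helper, not the actual
code in this change):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Count how many of I's uses live inside the (known-small) conditional
  // block; if that accounts for every use, I is used nowhere else.
  static bool allUsesInBlock(const Instruction &I, const BasicBlock &CondBB) {
    unsigned UsesSeen = 0;
    for (BasicBlock::const_iterator BI = CondBB.begin(), BE = CondBB.end();
         BI != BE; ++BI)
      for (unsigned Op = 0, E = BI->getNumOperands(); Op != E; ++Op)
        if (BI->getOperand(Op) == &I)
          ++UsesSeen;
    return UsesSeen == I.getNumUses();
  }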
Note that this actually fixes a bug in the pass because
isUsedInBasicBlock has a really terrible bug in it. I'll fix that in my
next commit, but the fix for it would make this code suddenly take the
compile time hit I thought it already was taking, so I wanted to go
ahead and migrate this code to a faster & better pattern.
The bug in isUsedInBasicBlock was also causing other tests to test the
wrong thing entirely: for example we weren't actually disabling
speculation for floating point operations as intended (and tested), but
the test passed because we failed to speculate them due to the
isUsedInBasicBlock failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173417 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Plug TTI into the speculation logic, giving it a real cost interface
that can be specialized by targets.
The goal here is not to be more aggressive, but to just be more accurate
with very obvious cases. There are instructions which are known to be
truly free and which were not being modeled as such in this code -- see
the regression test which is distilled from an inner loop of zlib.
Everywhere the TTI cost model is insufficiently conservative I've added
explicit checks with FIXME comments to go add proper modelling of these
cost factors.
If this causes regressions, the likely solution is to make TTI even more
conservative in its cost estimates, but test cases will help here.
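The shape of the query being wired in, as a hedged sketch (the helper and
variable names are placeholders; the interface assumed is TTI's
getUserCost/TCC_Free):

  #include "llvm/Analysis/TargetTransformInfo.h"
  using namespace llvm;

  // Ask the target what a candidate instruction costs instead of hard-coding
  // it; instructions the target reports as free no longer eat into the
  // (unchanged) speculation budget.
  static unsigned speculationCost(const User *I, const TargetTransformInfo &TTI) {
    unsigned Cost = TTI.getUserCost(I);
    return Cost == TargetTransformInfo::TCC_Free ? 0 : Cost;
  }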
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173357 91177308-0d34-0410-b5e6-96231b3b80d8
that can be specialized by targets.
The goal here is not to be more aggressive, but to just be more accurate
with very obvious cases. There are instructions which are known to be
truly free and which were not being modeled as such in this code -- see
the regression test which is distilled from an inner loop of zlib.
Everywhere the TTI cost model is insufficiently conservative I've added
explicit checks with FIXME comments to go add proper modelling of these
cost factors.
If this causes regressions, the likely solution is to make TTI even more
conservative in its cost estimates, but test cases will help here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173342 91177308-0d34-0410-b5e6-96231b3b80d8
a cost function that seems both a bit ad hoc and also poorly suited to
evaluating constant expressions.
Notably, it is missing any support for trivial expressions such as
'inttoptr'. I could fix this routine, but it isn't clear to me all of
the constraints its other users are operating under.
The core protection that seems relevant here is avoiding the formation
of a select instruction with a further chain of select operations in
a constant expression operand. Just explicitly encode that constraint.
Also, update the comments and organization here to make it clear where
this needs to go -- this should be driven off of real cost measurements
which take into account the number of constant expressions and the
depth of the constant expression tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173340 91177308-0d34-0410-b5e6-96231b3b80d8
terms of cost rather than hoisting a single instruction.
This does *not* change the cost model! We still set the cost threshold
at 1 here, it's just that we track it by accumulating cost rather than
by storing an instruction.
The primary advantage is that we no longer leave no-op intrinsics in the
basic block. For example, this will now move both debug info intrinsics
and a single instruction, instead of only moving the instruction and
leaving a basic block with nothing but debug info intrinsics in it, with
those intrinsics then no longer ordered correctly with the hoisted value.
Instead, we now splice the entire conditional basic block's instruction
sequence.
This also places the code for checking the safety of hoisting next to
the code computing the cost.
Currently, the only observable side-effect of this change is that debug
info intrinsics are no longer abandoned. I'm not sure how to craft
a test case for this, and my real goal was the refactoring, but I'll
talk to Dave or Eric about how to add a test case for this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173339 91177308-0d34-0410-b5e6-96231b3b80d8