Remove a few old FIXMEs from the original commit of the Metadata/Value
split in r223802. These are commented-out assertions to the effect that
calls between mapValue and mapMetadata never return nullptr.
(The only behaviour change is that Mapper::mapSimpleMetadata memoizes
the nullptr return.)
When I originally rewrote the mapping code, I thought we could be
stricter in the new metadata hierarchy and never return nullptr when
RF_NullMapMissingGlobalValues was off. It's still not entirely clear to
me why these assertions failed (a few months ago, I had a theory that I
forgot to write down, but that's helping no one).
Understood or not, I no longer see how these commented-out assertions
would be useful. I'm relegating them to the annals of source control
before making significant changes to ValueMapper.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265282 91177308-0d34-0410-b5e6-96231b3b80d8
This adds an assertion to maintain the property from r265273. When
Mapper::mapSimpleMetadata calls Mapper::mapValue, it should not find its
way back to mapMetadataImpl. This guarantees that mapSimpleMetadata is
not involved in any recursion.
Since Mapper::mapValue calls out to arbitrary materializers, we need to
save a bit on the ValueMap to make this assertion effective.
There should be no functionality change here. This co-recursion should
already have been impossible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265276 91177308-0d34-0410-b5e6-96231b3b80d8
The main change is to delay materializing GlobalValue initializers from
Mapper::mapValue until Mapper::~Mapper. This effectively removes all
recursion from mapSimpleMetadata, as promised in r265270.
mapSimpleMetadata calls mapValue for ConstantAsMetadata nodes to
find the mapped constant, and now it shouldn't be possible for mapValue
to indirectly re-invoke mapMetadata. I'll add an assertion to that
effect in a follow-up (separated so that the assertion can easily be
reverted independently, if it comes to that).
This a step toward a broader goal: converting Mapper::mapMetadataImpl
from a recursive to an iterative algorithm.
When a BlockAddress points at a BasicBlock inside an unmaterialized
function body, we need to delay it until the function body is
materialized in Mapper::~Mapper. This commit creates a temporary
BasicBlock and returns a new BlockAddress, then RAUWs the temporary
BasicBlock once the real one is known. This situation should be
extremely rare since a
BlockAddress is usually used from within the function it's referencing
(and BlockAddress itself is rare).
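For illustration, a minimal sketch (made-up names, not from the patch)
of the rare shape in question, where a blockaddress is used from
outside the function it refers to; if @f's body has not been
materialized when @addr is mapped, the mapper hands back a BlockAddress
of a temporary BasicBlock and RAUWs it once the real block exists:

  @addr = global i8* blockaddress(@f, %target)

  define void @f() {
  entry:
    br label %target
  target:
    ret void
  }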
There should be no observable functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265273 91177308-0d34-0410-b5e6-96231b3b80d8
Split out a helper for mapping metadata without operands. This is any
metadata that is not an MDNode, and any MDNode where the answer is known
without looking at operands.
Through some weird twists, this function is co-recursive:
  mapSimpleMetadata
    => MapValue
    => materializeInitFor
    => linkFunctionBody
    => RemapInstructions
    => MapMetadata
    => mapSimpleMetadata
I plan to break the recursion in a follow-up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265270 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of checking on the fly during MapMetadata whether a subprogram is
needed, seed the ValueMap with `nullptr` up-front.
There is a small hypothetical functionality change. Previously, calling
MapMetadataOp on a node whose "scope:" chain led to an unneeded
subprogram would return nullptr. However, if that were ever called,
then the subprogram would be needed; a situation that the IRMover is
supposed to avoid a priori!
Besides cleaning up the code a little, this restores a nice property:
MapMetadataOp returns the same as MapMetadata.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265229 91177308-0d34-0410-b5e6-96231b3b80d8
Support seeding a ValueMap with nullptr for Metadata entries, a
situation I didn't consider in the Metadata/Value split.
I added a ValueMapper::getMappedMD accessor that returns an
Optional<Metadata*> with the mapped (possibly null) metadata. IRMover
needs to use this to avoid modifying the map when it's checking for
unneeded subprograms. I updated a call from bugpoint since I find the
new code clearer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265228 91177308-0d34-0410-b5e6-96231b3b80d8
They're not necessary (since the stack pointer is trivially restored on
return), and the way LLVM inserts the stackrestore calls breaks the
IR (we get a stackrestore between the deoptimize call and the return).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265101 91177308-0d34-0410-b5e6-96231b3b80d8
They're not necessary (since the lifetime of the alloca is trivially
over due to the return), and the way LLVM inserts the lifetime.end
markers breaks the IR (we get a lifetime end marker between the
deoptimize call and the return).
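As a rough illustration (hypothetical snippet, not from the patch): a
call to @llvm.experimental.deoptimize must be immediately followed by a
ret of its value, so an inserted lifetime.end marker in between leaves
the IR malformed:

  declare i32 @llvm.experimental.deoptimize.i32(...)
  declare void @llvm.lifetime.end(i64, i8*)

  define i32 @broken(i8* %p) {
    %v = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"() ]
    ; invalid: nothing may sit between the deoptimize call and the ret
    call void @llvm.lifetime.end(i64 1, i8* %p)
    ret i32 %v
  }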
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265100 91177308-0d34-0410-b5e6-96231b3b80d8
"blockaddress" can not apply to an external function. All
blockaddress constant uses must belong to the same module as the
definition of the target function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265061 91177308-0d34-0410-b5e6-96231b3b80d8
Only force "extern" linkage if the function used to be a definition
in the source module. Declarations keep their original linkage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265043 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
As discussed on llvm-dev[1].
This change adds the basic boilerplate code around having this intrinsic
in LLVM:
- Changes in Intrinsics.td, and the IR Verifier
- A lowering pass to lower @llvm.experimental.guard to normal
  control flow
- Inliner support
[1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
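For reference, a minimal sketch of what a guarded function might look
like (illustrative only; the lowering pass expands the guard into an
explicit branch whose failing path deoptimizes):

  declare void @llvm.experimental.guard(i1, ...)

  define void @guarded(i32 %v) {
  entry:
    %cond = icmp slt i32 %v, 100
    call void (i1, ...) @llvm.experimental.guard(i1 %cond) [ "deopt"(i32 %v) ]
    ret void
  }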
Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18527
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264976 91177308-0d34-0410-b5e6-96231b3b80d8
Commit r260791 contained an error in that it would introduce a cross-module
reference in the old module. It also introduced O(N^2) complexity in the
module cloner by requiring the entire module to be visited for each function.
Fix both of these problems by avoiding use of the CloneDebugInfoMetadata
function (which is only designed to do intra-module cloning) and cloning
function-attached metadata in the same way that we clone all other metadata.
Differential Revision: http://reviews.llvm.org/D18583
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264935 91177308-0d34-0410-b5e6-96231b3b80d8
Prior to this patch, the MemorySSA caching visitor would cache all
calls that it visited. When paired with phi optimization, this can be
problematic. Consider:
  define void @foo() {
  ; 1 = MemoryDef(liveOnEntry)
    call void @clobberFunction()
    br i1 undef, label %if.end, label %if.then
  if.then:
  ; MemoryUse(??)
    call void @readOnlyFunction()
  ; 2 = MemoryDef(1)
    call void @clobberFunction()
    br label %if.end
  if.end:
  ; 3 = MemoryPhi(...)
  ; MemoryUse(?)
    call void @readOnlyFunction()
    ret void
  }
When optimizing MemoryUse(?), we visit defs 1 and 2, so we note to
cache them later. We ultimately end up not being able to optimize
past the Phi, so we set MemoryUse(?) to point to the Phi. We then
cache the clobbering call for def 1 to be the Phi.
This commit changes this behavior so that we wipe out any calls
added to VisitedCalls while visiting the defs of a phi we couldn't
optimize.
Aside: With this patch, we now can bootstrap clang/LLVM without a
single MemorySSA verifier failure. Woohoo. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264820 91177308-0d34-0410-b5e6-96231b3b80d8
This patch teaches the caching MemorySSA walker a few things:
1. Not to walk Phis we've walked before. It seems that we tried to do
this before, but it didn't work so well in cases like:
  define void @foo() {
    %1 = alloca i8
    %2 = alloca i8
    br label %begin
  begin:
  ; 3 = MemoryPhi({%0,liveOnEntry},{%end,2})
  ; 1 = MemoryDef(3)
    store i8 0, i8* %2
    br label %end
  end:
  ; MemoryUse(?)
    load i8, i8* %1
  ; 2 = MemoryDef(1)
    store i8 0, i8* %2
    br label %begin
  }
Because we wouldn't put Phis in Q.Visited until we tried to visit them.
So, when trying to optimize MemoryUse(?):
- We would visit 3 above
- ...Which would make us put {%0,liveOnEntry} in Q.Visited
- ...Which would make us visit {%0,liveOnEntry}
- ...Which would make us put {%end,2} in Q.Visited
- ...Which would make us visit {%end,2}
- ...Which would make us visit 3
- ...Which would realize we've already visited everything in 3
- ...Which would make us conservatively return 3.
In the added test-case (@looped_visitedonlyonce), this behavior would
cause us to give incorrect results. Specifically, we'd visit 4 twice
in the same query, but on the second visit, we'd skip while.cond because
it had been visited, visit if.then/if.then2, and cache "1" as the
clobbering def on the way back.
2. If we try to walk the defs of a {Phi,MemLoc} and see it has been
visited before, just hand back the Phi we're trying to optimize.
I promise this isn't as terrible as it seems. :)
We now insert {Phi,MemLoc} pairs just before walking the Phi's upward
defs. So, we check the cache for the {Phi,MemLoc} pair before checking
if we've already walked the Phi.
The {Phi,MemLoc} pair is (almost?) always guaranteed to have a cache
entry if we've already fully walked it, because we cache as we go.
So, if the {Phi,MemLoc} pair isn't in cache, either:
(a) we must be in the process of visiting it (in which case, we can't
give a better answer in a cache-as-we-go DFS walker)
(b) we visited it, but didn't cache it on the way back (...which seems
to require `ModifyingAccess` to not dominate `StartingAccess`,
so I'm 99% sure that would be an error. If it's not an error, I
haven't been able to get it to happen locally, so I suspect it's
rare.)
- - - - -
As a consequence of this change, we no longer skip upward defs of phis,
so we can kill the `VisitedOnlyOne` check. This gives us better accuracy
than we had before, at the cost of potentially doing a bit more work
when we have a loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264814 91177308-0d34-0410-b5e6-96231b3b80d8
Since we have moved to a model where functions are imported in bulk from
each source module after making summary-based importing decisions, there
is no longer a need to link metadata as a postpass, and all users have
been removed.
This essentially reverts r255909 and follow-on fixes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264763 91177308-0d34-0410-b5e6-96231b3b80d8
When eliminating or merging almost empty basic blocks, the existence of
non-trivial PHI nodes is currently used to recognize potential loops of
which the block is the header, and to keep the block. However, the
current algorithm fails if the loop's exit condition is evaluated only
with volatile values and hence there are no PHI nodes in the header.
Especially when such a loop is an outer loop of a nested loop, the loop
nest is collapsed into a single loop, which prevents later optimizations
from being applied (e.g., transforming nested loops into simplified
forms and loop vectorization).
The patch augments the existing PHI-node-based check with a pre-test
that checks whether the BB actually belongs to the set of loop headers,
and keeps the block if it does.
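A rough sketch of the problematic shape (hypothetical example, not
taken from the patch's tests): the outer header contains no PHI nodes
because the exit conditions come from volatile loads, yet it must not
be folded away:

  define void @nested(i32* %flag) {
  entry:
    br label %outer
  outer:                      ; outer loop header: almost empty, no PHIs
    br label %inner
  inner:
    %v = load volatile i32, i32* %flag
    %c = icmp ne i32 %v, 0
    br i1 %c, label %inner, label %outer.latch
  outer.latch:
    %w = load volatile i32, i32* %flag
    %d = icmp ne i32 %w, 0
    br i1 %d, label %outer, label %exit
  exit:
    ret void
  }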
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264697 91177308-0d34-0410-b5e6-96231b3b80d8
Personality is copied as part of copyFunctionAttributes, but it is
invalid on a declaration. Remove the personality attribute if the
function body is not cloned.
Also add a verifier run over output modules in the llvm-split tool.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264667 91177308-0d34-0410-b5e6-96231b3b80d8
When eliminating or merging almost empty basic blocks, the existence of
non-trivial PHI nodes is currently used to recognize potential loops of
which the block is the header, and to keep the block. However, the
current algorithm fails if the loop's exit condition is evaluated only
with volatile values and hence there are no PHI nodes in the header.
Especially when such a loop is an outer loop of a nested loop, the loop
nest is collapsed into a single loop, which prevents later optimizations
from being applied (e.g., transforming nested loops into simplified
forms and loop vectorization).
The patch augments the existing PHI-node-based check with a pre-test
that checks whether the BB actually belongs to the set of loop headers,
and keeps the block if it does.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264596 91177308-0d34-0410-b5e6-96231b3b80d8
This changes RS4GC to lower calls to ``@llvm.experimental.deoptimize``
to gc.statepoints wrapping ``__llvm_deoptimize``, and changes
``callsGCLeafFunction`` to recognize ``@llvm.experimental.deoptimize``
as a non-GC-leaf function.
I've had to hard-code the ``"__llvm_deoptimize"`` name in
RewriteStatepointsForGC, since ``TargetLibraryInfo`` is available only
during codegen. This isn't without precedent in the codebase, so I'm
not overtly concerned.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264456 91177308-0d34-0410-b5e6-96231b3b80d8
There are a few bugs in the walker that this patch addresses.
Primarily:
- Caching can break when we have multiple BBs without phis
- We weren't optimizing some phis properly
- Because of how the DFS iterator works, there were times where we
wouldn't cache any results of our DFS
I left the test cases with FIXMEs in, because I'm not sure how much
effort it will take to get those to work (read: We'll probably
ultimately have to end up redoing the walker, or we'll have to come up
with some creative caching tricks), and more test coverage = better.
Differential Revision: http://reviews.llvm.org/D18065
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264180 91177308-0d34-0410-b5e6-96231b3b80d8
When you have multiple LCSSA (single-operand) PHIs that are converted
into two-operand PHIs due to versioning, only assert that the PHI
currently being converted has a single operand. I.e. we don't want to
check PHIs that were converted earlier in the loop.
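A small sketch (made-up example) of the shape involved: LCSSA inserts
single-operand PHIs in exit blocks, and versioning adds a second
incoming value to PHIs it has already rewritten, so only the PHI being
converted right now can be asserted to have one operand:

  define i32 @count(i32 %n) {
  entry:
    br label %loop
  loop:
    %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
    %iv.next = add i32 %iv, 1
    %cmp = icmp slt i32 %iv.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ; LCSSA: a single operand before versioning adds the second one
    %iv.lcssa = phi i32 [ %iv.next, %loop ]
    ret i32 %iv.lcssa
  }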
Fixes PR27023.
Thanks to Karl-Johan Karlsson for the minimized testcase!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264081 91177308-0d34-0410-b5e6-96231b3b80d8
If we have a BB with only MemoryDefs, live-in calculations will ignore
it. This means we get results like this:
  define void @foo(i8* %p) {
  ; 1 = MemoryDef(liveOnEntry)
    store i8 0, i8* %p
    br i1 undef, label %if.then, label %if.end
  if.then:
  ; 2 = MemoryDef(1)
    store i8 1, i8* %p
    br label %if.end
  if.end:
  ; 3 = MemoryDef(1)
    store i8 2, i8* %p
    ret void
  }
...When there should be a MemoryPhi in the `if.end` BB.
This patch fixes that behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263991 91177308-0d34-0410-b5e6-96231b3b80d8
A sinpi/cospi pair can be replaced with sincospi to remove unnecessary
computations. However, we need to make sure that the calls are within
the same function!
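For example (a sketch, not the patch's test), only a pair like the
following, where both calls live in the same function, is a candidate
for the combined form:

  declare double @sinpi(double)
  declare double @cospi(double)

  define double @same_fn(double %x) {
    %s = call double @sinpi(double %x)
    %c = call double @cospi(double %x)
    %r = fadd double %s, %c
    ret double %r
  }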
This fixes PR26993.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263875 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
ThinLTO has been relying on linkInModule to import selected functions.
However, a lot of "magic" was hidden in linkInModule and the IRMover,
which would rename and promote global variables on the fly.
This moves to an approach where the steps are decoupled and the
client is responsible for specifying the list of globals to import.
As a consequence, some tests are changed because they were relying on
the previous behavior, which imported the definition of *every*
single global without any control on the client side.
Now the burden is on the client to decide if a global has to be imported
or not.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18122
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263863 91177308-0d34-0410-b5e6-96231b3b80d8
The loop on IVOperand's incoming values assumes IVOperand to be an
induction variable on the loop over which `S Pred X` is invariant;
otherwise loop invariant incoming values to IVOperand are not guaranteed
to dominate the comparison.
This fixes PR26973.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263827 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Use the new LoopVersioning facility (D16712) to add noalias metadata in
the vector loop if we versioned with memchecks. This can enable some
optimization opportunities further down the pipeline (see the included
test or the benchmark improvement quoted in D16712).
The test also covers the bug I had in the initial version in D16712.
The vectorizer did not previously use LoopVersioning. The reason is
that the vectorizer performs its transformations in a single shot. It
creates an empty single-block vector loop that it then populates with
the widened, if-converted instructions. Thus creating an intermediate
versioned scalar loop seems wasteful.
So this patch (rather than bringing in LoopVersioning fully) adds a
special interface to LoopVersioning to allow the vectorizer to add
no-alias annotation while still performing its own versioning.
As the vectorizer propagates metadata from the instructions in the
original loop to the vector instructions, we also check the pointer in
the original instruction and see if LoopVersioning can add no-alias
metadata based on the issued memchecks.
Reviewers: hfinkel, nadav, mzolotukhin
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263744 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
If we decide to version a loop to benefit a transformation, it makes
sense to record the now non-aliasing accesses in the newly versioned
loop. This allows non-aliasing information to be used by subsequent
passes.
One example is 456.hmmer in SPECint2006 where, after loop distribution,
we vectorize one of the newly distributed loops. To vectorize, we
version this loop to fully disambiguate may-aliasing accesses. If we
add the noalias markers, we can use the same information in a later DSE
pass to eliminate some dead stores which amounts to ~25% of the
instructions of this hot memory-pipeline-bound loop. The overall
performance improves by 18% on our ARM64.
The scoped noalias annotation is added in LoopVersioning. The patch
then enables this for loop distribution. A follow-on patch will enable
it for the vectorizer. Eventually this should be run by default when
versioning the loop but first I'd like to get some feedback whether my
understanding and application of scoped noalias metadata is correct.
Essentially my approach was to have a separate alias domain for each
versioning of the loop. For example, if we first version in loop
distribution and then in vectorization of the distributed loops, we have
a different set of memchecks for each versioning. By keeping the scopes
in different domains they can conveniently be defined independently
since different alias domains don't affect each other.
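Roughly, the metadata shape looks like this (illustrative names, not
the exact output of the patch): one domain per versioning, one scope
per pointer group, and the accesses in the versioned loop carry the
corresponding !alias.scope / !noalias lists:

  !0 = distinct !{!0, !"LVer.domain"}   ; domain for this versioning
  !1 = distinct !{!1, !0}               ; scope for accesses through %a
  !2 = distinct !{!2, !0}               ; scope for accesses through %b
  !3 = !{!1}
  !4 = !{!2}
  ; in the versioned (no-alias) loop:
  ;   store double %x, double* %a, !alias.scope !3, !noalias !4
  ;   %y = load double, double* %b, !alias.scope !4, !noalias !3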
As written, I also have a separate domain for each loop. This is not
necessary and we could save some metadata here by using the same domain
across the different loops. I don't think it's a big deal either way.
Probably the best is to review the tests first to see if I mapped this
problem correctly to scoped noalias markers. I have plenty of comments
in the tests.
Note that the interface is prepared for the vectorizer, which needs the
annotateInstWithNoAlias API. The vectorizer does not use LoopVersioning
so we need a way to pass in the versioned instructions. This is also
why the maps have to become part of the object state.
Also currently, we only have an AA-aware DSE after the vectorizer if we
also run the LTO pipeline. Depending on how widely this triggers, we may
want to schedule a DSE toward the end of the regular pass pipeline.
Reviewers: hfinkel, nadav, ashutosh.nema
Subscribers: mssimpso, aemerson, llvm-commits, mcrosier
Differential Revision: http://reviews.llvm.org/D16712
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263743 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to D18133, where we allowed profile weights on select instructions.
This extends that change to also allow the 'unpredictable' attribute of branches to apply to selects.
A test to check that 'unpredictable' metadata is preserved when cloning instructions was checked in at:
http://reviews.llvm.org/rL263648
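For example (a sketch), the metadata is attached to the select the same
way it already was to branches:

  define i32 @pick(i1 %cond, i32 %a, i32 %b) {
    %r = select i1 %cond, i32 %a, i32 %b, !unpredictable !0
    ret i32 %r
  }
  !0 = !{}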
Differential Revision: http://reviews.llvm.org/D18220
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263716 91177308-0d34-0410-b5e6-96231b3b80d8
This was a latent bug that got exposed by the change to add LoopSimplify
as a dependence to LoopLoadElimination. Since LoopInfo was corrupted
after LV, LoopSimplify mis-compiled nbench in the test-suite (more
details in the PR).
The problem was that when we created the blocks for predicated stores,
we didn't add them to any loop.
The original testcase for store predication provides coverage for this
assuming we verify LI on the way out of LV.
Fixes PR26952.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263565 91177308-0d34-0410-b5e6-96231b3b80d8
(Resubmitting after fixing a missing-file issue)
With the changes in r263275, there are now more than just functions in
the summary. Completed the renaming of data structures (started in
r263275) to reflect the wider scope. In particular, changed the
FunctionIndex* data structures to ModuleIndex*, and renamed related
variables and comments. Also renamed the files to reflect the changes.
A companion clang patch will immediately succeed this patch to reflect
this renaming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263513 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Specifically, when we perform runtime loop unrolling of a loop that
contains a convergent op, we can only unroll k times, where k divides
the loop trip multiple.
Without this change, we'll happily unroll e.g. the following loop
  for (int i = 0; i < N; ++i) {
    if (i == 0) convergent_op();
    foo();
  }

into

  int i = 0;
  if (N % 2 == 1) {
    convergent_op();
    foo();
    ++i;
  }
  for (; i < N - 1; i += 2) {
    if (i == 0) convergent_op();
    foo();
    foo();
  }.
This is unsafe, because we've just added a control-flow dependency to
the convergent op in the prelude.
In general, runtime unrolling loops that contain convergent ops is safe
only if we don't have to emit a prelude, which occurs when the unroll count
divides the trip multiple.
Reviewers: resistor
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17526
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263509 91177308-0d34-0410-b5e6-96231b3b80d8
With the changes in r263275, there are now more than just functions in
the summary. Completed the renaming of data structures (started in
r263275) to reflect the wider scope. In particular, changed the
FunctionIndex* data structures to ModuleIndex*, and renamed related
variables and comments. Also renamed the files to reflect the changes.
A companion clang patch will immediately succeed this patch to reflect
this renaming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263490 91177308-0d34-0410-b5e6-96231b3b80d8
As noted in:
https://llvm.org/bugs/show_bug.cgi?id=26636
This doesn't accomplish anything on its own. It's the first step towards preserving
and using branch weights with selects.
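A minimal sketch of what this allows (hypothetical weights):

  define i32 @pick(i1 %cond, i32 %a, i32 %b) {
    %r = select i1 %cond, i32 %a, i32 %b, !prof !0
    ret i32 %r
  }
  !0 = !{!"branch_weights", i32 1, i32 2000}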
The next step would be to make sure we're propagating the info in all of the other
places where we create selects (SimplifyCFG, InstCombine, etc). I don't think there's
an easy fix to make this happen; we have to look at each transform individually to
determine how to correctly propagate the weights.
Along with that step, we need to then use the weights when making subsequent transform
decisions such as discussed in http://reviews.llvm.org/D16836.
The inliner test is independent but closely related. It verifies that metadata is
preserved when both branches and selects are cloned.
Differential Revision: http://reviews.llvm.org/D18133
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263482 91177308-0d34-0410-b5e6-96231b3b80d8
This reapplies r263258, which was reverted in r263321 because
of issues on the Clang side.
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263393 91177308-0d34-0410-b5e6-96231b3b80d8