Commit Graph

339 Commits

Author SHA1 Message Date
Chandler Carruth
df1cbec93f [PM/ThinLTO] Port the ThinLTO pipeline (both components) to the new PM.
Based on the original patch by Davide, but I've adjusted the API exposed
to just be different entry points rather than exposing more state
parameters. I've factored all the common logic out so that we don't have
any duplicate pipelines; we just stitch them together in different ways.
I think this makes the build easier to reason about and understand.

This adds a direct method for getting the module simplification pipeline
as well as a method to get the optimization pipeline. While not my
express goal, this seems nice and gives a good place to comment on the
restrictions that are imposed on them.
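
As a rough, self-contained illustration of the stitching described above (toy
names and toy pass lists only, not the actual PassBuilder API), the factoring
amounts to something like this:

```
// Toy sketch: one shared "simplification" piece and one shared "optimization"
// piece, combined differently for the default and ThinLTO pipelines.
#include <string>
#include <vector>

using Pipeline = std::vector<std::string>;

static Pipeline simplification() { return {"inliner", "sroa", "gvn"}; }
static Pipeline optimization() { return {"vectorize", "unroll"}; }

static Pipeline concat(Pipeline A, const Pipeline &B) {
  A.insert(A.end(), B.begin(), B.end());
  return A;
}

Pipeline buildDefault() { return concat(simplification(), optimization()); }
Pipeline buildThinLTOPreLink() { return simplification(); }  // stop before optimization
Pipeline buildThinLTO() { return concat(simplification(), optimization()); }  // post-link
```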

I did make some minor changes to the way the pipelines are structured
here, but hopefully not ones that are significant or controversial:

1) I sunk the PGO indirect call promotion to only be run when we have
   PGO enabled (or as part of the special ThinLTO pipeline).

2) I made the extra GlobalOpt run in ThinLTO just happen all the time
   and at a slightly more powerful place (before we remove available
   externally functions). This seems like general goodness and not a big
   compile time sink, so it didn't make sense to *only* use it in
   ThinLTO. Fewer differences in the pipeline make everything simpler
   IMO.

3) I hoisted the ThinLTO stop point pre-link above the RPO function
   attr inference. The RPO inference won't infer anything terribly
   meaningful pre-link (recursiveness?), so it didn't make a lot of
   sense. But if the placement of RPO inference starts to matter, we
   should move it to the canonicalization phase anyway, which seems like
   a better place for it (and there is a FIXME to this effect!). But
   that seemed a bridge too far for this patch.

If we ever need to parameterize these pipelines more heavily, we can
always sink the logic to helper functions with parameters to keep those
parameters out of the public API. But the changes above seemed minor
enough that we could possibly get away without the parameters entirely.

I added support for parsing 'thinlto' and 'thinlto-pre-link' names in
pass pipelines to make it easy to test these routines and play with them
in larger pipelines. I also added a really basic manifest-of-passes test
that shows exactly how the pipelines behave and makes any updates to them
clear.

Lastly, this factoring does introduce a nesting layer of module pass
managers in the default pipeline. I don't think this is a big deal and
the flexibility of decoupling the pipelines seems easily worth it.

Differential Revision: https://reviews.llvm.org/D33540

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304407 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-01 11:39:39 +00:00
Chandler Carruth
88001205b7 [PM] Enable the new simple loop unswitch pass in the new pass manager
(where it is the only realistic option).

This passes the LLVM test suite for me, but I'm clearly still hammering
on this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@303952 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-26 01:24:11 +00:00
Easwaran Raman
de37aad1ce [PM] Add ProfileSummaryAnalysis as a required pass in the new pipeline.
Differential revision: https://reviews.llvm.org/D32768

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302170 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-04 16:58:45 +00:00
Chandler Carruth
bde56a9699 Disable GVN Hoist due to still more bugs being found in it. There is
also a discussion about exactly what we should do prior to re-enabling
it.

The current bug is http://llvm.org/PR32821 and the discussion about this
is in the review thread for r300200.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301505 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-27 00:28:03 +00:00
Filipe Cabecinhas
70c9b6a6d8 Simplify the CFG after loop pass cleanup.
Summary:
Otherwise we might end up with some empty basic blocks or
single-entry-single-exit basic blocks.

This fixes PR32085

Reviewers: chandlerc, danielcdh

Subscribers: mehdi_amini, RKSimon, llvm-commits

Differential Revision: https://reviews.llvm.org/D30468

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301395 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-26 12:02:41 +00:00
Piotr Padlewski
698667025a Handle invariant.group.barrier in BasicAA
Summary:
llvm.invariant.group.barrier returns a pointer that must-aliases the
pointer it takes. It can't be marked with the `returned` attribute,
because it would be easily removed. The other reason is that
only Alias Analysis can know about this: if any other
pass knew it, the result would be replaced with its
argument, which would be invalid.

We can think of the returned pointer as something that must-aliases the
argument but doesn't have to be bitwise identical to it.

Reviewers: dberlin, chandlerc, hfinkel, sanjoy

Subscribers: reames, nlewycky, rsmith, anna, amharc

Differential Revision: https://reviews.llvm.org/D31585

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301227 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-24 19:37:17 +00:00
Piotr Padlewski
6f93f61542 Remove readnone from invariant.group.barrier
Summary:
The readnone attribute would allow CSE of two barriers with
the same argument, which is invalid, as the following example shows:

    #include <new>  // placement new and std::launder (C++17)

    struct Base {
      virtual int foo() { return 42; }
    };

    struct Derived1 : Base {
      int foo() override { return 50; }
    };

    struct Derived2 : Base {
      int foo() override { return 100; }
    };

    void foo() {
      Base *x = new Base{};
      new (x) Derived1{};
      int a = std::launder(x)->foo();
      new (x) Derived2{};
      int b = std::launder(x)->foo();
    }

Here the two calls to std::launder will each produce a call to
@llvm.invariant.group.barrier. If those were CSE'd into one call,
devirtualization would devirtualize the second call to Derived1::foo()
instead of Derived2::foo().

Reviewers: chandlerc, dberlin, hfinkel

Subscribers: llvm-commits, rsmith, amharc

Differential Revision: https://reviews.llvm.org/D31531

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300101 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-12 20:45:12 +00:00
Rafael Espindola
37e8db6fe5 Bring back r297624.
The issue was just a missing REQUIRES in the test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297661 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-13 20:00:25 +00:00
Rafael Espindola
5682e3ee66 Revert "Fix crash when multiple raw_fd_ostreams to stdout are created."
This reverts commit r297624.
It was failing on the bots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297657 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-13 19:38:32 +00:00
Rafael Espindola
b8ff3dcbf6 Fix crash when multiple raw_fd_ostreams to stdout are created.
If raw_fd_ostream is constructed with the path of "-", it claims
ownership of the stdout file descriptor. This means that it closes
stdout when it is destroyed. If there are multiple users of
raw_fd_ostream wrapped around stdout, then a crash can occur because
of operations on a closed stream.

An example of this would be running something like "clang -S -o - -MD
-MF - test.cpp". Alternatively, using outs() (which creates a local
version of raw_fd_ostream wrapping stdout) anywhere, combined with such a
stream usage, would cause the crash.

The fix duplicates the stdout file descriptor when used within
raw_fd_ostream, so that only that particular descriptor is closed when
the stream is destroyed.
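
A minimal sketch of the underlying idea (not the LLVM patch itself; the helper
name is made up):

```
// Duplicate the stdout descriptor so that the stream wrapper owns (and may
// close) its own copy, leaving the real stdout untouched.
#include <cstdio>
#include <unistd.h>

int acquireStdoutCopy() {
  int OwnedFD = dup(fileno(stdout));  // private duplicate of fd 1
  return OwnedFD;                     // safe for the owning stream to close later
}
```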

Patch by James Henderson!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297624 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-13 14:45:06 +00:00
Chandler Carruth
4fea871248 [PM/Inliner] Make the new PM's inliner process call edges across an
entire SCC before iterating on newly-introduced call edges resulting
from any inlined function bodies.

This more closely matches the behavior of the old PM's inliner. While it
wasn't really clear to me initially, this behavior is actually essential
to the inliner behaving reasonably in its current design.

Because the inliner is fundamentally a bottom-up inliner and all of its
cost modeling is designed around that, it often runs into trouble within
an SCC where we don't have any meaningful bottom-up ordering to use. In
addition to potentially cyclic, infinite inlining that we block with the
inline history mechanism, it can also take seemingly simple call graph
patterns within an SCC and turn them into *insanely* large functions by
accidentally working top-down across the SCC without any of the
threshold limitations that traditional top-down inliners use.

Consider this diabolical monster.cpp file that Richard Smith came up
with to help demonstrate this issue:
```
template <int N> extern const char *str;

void g(const char *);

template <bool K, int N> void f(bool *B, bool *E) {
  if (K)
    g(str<N>);
  if (B == E)
    return;
  if (*B)
    f<true, N + 1>(B + 1, E);
  else
    f<false, N + 1>(B + 1, E);
}
template <> void f<false, MAX>(bool *B, bool *E) { return f<false, 0>(B, E); }
template <> void f<true, MAX>(bool *B, bool *E) { return f<true, 0>(B, E); }

extern bool *arr, *end;
void test() { f<false, 0>(arr, end); }
```

When compiled with '-DMAX=N' for various values of N, this will create an SCC
with a reasonably large number of functions. Previously, the inliner would try
to exhaust the inlining candidates in a single function before moving on. This,
unfortunately, turns it into a top-down inliner within the SCC. Because our
thresholds were never built for that, we will incrementally decide that it is
always worth inlining and proceed to flatten the entire SCC into that one
function.

What's worse, we'll then proceed to the next function, and do the exact same
thing except we'll skip the first function, and so on. And at each step, we'll
also make some of the constant factors larger, which is awesome.

The fix in this patch is the obvious one which makes the new PM's inliner use
the same technique used by the old PM: consider all the call edges across the
entire SCC before beginning to process call edges introduced by inlining. The
result of this is essentially to distribute the inlining across the SCC so that
every function incrementally grows toward the inline thresholds rather than
allowing the inliner to grow one of the functions vastly beyond the threshold.
The code for this is a bit awkward, but it works out OK.
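
A self-contained toy of that ordering (the edges and the "introduced" edge are
made up; this is not the inliner's real data structure):

```
// Seed the worklist with every pre-existing call edge from every function in
// the SCC before processing anything; edges introduced by inlining are
// appended and therefore only visited after all of the original edges.
#include <cstdio>
#include <string>
#include <vector>

int main() {
  std::vector<std::vector<std::string>> SCC = {{"a->b"}, {"b->c"}, {"c->a"}};
  std::vector<std::string> Worklist;
  for (const auto &FunctionEdges : SCC)
    Worklist.insert(Worklist.end(), FunctionEdges.begin(), FunctionEdges.end());

  for (size_t I = 0; I < Worklist.size(); ++I) {
    std::printf("considering %s\n", Worklist[I].c_str());
    if (Worklist[I] == "a->b")                  // pretend this inlining created a new edge
      Worklist.push_back("a->c (introduced)");  // visited only after b->c and c->a
  }
  return 0;
}
```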

We could consider in the future doing something more powerful here such as
prioritized order (via lowest cost and/or profile info) and/or a code-growth
budget per SCC. However, both of those would require really substantial work
both to design the system in a way that wouldn't break really useful
abstraction decomposition properties of the current inliner and to be tuned
across a reasonably diverse set of code and workloads. It also seems really
risky in many ways. I have only found a single real-world file that triggers
the bad behavior here and it is generated code that has a pretty pathological
pattern. I'm not worried about the inliner not doing an *awesome* job here as
long as it does *ok*. On the other hand, the cases that will be tricky to get
right in a prioritized scheme with a budget will be more common and idiomatic
for at least some frontends (C++ and Rust at least). So while these approaches
are still really interesting, I'm not in a huge rush to go after them. Staying
even closer to the existing PM's behavior, especially when it is this easy to
do, seems like the right short-to-medium-term approach.

I don't really have a test case that makes sense yet... I'll try to find a
variant of the IR produced by the monster template metaprogram that is both
small enough to be sane and large enough to clearly show when we get this wrong
in the future. But I'm not confident this exists. And the behavior change here
*should* be unobservable without snooping on debug logging. So there isn't
really much to test.

The test case updates come from two incidental changes:
1) We now visit functions in an SCC in the opposite order. I don't think there
   really is a "right" order here, so I just update the test cases.
2) We no longer compute some analyses when an SCC has no call instructions that
   we consider for inlining.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297374 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-09 11:35:40 +00:00
Zachary Turner
985631dcc8 Teach lit to expand glob expressions.
This will enable removing hacks throughout the codebase
in clang and compiler-rt that feed multiple inputs to a
testing utility by globbing, all of which are currently either
disabled on Windows or rely on xargs / find hacks.

Differential Revision: https://reviews.llvm.org/D30380

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296904 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-03 18:55:24 +00:00
Daniel Berlin
894edf6642 NewGVN: Add debug counter for value numbering
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296665 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-01 19:59:26 +00:00
Daniel Jasper
7c861d9e37 s/REQUIRES: Asserts/REQUIRES: asserts/
Other than this, we consistently use lower case.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295623 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-19 23:26:00 +00:00
Daniel Berlin
1ca7d1765a Re-add debugcounter.ll with Requires: Asserts so that it only triggers when asserts are on
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295598 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-19 06:45:02 +00:00
Daniel Berlin
87d7001dbd Which, in turn, causes build bots that have it unexpectedly passing to fail. So remove debugcounter.ll for now
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295597 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-19 04:56:07 +00:00
Daniel Berlin
996ea533d7 XFAIL this test until we figure out what to do here, since it will fail if NDEBUG is defined
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295596 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-19 04:55:02 +00:00
Daniel Berlin
2287817fa9 Add a DebugCounter for PredicateInfo renaming, and an associated test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295594 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-19 04:29:01 +00:00
Peter Collingbourne
4f50278f40 opt: Rename -default-data-layout flag to -data-layout and make it always override the layout.
There isn't much point in a flag that only works if the data layout is empty.

Differential Revision: https://reviews.llvm.org/D30014

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295468 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-17 17:36:52 +00:00
Brian Cain
24ee76184b Correct a typo, s/hosting/hoisting/
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295066 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-14 16:41:10 +00:00
Davide Italiano
0125c63670 [PM] Hook up the instrumented PGO machinery in the new PM.
Differential Revision:  https://reviews.llvm.org/D29308

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294955 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-13 15:26:22 +00:00
Chandler Carruth
1f7ef68a4e [PM] Add devirtualization-based iteration utility into the new PM's
default pipeline.

A clang with this patch built with ASan and asserts can build all of the
test-suite as well, so it seems to not uncover any latent problems.

Differential Revision: https://reviews.llvm.org/D29853

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294888 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-12 05:38:04 +00:00
Chandler Carruth
9c2410924c [PM] Enable GlobalsAA in the new PM's pipeline by default.
All the invalidation issues and bugs in this seem to be fixed, it has
survived a full build of the test suite plus SPEC with asserts and ASan
enabled on the Clang binary used.

Differential Revision: https://reviews.llvm.org/D29815

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294887 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-12 05:34:04 +00:00
Chandler Carruth
350138bff9 [PM] Relax the patterns used in the new test I added because some
compilers don't print the typedef name.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294729 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-10 08:48:50 +00:00
Chandler Carruth
1587018791 [PM] Fix a bug in the new loop PM when handling functions with no loops.
Without any loops, we don't even bother to build the standard analyses
used by loop passes. Without these, we can't run loop analyses or
invalidate them properly. Unfortunately, we did these things in the
wrong order, which would allow a loop analysis manager's proxy to be
built without the standard analyses having been built. When we went to do
the invalidation in the proxy, things would fall apart. In the test case
provided, it would actually crash.

The fix is to carefully check for loops first, and to in fact build the
standard analyses before building the proxy. This allows it to
correctly trigger invalidation for those standard analyses.

An alternative might be to look at whether there are any loops
when doing invalidation, but this doesn't work when the loop
pipeline run deletes the last loop. I've even included that as a test
case. It is both simpler and more robust to defer building the proxy
until the standard set of analyses, and indeed loops, are definitely
present.

This bug was uncovered by enabling GlobalsAA in the pipeline.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294728 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-10 08:26:58 +00:00
Chandler Carruth
88c0fa92a7 [PM] Add Argument Promotion to the pass pipeline.
This needs an explicit require of the optimization remark emitter before
loop pass pipelines containing LICM, as we no longer get it from the
inliner -- Argument Promotion may invalidate it. Technically the inliner
could also have broken this, but it never came up in testing.

Differential Revision: https://reviews.llvm.org/D29595

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294670 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-09 23:54:57 +00:00
Chandler Carruth
90fe7e78dc [PM] Port LoopLoadElimination to the new pass manager and wire it into
the main pipeline.

This is a very straightforward port. Nothing weird or surprising.

This brings the number of passes missing from the new PM's pipeline down
to three.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293249 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-27 01:32:26 +00:00
Chandler Carruth
f8e4cd6c85 [PM] Flesh out almost all of the late loop passes.
With this the per-module pass pipeline is *extremely* close to the
legacy PM. The missing pieces are:
- PruneEH (or some equivalent)
- ArgumentPromotion
- LoopLoadElimination
- LoopUnswitch

I'm going to work through those in essentially that order but this seems
like a worthwhile incremental step toward the end state.

One difference between what I have here and the legacy PM is that I've
consolidated some of the per-function passes at the very end of the
pipeline into the main optimization function pipeline. The intervening
passes are *really* uninteresting, and so this seems very unlikely to have
any effect other than a minor improvement to locality.

Note that there are still some failures in the test suite, but the
compiler doesn't crash or assert.

Differential Revision: https://reviews.llvm.org/D29114

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293241 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-27 00:50:21 +00:00
Chandler Carruth
e1c01ccd11 [PM] Enable the main loop pass pipelines with everything but
loop-unswitch in the main pipelines for the new PM.

All of these now work, and Clang built using this pipeline can build the
test suite and SPEC without hitting any asserts or ASan failures.

There are still some bugs hiding though -- 7 tests regress with the new
PM. I'm going to be investigating these, but it seems worthwhile to at
least get the pipelines in place so that others can play with them, and
they aren't completely broken.

Differential Revision: https://reviews.llvm.org/D29113

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293225 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 23:21:17 +00:00
Chandler Carruth
5585626232 [PM] Replace uses of AssertingVH from members of analysis results with
a lazy-asserting PoisoningVH.

AssertingVH is fundamentally incompatible with cache invalidation of
analysis results: the invalidation happens after the AssertingVH has
already fired. Instead, use a PoisoningVH that will assert if the
dangling handle is ever used, rather than when it is merely assigned or
destroyed.
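
A self-contained toy of the distinction (this is not LLVM's ValueHandle
machinery, just an illustration):

```
// An "asserting" handle would abort as soon as its value is deleted; a
// "poisoning" handle merely records the deletion and only asserts if the
// dangling handle is actually used afterwards.
#include <cassert>

struct PoisoningHandle {
  void *Value = nullptr;
  bool Poisoned = false;

  void notifyValueDeleted() { Poisoned = true; Value = nullptr; }  // tolerated
  void *get() const {                                              // use is not
    assert(!Poisoned && "use of a handle whose value was deleted");
    return Value;
  }
};
```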

This patch also removes all of the (numerous) doomed attempts to work
around this fundamental incompatibility. It is a pretty significant
simplification IMO.

The most interesting change is in the Inliner where we still do some
clearing because we don't want to rely on the coarse grained
invalidation strategy of the containing pass manager. However, I prefer
the approach that confines this logic to the cleanup phase of the
Inliner, and I think we could enhance the CGSCC analysis management
layer to make this even better in the future if desired.

The rest is straight cleanup.

I've also added a test for one of the harder cases to work around: when
a *module analysis* contains many AssertingVHes pointing at functions.

Differential Revision: https://reviews.llvm.org/D29006

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292928 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-24 12:55:57 +00:00
Chandler Carruth
0f21af6a69 [PM] Further fixes to the test case in r292863.
This should hopefully fix the MSVC failures remaining.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292887 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-24 05:30:41 +00:00
Davide Italiano
5e7aac4918 [PM] Try to make all three compilers happy when it comes to pretty printing.
Modeled after a similar change from Michael Kuperstein. Let's hope it
sticks this time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292872 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-24 01:45:53 +00:00
Davide Italiano
6fe089fa68 [PM] Flesh out the new pass manager LTO pipeline.
Differential Revision:  https://reviews.llvm.org/D28996

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292863 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-24 00:57:39 +00:00
Chandler Carruth
7576d48d1f [PM] Replace the hard invalidate in JumpThreading for LVI with correct
invalidation of deleted functions in GlobalDCE.

This was always testing a bug really triggered in GlobalDCE. Right now
we have analyses with asserting value handles into IR. As long as those
remain, when *deleting* an IR unit, we cannot wait for the normal
invalidation scheme to kick in even though it was designed to work
correctly in the face of these kinds of deletions. Instead, the pass
needs to directly handle invalidating the analysis results pointing at
that IR unit.

I've taught the Inliner about this, and this patch teaches GlobalDCE.
This will handle the asserting VH case in the existing test as well as
other issues of the same fundamental variety. I've moved the test into
the GlobalDCE directory and added a comment explaining what is going on.

Note that we cannot simply require LVI here because LVI is too lazy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292773 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-23 08:33:24 +00:00
Chandler Carruth
d894e4c5d5 [PM] Teach LVI to correctly invalidate itself when its dependencies
become unavailable.

The AssumptionCache is now immutable but it still needs to respond to
DomTree invalidation if it ended up caching one.

This lets us remove one of the explicit invalidates of LVI, but the
other one remains in order to avoid hitting a latent bug.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292769 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-23 06:35:12 +00:00
Chandler Carruth
4eca311419 [PM] Fix a really nasty bug introduced when adding PGO support to the
new PM's inliner.

The bug happens when we refine an SCC after having computed a proxy for
the FunctionAnalysisManager, and then proceed to compute fresh analyses
for functions in the *new* SCC using the manager provided by the old
SCC's proxy. *And* when we manage to mutate a function in this new SCC
in a way that invalidates those analyses. This can be... challenging to
reproduce.

I've managed to contrive a set of functions that trigger this and added
a test case, but it is a bit brittle. I've directly checked that the
passes run in the expected ways to help avoid the test just becoming
silently irrelevant.

This gets the new PM back to passing the LLVM test suite after the PGO
improvements landed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292757 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-22 10:34:01 +00:00
Chandler Carruth
3710416022 [PM] Teach the loop PM to run LoopSimplify prior to the loop pipeline.
This adds the last remaining core feature of the loop pass pipeline in
the new PM and removes the last of the really egregious hacks in the
LICM tests.

Sadly, this requires really substantial changes in the unittests in
order to provide and maintain simplified loops. This is particularly
hard because for example LoopSimplify will try to fold undef branches to
an ideal direction and simplify the loop accordingly.

Differential Revision: https://reviews.llvm.org/D28766

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292709 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-21 03:48:51 +00:00
Chandler Carruth
7937d38b4d [PM] Tidy up the spacing of this new, much nicer test file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292592 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 09:30:03 +00:00
Michael Kuperstein
f3bfad26ae [PM] Attempt to pacify windows bots.
Another difference in type pretty-printing, this one windows-specific.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292556 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 00:47:32 +00:00
Michael Kuperstein
92c056e8b3 [PM] Make default pipeline test for the new PM strict
Use CHECK-NEXT to verify that a test breaks whenever unexpected passes,
analyses, or invalidations show up in default pipelines. The test case
is constructed so that we don't expect to invalidate anything, and needs
to be kept that way.

The test is slightly less strict than we'd like because of differences
in type pretty-printing.

(Right now it does show some invalidations - all of those are intentional
and temporary.)

Differential Revision: https://reviews.llvm.org/D28887


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292536 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 23:39:28 +00:00
Michael Kuperstein
90513756ce Revert r292530 since it breaks buildbots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292534 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 23:22:55 +00:00
Michael Kuperstein
84788aed39 [PM] Make default pipeline test for the new PM strict
Use CHECK-NEXT to verify that a test breaks whenever unexpected passes,
analyses, or invalidations show up in default pipelines. The test case
is constructed so that we don't expect to invalidate anything, and needs
to be kept that way.

(Right now it does show some invalidations - all of those are intentional
and temporary.)

Differential Revision: https://reviews.llvm.org/D28887


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292530 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 22:55:46 +00:00
Michael Kuperstein
dcf46612cc [PM] Add LoopVectorize to the default module pipeline
LV no longer "requires" LCSSA and LoopSimplify, and instead forms
them internally as required. So, there's nothing preventing it from
being enabled.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292464 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 02:21:54 +00:00
Chandler Carruth
1c28b57b8b [PM] Teach the LoopPassManager to automatically canonicalize loops by
runnig LCSSA over them prior to running the loop pipeline.

This also teaches the loop PM to verify that LCSSA form is preserved
throughout the pipeline's run across the loop nest.

Most of the test updates just leverage this new functionality. One has to be
relaxed with the new PM as IVUsers is less powerful when it sees LCSSA input.

Differential Revision: https://reviews.llvm.org/D28743

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292241 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-17 19:18:12 +00:00
Chandler Carruth
d97514d79a [PM] Teach the optimization remarks emitter to handle invalidation
events.

This pass sometimes has a pointer to BlockFrequencyInfo, so it needs
custom invalidation logic. It is also otherwise immutable, so we can
substantially reduce the number of invalidations that happen.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292058 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-15 08:20:50 +00:00
Adam Nemet
6c3ca405e1 Move test of lazy BFI with ORE to a generic directory
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291862 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-13 00:16:23 +00:00
Chandler Carruth
d27a39a962 [PM] Rewrite the loop pass manager to use a worklist and augmented run
arguments much like the CGSCC pass manager.

This is a major redesign following the pattern established for the CGSCC layer to
support updates to the set of loops during the traversal of the loop nest and
to support invalidation of analyses.

An additional significant burden in the loop PM is that so many passes require
access to a large number of function analyses. Manually ensuring these are
cached, available, and preserved has been a long-standing burden in LLVM even
with the help of the automatic scheduling in the old pass manager. And it made
the new pass manager extremely unweildy. With this design, we can package the
common analyses up while in a function pass and make them immediately available
to all the loop passes. While in some cases this is unnecessary, I think the
simplicity afforded is worth it.
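
A minimal sketch of the packaging idea, with stand-in type names rather than
the real analysis results:

```
// Compute the function-level analyses once, bundle references to them, and
// hand the bundle to every loop pass so nothing has to be re-derived per loop.
struct DominatorTree;    // stand-ins for the real analysis result types
struct LoopInfo;
struct ScalarEvolution;

struct LoopAnalysisBundle {
  DominatorTree &DT;
  LoopInfo &LI;
  ScalarEvolution &SE;
};
```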

This does not (yet) address loop simplified form or LCSSA form, but those are
the next things on my radar and I have a clear plan for them.

While the patch is very large, most of it is either mechanically updating loop
passes to the new API or the new testing for the loop PM. The code for it is
reasonably compact.

I have not yet updated all of the loop passes to correctly leverage the update
mechanisms demonstrated in the unittests. I'll do that in follow-up patches
along with improved FileCheck tests for those passes that ensure things work in
more realistic scenarios. In many cases, there isn't much we can do with these
until the loop simplified form and LCSSA form are in place.

Differential Revision: https://reviews.llvm.org/D28292

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291651 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-11 06:23:21 +00:00
Chandler Carruth
86d536565b [PM] Introduce a devirtualization iteration layer for the new PM.
This is an orthogonal and separated layer instead of being embedded
inside the pass manager. While it adds a small amount of complexity, it
is fairly minimal and the composability and control seems worth the
cost.

The logic for this ends up being nicely isolated and targeted. It should
be easy to experiment with different iteration strategies wrapped around
the CGSCC bottom-up walk using this kind of facility.

The mechanism used to track devirtualization is the simplest one I came
up with. I think it handles most of the cases the existing iteration
machinery handles, but I haven't done a *very* in-depth analysis. It
does however match the basic intended semantics, and we can tweak or
tune its exact behavior incrementally as necessary. One thing that we
may want to revisit is freshly building the value handle set on each
iteration. While I don't think this will be a significant cost (it is
strictly fewer value handles but more churn of value handles than the old
call graph), it is conceivable that we'll want a somewhat more clever
tracking mechanism. My hope is to layer that on as a follow up patch
with data supporting any implementation complexity it adds.

This code also provides for a basic count heuristic: if the number of
indirect calls decreases and the number of direct calls increases for
a given function in the SCC, we assume devirtualization is responsible.
This matches the heuristics currently used in the legacy pass manager.
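
That heuristic is simple enough to sketch directly (the function name and
signature here are illustrative, not the actual interface):

```
// If indirect calls went down and direct calls went up for a function in the
// SCC, assume something was devirtualized and another iteration may pay off.
bool likelyDevirtualized(unsigned IndirectBefore, unsigned DirectBefore,
                         unsigned IndirectAfter, unsigned DirectAfter) {
  return IndirectAfter < IndirectBefore && DirectAfter > DirectBefore;
}
```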

Differential Revision: https://reviews.llvm.org/D23114

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290665 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-28 11:07:33 +00:00
Chandler Carruth
66dc5d6fe7 [PM] Actually commit the test update that was supposed to accompany
r290644. Sorry for this.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290646 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-28 02:31:24 +00:00
Chandler Carruth
b09bd97491 [PM] Teach BasicAA how to invalidate its result object.
This requires custom handling because BasicAA caches handles to other
analyses and so it needs to trigger indirect invalidation.
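
A toy sketch of what "indirect invalidation" means here (illustrative names
only, not the actual AnalysisManager interface):

```
// A result that caches a pointer to another analysis result must declare
// itself invalid whenever that dependency is invalidated, or it would keep
// using a stale handle.
struct DominatorTreeResult;  // stand-in for a cached dependency

struct ToyAAResult {
  DominatorTreeResult *CachedDT = nullptr;

  // Called by the analysis manager during invalidation.
  bool invalidate(bool DependencyWasInvalidated) {
    return DependencyWasInvalidated;  // true => discard and recompute this result
  }
};
```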

This fixes one of the common crashes when using the new PM in real
pipelines. I've also tweaked a regression test to check that we are at
least handling the most immediate case.

I'm going to work on restructuring this test in a follow-up commit, both to
make it scale better (rather than having everything in one file) and to check
more invalidation paths, but I wanted to get the basic bug fix in place first.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290603 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-27 10:30:45 +00:00