Commit Graph

3651 Commits

Author SHA1 Message Date
Nuno Lopes
fe353a0cbf Merge isKnownNonNull into isKnownNonZero
It now knows the tricks of both functions.
Also, fix a bug that considered allocas in a non-zero address space to be always non-null.

Differential Revision: https://reviews.llvm.org/D37628

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312869 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-09 18:23:11 +00:00
Sanjay Patel
193e898f75 [DivRemPairs] add a pass to optimize div/rem pairs (PR31028)
This is intended to be a superset of the functionality from D31037 (EarlyCSE) but implemented 
as an independent pass, so there's no stretching of scope and feature creep for an existing pass. 
I also proposed a weaker version of this for SimplifyCFG in D30910. And I initially had almost 
this same functionality as an addition to CGP in the motivating example of PR31028:
https://bugs.llvm.org/show_bug.cgi?id=31028

The advantage of positioning this ahead of SimplifyCFG in the pass pipeline is that it can allow 
more flattening. But it needs to be after passes (InstCombine) that could sink a div/rem and
undo the hoisting that is done here.

Decomposing remainder may allow removing some code from the backend (PPC and possibly others).
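
To make the decomposition concrete, here is a minimal stand-alone C++ sketch (divRemPair is a hypothetical helper, not code from the pass; it only illustrates the arithmetic identity the rewrite relies on):

    // Illustration of the div/rem identity: the remainder is recomputed from the
    // already-available quotient, so only one divide is executed.
    #include <cassert>
    #include <cstdio>

    static void divRemPair(unsigned a, unsigned b, unsigned &q, unsigned &r) {
      q = a / b;
      r = a - q * b; // decomposed remainder: reuses the quotient, no second divide
    }

    int main() {
      unsigned q, r;
      divRemPair(23, 5, q, r);
      assert(q == 23u / 5u && r == 23u % 5u);
      std::printf("q=%u r=%u\n", q, r);
      return 0;
    }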

Differential Revision: https://reviews.llvm.org/D37121 


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312862 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-09 13:38:18 +00:00
Guozhi Wei
2f597e6a3f [TargetTransformInfo] Remove the extra "default" in a switch where all enum values have been covered.
In the function TargetTransformInfo::getInstructionCost, all enum values in the switch statement have been covered, so the default is unnecessary and may cause an error with -Werror,-Wcovered-switch-default; remove it.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312834 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-08 23:34:28 +00:00
Guozhi Wei
19969b8b8f [TargetTransformInfo] Add a new public interface getInstructionCost
The current TargetTransformInfo supports a throughput cost model and a code size model, but sometimes different optimizations also need an instruction latency cost model. Hal suggested we need a single public interface to query the different costs of an instruction, so I proposed the following interface:

  enum TargetCostKind {
    TCK_RecipThroughput, ///< Reciprocal throughput.
    TCK_Latency,         ///< The latency of instruction.
    TCK_CodeSize         ///< Instruction code size.
  };

  int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const;

All clients should mainly use this function to query the cost of an instruction; the parameter <kind> specifies the desired cost model.
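
As a hedged usage sketch (it assumes the LLVM headers are available and that a TargetTransformInfo reference and an Instruction pointer are already in scope in the calling pass; combining the two costs at the end is purely illustrative):

    // Sketch only: query the same instruction under two of the cost models.
    #include "llvm/Analysis/TargetTransformInfo.h"

    int queryCosts(const llvm::TargetTransformInfo &TTI, const llvm::Instruction *I) {
      int Latency = TTI.getInstructionCost(I, llvm::TargetTransformInfo::TCK_Latency);
      int Size = TTI.getInstructionCost(I, llvm::TargetTransformInfo::TCK_CodeSize);
      return Latency + Size; // illustrative combination, not a real metric
    }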

This patch also provides a simple default implementation of getInstructionLatency.

The default getInstructionLatency provides latency numbers for only a small number of instruction classes, and those numbers are only reasonable for modern out-of-order processors. It can be extended in the following ways:

   - Add more detail to this function.
   - Add a getXXXLatency function and call it from here.
   - Implement a target-specific getInstructionLatency function.

Differential Revision: https://reviews.llvm.org/D37170



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312832 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-08 22:29:17 +00:00
Alexey Bataev
4fcc7e8528 [SLP] Support for horizontal min/max reduction.
The SLP vectorizer supports horizontal reductions for Add/FAdd binary
operations. This patch adds support for horizontal min/max reductions.
Function getReductionCost() is split into getArithmeticReductionCost() for
binary operation reductions and getMinMaxReductionCost() for min/max
reductions.
Patch fixes PR26956.

Differential revision: https://reviews.llvm.org/D27846

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312791 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-08 13:49:36 +00:00
Reid Kleckner
6619121492 Sink some IntrinsicInst.h and Intrinsics.h out of llvm/include
Many of these uses can get by with forward declarations. Hopefully this
speeds up compilation after adding a single intrinsic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312759 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-07 23:27:44 +00:00
Sanjay Patel
04894a4949 [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145

In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.

But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.

By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.

Differential Revision: https://reviews.llvm.org/D37427


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312591 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-05 23:13:13 +00:00
Eugene Zelenko
cecd8f18e2 [Analysis, Transforms] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312383 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-01 21:37:29 +00:00
Craig Topper
85fcd3487c [InstCombine][InstSimplify] Teach decomposeBitTestICmp to look through truncate instructions
This patch teaches decomposeBitTestICmp to look through truncate instructions on the input to the compare. If a truncate is found it will now return the pre-truncated Value and appropriately extend the APInt mask.

This allows some code to be removed from InstSimplify that was doing this functionality.

This allows InstCombine's bit test combining code to match a pre-truncate Value with the same Value appearing with an 'and' on another icmp. Or it allows us to combine a truncate to i16 and a truncate to i8. This also required removing the type check from the beginning of getMaskedTypeForICmpPair, but I believe that's ok because we still have to find two values from the input to each icmp that are equal before we'll do any transformation. So the type check was really just serving as an early out.
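
A rough source-level illustration (a hypothetical example, not taken from the patch): both comparisons below test bits of the same 32-bit value through different truncations, so once the truncates are looked through they can be combined into a single masked compare such as (x & 3) == 3.

    // Hypothetical pattern: bit tests through an i16 and an i8 truncation of the
    // same value, now combinable after looking through the truncates.
    bool bothLowBitsSet(unsigned x) {
      return ((unsigned short)x & 1) != 0 && ((unsigned char)x & 2) != 0;
    }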

There was one user of decomposeBitTestICmp that didn't want to look through truncates, so I've added a flag to prevent that behavior when necessary.

Differential Revision: https://reviews.llvm.org/D37158

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312382 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-01 21:27:34 +00:00
Davide Italiano
e5593530f5 [TTI] Fix getGEPCost() for geps with a single operand.
Previously this would sporadically crash as TargetType
was never initialized. We special-case the single-operand
case, returning earlier and trying to mimic the behaviour of
isLegalAddressingMode as closely as possible.

Differential Revision:  https://reviews.llvm.org/D37277

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312357 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-01 19:54:08 +00:00
Davide Italiano
7902ceb443 [TTI] Initialize a value to trigger a crash deterministically.
We expect the pointer to be initialized by the above loop, but
if that's not executed, the contents are garbage.
A fix for the crash will be committed immediately after.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312353 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-01 19:36:34 +00:00
Alexandre Isoard
3b88873b05 [SCEV] Add URem support to SCEV
In LLVM IR the following code:

    %r = urem <ty> %t, %b

is equivalent to

    %q = udiv <ty> %t, %b
    %s = mul nuw <ty> %q, %b
    %r = sub nuw <ty> %t, %s ; (t / b) * b + (t % b) = t

As UDiv, Mul and Sub are already supported by SCEV, URem can be implemented
with minimal effort using that relation:

    %r --> (-%b * (%t /u %b)) + %t

We implement two special cases:

  - if %b is 1, the result is always 0
  - if %b is a power-of-two, we produce a zext/trunc based expression instead

That is, the following code:

    %r = urem i32 %a, 65536

Produces:

    %r --> (zext i16 (trunc i32 %a to i16) to i32)

Note that while this helps get a tighter bound on the range analysis and the
known-bits analysis, it exposes some normalization shortcomings of SCEV:

    %div = udiv i32 %a, 65536
    %mul = mul i32 %div, 65536
    %rem = urem i32 %a, 65536
    %add = add i32 %mul, %rem

Will usually not be reduced.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312329 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-01 14:59:59 +00:00
Eugene Zelenko
046ca04445 [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes in affected files (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312289 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-31 21:56:16 +00:00
NAKAMURA Takumi
d60caed503 Prune whitespaces in blank lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311876 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-28 07:48:37 +00:00
Tobias Grosser
2050a0312d Model cache size and associativity in TargetTransformInfo
Summary:
We add the precise cache sizes and associativity for the following Intel
architectures:

  - Penryn
  - Nehalem
  - Westmere
  - Sandy Bridge
  - Ivy Bridge
  - Haswell
  - Broadwell
  - Skylake
  - Kabylake

For several months now, Polly has used a performance model for BLAS computations
that derives optimal cache and register tile sizes from cache and latency
information (based on ideas from "Analytical Modeling Is Enough for High-Performance BLIS" by Tze Meng Low et al., published in TOMS 2016).
While bootstrapping this model, these target values have been kept in Polly.
However, as our implementation is now rather mature, it seems time to teach
LLVM itself about cache sizes.

Interestingly, L1 and L2 cache sizes are pretty constant across
micro-architectures, hence a set of architecture-specific default values
seems like a good start. They can be expanded to more target-specific values
in case certain newer architectures require different values. For now,
a set of Intel architectures is provided.

Just as a little teaser, for a simple gemm kernel this model allows us to
improve performance from 1.2s to 0.27s. For gemm kernels with less optimal
memory layouts even larger speedups can be reported.
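
As a toy illustration of how a cache-size query can feed tile-size selection (this is not the Polly model; the 32 KiB L1 figure is an assumption typical of the listed Intel cores, and the sizing rule is deliberately simplistic):

    // Toy model: pick a square tile edge so three double-precision blocks
    // (A, B and C tiles) fit in an assumed 32 KiB L1 data cache.
    #include <cmath>
    #include <cstdio>

    int main() {
      const unsigned L1Bytes = 32 * 1024;        // assumed L1D size
      const unsigned ElemBytes = sizeof(double);
      const unsigned Tile =
          static_cast<unsigned>(std::sqrt(double(L1Bytes) / (3 * ElemBytes)));
      std::printf("tile edge ~ %u elements\n", Tile);
      return 0;
    }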

Reviewers: Meinersbur, bollu, singam-sanjay, hfinkel, gareevroman, fhahn, sebpop, efriedma, asb

Reviewed By: fhahn, asb

Subscribers: lsaba, asb, pollydev, llvm-commits

Differential Revision: https://reviews.llvm.org/D37051

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311647 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-24 09:46:25 +00:00
Sam Elliott
4467dc73bd [ORE] Remove Old Optimization Remark API
Summary: https://bugs.llvm.org/show_bug.cgi?id=33789

Reviewers: anemet

Reviewed By: anemet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36972

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311380 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-21 20:30:44 +00:00
Haicheng Wu
25ef265dc9 [InlineCost] Add cl::opt to allow full inline cost to be computed for debugging purposes.
Currently, the inline cost model will bail once the inline cost exceeds the
inline threshold in order to avoid unnecessary compile-time. However, when
debugging it is useful to compute the full cost, so this command line option
is added to override the default behavior.

I took over this work from Chad Rosier (mcrosier@codeaurora.org).
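
The kind of flag being described looks roughly like the sketch below (the option name and description are hypothetical; the actual flag added by the patch may differ):

    // Hypothetical cl::opt sketch, not the option from the patch itself.
    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    static cl::opt<bool> ComputeFullInlineCostExample(
        "example-inline-full-cost", cl::Hidden, cl::init(false),
        cl::desc("Keep computing the inline cost after the threshold is "
                 "exceeded (for debugging)"));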

Differential Revision: https://reviews.llvm.org/D35850

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311371 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-21 20:00:09 +00:00
Sam Elliott
b4d267277b Emit only A Single Opt Remark When Inlining
Summary:
This updates the Inliner to only add a single Optimization
Remark when Inlining, rather than an Analysis Remark and an
Optimization Remark.

Fixes https://bugs.llvm.org/show_bug.cgi?id=33786

Reviewers: anemet, davidxl, chandlerc

Reviewed By: anemet

Subscribers: haicheng, fhahn, mehdi_amini, dblaikie, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D36054

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311349 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-21 16:45:47 +00:00
Sam Elliott
fe94416753 Revert "Emit only A Single Opt Remark When Inlining"
Reverting due to clang build failure

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311274 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-20 06:55:10 +00:00
Sam Elliott
2999c9c71d Emit only A Single Opt Remark When Inlining
Summary:
This updates the Inliner to only add a single Optimization
Remark when Inlining, rather than an Analysis Remark and an
Optimization Remark.

Fixes https://bugs.llvm.org/show_bug.cgi?id=33786

Reviewers: anemet, davidxl, chandlerc

Reviewed By: anemet

Subscribers: haicheng, fhahn, mehdi_amini, dblaikie, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D36054

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311273 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-20 06:43:34 +00:00
Victor Leschuk
61fd1c077c Set init value for ScalarEvolution::BackedgeTakenInfo::MaxOrZero
Otherwise it can be used uninitialized in the move ctor.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311262 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-19 21:05:08 +00:00
Eugene Zelenko
89688ce180 [Analysis] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311212 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-18 23:51:26 +00:00
Sanjay Patel
47b90a07c7 fix typos in comments; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311193 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-18 20:27:47 +00:00
Eugene Zelenko
93bb413a33 [Analysis] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311048 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-16 22:07:40 +00:00
Craig Topper
fc52a9c1a3 Recommit r310869, "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in the InstSimplify"
This recommits r310869, with the moved files and no extra changes.

Original commit message:

This addresses a fixme in InstSimplify about using decomposeBitTest. This also fixes InstSimplify to handle ugt and ult compares too.

I've modified the interface a little to return only the APInt version of the mask that InstSimplify needs. InstCombine now has a small wrapper routine to create a Constant out of it. I've also dropped the returning of 0 since InstSimplify doesn't need that. So InstCombine creates a zero constant itself.

I also had to make decomposeBitTest support vectors since InstSimplify needs that.

As InstSimplify can't use something from the Transforms library, I've moved the CmpInstAnalysis code to the Analysis library.

Differential Revision: https://reviews.llvm.org/D36593

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310889 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-14 21:39:51 +00:00
Craig Topper
15209888f5 Revert r310870 "[InstCombine][InstSimplify] 'git add' two files that moved in r310869."
An extra change crept in here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310872 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-14 19:09:28 +00:00
Craig Topper
fcc217010f [InstCombine][InstSimplify] 'git add' two files that moved in r310869.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310870 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-14 19:01:32 +00:00
Sam Parker
66f113a5b0 [LoopUnroll] Enable option to peel remainder loop
On some targets, the penalty of executing runtime unrolling checks
and then not the unrolled loop can be significantly detrimental to
performance. This results in the need to be more conservative with
the unroll count; keeping a trip count of 2 reduces the overhead as
well as increasing the chance of the unrolled body being executed. But
being conservative leaves performance gains on the table.

This patch enables the unrolling of the remainder loop introduced by
runtime unrolling. This can help reduce the overhead of misunrolled
loops because the cost of non-taken branches is much less than the
cost of the backedge that would normally be executed in the remainder
loop. This allows larger unroll factors to be used without suffering
performance losses with smaller iteration counts.
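
In source form the transformation looks roughly like the sketch below (an illustrative hand-written example, not compiler output): the leftover iterations become straight-line code whose not-taken branches are cheap compared with a remainder loop's back edge.

    // Illustrative shape of runtime unrolling by 4 with a fully peeled remainder.
    void scale(float *a, int n, float k) {
      int i = 0;
      for (; i + 3 < n; i += 4) {              // unrolled main loop
        a[i] *= k; a[i + 1] *= k; a[i + 2] *= k; a[i + 3] *= k;
      }
      if (i < n) a[i++] *= k;                  // peeled remainder: no back edge
      if (i < n) a[i++] *= k;
      if (i < n) a[i] *= k;
    }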

Differential Revision: https://reviews.llvm.org/D36309


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310824 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-14 09:25:26 +00:00
Eugene Zelenko
c5e4ac86db [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310766 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-11 21:30:02 +00:00
Chandler Carruth
83bfb55f3b [PM] Switch the CGSCC debug messages to use the standard LLVM debug
printing techniques with a DEBUG_TYPE controlling them.

It was a mistake to start re-purposing the pass manager `DebugLogging`
variable for generic debug printing -- those logs are intended to be
very minimal and primarily used for testing. More detailed and
comprehensive logging doesn't make sense there (it would only make for
brittle tests).

Moreover, we kept forgetting to propagate the `DebugLogging` variable to
various places making it also ineffective and/or unavailable. Switching
to `DEBUG_TYPE` makes this a non-issue.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310695 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-11 05:47:13 +00:00
Nuno Lopes
cf10b736fb CFLAA: return MustAlias when pointers p, q are equal, i.e.,
must-alias(p, sz_p, p, sz_q)  irrespective of access sizes sz_p, sz_q

As discussed a couple of weeks ago on the ML.
This makes the behavior consistent with that of BasicAA.
AA clients already check the obj size themselves and may not require the
obj size to match exactly the access size (e.g., in case of store forwarding)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310495 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-09 17:02:18 +00:00
Jonas Paulsson
b7123745ed [LSR / TTI / SystemZ] Eliminate TargetTransformInfo::isFoldableMemAccess()
isLegalAddressingMode() has recently gained the extra optional Instruction*
parameter, and therefore it can now do the job that previously only
isFoldableMemAccess() could do.

The SystemZ implementation of isLegalAddressingMode() has gained the
functionality of checking for offsets, which used to be done with
isFoldableMemAccess().

The isFoldableMemAccess() hook has been removed everywhere.

Review: Quentin Colombet, Ulrich Weigand
https://reviews.llvm.org/D35933

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310463 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-09 11:28:01 +00:00
Chandler Carruth
affd12878e [LCG] Switch one of the update methods for the LazyCallGraph to support
limited batch updates.

Specifically, allow removing multiple reference edges starting from
a common source node. There are a few constraints that play into
supporting this form of batching:

1) The way updates occur during the CGSCC walk, about the most we can
   functionally batch together are those with a common source node. This
   also makes the batching simpler to implement, so it seems
   a worthwhile restriction.
2) The far and away hottest function for large C++ files I measured
   (generated code for protocol buffers) showed a huge amount of time
   was spent removing ref edges specifically, so it seems worth focusing
   there.
3) The algorithm for removing ref edges is very amenable to this
   restricted batching. There are just both API and implementation
   special casing for the non-batch case that gets in the way. Once
   removed, supporting batches is nearly trivial.

This does modify the API in an interesting way -- now, we only preserve
the target RefSCC when the RefSCC structure is unchanged. In the face of
any splits, we create brand new RefSCC objects. However, all of the
users that I could find were OK with it. Only the unittest needed
interesting updates here.

How much does batching these updates help? I instrumented the compiler
when run over a very large generated source file for a protocol buffer
and found that the majority of updates are intrinsically updating one
function at a time. However, nearly 40% of the total ref edges removed
are removed as part of a batch of removals greater than one, so these
are the cases batching can help with.

When compiling the IR for this file with 'opt' and 'O3', this patch
reduces the total time by 8-9%.

Differential Revision: https://reviews.llvm.org/D36352

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310450 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-09 09:05:27 +00:00
Matt Arsenault
aa4edb1422 Fix typo in comment
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310259 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-07 14:58:43 +00:00
Chandler Carruth
3ae1e28f67 [LCG] Completely remove the parent set and leaf tracking for RefSCCs.
After the previous series of patches, this is now trivial and deletes
a pretty astonishing amount of complexity. This has been a long time
coming, as the move toward a PO sequence of RefSCCs started eroding the
underlying use cases for this half of the data structure.

Among the biggest advantages here is that now there aren't two
independent data structures that need to stay in sync.

Some of my profiling has also indicated that updating the parent sets
was among the most expensive parts of the lazy call graph. Eliminating
it wholesale is likely to be a nice win in terms of compile time.

Last but not least, I had discussed with some folks previously keeping
it around for asserts and other correctness checking, but once the
fundamentals of the parent and child checking were implemented without
the parent sets, their value in correctness checking was tiny and nowhere
near worth the cost of the complexity required to keep everything
up-to-date.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310171 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-05 07:37:00 +00:00
Chandler Carruth
8d16ccbe0f [LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and
isDescendantOf methods on RefSCCs in terms of the forward edges rather
than the parent sets.

This is technically slower, but probably not interestingly slower, and
all of these routines were already so expensive that they're guarded
behind both !NDEBUG and EXPENSIVE_CHECKS.

This removes another non-critical usage of parent sets.

I've also added some comments to try and help clarify to any potential
users the costs of these routines. They're mostly useful for debugging,
asserts, or other queries.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310170 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-05 06:24:09 +00:00
Chandler Carruth
95f263eb86 [LCG] Add the concept of a "dead" node and use it to avoid a complex
walk over the parent set.

When removing a single function from the call graph, we previously would
walk the entire RefSCC's parent set and then walk every outgoing edge
just to find the ones to remove. In addition to this being quite high
complexity in theory, it is also the last fundamental use of the parent
sets.

With this change, when we remove a function we transform the node
containing it to be recognizably "dead" and then teach the edge
iterators to recognize edges to such nodes and skip them the same way
they skip null edges.

We can't move fully to using "dead" nodes -- when disconnecting two live
nodes we need to null out the edge. But the complexity this adds to the
edge sequence isn't too bad and the simplification of lazily handling
this seems like a significant win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310169 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-05 05:47:37 +00:00
Chandler Carruth
d2f3f4a06a [LCG] Replace an implicit bool operator with a named function. (NFC)
The definition of 'false' here was already pretty vague and debatable,
and I'm about to add another potential 'false' that would actually make
much more sense in a bool operator. Especially given how rarely this is
used, a nicely named method seems better.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310165 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-05 04:04:06 +00:00
Teresa Johnson
dad922aba2 Use profile summary to disable peeling for huge working sets
Summary:
Detect when the working set size of a profiled application is huge,
by comparing the number of counts required to reach the hot percentile
in the profile summary to a large threshold*.

When the working set size is determined to be huge, disable peeling
to avoid bloating the working set further.

*Note that the selected threshold (15K) is significantly larger than the
largest working set value in SPEC cpu2006 (which is gcc at around 11K).

Reviewers: davidxl

Subscribers: mehdi_amini, mzolotukhin, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D36288

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@310005 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03 23:42:58 +00:00
Easwaran Raman
2bbed02c0b [Inliner] Increase threshold for hot callsites without PGO.
Summary:
This increases the inlining threshold for hot callsites. Hotness is
defined in terms of block frequency of the callsite relative to the
caller's entry block's frequency. Since this requires BFI in the
inliner, this only affects the new PM pipeline. This is enabled by
default at -O3.

This improves the performance of some internal benchmarks. Notably, an
internal benchmark for Gipfeli compression
(https://github.com/google/gipfeli) improves by ~7%. Povray in SPEC2006
improves by ~2.5%. I am running more experiments and will update the
thread if other benchmarks show improvement/regression.

In terms of text size, the LLVM test-suite shows a 1.22% text size
increase. Diving into the results, 13 of the benchmarks in the
test-suite increase by > 10%. Most of these are small, but
Adobe-C++/loop_unroll (17.6% increase) and tramp3d (20.7% size increase)
have >250K text size. On a large application, the text size increases by
2%.

Reviewers: chandlerc, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36199

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309994 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03 22:23:33 +00:00
Max Kazantsev
511c1a306c [SCEV] Re-enable "Cache results of computeExitLimit"
The patch rL309080 was reverted because it did not clean up the cache on "forgetValue"
method call. This patch re-enables this change, adds the missing check and introduces
two new unit tests that make sure that the cache is cleaned properly.

Differential Revision: https://reviews.llvm.org/D36087


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309925 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03 08:41:30 +00:00
Chandler Carruth
759393be2d [PM] Fix a bug where through CGSCC iteration we can get
infinite-inlining across multiple runs of the inliner by keeping a tiny
history of internal-to-SCC inlining decisions.

This is still a bit gross, but I don't yet have any fundamentally better
ideas and numerous people are blocked on this to use new PM and ThinLTO
together.

The core of the idea is to detect when we are about to do an inline that
has a chance of re-splitting an SCC which we have split before with
a similar inlining step. That is a critical component in the inlining
forming a cycle and so far detects all of the various cyclic patterns
I can come up with as well as the original real-world test case (which
comes from a ThinLTO build of libunwind).

I've added some tests that I think really demonstrate what is going on
here. They are essentially state machines that march the inliner through
various steps of a cycle and check that we stop when the cycle is closed
and that we actually did do inlining to form that cycle.

A lot of thanks go to Eric Christopher and Sanjoy Das for the help
understanding this issue and improving the test cases.

The biggest "yuck" here is the layering issue -- the CGSCC pass manager
is providing somewhat magical state to the inliner for it to use to make
itself converge. This isn't great, but I don't honestly have a lot of
better ideas yet and at least seems nicely isolated.

I have tested this patch, and it doesn't block *any* inlining on the
entire LLVM test suite and SPEC, so it seems sufficiently narrowly
targeted to the issue at hand.

We have come up with hypothetical issues that this patch doesn't cover,
but so far none of them are practical and we don't have a viable
solution yet that covers the hypothetical stuff, so proceeding here in
the interim. Definitely an area that we will be back and revisiting in
the future.

Differential Revision: https://reviews.llvm.org/D36188

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309784 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-02 02:09:22 +00:00
Chad Rosier
0d8a77755a [Value Tracking] Default argument to true and rename accordingly. NFC.
IMHO this is a bit more readable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309739 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-01 20:18:54 +00:00
Hiroshi Inoue
eeca49d1ac [StackColoring] Update AliasAnalysis information in stack coloring pass
The stack coloring pass needs to maintain AliasAnalysis information when merging stack slots of different types.
Actually, there is a FIXME comment in StackColoring.cpp

// FIXME: In order to enable the use of TBAA when using AA in CodeGen,
// we'll also need to update the TBAA nodes in MMOs with values
// derived from the merged allocas.

But, TBAA has been already enabled in CodeGen without fixing this pass.
The incorrect TBAA metadata results in recent failures in bootstrap test on ppc64le (PR33928) by allowing unsafe instruction scheduling.
Although we observed the problem on ppc64le, this is a platform neutral issue.

This patch makes the stack coloring pass maintain AliasAnalysis information when merging multiple stack slots.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309651 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-01 03:32:15 +00:00
Alina Sbirlea
211f7eebce Allow None as a MemoryLocation to getModRefInfo
Summary:
Adding part of the changes in D30369 (needed to make progress):
Current patch updates AliasAnalysis and MemoryLocation, but does _not_ clean up MemorySSA.

Original summary from D30369, by dberlin:
Currently, we have instructions which affect memory but have no memory
location. If you call, for example, MemoryLocation::get on a fence,
it asserts. This means things specifically have to avoid that. It
also means we end up with a copy of each API, one taking a memory
location, one not.

This starts to fix that.

We add MemoryLocation::getOrNone as a new call, and reimplement the
old asserting version in terms of it.

We make MemoryLocation optional in the (Instruction, MemoryLocation)
version of getModRefInfo, and kill the old one argument version in
favor of passing None (it had one caller). Now both can handle fences
because you can just use MemoryLocation::getOrNone on an instruction
and it will return a correct answer.

We use all this to clean up part of MemorySSA that had to handle this difference.

Note that literally every actual getModRefInfo interface we have could be made private and replaced with:

getModRefInfo(Instruction, Optional<MemoryLocation>)
and
getModRefInfo(Instruction, Optional<MemoryLocation>, Instruction, Optional<MemoryLocation>)

and delegating to the right ones, if we wanted to.

I have not attempted to do this yet.
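
A hedged sketch of the new query shape described above (it assumes the LLVM headers are available; the helper function is illustrative, not code from the patch):

    // Sketch: fences no longer need special casing, because getOrNone returns
    // None for instructions that have no memory location.
    #include "llvm/ADT/Optional.h"
    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/Analysis/MemoryLocation.h"
    #include "llvm/IR/Instruction.h"

    llvm::ModRefInfo queryModRef(llvm::AAResults &AA, const llvm::Instruction *I) {
      llvm::Optional<llvm::MemoryLocation> Loc = llvm::MemoryLocation::getOrNone(I);
      return AA.getModRefInfo(I, Loc);
    }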

Reviewers: dberlin, davide, dblaikie

Subscribers: sanjoy, hfinkel, chandlerc, llvm-commits

Differential Revision: https://reviews.llvm.org/D35441

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309641 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-01 00:28:29 +00:00
Alexey Bataev
5a34abfe3e [Cost] Rename getReductionCost() to getArithmeticReductionCost(), NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309563 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-31 14:19:32 +00:00
Chad Rosier
c252763623 [ValueTracking] Remove a number of unused arguments. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309385 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-28 14:39:06 +00:00
Sanjoy Das
e47ec8cfbd Revert "[SCEV] Cache results of computeExitLimit"
This reverts commit r309080.  The patch needs to clear out the
ScalarEvolution::ExitLimits cache in forgetMemoizedResults.

I've replied on the commit thread for the patch with more details.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309357 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-28 03:25:07 +00:00
Dehao Chen
87452e81d0 Separate the ICP total threshold and remaining threshold.
Summary: In the current implementation, isPromotionProfitable only checks if the call count to a direct target is no less than a certain percentage threshold of the remaining call counts that have not been promoted. This causes code size problems when the target count is small but greater than a large portion of the remaining counts. E.g. target1 takes 99.9% while target2 takes 0.1%; both targets will be promoted and inlined, making the function size too large, which potentially prevents it from further inlining into its callers. This patch adds another percentage threshold against the total indirect call count: the target count needs to be no less than both thresholds in order to be promoted speculatively.
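
A hedged sketch of the two-threshold check being described (the helper and the percentage values are hypothetical, chosen only for illustration):

    // Hypothetical illustration: a target is promoted only if its count clears
    // both the remaining-count threshold and the new total-count threshold.
    #include <cstdint>

    bool worthPromoting(uint64_t TargetCount, uint64_t RemainingCount,
                        uint64_t TotalCount) {
      const unsigned RemainingPercent = 30; // illustrative values, not the real defaults
      const unsigned TotalPercent = 5;
      return TargetCount * 100 >= RemainingCount * RemainingPercent &&
             TargetCount * 100 >= TotalCount * TotalPercent;
    }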

Reviewers: davidxl, tejohnson

Reviewed By: tejohnson

Subscribers: sanjoy, llvm-commits

Differential Revision: https://reviews.llvm.org/D35962

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309345 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-28 01:02:54 +00:00
Jakub Kuderski
a65cddf557 [Dominators] Change Roots type to SmallVector
Summary: We can use the template parameter `IsPostDom` to pick an appropriate SmallVector size to store DomTree roots for dominators and postdominators. Before, the code would always allocate memory with `std::vector`.
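
A minimal sketch of the idea (the struct name and inline sizes are illustrative, not the actual DominatorTreeBase code): the IsPostDom template parameter selects the inline capacity, since dominator trees have a single root while post-dominator trees may have several.

    // Illustrative only: inline root storage sized by the IsPostDom parameter.
    #include "llvm/ADT/SmallVector.h"

    template <typename NodeT, bool IsPostDom>
    struct RootsExample {
      llvm::SmallVector<NodeT *, IsPostDom ? 4 : 1> Roots;
    };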

Reviewers: dberlin, davide, sanjoy, grosser

Reviewed By: grosser

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D35636

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309148 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-26 18:27:39 +00:00