//===- llvm/Analysis/ScalarEvolution.h - Scalar Evolution -------*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// The ScalarEvolution class is an LLVM pass which can be used to analyze and
// categorize scalar expressions in loops. It specializes in recognizing
// general induction variables, representing them with the abstract and opaque
// SCEV class. Given this analysis, trip counts of loops and other important
// properties can be obtained.
//
// This analysis is primarily useful for induction variable substitution and
// strength reduction.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_ANALYSIS_SCALAREVOLUTION_H
#define LLVM_ANALYSIS_SCALAREVOLUTION_H
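As a concrete illustration of what the overview above means by "general induction variables" and "trip counts": an affine add recurrence, written {Start,+,Step} in SCEV notation, takes the value Start + i*Step on the i-th iteration. The sketch below is illustrative only (plain arithmetic, not the LLVM API); the function names are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only, not LLVM API: the value of an affine add recurrence
// {Start,+,Step} on the (0-based) i-th loop iteration.
uint64_t addRecAt(uint64_t Start, uint64_t Step, uint64_t I) {
  return Start + I * Step; // wraps modulo 2^64, like machine arithmetic
}

// For a loop "for (i = Start; i != End; i += Step)" where Step evenly
// divides End - Start, the trip count is (End - Start) / Step.
uint64_t tripCount(uint64_t Start, uint64_t End, uint64_t Step) {
  return (End - Start) / Step;
}
```

ScalarEvolution derives the same facts symbolically over IR rather than over concrete integers.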

#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/FoldingSet.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Operator.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/Allocator.h"
#include "llvm/Support/DataTypes.h"

namespace llvm {

class APInt;
class AssumptionCache;
class Constant;
class ConstantInt;
class DominatorTree;
class Type;
class ScalarEvolution;
class DataLayout;
class TargetLibraryInfo;
class LLVMContext;
class Operator;
class SCEV;
class SCEVAddRecExpr;
class SCEVConstant;
class SCEVExpander;
class SCEVPredicate;
class SCEVUnknown;
class Function;

template <> struct FoldingSetTrait<SCEV>;
template <> struct FoldingSetTrait<SCEVPredicate>;

/// This class represents an analyzed expression in the program. These are
/// opaque objects that the client is not allowed to do much with directly.
///
class SCEV : public FoldingSetNode {
  friend struct FoldingSetTrait<SCEV>;

  /// A reference to an Interned FoldingSetNodeID for this node. The
  /// ScalarEvolution's BumpPtrAllocator holds the data.
  FoldingSetNodeIDRef FastID;

  // The SCEV baseclass this node corresponds to.
  const unsigned short SCEVType;

protected:
  /// This field is initialized to zero and may be used in subclasses to store
  /// miscellaneous information.
  unsigned short SubclassData;

private:
  SCEV(const SCEV &) = delete;
  void operator=(const SCEV &) = delete;

public:
  /// NoWrapFlags are bitfield indices into SubclassData.
  ///
  /// Add and Mul expressions may have no-unsigned-wrap <NUW> or
  /// no-signed-wrap <NSW> properties, which are derived from the IR
  /// operator. NSW is a misnomer that we use to mean no signed overflow or
  /// underflow.
  ///
  /// AddRec expressions may have a no-self-wraparound <NW> property if, in
  /// the integer domain, abs(step) * max-iteration(loop) <=
  /// unsigned-max(bitwidth). This means that the recurrence will never reach
  /// its start value if the step is non-zero. Computing the same value on
  /// each iteration is not considered wrapping, and recurrences with step = 0
  /// are trivially <NW>. <NW> is independent of the sign of step and the
  /// value the add recurrence starts with.
  ///
  /// Note that NUW and NSW are also valid properties of a recurrence, and
  /// either implies NW. For convenience, NW will be set for a recurrence
  /// whenever either NUW or NSW are set.
  enum NoWrapFlags {
    FlagAnyWrap = 0,    // No guarantee.
    FlagNW = (1 << 0),  // No self-wrap.
    FlagNUW = (1 << 1), // No unsigned wrap.
    FlagNSW = (1 << 2), // No signed wrap.
    NoWrapMask = (1 << 3) - 1
  };
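The flags form a small bitfield, and the doc comment notes that NUW or NSW each imply NW. The standalone sketch below (hypothetical helper, enum values copied from the declaration; not LLVM API) shows how that implication composes at the bit level.

```cpp
#include <cassert>

// Hypothetical mirror of SCEV::NoWrapFlags for illustration; the values
// match the enum in the header, but this is not the LLVM API.
enum ToyNoWrapFlags {
  ToyFlagAnyWrap = 0,
  ToyFlagNW = (1 << 0),
  ToyFlagNUW = (1 << 1),
  ToyFlagNSW = (1 << 2),
  ToyNoWrapMask = (1 << 3) - 1
};

// Fold in the implied no-self-wrap bit: NUW or NSW each guarantee NW.
int withImpliedNW(int Flags) {
  if (Flags & (ToyFlagNUW | ToyFlagNSW))
    Flags |= ToyFlagNW;
  return Flags & ToyNoWrapMask;
}
```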

  explicit SCEV(const FoldingSetNodeIDRef ID, unsigned SCEVTy)
      : FastID(ID), SCEVType(SCEVTy), SubclassData(0) {}

  unsigned getSCEVType() const { return SCEVType; }

  /// Return the LLVM type of this SCEV expression.
  ///
  Type *getType() const;

  /// Return true if the expression is a constant zero.
  ///
  bool isZero() const;

  /// Return true if the expression is a constant one.
  ///
  bool isOne() const;

  /// Return true if the expression is a constant all-ones value.
  ///
  bool isAllOnesValue() const;

  /// Return true if the specified scev is negated, but not a constant.
  bool isNonConstantNegative() const;

  /// Print out the internal representation of this scalar to the specified
  /// stream. This should really only be used for debugging purposes.
  void print(raw_ostream &OS) const;

  /// This method is used for debugging.
  ///
  void dump() const;
};

// Specialize FoldingSetTrait for SCEV to avoid needing to compute
// temporary FoldingSetNodeID values.
template <> struct FoldingSetTrait<SCEV> : DefaultFoldingSetTrait<SCEV> {
  static void Profile(const SCEV &X, FoldingSetNodeID &ID) { ID = X.FastID; }
  static bool Equals(const SCEV &X, const FoldingSetNodeID &ID, unsigned IDHash,
                     FoldingSetNodeID &TempID) {
    return ID == X.FastID;
  }
  static unsigned ComputeHash(const SCEV &X, FoldingSetNodeID &TempID) {
    return X.FastID.ComputeHash();
  }
};

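The point of the specialization above is that each SCEV already carries the FoldingSetNodeIDRef it was interned under, so profiling, equality, and hashing can reuse it instead of rebuilding a temporary ID. A toy version of that caching idea, under hypothetical names and with a plain string standing in for the interned ID:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Toy illustration (not LLVM API): a node that remembers its interned key
// and precomputed hash, so lookups never recompute a temporary profile.
struct InternedNode {
  std::string Key;        // stands in for the cached FoldingSetNodeIDRef
  std::size_t CachedHash; // computed once, at interning time
  explicit InternedNode(std::string K)
      : Key(std::move(K)), CachedHash(std::hash<std::string>{}(Key)) {}
};

// Analogue of ComputeHash: just return the cached value.
std::size_t computeHash(const InternedNode &N) { return N.CachedHash; }

// Analogue of Equals: compare against the cached key directly.
bool equals(const InternedNode &N, const std::string &Probe) {
  return N.Key == Probe;
}
```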
inline raw_ostream &operator<<(raw_ostream &OS, const SCEV &S) {
  S.print(OS);
  return OS;
}

/// An object of this class is returned by queries that could not be answered.
/// For example, if you ask for the number of iterations of a linked-list
/// traversal loop, you will get one of these. None of the standard SCEV
/// operations are valid on this class, it is just a marker.
struct SCEVCouldNotCompute : public SCEV {
  SCEVCouldNotCompute();

  /// Methods for support type inquiry through isa, cast, and dyn_cast:
  static bool classof(const SCEV *S);
};

/// This class represents an assumption made using SCEV expressions which can
/// be checked at run-time.
class SCEVPredicate : public FoldingSetNode {
  friend struct FoldingSetTrait<SCEVPredicate>;

  /// A reference to an Interned FoldingSetNodeID for this node. The
  /// ScalarEvolution's BumpPtrAllocator holds the data.
  FoldingSetNodeIDRef FastID;

public:
  enum SCEVPredicateKind { P_Union, P_Equal, P_Wrap };

protected:
  SCEVPredicateKind Kind;
  ~SCEVPredicate() = default;
  SCEVPredicate(const SCEVPredicate &) = default;
  SCEVPredicate &operator=(const SCEVPredicate &) = default;

public:
  SCEVPredicate(const FoldingSetNodeIDRef ID, SCEVPredicateKind Kind);

  SCEVPredicateKind getKind() const { return Kind; }

  /// Returns the estimated complexity of this predicate. This is roughly
  /// measured in the number of run-time checks required.
  virtual unsigned getComplexity() const { return 1; }

  /// Returns true if the predicate is always true. This means that no
  /// assumptions were made and nothing needs to be checked at run-time.
  virtual bool isAlwaysTrue() const = 0;

  /// Returns true if this predicate implies \p N.
  virtual bool implies(const SCEVPredicate *N) const = 0;

  /// Prints a textual representation of this predicate with an indentation of
  /// \p Depth.
  virtual void print(raw_ostream &OS, unsigned Depth = 0) const = 0;

  /// Returns the SCEV to which this predicate applies, or nullptr if this is
  /// a SCEVUnionPredicate.
  virtual const SCEV *getExpr() const = 0;
};
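To make the isAlwaysTrue/getComplexity contract concrete: a union of assumptions (the P_Union kind) is trivially true only when every member is, and its cost is roughly the sum of its members' run-time checks. A toy sketch under hypothetical names, not the LLVM API:

```cpp
#include <cassert>
#include <vector>

// Toy conjunction of predicates, illustrating the contract documented on
// SCEVPredicate above; not the real SCEVUnionPredicate.
struct ToyUnionPred {
  // One entry per member predicate: whether that member is always true.
  std::vector<bool> MemberAlwaysTrue;

  // A conjunction is always true only if every member is.
  bool isAlwaysTrue() const {
    for (bool B : MemberAlwaysTrue)
      if (!B)
        return false;
    return true;
  }

  // Complexity is roughly one run-time check per member.
  unsigned getComplexity() const {
    return static_cast<unsigned>(MemberAlwaysTrue.size());
  }
};
```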
|
|
|
|
2016-09-25 23:11:51 +00:00
|
|
|
inline raw_ostream &operator<<(raw_ostream &OS, const SCEVPredicate &P) {
|
|
|
|
P.print(OS);
|
|
|
|
return OS;
|
|
|
|
}
|
[SCEV][LV] Add SCEV Predicates and use them to re-implement stride versioning
Summary:
SCEV Predicates represent conditions that typically cannot be derived from
static analysis, but can be used to reduce SCEV expressions to forms which are
usable for different optimizers.
ScalarEvolution now has the rewriteUsingPredicate method which can simplify a
SCEV expression using a SCEVPredicateSet. The normal workflow of a pass using
SCEVPredicates would be to hold a SCEVPredicateSet and every time assumptions
need to be made a new SCEV Predicate would be created and added to the set.
Each time after calling getSCEV, the user will call the rewriteUsingPredicate
method.
We add two types of predicates
SCEVPredicateSet - implements a set of predicates
SCEVEqualPredicate - tests for equality between two SCEV expressions
We use the SCEVEqualPredicate to re-implement stride versioning. Every time we
version a stride, we will add a SCEVEqualPredicate to the context.
Instead of adding specific stride checks, LoopVectorize now adds a more
generic SCEV check.
We only need to add support for this in the LoopVectorizer since this is the
only pass that will do stride versioning.
Reviewers: mzolotukhin, anemet, hfinkel, sanjoy
Subscribers: sanjoy, hfinkel, rengolin, jmolloy, llvm-commits
Differential Revision: http://reviews.llvm.org/D13595
llvm-svn: 251800
2015-11-02 14:41:02 +00:00
|
|
|
|
2016-09-25 23:11:51 +00:00
|
|
|
// Specialize FoldingSetTrait for SCEVPredicate to avoid needing to compute
// temporary FoldingSetNodeID values.
template <>
struct FoldingSetTrait<SCEVPredicate> : DefaultFoldingSetTrait<SCEVPredicate> {
  static void Profile(const SCEVPredicate &X, FoldingSetNodeID &ID) {
    ID = X.FastID;
  }

  static bool Equals(const SCEVPredicate &X, const FoldingSetNodeID &ID,
                     unsigned IDHash, FoldingSetNodeID &TempID) {
    return ID == X.FastID;
  }

  static unsigned ComputeHash(const SCEVPredicate &X,
                              FoldingSetNodeID &TempID) {
    return X.FastID.ComputeHash();
  }
};

/// This class represents an assumption that two SCEV expressions are equal,
/// and this can be checked at run-time. We assume that the left hand side is
/// a SCEVUnknown and the right hand side a constant.
class SCEVEqualPredicate final : public SCEVPredicate {
  /// We assume that LHS == RHS, where LHS is a SCEVUnknown and RHS a
  /// constant.
  const SCEVUnknown *LHS;
  const SCEVConstant *RHS;

public:
  SCEVEqualPredicate(const FoldingSetNodeIDRef ID, const SCEVUnknown *LHS,
                     const SCEVConstant *RHS);

  /// Implementation of the SCEVPredicate interface
  bool implies(const SCEVPredicate *N) const override;
  void print(raw_ostream &OS, unsigned Depth = 0) const override;
  bool isAlwaysTrue() const override;
  const SCEV *getExpr() const override;

  /// Returns the left hand side of the equality.
  const SCEVUnknown *getLHS() const { return LHS; }

  /// Returns the right hand side of the equality.
  const SCEVConstant *getRHS() const { return RHS; }

  /// Methods for support type inquiry through isa, cast, and dyn_cast:
  static inline bool classof(const SCEVPredicate *P) {
    return P->getKind() == P_Equal;
  }
};

/// This class represents an assumption made on an AddRec expression. Given an
/// affine AddRec expression {a,+,b}, we assume that it has the nssw or nusw
/// flags (defined below) in the first X iterations of the loop, where X is a
/// SCEV expression returned by getPredicatedBackedgeTakenCount.
///
/// Note that this does not imply that X is equal to the backedge taken
/// count. This means that if we have a nusw predicate for i32 {0,+,1} with a
/// predicated backedge taken count of X, we only guarantee that {0,+,1} has
/// nusw in the first X iterations. {0,+,1} may still wrap in the loop if we
/// have more than X iterations.
class SCEVWrapPredicate final : public SCEVPredicate {
public:
  /// Similar to SCEV::NoWrapFlags, but with slightly different semantics
  /// for FlagNUSW. The increment is considered to be signed, and a + b
  /// (where b is the increment) is considered to wrap if:
  ///    zext(a + b) != zext(a) + sext(b)
  ///
  /// If Signed is a function that takes an n-bit tuple and maps to the
  /// integer domain as the tuple's value interpreted as two's complement,
  /// and Unsigned a function that takes an n-bit tuple and maps to the
  /// integer domain as the base-two value of the input tuple, then a + b
  /// has IncrementNUSW iff:
  ///
  ///    0 <= Unsigned(a) + Signed(b) < 2^n
  ///
  /// The IncrementNSSW flag has identical semantics with SCEV::FlagNSW.
  ///
  /// Note that the IncrementNUSW flag is not commutative: if base + inc
  /// has IncrementNUSW, then inc + base doesn't necessarily have this
  /// property. The reason for this is that this is used for sign/zero
  /// extending affine AddRec SCEV expressions when a SCEVWrapPredicate is
  /// assumed. A {base,+,inc} expression is already non-commutative with
  /// regard to base and inc, since it is interpreted as:
  ///     (((base + inc) + inc) + inc) ...
  enum IncrementWrapFlags {
    IncrementAnyWrap = 0,     // No guarantee.
    IncrementNUSW = (1 << 0), // No unsigned wrap with signed increment.
    IncrementNSSW = (1 << 1), // No signed wrap with signed increment
                              // (equivalent with SCEV::NSW).
    IncrementNoWrapMask = (1 << 2) - 1
  };

  /// Convenient IncrementWrapFlags manipulation methods.
  LLVM_NODISCARD static SCEVWrapPredicate::IncrementWrapFlags
  clearFlags(SCEVWrapPredicate::IncrementWrapFlags Flags,
             SCEVWrapPredicate::IncrementWrapFlags OffFlags) {
    assert((Flags & IncrementNoWrapMask) == Flags && "Invalid flags value!");
    assert((OffFlags & IncrementNoWrapMask) == OffFlags &&
           "Invalid flags value!");
    return (SCEVWrapPredicate::IncrementWrapFlags)(Flags & ~OffFlags);
  }

  LLVM_NODISCARD static SCEVWrapPredicate::IncrementWrapFlags
  maskFlags(SCEVWrapPredicate::IncrementWrapFlags Flags, int Mask) {
    assert((Flags & IncrementNoWrapMask) == Flags && "Invalid flags value!");
    assert((Mask & IncrementNoWrapMask) == Mask && "Invalid mask value!");
    return (SCEVWrapPredicate::IncrementWrapFlags)(Flags & Mask);
  }

  LLVM_NODISCARD static SCEVWrapPredicate::IncrementWrapFlags
  setFlags(SCEVWrapPredicate::IncrementWrapFlags Flags,
           SCEVWrapPredicate::IncrementWrapFlags OnFlags) {
    assert((Flags & IncrementNoWrapMask) == Flags && "Invalid flags value!");
    assert((OnFlags & IncrementNoWrapMask) == OnFlags &&
           "Invalid flags value!");
    return (SCEVWrapPredicate::IncrementWrapFlags)(Flags | OnFlags);
  }

  /// Returns the set of SCEVWrapPredicate no wrap flags implied by a
  /// SCEVAddRecExpr.
  LLVM_NODISCARD static SCEVWrapPredicate::IncrementWrapFlags
  getImpliedFlags(const SCEVAddRecExpr *AR, ScalarEvolution &SE);

private:
  const SCEVAddRecExpr *AR;
  IncrementWrapFlags Flags;

public:
  explicit SCEVWrapPredicate(const FoldingSetNodeIDRef ID,
                             const SCEVAddRecExpr *AR,
                             IncrementWrapFlags Flags);

  /// Returns the set of assumed no-overflow flags.
  IncrementWrapFlags getFlags() const { return Flags; }

  /// Implementation of the SCEVPredicate interface
  const SCEV *getExpr() const override;
  bool implies(const SCEVPredicate *N) const override;
  void print(raw_ostream &OS, unsigned Depth = 0) const override;
  bool isAlwaysTrue() const override;

  /// Methods for support type inquiry through isa, cast, and dyn_cast:
  static inline bool classof(const SCEVPredicate *P) {
    return P->getKind() == P_Wrap;
  }
};

/// This class represents a composition of other SCEV predicates, and is the
/// class that most clients will interact with. This is equivalent to a
/// logical "AND" of all the predicates in the union.
///
/// NB! Unlike other SCEVPredicate sub-classes this class does not live in the
/// ScalarEvolution::Preds folding set. This is why the \c add function is
/// sound.
class SCEVUnionPredicate final : public SCEVPredicate {
private:
  typedef DenseMap<const SCEV *, SmallVector<const SCEVPredicate *, 4>>
      PredicateMap;

  /// Vector with references to all predicates in this union.
  SmallVector<const SCEVPredicate *, 16> Preds;

  /// Maps SCEVs to predicates for quick look-ups.
  PredicateMap SCEVToPreds;

public:
  SCEVUnionPredicate();

  const SmallVectorImpl<const SCEVPredicate *> &getPredicates() const {
    return Preds;
  }

  /// Adds a predicate to this union.
  void add(const SCEVPredicate *N);

  /// Returns a reference to a vector containing all predicates which apply to
  /// \p Expr.
  ArrayRef<const SCEVPredicate *> getPredicatesForExpr(const SCEV *Expr);

  /// Implementation of the SCEVPredicate interface
  bool isAlwaysTrue() const override;
  bool implies(const SCEVPredicate *N) const override;
  void print(raw_ostream &OS, unsigned Depth) const override;
  const SCEV *getExpr() const override;

  /// We estimate the complexity of a union predicate as the number of
  /// predicates in the union.
  unsigned getComplexity() const override { return Preds.size(); }

  /// Methods for support type inquiry through isa, cast, and dyn_cast:
  static inline bool classof(const SCEVPredicate *P) {
    return P->getKind() == P_Union;
  }
};

/// The main scalar evolution driver. Because client code (intentionally)
/// can't do much with the SCEV objects directly, they must ask this class
/// for services.
class ScalarEvolution {
public:
  /// An enum describing the relationship between a SCEV and a loop.
  enum LoopDisposition {
    LoopVariant,   ///< The SCEV is loop-variant (unknown).
    LoopInvariant, ///< The SCEV is loop-invariant.
    LoopComputable ///< The SCEV varies predictably with the loop.
  };

  /// An enum describing the relationship between a SCEV and a basic block.
  enum BlockDisposition {
    DoesNotDominateBlock,  ///< The SCEV does not dominate the block.
    DominatesBlock,        ///< The SCEV dominates the block.
    ProperlyDominatesBlock ///< The SCEV properly dominates the block.
  };

  /// Convenient NoWrapFlags manipulation that hides enum casts and is
  /// visible in the ScalarEvolution name space.
  LLVM_NODISCARD static SCEV::NoWrapFlags maskFlags(SCEV::NoWrapFlags Flags,
                                                    int Mask) {
    return (SCEV::NoWrapFlags)(Flags & Mask);
  }
  LLVM_NODISCARD static SCEV::NoWrapFlags setFlags(SCEV::NoWrapFlags Flags,
                                                   SCEV::NoWrapFlags OnFlags) {
    return (SCEV::NoWrapFlags)(Flags | OnFlags);
  }
  LLVM_NODISCARD static SCEV::NoWrapFlags
  clearFlags(SCEV::NoWrapFlags Flags, SCEV::NoWrapFlags OffFlags) {
    return (SCEV::NoWrapFlags)(Flags & ~OffFlags);
  }
|
[SCEV][LV] Add SCEV Predicates and use them to re-implement stride versioning
Summary:
SCEV Predicates represent conditions that typically cannot be derived from
static analysis, but can be used to reduce SCEV expressions to forms which are
usable for different optimizers.
ScalarEvolution now has the rewriteUsingPredicate method which can simplify a
SCEV expression using a SCEVPredicateSet. The normal workflow of a pass using
SCEVPredicates would be to hold a SCEVPredicateSet and every time assumptions
need to be made a new SCEV Predicate would be created and added to the set.
Each time after calling getSCEV, the user will call the rewriteUsingPredicate
method.
We add two types of predicates
SCEVPredicateSet - implements a set of predicates
SCEVEqualPredicate - tests for equality between two SCEV expressions
We use the SCEVEqualPredicate to re-implement stride versioning. Every time we
version a stride, we will add a SCEVEqualPredicate to the context.
Instead of adding specific stride checks, LoopVectorize now adds a more
generic SCEV check.
We only need to add support for this in the LoopVectorizer since this is the
only pass that will do stride versioning.
Reviewers: mzolotukhin, anemet, hfinkel, sanjoy
Subscribers: sanjoy, hfinkel, rengolin, jmolloy, llvm-commits
Differential Revision: http://reviews.llvm.org/D13595
llvm-svn: 251800
2015-11-02 14:41:02 +00:00
|
|
|
|
2016-09-25 23:11:51 +00:00
|
|
|
private:
  /// A CallbackVH to arrange for ScalarEvolution to be notified whenever a
  /// Value is deleted.
  class SCEVCallbackVH final : public CallbackVH {
    ScalarEvolution *SE;

    void deleted() override;
    void allUsesReplacedWith(Value *New) override;

  public:
    SCEVCallbackVH(Value *V, ScalarEvolution *SE = nullptr);
  };
  friend class SCEVCallbackVH;
  friend class SCEVExpander;
  friend class SCEVUnknown;

  /// The function we are analyzing.
  ///
  Function &F;

  /// Does the module have any calls to the llvm.experimental.guard intrinsic
  /// at all? If this is false, we avoid doing work that will only help if
  /// there are guards present in the IR.
  ///
  bool HasGuards;

  /// The target library information for the target we are targeting.
  ///
  TargetLibraryInfo &TLI;
|
[PM] Port ScalarEvolution to the new pass manager.
This change makes ScalarEvolution a stand-alone object and just produces
one from a pass as needed. Making this work well requires making the
object movable, using references instead of overwritten pointers in
a number of places, and other refactorings.
I've also wired it up to the new pass manager and added a RUN line to
a test to exercise it under the new pass manager. This includes basic
printing support much like with other analyses.
But there is a big and somewhat scary change here. Prior to this patch
ScalarEvolution was never *actually* invalidated!!! Re-running the pass
just re-wired up the various other analyses and didn't remove any of the
existing entries in the SCEV caches or clear out anything at all. This
might seem OK as everything in SCEV that can uses ValueHandles to track
updates to the values that serve as SCEV keys. However, this still means
that as we ran SCEV over each function in the module, we kept
accumulating more and more SCEVs into the cache. At the end, we would
have a SCEV cache with every value that we ever needed a SCEV for in the
entire module!!! Yowzers. The releaseMemory routine would dump all of
this, but that isn't realy called during normal runs of the pipeline as
far as I can see.
To make matters worse, there *is* actually a key that we don't update
with value handles -- there is a map keyed off of Loop*s. Because
LoopInfo *does* release its memory from run to run, it is entirely
possible to run SCEV over one function, then over another function, and
then lookup a Loop* from the second function but find an entry inserted
for the first function! Ouch.
To make matters still worse, there are plenty of updates that *don't*
trip a value handle. It seems incredibly unlikely that today GVN or
another pass that invalidates SCEV can update values in *just* such
a way that a subsequent run of SCEV will incorrectly find lookups in
a cache, but it is theoretically possible and would be a nightmare to
debug.
With this refactoring, I've fixed all this by actually destroying and
recreating the ScalarEvolution object from run to run. Technically, this
could increase the amount of malloc traffic we see, but then again it is
also technically correct. ;] I don't actually think we're suffering from
tons of malloc traffic from SCEV because if we were, the fact that we
never clear the memory would seem more likely to have come up as an
actual problem before now. So, I've made the simple fix here. If in fact
there are serious issues with too much allocation and deallocation,
I can work on a clever fix that preserves the allocations (while
clearing the data) between each run, but I'd prefer to do that kind of
optimization with a test case / benchmark that shows why we need such
cleverness (and that can test that we actually make it faster). It's
possible that this will make some things faster by making the SCEV
caches have higher locality (due to being significantly smaller) so
until there is a clear benchmark, I think the simple change is best.
Differential Revision: http://reviews.llvm.org/D12063
llvm-svn: 245193
2015-08-17 02:08:17 +00:00
|
|
|
|
2016-12-19 08:22:17 +00:00
|
|
|
/// The tracker for @llvm.assume intrinsics in this function.
|
|
|
|
AssumptionCache &AC;
|
|
|
|
|
2016-09-25 23:11:51 +00:00
|
|
|
/// The dominator tree.
|
|
|
|
///
|
|
|
|
DominatorTree &DT;
  /// The loop information for the function we are currently analyzing.
  ///
  LoopInfo &LI;

  /// This SCEV is used to represent unknown trip counts and things.
  std::unique_ptr<SCEVCouldNotCompute> CouldNotCompute;

  /// The typedef for HasRecMap.
  ///
  typedef DenseMap<const SCEV *, bool> HasRecMapType;

  /// This is a cache to record whether a SCEV contains any scAddRecExpr.
  HasRecMapType HasRecMap;

  /// The typedef for ExprValueMap.
  ///
  typedef std::pair<Value *, ConstantInt *> ValueOffsetPair;
  typedef DenseMap<const SCEV *, SetVector<ValueOffsetPair>> ExprValueMapType;
  /// ExprValueMap -- This map records the original values from which
  /// the SCEV expressions are generated.
  ///
  /// We want to represent the mapping as SCEV -> ValueOffsetPair instead
  /// of SCEV -> Value:
  /// Suppose we know S1 expands to V1, and
  ///   S1 = S2 + C_a
  ///   S3 = S2 + C_b
  /// where C_a and C_b are different SCEVConstants. Then we'd like to
  /// expand S3 as V1 - C_a + C_b instead of expanding S2 literally.
  /// This is helpful when S2 is a complex SCEV expression.
  ///
  /// In order to do that, we represent ExprValueMap as a mapping from
  /// SCEV to ValueOffsetPair. We will save both S1->{V1, 0} and
  /// S2->{V1, C_a} into the map when we create SCEV for V1. When S3
  /// is expanded, it will first expand S2 to V1 - C_a because of
  /// S2->{V1, C_a} in the map, then expand S3 to V1 - C_a + C_b.
  ///
  /// Note: S->{V, Offset} in the ExprValueMap means S can be expanded
  /// to V - Offset.
  ExprValueMapType ExprValueMap;
  /// The typedef for ValueExprMap.
  ///
  typedef DenseMap<SCEVCallbackVH, const SCEV *, DenseMapInfo<Value *>>
      ValueExprMapType;

  /// This is a cache of the values we have analyzed so far.
  ///
  ValueExprMapType ValueExprMap;

  /// Mark predicate values currently being processed by isImpliedCond.
  SmallPtrSet<Value *, 6> PendingLoopPredicates;

  /// Set to true by isLoopBackedgeGuardedByCond when we're walking the set of
  /// conditions dominating the backedge of a loop.
  bool WalkingBEDominatingConds;

  /// Set to true by isKnownPredicateViaSplitting when we're trying to prove a
  /// predicate by splitting it into a set of independent predicates.
  bool ProvingSplitPredicate;
  /// Memoized values for the GetMinTrailingZeros method.
  DenseMap<const SCEV *, uint32_t> MinTrailingZerosCache;

  /// Private helper method for the GetMinTrailingZeros method.
  uint32_t GetMinTrailingZerosImpl(const SCEV *S);

  /// Information about the number of loop iterations for which a loop exit's
  /// branch condition evaluates to the not-taken path. This is a temporary
  /// pair of exact and max expressions that are eventually summarized in
  /// ExitNotTakenInfo and BackedgeTakenInfo.
  struct ExitLimit {
    const SCEV *ExactNotTaken; // The exit is not taken exactly this many times
    const SCEV *MaxNotTaken;   // The exit is not taken at most this many times
    bool MaxOrZero;            // Not taken either exactly MaxNotTaken or zero times

    /// A set of predicate guards for this ExitLimit. The result is only valid
    /// if all of the predicates in \c Predicates evaluate to 'true' at
    /// run-time.
    SmallPtrSet<const SCEVPredicate *, 4> Predicates;

    void addPredicate(const SCEVPredicate *P) {
      assert(!isa<SCEVUnionPredicate>(P) && "Only add leaf predicates here!");
      Predicates.insert(P);
    }

    /*implicit*/ ExitLimit(const SCEV *E);

    ExitLimit(
        const SCEV *E, const SCEV *M, bool MaxOrZero,
        ArrayRef<const SmallPtrSetImpl<const SCEVPredicate *> *> PredSetList);

    ExitLimit(const SCEV *E, const SCEV *M, bool MaxOrZero,
              const SmallPtrSetImpl<const SCEVPredicate *> &PredSet);

    ExitLimit(const SCEV *E, const SCEV *M, bool MaxOrZero);

    /// Test whether this ExitLimit contains any computed information, or
    /// whether it's all SCEVCouldNotCompute values.
    bool hasAnyInfo() const {
      return !isa<SCEVCouldNotCompute>(ExactNotTaken) ||
             !isa<SCEVCouldNotCompute>(MaxNotTaken);
    }

    /// Test whether this ExitLimit contains all information.
    bool hasFullInfo() const {
      return !isa<SCEVCouldNotCompute>(ExactNotTaken);
    }
  };
  /// Information about the number of times a particular loop exit may be
  /// reached before exiting the loop.
  struct ExitNotTakenInfo {
    PoisoningVH<BasicBlock> ExitingBlock;
    const SCEV *ExactNotTaken;
    std::unique_ptr<SCEVUnionPredicate> Predicate;

    bool hasAlwaysTruePredicate() const {
      return !Predicate || Predicate->isAlwaysTrue();
    }

    explicit ExitNotTakenInfo(PoisoningVH<BasicBlock> ExitingBlock,
                              const SCEV *ExactNotTaken,
                              std::unique_ptr<SCEVUnionPredicate> Predicate)
        : ExitingBlock(ExitingBlock), ExactNotTaken(ExactNotTaken),
          Predicate(std::move(Predicate)) {}
  };
  /// Information about the backedge-taken count of a loop. This currently
  /// includes an exact count and a maximum count.
  ///
  class BackedgeTakenInfo {
    /// A list of computable exits and their not-taken counts. Loops almost
    /// never have more than one computable exit.
    SmallVector<ExitNotTakenInfo, 1> ExitNotTaken;

    /// The pointer part of \c MaxAndComplete is an expression indicating the
    /// least maximum backedge-taken count of the loop that is known, or a
    /// SCEVCouldNotCompute. This expression is only valid if the predicates
    /// associated with all loop exits are true.
    ///
    /// The integer part of \c MaxAndComplete is a boolean indicating if \c
    /// ExitNotTaken has an element for every exiting block in the loop.
    PointerIntPair<const SCEV *, 1> MaxAndComplete;

    /// True iff the backedge is taken either exactly Max or zero times.
    bool MaxOrZero;

    /// \name Helper projection functions on \c MaxAndComplete.
    /// @{
    bool isComplete() const { return MaxAndComplete.getInt(); }
    const SCEV *getMax() const { return MaxAndComplete.getPointer(); }
    /// @}

  public:
    BackedgeTakenInfo() : MaxAndComplete(nullptr, 0), MaxOrZero(false) {}

    BackedgeTakenInfo(BackedgeTakenInfo &&) = default;
    BackedgeTakenInfo &operator=(BackedgeTakenInfo &&) = default;

    typedef std::pair<BasicBlock *, ExitLimit> EdgeExitInfo;

    /// Initialize BackedgeTakenInfo from a list of exact exit counts.
    BackedgeTakenInfo(SmallVectorImpl<EdgeExitInfo> &&ExitCounts, bool Complete,
                      const SCEV *MaxCount, bool MaxOrZero);

    /// Test whether this BackedgeTakenInfo contains any computed information,
    /// or whether it's all SCEVCouldNotCompute values.
    bool hasAnyInfo() const {
      return !ExitNotTaken.empty() || !isa<SCEVCouldNotCompute>(getMax());
    }

    /// Test whether this BackedgeTakenInfo contains complete information.
    bool hasFullInfo() const { return isComplete(); }
|
2016-09-25 23:11:51 +00:00
|
|
|
|
2017-05-22 06:46:04 +00:00
|
|
|
/// Return an expression indicating the exact *backedge-taken*
|
|
|
|
/// count of the loop if it is known or SCEVCouldNotCompute
|
|
|
|
/// otherwise. If execution makes it to the backedge on every
|
|
|
|
/// iteration (i.e. there are no abnormal exists like exception
|
|
|
|
/// throws and thread exits) then this is the number of times the
|
|
|
|
/// loop header will execute minus one.
|
2016-09-25 23:11:51 +00:00
|
|
|
///
|
|
|
|
/// If the SCEV predicate associated with the answer can be different
|
|
|
|
/// from AlwaysTrue, we must add a (non null) Predicates argument.
|
|
|
|
/// The SCEV predicate associated with the answer will be added to
|
|
|
|
/// Predicates. A run-time check needs to be emitted for the SCEV
|
|
|
|
/// predicate in order for the answer to be valid.
|
|
|
|
///
|
|
|
|
/// Note that we should always know if we need to pass a predicate
|
|
|
|
/// argument or not from the way the ExitCounts vector was computed.
|
|
|
|
/// If we allowed SCEV predicates to be generated when populating this
|
|
|
|
/// vector, this information can contain them and therefore a
|
|
|
|
/// SCEVPredicate argument should be added to getExact.
|
|
|
|
const SCEV *getExact(ScalarEvolution *SE,
|
|
|
|
SCEVUnionPredicate *Predicates = nullptr) const;
|
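    // Sketch of a typical caller of the predicated getExact overload above.
    // The ScalarEvolution instance (SE) and BackedgeTakenInfo object (BTI)
    // named here are illustrative assumptions, not declarations from this
    // header:
    //
    // \code
    //   SCEVUnionPredicate Predicates;
    //   const SCEV *Exact = BTI.getExact(&SE, &Predicates);
    //   if (!isa<SCEVCouldNotCompute>(Exact)) {
    //     // Exact is only valid at run time if a check for Predicates is
    //     // emitted and passes.
    //   }
    // \endcode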
    /// Return the number of times this loop exit may fall through to the back
    /// edge, or SCEVCouldNotCompute. The loop is guaranteed not to exit via
    /// this block before this number of iterations, but may exit via another
    /// block.
    const SCEV *getExact(BasicBlock *ExitingBlock, ScalarEvolution *SE) const;

    /// Get the max backedge taken count for the loop.
    const SCEV *getMax(ScalarEvolution *SE) const;

    /// Return true if the number of times this backedge is taken is either the
    /// value returned by getMax or zero.
    bool isMaxOrZero(ScalarEvolution *SE) const;

    /// Return true if any backedge taken count expressions refer to the given
    /// subexpression.
    bool hasOperand(const SCEV *S, ScalarEvolution *SE) const;

    /// Invalidate this result and free associated memory.
    void clear();
  };
  /// Cache the backedge-taken count of the loops for this function as they
  /// are computed.
  DenseMap<const Loop *, BackedgeTakenInfo> BackedgeTakenCounts;

  /// Cache the predicated backedge-taken count of the loops for this
  /// function as they are computed.
  DenseMap<const Loop *, BackedgeTakenInfo> PredicatedBackedgeTakenCounts;

  /// This map contains entries for all of the PHI instructions that we
  /// attempt to compute constant evolutions for. This allows us to avoid
  /// potentially expensive recomputation of these properties. An instruction
  /// maps to null if we are unable to compute its exit value.
  DenseMap<PHINode *, Constant *> ConstantEvolutionLoopExitValue;

  /// This map contains entries for all the expressions that we attempt to
  /// compute getSCEVAtScope information for, which can be expensive in
  /// extreme cases.
  DenseMap<const SCEV *, SmallVector<std::pair<const Loop *, const SCEV *>, 2>>
      ValuesAtScopes;

  /// Memoized computeLoopDisposition results.
  DenseMap<const SCEV *,
           SmallVector<PointerIntPair<const Loop *, 2, LoopDisposition>, 2>>
      LoopDispositions;
  struct LoopProperties {
    /// Set to true if the loop contains no instruction that can abnormally
    /// exit the loop (i.e. via throwing an exception, by terminating the
    /// thread cleanly or by infinite looping in a called function). Strictly
    /// speaking, the last one is not leaving the loop, but is identical to
    /// leaving the loop for reasoning about undefined behavior.
    bool HasNoAbnormalExits;

    /// Set to true if the loop contains no instruction that can have side
    /// effects (i.e. via throwing an exception, volatile or atomic access).
    bool HasNoSideEffects;
  };

  /// Cache for \c getLoopProperties.
  DenseMap<const Loop *, LoopProperties> LoopPropertiesCache;

  /// Return a \c LoopProperties instance for \p L, creating one if necessary.
  LoopProperties getLoopProperties(const Loop *L);

  bool loopHasNoSideEffects(const Loop *L) {
    return getLoopProperties(L).HasNoSideEffects;
  }

  bool loopHasNoAbnormalExits(const Loop *L) {
    return getLoopProperties(L).HasNoAbnormalExits;
  }
  /// Compute a LoopDisposition value.
  LoopDisposition computeLoopDisposition(const SCEV *S, const Loop *L);

  /// Memoized computeBlockDisposition results.
  DenseMap<
      const SCEV *,
      SmallVector<PointerIntPair<const BasicBlock *, 2, BlockDisposition>, 2>>
      BlockDispositions;

  /// Compute a BlockDisposition value.
  BlockDisposition computeBlockDisposition(const SCEV *S, const BasicBlock *BB);

  /// Memoized results from getRange.
  DenseMap<const SCEV *, ConstantRange> UnsignedRanges;

  /// Memoized results from getRange.
  DenseMap<const SCEV *, ConstantRange> SignedRanges;

  /// Used to parameterize getRange.
  enum RangeSignHint { HINT_RANGE_UNSIGNED, HINT_RANGE_SIGNED };
  /// Set the memoized range for the given SCEV.
  const ConstantRange &setRange(const SCEV *S, RangeSignHint Hint,
                                ConstantRange CR) {
    DenseMap<const SCEV *, ConstantRange> &Cache =
        Hint == HINT_RANGE_UNSIGNED ? UnsignedRanges : SignedRanges;

    auto Pair = Cache.try_emplace(S, std::move(CR));
    // If the key was already present, try_emplace is guaranteed not to have
    // moved from its arguments, so CR is still valid to assign from below.
    if (!Pair.second)
      Pair.first->second = std::move(CR);
    return Pair.first->second;
  }
  /// Determine the range for a particular SCEV.
  ConstantRange getRange(const SCEV *S, RangeSignHint Hint);
  /// Determines the range for the affine SCEVAddRecExpr {\p Start,+,\p Stop}.
  /// Helper for \c getRange.
  ConstantRange getRangeForAffineAR(const SCEV *Start, const SCEV *Stop,
                                    const SCEV *MaxBECount, unsigned BitWidth);
  /// Try to compute a range for the affine SCEVAddRecExpr {\p Start,+,\p
  /// Stop} by "factoring out" a ternary expression from the add recurrence.
  /// Helper called by \c getRange.
  ConstantRange getRangeViaFactoring(const SCEV *Start, const SCEV *Stop,
                                     const SCEV *MaxBECount, unsigned BitWidth);
  /// We know that there is no SCEV for the specified value. Analyze the
  /// expression.
  const SCEV *createSCEV(Value *V);
  /// Provide the special handling we need to analyze PHI SCEVs.
  const SCEV *createNodeForPHI(PHINode *PN);
  /// Helper function called from createNodeForPHI.
  const SCEV *createAddRecFromPHI(PHINode *PN);
  /// A helper function for createAddRecFromPHI to handle simple cases.
  const SCEV *createSimpleAffineAddRec(PHINode *PN, Value *BEValueV,
                                       Value *StartValueV);
  /// Helper function called from createNodeForPHI.
  const SCEV *createNodeFromSelectLikePHI(PHINode *PN);
/// Provide special handling for a select-like instruction (currently this
/// is either a select instruction or a phi node). \p I is the instruction
/// being processed, and it is assumed equivalent to "Cond ? TrueVal :
/// FalseVal".
const SCEV *createNodeForSelectOrPHI(Instruction *I, Value *Cond,
                                     Value *TrueVal, Value *FalseVal);
/// Provide the special handling we need to analyze GEP SCEVs.
const SCEV *createNodeForGEP(GEPOperator *GEP);
Re-commit r255115, with the PredicatedScalarEvolution class moved to
ScalarEvolution.h, in order to avoid cyclic dependencies between the Transform
and Analysis modules:
[LV][LAA] Add a layer over SCEV to apply run-time checked knowledge on SCEV expressions
Summary:
This change creates a layer over ScalarEvolution for LAA and LV, and centralizes the
usage of SCEV predicates. The SCEVPredicatedLayer takes the statically deduced knowledge
by ScalarEvolution and applies the knowledge from the SCEV predicates. The end goal is
that both LAA and LV should use this interface everywhere.
This also solves a problem involving the result of SCEV expression rewriting when
the predicate changes. Suppose we have the expression (sext {a,+,b}) and two predicates
P1: {a,+,b} has nsw
P2: b = 1.
Applying P1 and then P2 gives us {a,+,1}, while applying P2 and then P1 gives us
sext({a,+,1}) (the AddRec expression was changed by P2 so P1 no longer applies).
The SCEVPredicatedLayer maintains the order of transformations by feeding back
the results of previous transformations into new transformations, and therefore
avoiding this issue.
The SCEVPredicatedLayer maintains a cache of previous SCEV rewriting results.
This also has the benefit of reducing the overall number of expression
rewrites.
Reviewers: mzolotukhin, anemet
Subscribers: jmolloy, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D14296
llvm-svn: 255122
2015-12-09 16:06:28 +00:00
/// Implementation code for getSCEVAtScope; called at most once for each
/// SCEV+Loop pair.
const SCEV *computeSCEVAtScope(const SCEV *S, const Loop *L);

/// This looks up computed SCEV values for all instructions that depend on
/// the given instruction and removes them from the ValueExprMap map if they
/// reference SymName. This is used during PHI resolution.
void forgetSymbolicName(Instruction *I, const SCEV *SymName);

/// Return the BackedgeTakenInfo for the given loop, lazily computing new
/// values if the loop hasn't been analyzed yet. The returned result is
/// guaranteed not to be predicated.
const BackedgeTakenInfo &getBackedgeTakenInfo(const Loop *L);

/// Similar to getBackedgeTakenInfo, but will add predicates as required
/// with the purpose of returning complete information.
const BackedgeTakenInfo &getPredicatedBackedgeTakenInfo(const Loop *L);

/// Compute the number of times the specified loop will iterate.
/// If AllowPredicates is set, we will create new SCEV predicates as
/// necessary in order to return an exact answer.
BackedgeTakenInfo computeBackedgeTakenCount(const Loop *L,
                                            bool AllowPredicates = false);

/// Compute the number of times the backedge of the specified loop will
/// execute if it exits via the specified block. If AllowPredicates is set,
/// this call will try to use a minimal set of SCEV predicates in order to
/// return an exact answer.
ExitLimit computeExitLimit(const Loop *L, BasicBlock *ExitingBlock,
                           bool AllowPredicates = false);

/// Compute the number of times the backedge of the specified loop will
/// execute if its exit condition were a conditional branch of ExitCond,
/// TBB, and FBB.
///
/// \p ControlsExit is true if ExitCond directly controls the exit
/// branch. In this case, we can assume that the loop exits only if the
/// condition is true and can infer that failing to meet the condition prior
/// to integer wraparound results in undefined behavior.
///
/// If \p AllowPredicates is set, this call will try to use a minimal set of
/// SCEV predicates in order to return an exact answer.
ExitLimit computeExitLimitFromCond(const Loop *L, Value *ExitCond,
                                   BasicBlock *TBB, BasicBlock *FBB,
                                   bool ControlsExit,
                                   bool AllowPredicates = false);
// Helper functions for computeExitLimitFromCond to avoid exponential time
// complexity.

class ExitLimitCache {
  // It may look like we need to key on the whole (L, TBB, FBB, ControlsExit,
  // AllowPredicates) tuple, but recursive calls to
  // computeExitLimitFromCondCached from computeExitLimitFromCondImpl only
  // vary the \c ExitCond and \c ControlsExit parameters. We remember the
  // initial values of the other parameters to assert our assumption.
  SmallDenseMap<PointerIntPair<Value *, 1>, ExitLimit> TripCountMap;

  const Loop *L;
  BasicBlock *TBB;
  BasicBlock *FBB;
  bool AllowPredicates;

public:
  ExitLimitCache(const Loop *L, BasicBlock *TBB, BasicBlock *FBB,
                 bool AllowPredicates)
      : L(L), TBB(TBB), FBB(FBB), AllowPredicates(AllowPredicates) {}

  Optional<ExitLimit> find(const Loop *L, Value *ExitCond, BasicBlock *TBB,
                           BasicBlock *FBB, bool ControlsExit,
                           bool AllowPredicates);

  void insert(const Loop *L, Value *ExitCond, BasicBlock *TBB,
              BasicBlock *FBB, bool ControlsExit, bool AllowPredicates,
              const ExitLimit &EL);
};

typedef ExitLimitCache ExitLimitCacheTy;
ExitLimit computeExitLimitFromCondCached(ExitLimitCacheTy &Cache,
                                         const Loop *L, Value *ExitCond,
                                         BasicBlock *TBB, BasicBlock *FBB,
                                         bool ControlsExit,
                                         bool AllowPredicates);
ExitLimit computeExitLimitFromCondImpl(ExitLimitCacheTy &Cache, const Loop *L,
                                       Value *ExitCond, BasicBlock *TBB,
                                       BasicBlock *FBB, bool ControlsExit,
                                       bool AllowPredicates);
/// Compute the number of times the backedge of the specified loop will
/// execute if its exit condition were a conditional branch of the ICmpInst
/// ExitCond, TBB, and FBB. If AllowPredicates is set, this call will try
/// to use a minimal set of SCEV predicates in order to return an exact
/// answer.
ExitLimit computeExitLimitFromICmp(const Loop *L, ICmpInst *ExitCond,
                                   BasicBlock *TBB, BasicBlock *FBB,
                                   bool IsSubExpr,
                                   bool AllowPredicates = false);

/// Compute the number of times the backedge of the specified loop will
/// execute if its exit condition were a switch with a single exiting case
/// to ExitingBB.
ExitLimit computeExitLimitFromSingleExitSwitch(const Loop *L,
                                               SwitchInst *Switch,
                                               BasicBlock *ExitingBB,
                                               bool IsSubExpr);

/// Given an exit condition of 'icmp op load X, cst', try to see if we can
/// compute the backedge-taken count.
ExitLimit computeLoadConstantCompareExitLimit(LoadInst *LI, Constant *RHS,
                                              const Loop *L,
                                              ICmpInst::Predicate p);

/// Compute the exit limit of a loop that is controlled by a
/// "(IV >> 1) != 0" type comparison. We cannot compute the exact trip
/// count in these cases (since SCEV has no way of expressing them), but we
/// can still sometimes compute an upper bound.
///
/// Return an ExitLimit for a loop whose backedge is guarded by `LHS Pred
/// RHS`.
ExitLimit computeShiftCompareExitLimit(Value *LHS, Value *RHS, const Loop *L,
                                       ICmpInst::Predicate Pred);

/// If the loop is known to execute a constant number of times (the
/// condition evolves only from constants), try to evaluate a few iterations
/// of the loop until the exit condition gets a value of ExitWhen (true or
/// false). If we cannot evaluate the exit count of the loop, return
/// CouldNotCompute.
const SCEV *computeExitCountExhaustively(const Loop *L, Value *Cond,
                                         bool ExitWhen);

/// Return the number of times an exit condition comparing the specified
/// value to zero will execute. If not computable, return CouldNotCompute.
/// If AllowPredicates is set, this call will try to use a minimal set of
/// SCEV predicates in order to return an exact answer.
ExitLimit howFarToZero(const SCEV *V, const Loop *L, bool IsSubExpr,
                       bool AllowPredicates = false);

/// Return the number of times an exit condition checking the specified
/// value for nonzero will execute. If not computable, return
/// CouldNotCompute.
ExitLimit howFarToNonZero(const SCEV *V, const Loop *L);

/// Return the number of times an exit condition containing the specified
/// less-than comparison will execute. If not computable, return
/// CouldNotCompute.
///
/// \p isSigned specifies whether the less-than is signed.
///
/// \p ControlsExit is true when the LHS < RHS condition directly controls
/// the branch (the loop exits only if the condition is true). In this case,
/// we can use NoWrapFlags to skip overflow checks.
///
/// If \p AllowPredicates is set, this call will try to use a minimal set of
/// SCEV predicates in order to return an exact answer.
ExitLimit howManyLessThans(const SCEV *LHS, const SCEV *RHS, const Loop *L,
                           bool isSigned, bool ControlsExit,
                           bool AllowPredicates = false);

ExitLimit howManyGreaterThans(const SCEV *LHS, const SCEV *RHS, const Loop *L,
                              bool isSigned, bool IsSubExpr,
                              bool AllowPredicates = false);

/// Return a predecessor of BB (which may not be an immediate predecessor)
/// which has exactly one successor from which BB is reachable, or null if
/// no such block is found.
std::pair<BasicBlock *, BasicBlock *>
getPredecessorWithUniqueSuccessorForBB(BasicBlock *BB);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the given FoundCondValue value evaluates to true.
bool isImpliedCond(ICmpInst::Predicate Pred, const SCEV *LHS, const SCEV *RHS,
                   Value *FoundCondValue, bool Inverse);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by FoundPred, FoundLHS, FoundRHS is
/// true.
bool isImpliedCond(ICmpInst::Predicate Pred, const SCEV *LHS, const SCEV *RHS,
                   ICmpInst::Predicate FoundPred, const SCEV *FoundLHS,
                   const SCEV *FoundRHS);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by Pred, FoundLHS, and FoundRHS is
/// true.
bool isImpliedCondOperands(ICmpInst::Predicate Pred, const SCEV *LHS,
                           const SCEV *RHS, const SCEV *FoundLHS,
                           const SCEV *FoundRHS);
[ScalarEvolution] Re-enable Predicate implication from operations
The patch rL298481 was reverted due to crash on clang-with-lto-ubuntu build.
The reason of the crash was type mismatch between either a or b and RHS in the following situation:
LHS = sext(a +nsw b) > RHS.
This is quite rare, but still a possible situation. Normally we need to cast all of {a, b, RHS} to their widest type.
But we try to avoid creation of new SCEV that are not constants to avoid initiating recursive analysis that
can take a lot of time and/or cache a bad value for iterations number. To deal with this, in this patch we
reject this case and will not try to analyze it if the type of sum doesn't match with the type of RHS. In this
situation we don't need to create any non-constant SCEVs.
This patch also adds an assertion to the method IsProvedViaContext so that we can fail on it and not
go further into range analysis etc. (because in some situations these analyses succeed even when the passed
arguments have wrong types, which should not normally happen).
The patch also contains a fix for a problem with too narrow scope of the analysis caused by wrong
usage of predicates in recursive invocations.
The regression test on the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll
Reviewers: reames, apilipenko, anna, sanjoy
Reviewed By: sanjoy
Subscribers: mzolotukhin, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D31238
llvm-svn: 299205
2017-03-31 12:05:30 +00:00
/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by Pred, FoundLHS, and FoundRHS is
/// true. Here LHS is an operation that includes FoundLHS as one of its
/// arguments.
bool isImpliedViaOperations(ICmpInst::Predicate Pred,
                            const SCEV *LHS, const SCEV *RHS,
                            const SCEV *FoundLHS, const SCEV *FoundRHS,
                            unsigned Depth = 0);

/// Test whether the condition described by Pred, LHS, and RHS is true.
/// Use only simple non-recursive types of checks, such as range analysis etc.
bool isKnownViaSimpleReasoning(ICmpInst::Predicate Pred,
                               const SCEV *LHS, const SCEV *RHS);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by Pred, FoundLHS, and FoundRHS is
/// true.
bool isImpliedCondOperandsHelper(ICmpInst::Predicate Pred, const SCEV *LHS,
                                 const SCEV *RHS, const SCEV *FoundLHS,
                                 const SCEV *FoundRHS);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by Pred, FoundLHS, and FoundRHS is
/// true. Utility function used by isImpliedCondOperands. Tries to get
/// cases like "X `sgt` 0 => X - 1 `sgt` -1".
bool isImpliedCondOperandsViaRanges(ICmpInst::Predicate Pred, const SCEV *LHS,
                                    const SCEV *RHS, const SCEV *FoundLHS,
                                    const SCEV *FoundRHS);

/// Return true if the condition denoted by \p LHS \p Pred \p RHS is implied
/// by a call to \c @llvm.experimental.guard in \p BB.
bool isImpliedViaGuard(BasicBlock *BB, ICmpInst::Predicate Pred,
                       const SCEV *LHS, const SCEV *RHS);

/// Test whether the condition described by Pred, LHS, and RHS is true
/// whenever the condition described by Pred, FoundLHS, and FoundRHS is
/// true.
///
/// This routine tries to rule out certain kinds of integer overflow, and
/// then tries to reason about arithmetic properties of the predicates.
bool isImpliedCondOperandsViaNoOverflow(ICmpInst::Predicate Pred,
                                        const SCEV *LHS, const SCEV *RHS,
                                        const SCEV *FoundLHS,
                                        const SCEV *FoundRHS);

/// If we know that the specified Phi is in the header of its containing
/// loop, we know the loop executes a constant number of times, and the PHI
/// node is just a recurrence involving constants, fold it.
Constant *getConstantEvolutionLoopExitValue(PHINode *PN, const APInt &BEs,
                                            const Loop *L);

/// Test if the given expression is known to satisfy the condition described
/// by Pred and the known constant ranges of LHS and RHS.
bool isKnownPredicateViaConstantRanges(ICmpInst::Predicate Pred,
                                       const SCEV *LHS, const SCEV *RHS);
/// Try to prove the condition described by "LHS Pred RHS" by ruling out
/// integer overflow.
///
/// For instance, this will return true for "A s< (A + C)<nsw>" if C is
/// positive.
bool isKnownPredicateViaNoOverflow(ICmpInst::Predicate Pred, const SCEV *LHS,
                                   const SCEV *RHS);

/// Try to split Pred LHS RHS into logical conjunctions (and's) and try to
/// prove them individually.
bool isKnownPredicateViaSplitting(ICmpInst::Predicate Pred, const SCEV *LHS,
                                  const SCEV *RHS);

/// Try to match the Expr as "(L + R)<Flags>".
bool splitBinaryAdd(const SCEV *Expr, const SCEV *&L, const SCEV *&R,
                    SCEV::NoWrapFlags &Flags);

/// Compute \p LHS - \p RHS and return the result as an APInt if it is a
/// constant, and None if it isn't.
///
/// This is intended to be a cheaper version of getMinusSCEV. We can be
/// frugal here since we just bail out of actually constructing and
/// canonicalizing an expression in the cases where the result isn't going
/// to be a constant.
Optional<APInt> computeConstantDifference(const SCEV *LHS, const SCEV *RHS);

/// Drop memoized information computed for S.
void forgetMemoizedResults(const SCEV *S);

/// Return an existing SCEV for V if there is one, otherwise return nullptr.
const SCEV *getExistingSCEV(Value *V);

/// Return false iff the given SCEV contains a SCEVUnknown with a null
/// value pointer.
bool checkValidity(const SCEV *S) const;

/// Return true if `ExtendOpTy`({`Start`,+,`Step`}) can be proved to be
/// equal to {`ExtendOpTy`(`Start`),+,`ExtendOpTy`(`Step`)}. This is
/// equivalent to proving no signed (resp. unsigned) wrap in
/// {`Start`,+,`Step`} if `ExtendOpTy` is `SCEVSignExtendExpr`
/// (resp. `SCEVZeroExtendExpr`).
template <typename ExtendOpTy>
bool proveNoWrapByVaryingStart(const SCEV *Start, const SCEV *Step,
                               const Loop *L);

/// Try to prove NSW or NUW on \p AR relying on ConstantRange manipulation.
SCEV::NoWrapFlags proveNoWrapViaConstantRanges(const SCEVAddRecExpr *AR);

bool isMonotonicPredicateImpl(const SCEVAddRecExpr *LHS,
                              ICmpInst::Predicate Pred, bool &Increasing);

/// Return SCEV no-wrap flags that can be proven based on reasoning about
/// how poison produced from no-wrap flags on this value (e.g. a nuw add)
/// would trigger undefined behavior on overflow.
SCEV::NoWrapFlags getNoWrapFlagsFromUB(const Value *V);

/// Return true if the SCEV corresponding to \p I is never poison. Proving
/// this is more complex than proving that just \p I is never poison, since
/// SCEV commons expressions across control flow, and you can have cases
/// like:
///
///   idx0 = a + b;
///   ptr[idx0] = 100;
///   if (<condition>) {
///     idx1 = a +nsw b;
///     ptr[idx1] = 200;
///   }
///
/// where the SCEV expression (+ a b) is guaranteed to not be poison (and
/// hence not sign-overflow) only if "<condition>" is true. Since both
/// `idx0` and `idx1` will be mapped to the same SCEV expression, (+ a b),
/// it is not okay to annotate (+ a b) with <nsw> in the above example.
bool isSCEVExprNeverPoison(const Instruction *I);

/// This is like \c isSCEVExprNeverPoison but it specifically works for
/// instructions that will get mapped to SCEV add recurrences. Return true
/// if \p I will never generate poison under the assumption that \p I is an
/// add recurrence on the loop \p L.
bool isAddRecNeverPoison(const Instruction *I, const Loop *L);
public:
ScalarEvolution(Function &F, TargetLibraryInfo &TLI, AssumptionCache &AC,
                DominatorTree &DT, LoopInfo &LI);
~ScalarEvolution();
ScalarEvolution(ScalarEvolution &&Arg);

LLVMContext &getContext() const { return F.getContext(); }

/// Test if values of the given type are analyzable within the SCEV
/// framework. This primarily includes integer types, and it can optionally
/// include pointer types if the ScalarEvolution class has access to
/// target-specific information.
bool isSCEVable(Type *Ty) const;

/// Return the size in bits of the specified type, for which isSCEVable must
/// return true.
uint64_t getTypeSizeInBits(Type *Ty) const;

/// Return a type with the same bitwidth as the given type and which
/// represents how SCEV will treat the given type, for which isSCEVable must
/// return true. For pointer types, this is the pointer-sized integer type.
Type *getEffectiveSCEVType(Type *Ty) const;
// Returns a wider type among {Ty1, Ty2}.
Type *getWiderType(Type *Ty1, Type *Ty2) const;

/// Return true if the SCEV is a scAddRecExpr or it contains
/// scAddRecExpr. The result will be cached in HasRecMap.
bool containsAddRecurrence(const SCEV *S);

/// Return the Value set from which the SCEV expr is generated.
SetVector<ValueOffsetPair> *getSCEVValues(const SCEV *S);

/// Erase Value from ValueExprMap and ExprValueMap.
void eraseValueFromMap(Value *V);

/// Return a SCEV expression for the full generality of the specified
/// expression.
const SCEV *getSCEV(Value *V);

const SCEV *getConstant(ConstantInt *V);
const SCEV *getConstant(const APInt &Val);
const SCEV *getConstant(Type *Ty, uint64_t V, bool isSigned = false);
const SCEV *getTruncateExpr(const SCEV *Op, Type *Ty);

typedef SmallDenseMap<std::pair<const SCEV *, Type *>, const SCEV *, 8>
    ExtendCacheTy;
const SCEV *getZeroExtendExpr(const SCEV *Op, Type *Ty);
const SCEV *getZeroExtendExprCached(const SCEV *Op, Type *Ty,
                                    ExtendCacheTy &Cache);
const SCEV *getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
                                  ExtendCacheTy &Cache);
const SCEV *getSignExtendExpr(const SCEV *Op, Type *Ty);
const SCEV *getSignExtendExprCached(const SCEV *Op, Type *Ty,
                                    ExtendCacheTy &Cache);
const SCEV *getSignExtendExprImpl(const SCEV *Op, Type *Ty,
                                  ExtendCacheTy &Cache);
const SCEV *getAnyExtendExpr(const SCEV *Op, Type *Ty);
const SCEV *getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap,
                       unsigned Depth = 0);
const SCEV *getAddExpr(const SCEV *LHS, const SCEV *RHS,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap) {
  SmallVector<const SCEV *, 2> Ops = {LHS, RHS};
  return getAddExpr(Ops, Flags);
}
const SCEV *getAddExpr(const SCEV *Op0, const SCEV *Op1, const SCEV *Op2,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap) {
  SmallVector<const SCEV *, 3> Ops = {Op0, Op1, Op2};
  return getAddExpr(Ops, Flags);
}
const SCEV *getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap);
const SCEV *getMulExpr(const SCEV *LHS, const SCEV *RHS,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap) {
  SmallVector<const SCEV *, 2> Ops = {LHS, RHS};
  return getMulExpr(Ops, Flags);
}
const SCEV *getMulExpr(const SCEV *Op0, const SCEV *Op1, const SCEV *Op2,
                       SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap) {
  SmallVector<const SCEV *, 3> Ops = {Op0, Op1, Op2};
  return getMulExpr(Ops, Flags);
}
const SCEV *getUDivExpr(const SCEV *LHS, const SCEV *RHS);
const SCEV *getUDivExactExpr(const SCEV *LHS, const SCEV *RHS);
const SCEV *getAddRecExpr(const SCEV *Start, const SCEV *Step, const Loop *L,
                          SCEV::NoWrapFlags Flags);
const SCEV *getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
                          const Loop *L, SCEV::NoWrapFlags Flags);
const SCEV *getAddRecExpr(const SmallVectorImpl<const SCEV *> &Operands,
                          const Loop *L, SCEV::NoWrapFlags Flags) {
  SmallVector<const SCEV *, 4> NewOp(Operands.begin(), Operands.end());
  return getAddRecExpr(NewOp, L, Flags);
}

/// Returns an expression for a GEP.
///
/// \p GEP The GEP. The indices contained in the GEP itself are ignored;
/// instead we use IndexExprs.
/// \p IndexExprs The expressions for the indices.
const SCEV *getGEPExpr(GEPOperator *GEP,
                       const SmallVectorImpl<const SCEV *> &IndexExprs);
const SCEV *getSMaxExpr(const SCEV *LHS, const SCEV *RHS);
|
|
|
|
const SCEV *getSMaxExpr(SmallVectorImpl<const SCEV *> &Operands);
|
|
|
|
const SCEV *getUMaxExpr(const SCEV *LHS, const SCEV *RHS);
|
|
|
|
const SCEV *getUMaxExpr(SmallVectorImpl<const SCEV *> &Operands);
|
|
|
|
const SCEV *getSMinExpr(const SCEV *LHS, const SCEV *RHS);
|
|
|
|
const SCEV *getUMinExpr(const SCEV *LHS, const SCEV *RHS);
|
|
|
|
const SCEV *getUnknown(Value *V);
|
|
|
|
const SCEV *getCouldNotCompute();

  /// Return a SCEV for the constant 0 of a specific type.
  const SCEV *getZero(Type *Ty) { return getConstant(Ty, 0); }

  /// Return a SCEV for the constant 1 of a specific type.
  const SCEV *getOne(Type *Ty) { return getConstant(Ty, 1); }

  /// Return an expression for sizeof AllocTy that is type IntTy.
  const SCEV *getSizeOfExpr(Type *IntTy, Type *AllocTy);

  /// Return an expression for offsetof on the given field with type IntTy.
  const SCEV *getOffsetOfExpr(Type *IntTy, StructType *STy, unsigned FieldNo);

  /// Return the SCEV object corresponding to -V.
  const SCEV *getNegativeSCEV(const SCEV *V,
                              SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap);

  /// Return the SCEV object corresponding to ~V.
  const SCEV *getNotSCEV(const SCEV *V);

  /// Return LHS-RHS. Minus is represented in SCEV as A+B*-1.
  const SCEV *getMinusSCEV(const SCEV *LHS, const SCEV *RHS,
                           SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. If the type must be extended, it is zero extended.
  const SCEV *getTruncateOrZeroExtend(const SCEV *V, Type *Ty);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. If the type must be extended, it is sign extended.
  const SCEV *getTruncateOrSignExtend(const SCEV *V, Type *Ty);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. If the type must be extended, it is zero extended. The
  /// conversion must not be narrowing.
  const SCEV *getNoopOrZeroExtend(const SCEV *V, Type *Ty);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. If the type must be extended, it is sign extended. The
  /// conversion must not be narrowing.
  const SCEV *getNoopOrSignExtend(const SCEV *V, Type *Ty);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. If the type must be extended, it is extended with
  /// unspecified bits. The conversion must not be narrowing.
  const SCEV *getNoopOrAnyExtend(const SCEV *V, Type *Ty);

  /// Return a SCEV corresponding to a conversion of the input value to the
  /// specified type. The conversion must not be widening.
  const SCEV *getTruncateOrNoop(const SCEV *V, Type *Ty);

  /// Promote the operands to the wider of the types using zero-extension, and
  /// then perform a umax operation with them.
  const SCEV *getUMaxFromMismatchedTypes(const SCEV *LHS, const SCEV *RHS);

  /// Promote the operands to the wider of the types using zero-extension, and
  /// then perform a umin operation with them.
  const SCEV *getUMinFromMismatchedTypes(const SCEV *LHS, const SCEV *RHS);

  /// Transitively follow the chain of pointer-type operands until reaching a
  /// SCEV that does not have a single pointer operand. This returns a
  /// SCEVUnknown pointer for well-formed pointer-type expressions, but corner
  /// cases do exist.
  const SCEV *getPointerBase(const SCEV *V);

  /// Return a SCEV expression for the specified value at the specified scope
  /// in the program. The L value specifies the loop nest in which to evaluate
  /// the expression: null means the top level, and a non-null loop means
  /// immediately inside that loop.
  ///
  /// This method can be used to compute the exit value for a variable defined
  /// in a loop by querying what the value will hold in the parent loop.
  ///
  /// In the case that a relevant loop exit value cannot be computed, the
  /// original value V is returned.
  const SCEV *getSCEVAtScope(const SCEV *S, const Loop *L);

  /// This is a convenience function which does getSCEVAtScope(getSCEV(V), L).
  const SCEV *getSCEVAtScope(Value *V, const Loop *L);

  /// Test whether entry to the loop is protected by a conditional between LHS
  /// and RHS. This is used to help avoid max expressions in loop trip
  /// counts, and to eliminate casts.
  bool isLoopEntryGuardedByCond(const Loop *L, ICmpInst::Predicate Pred,
                                const SCEV *LHS, const SCEV *RHS);

  /// Test whether the backedge of the loop is protected by a conditional
  /// between LHS and RHS. This is used to eliminate casts.
  bool isLoopBackedgeGuardedByCond(const Loop *L, ICmpInst::Predicate Pred,
                                   const SCEV *LHS, const SCEV *RHS);

  /// Returns the maximum trip count of the loop if it is a single-exit
  /// loop and we can compute a small maximum for that loop.
  ///
  /// Implemented in terms of the \c getSmallConstantTripCount overload with
  /// the single exiting block passed to it. See that routine for details.
  unsigned getSmallConstantTripCount(const Loop *L);

  /// Returns the maximum trip count of this loop as a normal unsigned
  /// value. Returns 0 if the trip count is unknown or not constant. This
  /// "trip count" assumes that control exits via ExitingBlock. More
  /// precisely, it is the number of times that control may reach ExitingBlock
  /// before taking the branch. For loops with multiple exits, it may not be
  /// the number of times that the loop header executes if the loop exits
  /// prematurely via another branch.
  unsigned getSmallConstantTripCount(const Loop *L, BasicBlock *ExitingBlock);

  /// Returns the upper bound of the loop trip count as a normal unsigned
  /// value.
  /// Returns 0 if the trip count is unknown or not constant.
  unsigned getSmallConstantMaxTripCount(const Loop *L);

  /// Returns the largest constant divisor of the trip count of the
  /// loop if it is a single-exit loop and we can compute a small maximum for
  /// that loop.
  ///
  /// Implemented in terms of the \c getSmallConstantTripMultiple overload with
  /// the single exiting block passed to it. See that routine for details.
  unsigned getSmallConstantTripMultiple(const Loop *L);

  /// Returns the largest constant divisor of the trip count of this loop as a
  /// normal unsigned value, if possible. This means that the actual trip
  /// count is always a multiple of the returned value (don't forget the trip
  /// count could very well be zero as well!). As explained in the comments
  /// for getSmallConstantTripCount, this assumes that control exits the loop
  /// via ExitingBlock.
  unsigned getSmallConstantTripMultiple(const Loop *L,
                                        BasicBlock *ExitingBlock);

  /// Get the expression for the number of loop iterations for which this loop
  /// is guaranteed not to exit via ExitingBlock. Otherwise return
  /// SCEVCouldNotCompute.
  const SCEV *getExitCount(const Loop *L, BasicBlock *ExitingBlock);

  /// If the specified loop has a predictable backedge-taken count, return it,
  /// otherwise return a SCEVCouldNotCompute object. The backedge-taken count
  /// is the number of times the loop header will be branched to from within
  /// the loop, assuming there are no abnormal exits like exception throws.
  /// This is one less than the trip count of the loop, since it doesn't count
  /// the first iteration, when the header is branched to from outside the
  /// loop.
  ///
  /// Note that it is not valid to call this method on a loop without a
  /// loop-invariant backedge-taken count (see
  /// hasLoopInvariantBackedgeTakenCount).
  const SCEV *getBackedgeTakenCount(const Loop *L);

  /// Similar to getBackedgeTakenCount, except it will add a set of
  /// SCEV predicates to Predicates that are required to be true in order for
  /// the answer to be correct. Predicates can be checked with run-time
  /// checks and can be used to perform loop versioning.
  const SCEV *getPredicatedBackedgeTakenCount(const Loop *L,
                                              SCEVUnionPredicate &Predicates);

  /// When successful, this returns a SCEVConstant that is greater than or
  /// equal to (i.e. a "conservative over-approximation" of) the value
  /// returned by getBackedgeTakenCount. If such a value cannot be computed,
  /// it returns the SCEVCouldNotCompute object.
  const SCEV *getMaxBackedgeTakenCount(const Loop *L);

  /// Return true if the backedge taken count is either the value returned by
  /// getMaxBackedgeTakenCount or zero.
  bool isBackedgeTakenCountMaxOrZero(const Loop *L);

  /// Return true if the specified loop has an analyzable loop-invariant
  /// backedge-taken count.
  bool hasLoopInvariantBackedgeTakenCount(const Loop *L);

  /// This method should be called by the client when it has changed a loop in
  /// a way that may affect ScalarEvolution's ability to compute a trip count,
  /// or if the loop is deleted. This call is potentially expensive for large
  /// loop bodies.
  void forgetLoop(const Loop *L);

  /// This method should be called by the client when it has changed a value
  /// in a way that may affect its value, or which may disconnect it from a
  /// def-use chain linking it to a loop.
  void forgetValue(Value *V);

  /// Called when the client has changed the disposition of values in
  /// this loop.
  ///
  /// We don't have a way to invalidate per-loop dispositions. Clear and
  /// recompute is simpler.
  void forgetLoopDispositions(const Loop *L) { LoopDispositions.clear(); }

  /// Determine the minimum number of zero bits that S is guaranteed to end in
  /// (at every loop iteration). It is, at the same time, the minimum number
  /// of times S is divisible by 2. For example, given {4,+,8} it returns 2.
  /// If S is guaranteed to be 0, it returns the bitwidth of S.
  uint32_t GetMinTrailingZeros(const SCEV *S);

  /// Determine the unsigned range for a particular SCEV.
  ConstantRange getUnsignedRange(const SCEV *S) {
    return getRange(S, HINT_RANGE_UNSIGNED);
  }

  /// Determine the signed range for a particular SCEV.
  ConstantRange getSignedRange(const SCEV *S) {
    return getRange(S, HINT_RANGE_SIGNED);
  }

  /// Test if the given expression is known to be negative.
  bool isKnownNegative(const SCEV *S);

  /// Test if the given expression is known to be positive.
  bool isKnownPositive(const SCEV *S);

  /// Test if the given expression is known to be non-negative.
  bool isKnownNonNegative(const SCEV *S);

  /// Test if the given expression is known to be non-positive.
  bool isKnownNonPositive(const SCEV *S);

  /// Test if the given expression is known to be non-zero.
  bool isKnownNonZero(const SCEV *S);

  /// Test if the given expression is known to satisfy the condition described
  /// by Pred, LHS, and RHS.
  bool isKnownPredicate(ICmpInst::Predicate Pred, const SCEV *LHS,
                        const SCEV *RHS);

  /// Return true if, for all loop invariant X, the predicate "LHS `Pred` X"
  /// is monotonically increasing or decreasing. In the former case set
  /// `Increasing` to true and in the latter case set `Increasing` to false.
  ///
  /// A predicate is said to be monotonically increasing if it may go from
  /// being false to being true as the loop iterates, but never the other way
  /// around. A predicate is said to be monotonically decreasing if it may go
  /// from being true to being false as the loop iterates, but never the other
  /// way around.
  bool isMonotonicPredicate(const SCEVAddRecExpr *LHS, ICmpInst::Predicate Pred,
                            bool &Increasing);

  /// Return true if the result of the predicate LHS `Pred` RHS is loop
  /// invariant with respect to L. Set InvariantPred, InvariantLHS and
  /// InvariantRHS so that InvariantLHS `InvariantPred` InvariantRHS is the
  /// loop invariant form of LHS `Pred` RHS.
  bool isLoopInvariantPredicate(ICmpInst::Predicate Pred, const SCEV *LHS,
                                const SCEV *RHS, const Loop *L,
                                ICmpInst::Predicate &InvariantPred,
                                const SCEV *&InvariantLHS,
                                const SCEV *&InvariantRHS);

  /// Simplify LHS and RHS in a comparison with predicate Pred. Return true
  /// iff any changes were made. If the operands are provably equal or
  /// unequal, LHS and RHS are set to the same value and Pred is set to either
  /// ICMP_EQ or ICMP_NE.
  bool SimplifyICmpOperands(ICmpInst::Predicate &Pred, const SCEV *&LHS,
                            const SCEV *&RHS, unsigned Depth = 0);

  /// Return the "disposition" of the given SCEV with respect to the given
  /// loop.
  LoopDisposition getLoopDisposition(const SCEV *S, const Loop *L);

  /// Return true if the value of the given SCEV is unchanging in the
  /// specified loop.
  bool isLoopInvariant(const SCEV *S, const Loop *L);

  /// Determine if the SCEV can be evaluated at loop's entry. It is true if it
  /// doesn't depend on a SCEVUnknown of an instruction which is dominated by
  /// the header of loop L.
  bool isAvailableAtLoopEntry(const SCEV *S, const Loop *L);

  /// Return true if the given SCEV changes value in a known way in the
  /// specified loop. This property being true implies that the value is
  /// variant in the loop AND that we can emit an expression to compute the
  /// value of the expression at any particular loop iteration.
  bool hasComputableLoopEvolution(const SCEV *S, const Loop *L);

  /// Return the "disposition" of the given SCEV with respect to the given
  /// block.
  BlockDisposition getBlockDisposition(const SCEV *S, const BasicBlock *BB);

  /// Return true if elements that make up the given SCEV dominate the
  /// specified basic block.
  bool dominates(const SCEV *S, const BasicBlock *BB);

  /// Return true if elements that make up the given SCEV properly dominate
  /// the specified basic block.
  bool properlyDominates(const SCEV *S, const BasicBlock *BB);

  /// Test whether the given SCEV has Op as a direct or indirect operand.
  bool hasOperand(const SCEV *S, const SCEV *Op) const;

  /// Return the size of an element read or written by Inst.
  const SCEV *getElementSize(Instruction *Inst);

  /// Compute the array dimensions Sizes from the set of Terms extracted from
  /// the memory access function of this SCEVAddRecExpr (second step of
  /// delinearization).
  void findArrayDimensions(SmallVectorImpl<const SCEV *> &Terms,
                           SmallVectorImpl<const SCEV *> &Sizes,
                           const SCEV *ElementSize);

  void print(raw_ostream &OS) const;
  void verify() const;
  bool invalidate(Function &F, const PreservedAnalyses &PA,
                  FunctionAnalysisManager::Invalidator &Inv);

  /// Collect parametric terms occurring in step expressions (first step of
  /// delinearization).
  void collectParametricTerms(const SCEV *Expr,
                              SmallVectorImpl<const SCEV *> &Terms);

  /// Return in Subscripts the access functions for each dimension in Sizes
  /// (third step of delinearization).
  void computeAccessFunctions(const SCEV *Expr,
                              SmallVectorImpl<const SCEV *> &Subscripts,
                              SmallVectorImpl<const SCEV *> &Sizes);

  /// Split this SCEVAddRecExpr into two vectors of SCEVs representing the
  /// subscripts and sizes of an array access.
  ///
  /// The delinearization is a 3 step process: the first two steps compute the
  /// sizes of each subscript and the third step computes the access functions
  /// for the delinearized array:
  ///
  /// 1. Find the terms in the step functions
  /// 2. Compute the array size
  /// 3. Compute the access function: divide the SCEV by the array size
  ///    starting with the innermost dimensions found in step 2. The Quotient
  ///    is the SCEV to be divided in the next step of the recursion. The
  ///    Remainder is the subscript of the innermost dimension. Loop over all
  ///    array dimensions computed in step 2.
  ///
  /// To compute a uniform array size for several memory accesses to the same
  /// object, one can collect in step 1 all the step terms for all the memory
  /// accesses, and compute in step 2 a unique array shape. This guarantees
  /// that the array shape will be the same across all memory accesses.
  ///
  /// FIXME: We could derive the result of steps 1 and 2 from a description of
  /// the array shape given in metadata.
  ///
  /// Example:
  ///
  /// A[][n][m]
  ///
  /// for i
  ///   for j
  ///     for k
  ///       A[j+k][2i][5i] =
  ///
  /// The initial SCEV:
  ///
  /// A[{{{0,+,2*m+5}_i, +, n*m}_j, +, n*m}_k]
  ///
  /// 1. Find the different terms in the step functions:
  /// -> [2*m, 5, n*m, n*m]
  ///
  /// 2. Compute the array size: sort and unique them
  /// -> [n*m, 2*m, 5]
  /// find the GCD of all the terms = 1
  /// divide by the GCD and erase constant terms
  /// -> [n*m, 2*m]
  /// GCD = m
  /// divide by GCD -> [n, 2]
  /// remove constant terms
  /// -> [n]
  /// size of the array is A[unknown][n][m]
  ///
  /// 3. Compute the access function
  /// a. Divide {{{0,+,2*m+5}_i, +, n*m}_j, +, n*m}_k by the innermost size m
  /// Quotient: {{{0,+,2}_i, +, n}_j, +, n}_k
  /// Remainder: {{{0,+,5}_i, +, 0}_j, +, 0}_k
  /// The remainder is the subscript of the innermost array dimension: [5i].
  ///
  /// b. Divide Quotient: {{{0,+,2}_i, +, n}_j, +, n}_k by next outer size n
  /// Quotient: {{{0,+,0}_i, +, 1}_j, +, 1}_k
  /// Remainder: {{{0,+,2}_i, +, 0}_j, +, 0}_k
  /// The Remainder is the subscript of the next array dimension: [2i].
  ///
  /// The subscript of the outermost dimension is the Quotient: [j+k].
  ///
  /// Overall, we have: A[][n][m], and the access function: A[j+k][2i][5i].
  void delinearize(const SCEV *Expr, SmallVectorImpl<const SCEV *> &Subscripts,
                   SmallVectorImpl<const SCEV *> &Sizes,
                   const SCEV *ElementSize);

  /// Return the DataLayout associated with the module this SCEV instance is
  /// operating on.
  const DataLayout &getDataLayout() const {
    return F.getParent()->getDataLayout();
  }

  const SCEVPredicate *getEqualPredicate(const SCEVUnknown *LHS,
                                         const SCEVConstant *RHS);

  const SCEVPredicate *
  getWrapPredicate(const SCEVAddRecExpr *AR,
                   SCEVWrapPredicate::IncrementWrapFlags AddedFlags);
  /// Re-writes the SCEV according to the Predicates in \p A.
  const SCEV *rewriteUsingPredicate(const SCEV *S, const Loop *L,
                                    SCEVUnionPredicate &A);

  /// Tries to convert the \p S expression to an AddRec expression,
  /// adding additional predicates to \p Preds as required.
  const SCEVAddRecExpr *convertSCEVToAddRecWithPredicates(
      const SCEV *S, const Loop *L,
      SmallPtrSetImpl<const SCEVPredicate *> &Preds);

private:
  /// Compute the backedge taken count knowing the interval difference, the
  /// stride and presence of the equality in the comparison.
  const SCEV *computeBECount(const SCEV *Delta, const SCEV *Stride,
                             bool Equality);

  /// Verify if a linear IV with positive stride can overflow when in a
  /// less-than comparison, knowing the invariant term of the comparison,
  /// the stride and the knowledge of NSW/NUW flags on the recurrence.
  bool doesIVOverflowOnLT(const SCEV *RHS, const SCEV *Stride, bool IsSigned,
                          bool NoWrap);

  /// Verify if a linear IV with negative stride can overflow when in a
  /// greater-than comparison, knowing the invariant term of the comparison,
  /// the stride and the knowledge of NSW/NUW flags on the recurrence.
  bool doesIVOverflowOnGT(const SCEV *RHS, const SCEV *Stride, bool IsSigned,
                          bool NoWrap);

  /// Get add expr already created or create a new one.
  const SCEV *getOrCreateAddExpr(SmallVectorImpl<const SCEV *> &Ops,
                                 SCEV::NoWrapFlags Flags);

private:
  FoldingSet<SCEV> UniqueSCEVs;
  FoldingSet<SCEVPredicate> UniquePreds;
  BumpPtrAllocator SCEVAllocator;

  /// The head of a linked list of all SCEVUnknown values that have been
  /// allocated. This is used by releaseMemory to locate them all and call
  /// their destructors.
  SCEVUnknown *FirstUnknown;
};

/// Analysis pass that exposes the \c ScalarEvolution for a function.
class ScalarEvolutionAnalysis
    : public AnalysisInfoMixin<ScalarEvolutionAnalysis> {
  friend AnalysisInfoMixin<ScalarEvolutionAnalysis>;
  static AnalysisKey Key;

public:
  typedef ScalarEvolution Result;

  ScalarEvolution run(Function &F, FunctionAnalysisManager &AM);
};

/// Printer pass for the \c ScalarEvolutionAnalysis results.
class ScalarEvolutionPrinterPass
    : public PassInfoMixin<ScalarEvolutionPrinterPass> {
  raw_ostream &OS;

public:
  explicit ScalarEvolutionPrinterPass(raw_ostream &OS) : OS(OS) {}

  PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
};

class ScalarEvolutionWrapperPass : public FunctionPass {
  std::unique_ptr<ScalarEvolution> SE;

public:
  static char ID;

  ScalarEvolutionWrapperPass();

  ScalarEvolution &getSE() { return *SE; }
  const ScalarEvolution &getSE() const { return *SE; }

  bool runOnFunction(Function &F) override;
  void releaseMemory() override;
  void getAnalysisUsage(AnalysisUsage &AU) const override;
  void print(raw_ostream &OS, const Module * = nullptr) const override;
  void verifyAnalysis() const override;
};

/// An interface layer with SCEV used to manage how we see SCEV expressions
/// for values in the context of existing predicates. We can add new
/// predicates, but we cannot remove them.
///
/// This layer has multiple purposes:
///   - provides a simple interface for SCEV versioning.
///   - guarantees that the order of transformations applied on a SCEV
///     expression for a single Value is consistent across two different
///     getSCEV calls. This means that, for example, once we've obtained
///     an AddRec expression for a certain value through expression
///     rewriting, we will continue to get an AddRec expression for that
///     Value.
///   - lowers the number of expression rewrites.
class PredicatedScalarEvolution {
public:
  PredicatedScalarEvolution(ScalarEvolution &SE, Loop &L);
  const SCEVUnionPredicate &getUnionPredicate() const;

  /// Returns the SCEV expression of V, in the context of the current SCEV
  /// predicate. The order of transformations applied on the expression of V
  /// returned by ScalarEvolution is guaranteed to be preserved, even when
  /// adding new predicates.
  const SCEV *getSCEV(Value *V);

  /// Get the (predicated) backedge count for the analyzed loop.
  const SCEV *getBackedgeTakenCount();

  /// Adds a new predicate.
  void addPredicate(const SCEVPredicate &Pred);

  /// Attempts to produce an AddRecExpr for V by adding additional SCEV
  /// predicates. If we can't transform the expression into an AddRecExpr we
  /// return nullptr and do not add additional SCEV predicates to the current
  /// context.
  const SCEVAddRecExpr *getAsAddRec(Value *V);

  /// Proves that V doesn't overflow by adding a SCEV predicate.
  void setNoOverflow(Value *V, SCEVWrapPredicate::IncrementWrapFlags Flags);

  /// Returns true if we've proved that V doesn't wrap by means of a SCEV
  /// predicate.
  bool hasNoOverflow(Value *V, SCEVWrapPredicate::IncrementWrapFlags Flags);

  /// Returns the ScalarEvolution analysis used.
  ScalarEvolution *getSE() const { return &SE; }

  /// We need to explicitly define the copy constructor because of FlagsMap.
  PredicatedScalarEvolution(const PredicatedScalarEvolution &);

  /// Print the SCEV mappings done by the Predicated Scalar Evolution.
  /// The printed text is indented by \p Depth.
  void print(raw_ostream &OS, unsigned Depth) const;

private:
  /// Increments the version number of the predicate. This needs to be called
  /// every time the SCEV predicate changes.
  void updateGeneration();

  /// Holds a SCEV and the version number of the SCEV predicate used to
  /// perform the rewrite of the expression.
  typedef std::pair<unsigned, const SCEV *> RewriteEntry;

  /// Maps a SCEV to the rewrite result of that SCEV at a certain version
  /// number. If this number doesn't match the current Generation, we will
  /// need to do a rewrite. To preserve the transformation order of previous
  /// rewrites, we will rewrite the previous result instead of the original
  /// SCEV.
  DenseMap<const SCEV *, RewriteEntry> RewriteMap;

  /// Records what NoWrap flags we've added to a Value *.
  ValueMap<Value *, SCEVWrapPredicate::IncrementWrapFlags> FlagsMap;

  /// The ScalarEvolution analysis.
  ScalarEvolution &SE;

  /// The analyzed Loop.
  const Loop &L;

  /// The SCEVPredicate that forms our context. We will rewrite all
  /// expressions assuming that this predicate is true.
  SCEVUnionPredicate Preds;

  /// Marks the version of the SCEV predicate used. When rewriting a SCEV
  /// expression we mark it with the version of the predicate. We use this to
  /// figure out if the predicate has changed from the last rewrite of the
  /// SCEV. If so, we need to perform a new rewrite.
  unsigned Generation;

  /// The backedge taken count.
  const SCEV *BackedgeCount;
};

} // end namespace llvm

#endif