
parts of the AA interface out of the base class of every single AA result object.

Because this logic reformulates the query in terms of some other aspect of the API, it would easily cause O(n^2) query patterns in alias analysis. These could in turn be magnified further based on the number of call arguments, and then further based on the number of AA queries made for a particular call. This ended up causing problems for Rust that were noticeable enough to get a bug report (PR26564), and probably hit other places as well.

When originally re-working the AA infrastructure, the desire was to regularize the pattern of refinement without losing any generality. While I think it was successful, that is clearly proving to be too costly. And the cost is needless: we gain no actual improvement from this generality of letting a direct query to TBAA re-use some other alias analysis's refinement logic for one of the other APIs, or some such. In short, this is entirely wasted work.

To the extent possible, delegation to other API surfaces should be done at the aggregation layer so that we can avoid re-walking the aggregation. In fact, this significantly simplifies the logic, as we no longer need to smuggle the aggregation layer into each alias analysis (or the TargetLibraryInfo into each alias analysis just so we can form argument memory locations!).

However, we also have some delegation logic inside of BasicAA, and some of it even makes sense. When the delegation logic bakes in specific knowledge of the aliasing properties of the LLVM IR, as opposed to simply reformulating the query to utilize a different alias analysis interface entry point, it makes a lot of sense to restrict that logic to a different layer such as BasicAA.

One aspect of the delegation that was in every AA base class is that, when we don't have operand bundles, we re-use function AA results as a fallback for callsite alias results. This relies on the IR properties of calls and functions w.r.t. aliasing, and so seems a better fit for BasicAA. I've lifted the logic up to that point, where it seems to be a natural fit. This still does a bit of redundant work (we query function attributes twice, once via the callsite and once via the function AA query), but it is *exactly* twice here, no more.

The end result is that all of the delegation logic is hoisted out of the base class and into either the aggregation layer, when it is a pure retargeting to a different API surface, or into BasicAA, when it relies on the IR's aliasing properties. This should fix the quadratic query pattern reported in PR26564, although I don't have a stand-alone test case to reproduce it. It also seems like general goodness: the numerous AAs that don't need target library info no longer carry it around and depend on it. I think I can even rip out the general access to the aggregation layer and only expose it in BasicAA, as that is the only place where we re-query in that manner.

However, this is a non-trivial change to the AA infrastructure, so I want to get some additional eyes on it before it lands. Sadly, it can't wait long, because we should really cherry-pick this into 3.8 if we're going to go this route.

Differential Revision: http://reviews.llvm.org/D17329

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262490 91177308-0d34-0410-b5e6-96231b3b80d8
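To make the query pattern concrete, here is a minimal, self-contained C++ sketch of the two delegation shapes contrasted above. Every name in it is invented for illustration; none of these are LLVM's actual classes or entry points.

#include <vector>

struct Aggregation;

struct Result {
  Aggregation *Agg = nullptr;
  // One result's own answer: constant work per query.
  bool aliasSelf(int Query) const { return Query != 0; }
  // The removed pattern: the base-class fallback reformulates the query and
  // re-enters the whole aggregation from inside a single result.
  bool aliasViaBase(int Query) const;
};

struct Aggregation {
  std::vector<Result> Results;

  // Delegation at the aggregation layer: each result is consulted exactly
  // once per top-level query, so the walk is linear in the number of results.
  bool alias(int Query) const {
    for (const Result &R : Results)
      if (!R.aliasSelf(Query))
        return false;
    return true;
  }

  // The old shape: each result's fallback re-walks the aggregation, so one
  // top-level query does O(n^2) work, before the per-argument and per-call
  // multipliers described above.
  bool aliasQuadratic(int Query) const {
    for (const Result &R : Results)
      if (!R.aliasViaBase(Query))
        return false;
    return true;
  }
};

bool Result::aliasViaBase(int Query) const { return Agg->alias(Query); }

The change described above moves everything onto the first shape.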
142 lines · 5.6 KiB · C++
//===- ScalarEvolutionAliasAnalysis.cpp - SCEV-based Alias Analysis -------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines the ScalarEvolutionAliasAnalysis pass, which implements a
// simple alias analysis implemented in terms of ScalarEvolution queries.
//
// This differs from traditional loop dependence analysis in that it tests
// for dependencies within a single iteration of a loop, rather than
// dependencies between different iterations.
//
// ScalarEvolution has a more complete understanding of pointer arithmetic
// than BasicAliasAnalysis' collection of ad-hoc analyses.
//
//===----------------------------------------------------------------------===//

#include "llvm/Analysis/ScalarEvolutionAliasAnalysis.h"
using namespace llvm;
AliasResult SCEVAAResult::alias(const MemoryLocation &LocA,
                                const MemoryLocation &LocB) {
  // If either of the memory references is empty, it doesn't matter what the
  // pointer values are. This allows the code below to ignore this special
  // case.
  if (LocA.Size == 0 || LocB.Size == 0)
    return NoAlias;

  // This is SCEVAAResult. Get the SCEVs!
  const SCEV *AS = SE.getSCEV(const_cast<Value *>(LocA.Ptr));
  const SCEV *BS = SE.getSCEV(const_cast<Value *>(LocB.Ptr));

  // If they evaluate to the same expression, it's a MustAlias.
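  // (ScalarEvolution uniques SCEV objects, so pointer equality of AS and BS
  // implies the two addresses are the same symbolic expression.)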
  if (AS == BS)
    return MustAlias;

  // If something is known about the difference between the two addresses,
  // see if it's enough to prove a NoAlias.
  if (SE.getEffectiveSCEVType(AS->getType()) ==
      SE.getEffectiveSCEVType(BS->getType())) {
    unsigned BitWidth = SE.getTypeSizeInBits(AS->getType());
    APInt ASizeInt(BitWidth, LocA.Size);
    APInt BSizeInt(BitWidth, LocB.Size);

    // Compute the difference between the two pointers.
    const SCEV *BA = SE.getMinusSCEV(BS, AS);

    // Test whether the difference is known to be great enough that memory of
    // the given sizes don't overlap. This assumes that ASizeInt and BSizeInt
    // are non-zero, which is special-cased above.
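    // Concretely, with BitWidth == 8, ASizeInt == BSizeInt == 16, and
    // BA = BS - AS known to lie in [16, 240]: A's bytes [AS, AS+16) stop at
    // or before BS (16 <= 16), and B's bytes [BS, BS+16) stop at or before
    // AS modulo 2^8 (-16 == 240 >= 240), so the accesses cannot overlap.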
    if (ASizeInt.ule(SE.getUnsignedRange(BA).getUnsignedMin()) &&
        (-BSizeInt).uge(SE.getUnsignedRange(BA).getUnsignedMax()))
      return NoAlias;

    // Folding the subtraction while preserving range information can be
    // tricky (because of INT_MIN, etc.); if the prior test failed, swap AS
    // and BS and try again to see if things fold better that way.

    // Compute the difference between the two pointers.
    const SCEV *AB = SE.getMinusSCEV(AS, BS);

    // Test whether the difference is known to be great enough that memory of
    // the given sizes don't overlap. This assumes that ASizeInt and BSizeInt
    // are non-zero, which is special-cased above.
    if (BSizeInt.ule(SE.getUnsignedRange(AB).getUnsignedMin()) &&
        (-ASizeInt).uge(SE.getUnsignedRange(AB).getUnsignedMax()))
      return NoAlias;
  }

  // If ScalarEvolution can find an underlying object, form a new query.
  // The correctness of this depends on ScalarEvolution not recognizing
  // inttoptr and ptrtoint operators.
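  // (Were SCEV to look through those casts, a pointer reconstructed from
  // integer arithmetic could be based on a different object than the one
  // GetBaseValue reports, making the requery below unsound.)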
  Value *AO = GetBaseValue(AS);
  Value *BO = GetBaseValue(BS);
  if ((AO && AO != LocA.Ptr) || (BO && BO != LocB.Ptr))
    if (alias(MemoryLocation(AO ? AO : LocA.Ptr,
                             AO ? +MemoryLocation::UnknownSize : LocA.Size,
                             AO ? AAMDNodes() : LocA.AATags),
              MemoryLocation(BO ? BO : LocB.Ptr,
                             BO ? +MemoryLocation::UnknownSize : LocB.Size,
                             BO ? AAMDNodes() : LocB.AATags)) == NoAlias)
      return NoAlias;

  // Forward the query to the next analysis.
  return AAResultBase::alias(LocA, LocB);
}

/// Given an expression, try to find a base value.
///
/// Returns null if none was found.
Value *SCEVAAResult::GetBaseValue(const SCEV *S) {
  if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
    // In an addrec, assume that the base will be in the start, rather
    // than the step.
    return GetBaseValue(AR->getStart());
  } else if (const SCEVAddExpr *A = dyn_cast<SCEVAddExpr>(S)) {
    // If there's a pointer operand, it'll be sorted at the end of the list.
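    // (SCEV add operands are kept in a canonical order with constants first,
    // so in an expression like (4 + %p) the pointer operand %p ends up last.)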
    const SCEV *Last = A->getOperand(A->getNumOperands() - 1);
    if (Last->getType()->isPointerTy())
      return GetBaseValue(Last);
  } else if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(S)) {
    // This is a leaf node.
    return U->getValue();
  }
  // No identified object found.
  return nullptr;
}

SCEVAAResult SCEVAA::run(Function &F, AnalysisManager<Function> *AM) {
  return SCEVAAResult(AM->getResult<ScalarEvolutionAnalysis>(F));
}

char SCEVAAWrapperPass::ID = 0;
INITIALIZE_PASS_BEGIN(SCEVAAWrapperPass, "scev-aa",
                      "ScalarEvolution-based Alias Analysis", false, true)
INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
INITIALIZE_PASS_END(SCEVAAWrapperPass, "scev-aa",
                    "ScalarEvolution-based Alias Analysis", false, true)

FunctionPass *llvm::createSCEVAAWrapperPass() {
  return new SCEVAAWrapperPass();
}

SCEVAAWrapperPass::SCEVAAWrapperPass() : FunctionPass(ID) {
  initializeSCEVAAWrapperPassPass(*PassRegistry::getPassRegistry());
}

bool SCEVAAWrapperPass::runOnFunction(Function &F) {
  Result.reset(
      new SCEVAAResult(getAnalysis<ScalarEvolutionWrapperPass>().getSE()));
  return false;
}

void SCEVAAWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesAll();
  AU.addRequired<ScalarEvolutionWrapperPass>();
}