Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
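
The two forms do agree: a chrec {A,+,B,+,C} at iteration i evaluates to
A + B*i + C*(i*(i-1)/2), so {1,+,3,+,2} at iteration i is (i+1) * (i+1),
which equals %n * %n when the backedge is taken %n - 1 times (the folded
expression above suggests exactly that trip count, though the test does not
state it directly). A minimal standalone sketch (plain C++, not the LLVM
API) that iterates the recurrence and checks the closed form under that
assumption:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint64_t n = 1; n <= 1000; ++n) {
      uint64_t val = 1;   // {1,+,3,+,2}: initial value 1
      uint64_t step = 3;  // first difference starts at 3
      for (uint64_t i = 1; i < n; ++i) {  // n - 1 backedges, as assumed above
        val += step;      // advance the recurrence by one iteration
        step += 2;        // the second difference is the constant 2
      }
      assert(val == n * n);  // exit value matches the simple form (%n * %n)
    }
    return 0;
  }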

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

(-1 * (trunc i64 undef to i32))

since trunc distributes over add: (trunc i64 (-1 * %arg5) to i32) and
(trunc i64 %arg5 to i32) sum to (trunc i64 ((-1 * %arg5) + %arg5) to i32),
which is 0.

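A standalone sketch (plain C++, not the LLVM API; the sample values are
arbitrary) spot-checking that the first two operands always cancel after
truncation to i32:

  #include <cassert>
  #include <cstdint>

  int main() {
    const int64_t samples[] = {0, 1, -1, 123456789012345LL,
                               INT64_MIN, INT64_MAX};
    for (int64_t x : samples) {
      uint32_t negTrunc = (uint32_t)(-(uint64_t)x);  // trunc i64 (-1 * x) to i32
      uint32_t posTrunc = (uint32_t)(uint64_t)x;     // trunc i64 x to i32
      assert((uint32_t)(negTrunc + posTrunc) == 0u); // the pair sums to zero
    }
    return 0;
  }
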
//===---------------------------------------------------------------------===//