commit 594e4a1dd3
consider:

  long long *f(long long *b, long long *e) {
    return b + (e - b);
  }

we would lower this to something like:

  define i64* @f(i64* %b, i64* %e) {
    %1 = ptrtoint i64* %e to i64
    %2 = ptrtoint i64* %b to i64
    %3 = sub i64 %1, %2
    %4 = ashr exact i64 %3, 3
    %5 = getelementptr inbounds i64* %b, i64 %4
    ret i64* %5
  }

This should fold away to just 'e'.

N.B. This adds m_SpecificInt as a convenient way to match against a
particular 64-bit integer when using LLVM's match interface.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@216439 91177308-0d34-0410-b5e6-96231b3b80d8
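
A minimal sketch of how such a fold can be expressed with the PatternMatch
interface follows. This is illustrative only, not the actual r216439 patch:
the helper name simplifyTrivialPtrExpr and its signature are hypothetical,
and a real fold would also have to check the `exact` and `inbounds` flags
and the pointer types. The matchers themselves (m_AShr, m_Sub, m_PtrToInt,
m_Specific, and the m_SpecificInt this commit adds) are real.

  #include "llvm/IR/PatternMatch.h"
  #include "llvm/Support/MathExtras.h"

  using namespace llvm;
  using namespace PatternMatch;

  // Given the index operand of "getelementptr inbounds i64* %b, i64 %idx",
  // recognize %idx == ashr exact (sub (ptrtoint %e), (ptrtoint %b)), 3
  // and return %e, i.e. fold b + (e - b) down to e.
  // (Hypothetical helper; flag and type checks elided for brevity.)
  static Value *simplifyTrivialPtrExpr(Value *Base, Value *Idx,
                                       uint64_t ElemSizeInBytes) {
    Value *End;
    if (match(Idx, m_AShr(m_Sub(m_PtrToInt(m_Value(End)),
                                m_PtrToInt(m_Specific(Base))),
                          m_SpecificInt(Log2_64(ElemSizeInBytes)))))
      return End; // the GEP computes exactly %e
    return nullptr;
  }
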
Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1
  + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code. (A standalone check of
the simple closed form appears after these notes.)

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution
is forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) +
   (-1 * (trunc i64 undef to i32)))

This could be folded to just:

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
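
For the first item, the simple form can be seen directly: the add recurrence
{1,+,3,+,2} has the closed form 1 + 3*i + 2*i*(i-1)/2 = (i+1)^2 at iteration
i, so after %n iterations the final value is %n * %n. The following
standalone check is illustrative only, not LLVM code, and assumes a trip
count of %n:

  #include <cassert>
  #include <cstdint>

  int main() {
    uint64_t Val = 1;  // {1,...}: value at iteration 0
    uint64_t Step = 3; // {,+,3,...}: first difference
    for (uint64_t I = 0; I < 1000; ++I) {
      assert(Val == (I + 1) * (I + 1)); // closed form 1 + 3*I + 2*I*(I-1)/2
      Val += Step;
      Step += 2; // {,,+,2}: constant second difference
    }
    return 0;
  }

For the second item, the fold works because truncation from i64 to i32 is
modular, so (trunc (-1 * x)) + (trunc x) always wraps to zero and only the
undef term survives. Again an illustrative standalone check, not LLVM code:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint64_t X : {0ull, 1ull, 0x123456789abcdefull, ~0ull}) {
      uint32_t NegTrunc = static_cast<uint32_t>(0 - X); // trunc i64 (-1 * %arg5) to i32
      uint32_t PosTrunc = static_cast<uint32_t>(X);     // trunc i64 %arg5 to i32
      assert(static_cast<uint32_t>(NegTrunc + PosTrunc) == 0);
    }
    return 0;
  }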