Analysis Opportunities:
//===---------------------------------------------------------------------===//
In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:
{1,+,3,+,2}<loop>
Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as
(-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))
In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code.
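
As a sanity check on that claim: the chrec {1,+,3,+,2}<loop> satisfies
r_i = 1 + 3*i + 2*(i*(i-1)/2) = (i+1)^2, so after %n-1 iterations its value is
exactly %n * %n. The standalone C sketch below verifies that identity
numerically; it is not the code from quadradic-exit-value.ll, and the trip
count of n-1 iterations is an assumption made only for illustration.

  #include <assert.h>
  #include <stdint.h>

  int main(void) {
    for (int64_t n = 1; n <= 1000; ++n) {
      int64_t r = 1;    /* chrec start value                   */
      int64_t step = 3; /* first-order difference, grows by 2  */
      for (int64_t i = 1; i < n; ++i) { /* n-1 iterations (assumed) */
        r += step;
        step += 2;
      }
      assert(r == n * n); /* the simple closed form the text suggests */
    }
    return 0;
  }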
//===---------------------------------------------------------------------===//
In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll,
ScalarEvolution is forming this expression:
((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))
This could be folded to
(-1 * (trunc i64 undef to i32))
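
The fold is justified because trunc distributes over add and mul:
(trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) is
trunc i64 ((-1 * %arg5) + %arg5) to i32, i.e. 0, leaving only the undef term.
A minimal C sketch of the underlying modular-arithmetic identity follows;
the sample values are illustrative and not taken from lsr-delayed-fold.ll.

  #include <assert.h>
  #include <stdint.h>

  int main(void) {
    uint64_t samples[] = {0, 1, (uint64_t)-1, 42,
                          0x0123456789abcdefULL, 0x8000000000000000ULL};
    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i) {
      uint64_t x = samples[i];
      /* trunc(-x) + trunc(x) == 0 modulo 2^32, because truncation is a
         ring homomorphism: the negated and original terms cancel.      */
      uint32_t sum = (uint32_t)(0 - x) + (uint32_t)x;
      assert(sum == 0);
    }
    return 0;
  }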
//===---------------------------------------------------------------------===//