When checking whether we should return a constant, we create some temporary APInts to see if we know all bits. But the exact same computations are needed in several other places in the same code.
This patch moves them to named temporaries so we can reuse them.
Ideally we'd write directly to KnownZero/One, but we currently seem to only write those variables after all the simplification checks, and I didn't want to change that with this patch.
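As a sketch (variable names hypothetical; an AND-like case), the named
temporaries look something like:

  // Combine the operands' known bits once, under names that both the
  // constant check and the later code can reuse.
  APInt IKnownZero = RHSKnownZero | LHSKnownZero; // zero in either operand
  APInt IKnownOne = RHSKnownOne & LHSKnownOne;    // one in both operands
  // If every bit is known, the result is a constant.
  if ((IKnownZero | IKnownOne).isAllOnesValue())
    return Constant::getIntegerValue(VTy, IKnownOne);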
Differential Revision: https://reviews.llvm.org/D32094
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300376 91177308-0d34-0410-b5e6-96231b3b80d8
This used to be a Hexagon-specific treatment, but it is no longer needed
since Hexagon switched to subregister liveness tracking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300369 91177308-0d34-0410-b5e6-96231b3b80d8
This avoids the confusing 'CS.paramHasAttr(ArgNo + 1, Foo)' pattern.
Previously we were testing return value attributes with index 0, so I
introduced hasReturnAttr() for that use case.
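For illustration only (attribute kind arbitrary; exact signatures may
differ), the call sites change roughly like this:

  // Before: argument attributes were queried with a confusing 1-based index.
  bool HadAttr = CS.paramHasAttr(ArgNo + 1, Attribute::NonNull);
  // After: ArgNo is passed directly, and return-value attributes
  // (previously index 0) use the new hasReturnAttr().
  bool HasAttr = CS.paramHasAttr(ArgNo, Attribute::NonNull);
  bool RetHasAttr = CS.hasReturnAttr(Attribute::NonNull);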
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300367 91177308-0d34-0410-b5e6-96231b3b80d8
...when C1 differs from C2 by one bit and C1 <u C2:
http://rise4fun.com/Alive/Vuo
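For example (values chosen for illustration only):

  unsigned C1 = 0x4; // 0b100
  unsigned C2 = 0x6; // 0b110: C1 ^ C2 == 0b010 (exactly one differing bit),
                     // and C1 <u C2 since 4 < 6 as unsigned.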
And move related folds to a helper function. This reduces code duplication and
will make it easier to remove the scalar-only restriction as a follow-up step.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300364 91177308-0d34-0410-b5e6-96231b3b80d8
We currently only support folding a subtract into a select but not a PHI. This fixes that.
I had to fix an assumption in FoldOpIntoPhi that the PHI node was always operand 0. Now we pass the PHI in explicitly, as we do for FoldOpIntoSelect. But we still require some dancing to find the Constant when we create the BinOp or ConstantExpr. The code is similar to what we do for selects.
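A rough sketch of that dance (hypothetical helper, not the exact
InstCombine code):

  // The PHI may be either operand of the binop, so the Constant we fold
  // with is whichever operand is not the PHI.
  static Constant *getFoldedConstant(BinaryOperator &BO, PHINode *PN) {
    Value *Other =
        BO.getOperand(0) == PN ? BO.getOperand(1) : BO.getOperand(0);
    return dyn_cast<Constant>(Other);
  }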
Since I touched all call sites, this also renames FoldOpIntoPhi to foldOpIntoPhi to match coding standards.
Differential Revision: https://reviews.llvm.org/D31686
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300363 91177308-0d34-0410-b5e6-96231b3b80d8
One of the ValueTracking unittests creates a named ArrayRef initialized by a std::initializer_list. The underlying array for a std::initializer_list is only guaranteed to live as long as the initializer_list object itself, so this can leave the ArrayRef pointing at an array that no longer exists.
This fixes that by just creating an explicit array instead of an ArrayRef.
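In miniature (values hypothetical), the bug and the fix look like:

  // Bug: the initializer_list's backing array dies at the end of this
  // statement, leaving AR pointing at dead storage.
  ArrayRef<int> AR = {1, 2, 3};
  // Fix: name the array so its lifetime covers every use of the ArrayRef.
  int Elts[] = {1, 2, 3};
  ArrayRef<int> AR2(Elts);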
Differential Revision: https://reviews.llvm.org/D32089
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300354 91177308-0d34-0410-b5e6-96231b3b80d8
Currently this code always makes 2 or 3 calls to tryFactorization regardless of whether the LHS/RHS are BinaryOperators. We make 3 calls when both operands are BinaryOperators with the same opcode, or, surprisingly, when neither is a BinaryOperator. This is because getBinOpsForFactorization returns Instruction::BinaryOpsEnd when the operand is not a BinaryOperator, so if neither LHS nor RHS is a BinaryOperator, both have an Opcode of Instruction::BinaryOpsEnd. When this happens we rely on tryFactorization to early out because A/B/C/D are null. Similar behavior occurs for the other calls: we rely on getBinOpsForFactorization having made A/B or C/D null to get tryFactorization to early out.
We also rely on these null checks to detect a null result from getIdentityValue and early out.
This patch refactors the code to pull these checks up into SimplifyUsingDistributiveLaws so we don't rely on BinaryOpsEnd as a sentinel or on the A/B/C/D null behavior. I think this makes the code easier to reason about. It should also give a tiny performance improvement when the LHS or RHS isn't a BinaryOperator.
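The spirit of the hoisted checks, as a hypothetical helper (the real
logic also handles identity values):

  // Only attempt factorization when both operands really are
  // BinaryOperators, instead of relying on BinaryOpsEnd sentinels and
  // null A/B/C/D operands to bail out later.
  static bool shouldTryFactorization(Value *LHS, Value *RHS) {
    auto *LHSBO = dyn_cast<BinaryOperator>(LHS);
    auto *RHSBO = dyn_cast<BinaryOperator>(RHS);
    return LHSBO && RHSBO;
  }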
Differential Revision: https://reviews.llvm.org/D31913
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300353 91177308-0d34-0410-b5e6-96231b3b80d8
Removes all of the boilerplate, cache management etc. from
ScalarEvolutionNormalization, and keeps only the interesting bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300349 91177308-0d34-0410-b5e6-96231b3b80d8
This change really saves just one FoldingSet lookup, but makes
SCEVRewriteVisitor "feature compatible" with the handwritten logic in
ScalarEvolutionNormalization, so that I can change
ScalarEvolutionNormalization to use SCEVRewriteVisitor in a next step.
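The handwritten logic caches rewritten SCEVs, so SCEVRewriteVisitor gets
the same memoization, roughly (member name hypothetical):

  // Return the cached rewrite if this SCEV was already visited; otherwise
  // rewrite it once and remember the result.
  const SCEV *visit(const SCEV *S) {
    auto It = RewriteResults.find(S);
    if (It != RewriteResults.end())
      return It->second;
    const SCEV *Result = SCEVVisitor<SC, const SCEV *>::visit(S);
    RewriteResults.insert({S, Result});
    return Result;
  }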
This is a non-functional change; it _may_ improve performance in some
pathological cases, though that's unlikely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300348 91177308-0d34-0410-b5e6-96231b3b80d8
It won't compile after the recent changes I've made, and I think
keeping it in provides very little value.
Instead I've added (in an earlier commit) a C++ unit test to check the
Denormalize(Normalize(X)) == X property for specific instances of X,
which is what the assert was trying to do anyway.
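In spirit, the test checks something like (function names hypothetical;
SE, X, and Loops as set up by the test fixture):

  // Round-trip property: denormalizing a normalized SCEV should give back
  // the original expression for these specific inputs.
  const SCEV *S = SE.getSCEV(X);
  const SCEV *N = normalizeForPostIncUse(S, Loops, SE);
  EXPECT_EQ(S, denormalizeForPostIncUse(N, Loops, SE));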
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300339 91177308-0d34-0410-b5e6-96231b3b80d8
The PostIncTransform class was not pulling its weight, so delete it
and use free functions instead.
This also makes the use of `function_ref` more idiomatic. We were
storing an instance of function_ref in the PostIncTransform class
before, which was fine in that specific case, but the usage after this
change is more obviously okay.
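The general idiom (generic example, not the SCEV code): function_ref is
a non-owning view of a callable, so taking it as a parameter for the
duration of a call is always safe, while storing it requires the callee
to outlive the object.

  // Safe: F only needs to live for the duration of this call.
  static int apply(int X, llvm::function_ref<int(int)> F) { return F(X); }
  // Usage: apply(2, [](int V) { return V * V; }) == 4.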
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300338 91177308-0d34-0410-b5e6-96231b3b80d8
Looks like earlier I was relying on #include ordering in files that
used ScalarEvolutionNormalization.h.
Found thanks to the selfhost modules buildbot!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300336 91177308-0d34-0410-b5e6-96231b3b80d8
A non-zero lane mask on a register with no subregisters means that the
whole register is live-in; it is equivalent to a full mask.
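Conceptually (hypothetical helper):

  // With no subregisters there are no partial lanes to distinguish, so
  // any non-zero mask carries the same meaning as the full mask.
  static LaneBitmask normalizeMask(LaneBitmask Mask, bool HasSubRegs) {
    if (!HasSubRegs && Mask.any())
      return LaneBitmask::getAll();
    return Mask;
  }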
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300335 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of having two ways to check if an add recurrence needs to be
normalized, just pass in one predicate to decide that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300333 91177308-0d34-0410-b5e6-96231b3b80d8
It is cleaner to have a callback-based system where the logic of
whether an add recurrence is normalized or not lives on IVUsers.
This is one step in a multi-step cleanup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300330 91177308-0d34-0410-b5e6-96231b3b80d8
MOVNTDQA non-temporal aligned vector loads can be correctly represented using generic builtin loads, allowing us to remove the existing x86 intrinsics.
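Roughly, the generic form is an ordinary aligned load tagged with
!nontemporal metadata, e.g. via IRBuilder (sketch, not the patch's code):

  // Create an aligned vector load and mark it non-temporal with the
  // standard '!nontemporal !{i32 1}' metadata.
  LoadInst *LI = Builder.CreateAlignedLoad(Ptr, 16);
  LI->setMetadata(LLVMContext::MD_nontemporal,
                  MDNode::get(Ctx, ConstantAsMetadata::get(Builder.getInt32(1))));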
Clang companion patch: D31766.
Differential Revision: https://reviews.llvm.org/D31767
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300325 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is part of D28975's breakdown - no change in output intended.
LV's code currently assumes the vectorized loop is a single basic block up
until predicateInstructions() is called. This patch removes two manifestations
of this assumption (loop phi incoming values, dominator tree update) by
replacing the use of vectorLoopBody with the vectorized loop's latch/header.
Differential Revision: https://reviews.llvm.org/D32040
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300310 91177308-0d34-0410-b5e6-96231b3b80d8
The APInt was created from an 'unsigned' and we just wanted to know how many bits were needed to represent the value. We can use Log2_32 from MathExtras.h to get that directly.
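For reference (sketch; zero is special-cased because Log2_32(0) is not
meaningful here):

  #include "llvm/Support/MathExtras.h"

  // Bits needed to represent Value; matches APInt(32, Value).getActiveBits().
  static unsigned bitsNeeded(unsigned Value) {
    return Value == 0 ? 0 : llvm::Log2_32(Value) + 1;
  }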
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300309 91177308-0d34-0410-b5e6-96231b3b80d8
Start using it in LLD to avoid needing to read bitcode again just to get the
target triple, and in llvm-lto2 to avoid printing symbol table information
that is inappropriate for the target.
Differential Revision: https://reviews.llvm.org/D32038
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300300 91177308-0d34-0410-b5e6-96231b3b80d8