I'm not sure if this was intentional, but today
isGuaranteedToTransferExecutionToSuccessor returns true for readonly and
argmemonly calls that may throw. This commit changes the function to
not implicitly infer nounwind this way.
Even if we eventually specify readonly calls as not throwing,
isGuaranteedToTransferExecutionToSuccessor is not the best place to
infer that. We should instead teach FunctionAttrs or some other such
pass to tag readonly functions / calls as nounwind.
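For illustration, a minimal sketch of the kind of call this affects (the
function name is hypothetical):
; readonly but *not* nounwind: this call may throw, so it is no longer
; assumed to transfer execution to its successor.
declare void @readonly_may_throw() readonly

define void @example() {
call void @readonly_may_throw()
ret void
}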
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290794 91177308-0d34-0410-b5e6-96231b3b80d8
I don't think this hole is currently exposed, but I crashed regression tests for
jump-threading and loop-vectorize after I added calls to isKnownNonNullAt() in
InstSimplify as part of trying to solve PR28430:
https://llvm.org/bugs/show_bug.cgi?id=28430
That's because they call into value tracking with a context instruction, but no
other parts of the query structure filled in.
For more background, see the discussion in:
https://reviews.llvm.org/D27855
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290786 91177308-0d34-0410-b5e6-96231b3b80d8
After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and it also requires much
less code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289756 91177308-0d34-0410-b5e6-96231b3b80d8
There was an efficiency problem with how we processed @llvm.assume in
ValueTracking (and other places). The AssumptionCache tracked all of the
assumptions in a given function. In order to find assumptions relevant to
computing known bits, etc. we searched every assumption in the function. For
ValueTracking, that means that we did O(#assumes * #values) work in InstCombine
and other passes (with a constant factor that can be quite large because we'd
repeat this search at every level of recursion of the analysis).
Several of us discussed this situation at the last developers' meeting, and
this implements the discussed solution: make the values that an assume might
affect into operands of the assume itself. To avoid exposing this detail to
frontends and passes that need not worry about it, I've used the new
operand-bundle feature to add these extra call "operands" in a way that does
not affect the intrinsic's signature. I think this solution is relatively
clean. InstCombine adds these extra operands based on what ValueTracking, LVI,
etc. will need and then those passes need only search the users of the values
under consideration. This should fix the computational-complexity problem.
At this point, no passes depend on the AssumptionCache, and so I'll remove
that as a follow-up change.
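A sketch of what the new form looks like in IR (the "affected" bundle tag
follows this patch, but treat the details as illustrative):
%cmp = icmp sgt i32 %x, 10
call void @llvm.assume(i1 %cmp) [ "affected"(i32 %x) ]
; ValueTracking can now find this assume by walking the users of %x
; instead of scanning every assume in the function.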
Differential Revision: https://reviews.llvm.org/D27259
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289755 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Attaching !absolute_symbol to a global variable does two things:
1) Marks it as an absolute symbol reference.
2) Specifies the value range of that symbol's address.
Teach the X86 backend to allow absolute symbols to appear in place of
immediates by extending the relocImm and mov64imm32 matchers. Start using
relocImm in more places where it is legal.
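A sketch of the metadata (assuming the usual half-open [lo, hi) range
encoding; the constants are illustrative):
@abs_sym = external global i8, !absolute_symbol !0
!0 = !{i64 0, i64 256}   ; the symbol's address is known to lie in [0, 256)
The backend can then fold a reference to @abs_sym directly into an
immediate field.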
As previously proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/105800.html
Differential Revision: https://reviews.llvm.org/D25878
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289087 91177308-0d34-0410-b5e6-96231b3b80d8
Instead, expose whether the current type is an array or a struct; if an array,
what the upper bound is; and if a struct, the struct type itself. This is
in preparation for a later change which will make PointerType derive from
Type rather than SequentialType.
Differential Revision: https://reviews.llvm.org/D26594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288458 91177308-0d34-0410-b5e6-96231b3b80d8
Currently LLVM assumes that a pointer addrspacecasted to a different addr space is bitwise equivalent to a trunc or zext, which is not true. For example, on the amdgcn target, when a null pointer is addrspacecasted from addr space 4 to 0, its value changes from i64 0 to i32 -1.
This patch teaches LLVM not to derive the known bits of an addrspacecast instruction from its operand.
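For example (using the amdgcn behavior described above):
%p = addrspacecast i8 addrspace(4)* null to i8*
; the source pointer has all bits zero, but the result is the
; addrspace(0) null representation (-1), so copying the operand's
; known bits to %p would be wrong.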
Differential Revision: https://reviews.llvm.org/D26803
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287545 91177308-0d34-0410-b5e6-96231b3b80d8
The smallest tests that expose this are codegen tests (because SelectionDAGBuilder::visitSelect() uses matchSelectPattern
to create UMAX/UMIN nodes), but it's also possible to see the effects in IR alone with folds of min/max pairs.
If these were written as unsigned compares in IR, InstCombine canonicalizes the unsigned compares to signed compares.
That is, without this patch, running the optimizer pessimizes the codegen for this case:
define <4 x i32> @umax_vec(<4 x i32> %x) {
%cmp = icmp ugt <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
%sel = select <4 x i1> %cmp, <4 x i32> %x, <4 x i32> <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
ret <4 x i32> %sel
}
$ ./opt umax.ll -S | ./llc -o - -mattr=avx
vpmaxud LCPI0_0(%rip), %xmm0, %xmm0
$ ./opt -instcombine umax.ll -S | ./llc -o - -mattr=avx
vpxor %xmm1, %xmm1, %xmm1
vpcmpgtd %xmm0, %xmm1, %xmm1
vmovaps LCPI0_0(%rip), %xmm2 ## xmm2 = [2147483647,2147483647,2147483647,2147483647]
vblendvps %xmm1, %xmm0, %xmm2, %xmm0
Differential Revision: https://reviews.llvm.org/D26096
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286318 91177308-0d34-0410-b5e6-96231b3b80d8
InstCombine should always canonicalize patterns like the one shown in the comment
when visiting 'select' insts in adjustMinMax().
Scalars were already handled there, and vector splats are handled as of:
https://reviews.llvm.org/rL285732
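A sketch of the splat form of the pattern (constants illustrative):
%cmp = icmp sgt <2 x i32> %x, <i32 0, i32 0>
%sel = select <2 x i1> %cmp, <2 x i32> %x, <2 x i32> <i32 1, i32 1>
; (x > 0) ? x : 1 is smax(x, 1); adjusting the compare constant to match
; the select constant exposes the canonical max form.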
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285744 91177308-0d34-0410-b5e6-96231b3b80d8
Try harder to detect obfuscated min/max patterns: the initial pattern was added with D9352 / rL236202.
There was a bug fix for PR27137 at rL264996, but I think we can do better by folding the corresponding
smax pattern and commuted variants.
The codegen tests demonstrate the effect of ValueTracking on the backend via SelectionDAGBuilder. We
can't expose these differences minimally in IR because we don't have smin/smax intrinsics for IR.
Differential Revision: https://reviews.llvm.org/D26091
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285499 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
While walking defs of pointer operands we were assuming that the pointer
size would remain constant. This is not true, because addrspacecast
instructions may cast the pointer to an address space with a different
pointer width.
This partially reverts r282612, which was a more conservative solution
to this problem.
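For example (assuming a target such as amdgcn where addrspace(1) pointers
are 64 bits and addrspace(3) pointers are 32 bits):
%local = addrspacecast i8 addrspace(1)* %global to i8 addrspace(3)*
; walking from %local back to %global crosses from a 32-bit to a 64-bit
; pointer, so the offset width must be re-queried at each step.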
Reviewers: reames, sanjoy, apilipenko
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D24772
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283557 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The computeKnownBits and ComputeNumSignBits functions in ValueTracking can now do a simple look-through of ExtractElement.
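A minimal example of what the look-through enables:
%v = and <4 x i32> %x, <i32 15, i32 15, i32 15, i32 15>
%e = extractelement <4 x i32> %v, i32 1
; looking through the extractelement to the 'and', computeKnownBits can
; conclude that the upper 28 bits of %e are zero.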
Reviewers: majnemer, spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D24955
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283434 91177308-0d34-0410-b5e6-96231b3b80d8
Pointers in different addrspaces can have different sizes, so it's not valid to look through an addrspacecast when calculating the base and offset for a value.
This is similar to D13008.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D24729
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282612 91177308-0d34-0410-b5e6-96231b3b80d8
There is no benefit in looking through assumptions on UndefValue to
guess known bits. Return early to avoid walking their use-lists, and
assert that all instances of ConstantData are handled here for similar
reasons (UndefValue was the only integer/pointer holdout).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282337 91177308-0d34-0410-b5e6-96231b3b80d8
Check and return early for ConstantPointerNull and UndefValue
specifically in isKnownNonNullAt, and assert that ConstantData never
makes it to isKnownNonNullFromDominatingCondition.
This confirms that isKnownNonNullFromDominatingCondition never walks
through the use-list of an instance of ConstantData. Given that such
use-lists cross module boundaries, it never really made sense to do so,
and was potentially very expensive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282333 91177308-0d34-0410-b5e6-96231b3b80d8
computeKnownBits() already works for integer vectors, so allow vector types when calling that from InstCombine.
I don't think the change to use m_APInt in computeKnownBits is strictly necessary because we do check for
ConstantVector later, but it's more efficient to handle the splat case without needing to loop on vector elements.
This should work with InstSimplify, but doesn't yet, so I made that a FIXME comment on the test for PR24942:
https://llvm.org/bugs/show_bug.cgi?id=24942
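A sketch of the kind of vector fold this enables (constants illustrative):
%a = and <2 x i32> %x, <i32 7, i32 7>
%c = icmp ult <2 x i32> %a, <i32 8, i32 8>
; computeKnownBits matches the splat mask via m_APInt, proving the high
; 29 bits of each lane of %a are zero, so %c folds to all-true.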
Differential Revision: https://reviews.llvm.org/D24677
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281777 91177308-0d34-0410-b5e6-96231b3b80d8
Almost all of the methods here only analyze Values as opposed to
mutating them. Mark all of the easy ones as const.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278585 91177308-0d34-0410-b5e6-96231b3b80d8
This method had some duplicate code depending on whether we had a dom tree.
Refactor it to remove the duplication and clean up the control flow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278450 91177308-0d34-0410-b5e6-96231b3b80d8
There were 2 versions of this method. A public one which takes a
const Instruction* and a private implementation which takes a mutable
Value* and casts to an Instruction*.
There was no need for the two versions, as all callers pass a const
Instruction*, and no need for a mutable pointer, as we only do analysis here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278434 91177308-0d34-0410-b5e6-96231b3b80d8
Motivated by the work on the llvm.noalias intrinsic, teach BasicAA to look
through returned-argument functions when answering queries. This is essential
so that we don't lose all other AA information when supplementing with
llvm.noalias.
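For example (hypothetical function name; 'returned' marks a
returned argument):
declare i8* @passthrough(i8* returned)

define i8 @f(i8* %p) {
%q = call i8* @passthrough(i8* %p)
; BasicAA can look through the call: %q is %p, so aliasing facts about
; %p are not lost at the call boundary.
%v = load i8, i8* %q
ret i8 %v
}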
Differential Revision: http://reviews.llvm.org/D9383
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275035 91177308-0d34-0410-b5e6-96231b3b80d8
This actually uncovered a surprisingly large chain of ultimately unused
TLI args.
From what I can gather, this argument is a remnant of when
isKnownNonNull would look at the TLI directly.
The current approach seems to be that InferFunctionAttrs runs early in
the pipeline and uses TLI to annotate the TLI-dependent non-null
information as return attributes.
This also removes the dependence of functionattrs on TLI altogether.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274455 91177308-0d34-0410-b5e6-96231b3b80d8
This is breaking an optimization remark test in clang. I've identified a couple of fixes for that, but want to understand it better before I commit to anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274102 91177308-0d34-0410-b5e6-96231b3b80d8
If an operation for a recurrence is an addition with no signed wrap and both input sign bits are 0, then the result sign bit must also be 0; similarly for the negative case.
I found this deficiency while playing around with a loop in the x86 backend that contained a signed division that could be optimized into an unsigned division if we could prove both inputs were positive, one of them being the loop induction variable. With this patch we can perform the conversion for this case. One of the test cases here is a contrived variation of the loop I was looking at.
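A contrived sketch of the pattern (not the actual test case):
loop:
%iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
%iv.next = add nsw i32 %iv, 2
; the start value (0) and the step (2) both have a clear sign bit and the
; add has nsw, so the recurrence is known non-negative; the sdiv below by
; a known-positive divisor can therefore become a udiv.
%d = sdiv i32 %iv, 3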
Differential Revision: http://reviews.llvm.org/D21493
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274098 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to the computeKnownBits improvement in rL268479.
There's probably more we can do for vector logic instructions, but
this should let us see non-splat constant masking ops that can
become vector selects instead of and/andn/or sequences.
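For example, a non-splat masking op of the kind this helps expose:
%m = and <4 x i32> %x, <i32 -1, i32 0, i32 -1, i32 0>
; known bits now show that lanes 1 and 3 of %m are entirely zero, so the
; backend can emit a per-lane select/blend rather than and/andn/or.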
Differential Revision: http://reviews.llvm.org/D21610
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273459 91177308-0d34-0410-b5e6-96231b3b80d8
This change teaches llvm::isGuaranteedToTransferExecutionToSuccessor
that calls to @llvm.assume always return. Most other relevant
intrinsics should be covered by the "CS.onlyReadsMemory() ||
CS.onlyAccessesArgMemory()" check, but we were missing @llvm.assume
because we state that it clobbers memory.
Added an LICM test case, but this change is not specific to LICM.
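A sketch of the LICM situation this unblocks (simplified, not the actual
test case):
loop:
call void @llvm.assume(i1 %cond)
%v = load i32, i32* %p
; the assume is modeled as clobbering memory, but it is now known to
; transfer execution to its successor, so the load remains guaranteed
; to execute and can be hoisted.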
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272703 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Make isGuaranteedToExecute use the
isGuaranteedToTransferExecutionToSuccessor helper, and make that helper
a bit more accurate.
There's a potential performance impact here from assuming that arbitrary
calls might not return. This probably has little impact on loads and
stores to a pointer because most things alias analysis can reason about
are dereferenceable anyway. The other impacts, like less aggressive
hoisting of sdiv by a variable and less aggressive hoisting around
volatile memory operations, are unlikely to matter for real code.
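For example (hypothetical function name):
loop:
call void @opaque()
%v = sdiv i32 %a, %b
; @opaque is no longer assumed to return, so the sdiv after it is not
; guaranteed to execute and will not be hoisted as aggressively.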
This also impacts SCEV, which uses the same helper. It's a minor
improvement there because we can tell that, for example, memcpy always
returns normally. Strictly speaking, it's also introducing
a bug, but it's not any worse than everywhere else we assume readonly
functions terminate.
Fixes http://llvm.org/PR27857.
Reviewers: hfinkel, reames, chandlerc, sanjoy
Subscribers: broune, llvm-commits
Differential Revision: http://reviews.llvm.org/D21167
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272489 91177308-0d34-0410-b5e6-96231b3b80d8