This is one step towards solving PR24766:
https://llvm.org/bugs/show_bug.cgi?id=24766
We were not producing the same IR for these two C functions because the store
to the temp bool causes extra zexts:
#include <stdbool.h>

bool switchy(char x1, char x2, char condition) {
  bool conditionMet = false;
  switch (condition) {
  case 0: conditionMet = (x1 == x2); break;
  case 1: conditionMet = (x1 <= x2); break;
  }
  return conditionMet;
}

bool switchy2(char x1, char x2, char condition) {
  switch (condition) {
  case 0: return (x1 == x2);
  case 1: return (x1 <= x2);
  }
  return false;
}
As noted in the code comments, this test case manages to avoid the more general existing
phi optimizations, which only apply when there are exactly 2 phi inputs or when there are
no constant phi args mixed in with the cast ops. It seems like a corner case, but if we
don't catch it, then I don't think we can get SimplifyCFG to further optimize towards the
canonical form for this function shown in the bug report.
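For reference, a rough sketch of the phi shape in question, assuming IR along the lines of
what clang emits for switchy (block names, value names, and types here are illustrative,
not copied from the test case):

  sw.bb:
    %cmp0 = icmp eq i8 %x1, %x2
    %z0 = zext i1 %cmp0 to i8
    br label %return
  sw.bb1:
    %cmp1 = icmp sle i8 %x1, %x2
    %z1 = zext i1 %cmp1 to i8
    br label %return
  return:
    ; three inputs: two casts plus a constant, so the existing 2-input and
    ; all-cast phi folds don't fire
    %ret = phi i8 [ 0, %entry ], [ %z0, %sw.bb ], [ %z1, %sw.bb1 ]

With this patch the casts can be folded into the phi, so the phi is rewritten over the
narrow i1 values ([ false, %entry ], [ %cmp0, %sw.bb ], [ %cmp1, %sw.bb1 ]) and at most
one cast of the phi result remains.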
Differential Revision: http://reviews.llvm.org/D12866
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248689 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Factor the code that rewrites invokes to calls and rewrites WinEH
terminators to their "unwind to caller" equivalents into a helper in
Utils/Local, and use it in the three places I'm aware of that need to do
this.
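For illustration only (this shows the IR effect, not the helper's actual name or
signature), rewriting an invoke to a call looks like:

  ; before
  invoke void @f() to label %cont unwind label %lpad
  ; after: the unwind edge is dropped and the invoke becomes a plain call
  call void @f()
  br label %cont
  ; the WinEH case analogously rewrites an EH pad's terminator to its
  ; "unwind to caller" form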
Reviewers: andrew.w.kaylor, majnemer, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D13152
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248677 91177308-0d34-0410-b5e6-96231b3b80d8
This is a fix for PR22723:
https://llvm.org/bugs/show_bug.cgi?id=22723
My first attempt at this was to change what I thought was the root problem:
xor (zext i1 X to i32), 1 --> zext (xor i1 X, true) to i32
...but we create the opposite pattern in InstCombiner::visitZExt(), so infinite loop!
My next idea was to fix the matchIfNot() implementation in PatternMatch, but that would
mean potentially returning a different size for the match than what was input. I think
this would require all users of m_Not to check the size of the returned match, so I
abandoned that idea.
I settled on just fixing the exact case presented in the PR. This patch does allow the
2 functions in PR22723 to compile identically (x86):
bool test(bool x, bool y) { return !x | !y; }
bool test(bool x, bool y) { return !x || !y; }
...
andb %sil, %dil
xorb $1, %dil
movb %dil, %al
retq
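Based on that assembly, a sketch of the canonical IR both functions presumably reach now
(an assumption, not output copied from the patch):

  define zeroext i1 @test(i1 zeroext %x, i1 zeroext %y) {
    %and = and i1 %x, %y
    %not = xor i1 %and, true
    ret i1 %not
  }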
Differential Revision: http://reviews.llvm.org/D12705
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248634 91177308-0d34-0410-b5e6-96231b3b80d8
Loop unswitching produces conditional branches with constant conditions,
and it's beneficial for later passes to clean these up with simplify-cfg.
We do this after the second invocation of loop-unswitch, but not after
the first one. Not doing so might cause problems for passes like
LoopUnroll, whose estimate of the loop body size would be less accurate.
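For example, unswitching typically leaves branches of this shape behind, which
simplify-cfg trivially folds (block names are made up):

  ; left by loop-unswitch
  br i1 true, label %loop.body.us, label %loop.body
  ; after simplify-cfg
  br label %loop.body.us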
Reviewers: hfinkel
Differential Revision: http://reviews.llvm.org/D13064
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248460 91177308-0d34-0410-b5e6-96231b3b80d8
In -fprofile-instr-generate compilation, to remove redundant profile
variables for COMDAT functions, these variables are placed in the same
COMDAT group as their associated function. This way, when the COMDAT
function is not picked by the linker, its profile variables are also not
emitted in the final binary. However, this can cause warnings when
linking a mix of objects built with and without -fprofile-instr-generate.
This patch puts the profile variables for COMDAT functions into their own
COMDAT groups to avoid the problem.
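Roughly, the intended IR shape is as follows (names and attributes are illustrative; the
real instrumentation output carries sections and more metadata):

  $foo = comdat any
  ; before: the counter shares the function's comdat group
  @__profc_foo = linkonce_odr global [1 x i64] zeroinitializer, comdat($foo)
  ; after: the counter gets its own comdat group
  $__profc_foo = comdat any
  @__profc_foo = linkonce_odr global [1 x i64] zeroinitializer, comdat($__profc_foo)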
Patch by xur.
Differential Revision: http://reviews.llvm.org/D12248
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248440 91177308-0d34-0410-b5e6-96231b3b80d8
This patch changes the order of GEPs generated by the GEP-splitting
pass: when one of the GEPs has a constant offset and the base is loop
invariant, we now generate the GEP with the constant first when that is
beneficial, to expose more opportunities for LICM.
If GEP splitting originally generated the following:
do.body.i:
  %idxprom.i = sext i32 %shr.i to i64
  %2 = bitcast %typeD* %s to i8*
  %3 = shl i64 %idxprom.i, 2
  %uglygep = getelementptr i8, i8* %2, i64 %3
  %uglygep7 = getelementptr i8, i8* %uglygep, i64 1032
  ...
Now it generates:
do.body.i:
  %idxprom.i = sext i32 %shr.i to i64
  %2 = bitcast %typeD* %s to i8*
  %3 = shl i64 %idxprom.i, 2
  %uglygep = getelementptr i8, i8* %2, i64 1032
  %uglygep7 = getelementptr i8, i8* %uglygep, i64 %3
  ...
For non-loop cases, the original way of generating GEPs seems to
expose more CSE opportunities, so we keep that logic unchanged there and
limit this change to the specific case we are interested in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248420 91177308-0d34-0410-b5e6-96231b3b80d8
Add two new ways of accessing the unsafe stack pointer:
* At a fixed offset from the thread TLS base. This is very similar to
StackProtector cookies, but we plan to extend it to other backends
(ARM in particular) soon. Bionic-side implementation here:
https://android-review.googlesource.com/170988.
* Via a function call, as a fallback for platforms that provide
neither a fixed TLS slot nor a reasonable TLS implementation (i.e.,
not emutls).
This is a re-commit of a change in r248357 that was reverted in
r248358.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248405 91177308-0d34-0410-b5e6-96231b3b80d8
This changes the behavior of AddAlignmentAssumptions to match its
comment, i.e., prove the asserted alignment in the context of the caller,
not the callee.
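For reference, such an alignment assumption has roughly this IR shape (a sketch for a
pointer asserted to be 32-byte aligned; value names invented):

  %ptrint = ptrtoint i8* %ptr to i64
  %maskedptr = and i64 %ptrint, 31
  %maskcond = icmp eq i64 %maskedptr, 0
  call void @llvm.assume(i1 %maskcond)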
Thanks to Mehdi Amini for seeing the issue here! Thanks also to Artur
Pilipenko, who had a fix for the issue as well.
rdar://22521387
Differential Revision: http://reviews.llvm.org/D12997
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248390 91177308-0d34-0410-b5e6-96231b3b80d8
Invoking a function which returns an aggregate can sometimes be
transformed to return a scalar value. However, this means that we need
to create one or more insertvalue instructions to recreate the correct
aggregate type. We achieved this by inserting an insertvalue
instruction at the invoke's normal successor. However, this is not
feasible if the normal successor uses the invoke's return value inside a
PHI node.
Instead, split the edge between the invoke and its normal successor and
create the insertvalue instruction in the new basic block. The new
basic block's successor is the old normal successor, which leaves us
with well-behaved IR.
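A rough sketch of the resulting IR (names and types invented for illustration):

  %r = invoke i32 @callee() to label %cont.split unwind label %lpad

  cont.split:                                  ; new block on the normal edge
    %agg = insertvalue { i32, i32 } undef, i32 %r, 0
    br label %cont

  cont:                                        ; old normal successor; its PHI stays valid
    %p = phi { i32, i32 } [ %agg, %cont.split ], [ %q, %other.pred ]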
This fixes PR24906.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248387 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes the x86.sse41.pmovsx* intrinsics, provides a suitable upgrade path and updates relevant tests to sign extend a subvector instead.
LLVM counterpart to D12835
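For example, a pmovsxwd call is now expressed as a subvector shuffle plus a sign extend,
roughly (the general idea, not necessarily the exact auto-upgrade output):

  ; %r = call <4 x i32> @llvm.x86.sse41.pmovsxwd(<8 x i16> %a)
  ; becomes:
  %lo = shufflevector <8 x i16> %a, <8 x i16> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %r = sext <4 x i16> %lo to <4 x i32>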
Differential Revision: http://reviews.llvm.org/D13002
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248368 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
It is fairly common to call SE->getConstant(Ty, 0) or
SE->getConstant(Ty, 1); this change makes such uses a little bit
briefer.
I've refactored the call sites I could find easily to use getZero /
getOne.
Reviewers: hfinkel, majnemer, reames
Subscribers: sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D12947
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248362 91177308-0d34-0410-b5e6-96231b3b80d8
Add two new ways of accessing the unsafe stack pointer:
* At a fixed offset from the thread TLS base. This is very similar to
StackProtector cookies, but we plan to extend it to other backends
(ARM in particular) soon. Bionic-side implementation here:
https://android-review.googlesource.com/170988.
* Via a function call, as a fallback for platforms that provide
neither a fixed TLS slot nor a reasonable TLS implementation (i.e.,
not emutls).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248357 91177308-0d34-0410-b5e6-96231b3b80d8
Apart from checking that a GlobalVariable is a constant, we should also
check that it's not a weak constant, in which case we can't propagate its
value.
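A minimal illustration:

  @safe = constant i32 1          ; value can be propagated
  @risky = weak constant i32 1    ; may be overridden at link time; don't propagate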
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248327 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
We should either require the DT info to be available, or check if it's
available in every place we use DT (and we already miss such a check in
one place, which causes failures in some cases). As other loop passes
preserve DT and it's usually available, it makes sense to just require
it here.
There is no regression test, because the bug only shows up if the pass
manager decides to clean up the DT info right before LoopUnswitch. If
loop-unswitch is run separately, DT is available, so the bug isn't exposed.
Reviewers: chandlerc, hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D13036
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248230 91177308-0d34-0410-b5e6-96231b3b80d8
We know that an argmemonly function can only access memory pointed to by its pointer arguments. Rather than needing to consider all possible stores as aliasing (as we do for a readonly function), we need only consider the aliasing of the pointer arguments.
Note that this change only addresses hoisting. I'm thinking about how to address speculation safety as well, but that will be a different change.
FYI, argmemonly disallows accessing memory through non-pointer typed arguments.
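A small sketch of the case this enables (function and value names invented): a load in the
loop can now be hoisted past the call as long as AA shows its pointer does not alias the
call's pointer argument.

  declare void @update(i32* nocapture) argmemonly

  loop:
    call void @update(i32* %p)
    %v = load i32, i32* %q        ; hoistable if %q is NoAlias with %p
    ...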
Differential Revision: http://reviews.llvm.org/D12771
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248220 91177308-0d34-0410-b5e6-96231b3b80d8
We're currently losing any fast-math flags when synthesizing fcmps for
min/max reductions. In LV, make sure we copy over the scalar inst's
flags. In LoopUtils, we know we only ever match patterns with
hasUnsafeAlgebra, so apply that to any synthesized ops.
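So a synthesized min/max reduction compare now keeps the flags, roughly (a sketch, not
actual vectorizer output):

  %cmp = fcmp fast olt <4 x float> %a, %b
  %min = select <4 x i1> %cmp, <4 x float> %a, <4 x float> %b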
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248201 91177308-0d34-0410-b5e6-96231b3b80d8
Factor out the logic used to evaluate whether 'readonly' or 'readnone' apply to a given function.
This both reduces indentation and will make it easy to share the logic
with a new pass manager implementation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248181 91177308-0d34-0410-b5e6-96231b3b80d8
Because -indvars widens induction variables through arithmetic,
`NeverNegative` cannot be a property of the `WidenIV` (a `WidenIV`
manages information for all transitive uses of an IV being widened,
including uses of `-1 * IV`). Instead it must live on `NarrowIVDefUse`,
which manages information for a specific def-use edge in the transitive
use list of an induction variable.
This change also adds a test case that demonstrates the problem with
r248045.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248107 91177308-0d34-0410-b5e6-96231b3b80d8
(icmp eq (ashr C1, %V) -1) may have multiple answers if C1 is not a
power of two and has the sign bit set.
This fixes PR24873.
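For instance, with C1 = -6 (i8, 0b11111010):

  ; ashr i8 -6, 3  ; = -1
  ; ashr i8 -6, 4  ; = -1
  ; several shift amounts satisfy the compare, so
  ; (icmp eq (ashr i8 -6, %V), -1) cannot be folded to an equality on %V alone.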
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248074 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
If an induction variable is provably non-negative, its sign extension is
equal to its zero extension. This means narrow uses like
icmp slt iNarrow %indvar, %rhs
can be widened into
icmp slt iWide zext(%indvar), sext(%rhs)
Reviewers: atrick, mcrosier, hfinkel
Subscribers: hfinkel, reames, llvm-commits
Differential Revision: http://reviews.llvm.org/D12745
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@248045 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
`signum(x)` is sometimes implemented as `(x >> 63) | (-x >>> 63)` (for
an `i64` `x`). This change adds a matcher for that pattern, and an
instcombine rule to optimize `signum(x) s< 1`.
Later, we can also consider optimizing:
icmp slt signum(x), 0 --> icmp slt x, 0
icmp sle signum(x), 1 --> true
etc.
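A sketch of the matched pattern and the new fold (shown for i64; value names invented):

  %a = ashr i64 %x, 63
  %n = sub i64 0, %x
  %b = lshr i64 %n, 63
  %s = or i64 %a, %b              ; signum(x): -1, 0, or 1
  %c = icmp slt i64 %s, 1         ; the new fold rewrites this as: icmp slt i64 %x, 1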
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12703
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@247846 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for msan on aarch64-linux for both 39- and
42-bit VMAs. The support is enabled by defining the
SANITIZER_AARCH64_VMA compiler flag to either 39 or 42 at build time
for both clang/llvm and compiler-rt. The default VMA is 39 bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@247807 91177308-0d34-0410-b5e6-96231b3b80d8
We only checked that a global is initialized with constants, which is
incorrect. We should be checking that the GlobalVariable itself *is* a
constant, not just that it is initialized with one.
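To illustrate:

  @a = constant i32 7    ; a true constant; its value can be propagated
  @b = global i32 7      ; merely initialized with a constant; may change at run time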
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@247769 91177308-0d34-0410-b5e6-96231b3b80d8