Consider this type of loop:
for (...) {
  ...
  if (...) continue;
  ...
}
Normally, the "continue" would branch to the loop control code that
checks whether the loop should continue iterating and which contains
the (often) unique loop latch branch. In certain cases jump threading
can "thread" the inner branch directly to the loop header, creating
a second loop latch. Loop canonicalization would then transform this
loop into a loop nest. The problem with this is that in such a loop
nest neither loop is countable even if the original loop was. This
may inhibit subsequent loop optimizations and be detrimental to
performance.
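To make this concrete, here is a hedged, made-up C++ example of a countable loop of this shape (names are illustrative only):

int count_nonneg(const int *a, int n) {
  int cnt = 0;
  for (int i = 0; i < n; ++i) {
    if (a[i] < 0)
      continue;  // threading this edge straight to the header would
                 // create a second latch and, after canonicalization,
                 // a loop nest with no computable trip count
    ++cnt;
  }
  return cnt;
}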
Differential Revision: https://reviews.llvm.org/D36404
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312664 91177308-0d34-0410-b5e6-96231b3b80d8
This moves more of our subvector insert/extract tricks to X86InstrVecCompiler.td and refactors them into multiclasses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312661 91177308-0d34-0410-b5e6-96231b3b80d8
When if-converting a diamond, two separate blocks will be placed back
to back to form straight-line code. To ensure correctness of the
liveness information, any registers that are live in the second block
should not be killed in the first block, even if they were in the
original code.
Additionally, when the two blocks share common instructions at the
beginning, these instructions will not be duplicated, but only placed
once, before both of the blocks. Since the function "isIdenticalTo"
(as used here) ignores kill flags, the common initial code in one
block may have a kill flag for a register that is live in the other
block.
Because the code that removes kill flags only runs for the non-common
parts of the predicated blocks, a kill flag mismatch in the common
code could still lead to a live register being killed prematurely.
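A schematic of the hazard (pseudo machine IR; the registers, opcodes, and block labels are all made up):

TBB:  r1 = ADD r0<kill>, 1   ; common head, marks r0 killed
      r2 = SUB r1, 2
FBB:  r1 = ADD r0, 1         ; common head, r0 stays live
      r3 = MUL r0, 4         ; r0 read again

The common ADD is emitted once before both predicated bodies. If the emitted copy carries TBB's <kill> of r0, the flag is wrong: the predicated MUL from FBB still reads r0.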
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312654 91177308-0d34-0410-b5e6-96231b3b80d8
This patch moves some of the similar non-instruction patterns from X86InstrSSE.td and X86InstrAVX512.td to a common file.
This is intended as a starting point. There are many other optimization patterns that exist in both files that we could move here.
Differential Revision: https://reviews.llvm.org/D37455
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312649 91177308-0d34-0410-b5e6-96231b3b80d8
Remove code that assumed that a nullptr of address space != 0 couldn't alias with a non-null pointer. This is incorrect, since nothing can be concluded about a null pointer in an address space != 0.
This code was written before address spaces were introduced.
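As a hedged sketch using clang's address_space attribute (the space number 3 is arbitrary):

// In address space 3, the bit pattern 0 may be a perfectly valid
// address, so this "null" pointer can still alias other pointers.
int __attribute__((address_space(3))) *maybe_null = 0;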
Differential Revision: https://reviews.llvm.org/D37518
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312648 91177308-0d34-0410-b5e6-96231b3b80d8
function returns the intrinsic's first argument.
llvm.memcpy/memset/memmove return void, but they will return the first
argument after they are expanded as libcalls. As a result, if the parent function
has any return value, llvm.memcpy cannot be turned into a tail call after
expansion.
This patch handles that case in SelectionDAGBuilder so that when the caller
function returns the same value as the first argument of llvm.memcpy, the
tail call is allowed.
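A hedged C++ illustration of the shape this enables (not from the original test; the function name is made up):

#include <cstddef>
#include <cstring>

// After expansion, the memcpy libcall returns its first argument; since
// the caller returns that same pointer, the call can now be a tail call.
void *copy(void *dst, const void *src, std::size_t n) {
  std::memcpy(dst, src, n);
  return dst;  // same value as memcpy's first argument
}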
Differential Revision: https://reviews.llvm.org/D37406
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312641 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Mesa still uses a hack where empty inline assembly is used as a kind of
optimization barrier. This exposed a problem where not enough wait states
were inserted, because the hazard recognizer implicitly assumed that each
inline assembly "instruction" has at least one wait state.
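For reference, a minimal sketch of the Mesa-style barrier (the wrapper name is made up; GNU extended asm):

// This emits zero machine instructions, so the hazard recognizer must
// not assume it provides even one wait state.
static inline void optimization_barrier() {
  asm volatile("" ::: "memory");
}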
Reviewers: arsenm
Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D37205
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312635 91177308-0d34-0410-b5e6-96231b3b80d8
As noted in PR34080, a lot of x87 instructions alter the FPSW status register (or leave it in an undefined state) but aren't tagged as such in the tablegen.
This patch tags the control word, stack, wait, and math instructions as altering FPSW, which matches what the AMD APMs suggest happens.
Differential Revision: https://reviews.llvm.org/D36414
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312629 91177308-0d34-0410-b5e6-96231b3b80d8
This matches what we already do for AVX512. The peephole pass makes up for this in most, if not all, cases, but this makes the isel behavior for these instructions consistent with every other instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312613 91177308-0d34-0410-b5e6-96231b3b80d8
xscvdpspn was not introduced until the P8, so don't use it on the P7. Fixes a
regression introduced in r288152.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312612 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Most instructions in AVX work “in-lane”, that is, each source element is applied only to other
elements of the same lane; thus a cross-lane permutation is costly and needs more than one instruction.
AVX2 includes instructions to perform any-to-any permutation of words over a 256-bit register
and vectorized table lookup.
This should also fix PR34369.
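For instance, AVX2's vpermps can move any 32-bit element to any position, crossing the 128-bit lane boundary in one instruction (a hedged example; compile with -mavx2, the function name is made up):

#include <immintrin.h>

// Reverse all eight 32-bit elements of a 256-bit vector: a fully
// cross-lane permutation done by a single vpermps.
__m256 reverse8(__m256 v) {
  const __m256i idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
  return _mm256_permutevar8x32_ps(v, idx);
}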
Differential Revision: https://reviews.llvm.org/D37388
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312608 91177308-0d34-0410-b5e6-96231b3b80d8
Without this we would have multiple relocations pointing to symbols
with the same name: the empty string. There was no way for yaml2obj
to handle that.
A more general solution would be to unique symbol names in a similar
way to how we unique section names. In practice I think this covers
all common cases and is a bit more user-friendly than using names like
sym1, sym2, sym3, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312603 91177308-0d34-0410-b5e6-96231b3b80d8
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145
In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.
But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting this to folds for constants, we add a general
value-tracking method for it, based on an existing DAG helper.
By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.
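A hedged sketch of the underlying identity (the function name is made up): an ord/uno compare against any non-NaN constant only tests whether the variable operand is NaN, so the constant can be canonicalized to 0.0.

#include <cmath>

// fcmp ord x, 42.0  <=>  fcmp ord x, 0.0  <=>  !isnan(x),
// because 42.0 (or any non-NaN constant) is always ordered.
bool is_ordered(float x) {
  return !std::isnan(x);
}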
Differential Revision: https://reviews.llvm.org/D37427
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312591 91177308-0d34-0410-b5e6-96231b3b80d8
Missing these could potentially screw up post-RA scheduling.
The issue was found by inspection, so I don't have a real testcase. The included
test just verifies the expected operands after expansion.
Differential Revision: https://reviews.llvm.org/D35156
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312589 91177308-0d34-0410-b5e6-96231b3b80d8
Without this patch passing a .o file with multiple sections with the
same name to obj2yaml produces a yaml file that yaml2obj cannot
handle. This is PR34162.
The problem is that when specifying, for example, the section of a
symbol, we get only
  Section: foo
and don't know which of the sections whose name is foo we have to use.
One alternative would be to use section numbers. This would work, but
the output from obj2yaml would be very inconvenient to edit as
deleting a section would invalidate all indexes.
Another alternative would be to invent a unique section id that would
exist only in the YAML. This would work, but seems a bit heavy-handed. We
could make the id optional and default it to the section name.
Since in the last alternative the id is basically what this patch uses
as a name, it can be implemented as a follow-up patch if needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312585 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code created a JITSymbol with an invalid materializer instead,
guaranteeing a 'missing symbol' error when someone tried to materialize the
symbol.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312584 91177308-0d34-0410-b5e6-96231b3b80d8
S_UDT records are basically the "bridge" between the debugger's
expression evaluator and the type information. If you type
(Foo*)nullptr into the watch window, the debugger looks for an
S_UDT record named Foo. If it can find one, it displays your type.
Otherwise you get an error.
We have always understood this to mean that if you have code like
this:
struct A {
  int X;
};
struct B {
  typedef A AT;
  AT Member;
};
you will get 3 S_UDT records: "A", "B", and "B::AT". That is because
if you were to type (B::AT*)nullptr into the debugger, it would need
to find an S_UDT record named "B::AT".
But "B::AT" is actually the S_UDT record that would be generated
if B were a namespace, not a struct, so the debugger needs to be
able to distinguish this case. What it does is:
1. Look for an S_UDT named "B::AT". If it finds one, it knows
that AT is in a namespace.
2. If it doesn't find one, split at the scope resolution operator,
and look for an S_UDT named B. If it finds one, look up the type
for B, and then look for AT as one of its members.
With this algorithm, S_UDT records for nested typedefs are not just
unnecessary, but actually wrong!
The results of implementing this in clang are dramatic. It cuts
our /DEBUG:FASTLINK PDB sizes by more than 50%, and we go from
being ~20% larger than MSVC PDBs on average, to ~40% smaller.
It also slightly speeds up link time. We get about 10% faster
links than without this patch.
Differential Revision: https://reviews.llvm.org/D37410
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312583 91177308-0d34-0410-b5e6-96231b3b80d8
As suggested in D37427, we could have a value tracking function and folds that use
it to simplify these cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312578 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This intrinsic represents a label with a list of associated metadata
strings. It is modelled as reading and writing inaccessible memory so
that it won't be removed as dead code. I think the intention is that the
annotation strings should appear at most once in the debug info, so I
marked it noduplicate. We are allowed to inline code with annotations as
long as we strip the annotation, but that can be done later.
Reviewers: majnemer
Subscribers: eraman, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D36904
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312569 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When constructing the predicate P1 in ScalarEvolution::createAddRecFromPHIWithCastsImpl() it is possible
for the PHISCEV from which the predicate is constructed to be a SCEVConstant instead of a SCEVAddRec. If
this happens, then the cast<SCEVAddRec>(PHISCEV) in the code will assert.
Such a PHISCEV is possible if either the start value or the accumulator value is a constant
value that is not equal to its truncated value, and if the truncated value is zero.
This patch adds tests that demonstrate the cast<> assertion, and fixes this problem by checking
whether the PHISCEV is a constant before constructing the P1 predicate; if it is, then P1 is
equivalent to one of P2 or P3. Additionally, if we know that the start value or accumulator
value are constants then we check whether the P2 and/or P3 predicates are known false at compile
time; if either is, then we bail out of constructing the AddRec.
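A hedged, made-up C++ illustration of the problematic shape: the step (1ULL << 32) is a constant that differs from its 32-bit truncation, and that truncation is zero, so the 32-bit view of the recurrence folds to the constant 0 rather than an add recurrence.

#include <cstdio>

int main() {
  unsigned long long sum = 0;
  // The induction variable is always a multiple of 1ULL << 32, so its
  // 32-bit truncation is the constant 0 on every iteration.
  for (unsigned long long i = 0; i < (8ULL << 32); i += 1ULL << 32)
    sum += static_cast<unsigned>(i);
  std::printf("%llu\n", sum);  // prints 0
  return 0;
}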
Reviewers: sanjoy, mkazantsev, silviu.baranga
Reviewed By: mkazantsev
Subscribers: mkazantsev, llvm-commits
Differential Revision: https://reviews.llvm.org/D37265
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312568 91177308-0d34-0410-b5e6-96231b3b80d8
We had already disabled the pattern for SSE4.1 and SSE4.2. But it got re-enabled for AVX and AVX512.
With SSE41 we rely on a separate (v4f32 (X86vzmovl VR128)) pattern to select blendps with an xorps to create zeroes, and a separate (v4f32 (scalar_to_vector FR32X)) pattern to select a COPY_TO_REG_CLASS to move FR32 to VR128.
The same thing can happen for AVX with vblendps, and those separate patterns already exist.
For AVX512, (v4f32 (X86vzmovl VR128)) will select a VMOVSS instruction instead of VBLENDPS due to there not being an EVEX VBLENDPS. This is what we were getting out of the larger pattern anyway, so the larger pattern is unneeded for AVX512 too.
For SSE1-SSSE3 we can rely on (v4f32 (X86vzmovl VR128)) selecting a MOVSS, similar to AVX512. Again, this is what the larger pattern did too.
So the only real change here is that AVX1/2 now properly outputs a VBLENDPS during isel instead of a VMOVSS, to match SSE41. Most tests didn't notice because the two-address instruction pass knows how to turn VMOVSS into VBLENDPS to get an independent destination register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312564 91177308-0d34-0410-b5e6-96231b3b80d8
If the only call in a function is a tail call, the
function isn't considered to have a call since it's a
type of return.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@312561 91177308-0d34-0410-b5e6-96231b3b80d8