The non-constant-pool version of DecodeVPERMIL2PMask was not offsetting correctly for the second input. I've updated the code to match the implementation in the constant-pool version.
Annoyingly, this bug was hidden for so long because it's tricky to combine to useful variable shuffle masks that don't end up as constant-pool entries.
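Purely for illustration (this is not the patched code): LLVM's shuffle mask convention numbers lanes taken from the first input as [0, NumElts) and lanes taken from the second input as [NumElts, 2*NumElts), so a decoder has to add a NumElts offset whenever the control selects the second source. A toy sketch of that convention, assuming the per-lane selector and element index have already been extracted:

  #include <vector>

  // Toy decode loop: 'FromSecondSrc' says which input a lane comes from and
  // 'Index' is the element index within that input. The key detail is the
  // '+ NumElts' offset for lanes taken from the second input.
  std::vector<int> decodeTwoInputMask(const std::vector<bool> &FromSecondSrc,
                                      const std::vector<unsigned> &Index,
                                      unsigned NumElts) {
    std::vector<int> ShuffleMask;
    for (unsigned i = 0; i != NumElts; ++i)
      ShuffleMask.push_back(FromSecondSrc[i] ? int(Index[i] + NumElts)
                                             : int(Index[i]));
    return ShuffleMask;
  }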
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288898 91177308-0d34-0410-b5e6-96231b3b80d8
When a function F is inlined, InlineFunction extends the debug location of every
instruction inlined from F by adding an InlinedAt.
However, if an instruction has a 'null' debug location, InlineFunction would
propagate the callsite debug location to it. This behavior existed since
revision 210459.
Revision 210459 was originally committed specifically to workaround the lack of
debug information for instructions inlined from intrinsic functions (which are
usually declared with attributes `__always_inline__, __nodebug__`).
The problem with revision 210459 is that it doesn't make any sort of distinction
between instructions inlined from a 'nodebug' function and instructions which
are inlined from a function built with debug info. This issue may lead to
incorrect stepping in the debugger.
This patch works under the assumption that a nodebug function does not have a
DISubprogram. When a function F is inlined into another function G,
InlineFunction checks if F has debug info associated with it.
For nodebug functions, the InlineFunction logic is unchanged (i.e. it would
still propagate the callsite debugloc to the inlined instructions). Otherwise,
InlineFunction no longer propagates the callsite debug location.
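A rough sketch of the intended check (not the verbatim patch; the helper and its parameters are made up, but the DISubprogram test is the point):

  #include "llvm/IR/DebugInfoMetadata.h"
  #include "llvm/IR/DebugLoc.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Sketch: only borrow the call site's location when the callee was built
  // without debug info (no DISubprogram attached), i.e. the old workaround
  // for nodebug/always_inline intrinsics. Otherwise leave a null DebugLoc
  // alone so the debugger doesn't step to misleading lines.
  static void fixupInlinedDebugLoc(Instruction &I, const DebugLoc &CallSiteLoc,
                                   const Function &Callee) {
    if (I.getDebugLoc())
      return; // has a real location; InlinedAt handling happens elsewhere
    if (!Callee.getSubprogram())
      I.setDebugLoc(CallSiteLoc); // nodebug callee: keep the old behavior
  }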
Differential Revision: https://reviews.llvm.org/D27462
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288895 91177308-0d34-0410-b5e6-96231b3b80d8
As Eli noted in the post-commit thread for r288833, the use of
swapOperands() may not be allowed in InstSimplify, so I'm
removing those calls here pending further review.
The swap mutates the icmp, and there doesn't appear to be precedent
for instruction mutation in InstSimplify.
I didn't actually have any tests for those cases, so I'm adding
a few here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288855 91177308-0d34-0410-b5e6-96231b3b80d8
BDCE has two phases:
1. It asks SimplifyDemandedBits if all the bits of an instruction are dead, and if so,
replaces all its uses with the constant zero.
2. Then, it asks SimplifyDemandedBits again whether the instruction is really dead
(no side effects, etc.) and if so, eliminates it.
Now, in 1) if all the bits of an instruction are dead, we may end up replacing a dbg use:
%call = tail call i32 (...) @g() #4, !dbg !15
tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17
->
%call = tail call i32 (...) @g() #4, !dbg !15
tail call void @llvm.dbg.value(metadata i32 0, i64 0, metadata !8, metadata !16), !dbg !17
but not eliminating the call because it may have arbitrary side effects.
In other words, we lose some debug information.
This patch fixes the problem by making sure that BDCE does nothing with the instruction if
it has side effects and no non-dbg uses.
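A sketch of the shape of that check (hedged; the helper name is made up and the real code sits inside BDCE's per-instruction loop):

  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // Sketch: if an instruction has side effects and no non-dbg uses (dbg.value
  // references the value through metadata, so use_empty() stays true), skip
  // it entirely. Replacing only its dbg uses with zero would destroy debug
  // info without ever allowing the instruction to be deleted.
  static bool shouldSkipInBDCE(const Instruction &I) {
    return I.mayHaveSideEffects() && I.use_empty();
  }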
Differential Revision: https://reviews.llvm.org/D27471
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288851 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
If we write an immediate to a VGPR and then copy the VGPR to an
SGPR, we can replace the copy with an S_MOV_B32 sgpr, imm rather than
moving the copy to the SALU.
Reviewers: arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye
Differential Revision: https://reviews.llvm.org/D27272
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288849 91177308-0d34-0410-b5e6-96231b3b80d8
We were rounding the size in bits down rather than up, leading to 0-sized slots for
i1 (assert!) and bugs for other types that aren't byte-aligned.
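A minimal sketch of the arithmetic (the helper name is made up):

  #include <cstdint>

  // Round a size in bits up to whole bytes. With SizeInBits == 1 (i1) this
  // yields 1 byte; plain truncating division (1 / 8) yields 0 and produces
  // a zero-sized stack slot.
  constexpr uint64_t bytesForBits(uint64_t SizeInBits) {
    return (SizeInBits + 7) / 8;
  }

  static_assert(bytesForBits(1) == 1, "i1 needs a whole byte");
  static_assert(bytesForBits(9) == 2, "9 bits round up to 2 bytes");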
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288848 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Prefer expansions such as: pmullw,pmulhw,unpacklwd,unpackhwd over pmulld.
On Silvermont [source: Optimization Reference Manual]:
PMULLD has a throughput of 1/11 [instructions/cycle].
PMULHUW/PMULHW/PMULLW have a throughput of 1/2 [instructions/cycle].
Fixes PR31202.
Analysis of this issue was done by Fahana Aleen.
Reviewers: wmi, delena, mkuper
Subscribers: RKSimon, llvm-commits
Differential Revision: https://reviews.llvm.org/D27203
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288844 91177308-0d34-0410-b5e6-96231b3b80d8
All of these (and a few more) are already handled by InstCombine,
but we shouldn't have to wait until then to simplify these because
they're cheap to deal with here in InstSimplify.
This is the 'and' sibling of the earlier 'or' patch:
https://reviews.llvm.org/rL288833
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288841 91177308-0d34-0410-b5e6-96231b3b80d8
There were two problems:
+ AArch64 was reusing random data from its binary op tables, which is
complete nonsense for G_SEQUENCE.
+ Even when AArch64 gave up and said it couldn't handle G_SEQUENCE,
the generic code asserted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288836 91177308-0d34-0410-b5e6-96231b3b80d8
It'll almost immediately fail because it always tries to halve/double the size
until it finds a legal one. Unfortunately, this triggers an assertion,
preventing the DAG fallback from being possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288834 91177308-0d34-0410-b5e6-96231b3b80d8
All of these (and a few more) are already handled by InstCombine,
but we shouldn't have to wait until then to simplify these because
they're cheap to deal with here in InstSimplify.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288833 91177308-0d34-0410-b5e6-96231b3b80d8
When we see a non-flag-setting instruction for which only the flag-setting
version is available in Thumb1, we should give a better error message than
"invalid instruction".
Differential Revision: https://reviews.llvm.org/D27414
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288805 91177308-0d34-0410-b5e6-96231b3b80d8
Check if a build_vector node includes a repeated constant pattern and replace it with a broadcast of that pattern.
For example:
"build_vector <0, 1, 2, 3, 0, 1, 2, 3>" would be replaced by "broadcast <0, 1, 2, 3>"
Differential Revision: https://reviews.llvm.org/D26802
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288804 91177308-0d34-0410-b5e6-96231b3b80d8
This is the final patch in the series of patches that improves
BUILD_VECTOR handling on PowerPC. This adds a few peephole optimizations
to remove redundant instructions. It also adds a large test case which
encompasses a large set of code patterns that build vectors - this test
case was the motivator for this series of patches.
Differential Revision: https://reviews.llvm.org/D26066
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288800 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: This patch makes sure FirstCSPop and MBBI never point to DBG_VALUE instructions; previously they could, which affected the generated code.
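A generic sketch of the kind of skip involved (the helper is hypothetical; the actual change adjusts how FirstCSPop and MBBI are computed during epilogue emission):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include <iterator>
  using namespace llvm;

  // Sketch: step an iterator backwards over DBG_VALUE instructions so the
  // position used for codegen decisions is the same whether or not debug
  // info is present.
  static MachineBasicBlock::iterator
  skipDebugValuesBackward(MachineBasicBlock &MBB,
                          MachineBasicBlock::iterator I) {
    while (I != MBB.begin() && std::prev(I)->isDebugValue())
      --I;
    return I;
  }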
Reviewers: mkuper, aprantl, MatzeB
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27343
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288794 91177308-0d34-0410-b5e6-96231b3b80d8
This pattern turned a vector sqrt/rcp/rsqrt operation of sse_load_f32/f64 into the scalar instruction for the operation and put undef into the upper bits. For correctness, the resulting code should still perform the sqrt/rcp/rsqrt on the upper bits after the load is extended, since that's what the operation asked for. In particular, when the upper bits are 0, we need to calculate the sqrt/rcp/rsqrt of those zeroes and keep the result in the upper bits. This implies we should still be using the packed instruction.
The only test case for this pattern is one I just added, so there was no prior coverage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288784 91177308-0d34-0410-b5e6-96231b3b80d8
This occurs due to a pattern that uses sse_load_f32/f64 with vector sqrt/rcp/rsqrt operations and turns them into scalar instructions. Perhaps for the case where the upper bits come from undef this is ok. I believe a (vzmovl load64) would do the same thing, but those seem to become vzload instead, and selectScalarSSELoad doesn't handle that today. In that case we should be performing the vector operation on the zeroes in the upper bits, which is not equivalent to using a scalar instruction.
I will remove this pattern in a follow-up patch. There appears to be no other test content for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288783 91177308-0d34-0410-b5e6-96231b3b80d8
The intrinsics are supposed to pass the upper bits straight through to their output register. This means we need to make sure we still perform the 128-bit load to get those upper bits to give to the instruction, since the memory form of the instruction only reads 32 or 64 bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288781 91177308-0d34-0410-b5e6-96231b3b80d8
The sqrtsd instruction only loads 64-bits and writes bits 63:0 with the sqrt result. Bits 127:64 are preserved in the destination register. The semantics of the intrinsic indicate bits 127:64 should come from the intrinsic argument which in this case is a 128-bit load. So the generated code should have a 128-bit load and use a register form of sqrtsd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288780 91177308-0d34-0410-b5e6-96231b3b80d8
The intrinsic takes one argument: the lower bits are affected by the operation and the upper bits should be passed through. The instruction itself takes two operands: the high bits of the first operand are passed through and the low bits of the second operand are modified by the operation. To match this to the intrinsic, we should pass the single intrinsic input to both operands.
I had to remove the stack folding test for these instructions since they depended on the incorrect behavior. The same register is now used for both inputs so the load can't be folded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288779 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds the starting support for encoding data from the MachO __DWARF segment. The first section supported is the __debug_str section because it is the simplest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288774 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch removes the scalar logical operation alias instructions. We can just use reg class copies and use the normal packed instructions instead. This removes the need for putting these instructions in the execution domain fixing tables as was done recently.
I removed the loadf64_128 and loadf32_128 patterns as DAG combine creates a narrower load for (extractelt (loadv4f32)) before we ever get to isel.
I plan to add similar patterns for AVX512DQ in a future commit to allow use of the larger register class when available.
Reviewers: spatel, delena, zvi, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27401
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288771 91177308-0d34-0410-b5e6-96231b3b80d8
The structured CFG is just an aid to inserting exec
mask modification instructions; once that is done,
we don't really need it anymore. We also
do not analyze blocks with terminators that
modify exec, so this should only impact
true branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288744 91177308-0d34-0410-b5e6-96231b3b80d8
clang -target arm deprecated-asm.s -c
deprecated-asm.s:30:9: warning: use of SP or PC in the list is deprecated
stmia r4!, {r12-r14}
We need to have an option that can disable it.
Patched by Yin Ma!
Reviewers: joey, echristo, weimingz
Subscribers: llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D27219
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288734 91177308-0d34-0410-b5e6-96231b3b80d8
The function used to finish off PHIs by adding the relevant basic blocks can
fail if we're aborting and still don't actually have the needed
MachineBasicBlocks. So avoid trying in that case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288727 91177308-0d34-0410-b5e6-96231b3b80d8
When the entry block was empty after arg lowering, we were always placing
constants at the end. This is probably harmless while translating the same
block, but horribly wrong once its terminator has been translated. So switch to
inserting at the beginning.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288720 91177308-0d34-0410-b5e6-96231b3b80d8