1530 Commits

Author SHA1 Message Date
Adrian Prantl
c271bc0481 Renumber testcase metadata nodes after r290153.
This patch renumbers the metadata nodes in debug info testcases after
https://reviews.llvm.org/D26769. This is a separate patch because it
causes so much churn. This was implemented with a Python script that
pipes the testcases through llvm-as - | llvm-dis - and then goes
through the original and new output side by side to insert all
comments at a close-enough location.

Differential Revision: https://reviews.llvm.org/D27765

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290292 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-22 00:45:21 +00:00
Adrian Prantl
7b500b4bdf [IR] Remove the DIExpression field from DIGlobalVariable.
This patch implements PR31013 by introducing a
DIGlobalVariableExpression that holds a pair of DIGlobalVariable and
DIExpression.

Currently, DIGlobalVariable holds a DIExpression. This is not the
best way to model this:

(1) The DIGlobalVariable should describe the source level variable,
    not how to get to its location.

(2) It makes it unsafe/hard to update the expressions when we call
    replaceExpression on the DIGlobalVariable.

(3) It makes it impossible to represent a global variable that is in
    more than one location (e.g., a variable with multiple
    DW_OP_LLVM_fragment-s).  We also moved away from attaching the
    DIExpression to DILocalVariable for the same reasons.

This reapplies r289902 with additional testcase upgrades and a change
to the Bitcode record for DIGlobalVariable that makes upgrading the
old format unambiguous also for variables without a DIExpression.

<rdar://problem/29250149>
https://llvm.org/bugs/show_bug.cgi?id=31013
Differential Revision: https://reviews.llvm.org/D26769

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290153 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-20 02:09:43 +00:00
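
As a rough illustration of the representation change in the commit above (a standalone C++ sketch with made-up names, not the actual LLVM metadata classes): the source-level description and the location computation become two separate nodes tied together by a pairing node, so one variable can participate in several pairs.

    // Illustrative stand-ins only, mirroring the split described above.
    #include <cstdint>
    #include <vector>

    struct GlobalVariableInfo {            // the "what": source-level variable
      const char *Name;
      unsigned Line;
    };

    struct LocationExpression {            // the "where": how to reach its location
      std::vector<uint64_t> Ops;           // DW_OP_*-style operations
    };

    struct GlobalVariableExpressionPair {  // the new pairing node
      const GlobalVariableInfo *Var;
      const LocationExpression *Expr;
    };
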
Adrian Prantl
096faa974a Revert "[IR] Remove the DIExpression field from DIGlobalVariable."
This reverts commit 289920 (again).
I forgot to implement a Bitcode upgrade for the case where a DIGlobalVariable
has no DIExpression. Unfortunately it is not possible to safely upgrade
these variables without adding a flag to the bitcode record indicating which
version they are.
My plan of record is to roll the planned follow-up patch that adds a
unit: field to DIGlobalVariable into this patch before recommitting.
This way we only need one Bitcode upgrade for both changes (with a
version flag in the bitcode record to safely distinguish the record
formats).

Sorry for the churn!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289982 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 19:39:01 +00:00
Chandler Carruth
e87e7067ed Revert r289638: [PowerPC] Fix logic dealing with nop after calls (and tail-call eligibility)
This patch appears to result in trampolines in vtables being miscompiled
when they in turn tail call a method.

I've posted some preliminary details about the failure on the thread for
this commit and talked to Hal. He was comfortable going ahead and
reverting until we sort out what is wrong.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289928 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 07:31:20 +00:00
Adrian Prantl
eb38a2a075 [IR] Remove the DIExpression field from DIGlobalVariable.
This patch implements PR31013 by introducing a
DIGlobalVariableExpression that holds a pair of DIGlobalVariable and
DIExpression.

Currently, DIGlobalVariable holds a DIExpression. This is not the
best way to model this:

(1) The DIGlobalVariable should describe the source level variable,
    not how to get to its location.

(2) It makes it unsafe/hard to update the expressions when we call
    replaceExpression on the DIGlobalVariable.

(3) It makes it impossible to represent a global variable that is in
    more than one location (e.g., a variable with multiple
    DW_OP_LLVM_fragment-s).  We also moved away from attaching the
    DIExpression to DILocalVariable for the same reasons.

This reapplies r289902 with additional testcase upgrades.

<rdar://problem/29250149>
https://llvm.org/bugs/show_bug.cgi?id=31013
Differential Revision: https://reviews.llvm.org/D26769

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289920 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 04:25:54 +00:00
Adrian Prantl
7766e56d48 Revert "[IR] Remove the DIExpression field from DIGlobalVariable."
This reverts commit 289902 while investigating bot breakage.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289906 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 01:00:30 +00:00
Adrian Prantl
1b11b0778e [IR] Remove the DIExpression field from DIGlobalVariable.
This patch implements PR31013 by introducing a
DIGlobalVariableExpression that holds a pair of DIGlobalVariable and
DIExpression.

Currently, DIGlobalVariable holds a DIExpression. This is not the
best way to model this:

(1) The DIGlobalVariable should describe the source level variable,
    not how to get to its location.

(2) It makes it unsafe/hard to update the expressions when we call
    replaceExpression on the DIGlobalVariable.

(3) It makes it impossible to represent a global variable that is in
    more than one location (e.g., a variable with multiple
    DW_OP_LLVM_fragment-s).  We also moved away from attaching the
    DIExpression to DILocalVariable for the same reasons.

<rdar://problem/29250149>
https://llvm.org/bugs/show_bug.cgi?id=31013
Differential Revision: https://reviews.llvm.org/D26769

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289902 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 00:36:43 +00:00
Ehsan Amiri
c0621296d3 [PPC] corrections in two testcases
Removing sensitivity to scheduling (by using CHECK-DAG instead of CHECK) and
some other minor corrections.

In preparation to commit Power9 processor model.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289900 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 00:33:07 +00:00
Ehsan Amiri
9ee618f860 [PPC] Use CHECK-DAG instead of CHECK in the testcase
This test is currently sensitive to scheduling. Using CHECK-DAG allows us to
preserve the main purpose of the test and remove this sensitivity.

In preparation to commit Power9 processor model.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289869 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 20:51:09 +00:00
Nemanja Ivanovic
158d97d2a7 [Power9] Allow AnyExt immediates for XXSPLTIB
In some situations, the BUILD_VECTOR node that builds a v16i8 vector by
a splat of an i8 constant will end up with signed 8-bit values, and in other
situations it'll end up with unsigned ones. Handle both situations.

Fixes PR31340.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289804 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 11:16:20 +00:00
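
A minimal C++ illustration (hypothetical, not from the commit) of why both forms must be accepted: the same splat byte can show up as a signed or an unsigned 8-bit value, and both denote the identical bit pattern that xxspltib splats.

    #include <cstdint>

    int main() {
      int8_t  asSigned   = -16;   // the constant as a signed 8-bit value
      uint8_t asUnsigned = 240;   // the same constant as an unsigned value
      // Both are the byte 0xF0, so either form can feed the splat.
      return (static_cast<uint8_t>(asSigned) == asUnsigned) ? 0 : 1;
    }
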
Joerg Sonnenberger
96f8539f4a Use PIC relocation model as default for PowerPC64 ELF.
Most of the PowerPC64 code generation for the ELF ABI is already PIC.
There are four main exceptions:
(1) Constant pointer arrays etc. should be in writeable sections.
(2) The TOC restoration NOP after a call is needed for all global
symbols. While GNU ld has a workaround for questionable GCC self-calls,
we trigger the checks for calls from COMDAT sections as they cross input
sections and are therefore not considered self-calls. The current
decision is questionable and suboptimal, but outside the scope of the
change.
(3) TLS access cannot use the initial-exec model.
(4) Jump tables should use relative addresses. Note that the current
encoding doesn't work for the large code model, but it is more compact
than the default for any non-trivial jump table. Improving this is again
beyond the scope of this change.

At least (1) and (3) are assumptions made in target-independent code, and
introducing additional hooks is a bit messy. Testing with clang shows
that a -fPIC binary is 600KB smaller than the corresponding -fno-pic
build. Testing the improved jump table encodings separately explains
only about 100KB or so of that. The rest is expected to be a result of more
aggressive immediate forming for -fno-pic, where the -fPIC binary just
uses TOC entries.

This change brings the LLVM output in line with the GCC output; other
PPC64 compilers like XLC on AIX are known to produce PIC by default
as well. The relocation model can still be provided explicitly, i.e.
when using MCJIT.

One test case for case (1) is included, other test cases with relocation
mode sensitive behavior are wired to static for now. They will be
reviewed and adjusted separately.

Differential Revision: https://reviews.llvm.org/D26566


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289743 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 00:01:53 +00:00
Nirav Dave
e461ebb0f4 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
Reverting due to ARM MCJIT and MIPS LLD error.

This reverts commit r289659.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289667 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 16:43:44 +00:00
Nirav Dave
ee001a514b In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after fixing the issue by removing load-store factoring through
token factors in favor of improved token factor operand pruning.

Simplify Consecutive Merge Store Candidate Search

Now that address aliasing is much less conservative, push through a
simplified store-merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates the
handling of non-interfering loads/stores from the store-merging logic.

When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).

Additional Minor Changes:

   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.

This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.

Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations

Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289659 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 15:44:26 +00:00
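
A rough source-level picture of what the merged-store search above enables (a hypothetical example, not taken from the patch): adjacent constant stores whose chains are shown not to interfere can be emitted as one wider store.

    #include <cstdint>

    // Four adjacent byte stores of constants; once the combiner proves the
    // chains do not interfere, they may be merged into a single 32-bit store.
    void initHeader(uint8_t *p) {
      p[0] = 1;
      p[1] = 2;
      p[2] = 3;
      p[3] = 4;
    }
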
Hal Finkel
e573748031 [PowerPC] Fix logic dealing with nop after calls (and tail-call eligibility)
This change aims to unify and correct our logic for when we need to allow for
the possibility of the linker adding a TOC restoration instruction after a
call. This comes up in two contexts:

 1. When determining tail-call eligibility. If we make a tail call (i.e.
    directly branch to a function) then there is no place for the linker to add
    a TOC restoration.
 2. When determining when we need to add a nop instruction after a call.
    Likewise, if there is a possibility that the linker might need to add a
    TOC restoration after a call, then we need to put a nop after the call
    (the bl instruction).

First problem: We were using similar, but different, logic to decide (1) and
(2). This is just wrong. Both the resideInSameModule function (used when
determining tail-call eligibility) and the isLocalCall function (used when
deciding if the post-call nop is needed) were supposed to be determining the
same underlying fact (i.e. might a TOC restoration be needed after the call).
The same logic should be used in both places.

Second problem: The logic in both places was wrong. We only know that two
functions will share the same TOC when both functions come from the same
section of the same object. Otherwise the linker might cause the functions to
use different TOC base addresses (unless the multi-TOC linker option is
disabled, in which case only shared-library boundaries are relevant). There are
a number of factors that can cause functions to be placed in different sections
or come from different objects (-ffunction-sections, explicitly-specified
section names, COMDAT, weak linkage, etc.). All of these need to be checked.
The existing logic only checked properties of the callee, but the properties of
the caller must also be checked (for example, calling from a function in a
COMDAT section means calling between sections).

There was a conceptual error in the resideInSameModule function in that it
allowed tail calls to functions with weak linkage and protected/hidden
visibility. While protected/hidden visibility does prevent the function
implementation from being replaced at runtime (via interposition), it does not
prevent the linker from using an alternate implementation at link time (i.e.
using some strong definition to replace the provided weak one during linking).
If this happens, then we're still potentially looking at a required TOC
restoration upon return.

Otherwise, in general, the post-call nop is needed wherever ELF interposition
needs to be supported. We don't currently support ELF interposition at the IR
level (see http://lists.llvm.org/pipermail/llvm-dev/2016-November/107625.html
for more information), and I don't think we should try to make it appear to
work in the backend in spite of that fact. This will yield subtle bugs if
interposition is attempted. As a result, regardless of whether we're in PIC
mode, we don't assume that we need to add the nop to support the possibility of
ELF interposition. However, the necessary check is in place (i.e. calling
GV->isInterposable and TM.shouldAssumeDSOLocal) so when we have functions for
which interposition is allowed at the IR level, we'll add the nop as necessary.
In the meantime, we'll generate more tail calls and fewer nops when compiling
position-independent code.

Differential Revision: https://reviews.llvm.org/D27231

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289638 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 07:24:50 +00:00
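
A small hypothetical C++ example of the situation the commit above describes: a call to a function the linker may resolve to a different section or object means the caller cannot assume a shared TOC, so the bl must be followed by a nop slot and the call is not tail-call eligible.

    // 'external_helper' is hypothetical; its weak linkage means the linker may
    // substitute a definition from another object with a different TOC.
    extern "C" int external_helper(int) __attribute__((weak));

    extern "C" int caller(int x) {
      // Not safely tail-callable: a TOC restoration may be needed after the
      // call, so a nop must follow the bl instruction.
      return external_helper(x) + 1;
    }
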
Guozhi Wei
9def2dd316 [PPC] Prefer direct move on power8 if load 1 or 2 bytes to VSR
Power8 has MTVSRWZ but no LXSIBZX/LXSIHZX, so moving 1 or 2 bytes to a VSR through MTVSRWZ is much faster than storing the extended value to the stack and loading it with LXSIWZX.
This patch fixes PR31144.

Differential Revision: https://reviews.llvm.org/D27287



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289473 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 22:09:02 +00:00
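
A hedged sketch of the kind of source pattern affected by the commit above: a 1- or 2-byte load feeding a floating-point conversion, where the value has to end up in a VSR.

    #include <cstdint>

    // On Power8 the loaded halfword can be moved GPR -> VSR with mtvsrwz rather
    // than being stored to the stack and reloaded with lxsiwzx (per the commit).
    double toDouble(const uint16_t *p) {
      return static_cast<double>(*p);
    }
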
Sean Fertile
d22c85d068 [PPC] Add intrinsics for vector extract word and vector insert word.
Revision: https://reviews.llvm.org/D26547

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289227 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 17:21:42 +00:00
Nirav Dave
2c96583422 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r289221 which appears to be triggering an assertion

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289226 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 17:18:24 +00:00
Nirav Dave
615f3ccd10 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after fixing an overly aggressive load-store forwarding optimization.

Simplify Consecutive Merge Store Candidate Search

Now that address aliasing is much less conservative, push through a
simplified store-merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates the
handling of non-interfering loads/stores from the store-merging logic.

When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).

Additional Minor Changes:

   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.

This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.

Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations

Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289221 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 16:15:12 +00:00
Nemanja Ivanovic
3d641da82e [PowerPC] Improvements for BUILD_VECTOR Vol. 4
This is the final patch in the series of patches that improves
BUILD_VECTOR handling on PowerPC. This adds a few peephole optimizations
to remove redundant instructions. It also adds a large test case which
encompasses a large set of code patterns that build vectors - this test
case was the motivator for this series of patches.

Differential Revision: https://reviews.llvm.org/D26066


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288800 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-06 11:47:14 +00:00
Matt Arsenault
728c815331 DAG: Fold out out of bounds insert_vector_elt
getNode already prevents formation of out of bounds constant
extract_vector_elts. Do the same for insert_vector_elt.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288603 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-03 23:03:26 +00:00
Nemanja Ivanovic
0b07a5ae00 Revert https://reviews.llvm.org/rL287679
This commit caused some miscompiles that did not show up on any of the bots.
Reverting until we can investigate the cause of those failures.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288214 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-29 23:00:33 +00:00
Nemanja Ivanovic
1ec5b2fb75 [PowerPC] Improvements for BUILD_VECTOR Vol. 1
This patch corresponds to review:
https://reviews.llvm.org/D25912

This is the first patch in a series of 4 that improve the lowering and combining
for BUILD_VECTOR nodes on PowerPC.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288152 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-29 16:11:34 +00:00
Nirav Dave
78f5fdf3e5 Revert "[DAG] Improve loads-from-store forwarding to handle TokenFactor"
This reverts commit r287773 which caused issues with ppc64le builds.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288035 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-28 14:30:29 +00:00
Nirav Dave
3e65807a6f [DAG] Improve loads-from-store forwarding to handle TokenFactor
Forward store values to matching loads down through token
factors. Factored from D14834.

Reviewers: jyknight, hfinkel

Subscribers: hfinkel, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D26080

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287773 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-23 16:48:35 +00:00
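
An illustrative (hypothetical) pattern targeted by the forwarding above: a load that reads back a value just stored to the same address can reuse the stored value directly, even when a TokenFactor separates the two operations in the DAG.

    // The reload of p[0] can be forwarded from the preceding store, making the
    // function equivalent to 'return x;'.
    int storeThenLoad(int *p, int x) {
      p[0] = x;
      return p[0];
    }
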
Nemanja Ivanovic
aa687a6ca9 [PowerPC] Emit VMX loads/stores for aligned ops to avoid adding swaps on LE
This patch corresponds to review:
https://reviews.llvm.org/D26861

It also fixes PR30730.

Committing on behalf of Lei Huang.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287679 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-22 19:02:07 +00:00
Ehsan Amiri
3d73fcad55 [Power9] Add patterns for vnegd, vnegw
Exploit new instructions by adding patterns to .td file.
https://reviews.llvm.org/D26551



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287334 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-18 11:05:55 +00:00
Ehsan Amiri
072e86da0c [PPC][DAGCombine] Convert SETCC to subtract when the result is zero extended
When we see a SETCC whose only users are zero extend operations, we can replace
it with a subtraction. This results in doing all calculations in GPRs and
avoids CR use.

Currently we do this only for the ULT, ULE, UGT and UGE condition codes. There
are ways that this can be extended, for example to signed condition codes. In
that case we would be introducing additional sign-extend instructions, so more
careful profitability analysis may be required.

Another direction is to extend this to equal/not-equal conditions. Also, when
the users of the SETCC are any_ext or sign_ext, we might be able to do
something similar.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287329 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-18 10:41:44 +00:00
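
One concrete instance of the idea above (a sketch for ULT on 32-bit operands, not the exact DAG combine): the zero-extended compare result can be produced with a subtraction alone, keeping the computation in GPRs and avoiding the condition register.

    #include <cstdint>

    // For 32-bit unsigned a and b widened to 64 bits, the 64-bit difference is
    // negative exactly when a < b, so its sign bit equals zext(a < b).
    uint64_t ultZext(uint32_t a, uint32_t b) {
      return (static_cast<uint64_t>(a) - static_cast<uint64_t>(b)) >> 63;
    }
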
Joerg Sonnenberger
8833323011 Always use relative jump table encodings on PowerPC64.
For the default, small and medium code models, use the existing
difference from the jump table towards the label. For all other code
models, set up the picbase and use the difference between the picbase and
the block address.

Overall, this results in smaller data tables at the expense of one or
two more arithmetic operations at the jump site. Given that we only create
jump tables with a lot more than two entries, it is a net win in size.
For larger code models the assumption remains that individual functions
are no larger than 2GB.

Differential Revision: https://reviews.llvm.org/D26336


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287059 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-16 00:37:30 +00:00
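
A data-layout sketch of what the relative encoding above buys (hypothetical code, not what the backend emits): each table entry is a 32-bit offset from a base rather than a 64-bit absolute address, and the dispatch site spends one extra add.

    #include <cstdint>

    // 4-byte entries instead of 8-byte absolute pointers; the selected offset
    // is sign-extended and added to the base before the indirect branch.
    uintptr_t dispatchTarget(const int32_t *offsets, uintptr_t base, unsigned idx) {
      return base + static_cast<uintptr_t>(static_cast<intptr_t>(offsets[idx]));
    }
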
Zaara Syeda
34888b1f1a vector load store with length (left justified) llvm portion
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286993 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-15 17:54:19 +00:00
Tony Jiang
6ad6c513ee [PowerPC] Implement BE VSX load/store builtins - llvm portion.
This patch implements all the overloads for vec_xl_be and vec_xst_be. On BE,
they behave exactly the same as vec_xl and vec_xst, therefore they are
simply implemented by defining a matching macro. On LE, they are implemented
by defining new builtins and intrinsics. For int/float/long long/double, it
is just a load (lxvw4x/lxvd2x) or store (stxvw4x/stxvd2x). For char/char/short,
we also need some extra shuffling before or after calling the builtins to get the
desired BE order. For int128, simply call vec_xl or vec_xst.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286967 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-15 14:25:56 +00:00
Sean Fertile
41723f358e [PPC] Add intrinsic mapping to the xscvhpsp instruction
add an intrinsic to expose the 'VSX Scalar Convert Half-Precision to
Single-Precision' instruction.

Differential review: https://reviews.llvm.org/D26536

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286862 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-14 18:43:59 +00:00
Sean Fertile
4bbcd0ad60 [PPC] add intrinsics for vec extract exp/significand and vec test data class.
Differential Revision: https://reviews.llvm.org/D26272

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286829 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-14 14:42:37 +00:00
Nemanja Ivanovic
13e440ebf9 [PowerPC] Add remaining vector permute builtins in altivec.h - LLVM portion
This patch corresponds to review:
https://reviews.llvm.org/D26480

Adds all the intrinsics used for various permute builtins that will
be added to altivec.h.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286638 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-11 21:42:01 +00:00
Nemanja Ivanovic
3726719543 [PowerPC] Add vector conversion builtins to altivec.h - LLVM portion
This patch corresponds to review:
https://reviews.llvm.org/D26307

Adds all the intrinsics used for various conversion builtins that will
be added to altivec.h. These are type conversions between various types of
vectors.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286596 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-11 14:41:19 +00:00
Matthias Braun
ee5205bfae ScheduleDAGInstrs: Add condjump deps to addSchedBarrierDeps()
addSchedBarrierDeps() is supposed to add use operands to the ExitSU
node. The current implementation adds uses for calls/barrier instructions
and the MBB live-outs in all other cases. The use
operands of conditional jump instructions were missed.

Also added code to macrofusion to set the latencies between nodes to
zero to avoid problems with the fusing nodes lingering around in the
pending list now.

Differential Revision: https://reviews.llvm.org/D25140

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286544 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-11 01:34:21 +00:00
Nemanja Ivanovic
790687f4e0 [PowerPC] Implement vector shift builtins - llvm portion
This patch corresponds to review https://reviews.llvm.org/D26095.
Committing on behalf of Tony Jiang.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285681 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-01 09:42:32 +00:00
Nemanja Ivanovic
7e057dcd4e [PPC] add absolute difference altivec instructions and matching intrinsics
This patch corresponds to review https://reviews.llvm.org/D26072.
Committing on behalf of Sean Fertile.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285627 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-31 19:47:52 +00:00
Nemanja Ivanovic
0b61b12b8c Implement vector count leading/trailing bytes with zero lsb and vector parity
builtins - llvm portion

This patch corresponds to review https://reviews.llvm.org/D26003.
Committing on behalf of Zaara Syeda.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285434 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-28 19:38:24 +00:00
Ehsan Amiri
34a73b3124 [PPC] Adding the removed testcase again
This testcase was originally part of r284995, but I put it in the wrong directory,
so I removed it. Before adding it back I made some small enhancements. I also
changed the assertions a little bit, to take into account the impact of some
changes made since the code review was done.

This is similar to changes done for another testcase in the original commit.
See: https://reviews.llvm.org/D23614#577749
Basically, instead of vxor we now generate xxlxor in some cases, which is
better.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285333 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-27 19:10:09 +00:00
Nemanja Ivanovic
e9fdaa1bbb [PowerPC] - No SExt/ZExt needed for count trailing zeros
This patch corresponds to review:
https://reviews.llvm.org/D25896

It just eliminates the redundant ZExt after a count trailing zeros instruction.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285267 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-27 05:17:58 +00:00
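
A minimal sketch of the redundancy removed above, using the GCC/Clang __builtin_ctzll builtin and assuming a non-zero input (the builtin is undefined for zero): the trailing-zero count already fits in the low bits, so a following zero-extension adds nothing.

    #include <cstdint>

    uint64_t trailingZeros(uint64_t x) {
      // The intermediate result is at most 63 here, so zero-extending the
      // 32-bit value back to 64 bits is redundant.
      return static_cast<uint32_t>(__builtin_ctzll(x));
    }
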
Nemanja Ivanovic
73235dc8dc Do not assume that FP vector operands are never legalized by expanding
This patch ensures that if a floating point vector operand is legalized by
expanding, it is legalized through the stack rather than by calling
DAGTypeLegalizer::IntegerToVector, which will cause a failure since the operand
is a non-integer type.

This fixes PR30715.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285231 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-26 19:51:35 +00:00
Nemanja Ivanovic
4e7356cfaf [PowerPC] Implement vec_insert_exp builtins - llvm portion
This revision corresponds to review: https://reviews.llvm.org/D25957.
Committing on behalf of Zaara Syeda.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285225 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-26 19:03:40 +00:00
Ehsan Amiri
300e976507 [PPC] Generate positive FP zero using xor insn instead of loading from constant area
https://reviews.llvm.org/D23614

Currently we load +0.0 from the constant area. This can instead be generated
using an XOR instruction.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284995 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-24 17:31:09 +00:00
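
The pattern in question, in source form: returning +0.0. Per the commit above, this constant can be materialized with an XOR (e.g. xxlxor) instead of being loaded from the constant area; the exact instruction choice depends on the subtarget.

    // Positive floating-point zero; after the change it need not be loaded
    // from memory.
    double positiveZero() {
      return +0.0;
    }
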
Ehsan Amiri
5ba8f14a10 [PPC] Better codegen for AND, ANY_EXT, SRL sequence
https://reviews.llvm.org/D24924

This improves the code generated for a sequence of AND, ANY_EXT, SRL instructions. This is a targeted fix for this special pattern. The pattern is generated by the target-independent DAG combiner, so a more general fix may not be necessary. If we come across other similar cases, some ideas for handling them are discussed in the code review.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284983 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-24 15:46:58 +00:00
Sanjay Patel
85745f9561 [DAG] optimize negation of bool
Use mask and negate for legalization of i1 source type with SIGN_EXTEND_INREG.
With the mask, this should be no worse than 2 shifts. The mask can be eliminated
in some cases, so that should be better than 2 shifts.

This change exposed some missing folds related to negation:
https://reviews.llvm.org/rL284239
https://reviews.llvm.org/rL284395

There may be others, so please let me know if you see any regressions.

Differential Revision: https://reviews.llvm.org/D25485


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284611 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-19 16:58:59 +00:00
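
A small C++ illustration of the mask-and-negate legalization described above (the identity, not the DAG code): isolating the low bit and negating it yields 0 or all-ones, the same result as a shift-left/arithmetic-shift-right pair.

    #include <cstdint>

    int32_t signExtendFromBit0(int32_t x) {
      // (x & 1) is 0 or 1; negating gives 0 or 0xFFFFFFFF.
      return -(x & 1);
    }
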
Tim Northover
31164bc2c9 PowerPC: specify full triple to avoid different Darwin asm syntax.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284281 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-14 21:25:29 +00:00
Sanjay Patel
65a7ee43be [PowerPC] add tests for PR30661
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284279 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-14 20:51:41 +00:00
Guozhi Wei
8bb12b9f5e [PPC] Shorter sequence to load 64bit constant with same hi/lo words
This is a patch to implement PR30640.

When a 64-bit constant has the same hi/lo words, we can use rldimi to copy the low word into the high word of the same register.

This optimization caused a failure in the test case bperm.ll because of a suboptimal heuristic in the function SelectAndParts64. It chooses AND or ROTATE to extract bit groups from a register, and ORs them together. This optimization lowers the cost of loading the 64-bit constant mask used in the AND method and causes a different code sequence, but the ROTATE method is actually better in this test case: with ROTATE, the final OR operation can be avoided, since rldimi can insert the rotated bits into the target register directly. So this patch also enhances SelectAndParts64 to prefer the ROTATE method when the two methods have the same cost and there are multiple bit groups that need to be ORed together.

Differential Revision: https://reviews.llvm.org/D25521



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284276 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-14 20:41:50 +00:00
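
A sketch of the condition the commit above relies on (an illustrative helper, not the actual selector code): a 64-bit constant qualifies when its high and low 32-bit halves match, so the low word can be materialized once and copied into the high word with rldimi.

    #include <cstdint>

    bool hasEqualHalves(uint64_t c) {
      return static_cast<uint32_t>(c) == static_cast<uint32_t>(c >> 32);
    }
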
Nirav Dave
080559c6d3 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r284151 which appears to be triggering LTO
failures on Hexagon.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284157 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-13 20:23:25 +00:00
Nirav Dave
19dc709f4b In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after upstream changes.

   Simplify Consecutive Merge Store Candidate Search

   Now that address aliasing is much less conservative, push through a
   simplified store-merging search which only checks for parallel stores
   through the chain subgraph. This is cleaner, as it separates the
   handling of non-interfering loads/stores from the store-merging logic.

   When merging stores, search up the chain through a single load, and
   find all possible stores by looking down through a load and a
   TokenFactor to all stores visited. This improves the quality of the
   output SelectionDAG and generally the output CodeGen (with some
   exceptions).

   Additional Minor Changes:

       1. Finishes removing unused AliasLoad code
       2. Unifies the chain aggregation in the merged stores across
       code paths
       3. Re-add the Store node to the worklist after calling
       SimplifyDemandedBits.
       4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
       arbitrary, but seemed sufficient to not cause regressions in
       tests.

   This finishes the change Matt Arsenault started in r246307 and
   jyknight's original patch.

   Many tests required some changes as memory operations are now
   reorderable. Some tests relying on the order were changed to use
   volatile memory operations

   Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

    CodeGen/AMDGPU/vgpr-spill-emergency-stack-slot-compute.ll -
      This test appears to work but no longer exhibits the spill behavior.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284151 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-13 19:20:16 +00:00