184 Commits

Author SHA1 Message Date
Matt Arsenault
fc8387b8d1 AMDGPU: Constant fold rcp node
When doing arcp optimization with a constant denominator,
this was leaving behind rcps with constant inputs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297248 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-08 00:48:46 +00:00
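
A minimal standalone sketch of the fold above (illustrative C++, not the
in-tree DAG combine; the helper name is hypothetical): when the rcp operand
is a known constant c, the node can be replaced with the plain constant 1/c.

    // Illustrative only: fold rcp(c) to the constant 1/c when the operand
    // is known at compile time.
    #include <cstdio>
    #include <optional>

    std::optional<float> foldRcp(std::optional<float> knownOperand) {
      if (!knownOperand)
        return std::nullopt;          // not a constant: keep the rcp node
      return 1.0f / *knownOperand;    // constant: fold the rcp away
    }

    int main() {
      if (auto folded = foldRcp(4.0f))
        std::printf("rcp(4.0) -> %g\n", *folded); // prints 0.25
    }
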
Matt Arsenault
27f4f2f4bc AMDGPU: Support v2i16/v2f16 packed operations
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296396 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-27 22:15:25 +00:00
Wei Ding
5d1e915557 AMDGPU: Replace FMAD with FMA when denormals are enabled.
Differential Revision: http://reviews.llvm.org/D29958

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296186 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-24 23:00:29 +00:00
Matt Arsenault
7d65faa5cc AMDGPU: Add cvt.pkrtz intrinsic
Convert llvm.SI.packf16 test uses

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295797 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-22 00:27:34 +00:00
Matt Arsenault
6de2a82753 AMDGPU: Remove llvm.AMDGPU.clamp intrinsic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295789 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-21 23:46:04 +00:00
Matt Arsenault
aac82e218f AMDGPU: Redefine clamp node as clamp 0.0-1.0
Change the implementation to use max instead of add.
min/max/med3 do not flush denormals regardless of the mode,
so it is OK to use them whether or not denormals are enabled.

Also allow using clamp with f16, and use knowledge
of dx10_clamp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295788 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-21 23:35:48 +00:00
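
A rough scalar illustration of the new definition (assumptions mine; the real
lowering targets the min/max/med3 machine operations, not source code): clamp
to [0.0, 1.0] built from max and min instead of arithmetic. NaN handling
(dx10_clamp clamps NaN to 0.0) is deliberately ignored here.

    #include <algorithm>
    #include <cstdio>

    // Sketch: clamp x to [0.0, 1.0] using max/min rather than an add-based
    // trick, mirroring the point that min/max do not flush denormals.
    float clamp01(float x) {
      return std::min(std::max(x, 0.0f), 1.0f);
    }

    int main() {
      std::printf("%g %g %g\n", clamp01(-2.0f), clamp01(0.5f), clamp01(3.0f));
      // prints: 0 0.5 1
    }
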
Matt Arsenault
13384c6c8a AMDGPU: Add pass to expand memcpy/memmove/memset
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294635 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-09 22:00:42 +00:00
Matt Arsenault
52b8adef73 AMDGPU: Fold fneg into fmin/fmax_legacy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293972 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-03 00:51:50 +00:00
Matt Arsenault
1c3956ed62 AMDGPU: Fold fneg into fminnum/fmaxnum
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293968 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-03 00:23:15 +00:00
Matt Arsenault
aa5760346f AMDGPU: Check if users of fneg can fold mods
In multi-use cases this can save a few instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293962 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-02 23:21:23 +00:00
Nirav Dave
529986a15d Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r293893, which was miscompiling lua on ARM and
breaking the bootstrap on x86-windows.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293915 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-02 18:24:55 +00:00
Nirav Dave
99b0642f83 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Recommiting after fixing X86 inc/dec chain bug.

    * Simplify Consecutive Merge Store Candidate Search

    Now that address aliasing is much less conservative, push through
    simplified store-merging search and chain alias analysis, which only
    checks for parallel stores through the chain subgraph. This is cleaner,
    as it separates the handling of non-interfering loads/stores from the
    store-merging logic.

    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.

    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads but then promote them to float operations, which require
    more expensive constant generation).

    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)

    Additional Minor Changes:

      1. Finishes removing unused AliasLoad code

      2. Unifies the chain aggregation in the merged stores across code
         paths

      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.

      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.

      5. Remove Chain dependencies of Memory operations on CopyFromReg
         nodes as these are captured by data dependence

      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.

      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)

      8. Store merging for the ARM target is restricted to 32-bit, as
         in some contexts invalid 64-bit operations are being
         generated. This can be removed once appropriate checks are
         added.

    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.

    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:

      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values, we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.

    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293893 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-02 14:39:42 +00:00
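
For readers unfamiliar with the transform being tuned here, a before/after
sketch of what store merging aims for (illustrative C++; little-endian layout
assumed, and the real transform operates on SelectionDAG nodes, not source):

    #include <cstdint>
    #include <cstring>

    // Before: four adjacent byte stores that a merging pass may recognize
    // as one contiguous 32-bit store.
    void storeBytes(uint8_t *p) {
      p[0] = 1; p[1] = 2; p[2] = 3; p[3] = 4;
    }

    // After: the same effect expressed as a single 32-bit store
    // (0x04030201 reproduces the byte pattern on a little-endian target).
    void storeMerged(uint8_t *p) {
      uint32_t v = 0x04030201u;
      std::memcpy(p, &v, sizeof v);
    }
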
Matt Arsenault
9f7e91552b AMDGPU: Use source modifiers with f16->f32 conversions
The operand types were defined to fit the fp16_to_fp node, which
has the half as an integer type. v_cvt_f32_f16 does support
source modifiers, so change this to have an FP type and modifiers.

For targets without legal f16, this requires recognizing the
bit operations and trying to produce them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293857 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-02 02:27:04 +00:00
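
The "bit operations" mentioned for targets without legal f16 amount to sign-bit
manipulation on the half's i16 container; a standalone sketch (halves shown as
raw bits, function names hypothetical):

    #include <cstdint>
    #include <cstdio>

    // fneg on a half stored as raw i16 bits: flip the sign bit.
    uint16_t fnegHalfBits(uint16_t h) { return h ^ 0x8000u; }

    // fabs on a half stored as raw i16 bits: clear the sign bit.
    uint16_t fabsHalfBits(uint16_t h) { return h & 0x7FFFu; }

    int main() {
      // 0x3C00 is 1.0 as a half; 0xBC00 is -1.0.
      std::printf("%04x %04x\n", fnegHalfBits(0x3C00), fabsHalfBits(0xBC00));
      // prints: bc00 3c00
    }
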
Matt Arsenault
2415a7067f AMDGPU: Cleanup fmin/fmax legacy function
Use a more specific subtarget check and combine hasOneUse checks

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293726 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-01 00:42:40 +00:00
Tom Stellard
f7f8a35213 Re-commit AMDGPU/GlobalISel: Add support for simple shaders
Fix build when global-isel is disabled and fix a warning.

Summary: We can select constant/global G_LOAD, global G_STORE, and G_GEP.

Reviewers: qcolombet, MatzeB, t.p.northover, ab, arsenm

Subscribers: mehdi_amini, vkalintiris, kzhuravl, wdng, nhaehnle, mgorny, yaxunl, tony-tye, modocache, llvm-commits, dberris

Differential Revision: https://reviews.llvm.org/D26730

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293551 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-30 21:56:46 +00:00
Tom Stellard
78e51c03b5 Revert "AMDGPU/GlobalISel: Add support for simple shaders"
This reverts commit r293503.

Revert while I investigate some of the buildbot failures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293509 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-30 17:42:41 +00:00
Tom Stellard
945c85d877 AMDGPU/GlobalISel: Add support for simple shaders
Summary: We can select constant/global G_LOAD, global G_STORE, and G_GEP.

Reviewers: qcolombet, MatzeB, t.p.northover, ab, arsenm

Subscribers: mehdi_amini, vkalintiris, kzhuravl, wdng, nhaehnle, mgorny, yaxunl, tony-tye, modocache, llvm-commits, dberris

Differential Revision: https://reviews.llvm.org/D26730

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293503 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-30 17:09:15 +00:00
Matthias Braun
88d207542b Cleanup dump() functions.
We had various ways of defining dump() functions in LLVM. Normalize
them (this should just consistently implement the things discussed in
http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).

For reference:
- Public headers should just declare the dump() method but not use
  LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:
  #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
  LLVM_DUMP_METHOD void MyClass::dump() {
    // print stuff to dbgs()...
  }
  #endif

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293359 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-28 02:02:38 +00:00
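
Putting the two rules together, a complete example of the convention (the
class name is hypothetical, and the LLVM_DUMP_METHOD fallback exists only so
the snippet compiles outside the LLVM tree):

    // Public header: plain declaration, no guards.
    class MyClass {
    public:
      void dump() const;
    };

    // Implementation file: guarded definition.
    #include <cstdio>
    #ifndef LLVM_DUMP_METHOD
    #define LLVM_DUMP_METHOD  // stand-in when building outside LLVM
    #endif

    #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
    LLVM_DUMP_METHOD void MyClass::dump() const {
      std::fprintf(stderr, "MyClass state...\n"); // in-tree code prints to dbgs()
    }
    #endif
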
Nirav Dave
d9031ef908 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r293184, which is failing in LTO builds.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293188 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 16:46:13 +00:00
Nirav Dave
dbb7a65598 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
* Simplify Consecutive Merge Store Candidate Search

    Now that address aliasing is much less conservative, push through
    simplified store-merging search and chain alias analysis, which only
    checks for parallel stores through the chain subgraph. This is cleaner,
    as it separates the handling of non-interfering loads/stores from the
    store-merging logic.

    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.

    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads but then promote them to float operations, which require
    more expensive constant generation).

    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)

    Additional Minor Changes:

      1. Finishes removing unused AliasLoad code

      2. Unifies the chain aggregation in the merged stores across code
         paths

      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.

      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.

      5. Remove Chain dependencies of Memory operations on CopyFromReg
         nodes as these are captured by data dependence

      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.

      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)

      8. Store merging for the ARM target is restricted to 32-bit, as
         in some contexts invalid 64-bit operations are being
         generated. This can be removed once appropriate checks are
         added.

    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.

    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:

      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values, we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.

    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293184 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 16:02:24 +00:00
Matt Arsenault
baa08a9804 AMDGPU: Fold fneg into round instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293127 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 01:25:36 +00:00
Matt Arsenault
8611e02394 AMDGPU: Propagate fast math flags in fneg combines
This can't be done for fma/mad, since it seems they can't have flags currently.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292818 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-23 19:08:34 +00:00
Jan Vesely
5709854b8d AMDGPU/R600: Serialize vector trunc stores to private AS
Add DUMMY_CHAIN SDNode to denote stores of interest

Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=28915
Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=30411

Differential Revision: https://reviews.llvm.org/D27964

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292651 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 21:24:26 +00:00
Matt Arsenault
261f60f486 AMDGPU: Disable some fneg combines unless nsz
For -(x + y) -> (-x) + (-y), if x == -y, this would
change the result from -0.0 to 0.0. Since the fma/fmad
combine is an extension of this problem, it also
applies there.

fmul should be fine, and I don't think any of the unary
operators or conversions should be a problem either.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292473 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 06:35:27 +00:00
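
The signed-zero hazard described above is easy to reproduce in plain C++,
which is why the combine now requires nsz (no signed zeros):

    #include <cmath>
    #include <cstdio>

    int main() {
      float x = 1.0f, y = -1.0f;      // x == -y
      float folded   = -(x + y);      // -(+0.0) == -0.0
      float combined = (-x) + (-y);   // -1.0 + 1.0 == +0.0
      std::printf("signbit: %d vs %d\n",
                  std::signbit(folded), std::signbit(combined));
      // prints: signbit: 1 vs 0 -- the rewrite flips the sign of zero
    }
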
Matt Arsenault
cd002582ba AMDGPU: Skip fneg/select combine if it can fold into other
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291792 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 18:58:15 +00:00
Matt Arsenault
9db1ec3d4d AMDGPU: Fold free fneg into sin
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291790 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 18:48:09 +00:00
Matt Arsenault
49dd8fcb21 AMDGPU: Fold fneg into fmul_legacy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291784 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 18:26:30 +00:00
Matt Arsenault
bd870734a5 AMDGPU: Fold fneg into rcp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291779 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 17:46:35 +00:00
Matt Arsenault
cca494fd03 AMDGPU: Fold fneg into fp_round
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291778 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 17:46:33 +00:00
Matt Arsenault
e652041f69 AMDGPU: Fold fneg into fp_extend
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291777 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 17:46:28 +00:00
Matt Arsenault
94bf68d551 AMDGPU: Fold fneg into fma or fmad
Patch mostly by Fiona Glaser

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291733 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 00:32:16 +00:00
Matt Arsenault
ef33822be5 AMDGPU: Fold fneg into fmul
Patch mostly by Fiona Glaser

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291732 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 00:23:20 +00:00
Matt Arsenault
bcf34bbbdd AMDGPU: Fold fneg into fadd
Patch mostly by Fiona Glaser

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291731 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 00:09:34 +00:00
Matt Arsenault
8694e2f853 AMDGPU: Pull fneg/fabs out of a select
Allows better source modifier usage.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291729 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-11 23:57:38 +00:00
Matt Arsenault
da59cd0847 AMDGPU: Add tests for HasMultipleConditionRegisters
This was enabled without many specific tests or a comment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291586 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-10 19:08:15 +00:00
Jan Vesely
0835374acb AMDGPU/R600: Don't use REGISTER_{LOAD,STORE} ISD nodes
This will make the transition to SCRATCH_MEMORY easier.

Differential Revision: https://reviews.llvm.org/D24746

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291279 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-06 21:00:46 +00:00
Jan Vesely
bf64cb107c AMDGPU/SI: Implement sendmsghalt intrinsic
v2: expose using amdgcn prefix

Differential Revision: https://reviews.llvm.org/D23511

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290977 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-04 18:06:55 +00:00
Matt Arsenault
e2b3286a26 AMDGPU: Invert cmp + select with constant
Canonicalize a select with a constant to the false side. This
enables more instruction shrinking opportunities since an
inline immediate can be used for the false side of v_cndmask_b32_e32.

This seems to usually be better but causes some code size regressions
in some tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290372 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-22 21:40:08 +00:00
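
The rewrite itself is ordinary select inversion, sketched below; the payoff is
hardware-specific, since (per the message) only the false operand of
v_cndmask_b32_e32 can be an inline immediate. Names and the constant are
illustrative.

    #include <cstdio>

    // cond ? K : x is equivalent to !cond ? x : K; inverting the compare
    // moves the constant K to the false side of the select.
    float selectOriginal(bool cond, float x) { return cond ? 4.0f : x; }
    float selectInverted(bool cond, float x) { return !cond ? x : 4.0f; }

    int main() {
      std::printf("%g %g\n", selectOriginal(true, 7.0f),
                  selectInverted(true, 7.0f)); // both print 4
    }
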
Matt Arsenault
15a4f4e4ce AMDGPU: Update isFPImmLegal for f16
I don't think this matters because ConstantFP is legal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290299 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-22 03:05:30 +00:00
Tom Stellard
38206ae07e AMDGPU/SI: Add a MachineMemOperand when lowering llvm.amdgcn.buffer.load.*
Reviewers: arsenm, nhaehnle, mareko

Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D27834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290184 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-20 17:19:44 +00:00
Matt Arsenault
56f1e4a76b AMDGPU: Fix asserting on returned tail calls
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289868 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 20:50:12 +00:00
Nirav Dave
e461ebb0f4 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
Reverting due to ARM MCJIT and MIPS LLD error.

This reverts commit r289659.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289667 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 16:43:44 +00:00
Nirav Dave
ee001a514b In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after removing load-store factoring through
token factors in favor of improved token factor operand pruning.

Simplify Consecutive Merge Store Candidate Search

Now that address aliasing is much less conservative, push through
simplified store merging search which only checks for parallel stores
through the chain subgraph. This is cleaner as the separation of
non-interfering loads/stores from the store-merging logic.

When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).

Additional Minor Changes:

   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.

This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.

Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations

Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But it looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289659 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 15:44:26 +00:00
Stephan Bergmann
20a600c431 Replace APFloatBase static fltSemantics data members with getter functions
At least the plugin used by the LibreOffice build
(<https://wiki.documentfoundation.org/Development/Clang_plugins>) indirectly
uses those members (through inline functions in LLVM/Clang include files that in
turn use them), but they are not exported by utils/extract_symbols.py on Windows,
and accessing data across DLL/EXE boundaries on Windows is generally
problematic.

Differential Revision: https://reviews.llvm.org/D26671

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289647 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 11:57:17 +00:00
Matt Arsenault
a151ead662 AMDGPU: Fix i128 mul
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289231 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 17:49:14 +00:00
Nirav Dave
2c96583422 Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."
This reverts commit r289221, which appears to be triggering an assertion.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289226 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 17:18:24 +00:00
Nirav Dave
615f3ccd10 In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.
Retrying after fixing an overly aggressive load-store forwarding optimization.

Simplify Consecutive Merge Store Candidate Search

Now that address aliasing is much less conservative, push through
simplified store merging search which only checks for parallel stores
through the chain subgraph. This is cleaner as the separation of
non-interfering loads/stores from the store-merging logic.

When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).

Additional Minor Changes:

   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.

This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.

Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations

Noteworthy tests:

    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.

    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -

      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.

    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?

    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.

    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores

    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls

    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores

    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.

    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But it looks like the code got better due to the memory
      operations being recognized as non-aliasing.

    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.

Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle

Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel

Differential Revision: https://reviews.llvm.org/D14834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289221 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-09 16:15:12 +00:00
Tom Stellard
c53f76cc0b AMDGPU: Add S_SETREG instructions to fix fdiv precision issues.
Patch By: Wei Ding

Summary: This patch fixes the fdiv precision issues.

Reviewers: b-sumner, cfang, wdng, arsenm

Subscribers: kzhuravl, nhaehnle, yaxunl, tony-tye

Differential Revision: https://reviews.llvm.org/D26424

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288879 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-07 02:42:15 +00:00
Matt Arsenault
350d0dab1e AMDGPU: Refactor exp instructions
Structure the definitions a bit more like the other classes.

The main change here is to split EXP with the done bit set
into a separate opcode, so we can set mayLoad = 1 so that it won't
be reordered before the other exp stores, since there is a special
constraint that if the done bit is set then this should be the last
exp in the shader.

Previously all exp instructions were inferred to have unmodeled
side effects.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288695 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-05 20:23:10 +00:00
Stanislav Mekhanoshin
ab827bdc35 [AMDGPU] Allow hoisting of comparisons out of a loop and eliminate condition copies
CodeGen Prepare sinks comparisons close to a user if we have only one register
for conditions. For AMDGPU we have many SGPRs capable of holding vector
conditions. Changed the backend to report that we have many condition
registers. That way the IR LICM pass will hoist an invariant comparison out of
a loop and CodeGen Prepare will not sink it.

With that done, a condition is calculated in one block and used in another.
The current behavior is to store a workitem's condition in a VGPR using
v_cndmask_b32 and then restore it with yet another v_cmp instruction from that
v_cndmask's result. To mitigate the issue, propagation of the source SGPR pair
in place of the v_cmp is implemented. An additional side effect is that we may
consume fewer VGPRs at the cost of more SGPRs when multiple conditions need to
be held, which is a clear win in most cases.

Differential Revision: https://reviews.llvm.org/D26114

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288053 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-28 18:58:49 +00:00
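
In source-level terms, the behavior this enables looks roughly like the sketch
below (illustrative C++; the actual change is in backend target hooks, and
LICM operates on LLVM IR): with many condition registers reported, the
invariant comparison stays hoisted instead of being sunk back next to its user.

    // Before: the loop-invariant comparison is evaluated inside the loop
    // (the shape CodeGen Prepare's sinking would produce).
    void beforeHoist(const int *in, int *out, int n, int threshold) {
      for (int i = 0; i < n; ++i)
        out[i] = (threshold > 0) ? in[i] : -in[i];
    }

    // After: the comparison is computed once and the loop reuses the
    // stored condition.
    void afterHoist(const int *in, int *out, int n, int threshold) {
      const bool positive = (threshold > 0); // hoisted invariant compare
      for (int i = 0; i < n; ++i)
        out[i] = positive ? in[i] : -in[i];
    }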