Commit Graph

107896 Commits

Justin Lebar
d8660fa5dc [NVPTX] Implement __nvvm_atom_add_gen_d builtin.
Summary:
This just seems to have been an oversight.  We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.
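
As a hedged illustration (not part of the commit), the new scopeless builtin
can be called from CUDA device code roughly like this; the function and
variable names are made up:

  // Generic-address-space f64 atomic add with no explicit scope ("cta", etc.).
  // __nvvm_atom_add_gen_d returns the previous value stored at *sum.
  __device__ void accumulate(double *sum, double v) {
    __nvvm_atom_add_gen_d(sum, v);
  }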

Reviewers: tra

Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D39638

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317623 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 22:10:54 +00:00
Dave Lee
595a4486c1 Allow yaml2obj to order implicit sections for ELF
Summary:
This change allows yaml input to control the order of implicitly added sections
(`.symtab`, `.strtab`, `.shstrtab`). The order is controlled by adding a
placeholder section of the given name to the Sections field.

This change is to support changes in D39582, where it is desirable to control
the location of the `.dynsym` section.

Reviewers: compnerd, jakehehrlich

Reviewed By: jakehehrlich

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39749

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317622 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 22:05:24 +00:00
Dinar Temirbulatov
c7c5ad7774 [SLPVectorizer] Failure to beneficially vectorize 'copyable' elements in integer binary ops.
This patch tries to improve vectorization of the following code:

    void add1(int * __restrict dst, const int * __restrict src) {
      *dst++ = *src++;
      *dst++ = *src++ + 1;
      *dst++ = *src++ + 2;
      *dst++ = *src++ + 3;
    }
Allows vectorization even if the very first operation is not a binary add,
but just a load.

Fixes PR34619 and other issues related to the previous commit.

Reviewers: spatel, mzolotukhin, mkuper, hfinkel, RKSimon, filcab, ABataev

Reviewed By: ABataev, RKSimon

Subscribers: llvm-commits, RKSimon

Differential Revision: https://reviews.llvm.org/D28907


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317618 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 21:25:34 +00:00
Mitch Phillips
56fec39d44 Extend SpecialCaseList to allow users to blame matches on entries in the file.
Summary:
Extends SCL functionality so that users can find, via SpecialCaseList::inSectionBlame(...), the line number in the file from which the SCL was built.

Also removes the need to compile the SCL before use. As the matcher now contains a list of regexes to test against instead of a single regex, the regexes can be individually built on each insertion rather than one large compilation at the end of construction.

This change also fixes a bug where blank lines would cause the parser to become out-of-sync with the line number. An error on line `k` was being reported as being on line `k - num_blank_lines_before_k`.

Note: This change has a cyclical dependency on D39486. Both these changes must be submitted at the same time to avoid a build breakage.

Reviewers: vlad.tsyrklevich

Reviewed By: vlad.tsyrklevich

Subscribers: kcc, pcc, llvm-commits

Differential Revision: https://reviews.llvm.org/D39485

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317617 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 21:16:46 +00:00
Craig Topper
c062c8dc61 [CodeGenPrepare] Fix typo in comment. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317614 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 20:56:17 +00:00
Graham Yiu
5363e7a31e Use new vector insert half-word and byte instructions when we see insertelement on '8 x i16' and '16 x i8' types. Also extended existing lit testcase to cover these cases.
Differential Revision: https://reviews.llvm.org/D34630

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317613 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 20:55:43 +00:00
Paul Robinson
e9d757c19e [DWARFv5] Support DW_FORM_strp in the .debug_line header.
Supporting this form in .debug_line.dwo will be done as a follow-up.

Differential Revision: https://reviews.llvm.org/D33155

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317607 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 19:57:12 +00:00
Craig Topper
7e1904e9c8 Recommit r317510 "[InstCombine] Pull shifts through a select plus binop with constant"
The Hexagon test should be fixed now.

Original commit message:

This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select.

This can allow us to get the select closer to other selects to enable removing one.
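
A rough source-level illustration of the transform (the patch itself works on
IR, and the exact output form is up to InstCombine; the names and constants
here are made up):

  // Before: the shift is applied to a select that conditionally runs the binop.
  unsigned before(bool c, unsigned x) {
    unsigned t = c ? (x + 5u) : x;   // select between (x binop C1) and x
    return t << 2;                   // shift of the select
  }

  // After pulling the shift through: the shift applies directly to x and the
  // binop constant is pre-shifted, leaving an equivalent result.
  unsigned after(bool c, unsigned x) {
    unsigned s = x << 2;
    return c ? (s + (5u << 2)) : s;
  }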

Differential Revision: https://reviews.llvm.org/D39222

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317600 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 18:47:24 +00:00
Craig Topper
3c64d8ff3a [InstCombine] Update stale comment. NFC
DataLayout is no longer optional, so the comment didn't match what the code currently does.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317594 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 17:37:32 +00:00
Krzysztof Parzyszek
5933b2471c [Hexagon] Make a test more flexible in HexagonLoopIdiomRecognition
An "or" that sets the sign-bit can be replaced with a "xor", if
the sign-bit was known to be clear before. With some changes to
instruction combining, the simple sign-bit check was failing.
Replace it with a more flexible one to catch more cases.
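
The underlying identity, as a small hedged example (not code from the patch):

  #include <cassert>
  #include <cstdint>

  void demo(uint32_t x) {
    if ((x & 0x80000000u) == 0)   // sign bit known to be clear
      // Setting the sign bit with "or" and flipping it with "xor" agree,
      // since both turn a known-0 bit into 1 and leave the other bits alone.
      assert((x | 0x80000000u) == (x ^ 0x80000000u));
  }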


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317592 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 17:05:54 +00:00
Florian Hahn
9983524662 [AArch64][SVE] Asm: Add support for (ADD|SUB)_ZZZ
Patch [5/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39091


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317591 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 16:58:13 +00:00
Florian Hahn
6cf02b953e [AArch64][SVE] Asm: Add SVE (Z) Register definitions and parsing support
Patch [3/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

To summarise, this patch adds:

 * SVE register definitions
 * Methods to parse SVE register operands
 * Methods to print SVE register operands
 * RegKind SVEDataVector to distinguish it from other data types like scalar register or Neon vector.
 * k_SVEDataRegister and SVEDataRegOp to describe SVE registers (which will be extended by further patches with e.g. ElementWidth and the shift-extend type).


Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39089


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317590 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 16:45:48 +00:00
Craig Topper
70b92b659f [SelectionDAG] Fix typo in comment. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317588 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 16:32:31 +00:00
Florian Hahn
861d2963c7 [AArch64][SVE] Asm: Set SVE as unsupported feature for existing scheduler models.
Patch [4/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

We add SVE as an unsupported feature for CPUs that don't have SVE, to prevent errors from scheduler models saying they lack information for these instructions.

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39090


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317582 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 15:03:11 +00:00
Petar Jovanovic
8cec6c4916 Reland "Correct dwarf unwind information in function epilogue for X86"
Reland r317100 with a minor fix to the ComputeCommonTailLength function in
BranchFolding.cpp. Skipping the top block of CFI instructions needs to be
done at several more return points in ComputeCommonTailLength().

Original r317100 message:

"Correct dwarf unwind information in function epilogue for X86"

This patch aims to provide correct dwarf unwind information in function
epilogue for X86.

It consists of two parts. The first part inserts CFI instructions that set
appropriate cfa offset and cfa register in emitEpilogue() in
X86FrameLowering. This part is X86 specific.

The second part is platform independent and ensures that:

- CFI instructions do not affect code generation
- Unwind information remains correct when a function is modified by
  different passes. This is done in a late pass by analyzing information
  about cfa offset and cfa register in BBs and inserting additional CFI
  directives where necessary.

Changed CFI instructions so that they:

- are duplicable
- are not counted as instructions when tail duplicating or tail merging
- can be compared as equal

Added CFIInstrInserter pass:

- analyzes each basic block to determine cfa offset and register valid at
  its entry and exit
- verifies that outgoing cfa offset and register of predecessor blocks match
  incoming values of their successors
- inserts additional CFI directives at basic block beginning to correct the
  rule for calculating CFA

Having CFI instructions in function epilogue can cause incorrect CFA
calculation rule for some basic blocks. This can happen if, due to basic
block reordering, or the existence of multiple epilogue blocks, some of the
blocks have wrong cfa offset and register values set by the epilogue block
above them.

CFIInstrInserter is currently run only on X86, but can be used by any target
that implements support for adding CFI instructions in epilogue.

Patch by Violeta Vukobrat.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317579 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 14:40:27 +00:00
Alexey Bataev
25ff19d87a [SLP] Fix PR35047: Fix default cost model for cast op in X86.
Summary:
The cost calculation for the default case on the X86 target does not always
follow the correct path because of a missing 4th argument in the
`BaseT::getCastInstrCost()` call. Added this missing parameter.

Reviewers: hfinkel, mkuper, RKSimon, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39687

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317576 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 14:23:44 +00:00
Kristof Beyls
81d5ecb65c Mark intentional fall-through with LLVM_FALLTHROUGH.
... to silence gcc 7's default -Wimplicit-fallthrough.
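
For reference, a minimal sketch of the annotation (the switch itself is
illustrative, not the code touched by this commit):

  #include "llvm/Support/Compiler.h"

  int classify(int Kind) {
    switch (Kind) {
    case 0:
      // ... handle 0, then deliberately continue into case 1 ...
      LLVM_FALLTHROUGH;   // expands to [[clang::fallthrough]] or similar
    case 1:
      return 1;
    default:
      return -1;
    }
  }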


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317573 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 13:31:52 +00:00
Florian Hahn
c5e08cf67c [AArch64][SVE] Asm: Replace 'IsVector' by 'RegKind' in AArch64AsmParser (NFC)
Patch [2/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions.

This change is a non-functional change that adds RegKind as an alternative to 'isVector', preparing it for newer types (SVE data vectors and predicate vectors) that will be added in subsequent patches (the SVE data vector is added as part of this patch set).

Patch by Sander De Smalen.

Reviewed by: rengolin

Differential Revision: https://reviews.llvm.org/D39088


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317569 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 13:07:50 +00:00
Kristof Beyls
55f6a859cc Silence C4715 warning from MSVC (NFC).
The warning started triggering after r317560.
This commit silences it in the same way as was previously done in a similar
situation; see
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20140915/236088.html


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317568 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 11:54:00 +00:00
Kristof Beyls
b79469ca2f [GlobalISel] Enable legalizing non-power-of-2 sized types.
This changes the interface for how targets describe how to legalize; see the
description below.

1. Interface for targets to describe how to legalize.

In GlobalISel, the API in the LegalizerInfo class is the main interface
for targets to specify which types are legal for which operations, and
what to do to turn illegal type/operation combinations into legal ones.

For each operation the type sizes that can be legalized without having
to change the size of the type are specified with a call to setAction.
This isn't different from how GlobalISel worked before. For example, for a
target that supports 32 and 64 bit adds natively:

  for (auto Ty : {s32, s64})
    setAction({G_ADD, 0, Ty}, Legal);

or for a target that needs a library call for a 32 bit division:

  setAction({G_SDIV, s32}, Libcall);

The main conceptual change to the LegalizerInfo API is in specifying
how to legalize the type sizes for which a change of size is needed. For
example, continuing the example above: how do you specify that all types from
i1 to i8388607 (apart from s32 and s64, which are legal) need to be legalized
and expressed in terms of operations on the available legal sizes
(again, s32 and s64 in this case)? Before, the implementation only
allowed specifying power-of-2-sized types (e.g. setAction({G_ADD, 0,
s128}, NarrowScalar)).  A worse limitation was that if you wanted to
specify how to legalize all the sized types allowed by the LLVM-IR
LangRef, i1 to i8388607, you'd have to call setAction 8388607-3 times
and would probably need a lot of memory to store all of these
specifications.

Instead, the legalization actions that need to change the size of the
type are specified now using a "SizeChangeStrategy".  For example:

   setLegalizeScalarToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerAndNarrowToLargest);

This example indicates that, for type sizes that have a larger size which can
be legalized towards, legalization is done by widening to that larger size.
For example, G_ADD on s17 will be legalized by first doing WidenScalar
to make it s32, after which it's legal.
The "NarrowToLargest" indicates what to do if there is no larger size
that can be legalized towards. E.g. G_ADD on s92 will be legalized by
doing NarrowScalar to s64.

Another example, taken from the ARM backend, is:
   for (unsigned Op : {G_SDIV, G_UDIV}) {
     setLegalizeScalarToDifferentSizeStrategy(Op, 0,
         widenToLargerTypesUnsupportedOtherwise);
     if (ST.hasDivideInARMMode())
       setAction({Op, s32}, Legal);
     else
       setAction({Op, s32}, Libcall);
   }

For this example, G_SDIV on s8, on a target without a divide
instruction, would be legalized by first doing action (WidenScalar,
s32), followed by (Libcall, s32).

The same principle is also followed when the number of vector lanes in
vector data types needs to be changed, e.g.:

   setAction({G_ADD, LLT::vector(8, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(16, 8)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(8, 16)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(2, 32)}, LegalizerInfo::Legal);
   setAction({G_ADD, LLT::vector(4, 32)}, LegalizerInfo::Legal);
   setLegalizeVectorElementToDifferentSizeStrategy(
       G_ADD, 0, widenToLargerTypesUnsupportedOtherwise);

As currently implemented here, vector types are legalized by first
making the vector element size legal, and then making the number
of lanes legal. The strategy to follow in the first step is set by a
call to setLegalizeVectorElementToDifferentSizeStrategy; see the example
above.  The strategy followed in the second step is
"moreToWiderTypesAndLessToWidest" (see the code for its definition),
which widens vectors to more elements so they map to natively supported
vector widths, or, when there isn't a legal wider vector, splits the
vector to map it to the widest vector supported.

Therefore, for the above specification, some example legalizations are:
  * getAction({G_ADD, LLT::vector(3, 3)})
    returns {WidenScalar, LLT::vector(3, 8)}
  * getAction({G_ADD, LLT::vector(3, 8)})
    then returns {MoreElements, LLT::vector(8, 8)}
  * getAction({G_ADD, LLT::vector(20, 8)})
    returns {FewerElements, LLT::vector(16, 8)}


2. Key implementation aspects.

How to legalize a specific (operation, type index, size) tuple is
represented by mapping intervals of integers representing a range of
size types to an action to take, e.g.:

       setScalarAction({G_ADD, LLT:scalar(1)},
                       {{1, WidenScalar},  // bit sizes [ 1, 31[
                        {32, Legal},       // bit sizes [32, 33[
                        {33, WidenScalar}, // bit sizes [33, 64[
                        {64, Legal},       // bit sizes [64, 65[
                        {65, NarrowScalar} // bit sizes [65, +inf[
                       });

Please note that most of the code to do the actual lowering of
non-power-of-2 sized types is currently missing; this is just trying to
make it possible for targets to specify what is legal and how non-legal
types should be legalized.  Probably quite a bit of further work is
needed in the actual legalizing and the other passes in GlobalISel to
support non-power-of-2 sized types.

I hope the documentation in LegalizerInfo.h and the examples provided in the
various {Target}LegalizerInfo.cpp files and LegalizerInfoTest.cpp explain well
enough how this is meant to be used.

This drops the need for LLT::{half,double}...Size().


Differential Revision: https://reviews.llvm.org/D30529



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317560 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 10:34:34 +00:00
Serguei Katkov
e82a7f1476 [CGP] Disable Select instruction handling in optimizeMemoryInst. NFC
This patch disables the handling of selects in the optimization
extending the scope of optimizeMemoryInst.

The optimization itself is disabled by default.
The idea here is just to turn the optimization on step by step.

Specifically, the optimization will first be enabled only for Phi nodes,
then select instructions will be added.

In case someone complains about performance, it will be easier to
detect which part of the optimization is responsible for that.

Differential Revision: https://reviews.llvm.org/D36073



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317555 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 09:43:08 +00:00
Bjorn Steinbrink
c1c411e7a8 [X86] Don't clobber reserved registers with stack adjustments
Summary:
Calls using invoke in funclet based functions are assumed to clobber
all registers, which causes the stack adjustment using pops to consider
all registers not defined by the call to be undefined, which can
unfortunately include the base pointer, if one is needed.

To prevent this (and possibly other hazards), skip reserved registers
when looking for candidate registers.

This fixes issue #45034 in the Rust compiler.

Reviewers: mkuper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39636

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317551 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 08:50:21 +00:00
Craig Topper
c305f3d45a [X86] Add patterns to fold a 64-bit load into the EVEX vcvtph2ps instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317548 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 07:13:07 +00:00
Craig Topper
7f4581842b [X86] Add patterns for folding a v16i8 with the VEX vcvtph2ps intrinsics.
Disable the peephole pass to prove that the pattern is working.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317547 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 07:13:06 +00:00
Craig Topper
0c6a9e1d8e [X86] Add support for using EVEX instructions for the legacy vcvtph2ps intrinsics.
Looks like there are some missed load-folding opportunities for i64 loads.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317544 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 07:13:03 +00:00
Craig Topper
f14ad9a908 [X86] Use IMPLICIT_DEF in VEX/EVEX vcvtss2sd/vcvtsd2ss patterns instead of a COPY_TO_REGCLASS.
ExeDepsFix pass should take care of making the registers match.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317542 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 04:44:22 +00:00
Craig Topper
0b9dcde0fa [X86] Remove 'Requires' from instructions with no patterns. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317541 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 04:44:21 +00:00
Davide Italiano
59aa4918d0 [Support/UNIX] posix_fallocate() can fail with EINVAL.
According to the docs on opengroup.org, the function can return
EINVAL if:

The len argument is less than zero, or the offset argument is less
than zero, or the underlying file system does not support this
operation.

I'd say it's a peculiar choice (when EOPNOTSUPP is right there), but
let's keep POSIX happy for now. This was independently discovered
by Mark Millard (on FreeBSD/ZFS).
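
A hedged sketch of the resulting caller-side pattern (not the actual
lib/Support code; the helper and its policy are made up):

  #include <cerrno>
  #include <fcntl.h>

  // posix_fallocate returns 0 on success or an error number directly
  // (it does not set errno). On some file systems (e.g. ZFS) it can
  // return EINVAL to mean "not supported", so treat that like
  // EOPNOTSUPP/ENOTSUP and let the caller fall back.
  bool tryPreallocate(int FD, off_t Size) {
    int Err = posix_fallocate(FD, 0, Size);
    if (Err == 0)
      return true;
    if (Err == EINVAL || Err == EOPNOTSUPP || Err == ENOTSUP)
      return false;  // unsupported here: fall back to plain writes
    return false;    // hard error: a real caller would report Err
  }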

Quickly ack'ed by Rui on IRC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317535 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 00:47:04 +00:00
Adrian Prantl
0227fe59a9 Make DIExpression::createFragmentExpression() return an Optional.
We can't safely split arithmetic into multiple fragments because we
can't express carry-over between fragments.
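
A hedged sketch of the new calling convention (the argument names here are
assumptions, not taken from the patch):

  // Callers must now cope with fragment creation failing.
  if (Optional<DIExpression *> Frag = DIExpression::createFragmentExpression(
          Expr, OffsetInBits, SizeInBits)) {
    // ... use *Frag ...
  } else {
    // Arithmetic carry-over between fragments can't be expressed; the
    // caller has to drop or keep the original expression instead.
  }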

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317534 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 00:45:34 +00:00
Davide Italiano
964a48a5b6 [IPO/LowerTypeTests] Skip blockaddress(es) when replacing uses.
Blockaddresses refer to the function itself; therefore, replacing them
would cause an assertion in doRAUW.
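
For context, a hedged C example of the kind of construct that produces a
blockaddress (GNU computed-goto extension; illustrative, not the test case
from the bug):

  // The label's address is lowered to blockaddress(@get_label, %again),
  // a constant that refers back to the enclosing function itself.
  void *get_label(void) {
  again:
    return &&again;
  }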

Fixes https://bugs.llvm.org/show_bug.cgi?id=35201

This was found when trying CFI on a proprietary kernel by Dmitry Mikulin.

Differential Revision:  https://reviews.llvm.org/D39695

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317527 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 00:09:25 +00:00
Matt Arsenault
db6cc311a0 AMDGPU: Remove redundant combine
This combine was already done in two places. The
generic combiner has already done this since
r217610, for adds (with a single use).

This one was added in r303641, and added support for handling
or as well. r313251 later added support to the generic
combine for or. It also turns out the isOrEquivalentToAdd
check is not necessary for this combine.

Additionally, we already reproduce this combine in yet
another place in the backend, although in that version
multiple uses of the add are still folded if it will
allow a fold into the addressing mode. That version needs
to be improved to understand ors though, as well as the
correct legal offsets for private.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317526 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-07 00:06:32 +00:00
Vedant Kumar
0ffa8796b2 [DebugInfo] Unify logic to merge DILocations. NFC.
This makes DILocation::getMergedLocation() do what its comment says it
does when merging locations for an Instruction: set the common inlinedAt
scope. This simplifies Instruction::applyMergedLocation() a bit.

Testing: check-llvm, check-clang

Differential Revision: https://reviews.llvm.org/D39628

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317524 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 23:15:21 +00:00
Simon Dardis
b67df9f5a0 [Support][Chrono] Use explicit cast of text output of time values.
rL316419 exposed a platform-specific issue where the type of the values
passed to llvm::format could differ from the type expected by the format string.

Debian unstable for mips uses long long int for std::chrono::duration,
while x86_64 uses long int.

For mips, this resulted in the value being corrupted when rendered to a
string. Address this by explicitly casting the result of the duration_cast
to the type specified in the format string.
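
A minimal sketch of the idea (illustrative, not the actual
lib/Support/Chrono.cpp change): cast the count to a fixed type so it always
matches the printf-style format string.

  #include <chrono>
  #include <cstdio>

  void printSeconds(std::chrono::system_clock::duration D) {
    auto Secs = std::chrono::duration_cast<std::chrono::seconds>(D);
    // Without the cast, count() may be long on one platform and long long
    // on another, silently mismatching "%lld".
    std::printf("%lld s\n", static_cast<long long>(Secs.count()));
  }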

Reviewers: sammccall

Differential Revision: https://reviews.llvm.org/D39597


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317523 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 23:01:46 +00:00
Adrian Prantl
790be31f8c InstCombine: salvage the debug info of DCE'ed add instructions.
rdar://problem/31209283

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317522 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 22:49:39 +00:00
Craig Topper
acf8758c20 [X86] Make FeatureAVX512 imply FeatureF16C.
The EVEX to VEX pass is already assuming this is true under AVX512VL. We had special patterns to use zmm instructions if VLX and F16C weren't available.

Instead just make AVX512 imply F16C to make the EVEX to VEX behavior explicitly legal and remove the extra patterns.

All known CPUs with AVX512 have F16C so this should be safe for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317521 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 22:49:04 +00:00
Craig Topper
490bc3940d [X86] Make FeatureAVX512 imply FeatureFMA.
Previously our VEX patterns were checking Subtarget.hasFMA(), which checked FMA || AVX512, so we were already behaving as if AVX512 implied FMA. That meant we'd allow VEX-encoded 128/256-bit FMA when AVX512F was enabled but AVX512VL was off, regardless of the FMA flag.

EVEX to VEX also transforms scalar EVEX FMA instructions to their VEX versions even without the FMA flag. Similarly for 128/256 under AVX512VL.

So this makes AVX512 imply FeatureFMA to make our current behavior explicit.

All known CPUs that support AVX512 have VEX FMA instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317520 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 22:49:01 +00:00
Sanjay Patel
618cf29088 [ValueTracking] readonly (const) is a requirement for converting sqrt to llvm.sqrt; nnan is not
As discussed in D39204, this is effectively a revert of rL265521 which required nnan 
to vectorize sqrt libcalls based on the old LangRef definition of llvm.sqrt. Now that
the definition has been updated so the libcall and intrinsic have the same semantics
apart from potentially setting errno, we can remove the nnan requirement.

We have the right check to know that errno is not set:

if (!ICS.onlyReadsMemory())

...ahead of the switch.

This will solve https://bugs.llvm.org/show_bug.cgi?id=27435 assuming that's being 
built for a target with -fno-math-errno.
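
A hedged example of the kind of loop this affects (illustrative, not the
exact test case from that bug):

  // Built with -fno-math-errno, the sqrt call does not set errno, so the
  // compiler can treat it as const/readonly and map it to llvm.sqrt; no
  // nnan/fast-math is needed for the loop to vectorize.
  #include <math.h>

  void sqrt_all(double *a, int n) {
    for (int i = 0; i < n; ++i)
      a[i] = sqrt(a[i]);
  }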

Differential Revision: https://reviews.llvm.org/D39642


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317519 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 22:40:09 +00:00
Hans Wennborg
e209a1a12d Revert r317510 "[InstCombine] Pull shifts through a select plus binop with constant"
This broke the CodeGen/Hexagon/loop-idiom/pmpy-mod.ll test on a bunch of buildbots.

> This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select.
>
> This can allow us to get the select closer to other selects to enable removing one.
>
> Differential Revision: https://reviews.llvm.org/D39222
>
> git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317510 91177308-0d34-0410-b5e6-96231b3b80d8

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317518 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 22:28:02 +00:00
Xinliang David Li
cebfaaf903 Fix comment /NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317514 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 21:57:51 +00:00
Bjorn Pettersson
4cbab70b62 [MIRPrinter] Use %subreg.xxx syntax for subregister index operands
Summary:
Print %subreg.<subregidxname> instead of just the subregister
index when printing immediate operands corresponding to subreg
indices in INSERT_SUBREG, EXTRACT_SUBREG, SUBREG_TO_REG and
REG_SEQUENCE.

Reviewers: qcolombet, MatzeB

Reviewed By: MatzeB

Subscribers: nhaehnle, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D39696

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317513 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 21:46:06 +00:00
Craig Topper
aceaaf0aec [InstCombine] Pull shifts through a select plus binop with constant
This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select.

This can allow us to get the select closer to other selects to enable removing one.

Differential Revision: https://reviews.llvm.org/D39222

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317510 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 21:07:22 +00:00
Graham Yiu
e005ea7d87 Fix buildbot breakages from r317503. Add parentheses to assignment when using result as a condition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317508 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 21:04:19 +00:00
Graham Yiu
20a90ab746 Adds code to PPC ISEL lowering to recognize byte inserts from vector_shuffles, and use P9 shift and vector insert byte instructions instead of vperm. Extends tests from vector insert half-word.
Differential Revision: https://reviews.llvm.org/D34497

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317503 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 20:18:30 +00:00
Dehao Chen
561a742524 Include already promoted counts when computing SUM for VP.
Summary: When computing the SUM for indirect call promotion, if the callsite is already promoted in the profile, it will be promoted before ICP. In the current implementation, ICP only sees remaining counts in SUM. This may cause extra indirect call targets being promoted. This patch updates the SUM to include the counts already promoted earlier. This way we do not end up promoting too many indirect call targets.
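
A hedged numeric illustration (the counts are made up):

  Targets at one indirect callsite:
    A: 700 calls (already promoted earlier, recorded in the profile)
    B: 200 calls, C: 100 calls (still indirect)
  Old SUM seen by ICP = 200 + 100 = 300, so B looks like ~67% of the total.
  New SUM = 700 + 200 + 100 = 1000, so B is only 20% of the total and ICP
  is less likely to over-promote it.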

Reviewers: tejohnson

Reviewed By: tejohnson

Subscribers: llvm-commits, sanjoy

Differential Revision: https://reviews.llvm.org/D38763

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317502 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 19:52:49 +00:00
Guozhi Wei
416cdcce39 [PPC] Use xxbrd to speed up bswap64
Power doesn't have bswap instructions, so llvm generates the following code sequence for bswap64.

  rotldi   5, 3, 16
  rotldi   4, 3, 8
  rotldi   9, 3, 24
  rotldi   10, 3, 32
  rotldi   11, 3, 48
  rotldi   12, 3, 56
  rldimi 4, 5, 8, 48
  rldimi 4, 9, 16, 40
  rldimi 4, 10, 24, 32
  rldimi 4, 11, 40, 16
  rldimi 4, 12, 48, 8
  rldimi 4, 3, 56, 0

But Power9 has vector bswap instructions, which can also be used to speed up the scalar bswap intrinsic. With this patch, bswap64 can be translated to:

  mtvsrdd 34, 3, 3
  xxbrd 34, 34
  mfvsrld 3, 34

Differential Revision: https://reviews.llvm.org/D39510



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317499 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 19:09:38 +00:00
Matt Arsenault
bd04b64cd1 AMDGPU: Select v_mad_u64_u32 and v_mad_i64_i32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317492 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 17:04:37 +00:00
Sanjay Patel
00e900afdb [IR] redefine 'UnsafeAlgebra' / 'reassoc' fast-math-flags and add 'trans' fast-math-flag
As discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-November/107104.html
and again more recently:
http://lists.llvm.org/pipermail/llvm-dev/2017-October/118118.html

...this is a step in cleaning up our fast-math-flags implementation in IR to better match
the capabilities of both clang's user-visible flags and the backend's flags for SDNode.

As proposed in the above threads, we're replacing the 'UnsafeAlgebra' bit (which had the 
'umbrella' meaning that all flags are set) with a new bit that only applies to algebraic 
reassociation - 'AllowReassoc'.

We're also adding a bit to allow approximations for library functions called 'ApproxFunc' 
(this was initially proposed as 'libm' or similar).

...and we're out of bits. 7 bits ought to be enough for anyone, right? :) FWIW, I did 
look at getting this out of SubclassOptionalData via SubclassData (spacious 16-bits), 
but that's apparently already used for other purposes. Also, I don't think we can just 
add a field to FPMathOperator because Operator is not intended to be instantiated. 
We'll defer movement of FMF to another day.

We keep the 'fast' keyword. I thought about removing that, but seeing IR like this:
%f.fast = fadd reassoc nnan ninf nsz arcp contract afn float %op1, %op2
...made me think we want to keep the shortcut synonym.

Finally, this change is binary incompatible with existing IR as seen in the 
compatibility tests. This statement:
"Newer releases can ignore features from older releases, but they cannot miscompile 
them. For example, if nsw is ever replaced with something else, dropping it would be 
a valid way to upgrade the IR." 
( http://llvm.org/docs/DeveloperPolicy.html#ir-backwards-compatibility )
...provides the flexibility we want to make this change without requiring a new IR 
version. Ie, we're not loosening the FP strictness of existing IR. At worst, we will 
fail to optimize some previously 'fast' code because it's no longer recognized as 
'fast'. This should get fixed as we audit/squash all of the uses of 'isFast()'.

Note: an inter-dependent clang commit to use the new API name should closely follow
this commit.

Differential Revision: https://reviews.llvm.org/D39304



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317488 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 16:27:15 +00:00
Simon Pilgrim
ae8a0007cb [X86][SSE] Merge combineExtractVectorElt_SSE into combineExtractVectorElt. NFCI.
We still early-out for X86ISD::PEXTRW/X86ISD::PEXTRB so no actual change in behaviour, but it'll make it easier to add support in a future patch.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317485 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 15:28:25 +00:00
Simon Pilgrim
6e1c5e0143 [X86][SSE] Combine EXTRACT_VECTOR_ELT with combineExtractWithShuffle before XFormVExtractWithShuffleIntoLoad
combineExtractWithShuffle can handle more complex shuffles/bitcasts than we can with the equivalent code in XFormVExtractWithShuffleIntoLoad.

Mainly a compile time improvement now (combineExtractWithShuffle combines will have always failed late on inside XFormVExtractWithShuffleIntoLoad), and will let us merge combineExtractVectorElt_SSE in a future commit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317481 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 14:34:19 +00:00
Yaxun Liu
ec1f0cc416 [AMDGPU] Change alloca addr space of r600 to 5 for amdgiz environment
Differential Revision: https://reviews.llvm.org/D39657


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317479 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-06 14:32:33 +00:00