Commit Graph

4581 Commits

Kamil Rytarowski
26b5adf664 [tblgen] Disable Leak detection for ASan/GCC and LSan/LLVM
Summary: Add support for sanitizing TableGen.cpp with ASan/GCC and LSan/LLVM.
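
A common way to do this (a minimal sketch, not necessarily the exact code of this patch) is to define the __lsan_is_turned_off() hook from <sanitizer/lsan_interface.h> in TableGen.cpp so LeakSanitizer skips leak reporting for the tool:

  // Hypothetical sketch: opt llvm-tblgen out of leak detection.
  #if defined(__has_feature)
  #if __has_feature(address_sanitizer) || __has_feature(leak_sanitizer)
  #define TBLGEN_SKIP_LEAK_CHECKS 1
  #endif
  #endif
  #if defined(TBLGEN_SKIP_LEAK_CHECKS) || defined(__SANITIZE_ADDRESS__)
  #include <sanitizer/lsan_interface.h>
  // LSan calls this hook at exit; returning nonzero suppresses leak reports.
  extern "C" int __lsan_is_turned_off() { return 1; }
  #endif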

Reviewers: fjricci, kcc, aaron.ballman, mgorny

Reviewed By: fjricci

Subscribers: jakubjelinek, llvm-commits, #llvm

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67908

llvm-svn: 372731
2019-09-24 11:22:34 +00:00
Aditya Nandakumar
41a90454e6 [TableGen] Emit OperandType enums for RegisterOperands/RegisterClasses
https://reviews.llvm.org/D66773

The OpTypes::OperandType was creating an enum for all records that
inherit from Operand, but in reality there are operands for instructions
that inherit from other types too. In particular, RegisterOperand and
RegisterClass. This commit adds those types to the list of operand types
that are tracked by the OperandType enum.
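
As a rough illustration (the target, register class, and operand names below are hypothetical), the generated enum now gains entries for RegisterOperand and RegisterClass records alongside the plain Operand ones:

  // Sketch of the generated OpTypes fragment.
  namespace MyTarget {
  namespace OpTypes {
  enum OperandType {
    i32imm,    // record inheriting from Operand
    GPR32Op,   // record inheriting from RegisterOperand (new)
    GPR32,     // record inheriting from RegisterClass (new)
    OPERAND_TYPE_LIST_END
  };
  } // namespace OpTypes
  } // namespace MyTarget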

Patch by: nlguillemot

llvm-svn: 372641
2019-09-23 18:51:00 +00:00
Mark Murray
5ef3341e20 Cosmetic; don't use the magic constant 35 when HASH is more readable. This matches other MCK__<THING>_* usage better.
Summary: No functional change. This fixes a magic constant in MCK__*_... macros only.

Reviewers: ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67840

llvm-svn: 372599
2019-09-23 12:52:42 +00:00
Craig Topper
e9c90270ed [X86][TableGen] Allow timm to appear in output patterns. Use it to remove ConvertToTarget opcodes from the X86 isel table.
We're now using a lot more TargetConstant nodes in SelectionDAG.
But we were still telling isel to convert some of them
to TargetConstants even though they already are. This is because
isel emits a conversion anytime the output pattern has an 'imm'.
I guess for patterns in instructions we take the 'timm' from the
'set' pattern, but for Pat patterns with explicit output we
previously had to say 'imm' since 'timm' wasn't allowed in outputs.

llvm-svn: 372525
2019-09-22 19:49:39 +00:00
Kerry McLaughlin
7b577a403e [IntrinsicEmitter] Add overloaded types for SVE intrinsics (Subdivide2 & Subdivide4)
Summary:
Both match the type of another intrinsic parameter of a vector type, but where each element is subdivided to form a vector with more elements of a smaller type.

Subdivide2Argument allows intrinsics such as the following to be defined:
 - declare <vscale x 4 x i32> @llvm.something.nxv4i32(<vscale x 8 x i16>)

Subdivide4Argument allows intrinsics such as:
 - declare <vscale x 4 x i32> @llvm.something.nxv4i32(<vscale x 16 x i8>)

Tests are included in follow up patches which add intrinsics using these types.

Reviewers: sdesmalen, SjoerdMeijer, greened, rovka

Reviewed By: sdesmalen

Subscribers: rovka, tschuett, jdoerfert, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67549

llvm-svn: 372380
2019-09-20 09:48:21 +00:00
Matt Arsenault
c204981f6f Reapply r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This reverts r372314, reapplying r372285 and the commits which depend
on it (r372286-r372293, and r372296-r372297)

This was missing one switch to getTargetConstant in an untested case.

llvm-svn: 372338
2019-09-19 16:26:14 +00:00
James Molloy
7fff32705f [TableGen] Support encoding per-HwMode
Much like ValueTypeByHwMode/RegInfoByHwMode, this patch allows targets
to modify an instruction's encoding based on HwMode. When the
EncodingInfos field is non-empty the Inst and Size fields of the Instruction
are ignored and taken from EncodingInfos instead.

As part of this, promote getHwMode() from TargetSubtargetInfo to MCSubtargetInfo.

This is NFC for all existing targets - new code is generated only if targets
use EncodingByHwMode.
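
A minimal sketch of what using the promoted hook could look like (the table name and lookup below are illustrative, not the actual generated code):

  // Hypothetical: pick an encoding table based on the subtarget's HwMode.
  uint64_t getInstBits(const MCInst &MI, const MCSubtargetInfo &STI) {
    unsigned Mode = STI.getHwMode(); // now queryable at the MC layer
    return EncodingTableByHwMode[Mode][MI.getOpcode()];
  }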

llvm-svn: 372320
2019-09-19 13:39:54 +00:00
Hans Wennborg
230a0cd001 Revert r372285 "GlobalISel: Don't materialize immarg arguments to intrinsics"
This broke the Chromium build, causing it to fail with e.g.

  fatal error: error in backend: Cannot select: t362: v4i32 = X86ISD::VSHLI t392, Constant:i8<15>

See llvm-commits thread of r372285 for details.

This also reverts r372286, r372287, r372288, r372289, r372290, r372291,
r372292, r372293, r372296, and r372297, which seemed to depend on the
main commit.

> Encode them directly as an imm argument to G_INTRINSIC*.
>
> Since intrinsics can now define what parameters are required to be
> immediates, avoid using registers for them. Intrinsics could
> potentially want a constant that isn't a legal register type. Also,
> since G_CONSTANT is subject to CSE and legalization, transforms could
> potentially obscure the value (and create extra work for the
> selector). The register bank of a G_CONSTANT is also meaningful, so
> this could throw off future folding and legalization logic for AMDGPU.
>
> This will be much more convenient to work with than needing to call
> getConstantVRegVal and checking if it may have failed for every
> constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
> immarg operands, many of which need inspection during lowering. Having
> to find the value in a register is going to add a lot of boilerplate
> and waste compile time.
>
> SelectionDAG has always provided TargetConstant for constants which
> should not be legalized or materialized in a register. The distinction
> between Constant and TargetConstant was somewhat fuzzy, and there was
> no automatic way to force usage of TargetConstant for certain
> intrinsic parameters. They were both ultimately ConstantSDNode, and it
> was inconsistently used. It was quite easy to mis-select an
> instruction requiring an immediate. For SelectionDAG, start emitting
> TargetConstant for these arguments, and using timm to match them.
>
> Most of the work here is to cleanup target handling of constants. Some
> targets process intrinsics through intermediate custom nodes, which
> need to preserve TargetConstant usage to match the intrinsic
> expectation. Pattern inputs now need to distinguish whether a constant
> is merely compatible with an operand or whether it is mandatory.
>
> The GlobalISelEmitter needs to treat timm as a special case of a leaf
> node, similar to MachineBasicBlock operands. This should also enable
> handling of patterns for some G_* instructions with immediates, like
> G_FENCE or G_EXTRACT.
>
> This does include a workaround for a crash in GlobalISelEmitter when
> ARM tries to use "imm" in an output with a "timm" pattern source.

llvm-svn: 372314
2019-09-19 12:33:07 +00:00
Matt Arsenault
6df65c514b GlobalISel: Don't materialize immarg arguments to intrinsics
Encode them directly as an imm argument to G_INTRINSIC*.

Since intrinsics can now define what parameters are required to be
immediates, avoid using registers for them. Intrinsics could
potentially want a constant that isn't a legal register type. Also,
since G_CONSTANT is subject to CSE and legalization, transforms could
potentially obscure the value (and create extra work for the
selector). The register bank of a G_CONSTANT is also meaningful, so
this could throw off future folding and legalization logic for AMDGPU.

This will be much more convenient to work with than needing to call
getConstantVRegVal and checking if it may have failed for every
constant intrinsic parameter. AMDGPU has quite a lot of intrinsics with
immarg operands, many of which need inspection during lowering. Having
to find the value in a register is going to add a lot of boilerplate
and waste compile time.

SelectionDAG has always provided TargetConstant for constants which
should not be legalized or materialized in a register. The distinction
between Constant and TargetConstant was somewhat fuzzy, and there was
no automatic way to force usage of TargetConstant for certain
intrinsic parameters. They were both ultimately ConstantSDNode, and it
was inconsistently used. It was quite easy to mis-select an
instruction requiring an immediate. For SelectionDAG, start emitting
TargetConstant for these arguments, and using timm to match them.

Most of the work here is to cleanup target handling of constants. Some
targets process intrinsics through intermediate custom nodes, which
need to preserve TargetConstant usage to match the intrinsic
expectation. Pattern inputs now need to distinguish whether a constant
is merely compatible with an operand or whether it is mandatory.

The GlobalISelEmitter needs to treat timm as a special case of a leaf
node, similar to MachineBasicBlock operands. This should also enable
handling of patterns for some G_* instructions with immediates, like
G_FENCE or G_EXTRACT.

This does include a workaround for a crash in GlobalISelEmitter when
ARM tries to use "imm" in an output with a "timm" pattern source.
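
On the SelectionDAG side, the practical difference looks roughly like this (a minimal sketch, not code from this patch):

  // Hypothetical lowering fragment illustrating Constant vs. TargetConstant.
  void lowerExample(SelectionDAG &DAG, const SDLoc &DL) {
    // May be legalized, CSE'd, or materialized into a register:
    SDValue C = DAG.getConstant(15, DL, MVT::i32);
    // Must stay an immediate operand of the selected instruction;
    // patterns match it with 'timm' rather than 'imm':
    SDValue TC = DAG.getTargetConstant(15, DL, MVT::i32);
    (void)C; (void)TC;
  }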

llvm-svn: 372285
2019-09-19 01:33:14 +00:00
Daniel Sanders
1c646a78c0 Fix compile-time regression caused by rL371928
Summary:
Also fixup rL371928 for cases that occur on our out-of-tree backend

There were still quite a few intermediate APInts and this caused the
compile time of MCCodeEmitter for our target to jump from 16s up to
~5m40s. This patch, brings it back down to ~17s by eliminating pretty
much all of them using two new APInt functions (extractBitsAsZExtValue(),
insertBits() but with a uint64_t). The exact condition for eliminating
them is that the field extracted/inserted must be <= 64 bits, which is
almost always true.

Note: The two new APInt APIs assume that APInt::WordSize is at least
64-bit because that means they touch at most 2 APInt words. They
statically assert that's true. It seems very unlikely that someone
is patching it to be smaller so this should be fine.
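
A small usage sketch of the two new helpers (assuming the semantics described above):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void encodeField() {
    APInt Inst(96, 0); // e.g. a 96-bit instruction encoding
    // Write an 8-bit field at bit 40 without building a temporary APInt.
    Inst.insertBits(/*SubBits=*/0x2A, /*bitPosition=*/40, /*numBits=*/8);
    // Read it back as a plain uint64_t, again with no temporary APInt.
    uint64_t Field = Inst.extractBitsAsZExtValue(/*numBits=*/8, /*bitPosition=*/40);
    (void)Field; // Field == 0x2A
  }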

Reviewers: jmolloy

Reviewed By: jmolloy

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67686

llvm-svn: 372243
2019-09-18 18:14:42 +00:00
Simon Pilgrim
0ca68847de [TableGen] CodeGenMapTable - Don't dereference a dyn_cast result. NFCI.
The static analyzer warns about potential null dereferences of dyn_cast<> results. In these cases we can safely use cast<> directly, since the values should always be of the correct type (which is why the code works at the moment), and cast<> will assert if they aren't.
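
For reference, the difference is roughly the following (a generic sketch; the Init pointer I and the surrounding code are hypothetical):

  // dyn_cast<> returns nullptr on a type mismatch, so its result must be checked:
  if (const auto *DI = dyn_cast<DefInit>(I))
    (void)DI;
  // cast<> never returns nullptr and asserts on a mismatch, so it is the right
  // choice when the value is already known to have the expected type:
  const auto *Known = cast<DefInit>(I);
  (void)Known;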

llvm-svn: 372146
2019-09-17 17:32:15 +00:00
Graham Hunter
15543f7650 [SVE][MVT] Fixed-length vector MVT ranges
* Reordered MVT simple types to group scalable vector types together.
* New range functions in MachineValueType.h to only iterate over the
  fixed-length int/fp vector types.
* Stopped backends which don't support scalable vector types from
  iterating over scalable types.

Reviewers: sdesmalen, greened

Reviewed By: greened

Differential Revision: https://reviews.llvm.org/D66339

llvm-svn: 372099
2019-09-17 10:19:23 +00:00
James Molloy
852739ecc8 [CodeEmitter] Support instruction widths > 64 bits
Some VLIW instruction sets are Very Long Indeed. Using uint64_t restricts the Inst encoding to 64 bits (naturally).

This change switches CodeEmitter to a mode that uses APInts when Inst's bitwidth is > 64 bits (NFC for existing targets).

When Inst.BitWidth > 64 the prototype changes to:

  void TargetMCCodeEmitter::getBinaryCodeForInstr(const MCInst &MI,
                                                  SmallVectorImpl<MCFixup> &Fixups,
                                                  APInt &Inst,
                                                  APInt &Scratch,
                                                  const MCSubtargetInfo &STI);

The Inst parameter returns the encoded instruction, the Scratch parameter is used internally for manipulating operands and is exposed so that the underlying storage can be reused between calls to getBinaryCodeForInstr. The goal is to elide any APInt constructions that we can.

Similarly the operand encoding prototype changes to:

  getMachineOpValue(const MCInst &MI, const MCOperand &MO, APInt &op, SmallVectorImpl<MCFixup> &Fixups, const MCSubtargetInfo &STI);

That is, the operand is passed by reference as APInt rather than returned as uint64_t.

To reiterate, this APInt mode is enabled only when Inst.BitWidth > 64, so this change is NFC for existing targets.
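
A rough caller-side sketch of the APInt mode (names such as MCE, Bundle and STI are hypothetical):

  // Reusing Inst and Scratch across calls elides repeated APInt allocations.
  APInt Inst(128, 0);   // wide enough for the target's largest instruction
  APInt Scratch(64, 0); // grown internally as needed while encoding operands
  SmallVector<MCFixup, 4> Fixups;
  for (const MCInst &MI : Bundle)
    MCE.getBinaryCodeForInstr(MI, Fixups, Inst, Scratch, STI);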

llvm-svn: 371928
2019-09-15 08:35:08 +00:00
Tim Northover
1bb14916f2 AArch64: support arm64_32, an ILP32 slice for watchOS.
This is the main CodeGen patch to support the arm64_32 watchOS ABI in LLVM.
FastISel is mostly disabled for now since it would generate incorrect code for
ILP32.

llvm-svn: 371722
2019-09-12 10:22:23 +00:00
Matt Arsenault
2c4cefbd49 GlobalISel/TableGen: Handle REG_SEQUENCE patterns
The scalar f64 patterns don't work yet because they fail on multiple
results from the unused implicit def of scc in the result bit
operation.

llvm-svn: 371542
2019-09-10 17:57:33 +00:00
Matt Arsenault
4ec23178ea AMDGPU/GlobalISel: Select atomic loads
A new check for an explicitly atomic MMO is needed to avoid
incorrectly matching pattern for non-atomic loads

llvm-svn: 371418
2019-09-09 16:18:07 +00:00
Matt Arsenault
44317511dd AMDGPU: Remove code address space predicates
Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still broken
due to not being reported as legal.

llvm-svn: 371413
2019-09-09 16:02:07 +00:00
James Molloy
aecd6e19a2 [DFAPacketizer] Reapply: Track resources for packetized instructions
Reapply with fix to reduce resources required by the compiler - use
unsigned[2] instead of std::pair. This causes clang and gcc to compile
the generated file multiple times faster, and hopefully will reduce
the resource requirements on Visual Studio also. This fix is a little
ugly, but it's clearly the same issue the previous author of
DFAPacketizer faced (the previous tables also use unsigned[2], in a
similarly ugly way).

This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936

llvm-svn: 371399
2019-09-09 13:17:55 +00:00
Simon Pilgrim
1c0d839652 Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936
........
Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds

llvm-svn: 371393
2019-09-09 12:33:22 +00:00
Matt Arsenault
0f15b35531 GlobalISel: Support physical register inputs in patterns
llvm-svn: 371253
2019-09-06 20:32:37 +00:00
James Molloy
bd5b6d09b9 [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936

llvm-svn: 371198
2019-09-06 12:20:08 +00:00
Matt Arsenault
fe20e32ba0 GlobalISel/TableGen: Fix handling of EXTRACT_SUBREG constraints
This was only using the correct register constraints if this was the
final result instruction. If the extract was a sub instruction of the
result, it would attempt to use GIR_ConstrainSelectedInstOperands on a
COPY, which won't work. Move the handling to
createAndImportSubInstructionRenderer so it works correctly.

I don't fully understand why runOnPattern and
createAndImportSubInstructionRenderer both need to handle these
special cases, and constrain them with slightly different methods. If
I remove the runOnPattern handling, it does break the constraint when
the final result instruction is EXTRACT_SUBREG.

llvm-svn: 371150
2019-09-06 00:05:58 +00:00
Matt Arsenault
8c03368633 GlobalISel/TableGen: Don't skip REG_SEQUENCE based on patterns
This partially adds support for patterns with REG_SEQUENCE. The source
patterns are now accepted, but the pattern is still rejected due to
missing support for the instruction renderer.

llvm-svn: 370920
2019-09-04 16:19:34 +00:00
James Molloy
74fb57df4f [DFAPacketizer] Allow namespacing of automata per-itinerary
The Hexagon itineraries are cunningly crafted such that functional units between
itineraries do not clash. Because all itineraries are bundled into the same DFA,
a functional unit index clash would cause an incorrect DFA to be generated.

A workaround for this is to ensure all itineraries declare the universe of all
possible functional units, but this isn't ideal for three reasons:
  1) We only have a limited number of FUs we can encode in the packetizer, and
     using the universe causes us to hit the limit without care.
  2) Silent codegen faults are bad, and careful triage of the FU list shouldn't
     be required.
  3) Smooshing all itineraries into the same automaton allows combinations of
     instruction classes that cannot exist, which bloats the table.

A simple solution is to allow "namespacing" packetizers.

Differential Revision: https://reviews.llvm.org/D66940

llvm-svn: 370508
2019-08-30 19:50:49 +00:00
Craig Topper
89128644b0 [ValueTypes] Add v16f16 and v32f16 to EVT::getEVTString and Tablegen's getEnumName
Missed these when I added the enum entries

llvm-svn: 370494
2019-08-30 17:34:29 +00:00
Matt Arsenault
5bfe49f2ac GlobalISel/TableGen: Handle setcc patterns
This is a special case because one node maps to two different G_
instructions, and the operand order is changed.

This mostly enables G_FCMP for AMDPGPU. G_ICMP is still manually
selected for now since it has the SALU and VALU complication to deal
with.

llvm-svn: 370280
2019-08-29 01:13:41 +00:00
Jessica Paquette
9c177a174f Add tie-breaker for register class sorting in getSuperRegForSubReg
llvm::stable_sort is apparently not sufficient.

Use the same tie-breaker/sorting style as TopoOrderRC to fix bot failures.

E.g.

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/19401/steps/test-check-all/logs/stdio

llvm-svn: 370267
2019-08-28 22:03:05 +00:00
Jessica Paquette
4ebd240de7 [GlobalISel] Import patterns containing SUBREG_TO_REG
Reuse the logic for INSERT_SUBREG to also import SUBREG_TO_REG patterns.

- Split `inferSuperRegisterClass` into two functions, one which tries to use
  an existing TreePatternNode (`inferSuperRegisterClassForNode`), and one that
  doesn't. SUBREG_TO_REG doesn't have a node to leverage, which is the cause
  for the split.

- Rename GlobalISelEmitterInsertSubreg.td to GlobalISelEmitterSubreg.td and
  update it.

- Update impacted tests in the AArch64 and X86 backends.

This is kind of a hit/miss for code size improvements/regressions. E.g. in
add-ext.ll, we now get some identity copies. This isn't really anything the
importer can handle, since it's caused by a later pass introducing the copy for
the sake of correctness.

Differential Revision: https://reviews.llvm.org/D66769

llvm-svn: 370254
2019-08-28 20:12:31 +00:00
Jessica Paquette
2f041a67b9 Recommit "[GlobalISel] Import patterns containing INSERT_SUBREG"
I thought `llvm::sort` was stable for some reason but it's not.

Use `llvm::stable_sort` in `CodeGenTarget::getSuperRegForSubReg`.
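
The point is determinism when elements compare equal; a tiny sketch:

  #include "llvm/ADT/STLExtras.h"
  #include <utility>
  #include <vector>

  void sortClasses(std::vector<std::pair<int, const char *>> &Classes) {
    // llvm::sort makes no promise about the relative order of equal keys, so
    // emitted tables can differ between runs or standard libraries.
    // llvm::stable_sort keeps ties in their original order, giving
    // reproducible TableGen output.
    llvm::stable_sort(Classes, [](const auto &A, const auto &B) {
      return A.first < B.first;
    });
  }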

Original patch: https://reviews.llvm.org/D66498

llvm-svn: 370084
2019-08-27 17:47:06 +00:00
Jessica Paquette
baa3b0c80e Revert "[GlobalISel] Import patterns containing INSERT_SUBREG"
When EXPENSIVE_CHECKS are enabled, GlobalISelEmitterSubreg.td doesn't get
stable output.

Reverting while I debug it.

See: https://reviews.llvm.org/D66498
llvm-svn: 370080
2019-08-27 17:26:44 +00:00
Cullen Rhodes
b3e8c4675d [IntrinsicEmitter] Support scalable vectors in intrinsics
Summary:
This patch adds support for scalable vectors in intrinsics, enabling
intrinsics such as the following to be defined:

    declare <vscale x 4 x i32> @llvm.something.nxv4i32(<vscale x 4 x i32>)

Support for this is implemented by defining a new type descriptor for
scalable vectors and adding mangling support for scalable vector types
in the name mangling scheme used by 'any' types in intrinsic signatures.
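
For illustration, building such a type from C++ looks roughly like this (a sketch; the exact helper has varied between LLVM versions):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  void makeScalableType() {
    LLVMContext Ctx;
    // <vscale x 4 x i32>; in intrinsic name mangling this appears as the
    // "nxv4i32" suffix, as in the declaration above.
    Type *EltTy = Type::getInt32Ty(Ctx);
    VectorType *NxV4I32 = ScalableVectorType::get(EltTy, /*MinNumElts=*/4);
    (void)NxV4I32;
  }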

Tests have been added for IRBuilder to test scalable vectors work as
expected when using intrinsics through this interface. This required
implementing an intrinsic that is explicitly defined with scalable
vectors, e.g. LLVMType<nxv4i32>; an SVE floating-point convert
intrinsic was used for this.  The behaviour of the overloaded type
LLVMScalarOrSameVectorWidth with scalable vectors is tested using the
existing masked load intrinsic. Also added an .ll test to check that the
Verifier catches a bad intrinsic argument when a fixed-width predicate
(mask) is passed to the masked.load intrinsic where a scalable one is
expected.

Patch by Paul Walker

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D65930

llvm-svn: 370053
2019-08-27 12:57:09 +00:00
Jessica Paquette
131412fdfc [GlobalISel] Import patterns containing INSERT_SUBREG
This teaches the importer to handle INSERT_SUBREG instructions.

We were missing patterns involving INSERT_SUBREG in AArch64. It appears in
AArch64InstrInfo.td 107 times, and 14 times in AArch64InstrFormats.td.

To meaningfully import it, the GlobalISelEmitter needs to know how to infer a
super register class for a given register class.

This patch introduces the following:

- `getSuperRegForSubReg`, a function which finds the largest register class
which supports a value type and subregister index

- `inferSuperRegisterClass`, a function which finds the appropriate super
register class for an INSERT_SUBREG

- `inferRegClassFromPattern`, a function which allows for some trivial
lookthrough into instructions

- `getRegClassFromLeaf`, a helper function which returns the register class for
a leaf `TreePatternNode`

- Support for subregister index operands in `importExplicitUseRenderer`

It also

- Updates tests in each backend which are impacted by the change

- Adds GlobalISelEmitterSubreg.td to test that we import and skip the expected
patterns

As a result of this patch, INSERT_SUBREG patterns in X86 may use the
LOW32_ADDR_ACCESS_RBP register class instead of GR32. This is correct, since the
register class contains the same registers as GR32 (except with the addition of
RBP). So, this also teaches X86 to handle that register class. This is in line
with X86ISelLowering, which treats this as a GR class.

Differential Revision: https://reviews.llvm.org/D66498

llvm-svn: 369973
2019-08-26 21:38:57 +00:00
Bjorn Pettersson
22afe8331c [TableGen] Correct comments for end of namespace. NFC
Summary:
Update end-of-namespace comments generated by
tablegen emitters to fulfill the rules setup by
clang-tidy's llvm-namespace-comment checker.

Fixed a few end-of-namespace comments in the
tablegen source code as well.
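
For reference, the checker wants closing braces annotated with the namespace they close, e.g. (namespace names here are just an example):

  namespace llvm {
  namespace X86 {
  // ...generated declarations...
  } // namespace X86
  } // namespace llvm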

Reviewers: craig.topper

Reviewed By: craig.topper

Subscribers: craig.topper, stoklund, dschuff, sbc100, jgravelle-google, aheejin, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66396

llvm-svn: 369865
2019-08-25 10:47:30 +00:00
Benjamin Kramer
2e5598037b Use a bit of relaxed constexpr to make FeatureBitset constant initializable
This requires std::initializer_list to be a literal type, which it is
starting with C++14. The downside is that std::bitset is still not
constexpr-friendly so this change contains a re-implementation of most
of it.
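
A minimal sketch of the idea (not the actual FeatureBitset code): a plain array of words plus a C++14 constexpr constructor over std::initializer_list is enough for static tables to be constant-initialized:

  #include <cstdint>
  #include <initializer_list>

  class MiniFeatureBitset {
    uint64_t Words[4] = {}; // room for 256 feature bits
  public:
    constexpr MiniFeatureBitset(std::initializer_list<unsigned> Bits) {
      for (unsigned B : Bits)
        Words[B / 64] |= uint64_t(1) << (B % 64);
    }
    constexpr bool test(unsigned B) const {
      return (Words[B / 64] >> (B % 64)) & 1;
    }
  };

  // Constant-initialized: no static constructor has to run at startup.
  constexpr MiniFeatureBitset Features{10, 42};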

Shrinks clang by ~60k.

llvm-svn: 369847
2019-08-24 15:02:44 +00:00
Benjamin Kramer
74e9ad1a9d Retire llvm::less_ptr. llvm::deref is much more flexible.
llvm-svn: 369675
2019-08-22 17:32:16 +00:00
Benjamin Kramer
db09140c2c Retire llvm::less/equal in favor of C++14 std::less<>/equal_to<>.
llvm-svn: 369674
2019-08-22 17:31:59 +00:00
Jessica Paquette
defcf6aa12 Teach GlobalISelEmitter to treat used iPTRAny operands as pointer operands
Overloaded intrinsics can use iPTRAny in used/input operands.

The GlobalISelEmitter doesn't know that these are pointers, so it treats them
as scalars. As a result, these intrinsics can't be imported.

This teaches the GlobalISelEmitter to recognize these as pointers rather than
scalars.

Differential Revision: https://reviews.llvm.org/D65756

llvm-svn: 369455
2019-08-20 22:04:10 +00:00
Matt Arsenault
09382185bf TableGen: Revert changes from r369038
These aren't needed for a specific use yet, and I meant to not commit
these.

llvm-svn: 369201
2019-08-18 00:20:42 +00:00
Matt Arsenault
4c5716bea1 MVT: Add v3i16/v3f16 vectors
AMDGPU has some buffer intrinsics which theoretically could use
this. Some of the generated tables include the 3- and 4-element vector
versions of these types rounded to 64 bits, which is ambiguous. Adding the
3-element types helps the tables disambiguate those cases.

Assertion change is for the path odd sized vectors now take for R600.
v3i16 is widened to v4i16, which then needs to be promoted to v4i32.

llvm-svn: 369038
2019-08-15 18:58:25 +00:00
Jonas Devlieghere
2c693415b7 [llvm] Migrate llvm::make_unique to std::make_unique
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
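
The replacement is one-for-one, e.g. (Widget is a stand-in type):

  #include <memory>

  struct Widget { int X; };

  // Before: auto W = llvm::make_unique<Widget>(Widget{42});
  // After (standard since C++14):
  auto W = std::make_unique<Widget>(Widget{42});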

llvm-svn: 369013
2019-08-15 15:54:37 +00:00
David Bolvansky
a9fd1c29b8 [Intrinsics] Add a 'NoAlias' intrinsic property; annotate llvm.memcpy
Reviewers: jdoerfert

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D66158

llvm-svn: 368810
2019-08-14 08:33:07 +00:00
Michael Liao
b0f9c14a99 [TableGen] Correct the shift to the proper bit width.
- Replace the previous 32-bit shift with a 64-bit one matching `OpInit`.
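
The underlying pitfall, as a generic illustration:

  #include <cstdint>

  uint64_t maskUpTo(unsigned NumBits) { // NumBits in [0, 63]
    // Buggy: the literal 1 is a 32-bit int, so the shift happens in 32 bits
    // and is undefined behaviour once NumBits >= 32.
    //   return (1 << NumBits) - 1;
    // Correct: force a 64-bit shift to match the 64-bit value being built.
    return (uint64_t(1) << NumBits) - 1;
  }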

llvm-svn: 368513
2019-08-10 16:15:06 +00:00
Daniel Sanders
91b87f212b [TableGen] Add "InitValue": Handle operands with set bit values in decoder methods
Summary:
The problem:
  When an operand had bits explicitly set to "1" (as in the InitValue.td test case attached), the decoder was ignoring those bits, and the DecoderMethod was receiving an input where the bits were still zero.

The solution:
  We added an "InitValue" variable that stores the initial value of the operand based on what bits were explicitly initialized to 1 in TableGen code. The generated decoder code then uses that initial value to initialize the "tmp" variable, then calls fieldFromInstruction to read the values for the remaining bits that were left unknown in TableGen.
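
The generated decoder then has roughly this shape (a sketch only; field positions and the operand decoder name are hypothetical):

  // Per-operand snippet inside the generated decodeToMCInst().
  uint64_t tmp = 0x4; // InitValue: bits declared as '1' in the TableGen operand
  tmp |= fieldFromInstruction(insn, 8, 4); // bits left unknown are read from the encoding
  if (!Check(S, DecodeMyOperand(MI, tmp, Address, Decoder)))
    return MCDisassembler::Fail;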

This is mainly useful when there are variations of an instruction that differ based on what bits are set in the operands, since this change makes it possible to access those bits in a DecoderMethod. The DecoderMethod can use those bits to know how to handle the input.

Patch by Nicolas Guillemot

Reviewers: craig.topper, dsanders, fhahn

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D63741

llvm-svn: 368458
2019-08-09 17:30:33 +00:00
Craig Topper
7b7ab0396a [X86] Limit vpermil2pd/vpermil2ps immediates to 4 bits in the assembly parser.
The upper 4 bits of the immediate byte are used to encode a
register. We need to limit the explicit immediate to fit in the
remaining 4 bits.

Fixes PR42899.

llvm-svn: 368123
2019-08-07 05:34:27 +00:00
Amara Emerson
b76a537ffa [GlobalISel] Check LLT size matches memory size for non-truncating stores.
This was causing a bug where non-truncating stores would be selected instead of truncating ones.

Differential Revision: https://reviews.llvm.org/D64845

llvm-svn: 367737
2019-08-02 23:33:13 +00:00
Matt Arsenault
3e43a6c1b7 TableGen: Add MinAlignment predicate
AMDGPU uses some custom code predicates for testing alignments.

I'm still having trouble comprehending the behavior of predicate bits
in the PatFrag hierarchy. Any attempt to abstract these properties
unexpectedly fails to apply them.

llvm-svn: 367373
2019-07-31 00:14:43 +00:00
Matt Arsenault
61ac6a1386 AMDGPU: Avoid emitting "true" predicates
Empty condition strings are considered always true. This removes a lot
of clutter from the generated matcher tables.

This shrinks the source size of AMDGPUGenDAGISel.inc from 7.3M to
6.1M.

llvm-svn: 367326
2019-07-30 15:56:43 +00:00
Matt Arsenault
bcf27a40b4 TableGen: Support physical register inputs > 255
This was truncating register values that didn't fit in an unsigned char.
Switch AMDGPU sendmsg intrinsics to using a tablegen pattern.

llvm-svn: 366695
2019-07-22 15:02:34 +00:00
Stanislav Mekhanoshin
0c9d743a5c [AMDGPU] Allow register tuples to set asm names
This change reverts most of the previous register name generation.
The real problem is that RegisterTuple does not generate asm names.
Added optional operand to RegisterTuple. This way we can simplify
register name access and dramatically reduce the size of static
tables for the backend.

Differential Revision: https://reviews.llvm.org/D64967

llvm-svn: 366598
2019-07-19 18:05:01 +00:00
Hideto Ueno
a5786d13db [Attributor] Deduce "willreturn" function attribute
Summary:
Deduce the "willreturn" attribute for functions.

For now, intrinsics are not willreturn. More annotation will be done in another patch.

Reviewers: jdoerfert

Subscribers: jvesely, nhaehnle, nicholas, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63046

llvm-svn: 366335
2019-07-17 15:15:43 +00:00