602 Commits

Eli Friedman
5ff4fbf6b8 [ARM] Assert that ARMDAGToDAGISel creates valid UBFX/SBFX nodes.
We don't ever check these again (unless you're using
-fno-integrated-as), so make sure the extracted bits are well-defined.

I don't think it's possible to trigger any of the assertions on trunk,
but it's difficult to prove.  (The first one depends on DAGCombine to
minimize the number of set bits in AND masks; I think the others are
mathematically impossible to hit.)
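
For illustration, the invariant being asserted is roughly the following; the
function and parameter names here are hypothetical, not the actual
ARMDAGToDAGISel code:

  #include <cassert>
  #include <cstdint>

  // A UBFX/SBFX bitfield must be at least one bit wide and must lie
  // entirely within the 32-bit source register.
  void checkBitfield(uint32_t LSB, uint32_t Width) {
    assert(LSB < 32 && Width >= 1 && Width <= 32 - LSB &&
           "ill-defined UBFX/SBFX field");
  }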



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@335931 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-28 21:49:41 +00:00
Ivan A. Kosarev
212054e1a9 [NEON] Support vldNq intrinsics in AArch32 (LLVM part)
This patch adds support for the q versions of the dup
(load-to-all-lanes) NEON intrinsics, such as vld2q_dup_f16().

Currently, non-q versions of the dup intrinsics are implemented
in clang by generating IR that first loads the elements of the
structure into the first lane with the lane (to-single-lane)
intrinsics, and then propagating it to the other lanes. There are at
least two problems with this approach. First, there are no
double-spaced to-single-lane byte-element instructions. For
example, there is no such instruction as 'vld2.8 { d0[0], d2[0]
}, [r0]'. That means we cannot rely on the to-single-lane
intrinsics and instructions to implement the q versions of the
dup intrinsics. Note that to-all-lanes instructions do support
all sizes of data items, including bytes.

The second problem with the current approach is that we need a
separate vdup instruction to propagate the structure to each
lane. So for vld4q_dup_f16() we would need four vdup instructions
in addition to the initial vld instruction.

This patch introduces dup LLVM intrinsics and reworks handling of
the currently supported (non-q) NEON dup intrinsics to expand
them into those LLVM intrinsics, thus eliminating the need for
using to-single-lane intrinsics and instructions.

Additionally, this patch adds support for u64 and s64 dup NEON
intrinsics. These are marked as AArch64-only in the ARM NEON
Reference, but there seem to be no reasons not to support them
in AArch32 mode. Please correct this if that is wrong.

That's what we generate with this patch applied:

vld2q_dup_f16:
  vld2.16 {d0[], d2[]}, [r0]
  vld2.16 {d1[], d3[]}, [r0]

vld3q_dup_f16:
  vld3.16 {d0[], d2[], d4[]}, [r0]
  vld3.16 {d1[], d3[], d5[]}, [r0]

vld4q_dup_f16:
  vld4.16 {d0[], d2[], d4[], d6[]}, [r0]
  vld4.16 {d1[], d3[], d5[], d7[]}, [r0]
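
For reference, a minimal (illustrative) use of one of these intrinsics; it
assumes a toolchain with ARM NEON FP16 support:

  #include <arm_neon.h>

  // Load one two-element f16 structure and replicate it across all lanes
  // of both q registers, as in the vld2q_dup_f16 sequence above.
  float16x8x2_t load_dup(const float16_t *p) {
    return vld2q_dup_f16(p);
  }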

Differential Revision: https://reviews.llvm.org/D48439


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@335733 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-27 13:57:52 +00:00
Tim Northover
4d3b212098 ARM: convert ORR instructions to ADD where possible on Thumb.
Thumb has more 16-bit encoding space dedicated to ADD than ORR, allowing both a
3-address encoding and a wider range of immediates. So, particularly when
optimizing for code size (though it doesn't make things worse elsewhere), it's
beneficial to select an OR operation as an ADD if we know overflow won't occur.

This is made even better by LLVM's penchant for putting operations in canonical
form by converting the other way.
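
The "overflow won't occur" condition is the usual identity that OR and ADD
agree when the operands share no set bits; a small self-contained check:

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t x = 0xF0, mask = 0x0F;    // disjoint bit patterns
    assert((x & mask) == 0);           // no common set bits...
    assert((x | mask) == (x + mask));  // ...so no carries: ORR == ADD
    return 0;
  }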

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@335119 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-20 12:09:44 +00:00
Ivan A. Kosarev
c7f180e8c4 [NEON] Support VST1xN intrinsics in AArch32 mode (LLVM part)
We currently support them only in AArch64. The NEON Reference,
however, says they are 'ARMv7, ARMv8' intrinsics.

Differential Revision: https://reviews.llvm.org/D47447


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@334361 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-10 09:27:27 +00:00
Eli Friedman
09c10d86a8 [ARM] Allow CMPZ transforms even if the input has multiple uses.
It looks like this got left in by accident in r289794; I can't think of
any reason this check would be necessary.  (Maybe it was meant to be a
check that the AND has one use? But we check that a few lines earlier.)

Differential Revision: https://reviews.llvm.org/D47921



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@334322 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-08 21:16:56 +00:00
Ivan A. Kosarev
a13992d918 [NEON] Support VLD1xN intrinsics in AArch32 mode (LLVM part)
We currently support them only in AArch64. The NEON Reference,
however, says they are 'ARMv7, ARMv8' intrinsics.

Differential Revision: https://reviews.llvm.org/D47120


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333825 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-02 16:40:03 +00:00
Ivan A. Kosarev
f646a586eb Revert r333819 "[NEON] Support VLD1xN intrinsics in AArch32 mode (Clang part)"
The LLVM part was committed instead of the Clang part.

Differential Revision: https://reviews.llvm.org/D47121


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333824 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-02 16:38:38 +00:00
Ivan A. Kosarev
c5b2db16de [NEON] Support VLD1xN intrinsics in AArch32 mode (Clang part)
We currently support them only in AArch64. The NEON Reference,
however, says they are 'ARMv7, ARMv8' intrinsics.

Differential Revision: https://reviews.llvm.org/D47121


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333819 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-02 16:26:42 +00:00
Adrian Prantl
26b584c691 Remove \brief commands from doxygen comments.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

  for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46290

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@331272 91177308-0d34-0410-b5e6-96231b3b80d8
2018-05-01 15:54:18 +00:00
Nirav Dave
3dd52e62ea [DAG, X86] Revert r327197 "Revert r327170, r327171, r327172"
Reland ISel cycle checking improvements after simplifying node id
invariant traversal and correcting a typo.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327898 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-19 20:19:46 +00:00
Sjoerd Meijer
e9c814d4ae [ARM] Support for v4f16 and v8f16 vectors
This is the groundwork for adding the Armv8.2-A FP16 vector intrinsics, which
use v4f16 and v8f16 vector operands and return values. All the moving parts
are tested with two intrinsics, a 1-operand v8f16 and a 2-operand v4f16
intrinsic. In a follow-up patch the rest of the intrinsics and tests will be
added.

Differential Revision: https://reviews.llvm.org/D44538


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327839 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-19 13:35:25 +00:00
Nirav Dave
f35e82f91d Revert "[DAG, X86] Revert r327197 "Revert r327170, r327171, r327172""
as it times out building test-suite on PPC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327778 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-17 19:24:54 +00:00
Nirav Dave
f79c73681d [DAG, X86] Revert r327197 "Revert r327170, r327171, r327172"
Reland ISel cycle checking improvements after simplifying and reducing
node id invariant traversal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327777 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-17 17:42:10 +00:00
Nirav Dave
87407bd576 Revert: r327172 "Correct load-op-store cycle detection analysis"
r327171 "Improve Dependency analysis when doing multi-node Instruction Selection"
        r328170 "[DAG] Enforce stricter NodeId invariant during Instruction selection"

Reverting the patch as the NodeId invariant change is causing pathological
increases in compile time on PPC.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327197 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-10 02:16:15 +00:00
Nirav Dave
0f0ed7bf93 [DAG] Enforce stricter NodeId invariant during Instruction selection
Instruction Selection makes use of the topological ordering of nodes
by node id (a node's operands have smaller node ids than it does) when doing
cycle detection.  During selection we may violate this property as a
selection of multiple nodes may induce a use dependence (and thus a
node id restriction) between two unrelated nodes. If a selected node
has an unselected successor, this may allow us to miss a cycle and
accept an invalid selection.

This patch fixes this by marking all unselected successors of a
selected node with a negated node id. We avoid pruning on such negative
ids but can still reconstruct the original id for pruning.
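
A minimal sketch of that marking scheme, assuming hypothetical helper names
rather than the actual SelectionDAGISel code:

  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // Flag a node by storing -(Id + 1): id 0 stays representable, and the
  // original id can always be recovered for pruning decisions.
  void markUnselectedSuccessor(SDNode *N) {
    int Id = N->getNodeId();
    if (Id >= 0)
      N->setNodeId(-(Id + 1));
  }

  int getOriginalNodeId(const SDNode *N) {
    int Id = N->getNodeId();
    return Id < 0 ? -(Id + 1) : Id;
  }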

In-tree targets have been updated to replace DAG-level replacements
with ISel-level ones which enforce this property.

This preemptively fixes PR36312 before the triggering commit r324359 relands.

Reviewers: craig.topper, bogner, jyknight

Subscribers: arsenm, nhaehnle, javed.absar, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D43198

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327170 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-09 20:57:15 +00:00
Florian Hahn
98079e294f [ARM] Fix codegen for VLD3/VLD4/VST3/VST4 with WB
Code generation of VLD3, VLD4, VST3 and VST4 with register writeback is
broken due to 2 separate bugs:

1) VLD1d64TPseudoWB_register and VLD1d64QPseudoWB_register are missing
   rules to expand them to non-pseudo MIR. These are selected for
   ARMISD::VLD3_UPD/VLD4_UPD with v1i64 vectors in SelectVLD.

2) Selection of the right VLD/VST instruction is broken for loads and
   stores of 3 and 4 v1i64 vectors. SelectVLD and SelectVST are called
   with the MIR opcode for fixed writeback (i.e. the increment is the
   access size) and call getVLDSTRegisterUpdateOpcode() to select an
   opcode with register writeback if the base register update is of a
   different size. Since getVLDSTRegisterUpdateOpcode() only knows about
   VLD1/VLD2/VST1/VST2, the call is currently conditional on the number
   of elements in the vector.

   However, VLD1/VST1 is selected by SelectVLD/SelectVST's caller for
   loads and stores of 3 or 4 v1i64 vectors. Therefore the opcode is not
   updated, which later leads to a fixed-writeback instruction being
   constructed with an extra operand for the register writeback.

This patch addresses the two issues as follows:
- it adds the necessary mapping from VLD1d64TPseudoWB_register and
  VLD1d64QPseudoWB_register to VLD1d64Twb_register and
  VLD1d64Qwb_register respectively. Like for the existing _fixed
  variants, the cost of these is bumped for unaligned access.
- it changes the logic in SelectVLD and SelectVST to call isVLDfixed
  and isVSTfixed respectively to decide whether the opcode should be
  updated (see the sketch after this list). It also reworks the logic
  and comments for pushing the writeback offset operand and the r0
  operand to clarify them: the writeback offset needs to be pushed if
  it's a register writeback, and r0 needs to be pushed if not and the
  instruction is a VLD1/VLD2/VST1/VST2.
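
A simplified sketch of the reworked opcode decision; it leans on the helper
names from the message (isVLDfixed, isVSTfixed, getVLDSTRegisterUpdateOpcode)
and elides the surrounding SelectVLD/SelectVST state:

  // Only opcodes that actually have a fixed-writeback variant are switched
  // to their register-writeback form, instead of keying the decision on
  // the number of vector elements as before.
  static unsigned updateOpcodeForWriteback(unsigned Opc, bool IsRegisterUpdate) {
    if (IsRegisterUpdate && (isVLDfixed(Opc) || isVSTfixed(Opc)))
      return getVLDSTRegisterUpdateOpcode(Opc);
    return Opc;  // fixed writeback: the increment equals the access size
  }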

Reviewers: rengolin, t.p.northover, samparker

Reviewed By: samparker

Patch by Thomas Preud'homme <thomas.preudhomme@arm.com>

Differential Revision: https://reviews.llvm.org/D42970


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@326570 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-02 13:02:55 +00:00
Sjoerd Meijer
3b4e0d39af [ARM] Armv8.2-A FP16 code generation (part 1/3)
This is the groundwork for Armv8.2-A FP16 code generation.

Clang passes and returns _Float16 values as floats, together with the required
bitconverts and truncs etc. to implement correct AAPCS behaviour, see D42318.
We will implement half-precision argument passing/returning lowering in the ARM
backend soon, but for now this means that this:

_Float16 sub(_Float16 a, _Float16 b) {
  return a + b;
}

gets lowered to this:

define float @sub(float %a.coerce, float %b.coerce) {
entry:
  %0 = bitcast float %a.coerce to i32
  %tmp.0.extract.trunc = trunc i32 %0 to i16
  %1 = bitcast i16 %tmp.0.extract.trunc to half
  <SNIP>
  %add = fadd half %1, %3
  <SNIP>
}

When FullFP16 is *not* supported, we don't make f16 a legal type, and we get
legalization for "free", i.e. nothing changes and everything works as before.
f16 argument passing/returning is also handled.

When FullFP16 is supported, we do make f16 a legal type, and have 2 places that
we need to patch up: f16 argument passing and returning, which involves minor
tweaks to avoid unnecessary code generation for some bitcasts.

As a "demonstrator" that this works for the different FP16, FullFP16, softfp
modes, etc., I've added match rules to the VSUB instruction description showing
that we can codegen this instruction from IR, but more importantly, also to
some conversion instructions. These conversions were causing issues before in
the FP16 and FullFP16 cases.

I've also added match rules to the VLDRH and VSTRH descriptions, so that we can
actually compile the entire half-precision sub code example above. This showed
that these loads and stores had the wrong addressing mode specified: AddrMode5
instead of AddrMode5FP16, which turned out not to be implemented at all, so that
has also been added.

This is the minimal patch that shows all the different moving parts. In patch
2/3 I will add some efficient lowering of bitcasts, and in patch 3/3 I will add
the remaining Armv8.2-A FP16 instruction descriptions.


Thanks to Sam Parker and Oliver Stannard for their help and reviews!


Differential Revision: https://reviews.llvm.org/D38315



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323512 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-26 09:26:40 +00:00
Andre Vieira
ef93cf0a24 [ARM] Fix erroneous availability of SMMLS for Armv7-M
Differential Revision: https://reviews.llvm.org/D41855


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@322360 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-12 09:21:09 +00:00
David Blaikie
e3a9b4ce3a Fix a bunch more layering of CodeGen headers that are in Target
All these headers already depend on CodeGen headers so moving them into
CodeGen fixes the layering (since CodeGen depends on Target, not the
other way around).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@318490 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-17 01:07:10 +00:00
Momchil Velikov
29a99e3278 [ARM] Split Arm jump table branch into i12 and rs suffixed versions
This is a refactoring/cleanup of the Arm `addrmode2` operand class. The patch
removes it completely.

Differential Revision: https://reviews.llvm.org/D39832



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@318291 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-15 12:02:55 +00:00
Javed Absar
65d41d8235 [ARM] Tidy up banked registers encoding
Moves encoding (SYSm) information of banked registers to ARMSystemRegister.td,
where it rightly belongs and forms a single point of reference in the code.

Reviewed by: @fhahn, @rovka, @olista01
Differential Revision: https://reviews.llvm.org/D36219



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309910 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03 01:24:12 +00:00
Javed Absar
683224ecbd [ARM] Unify handling of M-Class system registers
This patch cleans up and fixes issues in the M-Class system register handling:

1. It defines the system registers and the encoding (SYSm values) in one place:
   a new ARMSystemRegister.td using SearchableTable, thereby removing the
   hand-coded values which existed in multiple places.

2. Some system registers that do not exist, e.g. BASEPRI_MAX_NS, were being allowed!
   Ref: ARMv6/7/8M architecture reference manual.

Reviewed by: @t.p.northover, @olist01, @john.brawn
Differential Revision: https://reviews.llvm.org/D35209



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@308456 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-19 12:57:16 +00:00
Sam Parker
676084e2a9 [ARM] Allow rematerialization of ARM Thumb literal pool loads
Constants are crucial for code size in the ARM Thumb-1 instruction
set. The 16 bit instruction size often does not offer enough space
for immediate arguments. This means that additional instructions are
frequently used to load constants into registers. Since constants are
hoisted, this can lead to significant register spillage if they are
used multiple times in a single function. This can be avoided by
rematerialization, i.e. recomputing a constant instead of reloading
it from the stack. This patch fixes the rematerialization of literal
pool loads in the ARM Thumb instruction set.
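
An illustrative (made-up) source pattern where this matters: a constant too
wide for a 16-bit immediate is used several times, so recomputing the literal
pool load at each use can be cheaper than spilling the hoisted value:

  #include <cstdint>

  // K cannot be encoded as a Thumb-1 immediate, so it lives in a literal
  // pool; rematerialization lets each use reload it instead of spilling.
  uint32_t mix(uint32_t a, uint32_t b, uint32_t c) {
    const uint32_t K = 0x12345678;
    return (a ^ K) + (b ^ K) + (c ^ K);
  }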

Patch by Philip Ginsbach

Differential Revision: https://reviews.llvm.org/D33936


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@308004 91177308-0d34-0410-b5e6-96231b3b80d8
2017-07-14 08:23:56 +00:00
Tim Northover
3d8a34a84b ARM: avoid handing a deleted node back to TableGen during ISel.
When we replaced the multiplicand the destination node might already exist.
When that happens the original gets CSEd and deleted. However, it's actually
used as the offset so nonsense is produced.

Should fix PR32726.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301983 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-02 22:45:19 +00:00
Tim Northover
8cc396e7aa ARM: handle post-indexed NEON ops where the offset isn't the access width.
Before, we assumed that any ConstantInt offset was precisely the access width,
so we could use the "[rN]!" form. ISelLowering only ever created that kind, but
further simplification during combining could lead to unexpected constants and
incorrect codegen.

Should fix PR32658.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300878 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-20 19:54:02 +00:00
Benjamin Kramer
5fc394e871 Fix use-after-frees on memory allocated in a Recycler.
These will become ASan errors once the patch that poisons the memory after
free lands. The x86 change is a hack, but I don't see how to solve this
properly at the moment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300867 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-20 18:29:14 +00:00
Eli Friedman
89e89fb349 [ARM] Use TableGen patterns to select vtbl. NFC.
Differential Revision: https://reviews.llvm.org/D32103



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300749 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-19 20:39:39 +00:00
Eli Friedman
152625875f [ARM] Replace some C++ selection code with TableGen patterns. NFC.
Differential Revision: https://reviews.llvm.org/D30794



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297768 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-14 18:43:37 +00:00
Sam Parker
53c73db7b9 [ARM] Move SMULW[B|T] isel to DAG Combine
Create nodes for smulwb and smulwt and move their selection from
DAGToDAG to DAG combine. smlawb and smlawt can then be selected
using tablegen. Added some helper functions to detect shift patterns
as well as a wrapper around SimplifyDemandedBits. Added a couple of
extra tests.

Differential Revision: https://reviews.llvm.org/D30708



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297716 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-14 09:13:22 +00:00
Artyom Skrobov
180a200844 Refactor the multiply-accumulate combines to act on
ARMISD::ADD[CE] nodes, instead of the generic ISD::ADD[CE].

Summary:
This allows for some simplification because the combines
are no longer limited to just one go at the node before
it gets legalized into an ARM target-specific one.

Reviewers: jmolloy, rogfer01

Subscribers: aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D30401

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297453 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-10 12:41:33 +00:00
Artyom Skrobov
b44e58377d In Thumb1 mode, the custom lowering for ARMISD::CMPZ could never emit tADDi3
Reviewers: jmolloy, t.p.northover

Reviewed By: t.p.northover

Subscribers: t.p.northover, aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D30097

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295478 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-17 18:59:16 +00:00
John Brawn
b0221e3835 [ARM] Fix incorrect mask bits in MSR encoding for write_register intrinsic
In the encoding of system registers in the M-class MSR instruction, the mask bits
should be 2 for registers that don't take a _<bits> qualifier (the instruction
is unpredictable otherwise), and should also be 2 if the register takes a
_<bits> qualifier but it's not present, since omitting _<bits> is an alias for _nzcvq.

Differential Revision: https://reviews.llvm.org/D29828


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294762 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-10 17:41:08 +00:00
Eli Friedman
59768451f2 [ARM] Add ARMISD::VLD1DUP to match vld1_dup more consistently.
Currently, there are substantial problems forming vld1_dup even if the
VDUP survives legalization. The lack of an actual node
leads to terrible results: not only can we not form post-increment vld1_dup
instructions, but we form scalar pre-increment and post-increment
loads which force the loaded value into a GPR. This patch fixes that
by combining the vdup+load into an ARMISD node before DAGCombine
messes it up.

Also includes a crash fix for vld2_dup (see testcase @vld2dupi8_postinc_variable).

Recommitting with a fix to avoid forming vld1dup if the type of the load
doesn't match the type of the vdup (see
https://llvm.org/bugs/show_bug.cgi?id=31404).
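
An illustrative source pattern (not taken from the commit) that benefits: a
dup load whose pointer then advances, which the new node lets us select as a
post-incremented vld1:

  #include <arm_neon.h>

  // The dup load plus pointer increment can now fold into a single
  // post-increment vld1 instead of a scalar load into a GPR.
  float32x2_t load_dup_postinc(const float **p) {
    float32x2_t v = vld1_dup_f32(*p);
    *p += 1;
    return v;
  }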

Differential Revision: https://reviews.llvm.org/D27694



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289972 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 18:44:08 +00:00
Nico Weber
56bffdd266 Revert r289703, it caused PR31404.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289923 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 04:51:25 +00:00
Sjoerd Meijer
bc7935f3f4 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
This is essentially a recommit of r285893, but with a correctness fix. The
problem with the original commit was that this:

bic r5, r7, #31
cbz r5, .LBB2_10

got rewritten into:

lsrs  r5, r7, #5
beq .LBB2_10

The result in destination register r5 is not the same, and this is incorrect
when r5 is not dead. So this fix includes checking the uses of the AND
destination register. Also, compared to the original commit, some regression
tests no longer needed changing because of this extra check.

For completeness, this was the original commit message:

For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more
efficient instruction selection if the bitmask is one consecutive sequence of
set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and
set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and
set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit
into the sign bit with one LSLS and change the condition query from NE/EQ to
MI/PL (we could also implement this by shifting into the carry bit and
branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower
zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two
16-bit instructions but can elide the CMP and doesn't require materializing a
complex immediate, so is also a win.
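
The consecutive-run test quoted above can be verified directly; a small
self-contained version using GCC/Clang builtins:

  #include <cassert>
  #include <cstdint>

  // One consecutive run of set bits iff 32 - clz(bm) - ctz(bm) == popcount(bm).
  bool isConsecutiveMask(uint32_t bm) {
    if (bm == 0)
      return false;  // clz/ctz are undefined for zero
    return 32 - __builtin_clz(bm) - __builtin_ctz(bm) == __builtin_popcount(bm);
  }

  int main() {
    assert(isConsecutiveMask(0x00FF0000));   // single run: cases 1-4 apply
    assert(!isConsecutiveMask(0x00FF00FF));  // two runs: not handled
    return 0;
  }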

Differential Revision: https://reviews.llvm.org/D27761


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289794 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 09:38:59 +00:00
Eli Friedman
c023f9db3f [ARM] Add ARMISD::VLD1DUP to match vld1_dup more consistently.
Currently, there are substantial problems forming vld1_dup even if the
VDUP survives legalization. The lack of an actual node
leads to terrible results: not only can we not form post-increment vld1_dup
instructions, but we form scalar pre-increment and post-increment
loads which force the loaded value into a GPR. This patch fixes that
by combining the vdup+load into an ARMISD node before DAGCombine
messes it up.

Also includes a crash fix for vld2_dup (see testcase @vld2dupi8_postinc_variable).

Differential Revision: https://reviews.llvm.org/D27694



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289703 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-14 20:25:26 +00:00
Chandler Carruth
604dd12e78 Remove a redundant condition found by PVS-Studio.
Filed http://llvm.org/PR30897 to teach Clang to warn on this kind of
stuff.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285945 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-03 17:42:02 +00:00
James Molloy
6300980dd1 Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r285893. It caused (probably) http://lab.llvm.org:8011/builders/clang-cmake-thumbv7-a15-full-sh/builds/83 .

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285912 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-03 14:08:01 +00:00
James Molloy
e03e2fa99d [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
This recommits r281323, which was backed out for two reasons. One, a selfhost failure, and two, it apparently caused Chromium failures. Actually, the latter was a red herring. The log has expired from the former, but I suspect that was a red herring too (actually caused by another problematic patch of mine). Therefore reapplying, and will watch the bots like a hawk.

For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285893 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-03 10:18:20 +00:00
Sam Parker
a6ec572d31 [ARM] Predicate UMAAL selection on hasDSP.
UMAAL is a DSP instruction and it is not available on thumbv7m
(Cortex-M3) and thumbv6m (Cortex-M0/M0+/M1) targets. Also fix a wrong
CHECK prefix in the longMAC.ll test.

Patch by Vadzim Dambrouski.

Differential Revision: https://reviews.llvm.org/D25890



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285278 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-27 09:47:10 +00:00
James Molloy
71beb002a1 [Thumb] Don't try and emit LDRH/LDRB from the constant pool
This is not a valid encoding - these instructions cannot do PC-relative addressing.

The underlying problem here is a whitelist in ARMISelDAGToDAG that unwraps ARMISD::Wrappers during addressing-mode selection. This didn't realise TargetConstantPool was actually possible, so didn't handle it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283323 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-05 14:52:13 +00:00
Mehdi Amini
67f335d992 Use StringRef in Pass/PassManager APIs (NFC)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283004 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-01 02:56:57 +00:00
Tim Northover
ab6de7f248 ARM: check alignment before transforming ldr -> ldm (or similar).
ldm and stm instructions always require 4-byte alignment on the pointer, but we
weren't checking this before trying to reduce code-size by replacing a
post-indexed load/store with them. Unfortunately, we were also dropping this
information in DAG ISel, but that's easy enough to fix.
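
A hedged sketch of the added guard; the shape is assumed, not the exact
ARMLoadStoreOptimizer code:

  #include "llvm/CodeGen/MachineMemOperand.h"
  using namespace llvm;

  // ldm/stm require a 4-byte aligned base, so bail out before merging a
  // post-indexed load/store when the memory operand is less aligned.
  static bool baseAlignmentAllowsLDM(const MachineMemOperand *MMO) {
    return MMO->getAlignment() >= 4;
  }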

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281893 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-19 09:11:09 +00:00
Sanjay Patel
c0a42ffc17 getVectorElementType().getSizeInBits() -> getScalarSizeInBits() ; NFCI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281495 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 16:37:15 +00:00
James Molloy
9502e5be6f Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r281323. It caused chromium test failures and a selfhost failure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281451 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 09:45:28 +00:00
James Molloy
e81b6f3153 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281323 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-13 12:12:32 +00:00
Nico Weber
eebb0bcce0 Revert r281215, it caused PR30358.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281263 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-12 21:40:50 +00:00
James Molloy
91db09d0e8 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281215 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-12 14:30:48 +00:00
James Molloy
5349cafffb [Thumb] Select (CMPZ X, -C) -> (CMPZ (ADDS X, C), 0)
The CMPZ #0 disappears during peepholing, leaving just a tADDi3, tADDi8 or t2ADDri. This avoids having to materialize the expensive negative constant in Thumb-1, and allows shrinking from a 32-bit CMN to a 16-bit ADDS in Thumb-2.
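
The rewrite rests on a simple identity, checked here in a self-contained
snippet:

  #include <cassert>
  #include <cstdint>

  int main() {
    // Comparing X against -C is the same as comparing X + C against zero,
    // which is what lets a CMN with a costly negative immediate become an
    // ADDS followed by a compare-with-zero that peepholing then removes.
    int32_t X = -42, C = 42;
    assert((X == -C) == (X + C == 0));
    return 0;
  }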

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281040 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-09 12:52:24 +00:00
Justin Bogner
6673ea81f6 Replace "fallthrough" comments with LLVM_FALLTHROUGH
This is a mechanical change of comments in switches like fallthrough,
fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead.
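
The shape of the rewrite, with an illustrative switch (LLVM_FALLTHROUGH is
the real macro from llvm/Support/Compiler.h; the cases themselves are made
up):

  #include "llvm/Support/Compiler.h"

  static int classify(int Kind) {
    int Score = 0;
    switch (Kind) {
    case 0:
      Score += 1;
      LLVM_FALLTHROUGH;  // was: // fall-through
    case 1:
      Score += 1;
      break;
    default:
      break;
    }
    return Score;
  }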

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278902 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-17 05:10:15 +00:00