527 Commits

Artyom Skrobov
8e3da3b912 Reapply r298417 "[ARM] Recommit the glueless lowering of addc/adde in Thumb1"
The UB in t2_so_imm_neg conversion has been addressed under D31242 / r298512

This reverts commit r298482.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298562 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-22 23:35:51 +00:00
Vitaly Buka
122028fe3a Revert "[ARM] Recommit the glueless lowering of addc/adde in Thumb1, including the amended (no UB anymore) fix for adding/subtracting -2147483648."
Fails check-llvm with ubsan

This reverts commit r298417.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298482 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-22 05:07:44 +00:00
Artyom Skrobov
4a0582c127 [ARM] Recommit the glueless lowering of addc/adde in Thumb1,
including the amended (no UB anymore) fix for adding/subtracting -2147483648.

This reverts r298328 "[ARM] Revert r297443 and r297820."
and partially reverts r297842 "Revert "[Thumb1] Fix the bug when adding/subtracting -2147483648""

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298417 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21 18:39:41 +00:00
Eli Friedman
4355f366da [ARM] Revert r297443 and r297820.
The glueless lowering of addc/adde in Thumb1 has known serious
miscompiles (see https://reviews.llvm.org/D31081), and r297820
causes an infinite loop for certain constructs.  It's not
clear when they will be fixed, so let's just take them out
of the tree for now.

(I resolved a small conflict with r297453.)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298328 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21 00:26:39 +00:00
Matthias Braun
e00719deb7 TargetInstrInfo: Provide default implementation of isTailCall().
In fact, this default implementation should be the only implementation; keep
it virtual for now to accommodate targets that don't model flags correctly.
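
As a minimal sketch of what such a flag-based default can look like (an
assumption about the shape, not necessarily the exact in-tree code), a tail
call is simply an instruction that is both a call and a return:

    #include "llvm/CodeGen/MachineInstr.h"

    // Sketch: relies purely on the generic isCall/isReturn flags, so it only
    // works for targets whose instruction definitions model those flags.
    static bool isTailCallDefault(const llvm::MachineInstr &Inst) {
      return Inst.isCall() && Inst.isReturn();
    }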

Differential Revision: https://reviews.llvm.org/D30747

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297980 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-16 20:02:30 +00:00
Artyom Skrobov
0117588515 De-duplicate the two implementations of ARMBaseInstrInfo::isProfitableToIfCvt() [NFC]
Reviewers: congh, rengolin

Subscribers: aemerson, llvm-commits

Differential Revision: https://reviews.llvm.org/D30934

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297738 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-14 13:38:45 +00:00
Artyom Skrobov
c2bf545fa9 For Thumb1, lower ADDC/ADDE/SUBC/SUBE via the glueless ARMISD nodes,
same as already done for ARM and Thumb2.

Reviewers: jmolloy, rogfer01, efriedma

Subscribers: aemerson, llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D30400

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297443 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-10 07:40:27 +00:00
Krzysztof Parzyszek
88a7ff46b1 Make TargetInstrInfo::isPredicable take a const reference, NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296901 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-03 18:30:54 +00:00
Eugene Zelenko
6159a002c7 [ARM] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293229 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 23:40:06 +00:00
Serge Rogatch
eab2fdc64a [XRay][Arm32] Reduce the portion of the stub and implement more staging for tail calls - in LLVM
Summary:
This patch provides more staging for tail calls in XRay Arm32. Once the logging part of XRay is ready for tail calls, supporting them in the core part of XRay Arm32 may be as easy as changing the number passed to the handler from 1 to 2.
Coupled patch:
- https://reviews.llvm.org/D28674

Reviewers: dberris, rengolin

Reviewed By: dberris

Subscribers: llvm-commits, iid_iunknown, aemerson, rengolin, dberris

Differential Revision: https://reviews.llvm.org/D28673

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293185 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 16:17:03 +00:00
Sjoerd Meijer
360cd34a82 [Thumb] Add support for tMUL in the compare instruction peephole optimizer.
We also want to optimise tests like this: return a*b == 0.  The MULS
instruction is flag setting, so we don't need the CMP instruction but can
instead branch on the result of the MULS. The generated instruction sequence
for this example was: MULS, MOVS, MOVS, CMP. The MOVS instructions load the
boolean values resulting from the select instruction, but these MOVS
instructions are flag setting and were thus preventing this optimisation. Now
we first reorder and move the MULS to just before the CMP, generating the
sequence MOVS, MOVS, MULS, CMP, so that the optimisation can trigger.
Reordering the MULS and MOVS is safe to do because the subsequent MOVS
instructions just set the CPSR register and don't use it, i.e. the CPSR is dead.
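
As an illustration of the effect (a sketch; the exact registers and block
layout will differ), the source pattern and the two instruction orderings
described above are:

    // Hypothetical example of the "return a*b == 0" pattern.
    //   Before reordering: MULS, MOVS, MOVS, CMP
    //     (the flag-setting MOVS clobber the flags produced by MULS,
    //      so the CMP cannot be removed)
    //   After reordering:  MOVS, MOVS, MULS, CMP
    //     (MULS sets the flags right before the CMP, so the peephole can
    //      drop the CMP and branch on the MULS flags instead)
    bool isProductZero(int a, int b) {
      return a * b == 0;
    }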

Differential Revision: https://reviews.llvm.org/D27990


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292608 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 13:10:12 +00:00
Diana Picus
0f6ded417f [ARM] Use helpers for adding pred / CC operands. NFC
Hunt down some of the places where we use bare addReg(0) or addImm(AL).addReg(0)
and replace them with add(condCodeOp()) and add(predOps()). This should make it
easier to understand what those operands represent, without having to look at
the definition of the instruction that we're adding to.
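
For illustration, the change has roughly this shape (a sketch with a
hypothetical instruction; MBB, MI, DL, TII and the registers are assumed to
be in scope):

    // Before: bare operands whose meaning is not obvious at the call site.
    BuildMI(MBB, MI, DL, TII.get(ARM::t2ADDri), DestReg)
        .addReg(SrcReg)
        .addImm(Imm)
        .addImm(ARMCC::AL)   // predicate condition code
        .addReg(0)           // predicate register
        .addReg(0);          // optional CPSR def ('s' bit not set)

    // After: the helpers name what the operands represent.
    BuildMI(MBB, MI, DL, TII.get(ARM::t2ADDri), DestReg)
        .addReg(SrcReg)
        .addImm(Imm)
        .add(predOps(ARMCC::AL))
        .add(condCodeOp());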

Differential Revision: https://reviews.llvm.org/D27984

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292587 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 08:15:24 +00:00
Diana Picus
1279248fc3 [ARM] CodeGen: Remove AddDefaultCC. NFC.
Replace all uses of AddDefaultCC with add(condCodeOp()).
The transformation has been done automatically with a custom tool based on Clang
AST Matchers + RefactoringTool.

Differential Revision: https://reviews.llvm.org/D28557

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291893 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-13 10:18:01 +00:00
Diana Picus
8a47810cd6 [CodeGen] Rename MachineInstrBuilder::addOperand. NFC
Rename from addOperand to just add, to match the other method that has been
added to MachineInstrBuilder for adding more than just 1 operand.

See https://reviews.llvm.org/D28057 for the whole discussion.

Differential Revision: https://reviews.llvm.org/D28556

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291891 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-13 09:58:52 +00:00
Diana Picus
0aacbaa851 [ARM] CodeGen: Remove AddDefaultPred. NFC.
Replace all uses of AddDefaultPred with MachineInstrBuilder::add(predOps()).
This makes the code building MachineInstrs more readable, because it allows us
to write code like:

MIB.addSomeOperand(blah)
   .add(predOps())
   .addAnotherOperand(blahblah)

instead of

AddDefaultPred(MIB.addSomeOperand(blah))
    .addAnotherOperand(blahblah)

This commit also adds the predOps helper in the ARM backend, as well as the add
method taking a variable number of operands to the MachineInstrBuilder.
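
The helpers themselves are small; a sketch of their shape (the exact in-tree
signatures may differ, and the ARMCC/MachineOperand headers are assumed):

    // predOps returns the two operands that make up a predicate: the condition
    // code immediate and the predicate register (0 for "always").
    // condCodeOp returns the operand for the optional CPSR def ('s' bit).
    std::array<MachineOperand, 2> predOps(ARMCC::CondCodes Pred,
                                          unsigned PredReg = 0) {
      return {{MachineOperand::CreateImm(static_cast<int64_t>(Pred)),
               MachineOperand::CreateReg(PredReg, /*isDef=*/false)}};
    }

    MachineOperand condCodeOp(unsigned CCReg = 0) {
      return MachineOperand::CreateReg(CCReg, /*isDef=*/false);
    }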

The transformation has been done mostly automatically with a custom tool based
on Clang AST Matchers + RefactoringTool.

Differential Revision: https://reviews.llvm.org/D28555

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291890 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-13 09:37:56 +00:00
Sjoerd Meijer
bc7935f3f4 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
This is essentially a recommit of r285893, but with a correctness fix. The
problem with the original commit was that this:

bic r5, r7, #31
cbz r5, .LBB2_10

got rewritten into:

lsrs  r5, r7, #5
beq .LBB2_10

The result in the destination register r5 is not the same, which is incorrect
when r5 is not dead. So this fix also checks the uses of the AND's destination
register. Thanks to this extra check, some regression tests no longer needed
changing compared to the original commit.

For completeness, this was the original commit message:

For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more
efficient instruction selection if the bitmask is one consecutive sequence of
set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and
set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and
set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit
into the sign bit with one LSLS and change the condition query from NE/EQ to
MI/PL (we could also implement this by shifting into the carry bit and
branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower
zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two
16-bit instructions but can elide the CMP and doesn't require materializing a
complex immediate, so is also a win.
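
The "one consecutive sequence of set bits" test above translates directly into
code; a small sketch (the helper name is illustrative):

    #include <cstdint>

    // True iff the mask is a single contiguous run of set bits, using the
    // 32 - clz(bm) - ctz(bm) == popcount(bm) formulation from above.
    static bool isContiguousBitmask(uint32_t BM) {
      if (BM == 0)
        return false; // clz/ctz are undefined for 0
      return 32 - __builtin_clz(BM) - __builtin_ctz(BM) ==
             __builtin_popcount(BM);
    }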

Differential Revision: https://reviews.llvm.org/D27761


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289794 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-15 09:38:59 +00:00
James Molloy
6300980dd1 Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r285893. It probably caused http://lab.llvm.org:8011/builders/clang-cmake-thumbv7-a15-full-sh/builds/83.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285912 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-03 14:08:01 +00:00
James Molloy
e03e2fa99d [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
This recommits r281323, which was backed out for two reasons. One, a selfhost failure, and two, it apparently caused Chromium failures. Actually, the latter was a red herring. The log has expired from the former, but I suspect that was a red herring too (actually caused by another problematic patch of mine). Therefore reapplying, and will watch the bots like a hawk.

For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285893 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-03 10:18:20 +00:00
Tim Northover
1ff4f02fe6 ARM: don't rely on push/pop reglists being in order when folding SP adjust.
It would be a very nice invariant to rely on, but unfortunately it doesn't
necessarily hold (and the causes of mis-sorted reglists appear to be quite
varied), so to be robust the frame lowering code can't assume that the first
register in the list is also the first one that actually gets pushed.

Should fix an issue where we were turning something like:

    push {r8, r4, r7, lr}
    sub sp, #24

into nonsense like:

    push {r2, r3, r4, r5, r6, r7, r8, r4, r7, lr}

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285232 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-26 20:01:00 +00:00
Matt Arsenault
93e6e5414d Finish renaming remaining analyzeBranch functions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281535 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 20:43:16 +00:00
Matt Arsenault
b1a710d5f0 Make analyzeBranch family of instruction names consistent
analyzeBranch was renamed to use lowercase first, rename
the related set to match.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281506 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 17:24:15 +00:00
Matt Arsenault
ab302cda5e AArch64: Use TTI branch functions in branch relaxation
The main change is to return the code size from
InsertBranch/RemoveBranch.

Patch mostly by Tim Northover

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281505 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 17:23:48 +00:00
James Molloy
9502e5be6f Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r281323. It caused chromium test failures and a selfhost failure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281451 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 09:45:28 +00:00
James Molloy
e81b6f3153 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281323 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-13 12:12:32 +00:00
Nico Weber
eebb0bcce0 Revert r281215, it caused PR30358.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281263 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-12 21:40:50 +00:00
James Molloy
91db09d0e8 [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281215 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-12 14:30:48 +00:00
Justin Lebar
c71d5b41ef [CodeGen] Split out the notions of MI invariance and MI dereferenceability.
Summary:
An IR load can be invariant, dereferenceable, neither, or both.  But
currently, MI's notion of invariance is IR-invariant &&
IR-dereferenceable.

This patch splits up the notions of invariance and dereferenceability at
the MI level.  It's NFC, so adds some probably-unnecessary
"is-dereferenceable" checks, which we can remove later if desired.

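In other words, what used to read as a single "invariant" query now involves
two independent properties; a hedged sketch of a caller-side check (assuming
MachineMemOperand accessors with these names):

    #include "llvm/CodeGen/MachineMemOperand.h"

    // Sketch: the old MI-level notion of "invariant" corresponds to an
    // operand that is both invariant and dereferenceable after the split.
    static bool isInvariantAndDereferenceable(const llvm::MachineMemOperand &MMO) {
      return MMO.isInvariant() && MMO.isDereferenceable();
    }
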
Reviewers: chandlerc, tstellarAMD

Subscribers: jholewinski, arsenm, nemanjai, llvm-commits

Differential Revision: https://reviews.llvm.org/D23371

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281151 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-11 01:38:58 +00:00
James Molloy
c68299484e [Thumb1] Teach optimizeCompareInstr about thumb1 compares
This avoids emitting a completely unneeded "cmp r0, #0" after a flag-setting instruction when we only care about the Z or C flags.

Add LSL/LSR to the whitelist while we're here and add testing. This code could really do with a spring clean.
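
A sketch of the kind of redundancy this removes (hypothetical source and
registers):

    // For something like the following, the flag-setting Thumb1 shift already
    // sets the flags, so the trailing compare is unneeded when only Z (or C)
    // is consumed:
    //
    //   Before:            After:
    //     lsls r0, r1, #2    lsls r0, r1, #2
    //     cmp  r0, #0        beq  .LBB0_2
    //     beq  .LBB0_2
    bool isShiftedZero(unsigned x) {
      return (x << 2) == 0;
    }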

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281027 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-09 09:51:06 +00:00
Saleem Abdulrasool
02c3a66ad9 ARM: workaround bundled operation predication
This is a Windows ARM specific issue.  If the code path in the if conversion
ends up using a relocation which will form an IMAGE_REL_ARM_MOV32T, we end up
with a bundle to ensure that the mov.w/mov.t pair is not split up.  This is
normally fine; however, if the branch is also predicated, then we end up trying
to predicate the bundle.

For now, report a bundle as being unpredicable.  Although this is false, this
would trigger a failure case previously anyway, so this is no worse.  That is,
there should not be any code which would previously have been if converted and
predicated which would not be now.

Under certain circumstances, it may be possible to "predicate the bundle".  This
would require scanning all bundle instructions, ensuring that the bundle
contains only predicable instructions, and converting the bundle into an IT
block sequence.  If the bundle is larger than the maximal IT block length (4
instructions), it would require materializing multiple IT blocks from the single
bundle.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280689 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-06 04:00:12 +00:00
Oliver Stannard
b85d8f463e [ARM] Add support for embedded position-independent code
This patch adds support for some new relocation models to the ARM
backend:

* Read-only position independence (ROPI): Code and read-only data is accessed
  PC-relative. The offsets between all code and RO data sections are known at
  static link time. This does not affect read-write data.
* Read-write position independence (RWPI): Read-write data is accessed relative
  to the static base register (r9). The offsets between all writeable data
  sections are known at static link time. This does not affect read-only data.

These two modes are independent (they specify how different objects
should be addressed), so they can be used individually or together. They
are otherwise the same as the "static" relocation model, and are not
compatible with SysV-style PIC using a global offset table.

These modes are normally used by bare-metal systems or systems with
small real-time operating systems. They are designed to avoid the need
for a dynamic linker; the only initialisation required is setting r9 to
an appropriate value for RWPI code.

I have only added support to SelectionDAG, not FastISel, because
FastISel is currently disabled for bare-metal targets where these modes
would be used.

Differential Revision: https://reviews.llvm.org/D23195



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278015 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-08 15:28:31 +00:00
Matthias Braun
f79c57a412 MachineFunction: Return reference for getFrameInfo(); NFC
getFrameInfo() never returns nullptr so we should use a reference
instead of a pointer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277017 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-28 18:40:00 +00:00
Sjoerd Meijer
7b78e6e140 TargetInstrInfo: rename GetInstSizeInBytes to getInstSizeInBytes. NFC
Differential Revision: https://reviews.llvm.org/D22925


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276997 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-28 16:32:22 +00:00
Justin Lebar
14fc45e102 [CodeGen] Take a MachineMemOperand::Flags in MachineFunction::getMachineMemOperand.
Summary:
Previously we took an unsigned.

Hooray for type-safety.

Reviewers: chandlerc

Subscribers: dsanders, llvm-commits

Differential Revision: http://reviews.llvm.org/D22282

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275591 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-15 18:26:59 +00:00
Jacques Pienaar
48ed4ab2d6 Rename AnalyzeBranch* to analyzeBranch*.
Summary: NFC. Rename AnalyzeBranch/AnalyzeBranchPredicate to analyzeBranch/analyzeBranchPredicate to follow LLVM coding style and be consistent with TargetInstrInfo's analyzeCompare and analyzeSelect.

Reviewers: tstellarAMD, mcrosier

Subscribers: mcrosier, jholewinski, jfb, arsenm, dschuff, jyknight, dsanders, nemanjai

Differential Revision: https://reviews.llvm.org/D22409

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275564 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-15 14:41:04 +00:00
Duncan P. N. Exon Smith
c347f0af45 ARM: Remove implicit iterator conversions, NFC
Remove remaining implicit conversions from MachineInstrBundleIterator to
MachineInstr* from the ARM backend.  In most cases, I made them less attractive
by preferring MachineInstr& or using a ranged-based for loop.

Once all the backends are fixed I'll make the operator explicit so that this
doesn't bitrot back.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274920 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-08 20:21:17 +00:00
Saleem Abdulrasool
179856d60c ARM: support high registers in __builtin_longjmp on WoA
Windows on ARM uses a pure Thumb-2 environment.  This means that it can select a
high register when doing a __builtin_longjmp.  We would use a tLDRi, which would
truncate the register to a low register.  Use a t2LDRi12 instead to get access to
the full register file.  Tweak the code to just load into PC, as that is an
interworking branch on all supported cores anyway.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274815 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-08 00:48:22 +00:00
Diana Picus
06a4440843 [ARM] Do not test for CPUs, use SubtargetFeatures. Also remove 2 flags.
This is a follow-up for r273544.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.

This commit also removes two command-line flags that weren't used in any of the
tests: widen-vmovs and swift-partial-update-clearance. The former may be easily
replaced with the mattr mechanism, but the latter may not (as it is a subtarget
property, and not a proper feature).

Differential Revision: http://reviews.llvm.org/D21797

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274620 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-06 11:22:11 +00:00
Duncan P. N. Exon Smith
4383a516d1 CodeGen: Use MachineInstr& in LiveVariables API, NFC
Change all the methods in LiveVariables that expect non-null
MachineInstr* to take MachineInstr& and update the call sites.  This
clarifies the API, and designs away a class of iterator to pointer
implicit conversions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274319 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 01:51:32 +00:00
Duncan P. N. Exon Smith
567409db69 CodeGen: Use MachineInstr& in TargetInstrInfo, NFC
This is mostly a mechanical change to make TargetInstrInfo API take
MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator)
when the argument is expected to be a valid MachineInstr.  This is a
general API improvement.

Although it would be possible to do this one function at a time, that
would demand a quadratic amount of churn since many of these functions
call each other.  Instead I've done everything as a block and just
updated what was necessary.

This is mostly mechanical fixes: adding and removing `*` and `&`
operators.  The only non-mechanical change is to split
ARMBaseInstrInfo::getOperandLatencyImpl out from
ARMBaseInstrInfo::getOperandLatency.  Previously, the latter took a
`MachineInstr*` which it updated to the instruction bundle leader; now,
the latter calls the former either with the same `MachineInstr&` or the
bundle leader.

As a side effect, this removes a bunch of MachineInstr* to
MachineBasicBlock::iterator implicit conversions, a necessary step
toward fixing PR26753.

Note: I updated WebAssembly, Lanai, and AVR (despite being
off-by-default) since it turned out to be easy.  I couldn't run tests
for AVR since llc doesn't link with it turned on.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274189 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-30 00:01:54 +00:00
Rafael Espindola
eeeea1e6dc Don't pass a Reloc::Model to GVIsIndirectSymbol.
It already has access to it.

While at it, rename it to isGVIndirectSymbol.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274023 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-28 15:38:13 +00:00
Rafael Espindola
b13ddb18db Don't pass Reloc::Model to places that already have it. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274022 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-28 15:18:26 +00:00
Diana Picus
92eaa569c3 [ARM] Do not test for CPUs, use SubtargetFeatures (Part 2). NFCI
This is a follow-up for r273544.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.

Since the ARM backend seems to have quite a lot of calls to these methods, I
intend to submit 5-6 subtarget features at a time, instead of one big lump.

Differential Revision: http://reviews.llvm.org/D21685

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273853 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-27 09:08:23 +00:00
Diana Picus
f8f2c7f04e [ARM] Do not test for CPUs, use SubtargetFeatures (Part 1). NFCI
This is a cleanup commit similar to r271555, but for ARM.

The end goal is to get rid of the isSwift / isCortexXY / isWhatever methods.

Since the ARM backend seems to have quite a lot of calls to these methods, I
intend to submit 5-6 subtarget features at a time, instead of one big lump.

Differential Revision: http://reviews.llvm.org/D21432

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273544 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-23 07:47:35 +00:00
Benjamin Kramer
af18e017d2 Pass DebugLoc and SDLoc by const ref.
This used to be free, but copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272512 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-12 15:39:02 +00:00
Diana Picus
c5d0b1da6e [ARM] Remove redundant check. NFC
isSwift is tested earlier and known to be false when we reach this code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272127 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-08 10:29:02 +00:00
Tim Northover
464549ab4d ARM: fix handling of SUB immediates in peephole opt.
We were unnecessarily negating an immediate that was going to be used in a SUBri
form. Since ADD/SUB are very similar, we *can* do that, but we have to change the
SUB to an ADD at the same time. This also applies to ADD, and allows us to handle
a slightly larger range of immediates for those two operations.

rdar://25992245

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268276 91177308-0d34-0410-b5e6-96231b3b80d8
2016-05-02 18:30:08 +00:00
Saleem Abdulrasool
9dd2ef1cf2 ARM: follow up improvements for SVN r263118
The initial change was not complete enough to always get the semantics
of __builtin_longjmp correct.  The builtin is translated into a
`tInt_eh_sjlj_longjmp` DAG node.  This node set R7 as clobbered.  However, the
code would then follow up with a clobber of R11.  I had failed to notice the
imp-def,kill on R7 in the isel.  Unfortunately, it seems that it is not possible
to conditionalise the Defs list via an !if.  Instead, construct a new parallel
WIN node and prefer that when targeting Windows.  This ensures that we now both
correctly model the __builtin_longjmp and construct the frame in a more
ABI-conformant manner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263123 91177308-0d34-0410-b5e6-96231b3b80d8
2016-03-10 16:26:37 +00:00
Duncan P. N. Exon Smith
1d75c8d9ec CodeGen: Change MachineInstr to use MachineInstr&, NFC
Change MachineInstr API to prefer MachineInstr& over MachineInstr*
whenever the parameter is expected to be non-null.  Slowly inching
toward being able to fix PR26753.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262149 91177308-0d34-0410-b5e6-96231b3b80d8
2016-02-27 20:01:33 +00:00
Duncan P. N. Exon Smith
5b9b80ea30 CodeGen: TII: Take MachineInstr& in predicate API, NFC
Change TargetInstrInfo API to take `MachineInstr&` instead of
`MachineInstr*` in the functions related to predicated instructions
(I'll try to come back later and get some of the rest).  All of these
functions require non-null parameters already, so references are more
clear.  As a bonus, this happens to factor away a host of implicit
iterator => pointer conversions.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@261605 91177308-0d34-0410-b5e6-96231b3b80d8
2016-02-23 02:46:52 +00:00
Duncan P. N. Exon Smith
110284e56c CodeGen: Bring back MachineBasicBlock::iterator::getInstrIterator()...
This is a little embarrassing.

When I reverted r261504 (getIterator() => getInstrIterator()) in
r261567, I did a `git grep` to see if there were new calls to
`getInstrIterator()` that I needed to migrate.  There were 10-20 hits,
and I blindly did a `sed ...` before calling `ninja check`.

However, these were `MachineInstrBundleIterator::getInstrIterator()`,
which predated r261567.  Perhaps coincidentally, these had an identical
name and return type.

This commit undoes my careless sed and restores
`MachineBasicBlock::iterator::getInstrIterator()`.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@261577 91177308-0d34-0410-b5e6-96231b3b80d8
2016-02-22 21:30:15 +00:00