973 Commits

Author SHA1 Message Date
Bob Wilson
cb01efb798 Enable optimization of sin / cos pair into call to __sincos_stret for iOS7+.
rdar://12856873
Patch by Evan Cheng, with a fix for rdar://13209539 by Tilmann Scheller
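
A hedged illustration (not from the original commit) of the kind of IR pair this optimization targets, assuming the standard @llvm.sin/@llvm.cos intrinsics:

declare double @llvm.sin.f64(double)
declare double @llvm.cos.f64(double)

define void @foo(double %x, double* %sp, double* %cp) nounwind {
  %s = call double @llvm.sin.f64(double %x)
  %c = call double @llvm.cos.f64(double %x)   ; same operand as the sin above
  store double %s, double* %sp
  store double %c, double* %cp                ; the pair can become one __sincos_stret call
  ret void
}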

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193942 91177308-0d34-0410-b5e6-96231b3b80d8
2013-11-03 06:14:38 +00:00
Jim Grosbach
0e536ee4ca Legalize: Improve legalization of long vector extends.
When an extend more than doubles the size of the elements (e.g., a zext
from v16i8 to v16i32), the normal legalization method of splitting the
vectors will run into problems, because by the time the destination vector is
legal, the source vector is illegal. The end result is that the operation
often becomes scalarized, with typically horrible performance. For
example, on x86_64, the simple input of:
define void @bar(<16 x i8> %a, <16 x i32>* %p) nounwind {
  %tmp = zext <16 x i8> %a to <16 x i32>
  store <16 x i32> %tmp, <16 x i32>*%p
  ret void
}

Generates:
  .section  __TEXT,__text,regular,pure_instructions
  .section  __TEXT,__const
  .align  5
LCPI0_0:
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .long 255                     ## 0xff
  .section  __TEXT,__text,regular,pure_instructions
  .globl  _bar
  .align  4, 0x90
_bar:
  vpunpckhbw  %xmm0, %xmm0, %xmm1
  vpunpckhwd  %xmm0, %xmm1, %xmm2
  vpmovzxwd %xmm1, %xmm1
  vinsertf128 $1, %xmm2, %ymm1, %ymm1
  vmovaps LCPI0_0(%rip), %ymm2
  vandps  %ymm2, %ymm1, %ymm1
  vpmovzxbw %xmm0, %xmm3
  vpunpckhwd  %xmm0, %xmm3, %xmm3
  vpmovzxbd %xmm0, %xmm0
  vinsertf128 $1, %xmm3, %ymm0, %ymm0
  vandps  %ymm2, %ymm0, %ymm0
  vmovaps %ymm0, (%rdi)
  vmovaps %ymm1, 32(%rdi)
  vzeroupper
  ret

So instead we can check if there are legal types that enable us to split
more cleverly when the input vector is already legal such that we don't
turn it into an illegal type. If the extend more than doubles the size of
the input, we check whether:
  - the number of vector elements is even,
  - the source type is legal,
  - the type of a split source is illegal,
  - the type of an extended (by doubling element size) source is legal, and
  - the type of that extended source when split is legal.
If the conditions are met, instead of just splitting both the
destination and the source types, we create an extend that only goes up
one "step" (doubling the element width), and then continue legalizing the
rest of the operation normally. The result is that this acts as a new,
more efficient, termination condition for the loop of "split the
operation until the destination type is legal."
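
Conceptually (an illustrative sketch, not the commit's own example), the new
termination condition legalizes the example above as:

  %half = zext <16 x i8> %a to <16 x i16>      ; one "step": double the element width
  %full = zext <16 x i16> %half to <16 x i32>  ; then split/legalize this part as usual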

With this change, the above example now compiles to:
_bar:
  vpxor %xmm1, %xmm1, %xmm1
  vpunpcklbw  %xmm1, %xmm0, %xmm2
  vpunpckhwd  %xmm1, %xmm2, %xmm3
  vpunpcklwd  %xmm1, %xmm2, %xmm2
  vinsertf128 $1, %xmm3, %ymm2, %ymm2
  vpunpckhbw  %xmm1, %xmm0, %xmm0
  vpunpckhwd  %xmm1, %xmm0, %xmm3
  vpunpcklwd  %xmm1, %xmm0, %xmm0
  vinsertf128 $1, %xmm3, %ymm0, %ymm0
  vmovaps %ymm0, 32(%rdi)
  vmovaps %ymm2, (%rdi)
  vzeroupper
  ret

This generalizes a custom lowering that was added a while back to the
ARM backend. That lowering is no longer necessary, and is removed. The
testcases for it, however, provide excellent ARM tests for this change
and so remain.

rdar://14735100

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193727 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-31 00:20:48 +00:00
Manman Ren
d1ea5928dd Struct byval cleanup: add helper functions to reduce code duplication.
Helper functions are added:
emitPostLd: emit a post-increment load operation with given size.
emitPostSt: emit a post-increment store operation with given size.

No functionality change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193656 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-29 22:27:32 +00:00
Tim Northover
214c37d181 ARM: don't expand atomicrmw inline on Cortex-M0
There's a barrier instruction, so that should still be used, but most actual
atomic operations are going to need a platform decision on the correct
behaviour (either a nop if single-threaded, or OS support otherwise).
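
As an illustrative sketch (not from the original commit), an operation such as:

define i32 @increment(i32* %p) nounwind {
  %old = atomicrmw add i32* %p, i32 1 monotonic
  ret i32 %old
}

is no longer expanded inline on Cortex-M0; its implementation becomes the
platform decision described above.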

rdar://problem/15287210

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193399 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-25 09:30:24 +00:00
Jim Grosbach
3115047182 ARM: Tweak usage of '*vfp' compiler_rt functions.
Only use them if the subtarget has ARM mode, as these routines are implemented
as ARM code.

rdar://15302004

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193381 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-24 23:07:11 +00:00
David Peixotto
e8d84d8936 Remove class abstraction from ARM struct byval lowering
This commit changes the struct byval lowering for arm to use inline
checks for the subtarget instead of a class abstraction to represent
the differences. The class abstraction was judged to be too much
code for this task.

No intended functionality change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193357 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-24 16:39:36 +00:00
Tim Northover
6c0138e5fc ARM: Use non-VFP softcalls on embedded Darwinish targets
The compiler-rt functions __adddf3vfp and so on exist purely to allow Thumb1
code to make use of VFP instructions by switching back to ARM mode; they make
no sense for M-class processors, which don't even have an ARM mode.

Given that justification, in practice this is a platform ABI decision so the
actual check is based on that rather than CPU features.

rdar://problem/15302004

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193327 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-24 10:37:09 +00:00
David Peixotto
7014d274e4 PR17309: ARM backend incorrectly lowers COPY_STRUCT_BYVAL_I32 for thumb1 targets
This commit implements the correct lowering of the
COPY_STRUCT_BYVAL_I32 pseudo-instruction for thumb1 targets.
Previously, the lowering of COPY_STRUCT_BYVAL_I32 generated the
post-increment forms of ldr/ldrh/ldrb instructions. Thumb1 does not
have the post-increment form of these instructions so the generated
assembly contained invalid instructions.

Passing the generated assembly to gcc caused it to complain with an
error like this:

  Error: cannot honor width suffix -- `ldrb r3,[r0],#1'

and the integrated assembler would generate an object file with an
invalid instruction encoding.

This commit contains a small test case that demonstrates the problem
with thumb1 targets as well as an expanded test case that more
thoroughly tests the lowering of byval struct passing for arm,
thumb1, and thumb2 targets.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192916 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-17 19:52:05 +00:00
David Peixotto
6483751a36 Refactor lowering for COPY_STRUCT_BYVAL_I32
This commit refactors the lowering of the COPY_STRUCT_BYVAL_I32
pseudo-instruction in the ARM backend. We introduce a new helper
class that encapsulates all of the operations needed during the
lowering. The operations are implemented for each subtarget in
different subclasses. Currently only arm and thumb2 subtargets are
supported.

This refactoring was done to easily implement support for thumb1
subtargets. This initial patch does not add support for thumb1, but
is only a refactoring. A follow on patch will implement the support
for thumb1 subtargets.

No intended functionality change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192915 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-17 19:49:22 +00:00
Manman Ren
05ac87f864 Struct byval: fix a copy-paste error for thumb2.
PR17309


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192730 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-15 19:42:32 +00:00
Manman Ren
fb92f46459 Struct byval: use the correct alignment for loads generated to load
from struct byval to registers.

We used to pass 0, which means the alignment of PtrVT. Even when the alignment
of the struct is smaller than 4, the LOADs would have an alignment of 4, and
further optimizations could combine the LOADs into an ldm, which would
cause a crash.

The fix is to pass the alignment of the struct byval.
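
A hypothetical illustration (%struct.S and @callee are made up) of the case in
question, in the typed-pointer IR syntax used elsewhere in this log:

%struct.S = type { i8, i8, i8 }               ; 3 bytes, natural alignment 1
declare void @callee(%struct.S* byval align 1)

define void @caller(%struct.S* %p) nounwind {
  call void @callee(%struct.S* byval %p)      ; loads that fill the argument GPRs must use align 1, not 4
  ret void
}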

rdar://problem/15144402


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192126 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-07 19:47:53 +00:00
Matthias Braun
4e54f41d6c ARM: do not add a regmask for TAILJUMPs
The jump doesn't really kill the registers, the following call does but
we never get back anyway.
This avoids some verify-machineinstrs problems when TAILJUMPs are
if-converted.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191962 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-04 16:52:54 +00:00
Tim Northover
bba9390fc6 ARM: support interrupt attribute
This function-attribute modifies the callee-saved register list and function
epilogue (specifically the return instruction) so that a routine is suitable
for use as an interrupt-handler of the specified type without disrupting
user-mode applications.
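
A minimal sketch, assuming the attribute is expressed at the IR level as the
"interrupt" string function attribute:

define void @irq_handler() "interrupt"="IRQ" {
  ret void                        ; the epilogue/return is adjusted for the handler type
}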

rdar://problem/14207019

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191766 91177308-0d34-0410-b5e6-96231b3b80d8
2013-10-01 14:33:28 +00:00
Amara Emerson
268c743a3b [ARM] Use the load-acquire/store-release instructions optimally in AArch32.
Patch by Artyom Skrobov.
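
An illustrative example (not from the original patch) of accesses that can now
use the new instructions, in the 3.x IR syntax used elsewhere in this log:

define i32 @acq_rel(i32* %p, i32 %v) nounwind {
  %x = load atomic i32* %p acquire, align 4       ; load-acquire instead of a plain load plus barrier
  store atomic i32 %v, i32* %p release, align 4   ; store-release instead of a barrier plus plain store
  ret i32 %x
}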


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191428 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-26 12:22:36 +00:00
Weiming Zhao
541681c848 Fix PR 17368: disable vector mul distribution for square of add/sub for ARM
Generally, it is desirable to distribute (a + b) * c to a*c + b*c for
ARM with VMLx forwarding, where a, b and c are vectors.
However, for (a + b)*(a + b), distribution will result in one extra
instruction.
With distribution:
  x = a + b (add)
  y = a * x (mul)
  z = y + b * x (mla)

Without distribution:
  x = a + b (add)
  z = x * x (mul)

This patch checks if a mul is a square of add/sub. If yes, skip
distribution.
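
A small IR sketch (illustrative only) of the case that now skips distribution:

  %x = fadd <4 x float> %a, %b
  %z = fmul <4 x float> %x, %x     ; (a + b) * (a + b): keep the single mul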


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191410 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-25 23:12:06 +00:00
Joey Gouly
2a9af9f18e [ARMv8] Change hasV8Fp to hasFPARMv8, and other command line options
to be more consistent.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190692 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-13 13:46:57 +00:00
Joey Gouly
4897151df6 [ARMv8] Implement the new DMB/DSB operands.
This removes the custom ISD Node: MEMBARRIER and replaces it
with an intrinsic.
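
An illustrative use of the intrinsic form (assuming @llvm.arm.dmb takes the
barrier option as an immediate, where 15 corresponds to SY):

declare void @llvm.arm.dmb(i32)

define void @full_barrier() nounwind {
  call void @llvm.arm.dmb(i32 15)
  ret void
}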



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190055 91177308-0d34-0410-b5e6-96231b3b80d8
2013-09-05 15:35:24 +00:00
Cameron Esfahani
441c557708 Clean up some usage of Triple. The base class has methods for determining if the target is iOS or Linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189604 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-29 20:23:14 +00:00
Tim Northover
22266c1d48 ARM: Use "dmb sy" for barriers on M-class CPUs
The usual default of "dmb ish" (inner-shareable) isn't even a valid instruction
on v6M or v7M (well, it does the same thing but software is strongly
discouraged from using it) so we should emit a full-system barrier there.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189483 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-28 14:39:19 +00:00
Joey Gouly
a0b2d332c1 [ARMv8] Add CodeGen for VMAXNM/VMINNM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189103 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-23 12:01:13 +00:00
Joey Gouly
35eab1db2f [ARMv8] Add CodeGen support for VSEL.
This uses the ARMcmov pattern that Tim cleaned up in r188995.

Thanks to Simon Tatham for his floating point help!
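
A sketch (illustrative, not the commit's test) of a pattern that can now use a
VSEL form instead of a branch or predicated moves:

define float @sel(float %a, float %b, float %x, float %y) nounwind {
  %cmp = fcmp oge float %a, %b
  %r = select i1 %cmp, float %x, float %y
  ret float %r
}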


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189024 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-22 15:29:11 +00:00
Joey Gouly
bad8d4ca59 [ARM] Constrain some register classes in EmitAtomicBinary64 so that
we pass these tests with -verify-machineinstrs.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189006 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-22 12:19:24 +00:00
Tim Northover
32c2bfda77 ARM: implement some simple f64 materializations.
Previously we used a const-pool load for virtually all 64-bit floating values.
Actually, we can get quite a few common values (including 0.0, 1.0) via "vmov"
instructions of one stripe or another.
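
For example (an illustrative sketch, not the commit's own test):

define double @one() nounwind {
  ret double 1.000000e+00          ; can now come from a vmov-style immediate rather than a constant-pool load
}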

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188773 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-20 08:57:11 +00:00
Tim Northover
8775a51d94 ARM: implement allowTruncateForTailCall
Now that it's in place, it seems silly not to let ARM make use of the extra
tail call opportunities.
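
An illustrative example of a call that can now remain a tail call despite the
truncated return value (names are made up):

declare i32 @callee()

define i8 @caller() nounwind {
  %v = tail call i32 @callee()
  %t = trunc i32 %v to i8
  ret i8 %t
}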

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187795 91177308-0d34-0410-b5e6-96231b3b80d8
2013-08-06 13:58:03 +00:00
Saleem Abdulrasool
f7f22a64df [ARM] check bitwidth in PerformORCombine
When simplifying a (or (and B A) (and C ~A)) to a (VBSL A B C), ensure that the
bitwidths of the second operands to both ands match before comparing the negation
of the values.

Split the check of the value of the second operands to the ands.  Move the cast
and variable declaration slightly higher to make it slightly easier to follow.
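
For reference, a sketch (not from the patch) of the pattern in IR terms:

  %nota = xor <4 x i32> %a, <i32 -1, i32 -1, i32 -1, i32 -1>
  %lhs  = and <4 x i32> %b, %a
  %rhs  = and <4 x i32> %c, %nota
  %r    = or  <4 x i32> %lhs, %rhs   ; (or (and B A) (and C ~A)) -> (VBSL A B C), only when the widths agree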

Bug-Id: 16700
Signed-off-by: Saleem Abdulrasool <compnerd@compnerd.org>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187404 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-30 04:43:08 +00:00
Quentin Colombet
17f99a991f [ARM][ISel] Improve the lowering of vector loads.
When vectors are built from a single value, the ARM lowering issues a
scalar_to_vector node.
This node is then always morphed into a move from the general purpose unit to
the vector unit.
When the value comes from a load, this can be simplified into a vector load to
the right lane.

This patch changes the lowering of insert_vector_elt to expose a vector
friendly pattern in this situation.
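
An illustrative sketch of the situation (not the commit's own test case):

define <4 x float> @fromload(float* %p) nounwind {
  %v   = load float* %p, align 4
  %vec = insertelement <4 x float> undef, float %v, i32 0   ; can become a load to a lane instead of a GPR-to-VFP move
  ret <4 x float> %vec
}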

This is a step toward fixing <rdar://problem/14170854>.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186999 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-23 22:34:47 +00:00
Tim Northover
ad9a0d27d3 ARM: allow printing of ARM atomic DAG nodes.
We'd forgotten to provide string representations for the special ARMISD atomic
nodes; this adds them in. No effect on CodeGen, just makes the output of
"-view-whatever-dags" slightly more readable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186406 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-16 12:15:36 +00:00
Tim Northover
2f438131f1 ARM: implement ldrex, strex and clrex intrinsics
Intrinsics already existed for the 64-bit variants, so these support operations
of size at most 32-bits.
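
A usage sketch of the new 32-bit intrinsics (illustrative; @exchange is made up):

declare i32 @llvm.arm.ldrex.p0i32(i32*)
declare i32 @llvm.arm.strex.p0i32(i32, i32*)

define i32 @exchange(i32* %addr, i32 %new) nounwind {
  %old  = call i32 @llvm.arm.ldrex.p0i32(i32* %addr)
  %fail = call i32 @llvm.arm.strex.p0i32(i32 %new, i32* %addr)  ; 0 on success, 1 on failure
  ret i32 %old
}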

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186392 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-16 09:46:55 +00:00
Renato Golin
103ba845f0 ARM EABI divmod support
This patch enables calls to __aeabi_idivmod when in EABI mode,
using the remainder value returned in a register (R1),
enabled by the ARM triple "none-eabi". Note that Darwin and
GNUEABI triples will continue lowering in the GNU style, that is,
using the stack for the remainder.
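
An illustrative example (not from the patch) of a div/rem pair that can share a
single __aeabi_idivmod call on a none-eabi triple:

define i32 @quotrem(i32 %a, i32 %b) nounwind {
  %q = sdiv i32 %a, %b
  %r = srem i32 %a, %b          ; quotient comes back in R0, remainder in R1
  %s = add i32 %q, %r
  ret i32 %s
}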

A fix for SREM/UREM support in the 64-bit lowering is still needed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186390 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-16 09:32:17 +00:00
Craig Topper
b9df53a40b Use llvm::array_lengthof to replace sizeof(array)/sizeof(array[0]).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186301 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-15 04:27:47 +00:00
Craig Topper
a0ec3f9b7b Use SmallVectorImpl& instead of SmallVector to avoid repeating small vector size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186274 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-14 04:42:23 +00:00
Jim Grosbach
dc2d418dd2 ARM: Improve codegen for generic vselect.
Fall back to by-element insert rather than building it up on the stack.

rdar://14351991

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185846 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-08 18:18:52 +00:00
Jakob Stoklund Olesen
f349a6e9e6 Remove the EXCEPTIONADDR, EHSELECTION, and LSDAADDR ISD opcodes.
These exception-related opcodes are not used any longer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185625 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-04 13:54:20 +00:00
Jakob Stoklund Olesen
c93822901a Revert r185595-185596 which broke buildbots.
Revert "Simplify landing pad lowering."
Revert "Remove the EXCEPTIONADDR, EHSELECTION, and LSDAADDR ISD opcodes."

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185600 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-04 00:26:30 +00:00
Jakob Stoklund Olesen
62204220e1 Remove the EXCEPTIONADDR, EHSELECTION, and LSDAADDR ISD opcodes.
These exception-related opcodes are not used any longer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185596 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-03 23:56:31 +00:00
Quentin Colombet
8e2e5ff024 [ARM] Improve the instruction selection of vector loads.
In the ARM back-end, build_vector nodes are lowered to a target-specific
build_vector that uses a floating point type.
This works well, unless the inserted bitcasts survive until instruction
selection. In that case, they incur moves between integer unit and floating
point unit that may result in inefficient code.

In other words, this conversion may introduce artificial dependencies when the
code leading to the build vector cannot be completed with a floating point type.

In particular, this happens when loads are not aligned.

Before this patch, in that case, the compiler generates general purpose loads
and creates the floating point vector from them, instead of directly using the
vector unit.

The patch uses a vector-friendly sequence of code when the inserted bitcasts to
floating point survive DAGCombine.

This is done by a target specific DAGCombine that changes the target specific
build_vector into a sequence of insert_vector_elt that get rid of the bitcasts.

<rdar://problem/14170854>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185587 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-03 21:42:57 +00:00
Tim Northover
a10c01a6c6 ARM: relax the atomic release barrier to "dmb ishst" on Swift
Swift cores implement store barriers that are stronger than the ARM
specification but weaker than general barriers. They are, in fact, just about
enough to provide the ordering needed for atomic operations with release
semantics.

This patch makes use of that quirk.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185527 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-03 09:20:36 +00:00
Tim Northover
40d0492cde Revert r185339 (ARM: relax the atomic release barrier to "dmb ishst")
Turns out I'd misread the architecture reference manual and thought
that was a load/store-store barrier, when it's not.

Thanks for pointing it out Eli!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185356 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-01 18:37:33 +00:00
Tim Northover
d59fc0af0a ARM: relax the atomic release barrier to "dmb ishst"
I believe the full "dmb ish" barrier is not required to guarantee release
semantics for atomic operations. The weaker "dmb ishst" prevents previous
operations being reordered with a store executed afterwards, which is enough.

A key point to note (fortunately already correct) is that this barrier alone is
*insufficient* for sequential consistency, no matter how liberally placed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185339 91177308-0d34-0410-b5e6-96231b3b80d8
2013-07-01 14:48:48 +00:00
Tim Northover
bcd8e7ad4d ARM: ensure fixed-point conversions have sane types
We were generating intrinsics for NEON fixed-point conversions that didn't
exist (e.g. float -> i16). There are two cases to consider:
  + iN is smaller than float. In this case we can do the conversion but need an
    extend or truncate as well.
  + iN is larger than float. In this case using the NEON conversion would be
    incorrect so we don't perform any combining.
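
An illustrative instance of the first case (iN smaller than float); the constant
is just an arbitrary power of two:

  %scaled = fmul <4 x float> %x, <float 1.600000e+01, float 1.600000e+01, float 1.600000e+01, float 1.600000e+01>
  %conv   = fptosi <4 x float> %scaled to <4 x i16>   ; the NEON fixed-point convert produces i32, so a truncate is also needed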

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185158 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-28 15:29:25 +00:00
Stephen Lin
6b97ebe9a3 ARM: Proactively ensure that the LowerCallResult hack for 'this'-returns is not used for incompatible calling conventions.
(Currently, ARM 'this'-returns are handled in the standard calling convention case by treating R0 as preserved and doing some extra magic in LowerCallResult; this may not apply to calling conventions added in the future, so this patch provides and documents an interface for indicating such.)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185024 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-26 21:42:14 +00:00
Chad Rosier
5b3fca50a0 The getRegForInlineAsmConstraint function should only accept MVT value types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184642 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-22 18:37:38 +00:00
Michael Gottesman
41502e1af7 [ARMTargetLowering] ARMISD::{SUB,ADD}{C,E} second result is a boolean implying that upper bits are always 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184231 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-18 20:49:45 +00:00
Michael Gottesman
8493edfb4b Converted an overly aggressive assert to a conditional check in AddCombineTo64bitMLAL.
Said assert assumes that ADDC will always have a glue node as its second
argument and is checked before we even know that we are actually performing the
relevant MLAL optimization. This is incorrect since on ARM we *CAN* codegen ADDC
with a use-list-based second argument. Thus, to have both effects, I converted
the assert into a conditional check; if it fails, we do not perform the
optimization.

In terms of tests, I cannot produce an ADDC from the IR level until my
forthcoming multiprecision optimization patch lands. The tests for that patch
would cause this assert to fail, so they will provide the relevant
coverage.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184230 91177308-0d34-0410-b5e6-96231b3b80d8
2013-06-18 20:49:40 +00:00
Andrew Trick
6e0b2a0cb0 Order CALLSEQ_START and CALLSEQ_END nodes.
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.

Patch by Xiaoyi Guo!

This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182885 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-29 22:03:55 +00:00
Andrew Trick
ac6d9bec67 Track IR ordering of SelectionDAG nodes 2/4.
Change SelectionDAG::getXXXNode() interfaces as well as call sites of
these functions to pass in SDLoc instead of DebugLoc.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182703 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-25 02:42:55 +00:00
Michael J. Spencer
c6af2432c8 Replace Count{Leading,Trailing}Zeros_{32,64} with count{Leading,Trailing}Zeros.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182680 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-24 22:23:49 +00:00
Tim Northover
5a02fc4b5f ARM: implement @llvm.readcyclecounter intrinsic
This implements the @llvm.readcyclecounter intrinsic as the specific
MRC instruction specified in the ARM manuals for CPUs with the Power
Management extensions.

Older CPUs had slightly different methods which may also have to be
implemented eventually, but this should cover all v7 cases.
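
A usage sketch (illustrative):

declare i64 @llvm.readcyclecounter()

define i64 @cycles() nounwind {
  %t = call i64 @llvm.readcyclecounter()   ; becomes the MRC read of the cycle counter described above
  ret i64 %t
}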

rdar://problem/13939186

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182603 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-23 19:11:20 +00:00
Stepan Dyatkovskiy
083bc97344 PR15868 fix.
Introduction:
When the stack alignment is 8 and the size of the GPR part of a parameter is
not a multiple of 8, we add padding to the GPR part so that the part's last
byte lands at an address of the form K*8-1.
We need to do this because the remaining (stack) part of the parameter starts
at address K*8, and the "GPRs head" must attach to it without gaps:

Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack  ------ ...

FIX:
Note that once we add padding, we need to correct *all* argument offsets that
come after the padded one. That is why this fix is needed: argument offsets
were never corrected before this patch. See the new test cases included in
the patch.

We also don't need to insert padding for byval parameters that are stored
entirely in GPRs. Only the last byval parameter needs padding, and only when it
spills beyond the GPRs and the stack alignment is 8.
The stack area allocated for recovered byval params must still satisfy the
"Size mod 8 = 0" restriction.

This patch reduces stack usage in some cases:
we can shrink ArgRegsSaveArea since inner byval params whose size is a multiple
of 4 may be "packed" with alignment 4 in some cases.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182237 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-20 08:01:34 +00:00
Benjamin Kramer
4dc8bdf87d Replace some bit operations with simpler ones. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182226 91177308-0d34-0410-b5e6-96231b3b80d8
2013-05-19 22:01:57 +00:00