28231 Commits

Kai Nacke
858f51b2b0 [mips] Add registers and ALL check prefix to octeon test case.
No functional change.

Reviewed by D. Sanders

llvm-svn: 226574
2015-01-20 16:14:02 +00:00
Kai Nacke
84b6d0bc71 [mips] Add octeon branch instructions bbit0/bbit032/bbit1/bbit132
This commit adds the octeon branch instructions bbit0/bbit032/bbit1/bbit132.
It also includes patterns for instruction selection and test cases.

Reviewed by D. Sanders

llvm-svn: 226573
2015-01-20 16:10:51 +00:00
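
For illustration only (not part of the commit; names made up): a rough C-level view of what the octeon branch-on-bit instructions test. bbit1 branches when bit p of a 64-bit register is set, bbit0 when it is clear, and the *32 variants test bit p+32.

//
#include <stdint.h>

/* bbit1 rs, p, target : branch if bit p of rs is set   (bbit132 tests bit p+32)
   bbit0 rs, p, target : branch if bit p of rs is clear (bbit032 tests bit p+32) */
int bit_is_set(uint64_t rs, unsigned p) {
  if (rs & (1ULL << p))   /* the kind of test the new bbit1 patterns can select */
    return 1;
  return 0;
}
//
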
Evgeniy Stepanov
7e7783512b [msan] Optimize -msan-check-constant-shadow.
The new code no longer creates new basic blocks when the shadow is a
compile-time constant; instead it generates either an unconditional
__msan_warning call or nothing at all.

llvm-svn: 226569
2015-01-20 15:21:35 +00:00
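
For illustration only: a sketch of the effect described above, assuming nothing beyond the __msan_warning runtime hook named in the message; the function below is made up, and the real pass of course operates on LLVM IR, not C.

//
extern void __msan_warning(void);

/* Before: even a constant shadow produced a branch, roughly
       if (shadow != 0) __msan_warning();
   After, with a compile-time-constant shadow:
       non-zero constant -> an unconditional call (shown below),
       zero constant     -> nothing emitted at all,
   so no new basic blocks are created in either case. */
void nonzero_constant_shadow_case(void) {
  __msan_warning();
}
//
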
Chandler Carruth
7cd55815b7 [PM] Port LoopInfo to the new pass manager, adding both a LoopAnalysis
pass and a LoopPrinterPass with the expected associated wiring.

I've added a RUN line to the only test case (!!!) we have that actually
prints loops. Everything seems to be working.

This is somewhat exciting as this is the first analysis using another
analysis to go in for the new pass manager. =D I also believe it is the
last analysis necessary for porting instcombine, but of course I may yet
discover more.

llvm-svn: 226560
2015-01-20 10:58:50 +00:00
Karthik Bhat
8fee134b62 Fix operand reordering logic in SLPVectorizer to generate longer vectorizable chains.
This patch fixes two issues in reorderInputsAccordingToOpcode:
1) AllSameOpcodeLeft and AllSameOpcodeRight were being calculated incorrectly, which prevented vectorization in a few cases.
2) Adds logic to reorder operands when that yields a longer chain of consecutive loads, enabling vectorization (see the sketch after this entry); the same is done for cases where we have an AltOpcode.
Thanks to Michael for the input and review.
Review: http://reviews.llvm.org/D6677

llvm-svn: 226547
2015-01-20 06:11:00 +00:00
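
For illustration only (function and array names made up), the kind of source the reordering helps: the adds are commutative, so swapping the operands written in the opposite order lines up one chain of consecutive loads from a[] and another from b[].

//
void sum4(int *restrict c, const int *restrict a, const int *restrict b) {
  c[0] = a[0] + b[0];
  c[1] = b[1] + a[1];   /* operands written in the opposite order */
  c[2] = a[2] + b[2];
  c[3] = b[3] + a[3];
}
//

After reordering, each operand list forms a longer chain of consecutive loads, which is what the SLP vectorizer wants to see.
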
David Majnemer
4556f4a8ee Bitcode: Don't create comdats when autoupgrading macho bitcode
Don't infer COMDAT groups from older bitcode if the target is macho;
it doesn't have COMDATs.

llvm-svn: 226546
2015-01-20 05:58:07 +00:00
Frederic Riss
53e390879c [dsymutil] Add the detected target triple to the debug map.
It will be needed to instantiate the Target object that we will
use to create all the MC objects for DWARF emission.

llvm-svn: 226525
2015-01-19 23:33:14 +00:00
Duncan P. N. Exon Smith
56c8d44827 AsmParser: Fix error location for missing fields
llvm-svn: 226524
2015-01-19 23:32:36 +00:00
Simon Pilgrim
2ea397ac36 [X86][AVX] Missing AVX1 memory folding float instructions
Now that we can create much more exhaustive X86 memory folding tests, this patch adds the missing AVX1/F16C floating-point instruction stack foldings that we can easily test for, including the scalar intrinsics (add, div, max, min, mul, sub), float/int-to-double conversions, half-precision conversions, rounding, dot product and bit test. The patch also adds a couple of obviously missing SSE instructions (more to follow once we have full SSE testing).

Now that scalar folding is working, it broke a very old test (2006-10-07-ScalarSSEMiscompile.ll). This test appears to make no sense: it tries to ensure that a scalar subtraction isn't folded because it 'would zero the top elts of the loaded vector', which just appears to be wrong to me.

Differential Revision: http://reviews.llvm.org/D7055

llvm-svn: 226513
2015-01-19 22:40:45 +00:00
Rafael Espindola
a02b7738f7 Add r224985 back with fixes.
The fixes are to note that AArch64 has additional restrictions on when local
relocations can be used. In particular, ld64 requires that relocations to
cstring/cfstrings use linker visible symbols.

Original message:

In an assembly expression like

bar:
  .long L0 + 1

the intended semantics is that bar will contain a pointer one byte past L0.

In sections that are merged by content (strings, 4 byte constants, etc.), a
single position in the section doesn't give the linker enough information.
For example, it would not be able to tell that a relocation must point to the
end of a string, since that would look just like the start of the next one.

The solution used in ELF is to use relocations with symbols if there is a non-zero
addend.

In MachO before this patch we would just keep all symbols in some sections.

This would miss some cases (only cstrings on x86_64 were implemented) and was
inefficient since most relocations have an addend of 0 and can be represented
without the symbol.

This patch implements the non-zero addend logic for MachO too.

llvm-svn: 226503
2015-01-19 21:11:14 +00:00
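
For illustration only (names made up), the merged-string case at the C level: a pointer one past the end of one literal is byte-for-byte the same address as the start of whatever string the linker merges in after it, so the relocation must carry a symbol plus an addend rather than a bare position in the section.

//
const char *hello    = "ab";       /* the literal lands in a mergeable string section */
const char *past_end = "ab" + 3;   /* one past the NUL: indistinguishable from the next
                                      string's start without a symbol + addend */
//
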
Colin LeMahieu
9cc2ac99d5 [Hexagon] Updating muxir/ri/ii intrinsics. Setting predicate registers as compatible with i32 rather than doing custom type conversion.
llvm-svn: 226500
2015-01-19 20:31:18 +00:00
Colin LeMahieu
89f087ffd8 [Hexagon] Converting intrinsics combine imm/imm, simple shifts and extends.
llvm-svn: 226483
2015-01-19 18:56:19 +00:00
Colin LeMahieu
c9d3d690e7 [Hexagon] Converting remaining ALU32/ALU intrinsics.
llvm-svn: 226480
2015-01-19 18:33:58 +00:00
Colin LeMahieu
dc770a4bae [Hexagon] Converting ALU32/ALU intrinsics to new patterns.
llvm-svn: 226478
2015-01-19 18:22:19 +00:00
Adrian Prantl
e6b30cee57 Remove support for DIVariable's FlagIndirectVariable and expect
frontends to use a DIExpression with a DW_OP_deref instead.

This is not only a much more natural place for this information; there
is also a technical reason: the FlagIndirectVariable is used to mark a
variable that is turned into a reference by virtue of the calling
convention; this happens, for example, to aggregate return values.
The inliner, for example, may actually need to undo this indirection to
correctly represent the value in its new context. This is impossible to
implement because the DIVariable can't be safely modified. We can however
safely construct a new DIExpression on the fly.

llvm-svn: 226476
2015-01-19 17:57:29 +00:00
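
For illustration only (struct and function names made up), the calling-convention indirection mentioned above: an aggregate returned by value is typically lowered to a hidden pointer (sret) parameter, so the debug info for the returned variable has to look through one level of indirection - previously the FlagIndirectVariable, now a DW_OP_deref inside a DIExpression.

//
struct point { double x, y, z, w; };

struct point make_point(double v) {
  /* 'p' is really constructed through the caller-provided sret pointer,
     so describing it in debug info requires a dereference. */
  struct point p = { v, v, v, v };
  return p;
}
//
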
Greg Fitzgerald
91e9dd41fa [AArch64] Implement GHC calling convention
Original patch by Luke Iannini.  Minor improvements and test added by
Erik de Castro Lopo.

Differential Revision: http://reviews.llvm.org/D6877

From: Erik de Castro Lopo <erikd@mega-nerd.com>
llvm-svn: 226473
2015-01-19 17:40:05 +00:00
Colin LeMahieu
9b7e71b043 [Hexagon] Converting halfword to double accumulating multiply intrinsics.
llvm-svn: 226472
2015-01-19 17:36:32 +00:00
Rafael Espindola
bd29c767dc Produce errors when an assignment expression would use a common symbol.
An assignment will produce a symbol with a given section and offset. There is
no way to represent something like "1 byte after a common symbol".

This matches the behavior of GNU as.

Part of PR22217.

llvm-svn: 226470
2015-01-19 17:30:24 +00:00
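
For illustration only (names made up): at the C level, a tentative definition becomes a common symbol, which has no section or offset until the linker allocates it; that is why an assembly-level assignment relative to it, e.g. 'derived = common_var + 1', has no representation and is now rejected, matching GNU as.

//
/* Tentative definition: emitted as a common symbol, with no section/offset
   assigned until link time. */
int common_var;
//
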
Bradley Smith
ae052a6630 [ARM] SSAT/USAT with an 'asr #32' shift should result in an undefined encoding rather than unpredictable
llvm-svn: 226469
2015-01-19 16:37:17 +00:00
Bradley Smith
7561d7842c [ARM] Fixup sign extend instruction availability w.r.t. DSP extension
llvm-svn: 226468
2015-01-19 16:36:02 +00:00
Rafael Espindola
004c23be3b Bring r226038 back.
No change in this commit, but clang was changed to also produce trivial comdats when
needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

llvm-svn: 226467
2015-01-19 15:16:06 +00:00
Michael Kuperstein
83ab484af5 [MIScheduler] Slightly better handling of constrainLocalCopy when both source and dest are local
This fixes PR21792.

Differential Revision: http://reviews.llvm.org/D6823

llvm-svn: 226433
2015-01-19 07:30:47 +00:00
Hal Finkel
89328f88bf [PowerPC] Add r2 as an operand for all calls under both PPC64 ELF V1 and V2
Our PPC64 ELF V2 call lowering logic added r2 as an operand to all direct call
instructions in order to represent the dependency on the TOC base pointer
value. Restricting this to ELF V2, however, does not seem to make sense: calls
under ELF V1 have the same dependence, and indirect calls have an r2 dependence
just as direct ones do. Make sure the dependence is noted for all calls under both
ELF V1 and ELF V2.

llvm-svn: 226432
2015-01-19 07:20:27 +00:00
Matt Arsenault
e51db749de R600: Remove redundant test
This is already covered in ftrunc.ll

llvm-svn: 226412
2015-01-18 19:30:32 +00:00
Daniel Sanders
b006cbb88e [mips] 'CHECK :' is not a valid check directive. Fixed.
llvm-svn: 226409
2015-01-18 18:43:10 +00:00
Daniel Sanders
26a5237a9a [mips] Make whitespace in disassembler tests more consistent. NFC.
The tests for the ISAs should now be approximately diffable. That is, the
output of 'diff valid-mips1.txt valid-mips2.txt' should emit the lines
for instructions that MIPS-II added to or removed from MIPS-I. This
doesn't work perfectly at the moment due to ordering differences, but it
should be close.

llvm-svn: 226408
2015-01-18 18:38:36 +00:00
Daniel Sanders
60099a2678 [mips] Make whitespace of disassembler tests more consistent by removing blank lines. NFC.
llvm-svn: 226407
2015-01-18 18:21:19 +00:00
Simon Pilgrim
c7e9e7d766 [X86][SSE] Added scalar min/max folding tests. NFC.
llvm-svn: 226406
2015-01-18 18:06:23 +00:00
Simon Pilgrim
fe4662d567 [X86][SSE] Added float extract and xmm extract/insert stack folding tests. NFC.
llvm-svn: 226405
2015-01-18 17:04:32 +00:00
Simon Pilgrim
f6df496d0c [X86][SSE] Added scalar conversion stack folding tests. NFC.
llvm-svn: 226404
2015-01-18 16:22:15 +00:00
Simon Pilgrim
5e48e7f807 AVX1 stack folding tests. NFC.
Began adding more exhaustive tests - all floating-point instructions should now either be tested or have placeholders. We do seem to have a number of missing instructions; I will add a patch for review once the remaining working instructions are added.

I'll then move on to SSE tests and then the integer instructions.

llvm-svn: 226400
2015-01-18 12:56:39 +00:00
Hal Finkel
e9b3a1a030 [PowerPC] Initial PPC64 calling-convention changes for fastcc
The default calling convention specified by the PPC64 ELF (V1 and V2) ABI is
designed to work with both prototyped and non-prototyped/varargs functions. As
a result, GPRs and stack space are allocated for every argument, even those
that are passed in floating-point or vector registers.

GlobalOpt::OptimizeFunctions will transform local non-varargs functions (that
do not have their address taken) to use the 'fast' calling convention.

When functions are using the 'fast' calling convention, don't allocate GPRs for
arguments passed in other types of registers, and don't allocate stack space for
arguments passed in registers. Other changes for the fast calling convention
may be added in the future.

llvm-svn: 226399
2015-01-18 12:08:47 +00:00
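
For illustration only (names made up), the kind of function GlobalOpt::OptimizeFunctions can switch to the 'fast' calling convention: it is local to the file and its address is never taken, so under fastcc its floating-point argument no longer needs a shadow GPR or stack slot.

//
static double scale(double x, double factor) {  /* internal, address never taken */
  return x * factor;
}

double twice(double x) {
  return scale(x, 2.0);   /* caller and callee can both move to the fast convention */
}
//
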
Hal Finkel
470187b350 [PowerPC] Don't list R11 as a patchpoint scratch register
R11's status is the same under both the PPC64 ELF V1 and V2 ABIs: it is
reserved for use as an "environment pointer" for compilation models that
require such a thing. We don't; we also don't need a second scratch register,
and because we support only "local" patchpoint call targets, we might as well
let R11 be used for anyregcc patchpoints.

llvm-svn: 226369
2015-01-17 03:57:34 +00:00
Mehdi Amini
e119c6afaf Improve DAG combine pass on certain IR vector patterns
Loading two 2x32-bit float vectors into the bottom half of a 256-bit vector
produced suboptimal code in AVX2 mode with certain IR combinations.

In particular, the IR optimizer folded 2f32 + 2f32 -> 4f32, 4f32 + 4f32
(undef) -> 8f32 into a 2f32 + 2f32 -> 8f32, which seems more canonical,
but then mysteriously generated rather bad code; the movq/movhpd combination
didn't match.

The problem lay in the BUILD_VECTOR optimization path. The 2f32 inputs
would get promoted to 4f32 by the type legalizer, eventually resulting
in a BUILD_VECTOR on two 4f32 into an 8f32. The BUILD_VECTOR then, recognizing
these were both half the output size, concatenated them and then produced
a shuffle. However, the resulting concat + shuffle was more complex than
it should be; in the case where the upper half of the output is undef, we
probably want to generate shuffle + concat instead.

This enhancement causes the vector_shuffle combine step to recognize this
suboptimal pattern and correct it. I included it there instead of in BUILD_VECTOR
in case the same suboptimal pattern occurs for other reasons.

This results in the optimizer correctly producing the optimal movq + movhpd
sequence for all three variations on this IR, even with AVX2.

I've included a test case.

Radar link: rdar://problem/19287012
Fix for PR 21943.

From: Fiona Glaser <fglaser@apple.com>
llvm-svn: 226360
2015-01-17 01:35:56 +00:00
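
For illustration only (assuming the standard <immintrin.h> intrinsics; the function name is made up), roughly the source-level shape of the pattern: two 2x32-bit float loads placed into the low half of a 256-bit value, where the desired lowering is the simple low/high 64-bit load pair rather than a longer shuffle sequence.

//
#include <immintrin.h>

__m256 low_halves(const float *a, const float *b) {
  __m128 lo   = _mm_loadl_pi(_mm_setzero_ps(), (const __m64 *)a); /* low 64 bits  */
  __m128 both = _mm_loadh_pi(lo, (const __m64 *)b);               /* high 64 bits */
  return _mm256_castps128_ps256(both);        /* upper 128 bits left undefined */
}
//
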
Kevin Enderby
b88ab47be3 Change the test case for llvm-objdump’s -archive-headers option to not check the size
while I once again try to figure out why only the clang-cmake-armv7-a15-full bot
is getting that value wrong.

llvm-svn: 226345
2015-01-16 23:29:07 +00:00
Matt Arsenault
ffd1bd1d5d R600: Clean up floor tests
These were using different naming schemes,
were not using multiple check prefixes, and were not
using -LABEL.

llvm-svn: 226333
2015-01-16 22:11:00 +00:00
Kevin Enderby
2473644cb2 Fix the Archive::Child::getRawSize() method used by llvm-objdump’s -archive-headers option
and tweak its use in llvm-objdump.  Add back the test case for the -archive-headers option.

llvm-svn: 226332
2015-01-16 22:10:36 +00:00
Colin LeMahieu
b0ae008450 [Hexagon] Converting halfword to doubleword multiply intrinsics.
llvm-svn: 226326
2015-01-16 21:41:57 +00:00
Colin LeMahieu
3f0206ff73 [Hexagon] Converting accumulating halfword multiply intrinsics to patterns.
llvm-svn: 226324
2015-01-16 21:36:34 +00:00
Colin LeMahieu
6559af4ce0 [Hexagon] Beginning converting intrinsics to patterns instead of duplicated definitions. Converting halfword multiply intrinsics.
llvm-svn: 226318
2015-01-16 20:38:54 +00:00
Adam Nemet
9fe8c32290 [AVX512] Add intrinsics for masked aligned FP loads and stores
Similar to the unaligned cases.

Test was generated with update_llc_test_checks.py.

Part of <rdar://problem/17688758>

llvm-svn: 226296
2015-01-16 18:50:09 +00:00
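
For illustration only (assuming the usual <immintrin.h> spellings; the function name is made up), the aligned masked forms at the source level, analogous to the unaligned _mm512_mask_loadu_ps/_mm512_mask_storeu_ps cases:

//
#include <immintrin.h>

/* src and dst are assumed to be 64-byte aligned for the aligned forms. */
void masked_copy(float *dst, const float *src, __mmask16 k) {
  __m512 v = _mm512_mask_load_ps(_mm512_setzero_ps(), k, src);
  _mm512_mask_store_ps(dst, k, v);
}
//
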
Adam Nemet
553d4195e9 [AVX512] Remove trailing whitespaces in this test
llvm-svn: 226295
2015-01-16 18:50:07 +00:00
Duncan P. N. Exon Smith
97ed3e1e77 IR: Allow 16-bits for column info
Raise the limit for column information from 8 bits to 16 bits.

llvm-svn: 226291
2015-01-16 17:33:08 +00:00
Andrea Di Biagio
d2869df8d3 [X86][DAG] Disable target specific combine on INSERTPS dag nodes at -O0.
This patch disables target specific combine on X86ISD::INSERTPS dag nodes
if optlevel is CodeGenOpt::None.

The backend currently implements a target specific combine rule that converts
a vector load used by an INSERTPS dag node into a scalar load plus a
scalar_to_vector. This allows ISel to select a single INSERTPSrm instead of
two instructions (i.e. a vector load plus INSERTPSrr).

However, the existing target combine rule on INSERTPS nodes only works under
the assumption that ISel will always be able to match an INSERTPSrm. This is
not true in general at -O0, since the backend only allows folding a load into
the memory operand of an instruction if the optimization level is not
CodeGenOpt::None.

In the example below:

//
__m128 test(__m128 a, __m128 *b) {
  __m128 c = _mm_insert_ps(a, *b, 1 << 6);
  return c;
}
//

Before this patch, at -O0, the backend would have canonicalized the load to 'b'
into a scalar load plus scalar_to_vector. Later on, ISel would have selected an
INSERTPSrr leaving the insertps mask in an inconsistent state:

  movss 4(%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # xmm0 = xmm1[1],xmm0[1,2,3].

With this patch, the backend avoids folding the vector load into the operand of
the INSERTPS. The new codegen at -O0 is:

  movaps (%rdi), %xmm1
  insertps  $64, %xmm1, %xmm0 # %xmm1[1],xmm0[1,2,3].

llvm-svn: 226277
2015-01-16 14:55:26 +00:00
Simon Pilgrim
f787eeaa8f [X86] Refactored stack memory folding tests to explicitly force register spilling
The current 'big vectors' stack-folded reload testing pattern is very bulky and makes it difficult to test all instructions, as big vectors will tend to use only the ymm instruction implementations.

This patch changes the tests to use a nop call that lists explicit xmm registers as side effects; with this we can force a partial register spill of the relevant registers and then check that the reload is correctly folded (see the sketch after this entry). The generated asm only adds the forced spill, a nop instruction and a couple of extra labels (a fraction of the current approach).

More exhaustive tests will follow shortly; I've added some extra tests (the xmm versions of some of the existing folding tests) as a starting point.

Differential Revision: http://reviews.llvm.org/D6932

llvm-svn: 226264
2015-01-16 09:32:54 +00:00
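
For illustration only (the real tests are written in LLVM IR; this is a made-up C analogue of the same trick): an inline-asm nop that claims to clobber the xmm registers forces any live vector values to be spilled and reloaded around it, and the test can then check that the reload is folded into the instruction under test.

//
#include <immintrin.h>

__m128 add_with_forced_spill(__m128 a, __m128 b) {
  __asm__ volatile("nop" : : : "xmm0", "xmm1", "xmm2", "xmm3",
                               "xmm4", "xmm5", "xmm6", "xmm7");
  return _mm_add_ps(a, b);   /* addps should fold its operand reload from the stack */
}
//
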
Timur Iskhodzhanov
6d120e1a54 Revert r226242 - Revert Revert Don't create new comdats in CodeGen
This breaks AddressSanitizer (ninja check-asan) on Windows

llvm-svn: 226251
2015-01-16 08:38:45 +00:00
Filipe Cabecinhas
47d5b20f32 Use report_fatal_error instead of llvm_unreachable, so we don't crash on user input
llvm-svn: 226248
2015-01-16 04:54:12 +00:00
Hal Finkel
04316a019c [PowerPC] Adjust PatchPoints for ppc64le
Bill Schmidt pointed out that some adjustments would be needed to properly
support powerpc64le (using the ELF V2 ABI). For one thing, R11 is not available
as a scratch register, so we need to use R12. R12 is also available under ELF
V1, so to maintain consistency, I flipped the order to make R12 the first
scratch register in the array under both ABIs.

llvm-svn: 226247
2015-01-16 04:40:58 +00:00
Mehdi Amini
a1e86a9849 Fix Reassociate handling of constant in presence of undef float
http://reviews.llvm.org/D6993

llvm-svn: 226245
2015-01-16 03:00:58 +00:00
Rafael Espindola
f1394d41f0 Revert "Revert Don't create new comdats in CodeGen"
This reverts commit r226173, adding r226038 back.

No change in this commit, but clang was changed to also produce trivial comdats for
constructors, destructors and vtables when needed.

Original message:

Don't create new comdats in CodeGen.

This patch stops the implicit creation of comdats during codegen.

Clang now sets the comdat explicitly when it is required. With this patch clang and gcc
now produce the same result in pr19848.

llvm-svn: 226242
2015-01-16 02:22:55 +00:00