Author SHA1 Message Date
Aaron Ballman
1529798ef4 MSVC's implementation of isalnum will assert on characters > 255, so we need to use an unsigned char to ensure the integer promotion happens properly. This fixes an assert in debug builds with CodeGen\X86\utf8.ll
llvm-svn: 160286
2012-07-16 16:18:18 +00:00
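
For context on the fix above, here is a minimal sketch of the unsigned-char cast pattern; the helper name isAlnumSafe is illustrative and not part of the patch.

  #include <cctype>

  // Cast through unsigned char first so a char with the high bit set is not
  // promoted to a negative int; MSVC's isalnum asserts on such values.
  static bool isAlnumSafe(char C) {
    return std::isalnum(static_cast<unsigned char>(C)) != 0;
  }
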
Kostya Serebryany
c80a9f4bea [asan] refactor instrumentation to allow merging the crash callbacks (not fully implemented yet, no functionality change except the BB order)
llvm-svn: 160284
2012-07-16 16:15:40 +00:00
NAKAMURA Takumi
cd72e724ac Target/AMDGPU: Fix includes; otherwise the MSVC build fails.
llvm-svn: 160280
2012-07-16 15:43:50 +00:00
NAKAMURA Takumi
48743bc036 Target/AMDGPU/AMDILIntrinsicInfo.cpp: Use llvm_unreachable() in a non-returning function, instead of assert(0).
llvm-svn: 160279
2012-07-16 15:43:09 +00:00
NAKAMURA Takumi
877e9fac64 Target/AMDGPU/R600KernelParameters.cpp: Don't use "and", "or" as conditional operator...
llvm-svn: 160278
2012-07-16 15:42:35 +00:00
Jack Carter
f2bf098c4f Doubleword Shift Left Logical Plus 32
Mips shift instructions DSLL, DSRL and DSRA are transformed into
DSLL32, DSRL32 and DSRA32, respectively, if the shift amount is between
32 and 63.

Here is a description of DSLL:

Purpose: Doubleword Shift Left Logical Plus 32
To execute a left-shift of a doubleword by a fixed amount of 32 to 63 bits.

Description: GPR[rd] <- GPR[rt] << (sa+32)

The 64-bit doubleword contents of GPR rt are shifted left, inserting
zeros into the emptied bits; the result is placed in GPR rd.
The bit-shift amount in the range 0 to 31 is specified by sa.

This patch implements the direct object output of these instructions.

llvm-svn: 160277
2012-07-16 15:14:51 +00:00
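
To make the sa encoding described above concrete, here is a small illustrative helper (hypothetical, not from the patch) mapping a 64-bit shift amount in the 32-63 range onto the 5-bit sa field used by DSLL32, DSRL32 and DSRA32.

  #include <cassert>

  // A 64-bit shift by S with 32 <= S <= 63 is emitted as the "+32" variant,
  // which stores sa = S - 32 and shifts by sa + 32 at run time.
  static unsigned encodePlus32ShiftAmount(unsigned ShiftAmt) {
    assert(ShiftAmt >= 32 && ShiftAmt <= 63 && "use plain DSLL/DSRL/DSRA below 32");
    return ShiftAmt - 32; // value placed in the 5-bit sa field
  }
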
NAKAMURA Takumi
2d04e559df Target/AMDGPU: [CMake] Fix dependencies. 1) Add intrinsics_gen. Add AMDGPUCommonTableGen.
llvm-svn: 160276
2012-07-16 15:09:11 +00:00
NAKAMURA Takumi
4fd62f7458 Target/AMDGPU/R600KernelParameters.cpp: Fix two includes, <llvm/IRBuilder.h> and <llvm/TypeBuilder.h>
llvm-svn: 160275
2012-07-16 15:08:47 +00:00
Tom Stellard
c75d49d526 Build script changes for R600/SI Codegen v6
llvm-svn: 160272
2012-07-16 14:17:16 +00:00
Tom Stellard
9f326179fc AMDGPU: Add core backend files for R600/SI codegen v6
llvm-svn: 160270
2012-07-16 14:17:08 +00:00
Kostya Serebryany
ea753f7fc1 [asan] initialize asan error callbacks in runOnModule instead of doing that on-demand
llvm-svn: 160269
2012-07-16 14:09:42 +00:00
Nadav Rotem
67ff66bd0c Fix a bug in the 3-address conversion of LEA when one of the operands is an
undef virtual register. The problem is that ProcessImplicitDefs removes the
definition of the register and marks all uses as undef. If we lose the undef
marker, we end up with a register that has no def and is not marked as undef.
The live interval analysis does not collect information for these virtual
registers, and we crash in later passes.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160260
2012-07-16 10:52:25 +00:00
Chandler Carruth
800f86f31b Revert r160254 temporarily.
It turns out that ASan relied on the at-the-end block insertion order to
(purely by happenstance) disable some LLVM optimizations, which start
firing when the ordering is made more "normal". These optimizations then
merge many of the instrumentation reporting calls, which breaks the
return-address-based error reporting in ASan.

We're looking at several different options for fixing this.

llvm-svn: 160256
2012-07-16 10:01:02 +00:00
Chandler Carruth
2f34858fa0 Teach AddressSanitizer to create basic blocks in a more natural order.
This is particularly useful to the backend code generators which try to
process things in the incoming function order.

Also, clean up some uses of IRBuilder to be a bit simpler and clearer.

llvm-svn: 160254
2012-07-16 08:58:53 +00:00
Alexey Samsonov
c68bb48704 This CL changes the function prologue and epilogue emitted on X86 when the stack needs realignment.
It is intended to fix PR11468.

Old prologue and epilogue looked like this:
push %rbp
mov %rsp, %rbp
and $alignment, %rsp
push %r14
push %r15
...
pop %r15
pop %r14
mov %rbp, %rsp
pop %rbp

The problem was referencing the locations of callee-saved registers in exception handling:
those locations had to be recalculated to account for the stack alignment operation. It would
take some effort to implement this in LLVM, as currently MachineLocation can only have the form
"Register + Offset". The function prologue and epilogue are now changed to:

push %rbp
mov %rsp, %rbp
push %r14
push %r15
and $alignment, %rsp
...
lea -$size_of_saved_registers(%rbp), %rsp
pop %r15
pop %r14
pop %rbp

Reviewed by Chad Rosier.

llvm-svn: 160248
2012-07-16 06:54:09 +00:00
Chandler Carruth
0e3ed1b1f3 Move llvm/Support/TypeBuilder.h -> llvm/TypeBuilder.h. This completes
the move of *Builder classes into the Core library.

I could not find any uses of this builder in Clang or DragonEgg.

If there is a desire to have an IR-building-support library that
contains all of these builders, that can be easily added, but currently
it seems likely that these add no real overhead to VMCore.

llvm-svn: 160243
2012-07-15 23:45:24 +00:00
Chandler Carruth
8748299227 Move llvm/Support/MDBuilder.h to llvm/MDBuilder.h, to live with
IRBuilder, DIBuilder, etc.

This is the proper layering as MDBuilder can't be used (or implemented)
without the Core Metadata representation.

Patches to Clang and Dragonegg coming up.

llvm-svn: 160237
2012-07-15 23:26:50 +00:00
Nadav Rotem
338506d265 Fix a bug in the scalarization of BUILD_VECTOR. BUILD_VECTOR elements may be wider than the output element type. Make sure to trunc them if needed.
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160235
2012-07-15 20:39:08 +00:00
Nadav Rotem
a09775b875 Teach getTargetVShiftNode about TargetConstant nodes.
llvm-svn: 160234
2012-07-15 20:27:43 +00:00
Nadav Rotem
0377e0d234 Rename VBROADCASTSDrm to VBROADCASTSDYrm to match the naming convention.
Allow the folding of vbroadcastRR to vbroadcastRM, where the memory operand is a spill slot.

PR12782.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160230
2012-07-15 12:26:30 +00:00
Nadav Rotem
ff9fadbd0a Refactor the code that checks that all operands of a node are UNDEFs.
Add a micro-optimization to getNode of CONCAT_VECTORS when both operands are undefs.
Can't find a testcase for this because VECTOR_SHUFFLE already handles undef operands, but Duncan suggested that we add this.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160229
2012-07-15 08:38:23 +00:00
Chandler Carruth
a974e1667b Reapply r160194, switching to use LV information for finding local kills.
The notable fix is to look at any dependencies attached to the kill
instruction (or other instructions between MI and the kill) where the
dependencies are specific to the register in question.

The old code implicitly handled this by rejecting the transform if *any*
other uses were found within the block, but after the start point. The
new code directly finds the kill, and has to re-use the existing
dependency scan to check for non-kill uses.

This was caught by self-host, but I found the bug via inspection and use
of absurd assert scaffolding to compute the kills in two ways and
compare them. So I have no useful testcase for this other than
"bootstrap". I'd work harder to reduce a test case if this particular
code were likely to live for a long time.

Thanks to Benjamin Kramer for reviewing the fix itself.

llvm-svn: 160228
2012-07-15 03:29:46 +00:00
Nadav Rotem
c40d85dda5 AVX: Fix a bug in getTargetVShiftNode. The shift amount has to be a 128-bit vector with the same element type as the input vector.
This is needed because of the patterns we have for the VP[SLL/SRA/SRL][W/D/Q] instructions.

llvm-svn: 160222
2012-07-14 22:26:05 +00:00
Nadav Rotem
a6dfd89cd0 Add a dagcombine optimization to convert concat_vectors of undefs into a single undef.
The unoptimized concat_vectors ISD node prevented the canonicalization of the vector_shuffle node.

llvm-svn: 160221
2012-07-14 21:30:27 +00:00
Jakob Stoklund Olesen
f5156bd4b2 Account for early-clobber reload instructions.
No test case, there are no in-tree targets that require this.

llvm-svn: 160219
2012-07-14 18:45:35 +00:00
Jakob Stoklund Olesen
70ab9a6638 Be more verbose when detecting dominance problems.
Catch uses of undefined physregs that haven't been added to basic block
live-in lists. Run the verifier to pinpoint the problem.

Also run the verifier when a virtual register use is not jointly
dominated by defs.

llvm-svn: 160207
2012-07-13 23:39:05 +00:00
Andrew Trick
8030f89de4 LSR Fix: check SCEV expression safety before expansion.
All SCEV expressions used by LSR formulae must be safe to
expand, i.e. they may not contain a UDiv unless we can prove that the
denominator is nonzero.

Fixes PR11356: LSR hoists UDiv.

llvm-svn: 160205
2012-07-13 23:33:10 +00:00
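
As a hedged, source-level illustration of the hazard the commit above guards against (the function below is invented for the example): inside the guarded region the division is safe, but expanding an equivalent UDiv above the guard could divide by zero.

  // The trip count Len / Step only matters when Step != 0. If LSR expands a
  // UDiv-based recurrence at a point not protected by the guard, Step == 0
  // can now trap, which is why expansion safety must be proven first.
  static unsigned sumStrided(const unsigned *Data, unsigned Len, unsigned Step) {
    unsigned Sum = 0;
    if (Step != 0)
      for (unsigned i = 0; i < Len / Step; ++i)
        Sum += Data[i * Step];
    return Sum;
  }
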
Andrew Trick
2582249334 IVUsers should only generate SCEVs for values that are safe to speculate.
This allows SCEVExpander to run on the IV expressions.

This codifies an assumption made by LSR to complete the fix for
PR11356, but I haven't been able to generate a separate unit test for
this part. I'm adding it as an extra safety check.

llvm-svn: 160204
2012-07-13 23:33:05 +00:00
Andrew Trick
7efd13174a Factor SCEV traversal code so I can use it elsewhere. No functionality change.
llvm-svn: 160203
2012-07-13 23:33:03 +00:00
Joel Jones
12ea066486 This is one of the first steps toward replacing target-dependent
intrinsics with target-independent intrinsics.  The first instruction(s) to be
handled are the vector versions of count leading zeros (ctlz).

The changes here are to clang so that it generates a target-independent
vector ctlz when it sees an ARM-dependent vector ctlz.  The changes in llvm
are to match the target-independent vector ctlz and in VMCore/AutoUpgrade.cpp
to update any existing bc files containing ARM-dependent vector ctlzs with
target-independent ctlzs.  There are also changes to an existing test case in
llvm for ARM vector count instructions and a new test for the bitcode upgrade.

<rdar://problem/11831778>

There is deliberately no test for the change to clang because, so far as I know, no
consensus has been reached regarding how to test NEON instructions in clang;
q.v. <rdar://problem/8762292>

llvm-svn: 160200
2012-07-13 23:25:25 +00:00
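
As a rough sketch of the target-independent form the upgrade above switches to (helper name invented; include paths follow the current tree, not the tree at the time of this commit), this builds a vector llvm.ctlz call with IRBuilder; the i1 false operand requests a defined result for a zero input, matching NEON's vclz semantics.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Sketch only: emit a target-independent ctlz on a vector value such as <4 x i32>.
  static Value *emitVectorCtlz(IRBuilder<> &Builder, Value *Vec) {
    Module *M = Builder.GetInsertBlock()->getModule();
    Function *Ctlz =
        Intrinsic::getDeclaration(M, Intrinsic::ctlz, Vec->getType());
    // Second operand is the is_zero_undef flag; false keeps ctlz(0) defined.
    return Builder.CreateCall(Ctlz, {Vec, Builder.getFalse()});
  }
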
Chandler Carruth
0d0cb77247 Revert r160194, which switched to use LV information for finding local
kills.

This is causing miscompiles that I'm working on tracking down.

llvm-svn: 160196
2012-07-13 22:23:32 +00:00
Chandler Carruth
7b376840a6 Use the LiveVariables information to efficiently get local kills. This
removes the largest scaling problem in the test cases from PR13225 when
ASan is switched to insert basic blocks in the natural CFG order.

It may also solve some scaling problems for more normal code with large
numbers of basic blocks and variables.

llvm-svn: 160194
2012-07-13 21:18:38 +00:00
Jakob Stoklund Olesen
b8af245a15 Remove variable_ops from call instructions in most targets.
Call instructions are no longer required to be variadic, and
variable_ops should only be used for instructions that encode a variable
number of arguments, like the ARM stm/ldm instructions.

llvm-svn: 160189
2012-07-13 20:44:29 +00:00
Jakob Stoklund Olesen
6944c56847 Remove variable_ops from ARM call instructions.
Function argument registers are added to the call SDNode, but
InstrEmitter now knows how to make those operands implicit, and the call
instruction doesn't have to be variadic.

Explicit register operands should only be those that are encoded in the
instruction; implicit register operands are for extra dependencies like
call arguments and return values.

llvm-svn: 160188
2012-07-13 20:27:00 +00:00
Jack Carter
b8e3cf5fbc The Mips-specific relocation R_MIPS_GOT_DISP
is used in cases where global symbols are
directly represented in the GOT and we use an
offset into the global offset table.

This patch adds direct object support for R_MIPS_GOT_DISP.

llvm-svn: 160183
2012-07-13 19:15:47 +00:00
Benjamin Kramer
308eb1b4c0 Make helper functions static.
llvm-svn: 160173
2012-07-13 13:25:15 +00:00
Craig Topper
a75da664ff Mark VINSERTI128rm as MayLoad=1. Fixes PR13348.
llvm-svn: 160162
2012-07-13 05:46:28 +00:00
Galina Kistanova
7a6df6e6a2 Fixed a few warnings; trimmed empty lines.
llvm-svn: 160159
2012-07-13 01:25:27 +00:00
Jim Grosbach
91775b53f2 Provide function name in 'Cannot select' fatal error.
When dumping the DAG for a fatal 'Cannot select' back-end error, also
provide the name of the function the construct is in. Useful when dealing
with large testcases, as the next step is to llvm-extract the function
in question to get a small(er) testcase.

llvm-svn: 160152
2012-07-13 00:29:09 +00:00
Eric Christopher
2db5457cb8 The end of the prologue should be marked with is_stmt.
Fixes PR13303.

Patch by Paul Robinson!

llvm-svn: 160148
2012-07-12 23:30:25 +00:00
Galina Kistanova
8b02fc8d50 Fixed a few warnings.
llvm-svn: 160142
2012-07-12 20:45:36 +00:00
Benjamin Kramer
558c56f216 Give the rdrand instructions a SideEffect flag and a chain so MachineCSE and MachineLICM don't touch them.
I already had the necessary things in place for IR-level passes but missed the machine passes.

llvm-svn: 160137
2012-07-12 18:14:57 +00:00
Benjamin Kramer
f8e67a04f4 Add intrinsics for Ivy Bridge's rdrand instruction.
The rdrand/cmov sequence is the same as the one emitted by both
GCC and ICC.

Fixes PR13284.

llvm-svn: 160117
2012-07-12 09:31:43 +00:00
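
For reference, the usual way these intrinsics are reached from user code is the <immintrin.h> wrapper shown below (a hedged example; compile with RDRAND support enabled, e.g. -mrdrnd). _rdrand32_step returns nonzero when the hardware signals success via the carry flag.

  #include <cstdio>
  #include <immintrin.h>

  int main() {
    unsigned int Value;
    // Returns nonzero on success; the generated code tests the carry flag.
    if (_rdrand32_step(&Value))
      std::printf("rdrand: %u\n", Value);
    else
      std::printf("rdrand was not ready; retry\n");
    return 0;
  }
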
Duncan Sands
30441cb37c The result type of EXTRACT_VECTOR_ELT doesn't have to match the element type of
the input vector; it can be bigger (this is helpful for PowerPC, where <2 x i16>
is a legal vector type but i16 isn't a legal type, IIRC).  However, this wasn't
being taken into account by ExpandRes_EXTRACT_VECTOR_ELT, causing PR13220.
Lightly tweaked version of a patch by Michael Liao.

llvm-svn: 160116
2012-07-12 09:01:35 +00:00
Craig Topper
6eef81b65b Update GATHER instructions to support 2 read-write operands. Patch from myself and Manman Ren.
llvm-svn: 160110
2012-07-12 06:52:41 +00:00
Evan Cheng
e6c5349fcd Instcombine was transforming:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, -1
  %and = and i64 %sub, %shr
  ret i64 %and

to:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, 2305843009213693951
  %and = and i64 %sub, %shr
  ret i64 %and

The demanded-bits optimization is actually a pessimization here, because the add -1 would
be codegen'ed as a sub 1. Teach the demanded-constant shrinking optimization
to check for a negated constant to make sure it is actually reducing the width
of the constant.

rdar://11793464

llvm-svn: 160101
2012-07-12 01:45:35 +00:00
Jim Grosbach
89caeb3736 TableGen: Location information for diagnostic.
def Pat<...>;

This results in a 'record name is not a string!' diagnostic. Not the best
message, but the lack of location information moves it from not very helpful
to completely useless. We're in the Record class when throwing the
error, so just add the location info directly.

llvm-svn: 160098
2012-07-12 00:53:31 +00:00
Manman Ren
6b6d3e2854 ARM: fix typo in comments
llvm-svn: 160093
2012-07-11 23:47:00 +00:00
Manman Ren
0479bff92d ARM: Fix optimizeCompare to correctly check the safety condition.
It is safe if CPSR is killed or re-defined.
When we are done with the basic block, check whether CPSR is live-out.
Do not optimize away the cmp if CPSR is live-out.

llvm-svn: 160090
2012-07-11 22:51:44 +00:00
Jack Carter
7eebd50da0 Patch for Mips direct object generation.
When WriteFragmentData() reaches its FT_Align case and calls
Asm.getBackend().writeNopData(), nothing is written because the Mips
implementation of writeNopData just returned "true".

For some reason this has not caused problems in 32-bit
mode, but in 64-bit mode it caused an assert when processing
multiple function units.

The test case included will assert without this patch. It
runs twice with different flags to prevent false positives
due to changes in code generation over time.

llvm-svn: 160084
2012-07-11 22:17:39 +00:00
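
A rough outline of the shape of the fix described above; the writeNopData signature has changed across LLVM releases, so treat this as an assumption-laden sketch rather than the committed code. The key point is that the Mips backend must actually emit the padding (an all-zero word encodes nop on Mips) instead of returning true without writing anything.

  // Hypothetical sketch, not the committed patch.
  bool MipsAsmBackend::writeNopData(uint64_t Count, MCObjectWriter *OW) const {
    // Zero-fill the whole span: each full 4-byte word is a Mips nop, and any
    // remainder left over from alignment padding is still harmless zero bytes.
    for (uint64_t i = 0; i != Count; ++i)
      OW->Write8(0);
    return true;
  }
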