Author SHA1 Message Date
Chris Lattner
da4217646a Custom lower all BUILD_VECTOR's so that we can compile vec_splat_u8(8) into
"vspltisb v0, 8" instead of a constant pool load.

llvm-svn: 27335
2006-04-02 00:43:36 +00:00
Chris Lattner
38318b2706 Implement vnot using VNOR instead of using 'vspltisb v0, -1' and vxor
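
A minimal illustration (hypothetical function, not from the commit) of a vector NOT that now lowers to a single vnor:

#include <altivec.h>

vector unsigned int vnot_example(vector unsigned int x) {
  /* Previously lowered as "vspltisb v0, -1" followed by vxor;
     now a single "vnor vD, vA, vA" does the job. */
  return vec_nor(x, x);
}
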
llvm-svn: 27331
2006-04-01 22:41:47 +00:00
Chris Lattner
894231e63e Add a note
llvm-svn: 27324
2006-04-01 04:08:29 +00:00
Chris Lattner
32bb17a5f3 Shrinkify some more intrinsic definitions.
llvm-svn: 27322
2006-03-31 22:41:56 +00:00
Evan Cheng
403cd8f787 An entry about packed type alignments.
llvm-svn: 27321
2006-03-31 22:35:14 +00:00
Chris Lattner
12e9ce7104 Pull operand asm string into base class, shrinkifying intrinsic definitions.
No functionality change.

llvm-svn: 27320
2006-03-31 22:34:05 +00:00
Evan Cheng
fc0a2ac06e TargetData.cpp::getTypeInfo() was returning the alignment of the element type as the
alignment of a packed type. This is obviously wrong. Added a workaround that
returns the size of the packed type as its alignment. The correct fix would
be to return a target dependent alignment value provided via TargetLowering
(or some other interface).
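
A small illustration (hypothetical, GCC-style attribute assumed) of why the element alignment is the wrong answer for a packed type: an aligned v4f32 load such as movaps needs 16-byte alignment, while the element type float is only 4-byte aligned.

#include <xmmintrin.h>

/* 16-byte alignment is required here; 4 (the element alignment) is not enough. */
static float data[4] __attribute__((aligned(16)));

__m128 load_packed(void) {
  return _mm_load_ps(data);  /* aligned 128-bit load */
}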

llvm-svn: 27319
2006-03-31 22:33:42 +00:00
Chris Lattner
3d6e5f8a05 Fix 80 column violations :)
llvm-svn: 27315
2006-03-31 21:57:36 +00:00
Evan Cheng
4623ebd3d0 Use an X86 target-specific node, X86ISD::PINSRW, instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.
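
As a hedged illustration (hypothetical wrapper, not from the commit), the C-level operation that maps onto the new X86ISD::PINSRW node:

#include <emmintrin.h>

__m128i insert_word(__m128i v, int x) {
  /* pinsrw: insert a 16-bit value into element 3 of a 128-bit vector. */
  return _mm_insert_epi16(v, x, 3);
}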

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng
fb980688f1 Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
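
For illustration (hypothetical wrapper; the intrinsic mapping is the usual one), a use of the SSE3 horizontal add this enables:

#include <pmmintrin.h>

__m128 hadd_example(__m128 a, __m128 b) {
  /* haddps: pairwise horizontal add of adjacent elements of a and b. */
  return _mm_hadd_ps(a, b);
}
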
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Chris Lattner
d66dd2a4ee Fix a pasto (copy-and-paste typo)
llvm-svn: 27308
2006-03-31 21:19:06 +00:00
Chris Lattner
28219f34bc Add vperm support for all datatypes
llvm-svn: 27307
2006-03-31 20:00:35 +00:00
Chris Lattner
336d6646ab Rearrange code a bit
llvm-svn: 27306
2006-03-31 19:52:36 +00:00
Chris Lattner
786f782398 Add, sub and shuffle are legal for all vector types
llvm-svn: 27305
2006-03-31 19:48:58 +00:00
Evan Cheng
7b9a0c6d7a Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.
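
A minimal sketch (hypothetical wrapper) of the extract side in C:

#include <emmintrin.h>

int extract_word(__m128i v) {
  /* pextrw: extract 16-bit element 5 from a 128-bit vector. */
  return _mm_extract_epi16(v, 5);
}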

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
Evan Cheng
5da48f30bb Add vector_extract and vector_insert nodes.
llvm-svn: 27303
2006-03-31 19:21:16 +00:00
Chris Lattner
d27ced882b add a note
llvm-svn: 27302
2006-03-31 19:00:22 +00:00
Chris Lattner
e3774da014 note to self: *save* file, then check it in
llvm-svn: 27291
2006-03-31 06:04:53 +00:00
Chris Lattner
95d358dbdb Implement an item from the readme: fold vcmp/vcmp. instructions with
identical operands into a single instruction.  For example, for:

void test(vector float *x, vector float *y, int *P) {
  int v = vec_any_out(*x, *y);
  *x = (vector float)vec_cmpb(*x, *y);
  *P = v;
}

we now generate:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v0, v1, v0
        mfcr r4, 2
        stvx v0, 0, r3
        rlwinm r3, r4, 27, 31, 31
        xori r3, r3, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

instead of:

_test:
        mfspr r2, 256
        oris r6, r2, 57344
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v2, v1, v0
        mfcr r4, 2
***     vcmpbfp v0, v1, v0
        rlwinm r4, r4, 27, 31, 31
        stvx v0, 0, r3
        xori r3, r4, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

Testcase here: CodeGen/PowerPC/vcmp-fold.ll

llvm-svn: 27290
2006-03-31 06:02:07 +00:00
Chris Lattner
560f734320 compactify some more instruction definitions
llvm-svn: 27288
2006-03-31 05:38:32 +00:00
Chris Lattner
2c3d6bdb55 Compactify comparisons.
llvm-svn: 27287
2006-03-31 05:32:57 +00:00
Chris Lattner
e330741a6c Lower vector compares to VCMP nodes, just like we lower vector comparison
predicates to VCMPo nodes.
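
As an illustrative example (hypothetical function; vec_cmpgt is ordinary AltiVec C), a vector comparison whose result is used as data, so it lowers to a plain VCMP node rather than the record-form VCMPo:

#include <altivec.h>

vector bool int cmp_example(vector float a, vector float b) {
  return vec_cmpgt(a, b);  /* lowers to vcmpgtfp (non-record form) */
}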

llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner
a7a7c035b3 These are done
llvm-svn: 27284
2006-03-31 04:53:21 +00:00
Chris Lattner
00921c047c Was returning the wrong type.
llvm-svn: 27277
2006-03-31 01:50:09 +00:00
Chris Lattner
a31d719e0a Mark INSERT_VECTOR_ELT as expand
llvm-svn: 27276
2006-03-31 01:48:55 +00:00
Evan Cheng
4ca9bbc1bb Expand all INSERT_VECTOR_ELT (obviously bad) for now.
llvm-svn: 27275
2006-03-31 01:30:39 +00:00
Chris Lattner
8e0dfe133c Modify the TargetLowering::getPackedTypeBreakdown method to also return the
unpromoted element type.

llvm-svn: 27273
2006-03-31 00:46:36 +00:00
Evan Cheng
5d9fc9fdd0 Typo
llvm-svn: 27272
2006-03-31 00:33:57 +00:00
Evan Cheng
c55052da81 Ok for vector_shuffle mask to contain undef elements.
llvm-svn: 27271
2006-03-31 00:30:29 +00:00
Chris Lattner
7f48037ef1 Implement TargetLowering::getPackedTypeBreakdown
llvm-svn: 27270
2006-03-31 00:28:56 +00:00
Chris Lattner
87d3a2e045 Add the rest of the vmul instructions and the vmsum* instructions.
llvm-svn: 27268
2006-03-30 23:39:06 +00:00
Chris Lattner
22b7e551f1 Use a new tblgen feature to significantly shrinkify instruction definitions that
directly correspond to intrinsics.

llvm-svn: 27266
2006-03-30 23:21:27 +00:00
Chris Lattner
6aca6013d2 Add a bunch of new instructions for intrinsics.
llvm-svn: 27265
2006-03-30 23:07:36 +00:00
Evan Cheng
d3c692650f Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match.
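
For illustration (hypothetical wrapper), the kind of two-operand v4f32 shuffle that shufps matches directly:

#include <xmmintrin.h>

__m128 shuffle_example(__m128 a, __m128 b) {
  /* shufps: the low two result elements come from a, the high two from b. */
  return _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 2, 1, 0));
}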

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng
4150ec59a3 More logical ops patterns
llvm-svn: 27257
2006-03-30 07:33:32 +00:00
Evan Cheng
57d481a78a Add support for _mm_cmp{cc}_ss and _mm_cmp{cc}_ps intrinsics
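
A hedged usage sketch (hypothetical wrapper) of one of these compare intrinsics:

#include <xmmintrin.h>

__m128 cmp_lt_example(__m128 a, __m128 b) {
  /* cmpltps: per-element compare producing all-ones / all-zeros masks. */
  return _mm_cmplt_ps(a, b);
}
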
llvm-svn: 27256
2006-03-30 06:21:22 +00:00
Evan Cheng
82d2a6910f Add 128-bit pmovmskb intrinsic support.
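
A short usage sketch (hypothetical wrapper) of the 128-bit pmovmskb intrinsic:

#include <emmintrin.h>

int byte_mask_example(__m128i v) {
  /* pmovmskb: gather the sign bit of each of the 16 bytes into an int mask. */
  return _mm_movemask_epi8(v);
}
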
llvm-svn: 27255
2006-03-30 00:33:26 +00:00
Evan Cheng
9ebe75e915 Change SSE pack operation definitions to fit what the intrinsics expect.
For example, packsswb actually creates a v16i8 from a pair of v8i16, but the
intrinsic specification forces the output type to match the operands.
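
For illustration (hypothetical wrapper), the pack operation in question at the C intrinsic level:

#include <emmintrin.h>

__m128i pack_example(__m128i a, __m128i b) {
  /* packsswb: narrow two v8i16 inputs into one v16i8 with signed saturation. */
  return _mm_packs_epi16(a, b);
}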

llvm-svn: 27254
2006-03-29 23:53:14 +00:00
Evan Cheng
7bc3bc8246 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng
d0d3eade59 Need to special-case splat after all. Make the second operand of a splat
vector_shuffle undef.
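
An illustrative splat (hypothetical wrapper): every result element is the same scalar, and per the message such a splat's vector_shuffle carries an undef second operand:

#include <emmintrin.h>

__m128i splat_example(short x) {
  /* Broadcast x into all eight 16-bit elements. */
  return _mm_set1_epi16(x);
}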

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng
e7701928bb Floating-point logical operation patterns should match bit_convert; otherwise,
integer vector logical operations would match andp{s|d} instead of pand.
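
A hedged example (hypothetical wrapper) of the case being protected: an integer vector AND that should select pand, not andps:

#include <emmintrin.h>

__m128i and_example(__m128i a, __m128i b) {
  /* Both pand and andps compute a 128-bit AND, but the integer form is wanted here. */
  return _mm_and_si128(a, b);
}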

llvm-svn: 27248
2006-03-29 18:47:40 +00:00
Evan Cheng
02b5de9b3e - More shuffle-related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng
6e8b924416 Another entry about shuffles.
llvm-svn: 27245
2006-03-29 03:03:46 +00:00
Evan Cheng
5194a37602 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle-related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Chris Lattner
1a773f8f18 add a note
llvm-svn: 27243
2006-03-29 00:24:13 +00:00
Evan Cheng
e7a50a5851 Added aliases to scalar SSE instructions, e.g. addss, to match x86 intrinsics.
The source operands are of type v4sf, with the upper bits passed through.
Added matching code for these.
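
For illustration (hypothetical wrapper), one of the scalar ops this covers; only the low elements are added, and the upper elements of the result come from the first operand:

#include <xmmintrin.h>

__m128 add_ss_example(__m128 a, __m128 b) {
  return _mm_add_ss(a, b);  /* addss */
}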

llvm-svn: 27240
2006-03-28 23:51:43 +00:00
Evan Cheng
178e36174a Fixing buggy code.
llvm-svn: 27239
2006-03-28 23:41:33 +00:00
Chris Lattner
93559450b8 add a note
llvm-svn: 27227
2006-03-28 18:56:23 +00:00
Jim Laskey
eb38a3e83a Expose base register for DwarfWriter. Refactor code accordingly.
llvm-svn: 27225
2006-03-28 13:48:33 +00:00
Jim Laskey
fa6dfa9212 Added missing paren on behalf of Ramana Radhakrishnan.
llvm-svn: 27223
2006-03-28 10:17:11 +00:00