Chris Lattner
389e309bfb
Intrinsics that just load from memory can be treated like loads: they don't
...
have to serialize against each other. This allows us to schedule lvx instructions
across each other, for example.
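A minimal Altivec C sketch of what this enables (assumes a PowerPC target built
with -maltivec; the function and names are illustrative, not from the tree):
both vec_ld calls only read memory, so the two lvx instructions they lower to
no longer need to stay in program order.
#include <altivec.h>

vector float load_and_add(const float *a, const float *b) {
  vector float va = vec_ld(0, a);   /* lowers to lvx; only reads memory        */
  vector float vb = vec_ld(0, b);   /* independent lvx; free to move across it */
  return vec_add(va, vb);
}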
llvm-svn: 27346
2006-04-02 03:41:14 +00:00
Chris Lattner
e314cf19ba
Adjust to change in Intrinsics.gen interface.
...
llvm-svn: 27344
2006-04-02 03:35:01 +00:00
Chris Lattner
104db817c8
Constant fold all of the vector binops. This allows us to compile this:
...
"vector unsigned char mergeLowHigh = (vector unsigned char)
( 8, 9, 10, 11, 16, 17, 18, 19, 12, 13, 14, 15, 20, 21, 22, 23 );
vector unsigned char mergeHighLow = vec_xor( mergeLowHigh, vec_splat_u8(8));"
aka:
void %test2(<16 x sbyte>* %P) {
store <16 x sbyte> cast (<4 x int> xor (<4 x int> cast (<16 x ubyte> < ubyte 8, ubyte 9, ubyte 10, ubyte 11, ubyte 16, ubyte 17, ubyte 18, ubyte 19, ubyte 12, ubyte 13, ubyte 14, ubyte 15, ubyte 20, ubyte 21, ubyte 22, ubyte 23 > to <4 x int>), <4 x int> cast (<16 x sbyte> < sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8 > to <4 x int>)) to <16 x sbyte>), <16 x sbyte> * %P
ret void
}
into this:
_test2:
mfspr r2, 256
oris r4, r2, 32768
mtspr 256, r4
li r4, lo16(LCPI2_0)
lis r5, ha16(LCPI2_0)
lvx v0, r5, r4
stvx v0, 0, r3
mtspr 256, r2
blr
instead of this:
_test2:
mfspr r2, 256
oris r4, r2, 49152
mtspr 256, r4
li r4, lo16(LCPI2_0)
lis r5, ha16(LCPI2_0)
vspltisb v0, 8
lvx v1, r5, r4
vxor v0, v1, v0
stvx v0, 0, r3
mtspr 256, r2
blr
... which occurs here:
http://developer.apple.com/hardware/ve/calcspeed.html
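For reference, a plain-C sketch of the fold itself (reusing the mergeLowHigh
constant from the source above): the xor of the two constant byte vectors is
evaluated lane by lane at compile time, which is why the improved code loads a
single constant pool entry (LCPI2_0) and the vspltisb/vxor pair disappears.
#include <stdio.h>

int main(void) {
  unsigned char mergeLowHigh[16] = { 8, 9, 10, 11, 16, 17, 18, 19,
                                     12, 13, 14, 15, 20, 21, 22, 23 };
  unsigned char folded[16];
  for (int i = 0; i != 16; ++i)
    folded[i] = mergeLowHigh[i] ^ 8;   /* vec_splat_u8(8) is 8 in every lane */
  for (int i = 0; i != 16; ++i)
    printf("%u ", folded[i]);          /* 0 1 2 3 24 25 26 27 4 5 6 7 28 29 30 31 */
  printf("\n");
  return 0;
}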
llvm-svn: 27343
2006-04-02 03:25:57 +00:00
Chris Lattner
badebf1c9b
Add a new -view-legalize-dags command line option
...
llvm-svn: 27342
2006-04-02 03:07:27 +00:00
Chris Lattner
52732a272f
Implement constant folding of bit_convert of arbitrary constant vbuild_vector nodes.
...
llvm-svn: 27341
2006-04-02 02:53:43 +00:00
Chris Lattner
8a66373ad7
These entries already exist
...
llvm-svn: 27340
2006-04-02 02:51:27 +00:00
Chris Lattner
5eefd8e0b3
Add some missing node names
...
llvm-svn: 27339
2006-04-02 02:41:18 +00:00
Chris Lattner
2a24d68439
New note
...
llvm-svn: 27337
2006-04-02 01:47:20 +00:00
Chris Lattner
3aa0246b4a
Constant fold casts from things like <4 x int> -> <4 x uint>, likewise int<->fp.
...
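A plain-C illustration of the idea (hypothetical values): converting each
constant lane produces another constant, so the whole vector cast can be
evaluated at compile time.
int main(void) {
  const int iv[4] = { 1, 2, 3, 4 };   /* stands in for a constant <4 x int> */
  float fv[4];
  for (int i = 0; i != 4; ++i)
    fv[i] = (float)iv[i];             /* each lane converts to a constant   */
  return (int)fv[3];                  /* 4 */
}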
llvm-svn: 27336
2006-04-02 01:38:28 +00:00
Chris Lattner
da4217646a
Custom lower all BUILD_VECTORs so that we can compile vec_splat_u8(8) into
...
"vspltisb v0, 8" instead of a constant pool load.
llvm-svn: 27335
2006-04-02 00:43:36 +00:00
Chris Lattner
13e8d5973c
Prefer larger register classes over smaller ones when a register occurs in
...
multiple register classes. This fixes PowerPC/2006-04-01-FloatDoubleExtend.ll
llvm-svn: 27334
2006-04-02 00:24:45 +00:00
Chris Lattner
704770bfe7
add valuemapper support for inline asm
...
llvm-svn: 27332
2006-04-01 23:17:11 +00:00
Chris Lattner
38318b2706
Implement vnot using VNOR instead of 'vspltisb v0, -1' followed by vxor
...
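The identity behind the change, checked in plain C with an arbitrary value:
~x == ~(x | x), so a single vnor of a register with itself gives the bitwise
NOT without first materializing an all-ones vector.
#include <assert.h>

int main(void) {
  unsigned x = 0xCAFEBABEu;
  assert(~x == ~(x | x));   /* NOR of a value with itself is its NOT */
  return 0;
}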
llvm-svn: 27331
2006-04-01 22:41:47 +00:00
Chris Lattner
c2e9b030da
Fix InstCombine/2006-04-01-InfLoop.ll
...
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner
497bbd4650
Fold A^(B&A) -> (B&A)^A
...
Fold (B&A)^A == ~B & A
This implements InstCombine/xor.ll:test2[56]
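A quick plain-C sanity check of both identities (the constants are arbitrary):
#include <assert.h>

int main(void) {
  unsigned A = 0xDEADBEEFu, B = 0x12345678u;
  assert((A ^ (B & A)) == ((B & A) ^ A));   /* xor is commutative                 */
  assert(((B & A) ^ A) == (~B & A));        /* per bit: A=0 gives 0, A=1 gives ~B */
  return 0;
}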
llvm-svn: 27328
2006-04-01 08:03:55 +00:00
Chris Lattner
a76347d917
Fix Transforms/IndVarsSimplify/2006-03-31-NegativeStride.ll and
...
PR726 by performing consistent signed division, not consistent unsigned
division, when evaluating SCEVs. Do not touch udivs.
llvm-svn: 27326
2006-04-01 04:48:52 +00:00
Chris Lattner
894231e63e
Add a note
...
llvm-svn: 27324
2006-04-01 04:08:29 +00:00
Chris Lattner
79819f52dc
If we can look through vector operations to find the scalar version of an
...
extract_element'd value, do so.
llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner
32bb17a5f3
Shrinkify some more intrinsic definitions.
...
llvm-svn: 27322
2006-03-31 22:41:56 +00:00
Evan Cheng
403cd8f787
An entry about packed type alignments.
...
llvm-svn: 27321
2006-03-31 22:35:14 +00:00
Chris Lattner
12e9ce7104
Pull operand asm string into base class, shrinkifying intrinsic definitions.
...
No functionality change.
llvm-svn: 27320
2006-03-31 22:34:05 +00:00
Evan Cheng
fc0a2ac06e
TargetData.cpp::getTypeInfo() was returning the alignment of the element type as the
...
alignment of a packed type. This is obviously wrong. Added a workaround that
returns the size of the packed type as its alignment. The correct fix would
be to return a target dependent alignment value provided via TargetLowering
(or some other interface).
llvm-svn: 27319
2006-03-31 22:33:42 +00:00
Chris Lattner
b088cfc01a
Delete identity shuffles, implementing CodeGen/Generic/vector-identity-shuffle.ll
...
llvm-svn: 27317
2006-03-31 22:16:43 +00:00
Chris Lattner
3d6e5f8a05
Fix 80 column violations :)
...
llvm-svn: 27315
2006-03-31 21:57:36 +00:00
Evan Cheng
4623ebd3d0
Use an X86 target-specific node X86ISD::PINSRW instead of a malformed
...
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.
llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng
fb980688f1
Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
...
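Reference semantics written out in plain C as an illustration (this is the
standard SSE3 definition of haddps; hsubps has the same shape with subtraction):
void haddps_ref(const float a[4], const float b[4], float out[4]) {
  out[0] = a[0] + a[1];   /* pairwise sums from the first operand  */
  out[1] = a[2] + a[3];
  out[2] = b[0] + b[1];   /* pairwise sums from the second operand */
  out[3] = b[2] + b[3];
}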
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Chris Lattner
d66dd2a4ee
fix a pasto
...
llvm-svn: 27308
2006-03-31 21:19:06 +00:00
Chris Lattner
28219f34bc
Add vperm support for all datatypes
...
llvm-svn: 27307
2006-03-31 20:00:35 +00:00
Chris Lattner
336d6646ab
Rearrange code a bit
...
llvm-svn: 27306
2006-03-31 19:52:36 +00:00
Chris Lattner
786f782398
Add, sub and shuffle are legal for all vector types
...
llvm-svn: 27305
2006-03-31 19:48:58 +00:00
Evan Cheng
7b9a0c6d7a
Add support for using pextrw and pinsrw to extract and insert a word element
...
from a 128-bit vector.
llvm-svn: 27304
2006-03-31 19:22:53 +00:00
Evan Cheng
5da48f30bb
Add vector_extract and vector_insert nodes.
...
llvm-svn: 27303
2006-03-31 19:21:16 +00:00
Chris Lattner
d27ced882b
add a note
...
llvm-svn: 27302
2006-03-31 19:00:22 +00:00
Chris Lattner
8e584affdb
constant fold extractelement with undef operands.
...
llvm-svn: 27301
2006-03-31 18:31:40 +00:00
Chris Lattner
0af2e8be73
extractelement(undef,x) -> undef
...
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner
774bdd598c
Do not endian swap split vector loads. This fixes UnitTests/Vector/sumarray-dbl on PPC.
...
Now all UnitTests/Vector/* tests pass on PPC.
llvm-svn: 27299
2006-03-31 18:22:37 +00:00
Chris Lattner
ffa44397a5
Do not endian swap the operands to a store if the operands came from a vector.
...
This fixes UnitTests/Vector/simple.c with altivec.
llvm-svn: 27298
2006-03-31 18:20:46 +00:00
Chris Lattner
0e7da656a7
Remove dead *extloads. This allows us to codegen vector.ll:test_extract_elt
...
to:
test_extract_elt:
alloc r3 = ar.pfs,0,1,0,0
adds r8 = 12, r32
;;
ldfs f8 = [r8]
mov ar.pfs = r3
br.ret.sptk.many rp
instead of:
test_extract_elt:
alloc r3 = ar.pfs,0,1,0,0
adds r8 = 28, r32
adds r9 = 24, r32
adds r10 = 20, r32
adds r11 = 16, r32
;;
ldfs f6 = [r8]
;;
ldfs f6 = [r9]
adds r8 = 12, r32
adds r9 = 8, r32
adds r14 = 4, r32
;;
ldfs f6 = [r10]
;;
ldfs f6 = [r11]
ldfs f8 = [r8]
;;
ldfs f6 = [r9]
;;
ldfs f6 = [r14]
;;
ldfs f6 = [r32]
mov ar.pfs = r3
br.ret.sptk.many rp
llvm-svn: 27297
2006-03-31 18:10:41 +00:00
Chris Lattner
c3be332547
Delete dead loads in the dag. This allows us to compile
...
vector.ll:test_extract_elt2 into:
_test_extract_elt2:
lfd f1, 32(r3)
blr
instead of:
_test_extract_elt2:
lfd f0, 56(r3)
lfd f0, 48(r3)
lfd f0, 40(r3)
lfd f1, 32(r3)
lfd f0, 24(r3)
lfd f0, 16(r3)
lfd f0, 8(r3)
lfd f0, 0(r3)
blr
llvm-svn: 27296
2006-03-31 18:06:18 +00:00
Chris Lattner
9d379a4ef3
Implement PromoteOp for VEXTRACT_VECTOR_ELT. This fixes
...
Generic/vector.ll:test_extract_elt on non-sse X86 systems.
llvm-svn: 27294
2006-03-31 17:55:51 +00:00
Chris Lattner
e05a1ec544
Scalarized vector stores need not be legal, e.g. if the vector element type
...
needs to be promoted or expanded. Relegalize the scalar store once created.
This fixes CodeGen/Generic/vector.ll:test1 on non-SSE x86 targets.
llvm-svn: 27293
2006-03-31 17:37:22 +00:00
Jeff Cohen
6c699c72a8
Fix build breakage.
...
llvm-svn: 27292
2006-03-31 07:22:05 +00:00
Chris Lattner
e3774da014
note to self: *save* file, then check it in
...
llvm-svn: 27291
2006-03-31 06:04:53 +00:00
Chris Lattner
95d358dbdb
Implement an item from the readme, folding vcmp/vcmp. instructions with
...
identical operands into a single instruction. For example, for:
void test(vector float *x, vector float *y, int *P) {
int v = vec_any_out(*x, *y);
*x = (vector float)vec_cmpb(*x, *y);
*P = v;
}
we now generate:
_test:
mfspr r2, 256
oris r6, r2, 49152
mtspr 256, r6
lvx v0, 0, r4
lvx v1, 0, r3
vcmpbfp. v0, v1, v0
mfcr r4, 2
stvx v0, 0, r3
rlwinm r3, r4, 27, 31, 31
xori r3, r3, 1
stw r3, 0(r5)
mtspr 256, r2
blr
instead of:
_test:
mfspr r2, 256
oris r6, r2, 57344
mtspr 256, r6
lvx v0, 0, r4
lvx v1, 0, r3
vcmpbfp. v2, v1, v0
mfcr r4, 2
*** vcmpbfp v0, v1, v0
rlwinm r4, r4, 27, 31, 31
stvx v0, 0, r3
xori r3, r4, 1
stw r3, 0(r5)
mtspr 256, r2
blr
Testcase here: CodeGen/PowerPC/vcmp-fold.ll
llvm-svn: 27290
2006-03-31 06:02:07 +00:00
Chris Lattner
560f734320
compactify some more instruction definitions
...
llvm-svn: 27288
2006-03-31 05:38:32 +00:00
Chris Lattner
2c3d6bdb55
Compactify comparisons.
...
llvm-svn: 27287
2006-03-31 05:32:57 +00:00
Chris Lattner
e330741a6c
Lower vector compares to VCMP nodes, just like we lower vector comparison
...
predicates to VCMPo nodes.
llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner
a7a7c035b3
These are done
...
llvm-svn: 27284
2006-03-31 04:53:21 +00:00
Chris Lattner
17549e4da1
Add a new method to verify intrinsic function prototypes.
...
llvm-svn: 27282
2006-03-31 04:46:47 +00:00
Chris Lattner
f30369b9b1
Make sure to pass enough values to phi nodes when we are dealing with
...
decimated vectors. This fixes UnitTests/Vector/sumarray-dbl.c
llvm-svn: 27280
2006-03-31 02:12:18 +00:00