342 Commits

Author SHA1 Message Date
Chris Lattner
7e80a5a16b Fix a problem fully scalarizing values.
llvm-svn: 26811
2006-03-16 23:05:19 +00:00
Chris Lattner
413bb13b27 Add support for CopyFromReg from vector values. Note: this doesn't support
illegal vector types yet!

llvm-svn: 26799
2006-03-16 19:57:50 +00:00
Chris Lattner
34f2e70f60 Teach CreateRegForValue how to handle vector types.
llvm-svn: 26798
2006-03-16 19:51:18 +00:00
Chris Lattner
95577cb3cd add support for vector->vector casts
llvm-svn: 26788
2006-03-15 22:19:46 +00:00
Jim Laskey
c741139c24 Handle the removal of the debug chain.
llvm-svn: 26729
2006-03-13 13:07:37 +00:00
Evan Cheng
57c206232d Added a parameter to control whether Constant::getStringValue() would chop
off the result string at the first null terminator.

llvm-svn: 26704
2006-03-10 23:52:03 +00:00
Chris Lattner
02e6c29ad2 scrape out bits of llvm-db
llvm-svn: 26701
2006-03-10 22:48:19 +00:00
Chris Lattner
2024573f47 Simplify the interface to the schedulers, to not pass the selected heuristic in.
llvm-svn: 26692
2006-03-10 07:49:12 +00:00
Chris Lattner
dc7268f6a3 remove dbg_declare, it's not used yet.
llvm-svn: 26659
2006-03-09 20:02:42 +00:00
Jim Laskey
a1ad999bba Get rid of the multiple copies of getStringValue. Now a Constant:: method.
llvm-svn: 26616
2006-03-08 18:11:07 +00:00
Chris Lattner
3f23d22d3f Change the interface for getting a target HazardRecognizer to be more clean.
llvm-svn: 26608
2006-03-08 04:25:59 +00:00
Chris Lattner
c6d1fbab70 Hoist the HazardRecognizer out of the ScheduleDAGList.cpp file to where
targets can implement them.  Make the top-down scheduler non-g5-specific.

Remove the old testing hazard recognizer.

llvm-svn: 26569
2006-03-06 00:22:00 +00:00
Chris Lattner
ef4bef419b Split the list scheduler into top-down and bottom-up pieces. The priority
function of the top-down scheduler is completely bogus currently, and
having (future) PPC-specific code in this file is also wrong, but this is a
small incremental step.

llvm-svn: 26552
2006-03-05 21:10:33 +00:00
Chris Lattner
4b4b3e6cbb Codegen copysign[f] into a FCOPYSIGN node
llvm-svn: 26542
2006-03-05 05:09:38 +00:00
Evan Cheng
4afe769361 Add more vector NodeTypes: VSDIV, VUDIV, VAND, VOR, and VXOR.
llvm-svn: 26504
2006-03-03 07:01:07 +00:00
Chris Lattner
999aa36a04 remove the read/write port/io intrinsics.
llvm-svn: 26479
2006-03-03 00:19:58 +00:00
Chris Lattner
ab22220755 Split memcpy/memset/memmove intrinsics into i32/i64 versions, resolving
PR709, and paving the way for future progress.

llvm-svn: 26476
2006-03-03 00:00:25 +00:00
Evan Cheng
d8b92e04a2 Vector ops lowering.
llvm-svn: 26436
2006-03-01 01:09:54 +00:00
Chris Lattner
a451b5ffd4 Add support for output memory constraints.
llvm-svn: 26410
2006-02-27 23:45:39 +00:00
Jeff Cohen
9773f9c447 Get VC++ building again.
llvm-svn: 26351
2006-02-24 02:52:40 +00:00
Chris Lattner
6248339a24 Implement (most of) selection of inline asm memory operands.
llvm-svn: 26350
2006-02-24 02:13:54 +00:00
Chris Lattner
3f267c1683 Lower C_Memory operands.
llvm-svn: 26346
2006-02-24 01:11:24 +00:00
Chris Lattner
5a70e894d1 Fix an endianness problem on big-endian targets with expanded operands
to inline asms.  Mark some methods const.

llvm-svn: 26334
2006-02-23 20:06:57 +00:00
Chris Lattner
b4951fbe82 Record all of the expanded registers in the DAG and machine instr, fixing
several bugs in inline asm expanded operands.

llvm-svn: 26332
2006-02-23 19:21:04 +00:00
Chris Lattner
2fc133f091 This fixes a couple of problems with expansion
llvm-svn: 26318
2006-02-22 23:09:03 +00:00
Chris Lattner
d1886b8a4e Change a whole bunch of code to be built around RegsForValue instead of
a single register number.  This fully implements promotion for inline asms;
expand is close but not quite right yet.

llvm-svn: 26316
2006-02-22 22:37:12 +00:00
Chris Lattner
6bb2c3e9cd split register class handling from explicit physreg handling.
llvm-svn: 26308
2006-02-22 00:56:39 +00:00
Chris Lattner
b5b7cc99c4 Adjust to changes in getRegForInlineAsmConstraint prototype
llvm-svn: 26306
2006-02-21 23:12:12 +00:00
Evan Cheng
a2433f32b4 Dumb bug. Code sees a memcpy from X+c, so it increments the src offset. But
when it turns out not to point to a constant string, it forgot to change the
offset back.

llvm-svn: 26242
2006-02-16 23:11:42 +00:00
Evan Cheng
131901cbb8 If the false case is the current basic block, then this is a self loop.
We do not want to emit "Loop: ... brcond Out; br Loop", as it adds an extra
instruction in the loop.  Instead, invert the condition and emit
"Loop: ... br!cond Loop; br Out.

Generalize the fix by moving it from PPCDAGToDAGISel to SelectionDAGLowering.

llvm-svn: 26231
2006-02-16 08:27:56 +00:00
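
For illustration, a minimal C++ sketch of the check described above (the type
and function names are invented for the sketch, not taken from the actual
SelectionDAGLowering code):

    #include <utility>

    struct MachineBasicBlock;  // stand-in type for the sketch

    void adjustSelfLoopBranch(bool &invertCond,
                              MachineBasicBlock *&trueDest,
                              MachineBasicBlock *&falseDest,
                              MachineBasicBlock *currentBlock) {
      // If the unconditional ("false") successor is the block being emitted,
      // this is a self loop.  Inverting the condition lets the conditional
      // branch jump back to the loop header and the plain branch exit the
      // loop, saving one branch per iteration:
      //   before:  Loop: ... brcond Out; br Loop
      //   after:   Loop: ... br!cond Loop; br Out
      if (falseDest == currentBlock) {
        std::swap(trueDest, falseDest);
        invertCond = !invertCond;
      }
    }
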
Evan Cheng
07063456aa Remove an unused function parameter.
llvm-svn: 26221
2006-02-15 22:12:35 +00:00
Evan Cheng
5c2ecfd29b Turn a memcpy from string constant into a series of stores of constant values.
llvm-svn: 26219
2006-02-15 21:59:04 +00:00
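
As a hand-written C++ illustration of the idea (not compiler output), a small
copy from a string literal can be replaced by stores of the constant bytes,
with no loads from the string data at all:

    #include <cstring>

    void copy_call(char *buf) {
      std::memcpy(buf, "hi!", 4);
    }

    void copy_expanded(char *buf) {
      // Equivalent expansion: stores of known constant values; the real
      // lowering may merge these into wider (e.g. 32-bit) constant stores.
      buf[0] = 'h'; buf[1] = 'i'; buf[2] = '!'; buf[3] = '\0';
    }
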
Evan Cheng
3f9201ab48 Lower memcpy with small constant size operand into a series of load / store
ops.

llvm-svn: 26195
2006-02-15 01:54:51 +00:00
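
A rough C++-level picture of the transformation (illustrative only), for a
copy whose size is a small compile-time constant:

    #include <cstring>

    void copy_call(char *dst, const char *src) {
      std::memcpy(dst, src, 8);
    }

    void copy_expanded(char *dst, const char *src) {
      // Equivalent expansion into explicit load/store pairs; the actual
      // lowering uses the widest legal type (e.g. two 32-bit pairs here)
      // rather than byte-sized operations.
      dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
      dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7];
    }
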
Evan Cheng
6789a748ac Doh again!
llvm-svn: 26188
2006-02-14 23:05:54 +00:00
Evan Cheng
5a2a9d3896 Keep to < 80 cols
llvm-svn: 26177
2006-02-14 20:12:38 +00:00
Evan Cheng
8937c047b3 Missed a break so memcpy cases fell through to memset. Doh.
llvm-svn: 26176
2006-02-14 19:45:56 +00:00
Evan Cheng
57eebe8ab4 Fixed a build breakage.
llvm-svn: 26175
2006-02-14 09:11:59 +00:00
Evan Cheng
f6c74c0096 Rename maxStoresPerMemSet to maxStoresPerMemset, etc.
llvm-svn: 26174
2006-02-14 08:38:30 +00:00
Evan Cheng
0bfe83eb5b Expand memset dst, c, size to a series of stores if size falls below the
target-specific threshold, e.g. 16 for x86.

llvm-svn: 26171
2006-02-14 08:22:34 +00:00
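
Conceptually (hand-written C++, not the actual lowering code), a memset whose
constant size is below the per-target threshold (maxStoresPerMemset worth of
stores, e.g. 16 bytes on x86) becomes a handful of wide stores instead of a
libcall:

    #include <cstdint>
    #include <cstring>

    void clear_call(char *p) {
      std::memset(p, 0, 16);
    }

    void clear_expanded(char *p) {
      // Equivalent expansion on a 32-bit target: four 32-bit stores of the
      // splatted fill value (alignment and aliasing details are ignored for
      // the sake of the sketch).
      std::uint32_t *w = reinterpret_cast<std::uint32_t *>(p);
      w[0] = 0; w[1] = 0; w[2] = 0; w[3] = 0;
    }
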
Chris Lattner
47f4f2c148 now that libcalls don't suck, we can remove this hack
llvm-svn: 26164
2006-02-14 05:39:35 +00:00
Jim Laskey
fac853338f Rename to better reflect usage (current and planned).
llvm-svn: 26145
2006-02-13 12:50:39 +00:00
Jim Laskey
55467f611d Reorg for integration with gcc4. Old-style debug info will not be passed through
to SelDAG.

llvm-svn: 26115
2006-02-11 01:01:30 +00:00
Evan Cheng
062ac6e46b Get rid of some memory leaks identified by Valgrind
llvm-svn: 25960
2006-02-04 06:49:00 +00:00
Chris Lattner
013f5fc2fa Add initial support for immediates. This allows us to compile this:
int %rlwnm(int %A, int %B) {
  %C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
  ret int %C
}

into:

_rlwnm:
        or r2, r3, r3
        or r3, r4, r4
        rlwnm r2, r2, r3, 4, 17    ;; note the immediates :)
        or r3, r2, r2
        blr

llvm-svn: 25955
2006-02-04 02:26:14 +00:00
Chris Lattner
13f609dfe2 Initial early support for non-register operands, like immediates
llvm-svn: 25952
2006-02-04 02:16:44 +00:00
Chris Lattner
47b11a250c remove some #ifdef'd out code, which should properly be in the dag combiner anyway.
llvm-svn: 25941
2006-02-03 20:13:59 +00:00
Chris Lattner
e35694c0bf Implement matching constraints. We can now say things like this:
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

xyz r2, r3, r4, r2

note that the r2's are pinned together.  Yaay for 2-address instructions.

llvm-svn: 25893
2006-02-02 00:25:23 +00:00
Chris Lattner
9665c2d76f Implement simple register assignment for inline asms. This allows us to compile:
int %test(int %A, int %B) {
  %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
  ret int %C
}

into:

 (0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2      ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
2006-02-01 18:59:47 +00:00
Chris Lattner
5549a10792 adjust to changes in InlineAsm interface. Fix a few minor bugs.
llvm-svn: 25865
2006-02-01 01:28:23 +00:00
Chris Lattner
e113238f5c Handle physreg input/outputs. We now compile this:
int %test_cpuid(int %op) {
        %B = alloca int
        %C = alloca int
        %D = alloca int
        %A = call int asm "cpuid", "=eax,==ebx,==ecx,==edx,eax"(int* %B, int* %C, int* %D, int %op)
        %Bv = load int* %B
        %Cv = load int* %C
        %Dv = load int* %D
        %x = add int %A, %Bv
        %y = add int %x, %Cv
        %z = add int %y, %Dv
        ret int %z
}

to this:

_test_cpuid:
        sub %ESP, 16
        mov DWORD PTR [%ESP], %EBX
        mov %EAX, DWORD PTR [%ESP + 20]
        cpuid
        mov DWORD PTR [%ESP + 8], %ECX
        mov DWORD PTR [%ESP + 12], %EBX
        mov DWORD PTR [%ESP + 4], %EDX
        mov %ECX, DWORD PTR [%ESP + 12]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 8]
        add %EAX, %ECX
        mov %ECX, DWORD PTR [%ESP + 4]
        add %EAX, %ECX
        mov %EBX, DWORD PTR [%ESP]
        add %ESP, 16
        ret

... note the proper register allocation.  :)

it is unclear to me why the loads aren't folded into the adds.

llvm-svn: 25827
2006-01-31 02:03:41 +00:00