Commit Graph

11379 Commits

Chris Lattner
ded6e64b53 Fix encoding of existing shift instructions, add rr shifts
llvm-svn: 12739
2004-04-07 04:26:57 +00:00
Chris Lattner
c585ee2bdd New testcase
llvm-svn: 12738
2004-04-07 04:08:21 +00:00
Chris Lattner
13546cb380 Add a bunch more instructions
llvm-svn: 12737
2004-04-07 04:06:46 +00:00
Chris Lattner
a58da750eb Merge my changes with Brian's
llvm-svn: 12736
2004-04-07 04:05:49 +00:00
Brian Gaeke
ece16e53c4 Add in some things I forgot, which Chris helpfully reminded me of...
llvm-svn: 12735
2004-04-07 04:05:12 +00:00
Brian Gaeke
4b90f62e6d Add support for the "Y" register, used by MUL & DIV.
llvm-svn: 12734
2004-04-07 04:01:11 +00:00
Brian Gaeke
8651efab54 Add UDIV, SDIV, and a few variants of WR.
llvm-svn: 12733
2004-04-07 04:01:00 +00:00
Brian Gaeke
0d35bd3ca9 Preliminary support for getting 64-bit integer constants into registers.
Preliminary support for division. It's gross because you have to initialize
the "Y" register, which is the top 32 bits of the thing you're dividing.

llvm-svn: 12732
2004-04-07 04:00:49 +00:00
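[[[Editor's note: to make the "Y" register requirement concrete, here is a
minimal SPARC V8 sketch of an unsigned 32-bit divide (illustrative only, not
from the commit; the V8 manual requires three instructions between a write to
%y and an instruction that depends on it):

        wr      %g0, %g0, %y    ! zero %y: the high 32 bits of the dividend
        nop                     ! three-instruction delay after wr %y
        nop
        nop
        udiv    %o0, %o1, %o0   ! %o0 = (%y:%o0) / %o1

end note]]]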
Brian Gaeke
2aa3485241 Prune unnecessary #includes
llvm-svn: 12731
2004-04-06 23:25:07 +00:00
Brian Gaeke
c59ef116a2 Simple delay slot filler pass.
llvm-svn: 12730
2004-04-06 23:21:45 +00:00
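[[[Editor's note: a delay slot is the instruction position immediately after
a SPARC branch or call; it executes whether or not the branch is taken, so a
filler pass must place something harmless there, at minimum a nop. A
hypothetical example:

        cmp     %o0, %o1
        ble     .LBB1           ! the branch is decided here...
        nop                     ! ...but the delay slot executes regardless

end note]]]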
Brian Gaeke
38ad8d1aea Add references to delay slot filler pass.
Fill in addPassesToJITCompile method.

llvm-svn: 12729
2004-04-06 23:21:24 +00:00
Brian Gaeke
0ee6eb1c1a First attempt at handling frame index elimination.
llvm-svn: 12728
2004-04-06 22:10:22 +00:00
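[[[Editor's note: frame index elimination rewrites the abstract stack-slot
references produced during instruction selection into concrete frame-pointer
relative addresses. A hypothetical before/after (the <fi#0> notation and the
offset are illustrative, not from the commit):

        st      %l0, [<fi#0>]          ! abstract frame index
        st      %l0, [%fp - 8]         ! after elimination: a real %fp offset

end note]]]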
Brian Gaeke
b7f86edbf3 First attempt at special-casing printing of [%reg + offset] for
ld/st instructions - doesn't seem to work yet, but I think it's
just a typo or something somewhere.

llvm-svn: 12727
2004-04-06 22:10:11 +00:00
Brian Gaeke
74d26802a4 Delete reference to "the Mach-O Runtime ABI".
llvm-svn: 12726
2004-04-06 22:09:59 +00:00
Brian Gaeke
da22005285 Deal with call return values.
Don't put NOPs in delay slots at all. We'll have a fix-up pass later.

llvm-svn: 12725
2004-04-06 22:09:23 +00:00
John Criswell
cb52af2958 Adding kimwitu++ license.
llvm-svn: 12719
2004-04-06 20:23:45 +00:00
Chris Lattner
39bdc2681e Bugs fixed, new features implemented
llvm-svn: 12716
2004-04-06 19:48:42 +00:00
Jakub Staszak
fc0d9bb7e9 File based on InstSelectSimple.cpp; it is slowly being replaced by code generated from the really simple X86 instruction selector TableGen backend
llvm-svn: 12715
2004-04-06 19:35:17 +00:00
Jakub Staszak
06dc0add14 TableGen files for a really simple instruction selector
llvm-svn: 12714
2004-04-06 19:34:00 +00:00
Jakub Staszak
3c2d9c95e2 TableGen backend for a really simple instruction selector
llvm-svn: 12713
2004-04-06 19:31:31 +00:00
Jakub Staszak
6cb338c254 Add TableGen backend for a really simple instruction selector
llvm-svn: 12712
2004-04-06 19:30:56 +00:00
Chris Lattner
3808778190 Fix PR313: [x86] JIT miscompiles unsigned short to floating point
llvm-svn: 12711
2004-04-06 19:29:36 +00:00
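[[[Editor's note: the PR title suggests a testcase along these lines in the
2004-era LLVM syntax (a guess at the shape of the bug, not the actual
regression test):

double %test(ushort %X) {
        %Y = cast ushort %X to double
        ret double %Y
}

end note]]]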
Chris Lattner
993d6106c7 Fix incorrect encoding of some ADC and SBB instructions
llvm-svn: 12710
2004-04-06 19:20:32 +00:00
John Criswell
afaf687c16 Added licensing information for treecc.
llvm-svn: 12703
2004-04-06 17:51:10 +00:00
Chris Lattner
54e93df11a Fix a minor bug in the previous checkin
Enable folding of long seteq/setne comparisons into branches and select instructions
Implement unfolded long relational comparisons against constants a bit more efficiently

Folding comparisons changes code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

into code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        jne .LBB2 # PC rel: F

This speeds up 186.crafty by 6% with llc-ls.

llvm-svn: 12702
2004-04-06 17:34:50 +00:00
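[[[Editor's note: a testcase exercising this folding might look like the
following in the old LLVM syntax (illustrative; the source is not given in
the commit message):

void %test(long %X) {
        %c = seteq long %X, 0
        br bool %c, label %T, label %F
T:
        ret void
F:
        ret void
}

end note]]]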
Misha Brukman
e2d817bc26 Wrap at 80 cols.
llvm-svn: 12701
2004-04-06 17:04:30 +00:00
Chris Lattner
1a40aaf3ba Minor cleanups
llvm-svn: 12700
2004-04-06 16:54:04 +00:00
Chris Lattner
929187dff0 Document new option
llvm-svn: 12699
2004-04-06 16:46:12 +00:00
Chris Lattner
3259504cce Add a new gccld -native-cbe option which causes gccld to generate native code
for the application with the C backend instead of the native LLVM code generator

llvm-svn: 12698
2004-04-06 16:43:13 +00:00
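[[[Editor's note: a plausible invocation; the -native-cbe flag is from this
commit, but the file names and other arguments are assumptions:

        gccld -native-cbe -o myprog main.bc utils.bc

end note]]]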
Chris Lattner
2d9b28ac0b Improve codegen of long == and != comparisons against constants. Before,
comparing a long against zero got us this:

        sub %ESP, 8
        mov DWORD PTR [%ESP + 4], %ESI
        mov DWORD PTR [%ESP], %EDI
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %EDX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov %ESI, 0
        mov %EDI, %EAX
        xor %EDI, %ECX
        mov %ECX, %EDX
        xor %ECX, %ESI
        or %EDI, %ECX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

Now it gets us this:

        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

llvm-svn: 12696
2004-04-06 16:02:27 +00:00
Chris Lattner
1aa66730ad Update docs a bit
llvm-svn: 12695
2004-04-06 15:22:35 +00:00
Chris Lattner
ef435a5e51 Remove some options that don't really have anything to do with bugpoint
llvm-svn: 12694
2004-04-06 15:14:10 +00:00
Chris Lattner
fd7b570dff Handle various other important cases of multiplying by a long constant immediate. For
example, multiplying X*(1 + (1LL << 32)) now produces:

test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %ECX
        add %EDX, %ECX
        ret

[[[Note to Alkis: why isn't linear scan generating this code??  This might be a
 problem with your intervals being too conservative:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        add %EDX, %EAX
        ret

end note]]]

Whereas GCC produces this:

T:
        sub     %esp, 12
        mov     %edx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+8], %edi
        mov     %ecx, DWORD PTR [%esp+20]
        xor     %edi, %edi
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, %edi
        mov     %eax, %edx
        mov     DWORD PTR [%esp+4], %esi
        add     %ebx, %edx
        mov     %edi, DWORD PTR [%esp+8]
        lea     %edx, [%ecx+%ebx]
        mov     %esi, DWORD PTR [%esp+4]
        mov     %ebx, DWORD PTR [%esp]
        add     %esp, 12
        ret

I'm not sure exactly what GCC is smoking here, but it looks like it has just
confused itself with a bunch of stack slots or something.  The Intel compiler
is better, but still not good:

T:
        movl      4(%esp), %edx                                 #2.11
        movl      8(%esp), %eax                                 #2.11
        lea       (%eax,%edx), %ecx                             #3.12
        movl      $1, %eax                                      #3.12
        mull      %edx                                          #3.12
        addl      %ecx, %edx                                    #3.12
        ret                                                     #3.12

llvm-svn: 12693
2004-04-06 04:55:43 +00:00
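[[[Editor's note: 1 + (1LL << 32) is 4294967297, so the testcase was
presumably something like this in the old LLVM syntax (a reconstruction, not
from the commit):

long %test(long %X) {
        %Y = mul long %X, 4294967297
        ret long %Y
}

end note]]]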
Chris Lattner
6038e5a4a1 Efficiently handle a long multiplication by a constant. For this testcase:
long %test(long %X) {
        %Y = mul long %X, 123
        ret long %Y
}

we used to generate:

test:
        sub %ESP, 12
        mov DWORD PTR [%ESP + 8], %ESI
        mov DWORD PTR [%ESP + 4], %EDI
        mov DWORD PTR [%ESP], %EBX
        mov %ECX, DWORD PTR [%ESP + 16]
        mov %ESI, DWORD PTR [%ESP + 20]
        mov %EDI, 123
        mov %EBX, 0
        mov %EAX, %ECX
        mul %EDI
        imul %ESI, %EDI
        add %ESI, %EDX
        imul %ECX, %EBX
        add %ESI, %ECX
        mov %EDX, %ESI
        mov %EBX, DWORD PTR [%ESP]
        mov %EDI, DWORD PTR [%ESP + 4]
        mov %ESI, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Now we emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, 123
        mul %EDX
        imul %ECX, %ECX, 123
        add %ECX, %EDX
        mov %EDX, %ECX
        ret

Which, incidentally, is substantially nicer than what GCC manages:
T:
        sub     %esp, 8
        mov     %eax, 123
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+4], %esi
        mov     %esi, DWORD PTR [%esp+12]
        imul    %ecx, %ebx, 123
        mov     %ebx, DWORD PTR [%esp]
        mul     %esi
        mov     %esi, DWORD PTR [%esp+4]
        add     %esp, 8
        lea     %edx, [%ecx+%edx]
        ret

llvm-svn: 12692
2004-04-06 04:29:36 +00:00
Misha Brukman
cc7ea3a1f7 * Added link to newly written ExtendingLLVM.html document
* Eliminated extraneous space in the HTML

llvm-svn: 12691
2004-04-06 04:22:43 +00:00
Misha Brukman
406bb7ec5d Incorporated Chris' comments.
llvm-svn: 12690
2004-04-06 04:17:51 +00:00
Misha Brukman
c67693e356 Added notes on extending LLVM with new instructions, intrinsics, types, etc.
llvm-svn: 12689
2004-04-06 03:53:49 +00:00
Chris Lattner
dd0d31ca2a Improve code generation of long shifts by 32.
On this testcase:

long %test(long %X) {
        %Y = shr long %X, ubyte 32
        ret long %Y
}

instead of:
t:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        sar %EAX, 0
        mov %EDX, 0
        ret


we now emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        mov %EDX, 0
        ret

llvm-svn: 12688
2004-04-06 03:42:38 +00:00
Chris Lattner
7eb61104dc Bugfixes: inc/dec don't set the carry flag!
llvm-svn: 12687
2004-04-06 03:36:57 +00:00
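[[[Editor's note: this matters for multi-word arithmetic. On x86, inc and dec
update the other arithmetic flags but leave CF untouched, so they must not
feed a following adc/sbb. A sketch in the Intel syntax used elsewhere in this
log:

        add %EAX, 1     # sets CF on carry out of the low word
        adc %EDX, 0     # correct: consumes that carry

        inc %EAX        # leaves CF unchanged...
        adc %EDX, 0     # ...so this adc would see a stale carry

end note]]]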
Chris Lattner
8cdbb1fe84 Improve code for passing constant longs as arguments to function calls.
For example, on this instruction:

        call void %test(long 1234)

Instead of this:
        mov %EAX, 1234
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        call test

We now emit this:
        mov DWORD PTR [%ESP], 1234
        mov DWORD PTR [%ESP + 4], 0
        call test

llvm-svn: 12686
2004-04-06 03:23:00 +00:00
Chris Lattner
2738d6d4a4 Emit more efficient 64-bit operations when the RHS is a constant, and one
of the words of the constant is zeros.  For example:
  Y = and long X, 1234

now generates:
  Yl = and Xl, 1234
  Yh = 0

instead of:
  Yl = and Xl, 1234
  Yh = and Xh, 0

llvm-svn: 12685
2004-04-06 03:15:53 +00:00
Chris Lattner
bdbedf9523 Fix typo
llvm-svn: 12684
2004-04-06 02:13:25 +00:00
Chris Lattner
606639ed1a Add support for simple immediate handling to long instruction selection.
This allows us to handle code like 'add long %X, 123456789012' more efficiently.

llvm-svn: 12683
2004-04-06 02:11:49 +00:00
Chris Lattner
e84f12a165 The sbb instructions really ARE sbb's, not adc's
llvm-svn: 12682
2004-04-06 02:02:11 +00:00
Chris Lattner
0808f5daa5 Implement negation of longs efficiently. For this testcase:
long %test(long %X) {
        %Y = sub long 0, %X
        ret long %Y
}

We used to generate:

test:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %ESI, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %EDX, 0
        sub %EAX, %ECX
        sbb %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

Now we generate:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        neg %EAX
        adc %EDX, 0
        neg %EDX
        ret

llvm-svn: 12681
2004-04-06 01:48:06 +00:00
Chris Lattner
56dcdcf638 Minor tweak to avoid an extra reg-reg copy that the register allocator has to eliminate
llvm-svn: 12680
2004-04-06 01:25:33 +00:00
Chris Lattner
42cf317fca Two changes:
* In promote32, if we can just promote a constant value, do so instead of
    promoting a constant dynamically.
  * In visitReturnInst, actually USE the promote32 argument that takes a
    Value*

The end result of this is that we now generate this:

test:
        mov %EAX, 0
        ret

instead of...

test:
        mov %AX, 0
        movzx %EAX, %AX
        ret

for:

ushort %test() {
        ret ushort 0
}

llvm-svn: 12679
2004-04-06 01:21:00 +00:00
Chris Lattner
d2c51f2cc3 Merge the code generator miscompilation code into the optimizer miscompilation
code.  This "instantly" gives us loop-extractor power to assist with the
debugging of our nasty codegen issues.  :)

llvm-svn: 12678
2004-04-05 22:58:16 +00:00
Chris Lattner
0442a87f47 Make a method public
llvm-svn: 12677
2004-04-05 22:01:48 +00:00
Chris Lattner
f856e42795 Minor cleanups, remove some old debug code
llvm-svn: 12676
2004-04-05 21:37:55 +00:00