Commit Graph

684 Commits

Jakub Staszak
fc0d9bb7e9 File based off InstSelectSimple.cpp, slowly being replaced by generated code from the really simple X86 instruction selector tablegen backend
llvm-svn: 12715
2004-04-06 19:35:17 +00:00
Jakub Staszak
06dc0add14 Tablegen files for the really simple instruction selector
llvm-svn: 12714
2004-04-06 19:34:00 +00:00
Chris Lattner
3808778190 Fix PR313: [x86] JIT miscompiles unsigned short to floating point
llvm-svn: 12711
2004-04-06 19:29:36 +00:00
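
A minimal C testcase in the spirit of this fix (the function name is illustrative, not from the patch): the unsigned short must be zero-extended, not sign-extended, before the integer-to-FP conversion.

/* With a sign-extension bug, conv(65535) comes out as -1.0f
   instead of 65535.0f. */
float conv(unsigned short x) {
        return (float)x;
}
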
Chris Lattner
993d6106c7 Fix incorrect encoding of some ADC and SBB instructions
llvm-svn: 12710
2004-04-06 19:20:32 +00:00
Chris Lattner
54e93df11a Fix a minor bug in previous checkin
Enable folding of long seteq/setne comparisons into branches and select instructions
Implement unfolded long relational comparisons against a constant a bit more efficiently

Folding comparisons changes code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

into code that looks like this:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        jne .LBB2 # PC rel: F

This speeds up 186.crafty by 6% with llc-ls.

llvm-svn: 12702
2004-04-06 17:34:50 +00:00
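
A C source shape that produces the folded sequence above might look like this (a sketch; the actual test inputs are not in the log):

extern void F(void);
void test(long long x) {
        if (x != 0)     /* the sete/test/je sequence collapses into a single jne */
                F();
}
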
Chris Lattner
2d9b28ac0b Improve codegen of long == and != comparisons against constants. Before,
comparing a long against zero got us this:

        sub %ESP, 8
        mov DWORD PTR [%ESP + 4], %ESI
        mov DWORD PTR [%ESP], %EDI
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %EDX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov %ESI, 0
        mov %EDI, %EAX
        xor %EDI, %ECX
        mov %ECX, %EDX
        xor %ECX, %ESI
        or %EDI, %ECX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

Now it gets us this:

        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, %EAX
        or %ECX, %EDX
        sete %CL
        test %CL, %CL
        je .LBB2 # PC rel: F

llvm-svn: 12696
2004-04-06 16:02:27 +00:00
Chris Lattner
fd7b570dff Handle various other important cases of multiplying a long by a constant immediate. For
example, multiplying X*(1 + (1LL << 32)) now produces:

test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %ECX
        add %EDX, %ECX
        ret

[[[Note to Alkis: why isn't linear scan generating this code??  This might be a
 problem with your intervals being too conservative:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        add %EDX, %EAX
        ret

end note]]]

Whereas GCC produces this:

T:
        sub     %esp, 12
        mov     %edx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+8], %edi
        mov     %ecx, DWORD PTR [%esp+20]
        xor     %edi, %edi
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, %edi
        mov     %eax, %edx
        mov     DWORD PTR [%esp+4], %esi
        add     %ebx, %edx
        mov     %edi, DWORD PTR [%esp+8]
        lea     %edx, [%ecx+%ebx]
        mov     %esi, DWORD PTR [%esp+4]
        mov     %ebx, DWORD PTR [%esp]
        add     %esp, 12
        ret

I'm not sure exactly what GCC is smoking here, but it looks like it has just
confused itself with a bunch of stack slots or something.  The Intel compiler
is better, but still not good:

T:
        movl      4(%esp), %edx                                 #2.11
        movl      8(%esp), %eax                                 #2.11
        lea       (%eax,%edx), %ecx                             #3.12
        movl      $1, %eax                                      #3.12
        mull      %edx                                          #3.12
        addl      %ecx, %edx                                    #3.12
        ret                                                     #3.12

llvm-svn: 12693
2004-04-06 04:55:43 +00:00
Chris Lattner
6038e5a4a1 Efficiently handle a long multiplication by a constant. For this testcase:
long %test(long %X) {
        %Y = mul long %X, 123
        ret long %Y
}

we used to generate:

test:
        sub %ESP, 12
        mov DWORD PTR [%ESP + 8], %ESI
        mov DWORD PTR [%ESP + 4], %EDI
        mov DWORD PTR [%ESP], %EBX
        mov %ECX, DWORD PTR [%ESP + 16]
        mov %ESI, DWORD PTR [%ESP + 20]
        mov %EDI, 123
        mov %EBX, 0
        mov %EAX, %ECX
        mul %EDI
        imul %ESI, %EDI
        add %ESI, %EDX
        imul %ECX, %EBX
        add %ESI, %ECX
        mov %EDX, %ESI
        mov %EBX, DWORD PTR [%ESP]
        mov %EDI, DWORD PTR [%ESP + 4]
        mov %ESI, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Now we emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, 123
        mul %EDX
        imul %ECX, %ECX, 123
        add %ECX, %EDX
        mov %EDX, %ECX
        ret

Which, incidentally, is substantially nicer than what GCC manages:
T:
        sub     %esp, 8
        mov     %eax, 123
        mov     DWORD PTR [%esp], %ebx
        mov     %ebx, DWORD PTR [%esp+16]
        mov     DWORD PTR [%esp+4], %esi
        mov     %esi, DWORD PTR [%esp+12]
        imul    %ecx, %ebx, 123
        mov     %ebx, DWORD PTR [%esp]
        mul     %esi
        mov     %esi, DWORD PTR [%esp+4]
        add     %esp, 8
        lea     %edx, [%ecx+%edx]
        ret

llvm-svn: 12692
2004-04-06 04:29:36 +00:00
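
A sketch of the decomposition behind the new sequence, modeled in C (a software model for clarity, not the selector's actual code):

/* X*C with X 64-bit and C a 32-bit constant:
     X*C = (Xhi*2^32 + Xlo)*C = (Xhi*C)*2^32 + Xlo*C
   so the low word is the low half of Xlo*C ("mul"), and the high word
   is the high half of Xlo*C plus Xhi*C ("imul" + "add").            */
unsigned long long mul_by_123(unsigned long long x) {
        const unsigned int C = 123;
        unsigned int xlo = (unsigned int)x;
        unsigned int xhi = (unsigned int)(x >> 32);
        unsigned long long lo = (unsigned long long)xlo * C;   /* mul        */
        unsigned int hi = (unsigned int)(lo >> 32) + xhi * C;  /* add + imul */
        return ((unsigned long long)hi << 32) | (unsigned int)lo;
}
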
Chris Lattner
dd0d31ca2a Improve code generation of long shifts by 32.
On this testcase:

long %test(long %X) {
        %Y = shr long %X, ubyte 32
        ret long %Y
}

instead of:
t:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        sar %EAX, 0
        mov %EDX, 0
        ret


we now emit:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        mov %EDX, 0
        ret

llvm-svn: 12688
2004-04-06 03:42:38 +00:00
Chris Lattner
7eb61104dc Bugfixes: inc/dec don't set the carry flag!
llvm-svn: 12687
2004-04-06 03:36:57 +00:00
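
Why this matters for the long-arithmetic sequences above, modeled in C (an illustrative sketch): adc consumes the carry flag set by the preceding instruction, and inc/dec leave CF untouched, so an inc can silently break an add/adc pair.

/* Incrementing a 64-bit value via two 32-bit halves: the carry out of
   the low half must reach the high half, so codegen has to use
   "add %EAX, 1" (which sets CF) rather than "inc %EAX" (which does
   not) ahead of the "adc".                                          */
unsigned long long inc64(unsigned long long x) {
        unsigned int lo = (unsigned int)x;
        unsigned int hi = (unsigned int)(x >> 32);
        unsigned int nlo = lo + 1;
        hi += (nlo == 0);       /* software model of "adc %EDX, 0" */
        return ((unsigned long long)hi << 32) | nlo;
}
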
Chris Lattner
8cdbb1fe84 Improve code for passing constant longs as arguments to function calls.
For example, on this instruction:

        call void %test(long 1234)

Instead of this:
        mov %EAX, 1234
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        call test

We now emit this:
        mov DWORD PTR [%ESP], 1234
        mov DWORD PTR [%ESP + 4], 0
        call test

llvm-svn: 12686
2004-04-06 03:23:00 +00:00
Chris Lattner
2738d6d4a4 Emit more efficient 64-bit operations when the RHS is a constant, and one
of the words of the constant is zero.  For example:
  Y = and long X, 1234

now generates:
  Yl = and Xl, 1234
  Yh = 0

instead of:
  Yl = and Xl, 1234
  Yh = and Xh, 0

llvm-svn: 12685
2004-04-06 03:15:53 +00:00
Chris Lattner
bdbedf9523 Fix typo
llvm-svn: 12684
2004-04-06 02:13:25 +00:00
Chris Lattner
606639ed1a Add support for simple immediate handling to long instruction selection.
This allows us to handle code like 'add long %X, 123456789012' more efficiently.

llvm-svn: 12683
2004-04-06 02:11:49 +00:00
Chris Lattner
e84f12a165 The sbb instructions really ARE sbb's, not adc's
llvm-svn: 12682
2004-04-06 02:02:11 +00:00
Chris Lattner
0808f5daa5 Implement negation of longs efficiently. For this testcase:
long %test(long %X) {
        %Y = sub long 0, %X
        ret long %Y
}

We used to generate:

test:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %ESI, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %EDX, 0
        sub %EAX, %ECX
        sbb %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

Now we generate:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        neg %EAX
        adc %EDX, 0
        neg %EDX
        ret

llvm-svn: 12681
2004-04-06 01:48:06 +00:00
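
Why the neg/adc/neg sequence is a correct 64-bit negation, modeled in C (a sketch of the flag behavior, not compiler code): neg sets CF when its operand was nonzero, so the high word ends up as -(hi + (lo != 0)), which is exactly the high word of -X.

unsigned long long neg64(unsigned long long x) {
        unsigned int lo = (unsigned int)x;
        unsigned int hi = (unsigned int)(x >> 32);
        unsigned int nlo = 0u - lo;     /* neg %EAX; CF = (lo != 0) */
        hi += (lo != 0);                /* adc %EDX, 0              */
        unsigned int nhi = 0u - hi;     /* neg %EDX                 */
        return ((unsigned long long)nhi << 32) | nlo;
}
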
Chris Lattner
56dcdcf638 Minor tweak to avoid an extra reg-reg copy that the register allocator has to eliminate
llvm-svn: 12680
2004-04-06 01:25:33 +00:00
Chris Lattner
42cf317fca Two changes:
* In promote32, if we can just promote a constant value, do so instead of
    promoting a constant dynamically.
  * In visitReturn inst, actually USE the promote32 argument that takes a
    Value*

The end result of this is that we now generate this:

test:
        mov %EAX, 0
        ret

instead of...

test:
        mov %AX, 0
        movzx %EAX, %AX
        ret

for:

ushort %test() {
        ret ushort 0
}

llvm-svn: 12679
2004-04-06 01:21:00 +00:00
Chris Lattner
9236135e8f Support getelementptr instructions which use uint's to index into structure
types and can have arbitrary 32- and 64-bit integer types indexing into
sequential types.

llvm-svn: 12653
2004-04-05 01:30:19 +00:00
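
At the source level this covers shapes like the following (an illustrative example, not from the patch), where a 32-bit unsigned value indexes a sequential type:

int load_elem(int *p, unsigned int i) {
        return p[i];    /* getelementptr with a uint index */
}
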
Alkis Evlogimenos
27ed33c309 Clean up code a bit.
llvm-svn: 12615
2004-04-02 18:11:32 +00:00
Alkis Evlogimenos
85e007a6dc Fix typo in comments
llvm-svn: 12611
2004-04-02 16:02:50 +00:00
Alkis Evlogimenos
84ee10f9e1 Fix typo in instruction builder instantiation
llvm-svn: 12610
2004-04-02 15:51:03 +00:00
Alkis Evlogimenos
20b074682c Add more ADC and SBB variants
llvm-svn: 12607
2004-04-02 07:11:10 +00:00
Chris Lattner
b6e4e5a95e Simplify code by using the more powerful BuildMI forms.
Implement a small optimization.  In test/Regression/CodeGen/X86/select.ll,
we now generate this for foldSel3:

foldSel3:
        mov %AL, BYTE PTR [%ESP + 4]
        fld DWORD PTR [%ESP + 8]
        fld DWORD PTR [%ESP + 12]
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, DWORD PTR [%ESP + 20]
        cmp %EAX, %ECX
        fxch %ST(1)
        fcmovae %ST(0), %ST(1)
***     fstp %ST(1)
        ret

Instead of:

foldSel3:
        mov %AL, BYTE PTR [%ESP + 4]
        fld DWORD PTR [%ESP + 8]
        fld DWORD PTR [%ESP + 12]
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, DWORD PTR [%ESP + 20]
        cmp %EAX, %ECX
        fxch %ST(1)
        fcmovae %ST(0), %ST(1)
***     fxch %ST(1)
***     fstp %ST(0)
        ret

In practice, this only affects code size: performance should be basically
unaffected.

llvm-svn: 12588
2004-04-01 04:06:09 +00:00
Chris Lattner
78027ca4ff Wrap at 80 cols
llvm-svn: 12587
2004-04-01 04:03:27 +00:00
Chris Lattner
2e0755a058 Generate slightly smaller code, "test R, R" instead of "cmp R, 0"
llvm-svn: 12579
2004-03-31 22:22:36 +00:00
Chris Lattner
97e8b80649 The X86 backend no longer needs the select lowering pass.
llvm-svn: 12578
2004-03-31 22:03:46 +00:00
Chris Lattner
e5d60adc20 Codegen FP select instructions into X86 conditional moves. Annoyingly enough
the X86 does not support a full set of fp cmove instructions, so we can't always
fold the condition into the select.  :(  Yuck.

llvm-svn: 12577
2004-03-31 22:03:35 +00:00
Chris Lattner
d50df93168 Add support for floating point conditional move instructions
llvm-svn: 12576
2004-03-31 22:02:36 +00:00
Chris Lattner
4d543b4201 Add support for FP cmoves
llvm-svn: 12575
2004-03-31 22:02:21 +00:00
Chris Lattner
e4fa3010db Add FP conditional move instructions, which annoyingly have special properties
that require the asmwriter to be extended (printing implicit uses before the
explicit operands)

llvm-svn: 12574
2004-03-31 22:02:13 +00:00
Chris Lattner
f477746a61 Fold comparisons into select instructions, making much better code and
using our broad selection of movcc instructions.  :)

llvm-svn: 12560
2004-03-30 22:39:09 +00:00
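
A source shape that benefits (illustrative): a comparison feeding a select can now become a cmp plus a conditional move instead of materializing a boolean and branching.

int max(int a, int b) {
        return a > b ? a : b;   /* cmp + cmov rather than setcc/branch */
}
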
Chris Lattner
6c1dd729d3 Implement spill code folding for all of the conditional move instructions
llvm-svn: 12554
2004-03-30 21:29:47 +00:00
Chris Lattner
ff016bd6fe Add direct support for integer select instructions, though we don't yet
support folding compares into the select.

llvm-svn: 12553
2004-03-30 21:22:00 +00:00
Chris Lattner
57968a98df Fix some serious bugs in the cmov descriptions, which didn't cause a problem because
we never generated them

Make indentation a bit more consistent

llvm-svn: 12549
2004-03-30 20:18:02 +00:00
Chris Lattner
95942c021a Fix a fairly major performance problem. If a PHI node had a constant as
an incoming value from a block, the selector would evaluate the constant
at the TOP of the block instead of at the end of the block.  This made the
live range for the constant span the entire block, increasing register
pressure needlessly.

llvm-svn: 12542
2004-03-30 19:10:12 +00:00
Chris Lattner
87479998f2 Add the select lowering pass to get initial support for select instructions
llvm-svn: 12541
2004-03-30 18:41:59 +00:00
Chris Lattner
b8f179cb9b Malloc doesn't kill a load. This patch need not go into 1.2 though.
llvm-svn: 12500
2004-03-18 17:01:26 +00:00
Chris Lattner
ef7c1e9f7f Fix a really nasty bug that was breaking ijpeg in LLC mode. We were incorrectly
folding load instructions into other instructions across free instruction
boundaries.  Perhaps this will also fix the other strange failures?

llvm-svn: 12494
2004-03-18 06:29:54 +00:00
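
The hazard, sketched at the source level (names illustrative): a load folded into a later instruction must not cross a call that frees the memory it reads.

#include <stdlib.h>

int f(int *p, int x) {
        int v = *p;     /* the load belongs here...                    */
        free(p);
        return v + x;   /* ...and must not be folded into an
                           "add %EAX, DWORD PTR [...]" after the free */
}
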
Alkis Evlogimenos
6ac147a7fb Add LAHF instruction
llvm-svn: 12424
2004-03-15 17:20:14 +00:00
Alkis Evlogimenos
2b94b048a9 Another API change to MRegisterInfo::foldMemoryOperand. Instead of a
MachineBasicBlock::iterator, take a MachineInstr*.

llvm-svn: 12392
2004-03-14 20:14:27 +00:00
Alkis Evlogimenos
ff9482b664 Change MRegisterInfo::foldMemoryOperand to return the folded
instruction to make the API more flexible.

llvm-svn: 12386
2004-03-14 07:19:51 +00:00
Chris Lattner
b45245327e It helps if I save the file. :)
llvm-svn: 12357
2004-03-13 00:24:52 +00:00
Chris Lattner
f7bc6fd913 Rename the intrinsic enum values for llvm.va_* from Intrinsic::va_* to
Intrinsic::va*.  This avoids conflicting with macros in the stdlib.h file.

llvm-svn: 12356
2004-03-13 00:24:00 +00:00
Alkis Evlogimenos
da990ad8a4 Add support for a wider range of CMOV instructions.
llvm-svn: 12336
2004-03-12 17:59:56 +00:00
Misha Brukman
992e44e3c5 Fix compilation on Sparc: assert(0) => abort()
llvm-svn: 12289
2004-03-11 19:08:24 +00:00
Alkis Evlogimenos
a13672fd71 Check if printing of implicit uses is required for all types of shift
instructions.

llvm-svn: 12258
2004-03-09 06:10:15 +00:00
Alkis Evlogimenos
7c0224327e Differentiate between extended precision floats (80-bit) and double precision floats (64-bit)
llvm-svn: 12254
2004-03-09 03:37:54 +00:00
Alkis Evlogimenos
f86d2df13d Use newly added API to emit bytes for instructions that gas misassembles
llvm-svn: 12253
2004-03-09 03:35:34 +00:00
Alkis Evlogimenos
085957be0b Add emitInstruction() API so that we can get the bytes of a simple instruction
llvm-svn: 12252
2004-03-09 03:34:53 +00:00
Alkis Evlogimenos
813daf05c3 Constify things a bit
llvm-svn: 12251
2004-03-09 03:30:12 +00:00
Chris Lattner
a55628694a Implement folding explicit load instructions into binary operations. For a
testcase like this:

int %test(int* %P, int %A) {
        %Pv = load int* %P
        %B = add int %A, %Pv
        ret int %B
}

We now generate:
test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        add %EAX, DWORD PTR [%ECX]
        ret

Instead of:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%EAX]
        add %EAX, %ECX
        ret

... saving one instruction, and often a register.  Note that there are a lot
of other instructions that could use this, but they aren't handled.  I'm not
really interested in adding them, but mul/div and all of the FP instructions
could be supported as well if someone wanted to add them.

llvm-svn: 12204
2004-03-08 01:58:35 +00:00
Chris Lattner
9a9b1c4822 Rearrange and refactor some code. No functionality changes.
llvm-svn: 12203
2004-03-08 01:18:36 +00:00
Alkis Evlogimenos
65649a50e9 Add memory operand version of conditional move.
llvm-svn: 12190
2004-03-07 03:19:11 +00:00
Brian Gaeke
0b913593ae make -print-machineinstrs work for both SparcV9 and X86
llvm-svn: 12122
2004-03-04 19:16:23 +00:00
Alkis Evlogimenos
e8ebdcc780 Add assertion for scale verification.
llvm-svn: 12120
2004-03-04 18:05:02 +00:00
Misha Brukman
491ff34abf Doxygenify some comments.
llvm-svn: 12064
2004-03-01 23:53:11 +00:00
Brian Gaeke
b78f8498f0 TargetCacheInfo has been removed; its only uses were to propagate a constant
(16) into certain areas of the SPARC V9 back-end. I'm fairly sure the US IIIi's
dcache has 32-byte lines, so I'm not sure where the 16 came from. However, in
the interest of not breaking things any more than they already are, I'm going
to leave the constant alone.

llvm-svn: 12043
2004-03-01 06:43:29 +00:00
Chris Lattner
8c1d67b55f Handle passing constant integers to functions much more efficiently. Instead
of generating this code:

        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        call Y

we now generate:
        mov DWORD PTR [%ESP], 4
        mov DWORD PTR [%ESP + 4], 123
        call Y

Which hurts the eyes less.  :)

Considering that register pressure around call sites is already high (with all
of the callee-clobbered registers and such), this may help a lot.

llvm-svn: 12028
2004-03-01 02:42:43 +00:00
Chris Lattner
c686a9ab37 Fix a minor code-quality issue. When passing 8 and 16-bit integer constants
to function calls, we would emit dead code, like this:

int Y(int, short, double);
int X() {
  Y(4, 123, 4);
}

--- Old
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
***     mov %AX, 123
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Now we emit:
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Next up, eliminate the mov AX and movsx entirely!

llvm-svn: 12026
2004-03-01 02:34:08 +00:00
Alkis Evlogimenos
e186d8eb2f Add instruction name description.
llvm-svn: 11998
2004-02-29 18:44:03 +00:00
Alkis Evlogimenos
8d8f872b3d Use correct template for SHLD and SHRD instructions so that the memory
operand size is correctly specified.

llvm-svn: 11997
2004-02-29 09:19:40 +00:00
Alkis Evlogimenos
10f4523e9a Improve allocation order:
1) For 8-bit registers try to use first the ones that are parts of the
   same register (AL then AH). This way we only alias 2 16/32-bit
   registers after allocating 4 8-bit variables.

2) Make EBX the last register to allocate. This will cause fewer
   spills to happen since we will have 8-bit registers available up to
   register exhaustion (assuming we use the allocation order). It
   would be nice if we could push all of the 8-bit aliased registers
   towards the end, but we much prefer to keep callee saved registers
   at the end to avoid saving them on entry and exit of the function.

For example, this gives a slight reduction in spills with linear scan
on 164.gzip.

Before:

11221 asm-printer           - Number of machine instrs printed
  975 spiller               - Number of loads added
  675 spiller               - Number of stores added
  398 spiller               - Number of register spills

After:

11182 asm-printer           - Number of machine instrs printed
  952 spiller               - Number of loads added
  652 spiller               - Number of stores added
  386 spiller               - Number of register spills

llvm-svn: 11996
2004-02-29 09:17:01 +00:00
Alkis Evlogimenos
7ecfe0a839 A big X86 instruction rename. The instructions are renamed to make
their names more descriptive. A name consists of the base name, a
default operand size, followed by a character per operand with an
optional special size. For example:

ADD8rr -> add, 8-bit register, 8-bit register

IMUL16rmi -> imul, 16-bit register, 16-bit memory, 16-bit immediate

IMUL16rmi8 -> imul, 16-bit register, 16-bit memory, 8-bit immediate

MOVSX32rm16 -> movsx, 32-bit register, 16-bit memory

llvm-svn: 11995
2004-02-29 08:50:03 +00:00
Chris Lattner
a7db4ff17a Eliminate the X86-specific BMI functions, using BuildMI instead.
Replace uses of addZImm with addImm.

llvm-svn: 11992
2004-02-29 07:22:16 +00:00
Chris Lattner
e8e0bafbba Fix a miscompilation of 197.parser that occurs when you have single basic
block loops.

llvm-svn: 11990
2004-02-29 07:10:16 +00:00
Chris Lattner
c2977ac665 Adjust to change in TII ctor arguments
llvm-svn: 11987
2004-02-29 06:31:44 +00:00
Chris Lattner
cfc8f02250 These two virtual methods are never called.
llvm-svn: 11984
2004-02-29 05:59:33 +00:00
Alkis Evlogimenos
0f96b44e0e Use correct template for ADC instruction with memory operands.
llvm-svn: 11974
2004-02-29 02:18:17 +00:00
Alkis Evlogimenos
6815402082 SHLD and SHRD take 32-bit operands but an 8-bit immediate. Rename them
to denote this fact.

llvm-svn: 11972
2004-02-28 23:46:44 +00:00
Alkis Evlogimenos
e8dac99a43 Floating point loads/stores act on memory operands. Rename them to
denote this fact.

llvm-svn: 11971
2004-02-28 23:42:35 +00:00
Alkis Evlogimenos
1d71a15be9 Rename instruction templates to be easier to the human eye to
parse. The name is now I (operand size)*. For example:

Im32 -> instruction with 32-bit memory operands.

Im16i8 -> instruction with 16-bit memory operands and 8 bit immediate
          operands.

llvm-svn: 11970
2004-02-28 23:09:03 +00:00
Alkis Evlogimenos
6038a89025 Uncomment instructions that take both an immediate and a memory
operand whose sizes differ.

llvm-svn: 11969
2004-02-28 22:06:59 +00:00
Alkis Evlogimenos
f208a0fd81 Each instruction now has both an ImmType and a MemType. This describes
the size of the immediate and the memory operand on instructions that
use them. This resolves problems with instructions that take both a
memory and an immediate operand whose sizes differ (e.g. ADDmi32b).

llvm-svn: 11967
2004-02-28 22:02:05 +00:00
Alkis Evlogimenos
977dbaadf7 Do not generate instructions with mismatched memory/immediate sized
operands. The X86 backend doesn't handle them properly right now.

llvm-svn: 11944
2004-02-28 06:01:43 +00:00
Alkis Evlogimenos
84f00e93f7 Further comment updates.
llvm-svn: 11933
2004-02-28 03:20:31 +00:00
Alkis Evlogimenos
edbe362160 Update comments.
llvm-svn: 11932
2004-02-28 03:12:31 +00:00
Alkis Evlogimenos
0f91ce52a0 My previous commit broke the JIT. The shift instructions always take
an 8-bit immediate. So mark the shifts that take immediates as taking
an 8-bit argument. The rest with the implicit use of CL are marked
appropriately.

A bug still exists:

def SHLDmri32  : I2A8 <"shld", 0xA4, MRMDestMem>, TB;           // [mem32] <<= [mem32],R32 imm8

The immediate in the above instruction is 8-bit but the memory
reference is 32-bit. The printer prints this as an 8-bit reference
which confuses the assembler. Same with SHRDmri32.

llvm-svn: 11931
2004-02-28 02:56:26 +00:00
Alkis Evlogimenos
ace6d81654 Fix argument size for SHL, SHR, SAR, SHLD and SHRD families of
instructions.

llvm-svn: 11923
2004-02-27 19:46:30 +00:00
Alkis Evlogimenos
839c70f45d Fix encoding of ADD and SUB family of instructions. Also rearrange
them so that they are consistent with AND, XOR, etc...

llvm-svn: 11922
2004-02-27 18:57:00 +00:00
Alkis Evlogimenos
56d357aa23 Rename MRMS[0-7]{r,m} to MRM[0-7]{r,m}.
llvm-svn: 11921
2004-02-27 18:55:12 +00:00
Alkis Evlogimenos
5ac109957f Add memory operand folding support for the SETcc family of
instructions.

llvm-svn: 11907
2004-02-27 16:13:37 +00:00
Alkis Evlogimenos
0742b93bb9 Add memory operand folding support for SHLD and SHRD instructions.
llvm-svn: 11905
2004-02-27 15:03:18 +00:00
Alkis Evlogimenos
b1f67f6741 Add memory operand folding support for SHL, SHR, SAR and SHLD instructions.
llvm-svn: 11903
2004-02-27 09:28:43 +00:00
Alkis Evlogimenos
cf49d13ed2 Rename SHL, SHR, SAR, SHLD and SHRD instructions to make them
consistent with the rest and also prepare for the addition of their
memory operand variants.

llvm-svn: 11902
2004-02-27 06:57:05 +00:00
Alkis Evlogimenos
b15631fcfa Uncomment assertions that register# != 0 on calls to
MRegisterInfo::is{Physical,Virtual}Register. Apply appropriate fixes
to relevant files.

llvm-svn: 11882
2004-02-26 22:00:20 +00:00
Chris Lattner
6a3796eaf9 Fix some warnings, some of which were spurious, and some of which were real
bugs.  Thanks Brian!

llvm-svn: 11859
2004-02-26 01:20:02 +00:00
Chris Lattner
7c05e5d4d8 Fix failures in 099.go due to the cfgsimplify pass creating switch instructions
where there were none before

llvm-svn: 11829
2004-02-25 19:30:19 +00:00
Chris Lattner
ab9628ad18 Teach the instruction selector how to transform 'array' GEP computations into X86
scaled indexes.  This allows us to compile GEP's like this:

int* %test([10 x { int, { int } }]* %X, int %Idx) {
        %Idx = cast int %Idx to long
        %X = getelementptr [10 x { int, { int } }]* %X, long 0, long %Idx, ubyte 1, ubyte 0
        ret int* %X
}

Into a single address computation:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        lea %EAX, DWORD PTR [%EAX + 8*%ECX + 4]
        ret

Before it generated:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        shl %ECX, 3
        add %EAX, %ECX
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

This is useful for things like int/float/double arrays, as the indexing can be folded into
the loads and stores, reducing register pressure and decreasing the pressure on the decode unit.
With these changes, I expect our performance on 256.bzip2 and gzip to improve a lot.  On
bzip2 for example, we go from this:

10665 asm-printer           - Number of machine instrs printed
   40 ra-local              - Number of loads/stores folded into instructions
 1708 ra-local              - Number of loads added
 1532 ra-local              - Number of stores added
 1354 twoaddressinstruction - Number of instructions added
 1354 twoaddressinstruction - Number of two-address instructions
 2794 x86-peephole          - Number of peephole optimization performed

to this:
9873 asm-printer           - Number of machine instrs printed
  41 ra-local              - Number of loads/stores folded into instructions
1710 ra-local              - Number of loads added
1521 ra-local              - Number of stores added
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
2142 x86-peephole          - Number of peephole optimization performed

... and these types of instructions are often in tight loops.

Linear scan is also helped, but not as much.  It goes from:

8787 asm-printer           - Number of machine instrs printed
2389 liveintervals         - Number of identity moves eliminated after coalescing
2288 liveintervals         - Number of interval joins performed
3522 liveintervals         - Number of intervals after coalescing
5810 liveintervals         - Number of original intervals
 700 spiller               - Number of loads added
 487 spiller               - Number of stores added
 303 spiller               - Number of register spills
1354 twoaddressinstruction - Number of instructions added
1354 twoaddressinstruction - Number of two-address instructions
 363 x86-peephole          - Number of peephole optimization performed

to:

7982 asm-printer           - Number of machine instrs printed
1759 liveintervals         - Number of identity moves eliminated after coalescing
1658 liveintervals         - Number of interval joins performed
3282 liveintervals         - Number of intervals after coalescing
4940 liveintervals         - Number of original intervals
 635 spiller               - Number of loads added
 452 spiller               - Number of stores added
 288 spiller               - Number of register spills
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
 258 x86-peephole          - Number of peephole optimization performed

Though I'm not complaining about the drop in the number of intervals.  :)

llvm-svn: 11820
2004-02-25 07:00:55 +00:00
Chris Lattner
dccf14825c * Make the previous patch more efficient by not allocating a temporary MachineInstr
to do analysis.

*** FOLD getelementptr instructions into loads and stores when possible,
    making use of some of the crazy X86 addressing modes.

For example, the following C++ program fragment:

struct complex {
    double re, im;
    complex(double r, double i) : re(r), im(i) {}
};
inline complex operator+(const complex& a, const complex& b) {
    return complex(a.re+b.re, a.im+b.im);
}
complex addone(const complex& arg) {
    return arg + complex(1,0);
}

Used to be compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %ECX
        fld QWORD PTR [%EDX]
        fld1
        faddp %ST(1)
***     add %ECX, 8
        fld QWORD PTR [%ECX]
        fldz
        faddp %ST(1)
***     mov %ECX, %EAX
        fxch %ST(1)
        fstp QWORD PTR [%ECX]
***     add %EAX, 8
        fstp QWORD PTR [%EAX]
        ret

Now it is compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        fld QWORD PTR [%ECX]
        fld1
        faddp %ST(1)
        fld QWORD PTR [%ECX + 8]
        fldz
        faddp %ST(1)
        fxch %ST(1)
        fstp QWORD PTR [%EAX]
        fstp QWORD PTR [%EAX + 8]
        ret

Other programs should see similar improvements, across the board.  Note that
in addition to reducing instruction count, this also reduces register pressure
a lot, always a good thing on X86.  :)

llvm-svn: 11819
2004-02-25 06:13:04 +00:00
Chris Lattner
10d08a2955 Add a helper to create an addressing mode given all of the pieces.
llvm-svn: 11818
2004-02-25 06:01:07 +00:00
Chris Lattner
c0e2bc0250 Add an inefficient way of folding structure and constant array indexes together
into a single LEA instruction.  This should improve the code generated for
things like X->A.B.C[12].D.

The bigger benefit is still coming though.  Note that this uses an LEA instruction
instead of an add, giving the register allocator more freedom.  We should probably
never generate ADDri32's.

llvm-svn: 11817
2004-02-25 03:45:50 +00:00
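
An illustrative C shape for that kind of access (the struct layout here is made up):

struct S3 { int pad; int D; };
struct S2 { struct S3 C[16]; };
struct S1 { struct S2 B; };
struct S0 { int pad; struct S1 A; };

int get(struct S0 *X) {
        return X->A.B.C[12].D;  /* all constant offsets fold into one address computation */
}
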
Chris Lattner
969f90db77 Implement a special case for storing an immediate into memory so that we don't need
an intermediate register.

llvm-svn: 11816
2004-02-25 02:56:58 +00:00
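
A trivial source shape where this applies (illustrative):

void store42(int *p) {
        *p = 42;        /* emit "mov DWORD PTR [%EAX], 42" directly,
                           with no scratch register */
}
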
Alkis Evlogimenos
9b103024ef Refactor rewinding code for finding the first terminator of a basic
block into MachineBasicBlock::getFirstTerminator().

This also fixes a bug in the implementation of the above in both
RegAllocLocal and InstrSched, where instructions were added after the
terminator if the basic block's only instruction was a terminator (it
shouldn't matter for RegAllocLocal since this case never occurs in
practice).

llvm-svn: 11748
2004-02-23 18:14:48 +00:00
Chris Lattner
40e15a6000 Simplify code a bit, don't go off the end of the block, now that the current
block we are in might be empty

llvm-svn: 11744
2004-02-23 07:42:19 +00:00
Chris Lattner
28e4e925eb We were forgetting to add FP_REG_KILL instructions to basic blocks which will
eventually get an assignment due to elimination of PHIs.

llvm-svn: 11743
2004-02-23 07:29:45 +00:00
Chris Lattner
b200638dc4 Work around a gas bug. Print '-9223372036854775808' as unsigned.
llvm-svn: 11729
2004-02-23 03:27:05 +00:00
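
The underlying issue, shown with the equivalent C parsing rule (a sketch): '-9223372036854775808' is unary minus applied to the literal 9223372036854775808, which does not fit in a signed 64-bit integer, so the value has to be printed unsigned (or spelled as an expression).

/* The portable spelling of the most negative 64-bit value: */
long long min64 = -9223372036854775807LL - 1LL;
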
Chris Lattner
85f13fae06 Implement cast fp -> bool
llvm-svn: 11728
2004-02-23 03:21:41 +00:00
Chris Lattner
795ca35cde Stop passing iterators around by reference now that we have ilists!
Implement cast Type::ULongTy -> double

llvm-svn: 11726
2004-02-23 03:10:10 +00:00
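
One common way to lower unsigned-64-to-double, sketched in C (an assumption for illustration on a two's-complement target; the log doesn't show the exact sequence used): convert as signed, then compensate by 2^64 when the top bit was set.

double ulong_to_double(unsigned long long u) {
        double d = (double)(long long)u;        /* signed conversion */
        if ((long long)u < 0)                   /* top bit was set   */
                d += 18446744073709551616.0;    /* add 2^64          */
        return d;
}
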
Chris Lattner
f9acb33dfd Add a new cmove instruction
llvm-svn: 11722
2004-02-23 01:16:05 +00:00