Commit Graph

309 Commits

Author SHA1 Message Date
Chris Lattner
e4d5c441e0 This mega patch converts us from using Function::a{iterator|begin|end} to
using Function::arg_{iterator|begin|end}.  Likewise Module::g* -> Module::global_*.

This patch is contributed by Gabor Greif, thanks!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20597 91177308-0d34-0410-b5e6-96231b3b80d8
2005-03-15 04:54:21 +00:00
Chris Lattner
6e7c47c12d Fix a subtle bug involving constant expr casts from int to fp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19410 91177308-0d34-0410-b5e6-96231b3b80d8
2005-01-09 01:49:29 +00:00
Chris Lattner
8c92628d1c Wrap long line.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19367 91177308-0d34-0410-b5e6-96231b3b80d8
2005-01-08 06:59:50 +00:00
Chris Lattner
7ab65934a0 The X86 instruction selector already handles codegen of:
store float 123.45, float* %P

as an integer store.  This adds handling of float immediate stores as integers
for arguments passed to function calls.

This is now tested by CodeGen/X86/store-fp-constant.ll
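
A rough C sketch of the idea (the helper name is illustrative; 0x42F6E666 is
the IEEE-754 bit pattern of 123.45f):

#include <stdint.h>
#include <string.h>

/* Store a float immediate through an integer move by reusing its bit
   pattern, so the constant never touches the FPU. */
static void store_float_as_int(float *P) {
    const uint32_t bits = 0x42F6E666u;  /* bits of 123.45f */
    memcpy(P, &bits, sizeof bits);      /* compiles to a single integer mov */
}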


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19364 91177308-0d34-0410-b5e6-96231b3b80d8
2005-01-08 05:45:24 +00:00
Chris Lattner
6ac95f9679 Codegen -1 and -0.0 more efficiently. This implements CodeGen/X86/negatize_zero.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19313 91177308-0d34-0410-b5e6-96231b3b80d8
2005-01-06 21:19:16 +00:00
Chris Lattner
5384b38ccc 1. If a double FP constant must be put into a constant pool, but it can be
precisely represented as a float, put it into the constant pool as a
   float.
2. Use the cbw/cwd/cdq instructions instead of an explicit SAR for signed
   division.
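
For point 1, the losslessness test is a round trip through float; a minimal
sketch (function name illustrative):

/* A double constant may go into the pool as a float exactly when
   narrowing and re-widening reproduces the original value. */
static int fits_in_float(double d) {
    return (double)(float)d == d;
}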


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19291 91177308-0d34-0410-b5e6-96231b3b80d8
2005-01-05 16:30:14 +00:00
Chris Lattner
8cdbc35216 Change the sentinel
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19007 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-17 00:46:51 +00:00
Chris Lattner
11cf7aa775 Create a stack slot for the return address lazily instead of eagerly. This
saves a small amount of time for functions that don't call llvm.returnaddress
or llvm.frameaddress (which is almost all functions).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19006 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-17 00:07:46 +00:00
Chris Lattner
c0354c904b Set the rounding precision of the X86 FPU to 64 bits instead of 80 bits. We
don't support long double anyway, and this gives us FP results closer to
other targets.

This also speeds up 179.art from 41.4s to 18.32s, by eliminating a problem
with extra precision that causes an FP == comparison to fail (leading to
extra loop iterations).
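
A hedged C sketch of how the x87 precision-control field can be switched to
53-bit (double) precision; the fnstcw/fldcw pair is standard x87, but the
helper and its placement are illustrative:

#include <stdint.h>

/* Set the x87 precision-control field (bits 8-9 of the control word)
   to 10b = 53-bit mantissa: double instead of extended precision. */
static void set_x87_double_precision(void) {
    uint16_t cw;
    __asm__ volatile ("fnstcw %0" : "=m"(cw));  /* read control word */
    cw = (uint16_t)((cw & ~0x0300) | 0x0200);   /* PC = double precision */
    __asm__ volatile ("fldcw %0" : : "m"(cw));  /* write it back */
}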


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18895 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-13 17:23:11 +00:00
Chris Lattner
223d4c4b3a Fix a regression caused by the previous patch
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18449 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-03 05:13:15 +00:00
Chris Lattner
3986924e0b Consider 64-bit registers to be FP as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18432 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-02 17:57:21 +00:00
Tanya Lattner
9855b84469 Reverting this patch:
http://mail.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20041122/021428.html

It broke MultiSource/Applications/obsequi


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18407 91177308-0d34-0410-b5e6-96231b3b80d8
2004-12-01 18:27:03 +00:00
Chris Lattner
928b47ae2e Revamp long/ulong comparisons to use a much more efficient sequence (thanks
to Brian and the Sun compiler for pointing out that the obvious works :)

This also enables folding all long comparisons into setcc and branch
instructions: before, we could only fold == and !=.

For example, for:
void test(unsigned long long A, unsigned long long B) {
   if (A < B) foo();
 }

We now generate:

test:
        subl $4, %esp
        movl %esi, (%esp)
        movl 8(%esp), %eax
        movl 12(%esp), %ecx
        movl 16(%esp), %edx
        movl 20(%esp), %esi
        subl %edx, %eax
        sbbl %esi, %ecx
        jae .LBBtest_2  # UnifiedReturnBlock
.LBBtest_1:     # then
        call foo
        movl (%esp), %esi
        addl $4, %esp
        ret
.LBBtest_2:     # UnifiedReturnBlock
        movl (%esp), %esi
        addl $4, %esp
        ret

Instead of:

test:
        subl $12, %esp
        movl %esi, 8(%esp)
        movl %ebx, 4(%esp)
        movl 16(%esp), %eax
        movl 20(%esp), %ecx
        movl 24(%esp), %edx
        movl 28(%esp), %esi
        cmpl %edx, %eax
        setb %al
        cmpl %esi, %ecx
        setb %bl
        cmove %ax, %bx
        testb %bl, %bl
        je .LBBtest_2   # UnifiedReturnBlock
.LBBtest_1:     # then
        call foo
        movl 4(%esp), %ebx
        movl 8(%esp), %esi
        addl $12, %esp
        ret
.LBBtest_2:     # UnifiedReturnBlock
        movl 4(%esp), %ebx
        movl 8(%esp), %esi
        addl $12, %esp
        ret
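
The key fact behind the sub/sbb sequence, modeled in C (names illustrative):
the borrow chained through the two 32-bit subtractions is exactly the
unsigned A < B, so a single jae replaces the setcc dance.

#include <stdint.h>

/* Model of subl/sbbl over the two halves of A and B; the return value
   corresponds to the final borrow (carry) flag. */
static int ult64_via_borrow(uint32_t a_lo, uint32_t a_hi,
                            uint32_t b_lo, uint32_t b_hi) {
    unsigned borrow = a_lo < b_lo;                    /* subl's carry out */
    return (uint64_t)a_hi < (uint64_t)b_hi + borrow;  /* sbbl's carry out */
}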


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@18330 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-29 05:55:24 +00:00
Chris Lattner
39a83dc37c Fix a major bug in the signed shr code, which apparently only breaks 134.perl!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17902 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-16 18:40:52 +00:00
Chris Lattner
36c625d3a5 Simplify and rearrange long shift code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17861 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-15 23:16:34 +00:00
Chris Lattner
ce7cafa960 shld is a very high latency operation. Instead of emitting it for shifts of
two or three, open code the equivalent operation, which is faster on Athlon
and P4 (by a substantial margin).

For example, instead of compiling this:

long long X2(long long Y) { return Y << 2; }

to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $2, %eax, %edx
        shll $2, %eax
        ret

Compile it to:

X2:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $30, %edx
        leal (%edx,%ecx,4), %edx
        shll $2, %eax
        ret

Likewise, for << 3, compile to:

X3:
        movl 4(%esp), %eax
        movl 8(%esp), %ecx
        movl %eax, %edx
        shrl $29, %edx
        leal (%edx,%ecx,8), %edx
        shll $3, %eax
        ret

This matches icc, except that icc open codes the shifts as adds on the P4.
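
A C model of the open-coded double shift (names illustrative); the << 3 case
is identical with 29 and a scale of 8 in place of 30 and 4:

#include <stdint.h>

/* Shift the 64-bit value {hi,lo} left by 2 without shld: the two bits
   that cross the half boundary move via shr and fold in via lea. */
static void shl64_by2(uint32_t *hi, uint32_t *lo) {
    uint32_t crossed = *lo >> 30;   /* shrl $30, %edx */
    *hi = crossed + (*hi << 2);     /* leal (%edx,%ecx,4), %edx */
    *lo <<= 2;                      /* shll $2, %eax */
}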


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17707 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:48:57 +00:00
Chris Lattner
62f5a9402c Add missing check
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17706 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:04:38 +00:00
Chris Lattner
44205cadba Compile:
long long X3_2(long long Y) { return Y+Y; }
int X(int Y) { return Y+Y; }

into:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        addl %eax, %eax
        adcl %edx, %edx
        ret
X:
        movl 4(%esp), %eax
        addl %eax, %eax
        ret

instead of:

X3_2:
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        shldl $1, %eax, %edx
        shll $1, %eax
        ret

X:
        movl 4(%esp), %eax
        shll $1, %eax
        ret
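
A C model of the add/adc pair (names illustrative): the carry out of the
low-half add feeds the high-half add.

#include <stdint.h>

/* Double a 64-bit value held as two 32-bit halves, the way addl/adcl do. */
static void dbl64(uint32_t *hi, uint32_t *lo) {
    uint32_t old_lo = *lo;
    *lo += old_lo;                       /* addl: wraps if a carry occurs */
    *hi = *hi + *hi + (*lo < old_lo);    /* adcl: high halves plus carry */
}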


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17705 91177308-0d34-0410-b5e6-96231b3b80d8
2004-11-13 20:03:48 +00:00
Chris Lattner
611fb259ba Don't print stuff out from the code generator. This broke the JIT horribly
last night. :)  bork!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17093 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 17:40:50 +00:00
Chris Lattner
56a31c69c8 Rewrite support for cast uint -> FP. In particular, we used to compile this:
double %test(uint %X) {
        %tmp.1 = cast uint %X to double         ; <double> [#uses=1]
        ret double %tmp.1
}

into:

test:
        sub %ESP, 8
        mov %EAX, DWORD PTR [%ESP + 12]
        mov %ECX, 0
        mov DWORD PTR [%ESP], %EAX
        mov DWORD PTR [%ESP + 4], %ECX
        fild QWORD PTR [%ESP]
        add %ESP, 8
        ret

... which basically zero extends to 8 bytes, then does an fild for an
8-byte signed int.

Now we generate this:


test:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        shr %EAX, 31
        fadd DWORD PTR [.CPItest_0 + 4*%EAX]
        add %ESP, 4
        ret

        .section .rodata
        .align  4
.CPItest_0:
        .quad   5728578726015270912

This does a 32-bit signed integer load, then adds in an offset if the sign
bit of the integer was set.

It turns out that this is substantially faster than the preceding sequence.
Consider this testcase:

unsigned a[2]={1,2};
volatile double G;

int main() {
    int i;
    for (i = 0; i < 100000000; ++i)
        G += a[i&1];
}

On zion (a P4 Xeon, 3GHz), this patch speeds up the testcase from 2.140s
to 0.94s.

On apoc, an Athlon MP 2100+, this patch speeds up the testcase from 1.72s
to 1.34s.

Note that the program takes 2.5s/1.97s on zion/apoc with GCC 3.3 -O3
-fomit-frame-pointer.
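
The constant pool holds the pair {0.0f, 2^32 as a float} — that is what the
.quad above encodes. A C sketch of the whole rewrite (name illustrative):

#include <stdint.h>

/* Convert uint -> double via a signed 32-bit convert (the fild), then add
   back 2^32 when the sign bit made the signed reading off by 2^32. */
static double uint_to_double(uint32_t x) {
    static const float correction[2] = { 0.0f, 4294967296.0f };
    double d = (double)(int32_t)x;      /* 32-bit signed load/convert */
    return d + correction[x >> 31];     /* the indexed fadd */
}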


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17083 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 08:01:28 +00:00
Chris Lattner
de95c9e0bb fold:
  %X = and Y, constantint
  %Z = setcc %X, 0

instead of emitting:

        and %EAX, 3
        test %EAX, %EAX
        je .LBBfoo2_2   # UnifiedReturnBlock

We now emit:

        test %EAX, 3
        je .LBBfoo2_2   # UnifiedReturnBlock

This triggers 581 times on 176.gcc for example.
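
The identity being used, in one-line C form (function name illustrative):
the comparison needs only the flags of the AND, which `test` computes
without writing a register.

/* Selected as: test %EAX, 3; je ... -- no separate and/test pair. */
int low_bits_clear(unsigned x) {
    return (x & 3) == 0;
}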


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17080 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-17 06:10:40 +00:00
Chris Lattner
30483b0c84 Teach the X86 backend about unreachable and undef. Among other things, we
now compile:

'foo() {}' into "ret" instead of "mov EAX, 0; ret"


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17049 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-16 18:13:05 +00:00
Chris Lattner
358a9027a8 Instruction select globals with offsets better. For example, on this test
case:

int C[100];
int foo() {
  return C[4];
}

We now codegen:

foo:
        mov %EAX, DWORD PTR [C + 16]
        ret

instead of:

foo:
        mov %EAX, OFFSET C
        mov %EAX, DWORD PTR [%EAX + 16]
        ret

Other impressive features may be coming later.

This patch is contributed by Jeff Cohen!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17011 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-15 05:05:29 +00:00
Chris Lattner
b0f4e389db Fix a major regression from the bugfix for 2004-10-08-SelectSetCCFold.llx,
which prevented setcc's from being folded into branches.  It appears that
conditional branchinst's CC operand is actually operand(2), not operand(0)
as we might expect. :(


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16859 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-08 22:24:31 +00:00
Chris Lattner
d04cd55796 Fix bug: 2004-10-08-SelectSetCCFold.llx. Normally this is hidden by the
instcombine xform, which is why we didn't notice it before.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16840 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-08 16:34:13 +00:00
Chris Lattner
09c750f73d Remove debugging code, fix encoding problem. This fixes the problems
the JIT had last night.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16766 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 14:31:50 +00:00
Chris Lattner
2483f67914 Codegen signed mod by 2 or -2 more efficiently. Instead of generating:
t:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %ECX, 2
        mov %EAX, %EDX
        sar %EDX, 31
        idiv %ECX
        mov %EAX, %EDX
        ret

Generate:
t:
        mov %ECX, DWORD PTR [%ESP + 4]
***     mov %EAX, %ECX
        cdq
        and %ECX, 1
        xor %ECX, %EDX
        sub %ECX, %EDX
***     mov %EAX, %ECX
        ret

Note that the two marked moves are redundant, and should be eliminated by the
register allocator, but aren't.

Compare this to GCC, which generates:

t:
        mov     %eax, DWORD PTR [%esp+4]
        mov     %edx, %eax
        shr     %edx, 31
        lea     %ecx, [%edx+%eax]
        and     %ecx, -2
        sub     %eax, %ecx
        ret

or ICC 8.0, which generates:

t:
        movl      4(%esp), %ecx                                 #3.5
        movl      $-2147483647, %eax                            #3.25
        imull     %ecx                                          #3.25
        movl      %ecx, %eax                                    #3.25
        sarl      $31, %eax                                     #3.25
        addl      %ecx, %edx                                    #3.25
        subl      %edx, %eax                                    #3.25
        addl      %eax, %eax                                    #3.25
        negl      %eax                                          #3.25
        subl      %eax, %ecx                                    #3.25
        movl      %ecx, %eax                                    #3.25
        ret                                                     #3.25

We would be in great shape if not for the moves.
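
A C model of the cdq/and/xor/sub sequence (name illustrative; assumes the
usual arithmetic right shift for negative ints):

#include <stdint.h>

/* Branch-free x % 2 (x % -2 yields the same remainder in C): take the
   low bit, then conditionally negate it using the sign of x. */
static int32_t srem2(int32_t x) {
    int32_t sign = x >> 31;         /* cdq: 0 if x >= 0, -1 if x < 0 */
    int32_t bit  = x & 1;           /* and: remainder magnitude */
    return (bit ^ sign) - sign;     /* xor/sub: negate bit iff x < 0 */
}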


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16763 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 05:01:07 +00:00
Chris Lattner
3ffdff6448 Fix a scary bug with signed division by a power of two. We used to generate:
s:   ;; X / 4
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 1
        shr %ECX, 30
        mov %EDX, %EAX
        add %EDX, %ECX
        sar %EAX, 2
        ret

When we really meant:

s:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 1
        shr %ECX, 30
        add %EAX, %ECX
        sar %EAX, 2
        ret

Hey, this also reduces register pressure too :)
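
The corrected sequence in C form (name illustrative; assumes arithmetic
right shift of negative ints): the bias must be added into X itself, not
into a scratch copy.

#include <stdint.h>

/* Truncating x / 4: bias negative inputs by divisor-1, then shift. */
static int32_t sdiv4(int32_t x) {
    uint32_t bias = (uint32_t)(x >> 1) >> 30;   /* 3 if x < 0, else 0 */
    return (int32_t)((uint32_t)x + bias) >> 2;  /* sar $2 after the fixup */
}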


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16761 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 04:19:43 +00:00
Chris Lattner
610f1e2785 Codegen signed divides by 2 and -2 more efficiently. In particular
instead of:

s:   ;; X / 2
        movl 4(%esp), %eax
        movl %eax, %ecx
        shrl $31, %ecx
        movl %eax, %edx
        addl %ecx, %edx
        sarl $1, %eax
        ret

t:   ;; X / -2
        movl 4(%esp), %eax
        movl %eax, %ecx
        shrl $31, %ecx
        movl %eax, %edx
        addl %ecx, %edx
        sarl $1, %eax
        negl %eax
        ret

Emit:

s:
        movl 4(%esp), %eax
        cmpl $-2147483648, %eax
        sbbl $-1, %eax
        sarl $1, %eax
        ret

t:
        movl 4(%esp), %eax
        cmpl $-2147483648, %eax
        sbbl $-1, %eax
        sarl $1, %eax
        negl %eax
        ret
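
In C terms (name illustrative), the cmp/sbb pair materializes the sign bit
as a 0/1 bias without a branch:

#include <stdint.h>

/* Truncating x / 2: add 1 to negative inputs, then arithmetic shift. */
static int32_t sdiv2(int32_t x) {
    int32_t bias = (int32_t)((uint32_t)x >> 31);    /* 1 iff x < 0 */
    return (x + bias) >> 1;                         /* sarl $1 */
}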


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16760 91177308-0d34-0410-b5e6-96231b3b80d8
2004-10-06 04:02:39 +00:00
Misha Brukman
eae1bf10ea s/ISel/X86ISel/ to have unique class names for debugging via gdb because the C++
front-end in gcc does not mangle classes in anonymous namespaces correctly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16469 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-21 18:21:21 +00:00
Reid Spencer
2da5c3dda6 Convert code to compile with vc7.1.
Patch contributed by Paolo Invernizzi. Thanks Paolo!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16368 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-15 17:06:42 +00:00
Reid Spencer
551ccae044 Changes For Bug 352
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
2004-09-01 22:55:40 +00:00
Reid Spencer
fc989e1ee0 Reduce the number of arguments in the instruction builder and make some
improvements on instruction selection that account for register and frame
index bases.

Patch contributed by Jeff Cohen. Thanks Jeff!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16110 91177308-0d34-0410-b5e6-96231b3b80d8
2004-08-30 00:13:26 +00:00
Misha Brukman
91b5ca838a Fix file header as it has been renamed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15239 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-26 18:45:48 +00:00
Chris Lattner
3dbb504081 Fix cases where we generated horrible code like this:
        mov %EDI, 12
        add %EDI, %ECX
        mov %ECX, 12
        add %ECX, %EDX
        mov %EDX, 12
        add %EDX, %ESI

instead (really!) generate this:

        add %ECX, 12
        add %EDX, 12
        add %ESI, 12


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15090 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-21 21:28:26 +00:00
Chris Lattner
4771288fe3 While I'm at it, don't break codegen of mul by 3,5,9.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15013 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-19 23:50:57 +00:00
Chris Lattner
596b97f1ab Generate better code for multiplies by negative constants like -4, -1, -9, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15012 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-19 23:47:21 +00:00
Reid Spencer
8863f1814b bug 122:
- Replace ConstantPointerRef usage with GlobalValue usage
- Minimize redundant isa<GlobalValue> usage
- Correct isa<Constant> for GlobalValue subclass


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14950 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-18 00:38:32 +00:00
Chris Lattner
76e2df2645 Patches towards fixing PR341
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14841 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-15 02:14:30 +00:00
Chris Lattner
d2995df5b7 Improve codegen for the LLVM offsetof/sizeof "operator". Before we compiled
this LLVM function:

int %foo() {
        ret int cast (int** getelementptr (int** null, int 1) to int)
}

into:

foo:
        mov %EAX, 0
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

now we compile it into:

foo:
        mov %EAX, 4
        ret

This sequence is frequently generated by the MSIL front-end, and soon will be
by the malloc lowering pass and the Java front-end as well.

-Chris
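
The null-GEP idiom the front-ends emit is the classic C offsetof trick: the
"address" of a member at a null base is its byte offset. A hedged C
rendering (macro name illustrative):

#include <stddef.h>

/* Same computation as getelementptr off a null pointer. */
#define my_offsetof(type, member) ((size_t)&((type *)0)->member)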


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14834 91177308-0d34-0410-b5e6-96231b3b80d8
2004-07-15 00:58:53 +00:00
Chris Lattner
8b486a114e Fix a regression from r1.224. In particular, codegen a cast from double ->
float as a truncation by going through memory.  This truncation was being
skipped, which caused 175.vpr to fail after aggressive register promotion.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14473 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-29 00:14:38 +00:00
Chris Lattner
3048373748 Move the IntrinsicLowering header into the CodeGen directory, as per PR346
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14266 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-20 07:49:54 +00:00
Chris Lattner
667ea024b5 Codegen 'sub C, X' a little better for register pressure. Instead of
mov REG, C
sub REG, X

generate:

neg X
add X, C

which uses one less register
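
The identity at work, as a tiny C example (the constant 12 is arbitrary):

/* C - X = (-X) + C, so C becomes an immediate to add and needs no
   register of its own. */
static int sub_from_const(int x) {
    return -x + 12;     /* neg X; add X, 12 */
}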


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14213 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:50:37 +00:00
Chris Lattner
a6f9fe6dbc Fold setcc instructions into select and branches that are not in the same BB as
the setcc.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14212 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-18 00:29:22 +00:00
Chris Lattner
ccd9796a46 Do not fold a load into an instruction if the load is used more than once. In particular
we do not want to fold the load in cases like this:

  X = load
    = add A, X
    = add B, X


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14204 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 22:15:25 +00:00
Chris Lattner
f70c22b019 Rename Type::PrimitiveID to TypeId and ::getPrimitiveID() to ::getTypeID()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14201 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-17 18:19:28 +00:00
Chris Lattner
4adf066f99 Remove support for llvm.isnan. Alkis wins :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14189 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:48:07 +00:00
Chris Lattner
dc5724478e Add basic support for the isunordered intrinsic. The isnan stuff still needs to go.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14185 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-15 21:36:44 +00:00
Chris Lattner
01cdb1b367 By far, one of the most common uses of isnan is to make 'isunordered'
comparisons.  In an 'isunordered' predicate, which looks like this at
the LLVM level:

        %a = call bool %llvm.isnan(double %X)
        %b = call bool %llvm.isnan(double %Y)
        %COM = or bool %a, %b

We used to generate this code:

        fxch %ST(1)
        fucomip %ST(0), %ST(0)
        setp %AL
        fucomip %ST(0), %ST(0)
        setp %AH
        or %AL, %AH

With this patch, we generate this code:

        fucomip %ST(0), %ST(1)
        fstp %ST(0)
        setp %AL

Which should make alkis happy.  Tested as X86/compare_folding.llx:test1
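
In source terms, the pattern being folded is C99's isunordered; a minimal
example (function name illustrative):

#include <math.h>

/* isnan(X) || isnan(Y) is exactly one unordered compare of X against Y. */
int either_nan(double X, double Y) {
    return isunordered(X, Y);
}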


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14148 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 05:33:49 +00:00
Chris Lattner
0ca2c8e02c Now that compare instructions aren't lumped in with the other TwoArgFP
instructions, we can get rid of the FpUCOM/FpUCOMi pseudo instructions, which
makes stuff simpler and faster.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14144 91177308-0d34-0410-b5e6-96231b3b80d8
2004-06-11 04:49:02 +00:00