Commit Graph

24755 Commits

Chris Lattner
5609ba71a5 Another bad case I noticed
llvm-svn: 28177
2006-05-08 21:39:45 +00:00
Chris Lattner
4f3345f1f1 add a note
llvm-svn: 28176
2006-05-08 21:24:21 +00:00
Chris Lattner
eed10e837c Make the case I just checked in stronger. Now we compile this:
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

to:

_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test2:
        add r2, r3, r4
        extsh r2, r2
        srwi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28175
2006-05-08 21:18:59 +00:00
Chris Lattner
7b8a0cfff3 Implement and_sext.ll:test3, generating:
_test4:
        srawi r3, r3, 16
        blr

instead of:

_test4:
        srwi r2, r3, 16
        extsh r3, r2
        blr

for:

short test4(unsigned X) {
  return (X >> 16);
}

llvm-svn: 28174
2006-05-08 20:59:41 +00:00
Chris Lattner
4d8e0c2a55 new testcase
llvm-svn: 28173
2006-05-08 20:58:58 +00:00
Nate Begeman
db854c6772 Yet more readme updating
llvm-svn: 28172
2006-05-08 20:54:02 +00:00
Chris Lattner
4f66de151c Compile this:
short test4(unsigned X) {
  return (X >> 16);
}

to:

_test4:
        movl 4(%esp), %eax
        sarl $16, %eax
        ret

instead of:

_test4:
        movl $-65536, %eax
        andl 4(%esp), %eax
        sarl $16, %eax
        ret

llvm-svn: 28171
2006-05-08 20:51:54 +00:00
Nate Begeman
1ff4d8f2fe New note about something bad happening in target independent optimizers
llvm-svn: 28170
2006-05-08 20:08:28 +00:00
Nate Begeman
b8fa6337df Proving once again that I am not as smart as the compiler
llvm-svn: 28169
2006-05-08 19:09:24 +00:00
Nate Begeman
a706539a72 Fold more shifts into inserts, and update the README
llvm-svn: 28168
2006-05-08 17:38:32 +00:00
Chris Lattner
bbe4393bc4 Fold shifts with undef operands.
llvm-svn: 28167
2006-05-08 17:29:49 +00:00
Chris Lattner
6cac867da1 When tracking demanded bits, if any bits from the sext of an SRA are demanded,
then so is the input sign bit.  This fixes mediabench/g721 on X86.
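
A minimal C shape that exercises this (a hypothetical illustration; the actual
g721 code differs):

int test_sra(short x) {
  short t = x >> 4;  /* an arithmetic shift right (SRA) of a 16-bit value */
  return t;          /* sext of the SRA result: any demanded high bit of the
                        return value traces back to x's sign bit */
}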

llvm-svn: 28166
2006-05-08 17:22:53 +00:00
Nate Begeman
1f359b07de Make emission of jump tables a bit less conservative; they are now required
to be only 31.25% dense, rather than 75% dense.
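
For instance, a hypothetical switch with five cases spread over a range of
sixteen values is exactly 5/16 = 31.25% dense, so it now qualifies for a jump
table; under the old 75% threshold it would be lowered as compares and branches:

int classify(int x) {
  switch (x) {        /* 5 cases over the range [0,15]: 31.25% dense */
  case 0:  return 1;
  case 3:  return 2;
  case 7:  return 3;
  case 12: return 4;
  case 15: return 5;
  default: return 0;
  }
}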

llvm-svn: 28165
2006-05-08 16:51:36 +00:00
Evan Cheng
0fb3fc3626 Fixing truncate. Previously we were emitting truncate from r16 to r8 as
movw; that is, we promoted the destination operand to r16, so
        %CH = TRUNC_R16_R8 %BP
was emitted as
        movw %bp, %cx.

This is incorrect: if %cl is live, it would be clobbered.
Ideally we want to do the opposite, that is, emit it as
        movb ??, %ch
But this is not possible since %bp does not have an r8 sub-register.

We now define a new register class, R16_, which is a subclass of R16
containing only those 16-bit registers that have r8 sub-registers (i.e.
AX - DX). We isel the truncate to two instructions: a MOV16to16_ to copy the
value into the R16_ class, followed by a TRUNC_R16_R8.

Due to bug 770, the register coalescer is not going to coalesce between R16 and
R16_. That will be fixed later so we can eliminate the MOV16to16_. Right now it
can only be eliminated if we are lucky and the source and destination registers
are the same.
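
A minimal C case that exercises this truncate (the function name is
hypothetical):

unsigned char trunc16to8(unsigned short x) {
  /* isel'd as a MOV16to16_ copy into the R16_ class followed by
     TRUNC_R16_R8, instead of a movw that could clobber a live r8 reg */
  return (unsigned char)x;
}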

llvm-svn: 28164
2006-05-08 08:01:26 +00:00
Chris Lattner
c1736fdce0 Move the definition of value_use_iterator::getOperandNo to User.h, where the
definition of the User class is available; this fixes the build with some
compiler versions.

llvm-svn: 28163
2006-05-08 05:59:36 +00:00
Nate Begeman
591488077e Update some stuff now that the new rlwimi code has gone in
llvm-svn: 28162
2006-05-08 02:52:38 +00:00
Nate Begeman
b8e351aec5 Fix PR772
llvm-svn: 28161
2006-05-08 01:35:01 +00:00
Nate Begeman
c5a92246bc Remove unnecessary include
llvm-svn: 28160
2006-05-08 01:33:11 +00:00
Chris Lattner
0da270237b This test passes now; remove the xfail marker.
Change the test to be a positive test instead of a negative one.

llvm-svn: 28159
2006-05-07 18:16:31 +00:00
Evan Cheng
698b0517b5 Typos
llvm-svn: 28158
2006-05-07 10:10:20 +00:00
Jeff Cohen
44ece070c7 Unlike Unix, Windows won't let a file be implicitly replaced via renaming without explicit permission.
llvm-svn: 28157
2006-05-07 02:51:51 +00:00
Nate Begeman
dc94b738d0 New rlwimi implementation, which is superior to the old one. There are
still a couple of missed optimizations, but we now generate all the possible
rlwimis for multiple inserts into the same bitfield.  More regression tests
to come.
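
A hypothetical example of multiple inserts into the same word, each of which
can now become an rlwimi (rotate-left-word-immediate-then-mask-insert):

unsigned pack(unsigned base, unsigned lo, unsigned hi) {
  /* two inserts into disjoint byte fields of the same word */
  return (base & ~0x00FF00FFu) | ((hi & 0xFFu) << 16) | (lo & 0xFFu);
}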

llvm-svn: 28156
2006-05-07 00:23:38 +00:00
Chris Lattner
5c9c9f0eb6 Use ComputeMaskedBits to determine # sign bits as a fallback. This lets us
handle many more cases, including silly things like
sextinreg(setcc,i16) -> setcc.
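
In C terms, the setcc case looks something like this (hypothetical function):

int cmp(int a, int b) {
  short t = (a < b);  /* the setcc result is already 0 or 1, so the
                         sign_extend_inreg to i16 is a no-op */
  return t;
}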

llvm-svn: 28155
2006-05-06 23:48:13 +00:00
Chris Lattner
8b8093dea2 Add some more sign propagation cases
llvm-svn: 28154
2006-05-06 23:40:29 +00:00
Jeff Cohen
37413e2c29 Apply a bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is not a legal path on Windows.
llvm-svn: 28153
2006-05-06 23:25:53 +00:00
Chris Lattner
f76c0b6662 Simplify some code, add a couple minor missed folds
llvm-svn: 28152
2006-05-06 23:06:26 +00:00
Chris Lattner
4a4cbde0bb constant fold sign_extend_inreg
llvm-svn: 28151
2006-05-06 23:05:41 +00:00
Chris Lattner
3d5d01a74b remove cases handled elsewhere
llvm-svn: 28150
2006-05-06 22:43:44 +00:00
Chris Lattner
1fce346023 Add some more simple sign bit propagation cases.
llvm-svn: 28149
2006-05-06 22:39:59 +00:00
Jeff Cohen
248e133255 Fix some loose ends in MASM support.
llvm-svn: 28148
2006-05-06 21:27:14 +00:00
Chris Lattner
3da425e496 new testcase that we now handle correctly.
llvm-svn: 28147
2006-05-06 18:15:50 +00:00
Chris Lattner
ca06e2522e Use the new TargetLowering::ComputeNumSignBits method to eliminate
sign_extend_inreg operations.  Though ComputeNumSignBits is still rudimentary,
this is enough to compile this:

short test(short X, short x) {
  int Y = X+x;
  return (Y >> 1);
}
short test2(short X, short x) {
  int Y = (short)(X+x);
  return Y >> 1;
}

into:

_test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28146
2006-05-06 09:30:03 +00:00
Chris Lattner
3a77411d76 Add some really really simple code for computing sign-bit propagation.
This will certainly be enhanced in the future.

llvm-svn: 28145
2006-05-06 09:27:13 +00:00
Chris Lattner
06d617846d Add some new methods for computing sign bit information.
llvm-svn: 28144
2006-05-06 09:26:22 +00:00
Chris Lattner
02d3258820 When inserting casts, be careful of where we put them. We cannot insert
a cast immediately before a PHI node.

This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll

llvm-svn: 28143
2006-05-06 09:10:37 +00:00
Chris Lattner
1038441de5 new testcase
llvm-svn: 28142
2006-05-06 09:09:47 +00:00
Chris Lattner
7661770087 Move some code around.
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated.  If one or
both doesn't, then this xform doesn't remove a cast.

This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
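
For illustration, a C shape where the fold clearly pays off, since both
zero-extensions generate real code (names hypothetical):

unsigned and_bytes(unsigned char a, unsigned char b) {
  /* (and (zext a), (zext b)) -> (zext (and a, b)): one cast instead of two */
  return (unsigned)a & (unsigned)b;
}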

llvm-svn: 28141
2006-05-06 09:00:16 +00:00
Chris Lattner
d689dd7659 new testcase from ghostscript that sent instcombine into an infinite loop
llvm-svn: 28140
2006-05-06 08:58:06 +00:00
Chris Lattner
89fa42b51e Teach the X86 backend about non-i32 inline asm register classes.
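A hypothetical example that now works, with an i16 operand under the generic
"r" constraint:

unsigned short swap_halves(unsigned short x) {
  /* "r" on a 16-bit operand must pick from the 16-bit register class
     rather than assuming i32 */
  __asm__("rolw $8, %0" : "+r"(x));
  return x;
}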
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
Chris Lattner
fd1923bfa6 Fold (trunc (srl x, c)) -> (srl (trunc x), c)
llvm-svn: 28138
2006-05-06 00:11:52 +00:00
Chris Lattner
32e96402c0 Fold trunc(any_ext). This gives stuff like:
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx
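
In C terms, the folded pattern is a 16-bit value round-tripped through a wider
type (hypothetical function):

unsigned short roundtrip(unsigned short x) {
  unsigned wide = x;            /* any_ext to 32 bits */
  return (unsigned short)wide;  /* trunc(any_ext x) folds back to x,
                                   leaving a plain 16-bit move */
}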

llvm-svn: 28137
2006-05-05 22:56:26 +00:00
Chris Lattner
c912cf0b07 Shrink shifts when possible.
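For example (hypothetical), when only the low byte of a 32-bit shift is
demanded, the shift can be performed in the narrower type:

unsigned char shl3(unsigned char x) {
  /* the high bits of the 32-bit shift are never demanded, so the
     shift can shrink to an 8-bit one */
  return (unsigned char)((unsigned)x << 3);
}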
llvm-svn: 28136
2006-05-05 22:53:17 +00:00
Chris Lattner
2f0d27a72a Implement ComputeMaskedBits/SimplifyDemandedBits for ISD::TRUNCATE
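A hypothetical case this enables: known-zero bits now flow through the
truncate, so the whole expression below folds to a constant.

unsigned char known_zero(unsigned x) {
  unsigned t = x & ~0xFFu;  /* low 8 bits known to be zero */
  return (unsigned char)t;  /* the truncate keeps only known-zero bits,
                               so this folds to 0 */
}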
llvm-svn: 28135
2006-05-05 22:32:12 +00:00
Chris Lattner
daae9ee503 Print a grouping around inline asm blocks so that we can tell when we are
using them.

llvm-svn: 28134
2006-05-05 21:50:04 +00:00
Chris Lattner
12bb901c93 Print *some* grouping around inline asm blocks so we know where they are.
llvm-svn: 28133
2006-05-05 21:48:50 +00:00
Chris Lattner
06cb36074a Indent multiline asm strings more nicely
llvm-svn: 28132
2006-05-05 21:47:05 +00:00
Chris Lattner
a03676690b Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
Chris Lattner
4b581e4167 Fold (fpext (load x)) -> (extload x)
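Together with the cvtss2sd pattern in the previous entry, this turns a C case
like the following (hypothetical function) into a single extending load:

double widen(const float *p) {
  /* (fpext (load p)) becomes one f32->f64 extending load,
     i.e. a single cvtss2sd from memory on X86 */
  return *p;
}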
llvm-svn: 28130
2006-05-05 21:34:35 +00:00
Chris Lattner
229d3e3c2d More aggressively sink GEP offsets into loops. For example, before we
generated:

        movl 8(%esp), %eax
        movl %eax, %edx
        addl $4316, %edx
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, (%edx)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
        movl %edx, 4460(%eax)
        ret
...

Now we generate:

        movl 8(%esp), %eax
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, 4316(%eax)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
        movl %ecx, 4460(%eax)
        ret

... which uses one fewer register.
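
A hedged reconstruction of the source shape behind the asm above (the struct
layout and field types are hypothetical; only the 4316/4460 offsets and the
symbol names come from the asm):

extern int QuantizationTables720[];
extern int QNOtoQuantTableShift720[];

struct Ctx {
  char pad1[4316];
  int *quant;          /* at offset 4316 */
  char pad2[140];
  int *shift;          /* at offset 4460, assuming 32-bit pointers */
};

void setup(struct Ctx *c, unsigned char cond) {
  if (cond <= 1) {
    /* the +4316 base offset is folded into each store's addressing
       mode instead of being materialized in a register up front */
    c->quant = QuantizationTables720;
    c->shift = QNOtoQuantTableShift720;
  }
}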

llvm-svn: 28129
2006-05-05 21:17:49 +00:00
Chris Lattner
73bfc4c2ea Fix an infinite loop hit while compiling oggenc last night.
llvm-svn: 28128
2006-05-05 20:51:30 +00:00