Chris Lattner
b4cf4ffb04
Speed up folding operations into loads.
...
llvm-svn: 19733
2005-01-21 21:43:02 +00:00
Chris Lattner
fd4d7f71ae
The ever-important vanity pass name :)
...
llvm-svn: 19731
2005-01-21 21:35:14 +00:00
Chris Lattner
5f2fbeaa69
Fix a FIXME: realize that argument stores are all independent (don't alias)
...
llvm-svn: 19728
2005-01-21 19:46:38 +00:00
Chris Lattner
febeb380ae
Implement ADD_PARTS/SUB_PARTS so that 64-bit integer add/sub work. This
...
fixes most of the remaining llc-beta failures.
llvm-svn: 19716
2005-01-20 18:53:00 +00:00
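The ADD_PARTS/SUB_PARTS expansion above splits a 64-bit add or subtract into an operation on the low halves plus an operation on the high halves that consumes the carry/borrow. A minimal C++ sketch of the arithmetic the resulting add/adc pair performs (illustrative only, not the lowering code itself):
#include <cstdint>
#include <cstdio>
// 64-bit add built from 32-bit parts, mirroring what an add/adc
// instruction pair does on a 32-bit target.
static void add64_parts(uint32_t alo, uint32_t ahi,
                        uint32_t blo, uint32_t bhi,
                        uint32_t &rlo, uint32_t &rhi) {
  rlo = alo + blo;             // add: low halves, produces the carry
  uint32_t carry = rlo < alo;  // carry out of the low addition
  rhi = ahi + bhi + carry;     // adc: high halves plus the carry
}
int main() {
  uint32_t lo, hi;
  add64_parts(0xFFFFFFFFu, 0x1u, 0x1u, 0x2u, lo, hi);
  std::printf("0x%08X%08X\n", hi, lo);  // 0x1FFFFFFFF + 0x200000001 = 0x0000000400000000
}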
Chris Lattner
8b0a2a3251
Fix a crash compiling 134.perl.
...
llvm-svn: 19711
2005-01-20 16:50:16 +00:00
Chris Lattner
6534e1ede3
Fix a problem where we were literally selecting for INCREASED register
...
pressure, not decreased register pressure. Fix a problem where we accidentally
swapped the operands of SHLD, which caused fourinarow to fail. This fixes
fourinarow.
llvm-svn: 19697
2005-01-19 17:24:34 +00:00
Chris Lattner
b75589131d
When commuting these instructions, make sure to actually swap the operands too.
...
llvm-svn: 19694
2005-01-19 16:55:52 +00:00
Chris Lattner
fde1a5688b
Implement Regression/CodeGen/X86/rotate.ll: emit rotate instructions (which
...
typically cost 1 cycle) instead of shld/shrd instructions (which typically take
6 or more cycles). This also saves code space.
For example, instead of emitting:
rotr:
mov %EAX, DWORD PTR [%ESP + 4]
mov %CL, BYTE PTR [%ESP + 8]
shrd %EAX, %EAX, %CL
ret
rotli:
mov %EAX, DWORD PTR [%ESP + 4]
shrd %EAX, %EAX, 27
ret
Emit:
rotr32:
mov %CL, BYTE PTR [%ESP + 8]
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, %CL
ret
rotli32:
mov %EAX, DWORD PTR [%ESP + 4]
ror %EAX, 27
ret
We also emit byte rotate instructions which do not have a sh[lr]d counterpart
at all.
llvm-svn: 19692
2005-01-19 08:07:05 +00:00
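The rotate patterns above recognize the standard shift/or rotate idiom. A hedged C++ sketch of the source-level forms that can now select to a single ror (function names here are illustrative, not taken from rotate.ll):
#include <cstdint>
// Variable rotate-right: selectable as 'ror %reg, %cl' instead of shrd.
uint32_t rot_right(uint32_t x, unsigned n) {
  n &= 31;                                   // x86 masks the rotate amount
  return (x >> n) | (x << ((32 - n) & 31));
}
// Constant rotate-right by 27: selectable as 'ror %reg, 27'.
uint32_t rot_right_27(uint32_t x) {
  return (x >> 27) | (x << 5);
}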
Chris Lattner
34757ff939
Add rotate instructions.
...
llvm-svn: 19690
2005-01-19 07:50:03 +00:00
Chris Lattner
e539ce8223
Match 16-bit shld/shrd instructions as well, implementing shift-double.llx:test5
...
llvm-svn: 19689
2005-01-19 07:37:26 +00:00
Chris Lattner
9d5ee289d7
Improve coverage of the X86 instruction set by adding 16-bit shift doubles.
...
llvm-svn: 19687
2005-01-19 07:31:24 +00:00
Chris Lattner
c03f360215
Teach the code generator that shrd/shld is commutable if it has an immediate.
...
This allows us to generate this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %EDX, DWORD PTR [%ESP + 8]
shld %EDX, %EDX, 2
shl %EAX, 2
ret
instead of this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, DWORD PTR [%ESP + 8]
mov %EDX, %EAX
shrd %EDX, %ECX, 30
shl %EAX, 2
ret
Note the magically transmogrifying immediate.
llvm-svn: 19686
2005-01-19 07:11:01 +00:00
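The "magically transmogrifying immediate" above is the standard double-shift identity: shifting the pair left by n is the same as swapping the two register operands and shifting right by 32-n. A small C++ check of that equivalence, written against the documented semantics of the instructions (not LLVM's tables):
#include <cassert>
#include <cstdint>
// Immediate-count double shifts, counts 1..31.
static uint32_t shld32(uint32_t dst, uint32_t src, unsigned n) {
  return (dst << n) | (src >> (32 - n));   // shld dst, src, n
}
static uint32_t shrd32(uint32_t dst, uint32_t src, unsigned n) {
  return (dst >> n) | (src << (32 - n));   // shrd dst, src, n
}
int main() {
  uint32_t a = 0x12345678u, b = 0x9ABCDEF0u;
  for (unsigned n = 1; n < 32; ++n)
    // Commuting the register operands turns shld-by-n into shrd-by-(32-n).
    assert(shld32(a, b, n) == shrd32(b, a, 32 - n));
}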
Chris Lattner
33efebcdc8
Finegrainify namespacification
...
Add default impl of commuteInstruction
Add notes about ugly V9 code.
llvm-svn: 19684
2005-01-19 06:53:34 +00:00
Chris Lattner
575e912fcf
Codegen long >> 2 to this:
...
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %EDX, DWORD PTR [%ESP + 8]
shrd %EAX, %EDX, 2
sar %EDX, 2
ret
instead of this:
test1:
mov %ECX, DWORD PTR [%ESP + 4]
shr %ECX, 2
mov %EDX, DWORD PTR [%ESP + 8]
mov %EAX, %EDX
shl %EAX, 30
or %EAX, %ECX
sar %EDX, 2
ret
and long << 2 to this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, DWORD PTR [%ESP + 8]
*** mov %EDX, %EAX
shrd %EDX, %ECX, 30
shl %EAX, 2
ret
instead of this:
foo:
mov %EAX, DWORD PTR [%ESP + 4]
mov %ECX, %EAX
shr %ECX, 30
mov %EDX, DWORD PTR [%ESP + 8]
shl %EDX, 2
or %EDX, %ECX
shl %EAX, 2
ret
The extra copy (marked ***) can be eliminated when I teach the code generator
that shrd32rri8 is really commutative.
llvm-svn: 19681
2005-01-19 06:18:43 +00:00
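For reference, the shrd/sar pair in the first sequence above computes exactly the part-wise form of a 64-bit arithmetic shift right. A minimal C++ sketch of that decomposition (illustrative, assuming an arithmetic >> on signed values):
#include <cstdint>
// 64-bit arithmetic shift right by a constant 0 < n < 32, expressed on
// 32-bit halves the way the shrd/sar sequence computes it.
static void ashr64_parts(uint32_t lo, int32_t hi, unsigned n,
                         uint32_t &rlo, int32_t &rhi) {
  rlo = (lo >> n) | ((uint32_t)hi << (32 - n));  // shrd %lo, %hi, n
  rhi = hi >> n;                                 // sar  %hi, n
}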
Chris Lattner
419a5d213b
X86 shifts mask the amount.
...
llvm-svn: 19678
2005-01-19 03:36:30 +00:00
Chris Lattner
fbd1f8e4fd
Add a hook to find out how the target handles shift amounts that are out of
...
range. Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc), or they extend the shift (PPC).
This defaults to undefined, which is conservatively correct.
llvm-svn: 19677
2005-01-19 03:36:14 +00:00
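The hook described above amounts to a three-way, per-target policy. A hypothetical C++ sketch of such an interface; the names are illustrative and are not the actual TargetLowering API:
// Illustrative only: what the hardware does with shift amounts that are
// greater than or equal to the register width.
enum class OutOfRangeShiftBehavior {
  Undefined,   // default: the result is undefined
  MaskAmount,  // amount is masked to the register width (X86, Alpha, ...)
  Extend       // the shift is extended (PPC)
};
struct HypotheticalTargetInfo {
  // Conservatively correct default: assume nothing.
  virtual OutOfRangeShiftBehavior getOutOfRangeShiftBehavior() const {
    return OutOfRangeShiftBehavior::Undefined;
  }
  virtual ~HypotheticalTargetInfo() = default;
};
struct HypotheticalX86Info : HypotheticalTargetInfo {
  OutOfRangeShiftBehavior getOutOfRangeShiftBehavior() const override {
    return OutOfRangeShiftBehavior::MaskAmount;  // x86 masks to 5 bits for 32-bit shifts
  }
};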
Chris Lattner
6dec8cb829
Code to handle FP_EXTEND is dead now. X86 doesn't support any data types to
...
FP_EXTEND from!
llvm-svn: 19674
2005-01-18 20:05:56 +00:00
Chris Lattner
798e9c85d6
Remove more dead code.
...
llvm-svn: 19673
2005-01-18 19:50:08 +00:00
Chris Lattner
401814508f
The selection DAG code handles the promotions from F32 to F64 for us, so we
...
don't need to even think about F32 in the X86 code anymore.
llvm-svn: 19672
2005-01-18 19:46:54 +00:00
Chris Lattner
dc09e52b3e
Fix 124.m88ksim.
...
llvm-svn: 19667
2005-01-18 17:35:28 +00:00
Chris Lattner
a04b1ee7a8
Do not emit loads multiple times, potentially in the wrong places.
...
llvm-svn: 19661
2005-01-18 04:18:32 +00:00
Tanya Lattner
d3459278f2
Minor changes.
...
llvm-svn: 19660
2005-01-18 04:15:41 +00:00
Chris Lattner
722ddeb86e
Eliminate bad assertions.
...
llvm-svn: 19659
2005-01-18 04:00:54 +00:00
Chris Lattner
8f3a8d96e2
* Eliminate the TokenSet and just use the ExprMap for both tokens and values.
...
* Insert some really pedantic assertions that will notice when we emit the
same loads more than one time, exposing bugs. This turns a miscompilation in
bzip2 into a compile-fail. yaay.
llvm-svn: 19658
2005-01-18 03:51:59 +00:00
Chris Lattner
b3edb09ede
Rely on the code in MatchAddress to do this work. Otherwise we fail to
...
match (X+Y)+(Z << 1), because we match the X+Y first, consuming the index
register, then there is no place to put the Z.
llvm-svn: 19652
2005-01-18 02:25:52 +00:00
Chris Lattner
ce2e0125dc
Fix a problem where probing for addressing modes caused expressions to be
...
emitted too early. In particular, this fixes
Regression/CodeGen/X86/regpressure.ll:regpressure3.
This also improves the 2nd basic block in 164.gzip:flush_block, which went from
.LBBflush_block_1: # loopentry.1.i
movzx %EAX, WORD PTR [dyn_ltree + 20]
movzx %ECX, WORD PTR [dyn_ltree + 16]
mov DWORD PTR [%ESP + 32], %ECX
movzx %ECX, WORD PTR [dyn_ltree + 12]
movzx %EDX, WORD PTR [dyn_ltree + 8]
movzx %EBX, WORD PTR [dyn_ltree + 4]
mov DWORD PTR [%ESP + 36], %EBX
movzx %EBX, WORD PTR [dyn_ltree]
add DWORD PTR [%ESP + 36], %EBX
add %EDX, DWORD PTR [%ESP + 36]
add %ECX, %EDX
add DWORD PTR [%ESP + 32], %ECX
add %EAX, DWORD PTR [%ESP + 32]
movzx %ECX, WORD PTR [dyn_ltree + 24]
add %EAX, %ECX
mov %ECX, 0
mov %EDX, %ECX
to
.LBBflush_block_1: # loopentry.1.i
movzx %EAX, WORD PTR [dyn_ltree]
movzx %ECX, WORD PTR [dyn_ltree + 4]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 8]
add %EAX, %ECX
movzx %ECX, WORD PTR [dyn_ltree + 12]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 16]
add %EAX, %ECX
movzx %ECX, WORD PTR [dyn_ltree + 20]
add %ECX, %EAX
movzx %EAX, WORD PTR [dyn_ltree + 24]
add %ECX, %EAX
mov %EAX, 0
mov %EDX, %EAX
... which results in less spilling in the function.
This change alone speeds up 164.gzip from 37.23s to 36.24s on apoc. The
default isel takes 37.31s.
llvm-svn: 19650
2005-01-18 01:06:26 +00:00
Chris Lattner
a78f9ced61
Fix indentation.
...
llvm-svn: 19649
2005-01-17 23:25:45 +00:00
Chris Lattner
dff1e3e86f
Don't bother using max here.
...
llvm-svn: 19647
2005-01-17 23:02:13 +00:00
Chris Lattner
2d86b43318
Do not give token factor nodes outrageous weights
...
llvm-svn: 19645
2005-01-17 22:56:09 +00:00
Chris Lattner
f2878ce8ba
Two changes:
...
1. Fold [mem] += (1|-1) into inc [mem]/dec [mem] to save some icache space.
2. Do not let token factor nodes prevent forming '[mem] op= val' folds.
llvm-svn: 19643
2005-01-17 22:10:42 +00:00
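Change 1 above targets the read-modify-write-by-one pattern. A short C++ example of the kind of source that can now fold into a single memory inc/dec (the assembly in the comments is the intended shape, not verified compiler output):
// Candidates for the '[mem] op= val' folding described above.
void bump(int *counter) {
  *counter += 1;   // foldable to: inc DWORD PTR [mem]
}
void drop(int *counter) {
  *counter -= 1;   // foldable to: dec DWORD PTR [mem]
}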
Chris Lattner
40c0fca632
Refactor load/op/store folding into its own method; no functionality changes.
...
llvm-svn: 19641
2005-01-17 19:25:26 +00:00
Chris Lattner
2348abc421
Fix a major regression last night that prevented us from producing [mem] op= reg
...
operations.
The body of the if is less indented but unmodified in this patch.
llvm-svn: 19638
2005-01-17 17:49:14 +00:00
Chris Lattner
adb669ab1f
Codegen this:
...
int %foo(int %X) {
%T = add int %X, 13
%S = mul int %T, 3
ret int %S
}
as this:
mov %ECX, DWORD PTR [%ESP + 4]
lea %EAX, DWORD PTR [%ECX + 2*%ECX + 39]
ret
instead of this:
mov %ECX, DWORD PTR [%ESP + 4]
mov %EAX, %ECX
add %EAX, 13
imul %EAX, %EAX, 3
ret
llvm-svn: 19633
2005-01-17 06:48:02 +00:00
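The lea form above follows from simple algebra: (X + 13) * 3 = 3*X + 39 = X + 2*X + 39, which is exactly the base + index*scale + displacement shape a single lea can encode. A one-line C++ check of that rewrite:
#include <cassert>
int main() {
  for (int x = -1000; x <= 1000; ++x)
    assert((x + 13) * 3 == x + 2 * x + 39);  // lea %EAX, [%ECX + 2*%ECX + 39]
}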
Tanya Lattner
5a10531cf8
Added tmp instructions to preserve SSA.
...
llvm-svn: 19632
2005-01-17 06:47:26 +00:00
Chris Lattner
51590b615c
Fix test/Regression/CodeGen/X86/2005-01-17-CycleInDAG.ll and 132.ijpeg.
...
Do not fold a load into an operation if it will induce a cycle in the DAG.
Repeat after me: dAg.
llvm-svn: 19631
2005-01-17 06:26:58 +00:00
Chris Lattner
f1e85bec5a
Do not fold a load into a comparison that is used in more than one place.
...
The comparison will probably be folded, so this is not ok to do.
This fixed 197.parser.
llvm-svn: 19624
2005-01-17 01:34:14 +00:00
Chris Lattner
1b8c8fe020
Do not codegen 'xor bool, true' as 'not reg'; 'not reg' inverts the upper bits
...
of the byte register. This fixes yacr2, 300.twolf, and probably others.
llvm-svn: 19622
2005-01-17 00:23:16 +00:00
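The reasoning behind the fix above is that a bool lives in a byte register holding 0 or 1, so a full bitwise not produces a value with the upper bits set, while xor with 1 flips only the low bit. A tiny C++ illustration:
#include <cassert>
#include <cstdint>
int main() {
  uint8_t t = 1;                  // 'true' in a byte register
  assert((uint8_t)(t ^ 1) == 0);  // xor bool, true -> a valid 'false'
  assert((uint8_t)~t == 0xFE);    // not %reg also inverts the upper bits,
  assert((uint8_t)~t != 0);       // so the result is not a valid bool
}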
Chris Lattner
46dac4394c
Set up the shift and setcc types.
...
If we emit a load because we followed a token chain to get to it, try to
fold it into its single user if possible.
llvm-svn: 19620
2005-01-17 00:00:33 +00:00
Chris Lattner
4c88cc95ee
Shift and setcc types default to the pointer type.
...
llvm-svn: 19619
2005-01-16 23:59:48 +00:00
Tanya Lattner
fea188af7e
Added parameters to a few functions to allow changing them to preserve SSA
...
llvm-svn: 19615
2005-01-16 08:51:10 +00:00
Chris Lattner
9ffc59287e
* Adjust to changes in TargetLowering interfaces.
...
* Remove custom promotion for bool and byte select ops. Legalize now
promotes them for us.
* Allow folding ConstantPoolIndexes into EXTLOADs, useful for float immediates.
* Better declare which operations are not supported.
* Add some hacky code for TRUNCSTORE to pretend that we have truncstore
for i16 types. This is useful for testing promotion code because I can
just remove 16-bit registers altogether and verify that programs work.
llvm-svn: 19614
2005-01-16 07:34:08 +00:00
Chris Lattner
b49d2a7b0f
Use enums, move virtual dtor out of line.
...
llvm-svn: 19610
2005-01-16 07:28:11 +00:00
Chris Lattner
be2a427f51
cycles_t -> CycleCount_t
...
llvm-svn: 19604
2005-01-16 04:20:30 +00:00
Reid Spencer
afa1cb9e11
Rename BUILD_* to PROJ_*
...
llvm-svn: 19592
2005-01-16 02:21:29 +00:00
Tanya Lattner
66cf1a6f82
Fixed a couple of instructions that broke SSA.
...
llvm-svn: 19587
2005-01-16 02:14:17 +00:00
Chris Lattner
605b9a23a2
Improve compatibility with HPUX on Itanium, patch by Duraid Madina
...
llvm-svn: 19586
2005-01-16 01:31:31 +00:00
Chris Lattner
06c297f8ca
Set up identity transforms.
...
llvm-svn: 19584
2005-01-16 01:20:18 +00:00
Chris Lattner
1d0e1ffe02
Move some information out of LegalizeDAG into the generic Target interface.
...
llvm-svn: 19581
2005-01-16 01:10:58 +00:00
Chris Lattner
98611ce291
Add a new target-independent code generator flag.
...
llvm-svn: 19567
2005-01-15 06:00:32 +00:00
Chris Lattner
f3d950e816
Add support for truncstore and *extload.
...
llvm-svn: 19566
2005-01-15 05:22:24 +00:00