Commit Graph

388 Commits

Chris Lattner
a9fcb187af Generalize FP constant shrinking optimization to apply to any VT
except PPC long double.  This allows us to shrink constant pool
entries for x86 long double constants, which in turn allows us to
use flds/fldl instead of fldt.

llvm-svn: 47938
2008-03-05 06:48:13 +00:00
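
A minimal C sketch (an editor's illustration, not the LLVM code) of the
round-trip test that decides whether a double-precision constant can be
stored in the constant pool as a float:

    #include <stdio.h>

    /* A double can be shrunk to a float constant-pool entry only if
     * converting to float and back is lossless. */
    static int shrinkable_to_float(double d) {
        float f = (float)d;
        return (double)f == d;
    }

    int main(void) {
        printf("%d\n", shrinkable_to_float(1.5)); /* 1: exact in float */
        printf("%d\n", shrinkable_to_float(0.1)); /* 0: inexact in float */
        return 0;
    }
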
Evan Cheng
e0b3c221ab Add a target lowering hook to control whether it's worthwhile to compress fp constants.
For x86, if sse2 is available, it's not a good idea since cvtss2sd is slower than a movsd load and it prevents load folding. On x87, it's important to shrink fp constants since fldt is very expensive.

llvm-svn: 47931
2008-03-05 01:30:59 +00:00
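
A hedged C sketch of the policy the hook encodes; the name, types, and
structure here are illustrative, not the actual TargetLowering interface:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-in for a per-target hook: shrink on x87, where
     * fldt is very expensive, but not when SSE2 is available, since the
     * extra cvtss2sd is slower than a movsd load and blocks load folding. */
    struct target_features { bool has_sse2; };

    static bool should_shrink_fp_constant(const struct target_features *tf) {
        return !tf->has_sse2;
    }

    int main(void) {
        struct target_features x87 = { false }, sse2 = { true };
        printf("x87: %d  sse2: %d\n",
               should_shrink_fp_constant(&x87),
               should_shrink_fp_constant(&sse2)); /* x87: 1  sse2: 0 */
        return 0;
    }
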
Evan Cheng
14f556a6d7 Really fix the test.
llvm-svn: 47882
2008-03-04 08:01:56 +00:00
Evan Cheng
7a67175fcc Fix broken test.
llvm-svn: 47881
2008-03-04 07:59:13 +00:00
Evan Cheng
3123d6ced3 Add PR1501 test case.
llvm-svn: 47874
2008-03-04 00:47:45 +00:00
Chris Lattner
299977b5ca Evan implemented these.
llvm-svn: 47828
2008-03-02 18:05:14 +00:00
Evan Cheng
e1d3e0958b Set to default: x86 no longer folds and into test if the and has more than one use.
llvm-svn: 47711
2008-02-28 07:46:38 +00:00
Evan Cheng
da92e34fe3 Fix a bug in dead spill slot elimination.
llvm-svn: 47687
2008-02-27 19:57:11 +00:00
Chris Lattner
e51c23341d actually run llc, thanks Dan :)
llvm-svn: 47677
2008-02-27 17:46:54 +00:00
Evan Cheng
295ae42ede Don't track max alignment during stack object allocation since the objects can be deleted later. Let PEI compute it.
llvm-svn: 47668
2008-02-27 10:04:56 +00:00
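
A C sketch of the approach (the frame-object representation is invented
for illustration): recompute the maximum over the objects that actually
survive, rather than maintaining a running maximum during allocation:

    #include <stddef.h>
    #include <stdio.h>

    struct frame_object { size_t align; int dead; };

    /* What a PEI-style late pass would do: take the max over live objects
     * only, so deleted slots cannot inflate the frame's alignment. */
    static size_t max_frame_alignment(const struct frame_object *o, size_t n) {
        size_t max_align = 1;
        for (size_t i = 0; i < n; i++)
            if (!o[i].dead && o[i].align > max_align)
                max_align = o[i].align;
        return max_align;
    }

    int main(void) {
        struct frame_object objs[] = { { 16, 1 }, { 8, 0 }, { 4, 0 } };
        printf("%zu\n", max_frame_alignment(objs, 3)); /* 8: the 16 is dead */
        return 0;
    }
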
Chris Lattner
1f46cc2345 Make X86TargetLowering::LowerSINT_TO_FP return without creating a dead
stack slot and store if the SINT_TO_FP is actually legal.  This allows
us to compile:

double a(double b) {return (unsigned)b;}

to:

_a:
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	ret

instead of:

_a:
	subq	$8, %rsp
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	addq	$8, %rsp
	ret

crazy.

llvm-svn: 47660
2008-02-27 05:57:41 +00:00
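
In C terms, the generated sequence computes the following (an editor's
sketch, valid when the value fits in 32-bit unsigned range):

    #include <stdint.h>
    #include <stdio.h>

    static double cast_roundtrip(double b) {
        int64_t wide = (int64_t)b;     /* cvttsd2siq: truncate to i64 */
        uint64_t low = (uint32_t)wide; /* movl %eax, %eax: zero-extend */
        return (double)(int64_t)low;   /* cvtsi2sdq: back to double */
    }

    int main(void) {
        printf("%f\n", cast_roundtrip(3000000000.5)); /* 3000000000.000000 */
        return 0;
    }
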
Chris Lattner
bc686e546a Compile x86-64-and-mask.ll into:
_test:
	movl	%edi, %eax
	ret

instead of:

_test:
        movl    $4294967295, %ecx
        movq    %rdi, %rax
        andq    %rcx, %rax
        ret

It would be great to write this as a Pat pattern that used subregs 
instead of a 'pseudo' instruction, but I don't know how to do that
in td files.

llvm-svn: 47658
2008-02-27 05:47:54 +00:00
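
The identity behind the shorter code, as a C sketch (an editor's
illustration): on x86-64, writing a 32-bit register implicitly zeroes the
upper 32 bits, so a single movl already performs this mask:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t test(uint64_t x) {
        return x & 0xFFFFFFFFULL; /* same as (uint64_t)(uint32_t)x */
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)test(0x123456789ULL)); /* 591751049 */
        return 0;
    }
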
Evan Cheng
7553230e3a Spiller now removes unused spill slots.
llvm-svn: 47657
2008-02-27 03:04:06 +00:00
Evan Cheng
701b6a1dc3 Enable -coalescer-commute-instrs by default.
llvm-svn: 47623
2008-02-26 20:40:22 +00:00
Dan Gohman
8a8f3fe7e0 Avoid aborting on invalid shift counts.
llvm-svn: 47612
2008-02-26 18:50:50 +00:00
Eli Friedman
1f2cabfbcf Fix for PR2093: direct operands aren't necessarily addresses, so don't
try to simplify them.

llvm-svn: 47610
2008-02-26 18:37:49 +00:00
Evan Cheng
8e99554e84 This is possible:
vr1 = extract_subreg vr2, 3
...
vr3 = extract_subreg vr1, 2
The end result is that vr3 is equal to vr2 with subidx 2.

llvm-svn: 47592
2008-02-26 08:03:41 +00:00
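
An editor's C analogy for why the nested extracts collapse; real
subreg-index composition is a target-specific lookup rather than offset
arithmetic, but the shape is the same:

    #include <stdint.h>
    #include <stdio.h>

    /* Taking a bit range out of a bit range is one extract with the
     * offsets composed. */
    static uint64_t extract(uint64_t v, unsigned off, unsigned bits) {
        return (v >> off) & ((1ULL << bits) - 1);
    }

    int main(void) {
        uint64_t v = 0x1122334455667788ULL;
        uint64_t two_step = extract(extract(v, 32, 32), 16, 16);
        uint64_t composed = extract(v, 48, 16);
        printf("%llx %llx\n", (unsigned long long)two_step,
               (unsigned long long)composed); /* 1122 1122 */
        return 0;
    }
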
Evan Cheng
6366bbf577 Fix PR2076. CodeGenPrepare now sinks address computation for inline asm memory
operands into the inline asm block.

llvm-svn: 47589
2008-02-26 02:42:37 +00:00
Evan Cheng
d299f09bc5 Rematerialization logic was overly conservative when it comes to loads from fixed stack slots.
llvm-svn: 47529
2008-02-23 03:38:34 +00:00
Evan Cheng
6782480bd1 Update test.
llvm-svn: 47527
2008-02-23 02:57:25 +00:00
Evan Cheng
4e9d5f1ead Remat of pic loads is now on by default.
llvm-svn: 47525
2008-02-23 02:08:30 +00:00
Evan Cheng
7c3a8d0056 Really. Why doesn't every arch support MMX?
llvm-svn: 47513
2008-02-23 00:56:14 +00:00
Evan Cheng
3b35d2a86c Test case for PR2082.
llvm-svn: 47501
2008-02-22 20:38:49 +00:00
Evan Cheng
1b417c4d84 Allow re-materialization of pic load (controlled by -remat-pic-load for now).
llvm-svn: 47476
2008-02-22 09:25:47 +00:00
Chris Lattner
a64d4179d4 copy mmx values from/to memory with GPRs on x86-32
instead of with mmx registers.  This horribleness is apparently
done by gcc to avoid having to insert emms in places that really 
should have it.  This is the second half of rdar://5741668.

llvm-svn: 47474
2008-02-22 05:18:04 +00:00
Chris Lattner
e70bc39d74 Start using GPRs to copy around mmx values instead of mmx regs.
GCC apparently does this, and code depends on not having to do
emms when this happens.  This is x86-64 only so far, second half
should handle x86-32.

rdar://5741668

llvm-svn: 47470
2008-02-22 02:09:43 +00:00
Chris Lattner
4f87f1c087 Treat clobber operands like early clobbers: if we have
any, we force sdisel to do all regalloc for an asm.  This
leads to gross but correct codegen.

This fixes the rest of PR2078.

llvm-svn: 47454
2008-02-21 19:43:13 +00:00
Tanya Lattner
8116db05a6 Remove llvm-upgrade and update tests.
llvm-svn: 47432
2008-02-21 07:42:26 +00:00
Chris Lattner
99b5a37d39 Fix a (harmless) bug where vregs were added to the used reg lists for
inline asms.

Fix PR2078 by marking aliases of registers used when a register is 
marked used.  This prevents EAX from being allocated when AX is listed
in the clobber set for the asm.

llvm-svn: 47426
2008-02-21 04:55:52 +00:00
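
A C sketch of the aliasing fix (the register set and alias table here are
invented for illustration): marking a register used must also mark every
register that overlaps it:

    #include <stdbool.h>
    #include <stdio.h>

    enum reg { R_EAX, R_AX, R_AH, R_AL, NREGS };

    static const bool aliases[NREGS][NREGS] = {
        /*          EAX    AX     AH     AL    */
        /* EAX */ { true,  true,  true,  true  },
        /* AX  */ { true,  true,  true,  true  },
        /* AH  */ { true,  true,  true,  false },
        /* AL  */ { true,  true,  false, true  },
    };

    static void mark_used(bool *used, enum reg r) {
        for (int a = 0; a < NREGS; a++)
            if (aliases[r][a])
                used[a] = true; /* clobbering AX also blocks EAX, AH, AL */
    }

    int main(void) {
        bool used[NREGS] = { false };
        mark_used(used, R_AX);
        printf("EAX used: %d\n", used[R_EAX]); /* 1 */
        return 0;
    }
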
Evan Cheng
33ee06fa48 XFAIL this for now.
llvm-svn: 47355
2008-02-20 02:38:58 +00:00
Chris Lattner
aaafe47a55 this test requires sse2
llvm-svn: 47331
2008-02-19 18:07:46 +00:00
Chris Lattner
3a4ac3a69e Don't fold and's into test instructions if they have multiple uses.
This compiles test-nofold.ll into:

_test:
	movl	$15, %ecx
	andl	4(%esp), %ecx
	testl	%ecx, %ecx
	movl	$42, %eax
	cmove	%ecx, %eax
	ret

instead of:
_test:
	movl	4(%esp), %eax
	movl	%eax, %ecx
	andl	$15, %ecx
	testl	$15, %eax
	movl	$42, %eax
	cmove	%ecx, %eax
	ret

llvm-svn: 47330
2008-02-19 17:37:35 +00:00
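
An editor's guess at the C shape of test-nofold.ll (the test itself is
not shown here): because x & 15 has a second use, folding the and into
the test would not eliminate the and:

    #include <stdio.h>

    /* (x & 15) feeds both the compare and the select result, so it must
     * be computed regardless; folding it into test only adds work. */
    static unsigned test(unsigned x) {
        unsigned t = x & 15;
        return t == 0 ? t : 42;
    }

    int main(void) {
        printf("%u %u\n", test(32), test(7)); /* 0 42 */
        return 0;
    }
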
Chris Lattner
67f2a6c009 rename tests to avoid a test- prefix when they aren't related to the test instruction.
llvm-svn: 47329
2008-02-19 17:33:52 +00:00
Nick Lewycky
69457748ab Don't spew stats to stderr.
llvm-svn: 47308
2008-02-19 03:11:47 +00:00
Nick Lewycky
0560401b2e Fix up the run line for this new test.
llc: for the -info-output-file option:  requires a value!

llvm-svn: 47306
2008-02-19 02:58:36 +00:00
Evan Cheng
de4579d0b3 New test.
llvm-svn: 47302
2008-02-19 02:09:58 +00:00
Evan Cheng
bb577266bf - When DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check if it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> to <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Dan Gohman
70b9b2f77f Don't mark scalar integer multiplication as Expand on x86, since x86
has plain one-result scalar integer multiplication instructions.
This avoids expanding such instructions into MUL_LOHI sequences that
must be special-cased at isel time, and avoids the problem with that
code that prevented memory operands from being folded.

This fixes PR1874, addressing the most common case. The uncommon
cases of optimizing multiply-high operations will require work
in DAGCombiner.

llvm-svn: 47277
2008-02-18 17:55:26 +00:00
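
A C sketch of the distinction (an editor's illustration): a plain
one-result multiply versus a MUL_LOHI-style pair that also yields the
high half:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a = 0xDEADBEEFu, b = 0x12345678u;
        uint32_t lo = a * b;             /* one-result scalar multiply */
        uint64_t wide = (uint64_t)a * b; /* MUL_LOHI: both halves */
        assert(lo == (uint32_t)wide);    /* the low halves agree */
        printf("lo=%08x hi=%08x\n", (unsigned)lo, (unsigned)(wide >> 32));
        return 0;
    }
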
Andrew Lenharth
c178981b85 Add llvm.memory.barrier, and implementations for x86 and Alpha.
llvm-svn: 47204
2008-02-16 01:24:58 +00:00
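
A C11 analog (an editor's illustration; the intrinsic itself took boolean
flags selecting which load/store orderings to enforce):

    #include <stdatomic.h>

    /* Roughly the strongest form of the barrier: a full fence ordering
     * all loads and stores on both sides. */
    void full_barrier(void) {
        atomic_thread_fence(memory_order_seq_cst);
    }
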
Evan Cheng
94742fb5d4 This test is not interesting.
llvm-svn: 47189
2008-02-15 23:06:21 +00:00
Chris Lattner
9c24f3ec37 Fix a miscompilation from Dan's recent apintification.
llvm-svn: 47128
2008-02-14 18:48:56 +00:00
Chris Lattner
d696c25db5 This readme entry is done, testcase here: CodeGen/X86/zero-remat.ll
llvm-svn: 47106
2008-02-14 05:39:46 +00:00
Evan Cheng
cbbcb3144d Fix test.
llvm-svn: 47102
2008-02-14 01:32:53 +00:00
Chris Lattner
a30946c576 In SDISel, for targets that support FORMAL_ARGUMENTS nodes, lower this
node as soon as we create it in SDISel.  Previously we would lower it in
legalize.  The problem with this is that it only exposes the argument
loads implied by FORMAL_ARGUMENTs after legalize, so that only dag combine 2
can hack on them.  This causes us to miss some optimizations because 
datatype expansion also happens here.

Exposing the loads early allows us to do optimizations on them.  For example
we now compile arg-cast.ll to:

_foo:
	movl	$2147483647, %eax
	andl	8(%esp), %eax
	ret

where we previously produced:

_foo:
	subl	$12, %esp
	movsd	16(%esp), %xmm0
	movsd	%xmm0, (%esp)
	movl	$2147483647, %eax
	andl	4(%esp), %eax
	addl	$12, %esp
	ret

It might also make sense to do this for ISD::CALL nodes, which have implicit
stores on many targets.

llvm-svn: 47054
2008-02-13 07:39:09 +00:00
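
An editor's guess at the C shape of arg-cast.ll: mask the sign bit out of
the high word of a double argument. With the argument load exposed early,
the and folds straight against the stack slot:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t foo(double x) {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);              /* bitcast, no fp ops */
        return (uint32_t)(bits >> 32) & 0x7FFFFFFFu; /* andl $2147483647 */
    }

    int main(void) {
        printf("%08x %08x\n", (unsigned)foo(2.0),
               (unsigned)foo(-2.0)); /* 40000000 40000000 */
        return 0;
    }
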
Evan Cheng
68a88c1f52 New tests.
llvm-svn: 47047
2008-02-13 03:23:53 +00:00
Evan Cheng
0d2efb485d Don't mask the isel bug.
llvm-svn: 47018
2008-02-12 19:11:29 +00:00
Evan Cheng
6c7520f922 This test assumes no SSE4.1.
llvm-svn: 47017
2008-02-12 19:11:08 +00:00
Evan Cheng
1ab096a313 Fix some test cases.
llvm-svn: 46998
2008-02-12 07:22:46 +00:00
Dale Johannesen
304406f01c Alignment of struct containing vectors depends on
whether SSE is present, on Darwin anyway.  Make it
explicit.

llvm-svn: 46909
2008-02-09 19:04:25 +00:00
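
An editor's illustration of the ambiguity being pinned down, using the
GCC vector extension: a struct containing a 16-byte vector is 16-byte
aligned only when vector support is assumed, so the test now states the
alignment explicitly:

    #include <stddef.h>
    #include <stdio.h>

    typedef float v4sf __attribute__((vector_size(16)));
    struct s { v4sf v; };

    int main(void) {
        /* 16 when the target assumes SSE-style vector alignment */
        printf("%zu\n", (size_t)_Alignof(struct s));
        return 0;
    }
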
Evan Cheng
90f03a0b88 It's not always safe to fold movsd into xorpd, etc. Check the alignment of the load address first to make sure it's 16-byte aligned.
llvm-svn: 46893
2008-02-08 21:20:40 +00:00
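
A C sketch of the requirement (an editor's illustration; the compiler's
check is on the load's known alignment, not a runtime address): SSE ops
with a folded memory operand need a 16-byte-aligned address, while a
standalone movsd load does not:

    #include <stdbool.h>
    #include <stdint.h>

    /* xorpd with a folded load requires a 16-byte-aligned operand. */
    static bool can_fold_into_sse_op(const void *addr) {
        return ((uintptr_t)addr & 15) == 0;
    }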