Commit Graph

9518 Commits

Dan Gohman
c017343459 Reformat the allocation-order arrays to a more conventional style.
llvm-svn: 63121
2009-01-27 19:25:38 +00:00
Dan Gohman
f7e8bf0511 Respect the DisableRedZone flag on PowerPC.
llvm-svn: 63119
2009-01-27 19:19:28 +00:00
Dan Gohman
7d80f8688e Simplify findNonImmUse; return the result using the return value
instead of via a by-reference argument. No functionality change.

llvm-svn: 63118
2009-01-27 19:04:30 +00:00
Evan Cheng
a05436f739 Implement multiply with overflow by 2 with an add instruction.
llvm-svn: 63090
2009-01-27 03:30:42 +00:00
Dan Gohman
2e0343e321 Eliminate unnecessary operands-list traversals.
llvm-svn: 63088
2009-01-27 02:37:43 +00:00
Dan Gohman
f3c2ac3497 Enable the red zone on x86-64 by default.
llvm-svn: 63078
2009-01-27 00:58:47 +00:00
Dan Gohman
4ad174b236 Fix the Red Zone calculation for functions with frame pointers.
Don't use the Red Zone when dynamic stack realignment is needed.
This could be implemented, but most x86-64 ABIs don't require
dynamic stack realignment so it isn't urgent.

llvm-svn: 63074
2009-01-27 00:40:06 +00:00
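For context, the x86-64 System V ABI reserves the 128 bytes below %rsp as a
"red zone" that leaf functions may use without adjusting the stack pointer.
A minimal illustration (the function and asm are illustrative, not from the
commit):

int leaf(int a) {
  volatile int tmp = a;  // forces a stack slot in a leaf function
  return tmp;
}

// With the red zone enabled, the spill can go below %rsp directly:
//   movl  %edi, -4(%rsp)
//   movl  -4(%rsp), %eax
// with no subq/addq of %rsp in the prologue or epilogue.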
Scott Michel
e00d746487 CellSPU:
- Update DWARF debugging support.

llvm-svn: 63059
2009-01-26 22:33:37 +00:00
Scott Michel
56fa9ba0b6 Make the Dwarf macro information section optional; CellSPU's assembler
doesn't support it. The default is set to 'true', so this should not
impact any other target backends.

llvm-svn: 63058
2009-01-26 22:32:51 +00:00
Dan Gohman
3a51d8e847 Implement Red Zone utilization on x86-64. This is currently
disabled by default; I'll enable it when I hook it up with
the llvm-gcc flag which controls it.

llvm-svn: 63056
2009-01-26 22:22:31 +00:00
Evan Cheng
ec03e0cd3b Enhance logic in X86DAGToDAGISel::PreprocessForRMW, which moves a load inside callseq_start so that it can be folded into a call. It was not considering cases where a token factor sits between the load and the callseq_start.
llvm-svn: 63022
2009-01-26 18:43:34 +00:00
Dan Gohman
4abaebae0c Take the next steps in making SDUse more consistent with LLVM Use, and
tidy up SDUse and related code.
 - Replace the operator= member functions with a set method, like
   LLVM Use has, and variants setInitial and setNode, which take
   care of updating use lists, as LLVM Use's set does. This simplifies
   code that calls these functions.
 - getSDValue() is renamed to get(), as in LLVM Use, though most
   places can either use the implicit conversion to SDValue or the
   convenience functions instead.
 - Fix some more node vs. value terminology issues.

Also, eliminate the one remaining use of SDOperandPtr, and
SDOperandPtr itself.

llvm-svn: 62995
2009-01-26 04:35:06 +00:00
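A rough sketch of the set pattern described above; the helper names here are
hypothetical, not the actual LLVM ones:

void SDUse::set(const SDValue &V) {
  if (Val.getNode()) removeUserFromList();  // unlink from the old node's use list
  Val = V;                                  // point at the new value
  if (V.getNode()) addUserToList();         // link into the new node's use list
}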
Scott Michel
af51520775 Untabify code.
llvm-svn: 62991
2009-01-26 03:37:41 +00:00
Scott Michel
da9360e77e CellSPU:
- Rename fcmp.ll test to fcmp32.ll, start adding new double tests to fcmp64.ll
- Fix select_bits.ll test
- Capitulate to the DAGCombiner and move i64 constant loads to instruction
  selection (SPUISelDAGtoDAG.cpp).

  <rant>DAGCombiner will insert all kinds of 64-bit optimizations after
  operation legalization occurs, and now we have to do most of the work that
  instruction selection should be doing twice: once to determine whether a
  v2i64 build_vector can be handled by SelectCode(), which then runs all of
  the predicates a second time to select the necessary instructions. But
  CellSPU is a good citizen.</rant>

llvm-svn: 62990
2009-01-26 03:31:40 +00:00
Nate Begeman
48639fe6dc Fix a typo
llvm-svn: 62989
2009-01-26 03:15:54 +00:00
Nate Begeman
d2f708eca5 De-identifying per sabre review
llvm-svn: 62988
2009-01-26 03:15:31 +00:00
Nate Begeman
92efc4f0ce Map address space 256 to gs; similar mappings could be supported for the
other x86 segments.  Address space 0 is stack/default; 1-255 are reserved for
client use.

llvm-svn: 62980
2009-01-26 01:24:32 +00:00
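A sketch of what the mapping enables from C, assuming llvm-gcc/Clang's
address_space attribute; the asm in the comment is illustrative:

typedef int __attribute__((address_space(256))) *gs_ptr;

int read_gs_slot(gs_ptr p) {
  return *p;  // expected to lower to a %gs-relative load, e.g. movl %gs:(%eax), %eax
}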
Nate Begeman
81d70f3f54 Support pattern matching various x86 sse shifts.
llvm-svn: 62979
2009-01-26 00:52:55 +00:00
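One way to exercise such patterns, using Clang-style vector extensions
(illustrative; the commit's actual test coverage may differ):

typedef int v4si __attribute__((vector_size(16)));

v4si shl3(v4si x) {
  return x << 3;  // a uniform vector shift should match an SSE shift like pslld
}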
Chris Lattner
e7bf6037e2 silence a warning when assertions are disabled.
llvm-svn: 62976
2009-01-25 23:08:00 +00:00
Torok Edwin
6f715ebe85 should have removed the + when manually applying a patch!
llvm-svn: 62973
2009-01-25 20:29:34 +00:00
Torok Edwin
3f54410405 revert this patch for now, because Codegen does still want to generate SSE code,
for example in the case of va-args. XFAIL associated tests.

llvm-svn: 62972
2009-01-25 20:21:24 +00:00
Torok Edwin
49b1d3e3cc If the user explicitly asks not to use SSE, don't force it. This fixes the LLVM part of PR3402.
llvm-svn: 62967
2009-01-25 17:58:56 +00:00
Evan Cheng
71ca3e2bdb Private linkage support for PPC / Darwin.
llvm-svn: 62955
2009-01-25 06:32:01 +00:00
Nate Begeman
48f3fe9199 Fix an indent and a typo.
llvm-svn: 62940
2009-01-24 22:12:48 +00:00
Torok Edwin
6dd79be128 add note about possible GEP improvement with fields of size 0.
llvm-svn: 62925
2009-01-24 19:30:25 +00:00
Chris Lattner
97b6f6a674 hopefully address PR3379 by making the P modifier work in x86 inline asm.
llvm-svn: 62887
2009-01-23 22:33:40 +00:00
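The P modifier prints a symbolic operand without the usual immediate
punctuation, which is what a call through inline asm needs. A common idiom
(the callee name here is hypothetical):

extern void callee(void);

void call_via_asm(void) {
  // %P0 emits the bare symbol, so this assembles as: call callee
  asm volatile("call %P0" : : "i"(callee));
}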
Bob Wilson
186046e657 Add SelectionDAG::getNOT method to construct bitwise NOT operations,
corresponding to the "not" and "vnot" PatFrags.  Use the new method
in some places where it seems appropriate.

llvm-svn: 62768
2009-01-22 17:39:32 +00:00
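A plausible shape for the helper, with an approximate signature (the real
API may differ):

SDValue SelectionDAG::getNOT(SDValue Val, MVT VT) {
  // NOT x is modeled as x ^ all-ones, matching the "not"/"vnot" PatFrags.
  SDValue AllOnes = getConstant(APInt::getAllOnesValue(VT.getSizeInBits()), VT);
  return getNode(ISD::XOR, VT, Val, AllOnes);
}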
Evan Cheng
c971801ae1 Eliminate a couple of fields from TargetRegisterClass: SubRegClasses and SuperRegClasses. These are not necessary. Also eliminate getSubRegisterRegClass and getSuperRegisterRegClass. These are slow and their results can change if register file names change. Just use TargetLowering::getRegClassFor() to get the right TargetRegisterClass instead.
llvm-svn: 62762
2009-01-22 09:10:11 +00:00
Chris Lattner
fcf56e7fbe add a note
llvm-svn: 62760
2009-01-22 07:16:03 +00:00
Dan Gohman
29b575c6cd Recognize inline asm for bswap on x86-64 GLIBC. This allows it
to be supported in the JIT.

llvm-svn: 62730
2009-01-21 23:40:54 +00:00
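The GLIBC idiom being recognized looks roughly like this (illustrative; the
real <bits/byteswap.h> has 16/32/64-bit variants):

static inline unsigned int bswap_32_asm(unsigned int x) {
  __asm__("bswap %0" : "=r"(x) : "0"(x));  // recognized and mapped to a bswap node
  return x;
}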
Evan Cheng
43d680b0d8 Also favors NOT64r.
llvm-svn: 62710
2009-01-21 19:45:31 +00:00
Chris Lattner
2b6b947b4f fix warning in release-asserts mode and spelling of assert.
llvm-svn: 62699
2009-01-21 18:38:18 +00:00
Dan Gohman
704f0d5879 Fix a recent regression. ClrOpcode is not set for i8; for i8, if
we want to clear %ah to zero before a division, just use a
zero-extending mov to %al. This fixes PR3366.

llvm-svn: 62691
2009-01-21 14:50:16 +00:00
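In other words, for an i8 division the dividend is zero-extended into %ax up
front rather than clearing %ah as a separate step. An illustrative case and
intended lowering (a sketch, not the commit's test):

unsigned char udiv8(unsigned char a, unsigned char b) {
  return a / b;
}

// Intended sequence (sketch):
//   movzbw  a, %ax   ; zero-extending mov; %ah becomes 0 as a side effect
//   divb    b        ; quotient lands in %al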
Sanjiv Gupta
ebef67f13c Fixed build warnings. Restoring changes made in 62600; they were lost in 62655.
llvm-svn: 62681
2009-01-21 09:02:46 +00:00
Duncan Sands
392dc77fc6 Cleanup whitespace and comments, and tweak some
prototypes, in operand type legalization.  No
functionality change.

llvm-svn: 62680
2009-01-21 09:00:29 +00:00
Sanjiv Gupta
37fdb5ca11 Implement LowerOperationWrapper for legalizer.
Also a few signed comparison fixes.

llvm-svn: 62665
2009-01-21 05:44:05 +00:00
Scott Michel
c80e71ac35 CellSPU:
- Ensure that (operation) legalization emits proper FDIV libcall when needed.
- Fix various bugs encountered during llvm-spu-gcc build, along with various
  cleanups.
- Start supporting double precision comparisons for remaining libgcc2 build.
  Discovered interesting DAGCombiner feature, which is currently solved via
  custom lowering (64-bit constants are not legal on CellSPU, but DAGCombiner
  insists on inserting one anyway).
- Update README.

llvm-svn: 62664
2009-01-21 04:58:48 +00:00
Evan Cheng
0ed6a9d7e0 Favors generating "not" over "xor -1". For example:
unsigned test(unsigned a) {
  return ~a;
}
llvm used to generate:
movl    $4294967295, %eax
xorl    4(%esp), %eax

Now it generates:
movl      4(%esp), %eax
notl      %eax

It's 3 bytes shorter.

llvm-svn: 62661
2009-01-21 02:09:05 +00:00
Evan Cheng
b3c82db63d Change TargetInstrInfo::isMoveInstr to return source and destination sub-register indices as well.
llvm-svn: 62600
2009-01-20 19:12:24 +00:00
Dan Gohman
7663e08915 Add a README entry noticed while investigating PR3216.
llvm-svn: 62558
2009-01-20 01:07:33 +00:00
Evan Cheng
06cfade044 DIVREM isel deficiency: If sign bit is known zero, zero out DX/EDX/RDX instead of sign extending the low part (in AX/EAX/RAX) into it.
llvm-svn: 62519
2009-01-19 19:06:11 +00:00
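Concretely: when the dividend's sign bit is known zero, sign extension would
produce zero anyway, so zeroing the high half directly is cheaper. An
illustrative case (sketch):

int div_known_nonneg(int a, int b) {
  return (a & 0x7fffffff) / b;  // sign bit of the dividend is provably zero
}

// Preferred (sketch):        Instead of:
//   xorl  %edx, %edx           cltd            ; sign-extend %eax into %edx
//   idivl %ecx                 idivl %ecx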
Evan Cheng
3c00875658 Fix 80 col violations.
llvm-svn: 62518
2009-01-19 18:57:29 +00:00
Evan Cheng
2f50b49f22 Handle ISD::DECLARE with PIC relocation model.
llvm-svn: 62516
2009-01-19 18:31:51 +00:00
Evan Cheng
a14fd26a8b Minor tweak to LowerUINT_TO_FP_i32. Bias (after scalar_to_vector) has two uses, so we should make it the second source operand of ISD::OR so the 2-address pass won't have to be smart about commuting.
%reg1024<def> = MOVSDrm %reg0, 1, %reg0, <cp#0>, Mem:LD(8,8) [ConstantPool + 0]
%reg1025<def> = MOVSD2PDrr %reg1024
%reg1026<def> = MOVDI2PDIrm <fi#-1>, 1, %reg0, 0, Mem:LD(4,16) [FixedStack-1 + 0]
%reg1027<def> = ORPSrr %reg1025<kill>, %reg1026<kill>
%reg1028<def> = MOVPD2SDrr %reg1027<kill>
%reg1029<def> = SUBSDrr %reg1028<kill>, %reg1024<kill>
%reg1030<def> = CVTSD2SSrr %reg1029<kill>
MOVSSmr <fi#0>, 1, %reg0, 0, %reg1030<kill>, Mem:ST(4,4) [FixedStack0 + 0]
%reg1031<def> = LD_Fp32m80 <fi#0>, 1, %reg0, 0, Mem:LD(4,16) [FixedStack0 + 0]
RET %reg1031<kill>, %ST0<imp-use,kill>

The reason the 2-addr pass isn't smart enough to commute the ORPSrr is that it can't look past the MOVSD2PDrr instruction.

llvm-svn: 62505
2009-01-19 08:19:57 +00:00
Evan Cheng
53e83a2eb9 Now that UINT_TO_FP is no longer legal (it's marked custom), the DAG
combiner won't optimize it to a SINT_TO_FP when the sign bit is known zero.
X86 isel should perform the optimization itself.

llvm-svn: 62504
2009-01-19 08:08:22 +00:00
Bill Wendling
ce30a8cab9 Extend thi
llvm-svn: 62415
2009-01-17 07:40:19 +00:00
Evan Cheng
182d9c4c9f Fix MatchAddress bug that's preventing negative displacement from being folded in 64-bit mode.
llvm-svn: 62413
2009-01-17 07:09:27 +00:00
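A simple case that should now fold in 64-bit mode (illustrative):

int load_prev(int *p) {
  return p[-1];  // expected to fold to something like: movl -4(%rdi), %eax
}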
Bill Wendling
ddd55bdfec Temporarily revert my last change. It is causing a bootstrap failure.
llvm-svn: 62405
2009-01-17 04:23:51 +00:00
Bill Wendling
d18c38c0f2 Implement a special algorithm for converting uint_to_fp for i32 values on
X86. This code:

#include <stdint.h>

float f(uint32_t x) {
  return (float)x;
}

used to be:

     movl     %eax, -8(%ebp)
     movl     [2^52 double], -4(%ebp)
     movsd    -8(%ebp), %xmm0
     subsd    [2^52 double], %xmm0
     cvtsd2ss %xmm0, %xmm0

Is now:

   movsd        [2^52 double], %xmm0
   movsd        %xmm0, %xmm1
   movd         %ecx, %xmm2
   orps         %xmm2, %xmm1
   subsd        %xmm0, %xmm1
   cvtsd2ss     %xmm1, %xmm0

This is faster on X86. Note that there's an extra copy of %xmm0 into %xmm1;
that will be addressed in a later coalescer fix.

llvm-svn: 62404
2009-01-17 03:56:04 +00:00
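The underlying trick, written out in portable C as a minimal sketch (the
function name is illustrative): placing x in the low 32 bits of the double
2^52 yields exactly 2^52 + x, so subtracting 2^52 recovers (double)x.

#include <stdint.h>
#include <string.h>

float u32_to_float(uint32_t x) {
  const double magic = 4503599627370496.0;  // 2^52; bit pattern 0x4330000000000000
  uint64_t bits;
  memcpy(&bits, &magic, sizeof bits);
  bits |= x;                                // OR x into the low 32 bits
  double d;
  memcpy(&d, &bits, sizeof d);              // d == 2^52 + x exactly
  return (float)(d - magic);
}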
Oscar Fuentes
ee36d9ce83 CMake: Add lib/Target/IA64/IA64Subtarget.cpp
llvm-svn: 62394
2009-01-17 01:50:32 +00:00