Commit Graph

81 Commits

Author SHA1 Message Date
Chris Lattner
f0b42a6f29 This code has always been dead on itanium
llvm-svn: 22916
2005-08-19 18:34:37 +00:00
Duraid Madina
4efc0b6f2b a bugfix (up top) and a quick repair job: disable generation of dep.z
(which died about a week ago) so we're back to load-(2^n-1)-then-AND
sequences. slow, but things should now be Almost Completely Working,
modulo those pesky alignment/ABI issues.

llvm-svn: 22904
2005-08-19 13:25:50 +00:00
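
A minimal sketch of the fallback this commit re-enables, assuming nothing about the actual IA64 selector code; keepLowBits is a made-up helper name. With dep.z generation disabled, keeping the low n bits of a value falls back to materializing the mask 2^n-1 and ANDing:

    #include <cstdint>

    // Hypothetical illustration: zero-extend by masking instead of dep.z.
    // The mask (2^n - 1) is the constant that gets loaded, then ANDed in.
    inline uint64_t keepLowBits(uint64_t x, unsigned n) {
      uint64_t mask = (n >= 64) ? ~0ULL : ((1ULL << n) - 1);
      return x & mask;
    }
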
Chris Lattner
5194ff37c4 Mark some instructions as variable_ops, and PSEUDO_ALLOC as taking a GPR.
I'm not convinced this is all of them, but I can't do much testing, because
IA64 LLC crashes on big programs :(

llvm-svn: 22892
2005-08-19 00:47:42 +00:00
Chris Lattner
583658a766 update the backends to work with the new CopyFromReg/CopyToReg/ImplicitDef nodes
llvm-svn: 22807
2005-08-16 21:56:37 +00:00
Nate Begeman
f6b6378f23 Implement BR_CC and BRTWOWAY_CC. This allows the removal of a rather nasty
fixme from the PowerPC backend.  Emit slightly better code for legalizing
select_cc.

llvm-svn: 22805
2005-08-16 19:49:35 +00:00
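
A rough, target-independent illustration of what legalizing a select_cc through a compare-and-branch (the BR_CC shape) amounts to; this is not the SelectionDAG legalizer itself, and the function name is invented:

    #include <cstdint>

    // select_cc(lhs < rhs, tval, fval) expanded via a conditional branch:
    // compare, branch on the condition, and yield one value per successor.
    int64_t selectCCViaBranch(int64_t lhs, int64_t rhs,
                              int64_t tval, int64_t fval) {
      if (lhs < rhs)   // the compare-and-branch that BR_CC fuses into one node
        return tval;
      return fval;
    }
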
Duraid Madina
6325af5006 sorry!! this is temporary; for some reason the nasty constmul code seems to
hit an infinite loop when using g++-4.0.1*, which kills the ia64 nightly
tester. A proper fix shall be forthcoming!!! thanks for not killing me. :)

llvm-svn: 22748
2005-08-10 12:38:57 +00:00
Chris Lattner
1277703a48 Update the targets to the new SETCC/CondCodeSDNode interfaces.
llvm-svn: 22729
2005-08-09 20:21:10 +00:00
Chris Lattner
cc8ae687e1 Update to use the new MathExtras.h support for log2 computation.
Patch contributed by Jim Laskey!

llvm-svn: 22594
2005-08-02 19:26:06 +00:00
Jeff Cohen
bd51ec7461 Eliminate all remaining tabs and trailing spaces.
llvm-svn: 22523
2005-07-27 06:12:32 +00:00
Chris Lattner
ffaf40a143 Change *EXTLOAD to use a VTSDNode operand instead of being an MVTSDNode.
This is the last MVTSDNode.

This allows us to eliminate a bunch of special case code for handling
MVTSDNodes.

Also, remove some uses of dyn_cast that should really be cast (which is
cheaper in a release build).

llvm-svn: 22368
2005-07-10 01:56:13 +00:00
Chris Lattner
273b81e0c0 Change TRUNCSTORE to use a VTSDNode operand instead of being an MVTSDNode
llvm-svn: 22366
2005-07-10 00:29:18 +00:00
Chris Lattner
199560c668 Make several cleanups to Andrew's varargs change:
1. Pass Value*'s into lowering methods so that the proper pointers can be
   added to load/stores from the valist
2. Intrinsics that return void should only return a token chain, not a token
   chain/retval pair.
3. Rename LowerVAArgNext -> LowerVAArg, because VANext is long gone.
4. Now that we have Value*'s available in the lowering methods, pass them
   into any load/stores from the valist that are emitted

llvm-svn: 22339
2005-07-05 19:58:54 +00:00
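
A hedged sketch of the idea behind points 1 and 4, not the 2005 lowering code itself: conceptually, a va_arg lowers to a load through the valist pointer plus a pointer bump, which is why the lowering wants the valist's Value* attached to the loads and stores it emits. LowerVAArgSketch is a made-up name, and alignment handling is omitted:

    #include <cstdint>
    #include <cstring>

    // Conceptual va_arg lowering: read the next argument through the current
    // valist pointer, then advance the pointer past it. (Real lowering also
    // rounds the slot up to the ABI's alignment.)
    template <typename T>
    T LowerVAArgSketch(char **VAList) {
      T Result;
      std::memcpy(&Result, *VAList, sizeof(T)); // the load from the valist
      *VAList += sizeof(T);                     // advance the valist pointer
      return Result;
    }
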
Chris Lattner
06282f51cf Refactor the addPassesToEmitAssembly interface into a addPassesToEmitFile
interface.

llvm-svn: 22282
2005-06-25 02:48:37 +00:00
Andrew Lenharth
4fd2bde906 If we support structs as va_list, we must pass pointers to them to va_copy.
See the last commit for the LangRef change; this implements it on all targets.

llvm-svn: 22273
2005-06-22 21:04:42 +00:00
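
For context, a small standard-C++ illustration (not LLVM code) of why this matters: when an ABI defines va_list as a struct or array rather than a plain pointer, what va_copy really needs is the address of each list, which is the shape the intrinsic is given here:

    #include <cstdarg>

    // va_copy must duplicate the whole list object; on targets where va_list
    // is a struct, that means working through a pointer to it.
    int sum_twice(int n, ...) {
      va_list ap, ap2;
      va_start(ap, n);
      va_copy(ap2, ap);            // copy before the first list is consumed
      int total = 0;
      for (int i = 0; i < n; ++i) total += va_arg(ap, int);
      for (int i = 0; i < n; ++i) total += va_arg(ap2, int);
      va_end(ap2);
      va_end(ap);
      return total;
    }
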
Andrew Lenharth
a9214fec08 core changes for varargs
llvm-svn: 22254
2005-06-18 18:34:52 +00:00
Duraid Madina
44ac7502fd re-enable direct calls, this should just be a performance boost
llvm-svn: 22148
2005-05-20 11:39:17 +00:00
Duraid Madina
03a7327d63 make angry compilers happy again
llvm-svn: 22054
2005-05-15 14:44:13 +00:00
Chris Lattner
93007dda7d treat TAILCALL nodes identically to CALL nodes
llvm-svn: 21977
2005-05-13 20:29:26 +00:00
Chris Lattner
9d788e93a6 Add an isTailCall flag to LowerCallTo
llvm-svn: 21958
2005-05-13 18:50:42 +00:00
Chris Lattner
094bbfcebb rename the ADJCALLSTACKDOWN/ADJCALLSTACKUP nodes to be CALLSEQ_START/END.

llvm-svn: 21915
2005-05-12 23:24:06 +00:00
Chris Lattner
7e08dd591c Pass in Calling Convention to use into LowerCallTo
llvm-svn: 21899
2005-05-12 19:56:45 +00:00
Duraid Madina
b9062e56cf add the popcount instruction and support this in the isel
the primary user of this will probably end up being find-first-set-bit/find-
last-set-bit, which I'll get around to...

llvm-svn: 21860
2005-05-11 05:16:09 +00:00
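
A small sketch of the follow-up the commit hints at, under the assumption (not from the log) that count-trailing-zeros is built on top of popcount; both helper names are invented, and popcount64 stands in for whatever the target's popcnt instruction would select to:

    #include <cstdint>

    // Portable stand-in for a hardware popcount.
    inline int popcount64(uint64_t x) {
      int n = 0;
      while (x) { x &= x - 1; ++n; }   // clear the lowest set bit each round
      return n;
    }

    // find-first-set / count-trailing-zeros via popcount: isolate the lowest
    // set bit, subtract one to get a mask of the bits below it, count those.
    inline int cttz64(uint64_t x) {
      if (x == 0) return 64;
      return popcount64((x & (~x + 1)) - 1);
    }
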
Chris Lattner
d5d2886ee7 No really IA*64* :)
llvm-svn: 21858
2005-05-11 05:03:56 +00:00
Duraid Madina
64a52fc615 fix and cleanup constmul code a bit, this fixes mediabench/toast and
probably a couple of other tests.

llvm-svn: 21814
2005-05-09 13:18:34 +00:00
Andrew Lenharth
8e2beec4d1 fix typo
llvm-svn: 21693
2005-05-04 19:25:37 +00:00
Duraid Madina
4d9c8f8dce support multiplication by constant negative integers
this constmul code is still buggy though, so beware. mul by 7427 is currently
broken, for example. will fix it when I get a moment :)

llvm-svn: 21652
2005-05-02 07:27:14 +00:00
Duraid Madina
7a185a79a5 add support for bools to SELECT, this fixes Prolangs-C/bison from the
testsuite, however 09-vor is still dead (hopefully for other reasons!)

llvm-svn: 21651
2005-05-02 06:41:13 +00:00
Chris Lattner
ce0d8c2408 This target doesn't support the FSIN/FCOS/FSQRT nodes yet
llvm-svn: 21633
2005-04-30 04:26:06 +00:00
Andrew Lenharth
2a00530fa7 Implement Value* tracking for loads and stores in the selection DAG. This enables one to use alias analysis in the backends.
(TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand, a SrcValueSDNode, which contains the Value*.  Note that if the operation is introduced by the backend, it will still have the operand, but the Value* will be null.

llvm-svn: 21599
2005-04-27 20:10:01 +00:00
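
A purely illustrative sketch of the shape described above, not the real SelectionDAG classes: each memory node remembers the IR Value* it came from, and a node the backend invents later carries null, forcing any alias query back to a conservative answer. All names here are hypothetical:

    struct ValueStub {};                  // stands in for the IR-level Value

    // A load/store node with its originating source value attached.
    struct MemAccessSketch {
      const ValueStub *SrcValue;          // null if introduced by the backend
      unsigned Size;
    };

    // An alias query can only consult alias analysis when both accesses still
    // carry their source values; otherwise it must answer conservatively.
    inline bool mustBeConservative(const MemAccessSketch &A,
                                   const MemAccessSketch &B) {
      return A.SrcValue == nullptr || B.SrcValue == nullptr;
    }
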
Duraid Madina
2f8f3f018d clean up some warnings
llvm-svn: 21590
2005-04-27 11:57:39 +00:00
Duraid Madina
90bcae7fd2 constmul bugfix: multiply by 27611 was broken
llvm-svn: 21564
2005-04-26 09:42:50 +00:00
Duraid Madina
675a0b9769 clean up the code! (oops) lots more cleaning left, however.
llvm-svn: 21563
2005-04-26 08:43:47 +00:00
Duraid Madina
ee826ec8f6 * Add code to reduce multiplies by constant integers to shifts, adds and
subtracts. This is a very rough and nasty implementation of Lefevre's
  "pattern finding" algorithm. With a few small changes though, it should
  end up beating most other methods in common use, regardless of the size
  of the constant (currently, it's often one or two shifts worse)

  TODO: rewrite it so it's not hideously ugly (this is a translation from
        perl, which doesn't help ;)
        bypass most of it for multiplies by 2^n+1
        (eventually) teach it that some combinations of shift+add are
        cheaper than others (e.g. shladd on ia64, scaled adds on alpha)
        get it to try multiple booth encodings in search of the cheapest
        routine
        make it work for negative constants

  This is hacked up as a DAG->DAG transform, so once I clean it up I hope
  it'll be pulled out of here and put somewhere else. The only thing backends
  should really have to worry about for now is where to draw the line
  between using this code vs. going ahead and doing an integer multiply
  anyway.

llvm-svn: 21560
2005-04-26 07:23:02 +00:00
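
This is not Lefevre's pattern-finding code, but a minimal standalone sketch of the transformation it performs: rewriting x * C as shifts and adds. The naive version below only uses the plain binary method (no subtracts or Booth recoding), which is exactly the kind of sequence the real pass tries to beat; the function name is made up:

    #include <cstdint>
    #include <cstdio>

    // Multiply by a constant using only shifts and adds: one add per set bit
    // of C, one shift per bit position. The DAG->DAG pass described above
    // searches for shorter shift/add/sub sequences than this.
    int64_t mulByConstSketch(int64_t x, uint64_t c) {
      int64_t result = 0;
      int64_t addend = x;        // holds x << (current bit position)
      while (c != 0) {
        if (c & 1)
          result += addend;
        addend <<= 1;
        c >>= 1;
      }
      return result;
    }

    int main() {
      // e.g. the multiplier 7427 mentioned elsewhere in this log
      printf("%lld\n", (long long)mulByConstSketch(3, 7427)); // prints 22281
      return 0;
    }
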
Misha Brukman
66d3c6e020 Convert tabs to spaces
llvm-svn: 21452
2005-04-22 17:54:37 +00:00
Misha Brukman
1fdf39f2ea Remove trailing whitespace
llvm-svn: 21424
2005-04-21 23:13:11 +00:00
Duraid Madina
0c40c548c0 print negative 64-bit immediates as negative numbers; this makes things a little
easier on the eyes, not that numbers like 18446744073709541376 are bad or
anything

llvm-svn: 21300
2005-04-14 10:08:01 +00:00
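
A tiny standalone illustration of the printing change, using the exact constant from the message above: the same 64-bit immediate shown unsigned and then reinterpreted as signed two's complement:

    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t Imm = 18446744073709541376ULL;
      printf("unsigned: %llu\n", (unsigned long long)Imm);
      printf("signed:   %lld\n", (long long)(int64_t)Imm);  // prints -10240
      return 0;
    }
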
Duraid Madina
b8dfae92e9 oops, this stopped us turning movl r4=0xFFFFFFFF;; and rX, r4 into zxt4
llvm-svn: 21299
2005-04-14 10:06:35 +00:00
Duraid Madina
eb4c15b42c we have zextloads, not sextloads!
llvm-svn: 21296
2005-04-14 08:37:32 +00:00
Duraid Madina
b9d2d9ac63 * add the shladd instruction
* fold left shifts of 1, 2, 3 or 4 bits into adds

  This doesn't save much now, but should get a serious workout once
  multiplies by constants get converted to shift/add/sub sequences.
  Hold on! :)

llvm-svn: 21282
2005-04-13 06:12:04 +00:00
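
A hedged sketch of the fold being described, with invented helper names rather than the actual selector code: (x << k) + y is a single shladd on IA-64 only when the shift count is 1 through 4, so that is the check the fold has to make:

    #include <cstdint>

    // shladd encodes shift counts of 1..4 only; anything else stays as an
    // explicit shift followed by an add.
    inline bool fitsShladd(unsigned k) { return k >= 1 && k <= 4; }

    inline uint64_t shiftAdd(uint64_t x, unsigned k, uint64_t y) {
      // Same arithmetic either way; fitsShladd(k) is what a selector would
      // check before folding the shift into the add.
      return (x << k) + y;
    }
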
Duraid Madina
67e553e521 * if ANDing with a constant of the form:
0x00000..00FFF..FF
      ^      ^
      ^      ^
    any number of
    0's followed by
    some number of
    1's

    then we use dep.z to just paste zeros over the input. For the special
    cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
    because we're all about readability!!!
    that's what it's about!! readability!

  *twitch* ;D

llvm-svn: 21279
2005-04-13 04:50:54 +00:00
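
A standalone sketch of the classification this commit describes, with made-up names and no claim to match the actual selector: an AND mask of the form 0...01...1 (some run of low ones) can be done by depositing zeros, and the 8/16/32-bit widths get the dedicated zero-extend forms:

    #include <cstdint>

    enum class MaskOp { NotAMask, Zxt1, Zxt2, Zxt4, DepZ };

    // A low-ones mask C satisfies C & (C + 1) == 0 (C + 1 is a power of two).
    inline MaskOp classifyAndMask(uint64_t C) {
      if (C == 0 || (C & (C + 1)) != 0)
        return MaskOp::NotAMask;
      if (C == 0xFFULL)       return MaskOp::Zxt1;
      if (C == 0xFFFFULL)     return MaskOp::Zxt2;
      if (C == 0xFFFFFFFFULL) return MaskOp::Zxt4;
      return MaskOp::DepZ;    // any other low-ones width: dep.z
    }

The same classification covers the 65535 case in the ZERO_EXTEND_INREG commit just below, which is the zxt2 form.
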
Chris Lattner
a2e92e69da Remove special handling of ZERO_EXTEND_INREG. This pessimizes code, causing
things like this:

       mov r9 = 65535;;
       and r8 = r8, r9;;

To be emitted instead of:

        zxt2 r8 = r8;;

To get this back, the selector for ISD::AND should recognize this case.

llvm-svn: 21269
2005-04-13 02:41:52 +00:00
Duraid Madina
39fcec1541 * OK, after changing to use liveIn/liveOut instead of IDEFs,
to avoid redundant mov out3=r44 type instructions, we need to
tell the register allocator the truth about out? registers.

FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:

out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127

and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)

llvm-svn: 21252
2005-04-12 18:42:59 +00:00
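
A sketch of the sliding-window idea spelled out in the FIXME, under the stated assumption that the full linear order is out7..out0 followed by r32..r127 (104 names) and that a function's window is always 96 of them; register names are plain strings here, not real target register numbers:

    #include <cassert>
    #include <string>
    #include <vector>

    // Build the 96-register allocation order for a function that uses
    // NumOutRegsUsed of the out0..out7 registers: skip the unused outs at the
    // front of the fixed linear order, then take the next 96 entries.
    std::vector<std::string> allocationOrder(unsigned NumOutRegsUsed) {
      assert(NumOutRegsUsed <= 8 && "only out0..out7 exist");
      std::vector<std::string> Linear;
      for (int i = 7; i >= 0; --i)            // out7, out6, ..., out0
        Linear.push_back("out" + std::to_string(i));
      for (int r = 32; r <= 127; ++r)         // r32 ... r127
        Linear.push_back("r" + std::to_string(r));
      unsigned Start = 8 - NumOutRegsUsed;    // slide past the unused outs
      return std::vector<std::string>(Linear.begin() + Start,
                                      Linear.begin() + Start + 96);
    }

With NumOutRegsUsed == 2 this yields out1, out0, r32, ..., r125: the used outs still come first, which is exactly the "allocated _first_" downside the message complains about.
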
Chris Lattner
be2dceff48 Put out* into the allocation order, allowing the register allocator to
coalesce moves into outgoing args.

llvm-svn: 21249
2005-04-12 15:12:51 +00:00
Chris Lattner
37712352c0 Make sure to realize that calls use their argument regs
llvm-svn: 21248
2005-04-12 15:12:19 +00:00
Duraid Madina
2821f99f19 stop emitting IDEFs for args - change to using liveIn/liveOut
llvm-svn: 21247
2005-04-12 14:54:44 +00:00
Chris Lattner
55e620f08d IA64 supports this operation.
llvm-svn: 21228
2005-04-11 18:55:36 +00:00
Duraid Madina
01aaf77792 hmm, should probably change addImm() to take 64-bit arguments one day anyway.
llvm-svn: 21224
2005-04-11 07:16:39 +00:00
Duraid Madina
d2ae9221c7 assorted fixes:
* clean up immediates (we use 14, 22 and 64 bit immediates now. sane.)
  * fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
  * fix nasty thinko - didn't use two-address form of conditional add
    for extending bools to integers, so occasionally there would be
    garbage in the result. it's amazing how often zeros are just
    sitting around in registers ;) - this should fix a bunch of tests.

llvm-svn: 21221
2005-04-11 05:55:56 +00:00
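
A trivial illustration of the thinko described in the third bullet, in plain C++ rather than predicated IA-64 code: widening a bool with a conditional add only works if the destination is explicitly zeroed first, since the add leaves the old register contents behind whenever the predicate is false:

    #include <cstdint>

    uint64_t zextBool(bool p) {
      uint64_t result = 0;   // must start from zero, not from whatever the
                             // destination register happened to contain
      if (p)
        result += 1;         // the predicated (conditional) add
      return result;
    }
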
Duraid Madina
2d3e52576d ok, the "ia64 has a boatload of registers" joke stopped being funny today ;)
* fix overallocation of integer (stacked) registers: we can't allocate
  registers for local use if they are required as output registers

this fixes 'toast' in the test suite, and all sorts of larger programs
like bzip2 etc.

llvm-svn: 21178
2005-04-09 11:53:00 +00:00
Chris Lattner
39f963f968 This target does not support/want ISD::BRCONDTWOWAY
llvm-svn: 21164
2005-04-09 03:22:37 +00:00