Target Independent Opportunities:

//===---------------------------------------------------------------------===//

FreeBench/mason contains code like this:

static p_type m0u(p_type p) {
  int m[]={0, 8, 1, 2, 16, 5, 13, 7, 14, 9, 3, 4, 11, 12, 15, 10, 17, 6};
  p_type pu;
  pu.a = m[p.a];
  pu.b = m[p.b];
  pu.c = m[p.c];
  return pu;
}

We currently compile this into a memcpy from a static array into 'm', then
a bunch of loads from m.  It would be better to avoid the memcpy and just do
loads from the static array.
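
In effect, the optimizer should treat the initializer as if the array had been
promoted to a constant, e.g. (a hand-written sketch of the desired form;
m0u_table is an illustrative name, not what the front-end emits):

static const int m0u_table[] = {0, 8, 1, 2, 16, 5, 13, 7, 14, 9,
                                3, 4, 11, 12, 15, 10, 17, 6};

static p_type m0u(p_type p) {
  p_type pu;
  pu.a = m0u_table[p.a];   /* loads fold directly into the constant array */
  pu.b = m0u_table[p.b];
  pu.c = m0u_table[p.c];
  return pu;
}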

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
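
A minimal sketch of the intended expansion (fast_hypot is an illustrative
name; the naive form can overflow for large inputs where hypot would not,
which is why it must be gated on -ffast-math):

#include <math.h>

/* What hypot(x, y) would become under -ffast-math; the sqrt call then
   maps onto llvm.sqrt. */
double fast_hypot(double x, double y) {
  return sqrt(x*x + y*y);
}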

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.
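
If the store's chain were just the load of X, the or and the store could fold
into a single memory-destination instruction, e.g. (hypothetical desired
output):

fn1:
	movl Y, %eax
	shll $3, %eax
	orl %eax, X
	ret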

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that X and Y
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with an expensive 64-bit shift, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp += tmp)
   x = tmp;

This would be a win on ppc32, but not on x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 P_hi), 0), where P_hi
addresses the most significant (sign) byte of the i32.  A signed less-than-zero
test only examines the sign bit, so the 4-byte load can shrink to a 1-byte load.
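
A hypothetical source pattern the combine would apply to:

/* The sign test only needs the most significant byte of *P, so the
   i32 load can shrink to an i8 load of that byte. */
int is_negative(int *P) {
  return *P < 0;
}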

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t = X*X; t*t, eliminating a multiply
(two multiplies instead of three).
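
A minimal sketch of the reassociated form:

/* x*x*x*x computed with two multiplies via repeated squaring. */
int pow4(int x) {
  int t = x * x;
  return t * t;
}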

//===---------------------------------------------------------------------===//

Interesting(?) testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j,int *l)  {  return memcmp(j,l,4);  }
int h(int *j, int *l) {  return *j - *l; }

On a big-endian machine the byte order of an int matches its numeric
significance, so the 4-byte memcmp reduces to a word comparison.  This could
be done in SelectionDAGISel.cpp, along with other special cases, for 1, 2, 4,
and 8 bytes.
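
A sketch of the desired lowering (big-endian only; memcmp compares bytes as
unsigned values, so the word compare must also be unsigned; memcmp4_be is an
illustrative name):

/* Hypothetical expansion of memcmp(j, l, 4) on a big-endian target:
   the sign of the result matches the lexicographic byte comparison. */
int memcmp4_be(int *j, int *l) {
  unsigned a = *(unsigned *)j, b = *(unsigned *)l;
  return (a > b) - (a < b);
}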

//===---------------------------------------------------------------------===//

This code:
int rot(unsigned char b) { int a = ((b>>1) ^ (b<<7)) & 0xff; return a; }

Can be improved in two ways:

1. The instcombiner should eliminate the type conversions.
2. The X86 backend should turn this into a rotate by one bit (see the sketch
   below).
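
For reference, the computation is an 8-bit rotate right by one; because the
two operands have disjoint bits after the 0xff mask, the xor is equivalent to
an or:

/* Canonical rotate form the backend should recognize. */
unsigned char rotr1(unsigned char b) {
  return (unsigned char)((b >> 1) | (b << 7));
}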

//===---------------------------------------------------------------------===//

Add LSR exit value substitution. It'll probably be a win for Ackermann, etc.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For packed types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works, but can be overly conservative, as the alignment
of specific packed types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this (P is only guaranteed to be float-aligned, so a vector load of it cannot
assume 16-byte alignment):

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

We should constant fold packed type casts at the LLVM level, regardless of the
kind of cast.  Currently we cannot fold some casts because we don't have
TargetData information in the constant folder, so we don't know the endianness
of the target!

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

	movl 136(%esp), %eax
	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit:
	movl	_foo, %eax
	cmpl	$1, %edi
	sbbl	$-1, %eax
	movl	%eax, _foo

(cmpl $1, %edi sets the carry flag exactly when the condition value %edi is
zero; sbbl $-1, %eax then computes %eax + 1 - carry, so the increment happens
only when the condition is nonzero.)

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      void sincos(double x, double *sin, double *cos);
      void sincosf(float x, float *sin, float *cos);
      void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
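
A sketch of the shape this enables (polar_to_cart is an illustrative helper;
sincos is the GNU extension prototyped above):

#define _GNU_SOURCE   /* glibc guards sincos behind this */
#include <math.h>

/* One sincos call instead of separate sin() and cos(); if the call is later
   expanded back into sin/cos plus stores, SROA can promote s and c into
   registers. */
void polar_to_cart(double r, double theta, double *x, double *y) {
  double s, c;
  sincos(theta, &s, &c);
  *x = r * c;
  *y = r * s;
}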

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret ulong cst':

        %struct.X = type { int, int }
        %struct.Y = type { %struct.X }
ulong %bar() {
        %retval = alloca %struct.Y, align 8             ; <%struct.Y*> [#uses=3]
        %tmp12 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 0
        store int 0, int* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 1
        store int 1, int* %tmp15
        %retval1 = cast %struct.Y* %retval to ulong*    ; <ulong*> [#uses=1]
        %retval2 = load ulong* %retval1         ; <ulong> [#uses=1]
        ret ulong %retval2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the constants are 0xC5000000
and 0xC5FFFFFF: together they force the most significant byte to 0xC5 and
leave the other three bytes unmodified):

void %test(uint* %P) {
	%tmp = load uint* %P
        %tmp14 = or uint %tmp, 3305111552
        %tmp15 = and uint %tmp14, 3321888767
        store uint %tmp15, uint* %P
        ret void
}
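
The desired form, in C (little-endian byte index shown; big-endian would use
index 0):

/* Store the one changed byte directly instead of load/or/and/store. */
void test(unsigned *P) {
  ((unsigned char *)P)[3] = 0xC5;
}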

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x: clz(x) lies in [0, 32] and
equals 32 only when x is zero, so the shifted value is 1 exactly when x == 0.

Compile:

int bar(int x)
{
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar:   addic r3,r3,-1
        subfe r3,r3,r3
        blr

(addic r3,r3,-1 sets the carry exactly when r3 was nonzero; subfe r3,r3,r3
then yields carry-1, i.e. 0 for nonzero x and -1 for x == 0, matching
-(t>>5).)

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz.  Itanium, what else?  (Subtracting
one turns the trailing zeros into ones and clears the lowest set bit; masking
with ~x then isolates exactly the trailing zeros, so the popcount is the
trailing zero count.)
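
A sketch of the identity in C, using the GCC popcount builtin (note it even
handles x == 0, giving the word width, 32):

/* Count trailing zeros via popcount: x-1 turns the trailing zeros into
   ones and clears the lowest set bit; ~x masks away everything else. */
unsigned cttz32(unsigned x) {
  return __builtin_popcount((x - 1) & ~x);
}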

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	} 

Where MAX_UNSIGNED (and hence state) is a 64-bit integer.  On a 32-bit platform
it would be just so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }
   
... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but alas...

//===---------------------------------------------------------------------===//