7559 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen
010e9bb778 Store sub-class lists as a bit vector.
This uses less memory and it reduces the complexity of sub-class
operations:

- hasSubClassEq() and friends become O(1) instead of O(N).

- getCommonSubClass() becomes O(N) instead of O(N^2).

In the future, TableGen will infer register classes.  This makes it
cheap to add them.

llvm-svn: 140898
2011-09-30 22:19:07 +00:00
Jakob Stoklund Olesen
76da38e8e8 Expand the x86 V_SET0* pseudos right after register allocation.
This also makes it possible to reduce the number of pseudo instructions
and get rid of the encoding information.

llvm-svn: 140776
2011-09-29 05:10:54 +00:00
Eli Friedman
81fc13efd2 PR11033: Make sure we don't generate PCMPGTQ and PCMPEQQ if the target CPU does not support them.
llvm-svn: 140723
2011-09-28 21:00:25 +00:00
Jakob Stoklund Olesen
bbe41f6be8 Rename SSEDomainFix -> lib/CodeGen/ExecutionDepsFix.
I'll clean up the source in the next commit.

llvm-svn: 140663
2011-09-28 00:01:54 +00:00
Jakob Stoklund Olesen
2bf243f464 Remove X86-dependent stuff from SSEDomainFix.
This also enables domain swizzling for AVX code, which required a few
trivial test changes.

The pass will be moved to lib/CodeGen shortly.

llvm-svn: 140659
2011-09-27 23:50:46 +00:00
Jakob Stoklund Olesen
b843221bf0 Promote the X86 Get/SetSSEDomain functions to TargetInstrInfo.
I am going to unify the SSEDomainFix and NEONMoveFix passes into a
single target independent pass.  They are essentially doing the same
thing.

llvm-svn: 140652
2011-09-27 22:57:18 +00:00
Craig Topper
a08173e534 Fix VEX decoding in i386 mode. Fixes PR11008.
llvm-svn: 140515
2011-09-26 05:12:43 +00:00
Jakob Stoklund Olesen
59b2982dcf Only run MF.verify() with EXPENSIVE_CHECKS=1.
llvm-svn: 140441
2011-09-24 01:11:19 +00:00
Duncan Sands
6d3fe8d11a Implement Chris's suggestion of legalizing the various SSE and AVX
hadd/hsub intrinsics into the new fhadd/fhsub X86 node.

llvm-svn: 140383
2011-09-23 16:10:22 +00:00
Eli Friedman
31c7bde95a PR10991: make fast-isel correctly check whether accessing a global through an alias involves thread-local storage. (I'm not entirely sure how this is supposed to work, but this patch makes fast-isel consistent with the normal isel path.)
llvm-svn: 140355
2011-09-22 23:41:28 +00:00
Jakob Stoklund Olesen
a608b612f1 Add support for GR32 <-> FR32 cross class copies.
We already support GR64 <-> VR128 copies.  All of these copies break
partial register dependencies by zeroing the high part of the target
register.

llvm-svn: 140348
2011-09-22 22:45:24 +00:00
Duncan Sands
1da590b589 Synthesize SSE3/AVX 128 bit horizontal add/sub instructions from
floating point add/sub of appropriate shuffle vectors.  Does not
synthesize the 256 bit AVX versions because they work differently.

llvm-svn: 140332
2011-09-22 20:15:48 +00:00
Craig Topper
95f048d1ff Fix register printing when disassembling push/pop of segment registers and in/out in Intel syntax mode. Fixes PR10960.
llvm-svn: 140299
2011-09-22 07:01:50 +00:00
Benjamin Kramer
978ef840ac The SSE version differences for fmin/fmax are more involved than I thought.
- x87: no min or max.
- SSE1: min/max for single precision scalars and vectors.
- SSE2: min/max for single and double precision scalars and vectors.
- AVX: as SSE2, but also supports the wider ymm vectors (this is covered by the isTypeLegal check).

llvm-svn: 140296
2011-09-22 03:27:22 +00:00
Benjamin Kramer
5844bacf0a X86: Don't form min/max nodes if the target is missing SSE.
llvm-svn: 140294
2011-09-22 03:01:42 +00:00
Benjamin Kramer
8b12bfc4ec X86Disassembler: if verbose logging is going to nulls(), disable logging completely.
Otherwise we'll spend a ridiculous amount of time pretty printing debug output and then discarding it.

llvm-svn: 140276
2011-09-21 21:47:35 +00:00
Nadav Rotem
71bd67ac2e fix comment
llvm-svn: 140258
2011-09-21 17:14:40 +00:00
Nadav Rotem
8fc9d777a3 Insert a sanity check on the combining of x86 truncating-store nodes. This replaces the problematic check that was removed in r139995.
llvm-svn: 140246
2011-09-21 08:45:10 +00:00
Richard Trieu
a675de9fac Change:

  assert(!"error message");

To:

  assert(0 && "error message");

which is more consistent across the code base.

llvm-svn: 140234
2011-09-21 03:09:09 +00:00
Owen Anderson
fbec62c99e In the disassembler C API, be careful not to confuse the comment streamer that the disassembler outputs annotations on with the streamer that the InstPrinter will print them on.
llvm-svn: 140217
2011-09-21 00:25:23 +00:00
Bruno Cardoso Lopes
629e7c2410 Revert r140097, working on a better approach
llvm-svn: 140203
2011-09-20 23:19:29 +00:00
Bruno Cardoso Lopes
035414367a Simplify max/minp[s|d] dagcombine matching
llvm-svn: 140199
2011-09-20 22:34:45 +00:00
Bruno Cardoso Lopes
b3eab8c22d Tidy up a bit more, fix tab and remove trailing whitespaces
llvm-svn: 140186
2011-09-20 21:45:26 +00:00
Bruno Cardoso Lopes
906f64c461 The wrong relocation was being emitted for several SSSE3 instructions.
This fixes PR10963. Thanks to Benjamin for finding the wrong tablegen
declaration.

llvm-svn: 140184
2011-09-20 21:39:21 +00:00
Bruno Cardoso Lopes
dab989502d Tidy up code!
llvm-svn: 140183
2011-09-20 21:39:06 +00:00
Craig Topper
df17f1cc99 Extend changes from r139986 to produce 256-bit AVX minps/minpd/maxps/maxpd.
llvm-svn: 140140
2011-09-20 07:38:59 +00:00
Bruno Cardoso Lopes
de0dc10d6d Fix PR10949. Fix the encoding of VMOVPQIto64rr.
llvm-svn: 140098
2011-09-19 23:36:59 +00:00
Bruno Cardoso Lopes
7cf7f02c3d Based on the small optimization Zvi's patch was trying to achieve, eliminate
128-bit undef subvector insertion into a 256-bit vector.

llvm-svn: 140097
2011-09-19 23:36:50 +00:00
Bruno Cardoso Lopes
9e5ef44daf Match X86ISD::FSETCCsd and X86ISD::FSETCCss while in AVX mode. This fixes
PR10955 and PR10948.

llvm-svn: 140069
2011-09-19 21:29:24 +00:00
Nadav Rotem
a6af03c6fb Fix typos in my prev commit, found by Tobi.
llvm-svn: 140003
2011-09-18 19:00:23 +00:00
Nadav Rotem
1cfdc59e94 setOperationAction should be done on the type of the return value, not of the operands.
llvm-svn: 140001
2011-09-18 14:57:03 +00:00
Nadav Rotem
cfc77bc719 When promoting integer vectors we often create ext-loads. This patch adds a
dag-combine optimization to implement the ext-load efficiently (using shuffles).

For example, the type <4 x i8> is stored in memory as an i32, but it needs to
find its way into a <4 x i32> register. Previously we scalarized the memory
access; now we use shuffles.

llvm-svn: 139995
2011-09-18 10:39:32 +00:00
Craig Topper
c5a97d12bb Fix typo by changing Lower256IntVETCC to Lower256IntVSETCC.
llvm-svn: 139993
2011-09-18 08:03:58 +00:00
Duncan Sands
4149334f09 Synthesize x86 max/min instructions also for vectors (i.e. produce
maxps and maxpd).  This broke the sse41-blend.ll testcase by causing
maxpd to be produced rather than a cmp+blend pair, which is the reason
I tweaked it.  Gives a small speedup on doduc with dragonegg when the
GCC vectorizer is used.

llvm-svn: 139986
2011-09-17 16:49:39 +00:00
Bruno Cardoso Lopes
f611f6c371 Describe more AVX 128-bit convert instructions without patterns to have
mayLoad = 1.

llvm-svn: 139973
2011-09-16 23:41:29 +00:00
Bruno Cardoso Lopes
396b8136bf Add mayLoad attribute to AVX convert instructions, since none of them
are declared with load patterns. This fixes the crash in PR10941. No testcases,
since a fold is triggered and then converted back to the register form
afterwards.

llvm-svn: 139953
2011-09-16 22:02:14 +00:00
Bruno Cardoso Lopes
a60e62ad02 Fix PR10884.
This PR basically reports a problem where a crash in generated code
happened due to %rbp being clobbered:

  pushq %rbp
  movq  %rsp, %rbp
  ....
  vmovmskps %ymm12, %ebp
  ....
  movq  %rbp, %rsp
  popq  %rbp
  ret

Since Eric's r123367 commit, the default stack alignment for x86 32-bit
changed to 16 bytes. Since then, the MaxStackAlignmentHeuristicPass
hasn't really been used, but with AVX it becomes useful again, since for
ABI compliance we don't always align the stack to 256 bits, but only when
there are 256-bit incoming arguments.

ReserveFP was only used by this pass, but there's no RA target hook that
uses getReserveFP() to check for the presence of FP (since nothing was
triggering the pass to run, the uses of getReserveFP() were removed over
time without being noticed). Change this pass to use setForceFramePointer,
which is properly checked by the MachineFunction hasFP method.

The testcase is very big and dependent on RA, not sure if it's worth
adding to test/CodeGen/X86.

llvm-svn: 139939
2011-09-16 20:58:28 +00:00
Owen Anderson
e54c4beb5a Don't attach annotations to MCInst's. Instead, have the disassembler return, and the printer accept, an annotation string which can be passed through if the client cares about annotations.
llvm-svn: 139876
2011-09-15 23:38:46 +00:00
Bruno Cardoso Lopes
1465f4d334 Add a fixme note!
llvm-svn: 139872
2011-09-15 23:04:24 +00:00
Bruno Cardoso Lopes
7ad9ea026a Add the remaining AVX versions of instructions to X86InstrInfo, this
time for describing high-latency ones and for recognizing loads
from the same base pointer.

llvm-svn: 139864
2011-09-15 22:15:52 +00:00
Bruno Cardoso Lopes
901f6ff218 Factor out partial register update checks for some SSE instructions.
Also add the AVX versions and add comments!

llvm-svn: 139854
2011-09-15 21:42:23 +00:00
Owen Anderson
84d4e5d0e2 Add support for stored annotations to MCInst, and provide facilities for MC-based InstPrinters to print them out. Enhance the ARM and X86 InstPrinters to do so in verbose mode.
llvm-svn: 139820
2011-09-15 18:36:29 +00:00
Bruno Cardoso Lopes
8e702bba63 Change all checks regarding the presence of any SSE level to always
take into consideration the presence of AVX. This change, together with
the SSEDomainFix enabled for AVX, makes AVX codegen always (hopefully)
emit the same code as SSE for 128-bit vector ops. I don't
have a testcase for this, but AVX now beats SSE in performance for
128-bit ops in the majority of programs in the llvm testsuite.

llvm-svn: 139817
2011-09-15 18:27:36 +00:00
Bruno Cardoso Lopes
0fa8b71a55 Enable SSEDomainFix pass for AVX mode.
llvm-svn: 139816
2011-09-15 18:27:32 +00:00
Eli Friedman
7cb90dcbce Fix the code creating VZEXT_LOAD so that it creates the right memoperand. Issue spotted in -debug output. I can't think of any practical effects at the moment, but it might matter if we start doing more aggressive alias analysis in CodeGen.
llvm-svn: 139758
2011-09-14 23:42:45 +00:00
Craig Topper
60719c7bfb Fix mem type for VEX.128 form of VROUNDP*. Remove filter preventing VROUND from being recognized by disassembler.
llvm-svn: 139691
2011-09-14 06:41:26 +00:00
Craig Topper
25e81ae604 Make disassembling of VBLEND* print the immediate as an XMM/YMM register name. Fixes PR10917.
llvm-svn: 139690
2011-09-14 05:55:28 +00:00
Bruno Cardoso Lopes
27a7ace4b4 Teach the foldable tables about 128-bit AVX instructions and make the
alignment check for 256-bit classes more strict. There are no testcases,
but we catch more folding cases for AVX while running single- and
multi-source tests in the llvm testsuite.

Since some 128-bit AVX instructions have different number of operands
than their SSE counterparts, they are placed in different tables.

256-bit AVX instructions should also be added to the table soon. And
there are a few more 128-bit versions to handle, which should come in
the following commits.

llvm-svn: 139687
2011-09-14 02:36:58 +00:00
Bruno Cardoso Lopes
3e6b9661d1 Vector shuffle mask <i32 4, i32 5, i32 2, i32 3> should yield "movsd", not "movss".
llvm-svn: 139686
2011-09-14 02:36:14 +00:00
Nadav Rotem
f1730712f7 Swap vselect operand order - PR10907.
llvm-svn: 139630
2011-09-13 19:56:38 +00:00