Commit Graph

134 Commits

Nuno Lopes
944814b41a revert my previous patches that introduced an additional parameter to the objectsize intrinsic.
After a lot of discussion, we realized it's not the best option for run-time bounds checking.

llvm-svn: 157255
2012-05-22 15:25:31 +00:00
Nuno Lopes
e8880a9916 change the objectsize intrinsic signature: add a 3rd parameter to denote the maximum runtime performance penalty that the user is willing to accept.
This commit only adds the parameter. Code taking advantage of it will follow.
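
A sketch of the transitional signature (the i32 type of the new parameter is
assumed from the description; the change is reverted in r157255 above):

  ; i8*: pointer; i1: min/max; i32 (new): max runtime penalty accepted
  declare i32 @llvm.objectsize.i32(i8*, i1, i32)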

llvm-svn: 156473
2012-05-09 15:52:43 +00:00
Craig Topper
77b1a4cee5 Remove 256-bit AVX non-temporal store intrinsics. The same was previously done for the 128-bit versions.
llvm-svn: 156375
2012-05-08 06:58:15 +00:00
Craig Topper
7c784d86eb Remove AVX vpermil intrinsics. I removed their uses from clang headers and builtins a while back.
llvm-svn: 154985
2012-04-18 05:24:00 +00:00
Craig Topper
94fcde74f6 Add auto upgrade support for x86 pcmpgt/pcmpeq intrinsics removed in r149367.
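A rough sketch of what the upgrade emits (hand-written, not the actual test case):

  ; %r = call <4 x i32> @llvm.x86.sse2.pcmpgt.d(<4 x i32> %a, <4 x i32> %b)
  ; is upgraded to a native compare plus sign-extension:
  %cmp = icmp sgt <4 x i32> %a, %b
  %r   = sext <4 x i1> %cmp to <4 x i32>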
llvm-svn: 149678
2012-02-03 06:10:55 +00:00
Nick Lewycky
e21bd35d14 Fix unused value warning for value used only in assert.
llvm-svn: 146440
2011-12-12 22:59:34 +00:00
Chandler Carruth
ed68325a38 Don't rely on there being one argument before we've actually identified
a function to upgrade. Also, simplify the code a bit at the expense of
one line.

llvm-svn: 146368
2011-12-12 10:57:20 +00:00
Chandler Carruth
083a91fab1 Switch llvm.cttz and llvm.ctlz to accept a second i1 parameter which
indicates whether the intrinsic has a defined result for a first
argument equal to zero. This will eventually allow these intrinsics to
accurately model the semantics of GCC's __builtin_ctz and __builtin_clz
and the X86 instructions (prior to AVX) which implement them.

This patch merely sets the stage by extending the signature of these
intrinsics and establishing auto-upgrade logic so that the old spelling
still works both in IR and in bitcode. The upgrade logic preserves the
existing (inefficient) semantics. This patch should not change any
behavior. CodeGen isn't updated because it can use the existing
semantics regardless of the flag's value.
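
For illustration, a hand-written sketch of the upgrade (assumed to match the
description above):

  ; Old spelling, still accepted in IR and bitcode:
  %r.old = call i32 @llvm.cttz.i32(i32 %x)
  ; Auto-upgraded form: i1 false preserves the existing (inefficient)
  ; semantics, i.e. a defined result even when %x is zero:
  %r.new = call i32 @llvm.cttz.i32(i32 %x, i1 false)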

Note that this will be followed by API updates to Clang and DragonEgg.

Reviewed by Nick Lewycky!

llvm-svn: 146357
2011-12-12 04:26:04 +00:00
Chris Lattner
f80a86e2d4 Eli managed to kill off llvm.membarrier in LLVM 3.0 as well; this means
that mainline needs no autoupgrade logic for intrinsics yet, woohoo!

llvm-svn: 145178
2011-11-27 08:42:07 +00:00
Chris Lattner
5d92b20cf1 The llvm.atomic intrinsics *were* removed in LLVM 3.0 (in r141333), so remove the
autoupgrade logic for 2.9 and before.

llvm-svn: 145176
2011-11-27 08:18:55 +00:00
Chris Lattner
84bf52737a remove autoupgrade support for old forms of llvm.prefetch and the old
trampoline forms.  Both of these were correct in LLVM 3.0, and we don't
need to support LLVM 2.9 and earlier in mainline.

llvm-svn: 145174
2011-11-27 07:42:04 +00:00
Chris Lattner
011a5bf0aa remove autoupgrade support for really old-style debug info intrinsics.
I think this is the last of autoupgrade that can be removed in 3.1.
Can the atomic upgrade stuff also go?

llvm-svn: 145169
2011-11-27 06:18:33 +00:00
Chris Lattner
8067661775 remove some old autoupgrade logic
llvm-svn: 145167
2011-11-27 06:10:54 +00:00
Chris Lattner
321e2eedcc remove autoupgrade support for LLVM 2.9 exception stuff. Mainline supports
LLVM 3.0 and later.

llvm-svn: 145165
2011-11-27 05:56:16 +00:00
Eli Friedman
4d63ca106a Remove the old atomic intrinsics. Autoupgrade functionality is included with this patch.
llvm-svn: 141333
2011-10-06 23:20:49 +00:00
Duncan Sands
6939ae53ac Split the init.trampoline intrinsic, which currently combines GCC's
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC.  While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function.  To get around this, llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function.  Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!).  Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC.  Patch mostly by Sanjoy Das.
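
Roughly, in IR terms (a sketch assuming 3.0-era syntax; @nested_fn and %nval
are placeholder names):

  ; Before: one combined intrinsic returning the callable pointer.
  %fnptr = call i8* @llvm.init.trampoline(i8* %tramp, i8* @nested_fn, i8* %nval)
  ; After: initialization and adjustment are separate, as in GCC, so they
  ; can live in different functions.
  call void @llvm.init.trampoline(i8* %tramp, i8* @nested_fn, i8* %nval)
  %fnptr = call i8* @llvm.adjust.trampoline(i8* %tramp)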

llvm-svn: 139140
2011-09-06 13:37:06 +00:00
Bill Wendling
dbea8de893 The insertion point for the loads is right before the llvm.eh.exception
call. The call may be in the same BB as the landingpad instruction. If that's
the case, then inserting the loads after the landingpad inst, but before the
extractvalues, causes undefined behavior.

llvm-svn: 139088
2011-09-04 09:02:18 +00:00
Bill Wendling
c273c3f456 Don't reload the values that are already there. The llvm.eh.resume uses the same
values that the resume instruction uses.
PR10850

llvm-svn: 139076
2011-09-03 01:38:17 +00:00
Bill Wendling
e35fdee39e No need to get fancy inserting a PHI node when the values are stored in stack
slots. This fixes a bug where the number of incoming values to the PHI node may
not equal the number of predecessors. E.g., two or more landingpad instructions
may require a PHI before reaching the eh.exception and eh.selector instructions.

llvm-svn: 139035
2011-09-02 21:17:08 +00:00
Bill Wendling
66d5793dcf Perform the upgrading of the old EH to the new EH in a more sane manner.
Perform the upgrading in steps (a sketch of the resulting IR follows the list).

* First, create a map of the invokes to the EH intrinsics.

* Next, take that mapping and determine if the invoke's unwind destination has a
  single predecessor. If not, then create a new empty block to hold the new
  landingpad instruction.

* Create a landingpad instruction in the unwind destination. Fill it with the
  values from the old selector. Map the old intrinsic calls to the new
  landingpad values (there may be multiple landingpad instructions per intrinsic
  call pair).

* Go through the old intrinsic calls, create a PHI node when necessary, and then
  replace their values with the new values from the landingpad instructions.

* Delete all dead instructions.

* ???

* Profit!
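
A hand-written sketch of the before/after shape (assumed; personality and
clause details vary by frontend):

  ; Old scheme:
  lpad:
    %exn = call i8* @llvm.eh.exception()
    %sel = call i32 @llvm.eh.selector(i8* %exn,
             i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*), i8* null)

  ; New scheme:
  lpad:
    %lp  = landingpad { i8*, i32 }
             personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
             catch i8* null
    %exn = extractvalue { i8*, i32 } %lp, 0
    %sel = extractvalue { i8*, i32 } %lp, 1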

llvm-svn: 138990
2011-09-02 01:30:08 +00:00
Bill Wendling
607b3c3898 Only delete instructions once.
llvm-svn: 138700
2011-08-27 06:10:02 +00:00
Bill Wendling
546c7a05de Initial check in that will auto-upgrade the old EH scheme to the new EH scheme.
This upgrade suffers from the problems of the old EH scheme - i.e., that the
calls to llvm.eh.exception() and llvm.eh.selector() can wander off and get
lost. It makes a valiant effort to reclaim these little lost lambs.

This is a first draft, so it hasn't yet been hooked up to the parser.

llvm-svn: 138602
2011-08-25 23:22:40 +00:00
Chris Lattner
e1fe7061ce land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jay Foad
c826df8fb7 Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Chris Lattner
0fe414c07e rework the remaining autoupgrade logic to use a StringRef instead of creating a
temporary std::string for every function being checked.

llvm-svn: 133355
2011-06-18 18:56:39 +00:00
Chris Lattner
ad5400fa72 rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is
for pre-2.9 bitcode files.  We keep x86 unaligned loads, movnt, crc32, and the
target-independent prefetch change.

As usual, updating the testsuite is a PITA.

llvm-svn: 133337
2011-06-18 06:05:24 +00:00
Bruno Cardoso Lopes
b6afc5168f Add one more argument to the prefetch intrinsic to indicate whether it's a data
or instruction cache access. Update the targets to match it and also teach
autoupgrade.
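
A sketch of the upgrade (argument values assumed for illustration):

  ; Old three-argument form:
  call void @llvm.prefetch(i8* %p, i32 0, i32 3)
  ; New form; the added fourth argument selects the cache:
  ; 1 = data, 0 = instruction.
  call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)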

llvm-svn: 132976
2011-06-14 04:58:37 +00:00
Chad Rosier
f2b2b472cc CRC32 intrinsics were renamed at revision 132163. This submission
fixes aliasing issues between the old and new names and adds test
cases for the auto-upgrader.
Fixes rdar 9472944.

llvm-svn: 132207
2011-05-27 19:38:10 +00:00
Chad Rosier
b87c4a6945 Renamed the llvm.x86.sse42.crc32 intrinsics; crc64 doesn't exist.
crc32.[8|16|32] have been renamed to crc32.32.[8|16|32], and
crc64.[8|64] have been renamed to crc32.64.[8|64].
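
Spelled out with assumed full names and signatures, for one variant of each:

  declare i32 @llvm.x86.sse42.crc32.32.8(i32, i8)    ; was @llvm.x86.sse42.crc32.8
  declare i64 @llvm.x86.sse42.crc32.64.64(i64, i64)  ; was @llvm.x86.sse42.crc64.64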

llvm-svn: 132163
2011-05-26 23:13:19 +00:00
Bill Wendling
67f5e8f0a7 Replace the "movnt" intrinsics with a native store + nontemporal metadata bit.
<rdar://problem/8460511>
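
A sketch of the replacement (old intrinsic name and operand types assumed):

  ; %p previously fed @llvm.x86.sse.movnt.ps; now it is a plain store
  ; tagged with nontemporal metadata:
  store <4 x float> %v, <4 x float>* %p, align 16, !nontemporal !0
  ; module-level:
  !0 = metadata !{i32 1}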

llvm-svn: 130791
2011-05-03 21:11:17 +00:00
Bill Wendling
0984f4927e Reapply r129401 with patch for clang.
llvm-svn: 129419
2011-04-13 00:36:11 +00:00
Bill Wendling
f6446a0961 Revert r129401 for now. Clang is using the old way of doing things.
llvm-svn: 129403
2011-04-12 22:59:27 +00:00
Bill Wendling
f9c9d3e05b Remove the unaligned load intrinsics in favor of using native unaligned loads.
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.

First part of <rdar://problem/8460511>.
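
For example (old intrinsic name assumed):

  ; %v = call <4 x float> @llvm.x86.sse.loadu.ps(i8* %p) becomes a
  ; native load with alignment 1:
  %vp = bitcast i8* %p to <4 x float>*
  %v  = load <4 x float>* %vp, align 1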

llvm-svn: 129401
2011-04-12 22:46:31 +00:00
Bill Wendling
c007d0af6d Remove dead code.
llvm-svn: 128519
2011-03-30 01:03:48 +00:00
Evan Cheng
ed09135349 Add intrinsics @llvm.arm.neon.vmulls.* and @llvm.arm.neon.vmullu.* back. Frontends
were lowering them to sext / zext + mul instructions. Unfortunately the
optimization passes may hoist the extensions out of the loop and separate them.
When that happens, the long multiplication instructions can be broken into
several scalar instructions, causing a significant performance issue.

Note the vmla and vmls intrinsics are not added back. Frontends will codegen them
as vmull* intrinsics + add / sub. Also note the isel optimizations for catching
mul + sext / zext are not changed either.

First part of rdar://8832507, rdar://9203134

llvm-svn: 128502
2011-03-29 23:06:19 +00:00
Chris Lattner
db204cbe42 convert ConstantVector::get to use ArrayRef.
llvm-svn: 125537
2011-02-15 00:14:00 +00:00
Chris Lattner
ee7f7c2494 revert my ConstantVector patch, it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125504
2011-02-14 18:15:46 +00:00
Chris Lattner
34f32cb4c2 Switch ConstantVector::get to use ArrayRef instead of a pointer+size
idiom.  Change various clients to simplify their code.

llvm-svn: 125487
2011-02-14 07:55:32 +00:00
Bill Wendling
b94ade249a The pshufw instruction came about in MMX2 when SSE was introduced. Don't place
it in with the SSSE3 instructions.

Steward! Could you place this chair by the aft sun deck? I'm trying to get away
from the Astors. They are such boors!

llvm-svn: 115552
2010-10-04 20:24:01 +00:00
Dale Johannesen
c14a1eda84 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
previously existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
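
For illustration (intrinsic name assumed), only explicit casts produce
x86_mmx values:

  %m = bitcast <2 x i32> %v to x86_mmx
  %r = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %m, x86_mmx %m)
  %w = bitcast x86_mmx %r to <2 x i32>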

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
Bill Wendling
51019d366e Use StringRef, which performs the "early exit" when compared against a constant
string.

llvm-svn: 113615
2010-09-10 20:42:26 +00:00
Bill Wendling
b3818f7584 Early exit with simple checks.
llvm-svn: 113603
2010-09-10 19:06:58 +00:00
Bill Wendling
f6ee5c5231 Auto-upgrade the magic ".llvm.eh.catch.all.value" global to
"llvm.eh.catch.all.value". Only the name needs to be changed.

llvm-svn: 113600
2010-09-10 18:51:56 +00:00
Bob Wilson
24fa0b33b1 Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
vabd intrinsic and add and/or zext operations.  In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests.  Auto-upgrade the old intrinsics.
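
Sketched for the unsigned vabal case (intrinsic suffixes assumed):

  ; vabal %acc, %a, %b becomes vabd + zext + add:
  %abd = call <8 x i8> @llvm.arm.neon.vabdu.v8i8(<8 x i8> %a, <8 x i8> %b)
  %ext = zext <8 x i8> %abd to <8 x i16>
  %r   = add <8 x i16> %acc, %ext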

llvm-svn: 112941
2010-09-03 01:35:08 +00:00
Bob Wilson
3348d2eb50 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.
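
Sketched for the signed case (suffixes assumed):

  ; %r = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %a, <4 x i16> %b)
  ; becomes a multiply of sign-extended operands:
  %a.ext = sext <4 x i16> %a to <4 x i32>
  %b.ext = sext <4 x i16> %b to <4 x i32>
  %r     = mul <4 x i32> %a.ext, %b.ext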

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
Bob Wilson
826a677f94 Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
Auto-upgrade the old intrinsic and update tests.
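
For example (suffix assumed):

  ; %r = call <4 x i16> @llvm.arm.neon.vmovn.v4i16(<4 x i32> %a) becomes:
  %r = trunc <4 x i32> %a to <4 x i16>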

llvm-svn: 112507
2010-08-30 20:02:30 +00:00
Bob Wilson
807d004452 Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use LLVM
IR add/sub operations with one or both operands sign- or zero-extended.
Auto-upgrade the old intrinsics.
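
Sketched for signed vaddl and vaddw (types assumed):

  ; vaddl: both operands extended before the add.
  %a.ext = sext <4 x i16> %a to <4 x i32>
  %b.ext = sext <4 x i16> %b to <4 x i32>
  %sum   = add <4 x i32> %a.ext, %b.ext
  ; vaddw: only the narrow operand is extended.
  %sum.w = add <4 x i32> %wide, %b.ext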

llvm-svn: 112416
2010-08-29 05:57:34 +00:00
Bob Wilson
c01101e76c Add alignment arguments to all the NEON load/store intrinsics.
Update all the tests using those intrinsics and add support for
auto-upgrading bitcode files with the old versions of the intrinsics.
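
For example (alignment value and suffix assumed):

  ; Old:  call <4 x float> @llvm.arm.neon.vld1.v4f32(i8* %p)
  ; New, with an explicit alignment argument:
  %v = call <4 x float> @llvm.arm.neon.vld1.v4f32(i8* %p, i32 1)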

llvm-svn: 112271
2010-08-27 17:13:24 +00:00
Bob Wilson
0039bc228b Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.
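
For example (suffix assumed):

  ; %r = call <4 x i32> @llvm.arm.neon.vmovls.v4i32(<4 x i16> %a) becomes:
  %r = sext <4 x i16> %a to <4 x i32>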

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
Gabor Greif
a7509fca78 undo 80 column trespassing I caused
llvm-svn: 109092
2010-07-22 10:37:47 +00:00