llvm-mirror/test/Bitcode
Dale Johannesen c14a1eda84 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
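The rule above can be illustrated with a small hypothetical .ll sketch (not from this test directory): the <2 x i32> arithmetic stays in optimizable vector form, and the value is only bitcast to x86_mmx at the point where it feeds an existing MMX intrinsic.

```llvm
; Sketch only: @add_then_use is a made-up function name for illustration.
define x86_mmx @add_then_use(<2 x i32> %a, <2 x i32> %b) {
  ; Vector form: optimizations are free to operate here (lowered to XMM).
  %sum = add <2 x i32> %a, %b
  ; Cast back to x86_mmx only because the result feeds an MMX operation.
  %mmx = bitcast <2 x i32> %sum to x86_mmx
  %r = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %mmx, x86_mmx %mmx)
  ret x86_mmx %r
}

declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)
```

Because only load, store, bitcast, and the intrinsics themselves touch x86_mmx, no optimization pass can conjure a new MMX instruction and thereby require an extra emms.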

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
2006-12-11-Cast-ConstExpr.ll
2009-06-11-FirstClassAggregateConstant.ll
AutoUpgradeGlobals.ll Auto-upgrade the magic ".llvm.eh.catch.all.value" global to 2010-09-10 18:51:56 +00:00
AutoUpgradeGlobals.ll.bc Auto-upgrade the magic ".llvm.eh.catch.all.value" global to 2010-09-10 18:51:56 +00:00
AutoUpgradeIntrinsics.ll
AutoUpgradeIntrinsics.ll.bc
dg.exp
extractelement.ll
flags.ll
memcpy.ll reapply 'reject forward references to functions whose type don't match' 2010-04-20 04:49:11 +00:00
metadata-2.ll
metadata.ll
neon-intrinsics.ll Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the 2010-09-03 01:35:08 +00:00
neon-intrinsics.ll.bc Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the 2010-09-03 01:35:08 +00:00
sse2_loadl_pd.ll
sse2_loadl_pd.ll.bc
sse2_movl_dq.ll
sse2_movl_dq.ll.bc
sse2_movs_d.ll
sse2_movs_d.ll.bc
sse2_punpck_qdq.ll
sse2_punpck_qdq.ll.bc
sse2_shuf_pd.ll
sse2_shuf_pd.ll.bc
sse2_unpck_pd.ll
sse2_unpck_pd.ll.bc
sse41_pmulld.ll add newlines at the end of files. 2010-04-07 22:53:17 +00:00
sse41_pmulld.ll.bc
ssse3_palignr.ll Remove the palignr intrinsics now that we lower them to vector shuffles, 2010-04-20 00:59:54 +00:00
ssse3_palignr.ll.bc Massive rewrite of MMX: 2010-09-30 23:57:10 +00:00