Commit Graph

16 Commits

Author SHA1 Message Date
Chris Lattner
b85e4eba85 Rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp; it is
only needed for pre-2.9 bitcode files.  We keep the x86 unaligned loads, movnt, crc32,
and the target-independent prefetch change.

As usual, updating the testsuite is a PITA.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133337 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-18 06:05:24 +00:00
Cameron Zwarich
1335022e19 Fix a regression caused by r102515 where explicit alignment on globals is
ignored. There was a test to catch this, but it was just blindly updated in
a large change. This fixes another part of <rdar://problem/9275290>.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129466 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-13 20:36:04 +00:00
Evan Cheng
a5e1362f96 Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 19:35:30 +00:00
Evan Cheng
461f1fc359 Use movups to lower memcpy and memset even if unaligned accesses are not
fast on the target (as they are on corei7). The theory is that it is still faster than
a pair of movq / a quad of movl. This will probably hurt older chips like the P4 but
should run faster on current and future Intel processors. rdar://8817010


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122955 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 07:58:36 +00:00
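
As an illustrative aside, a hypothetical C sketch (the names and the 16-byte size are
assumptions, not from the commit) of the fixed-size copy whose lowering r122955 and
its revert r123015 argue about:

    #include <string.h>

    struct pixel { unsigned char rgba[16]; };

    /* The backend inlines this fixed 16-byte memcpy and must choose between
       a pair of 8-byte integer moves (movq) and a single unaligned SSE
       load/store (movups); that trade-off is what these two commits weigh. */
    void copy_pixel(struct pixel *dst, const struct pixel *src) {
        memcpy(dst, src, sizeof *src);
    }
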
Chris Lattner
1eb1b68e3a Fix an inconsistency in the x86 backend that led it to reject "calll foo" on
x86-32: 32-bit calls were named "call" not "calll".  64-bit calls were correctly
named "callq", so this only impacted x86-32.

This fixes rdar://8456370 - llvm-mc rejects 'calll'

This also exposes that mingw/64 is generating a 32-bit call instead of a 64-bit call;
I will file a bugzilla.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114534 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-22 05:49:14 +00:00
Chris Lattner
e87f7bb50e Rework global alignment computation again. Now we do round up
alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-28 19:58:07 +00:00
Chris Lattner
567dd1f5d0 Fix PR6921 a different way. Instead of increasing the
alignment of globals with a specified alignment, we fix
common variables to obey their alignment.  Add a comment
explaining why this behavior is important.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102365 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-26 18:46:46 +00:00
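
For orientation, a hypothetical C sketch (the names are made up for illustration) of
the two kinds of globals that r102300, r102365, and r102515 are juggling:

    /* A global that carries an explicit alignment request via an attribute. */
    double table[4] __attribute__((aligned(8)));

    /* A tentative definition with no initializer: a "common" variable when the
       compiler builds with -fcommon, the case r102365 singles out. */
    int scratch[64];
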
Chris Lattner
f74e25f60c Revert r102300/102301, which seriously broke ObjC apps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102359 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-26 18:30:45 +00:00
Chris Lattner
044698b39a Fix PR6921: globals were not getting correctly rounded up to their
preferred alignment unless they were common or some other special
case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102300 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-25 05:30:43 +00:00
Evan Cheng
c3b0c341e7 Avoid using f64 to lower memcpy from a constant string. It's cheaper to use i32 stores of immediates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100751 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-08 07:37:57 +00:00
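
A hypothetical example (not taken from the commit) of the pattern r100751 targets: a
small memcpy from a string literal, which is cheaper to lower as i32 stores of
immediate values than through an f64 load and store.

    #include <string.h>

    /* Copying 8 known bytes from a constant string; the data is visible to
       the compiler, so it can be emitted as stores of immediates. */
    void greet(char out[8]) {
        memcpy(out, "hello!\n", 8);
    }
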
Evan Cheng
3ea97550e3 In 64-bit mode, use i64 to lower memcpy / memset instead of f64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100137 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 20:27:45 +00:00
Evan Cheng
255f20f7f7 Fix sdisel memcpy, memset, memmove lowering:
1. Make it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless they are fast.
3. Fix a memcpy lowering logic bug related to when a load from a
   constant string should be optimized into a constant.
4. Adjust the x86 memcpy lowering threshold to make it more sane.
5. Fix the x86 target hook so it uses vector and floating point memory
   ops more effectively.
rdar://7774704


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-01 06:04:33 +00:00
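
For context, a hypothetical C sketch (names and sizes are assumptions, not from the
commit) of the small fixed-size memset calls that this SelectionDAG lowering rework
inlines into a handful of wide stores; points 1, 2, and 5 above govern which widths
(integer, floating point, vector) and which alignments may be used.

    #include <string.h>

    struct header { unsigned char bytes[32]; };

    /* A fixed 32-byte clear: a typical candidate for being expanded inline
       into a few wide stores instead of a call to the libc memset. */
    void clear_header(struct header *h) {
        memset(h, 0, sizeof *h);
    }
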
Chris Lattner
7e02e52e9a Make this less constrained; we want blank lines between globals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94201 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-22 19:51:08 +00:00
Chris Lattner
41aa25a433 Don't let asm-verbose break the CHECK-NEXT lines in these tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93869 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-19 06:39:54 +00:00
Bill Wendling
3627b48110 Remove unnecessary check.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90352 91177308-0d34-0410-b5e6-96231b3b80d8
2009-12-02 22:02:20 +00:00
Bill Wendling
77bd09b650 Add a test from Dhrystone to make sure that we're not emitting an aligned load for a
string that's aligned to 8 bytes instead of 16 bytes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89295 91177308-0d34-0410-b5e6-96231b3b80d8
2009-11-19 01:33:57 +00:00