Revert part of the x86 subtarget logic changes: when -march=x86-64
is given, override the subtarget settings and enable 64-bit support.
This restores the earlier behavior and fixes regressions on
non-64-bit-capable x86-32 hosts.

This isn't necessarily the best approach, but the most obvious
alternative is to require -mcpu=x86-64 or -mattr=+64bit to be used
with -march=x86-64 when the host doesn't have 64-bit support. This
makes things a little more consistent, but it's less convenient, and
it has the practical drawback of requiring lots of test changes, so
I opted for the above approach for now.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63642 91177308-0d34-0410-b5e6-96231b3b80d8
Dan Gohman 2009-02-03 18:53:21 +00:00
parent b51d40cf40
commit 605679f0cd


@@ -327,15 +327,16 @@ X86Subtarget::X86Subtarget(const Module &M, const std::string &FS, bool is64Bit)
   } else {
     // Otherwise, use CPUID to auto-detect feature set.
     AutoDetectSubtargetFeatures();
-    // If requesting codegen for X86-64, make sure that 64-bit features
-    // are enabled.
-    if (Is64Bit)
-      HasX86_64 = true;
     // Make sure SSE2 is enabled; it is available on all X86-64 CPUs.
     if (Is64Bit && X86SSELevel < SSE2)
       X86SSELevel = SSE2;
   }
+  // If requesting codegen for X86-64, make sure that 64-bit features
+  // are enabled.
+  if (Is64Bit)
+    HasX86_64 = true;
   DOUT << "Subtarget features: SSELevel " << X86SSELevel
        << ", 3DNowLevel " << X863DNowLevel
        << ", 64bit " << HasX86_64 << "\n";