[ARM] Fix subtarget feature set truncation when using .cpu directive

This bug was caused by storing the feature bitset in a 32-bit variable when it is a
64-bit mask, which discarded the top half of the feature set.

llvm-svn: 228151
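
For illustration only (not part of the commit), here is a minimal standalone C++ sketch of the underlying problem, using made-up feature bits: assigning a 64-bit feature mask to a 32-bit 'unsigned' silently discards any feature encoded above bit 31, while keeping the value 64 bits wide preserves the full set.

// Minimal sketch, not LLVM code; the feature bits are hypothetical.
#include <cstdint>
#include <cstdio>

int main() {
  // A mask with one feature below bit 32 and one above it.
  uint64_t FeatureBits = (1ULL << 40) | (1ULL << 3);

  // Buggy pattern: narrowing to 32 bits drops the high feature.
  unsigned Truncated = static_cast<unsigned>(FeatureBits);

  // Fixed pattern: keep the full 64-bit mask end to end.
  uint64_t Preserved = FeatureBits;

  std::printf("truncated: 0x%llx\n", (unsigned long long)Truncated); // 0x8
  std::printf("preserved: 0x%llx\n", (unsigned long long)Preserved); // 0x10000000008
  return 0;
}

The actual fix below is the same idea applied to ARMAsmParser::parseDirectiveCPU: the intermediate 'unsigned FB' local is removed so the value returned by ComputeAvailableFeatures reaches setAvailableFeatures without being narrowed.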
Bradley Smith 2015-02-04 16:23:24 +00:00
parent 8107003a7b
commit 40791397a7
2 changed files with 5 additions and 2 deletions


@@ -9182,8 +9182,7 @@ bool ARMAsmParser::parseDirectiveCPU(SMLoc L) {
   // see: http://llvm.org/bugs/show_bug.cgi?id=20757
   STI.InitMCProcessorInfo(CPU, "");
   STI.InitCPUSchedModel(CPU);
-  unsigned FB = ComputeAvailableFeatures(STI.getFeatureBits());
-  setAvailableFeatures(FB);
+  setAvailableFeatures(ComputeAvailableFeatures(STI.getFeatureBits()));
   return false;
 }


@@ -11,3 +11,7 @@ dsb
 dsb
 // CHECK-ERROR: error: Unknown CPU name
 .cpu foobar
+// CHECK: .cpu cortex-m3
+.cpu cortex-m3
+// CHECK: sub sp, #16
+sub sp,#16