of a mnemonic, report operand errors with better location
info. For example, we now report:
t.s:6:14: error: invalid operand for instruction
        cwtl $1
             ^
but we fail for common cases like:
t.s:11:4: error: invalid operand for instruction
   addl $1, $1
   ^
because we don't know if this is supposed to be the reg/imm or imm/reg
form.
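As a rough illustration of the idea (the types and the helper below are
hypothetical stand-ins, not the real llvm-mc parser code): each parsed
operand remembers the column where it started, so once we know the
mnemonic matched, the diagnostic can point at the offending operand
instead of the start of the statement.

#include <cstdio>
#include <string>

// Hypothetical stand-in for a parsed operand: it records the 1-based
// column where the operand began in the source line.
struct Operand {
  std::string Text;
  unsigned Column;
};

// Point the diagnostic at the operand itself once the mnemonic is known
// to match but this operand does not.
static void reportInvalidOperand(const char *File, unsigned Line,
                                 const Operand &Op) {
  std::fprintf(stderr, "%s:%u:%u: error: invalid operand for instruction\n",
               File, Line, Op.Column);
}

int main() {
  // "cwtl" takes no operands, so the stray "$1" at column 14 is the culprit.
  reportInvalidOperand("t.s", 6, Operand{"$1", 14});
  return 0;
}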
llvm-svn: 113178
by doing a binary search over the mnemonic instead of doing a linear
search through all possible instructions. This implements rdar://7785064
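For the flavor of the lookup (a sketch under the assumption that the
generated table is sorted by mnemonic; MatchEntry, the sample table, and
findMnemonicRange are all hypothetical here):

#include <algorithm>
#include <cstdio>
#include <string>
#include <utility>

// Hypothetical shape of a generated table row; the real table also
// carries the operand-class columns.
struct MatchEntry {
  const char *Mnemonic;
  unsigned Opcode;
};

// The table must stay sorted by Mnemonic for the binary search to be valid.
static const MatchEntry Table[] = {
    {"addl", 1}, {"addl", 2}, {"cwtl", 3}, {"movl", 4}, {"movl", 5},
};

struct LessByMnemonic {
  bool operator()(const MatchEntry &E, const std::string &M) const {
    return E.Mnemonic < M;
  }
  bool operator()(const std::string &M, const MatchEntry &E) const {
    return M < E.Mnemonic;
  }
};

// Binary search for the contiguous run of entries sharing this mnemonic,
// instead of scanning every instruction in the table.
static std::pair<const MatchEntry *, const MatchEntry *>
findMnemonicRange(const std::string &Mnemonic) {
  return std::equal_range(std::begin(Table), std::end(Table), Mnemonic,
                          LessByMnemonic{});
}

int main() {
  const auto R = findMnemonicRange("addl");
  std::printf("addl has %d candidate entries\n", (int)(R.second - R.first));
  return 0;
}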
llvm-svn: 113171
generated matcher, emitting it as a column in the MatchEntry
table instead of forcing it to go through classification and
everything else. Classifying the mnemonic caused tblgen to
produce a ton of one-off classes, one per mnemonic. This
should reduce the size of the generated matcher significantly
while paving the way for future improvements.
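Conceptually the table layout changes roughly like this (illustrative
structs only, not tblgen's actual output):

// Before: the mnemonic occupied an operand-class slot, so tblgen had to
// invent a singleton class (MCK_addl, MCK_cwtl, ...) for every mnemonic.
enum MatchClassKind { MCK_GR32, MCK_Imm, MCK_Mem, MCK_addl, MCK_cwtl /*...*/ };
struct OldMatchEntry {
  unsigned Opcode;
  MatchClassKind Classes[5];  // Classes[0] held the mnemonic's one-off class
};

// After: the mnemonic is its own string column, checked with a plain
// string compare, and only the real operands go through classification.
struct NewMatchEntry {
  const char *Mnemonic;
  unsigned Opcode;
  MatchClassKind Classes[4];  // classes for the actual operands only
};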
llvm-svn: 113169
failed because a subtarget feature was not enabled. Use this to
remove a bunch of hacks from the X86AsmParser for rejecting things
like popfl in 64-bit mode. Previously these hacks weren't strictly
needed, but they were important to get a message better than
"invalid instruction" when an instruction was used in the wrong mode.
This also fixes bugs where pushal would not be rejected correctly in
32-bit mode (only pusha was).
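A small, hypothetical model of the new failure kind (simplified; the
names and the diagnostic text below are stand-ins, not the generated
matcher):

#include <cstdint>
#include <cstdio>

// Each table entry records the subtarget features it requires; when
// everything else matches but a required feature bit is off, the matcher
// reports that distinctly instead of a generic "invalid instruction".
enum MatchResult { Match_Success, Match_MissingFeature };

static const std::uint64_t Feature_Not64BitMode = 1ull << 0;  // stand-in bit

struct MatchEntry {
  const char *Mnemonic;
  std::uint64_t RequiredFeatures;
};

static MatchResult matchOne(const MatchEntry &E,
                            std::uint64_t AvailableFeatures) {
  if ((E.RequiredFeatures & AvailableFeatures) != E.RequiredFeatures)
    return Match_MissingFeature;
  return Match_Success;
}

int main() {
  const MatchEntry Popfl = {"popfl", Feature_Not64BitMode};
  const std::uint64_t Available = 0;  // in 64-bit mode, Not64BitMode is off
  if (matchOne(Popfl, Available) == Match_MissingFeature)
    std::puts("error: instruction requires a CPU feature not currently enabled");
  return 0;
}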
llvm-svn: 113166
in the duplicated block instead of duplicating them.
Duplicating them into the end of the loop and the preheader
meant that we got a phi node in the header of the loop,
which prevented LICM from hoisting them. GVN would
usually come around later and merge the duplicated
instructions, so we'd get reasonable output... except that
anything dependent on the shoulda-been-hoisted value couldn't
be hoisted either. In PR5319 (which this fixes), a memory value
didn't get promoted.
llvm-svn: 113134
Loop::hasLoopInvariantOperands method. Remove
a useless and confusing Loop::isLoopInvariant(Instruction)
method, which didn't do what you thought it did.
No functionality change.
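A rough sketch of what the new query amounts to (header paths assume a
modern LLVM tree; the real method lives on llvm::Loop):

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// An instruction is a candidate for hoisting into the preheader during
// loop rotation only if every one of its operands is invariant in the loop.
static bool allOperandsLoopInvariant(const Loop *L, const Instruction *I) {
  for (const Use &U : I->operands())
    if (!L->isLoopInvariant(U.get()))
      return false;
  return true;
}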
llvm-svn: 113133
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack.
For example, this code:
int foo(int x, int y, int z) {
  return x+y+z;
}
used to compile into:
_foo: ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        movl    4(%rsp), %esi
        addl    %edx, %esi
        movl    (%rsp), %edx
        addl    %esi, %edx
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Now we produce:
_foo: ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        addl    4(%rsp), %edx           ## Folded load
        addl    (%rsp), %edx            ## Folded load
        movl    %edx, %eax
        addq    $12, %rsp
        ret
Fewer instructions and less register use = faster compiles.
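A toy model of the folding decision (hypothetical types and a
deliberately simplified rule, not the real fast instruction selector):
fold the reload into the add only when the loaded value has no other
users; otherwise the load has to be emitted anyway.

#include <cstdio>
#include <string>
#include <vector>

struct Inst {
  std::string Op;   // "movl" or "addl"
  std::string Src;  // source register or stack slot such as "4(%rsp)"
  std::string Dst;  // destination register
};

// Emit code for: Dst = Dst + load(Slot)
static std::vector<Inst> emitAdd(const std::string &Dst,
                                 const std::string &Slot,
                                 bool loadHasOtherUses) {
  if (!loadHasOtherUses)
    return {{"addl", Slot, Dst}};    // folded: one instruction, no scratch reg
  return {{"movl", Slot, "%esi"},    // reload into a scratch register
          {"addl", "%esi", Dst}};    // (register choice here is arbitrary)
}

int main() {
  for (const Inst &I : emitAdd("%edx", "4(%rsp)", /*loadHasOtherUses=*/false))
    std::printf("\t%s\t%s, %s\n", I.Op.c_str(), I.Src.c_str(), I.Dst.c_str());
  return 0;
}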
llvm-svn: 113102