if (!N.Val->hasOneUse()) {
  std::map<SDOperand, SDOperand>::iterator CGMI = CodeGenMap.find(N);
  if (CGMI != CodeGenMap.end()) return CGMI->second;
}
Suppose a DAG like this:
        X
       ^ ^
      /   \
   USE1   USE2
Suppose USE1 is selected first, and during its selection X is selected and
replaced with a new node. After this, USE1 is no longer a use of X, so when
USE2 is selected, X appears to have only one use and is selected again!
The fix is to always query CodeGenMap.
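A minimal sketch of the fixed lookup (same Select() context as the snippet
above): drop the hasOneUse() guard so the map is consulted unconditionally.
  // Always consult CodeGenMap, even if N currently appears to have one use.
  std::map<SDOperand, SDOperand>::iterator CGMI = CodeGenMap.find(N);
  if (CGMI != CodeGenMap.end()) return CGMI->second;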
llvm-svn: 24679
it has more than one real use (non-chain uses).
* Record a folded chain-producing node in CodeGenMap.
* Do not fold a chain-producing node if it has already been selected as an
operand of a chain use.
llvm-svn: 24647
matching code that is not currently auto-generated by tblgen, e.g. the X86
addressing mode. Selection routines for complex patterns can return multiple
operands; the X86 addressing-mode routine, for example, returns 4.
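For illustration, such a selection hook looks roughly like this (the name and
exact signature are an assumption for the sketch, not quoted from the patch):
  // Decompose one address-computation SDOperand into the four X86
  // addressing-mode operands; return false if it doesn't match.
  bool SelectAddr(SDOperand N, SDOperand &Base, SDOperand &Scale,
                  SDOperand &Index, SDOperand &Disp);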
llvm-svn: 24634
* Enhanced tblgen to handle instructions which have a chain operand and write
a chain result.
* Enhanced tblgen to handle instructions which produce no results. Part of
the change is a temporary hack which relies on instruction properties (e.g.
isReturn, isBranch). The proper fix would be to change the .td syntax to
separate the results dag from the ops dag.
llvm-svn: 24587
def SHL8rCL : I<0xD2, MRM4r, (ops R8 :$dst, R8 :$src),
                "shl{b} {%cl, $dst|$dst, %CL}",
                [(set R8:$dst, (shl R8:$src, CL))]>, Imp<[CL],[]>;
This generates a CopyToReg node and adds its 2nd result to the shl as
a flag operand.
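A hedged sketch of the selection this describes (the calls follow the general
SelectionDAG idiom rather than the exact API of this revision; Chain, ShAmt,
and Src come from the surrounding Select routine):
  SDOperand InFlag;
  // Copy the shift amount into CL; the copy produces (chain, flag) results.
  Chain  = CurDAG->getCopyToReg(Chain, X86::CL, ShAmt, InFlag);
  InFlag = Chain.getValue(1);
  // Glue the flag result into SHL8rCL as its trailing operand.
  SDOperand Result = CurDAG->getTargetNode(X86::SHL8rCL, MVT::i8, Src, InFlag);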
llvm-svn: 24557
write things like this:
def : Pat<(add GPRC:$in, 12),
          (ADD12 GPRC:$in)>;
Andrew: if this isn't enough or doesn't work for you, please lemme know.
llvm-svn: 23819
a pattern match, make sure to emit the (minimal number of) type checks that
verify the pattern matches this specific instruction. This allows FMA32
patterns not to match double expressions, for example.
llvm-svn: 23748
type constraint. This lets tblgen realize that it doesn't need any dynamic
type checks for fextend/fround on PPC (and many other targets), because there
are only two fp types.
llvm-svn: 23730
1. If an operation has to be int or fp and the target only supports one
int or fp type, realize that the op has to have that type (see the sketch
after this list).
2. If a target has operations on multiple types, do not emit matching code
for patterns involving those operators, since we do not emit the code to
check for them yet. This prevents PPC from generating FP ops currently.
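As a hypothetical illustration of rule 1: since PPC's only legal integer type
is i32, an integer-valued node in a pattern can be given type i32 statically,
making a dynamic guard like the (made-up) one below unnecessary.
  if (N.getValueType() != MVT::i32) break;  // never needed when i32 is the only choice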
Also move some code around into more logical places.
llvm-svn: 23724
doesn't have to specify them manually. It currently handles commutativity,
e.g. knowing that (X*Y)+Z also matches X+(Y*Z), and will be extended in
the future.
It is smart enough to not introduce duplicate patterns or patterns that can
never match.
llvm-svn: 23526
Currently we check that immediate values live on the RHS of commutative
operators. Defining ORI like this, for example:
def ORI : DForm_4<24, (ops GPRC:$dst, GPRC:$src1, u16imm:$src2),
                  "ori $dst, $src1, $src2",
                  [(set GPRC:$dst, (or immZExt16:$src2, GPRC:$src1))]>;
results in:
tblgen: In ORI: Instruction can never match: Immediate values must be on the RHS of commutative operators!
llvm-svn: 23501
This does not check that types match yet, but PPC only has one integer type
;-).
This also doesn't have the code to build the resultant dag.
llvm-svn: 23414
constraints defined in the DAG node definitions in the .td files. This
allows us to infer (and check!) the types for all nodes in the current
ppc .td file. For example, instead of:
Inst pattern EQV: (set GPRC:i32:$rT, (xor (xor GPRC:i32:$rA, GPRC:i32:$rB), (imm)<<Predicate_immAllOnes>>))
we now fully infer:
Inst pattern EQV: (set:void GPRC:i32:$rT, (xor:i32 (xor:i32 GPRC:i32:$rA, GPRC:i32:$rB), (imm:i32)<<Predicate_immAllOnes>>))
from: (set GPRC:$rT, (not (xor GPRC:$rA, GPRC:$rB)))
llvm-svn: 23284
progress. It correctly parses instructions and pattern fragments and glues
together pattern fragments into instructions.
The only code it generates currently is some boilerplate code for things
like the EntryNode.
llvm-svn: 23261
These changes modify the makefiles so that the output of flex and bison are
placed in the SRC directory, not the OBJ directory. It is intended that they
be checked in as any other LLVM source so that platforms without convenient
access to flex/bison can be compiled. From now on, if you change a .y or
.l file you *must* also commit the generated .cpp and .h files.
llvm-svn: 23115
anonymous regclass definition renaming.
Change the generated code to emit register classes as properly
namespace-qualified entities like everything else.
Expose the actual class definition as an object, though this isn't quite
usable yet.
llvm-svn: 22920
instruction defined, actually emit this to the InstrInfoDescriptor, which
allows an assert in the machineinstrbuilder to do some checking for us,
and is required by the dag->dag emitter.
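The kind of check this enables is roughly the following (field and variable
names are illustrative, not the actual ones in MachineInstrBuilder):
  assert(MI->getNumOperands() < Desc.numOperands &&
         "Too many operands for this instruction!");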
llvm-svn: 22895
LLVM is able to merge identical static const globals but GCC isn't, and this
caused some bloat in the generated data. This has a marginal effect on PPC,
shrinking the implicit sets from 10 to 4, but a much bigger one on X86,
shrinking them from 179 to 23.
This should speed up the register allocator as well by reducing the dcache
footprint for this static data.
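Illustratively, the deduplicated output now looks something like this (the
names are invented for the sketch): each distinct implicit use/def list is
emitted once and shared by every instruction descriptor that needs it.
  static const unsigned ImplicitUses1[] = { X86::CL, 0 };
  // ...every instruction whose implicit-use list is exactly {CL} points here.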
llvm-svn: 22879
printed as part of the opcode. This allows something like
cmp${cc}ss in the x86 backend to be printed as cmpltss, cmpless, etc.
depending on what the value of $cc is.
llvm-svn: 22439
finished up, only resolve fully when the def is defined. This allows things
to be changed and all uses to be propagated through. This implements
TableGen/LazyChange.td and fixes TemplateArgRename.td in the process.
None of the .td files used in LLVM backends are changed at all by this
patch.
llvm-svn: 21344
differences, which means that identical instructions (after stripping off
the first literal string) do not run any different code at all. On the X86,
this turns this code:
switch (MI->getOpcode()) {
case X86::ADC32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADC32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADD32mr: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::AND32mr: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mi: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mr: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mi: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mr: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::OR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ROL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::ROR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SAR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SBB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::SHL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHLD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SHR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHRD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SUB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mi: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST8mi: printOperand(MI, 4, MVT::i8); break;
case X86::XCHG32mr: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
}
into this:
switch (MI->getOpcode()) {
case X86::ADC32mi:
case X86::ADC32mr:
case X86::ADD32mi:
case X86::ADD32mr:
case X86::AND32mi:
case X86::AND32mr:
case X86::CMP32mi:
case X86::CMP32mr:
case X86::MOV32mi:
case X86::MOV32mr:
case X86::OR32mi:
case X86::OR32mr:
case X86::SBB32mi:
case X86::SBB32mr:
case X86::SHLD32mrCL:
case X86::SHRD32mrCL:
case X86::SUB32mi:
case X86::SUB32mr:
case X86::TEST32mi:
case X86::TEST32mr:
case X86::XCHG32mr:
case X86::XOR32mi:
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8:
case X86::ADD32mi8:
case X86::AND32mi8:
case X86::OR32mi8:
case X86::ROL32mi:
case X86::ROR32mi:
case X86::SAR32mi:
case X86::SBB32mi8:
case X86::SHL32mi:
case X86::SHR32mi:
case X86::SUB32mi8:
case X86::TEST8mi:
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
}
After this, the generated asmwriters look pretty much as though they were
generated by hand. This shrinks the X86 asmwriter.inc files from 55101->39669
and 55429->39551 bytes each, and PPC from 16766->12859 bytes.
llvm-svn: 19760
strings starts out with a constant string, we emit the string first, using
a table lookup (instead of a switch statement).
Because this is usually the opcode portion of the asm string, the differences
between the instructions have now been greatly reduced. This allows many
more case statements to be grouped together.
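A sketch of the table-driven form (identifiers are illustrative): the constant
leading portion of each asm string lives in one array indexed by opcode, so
printing the mnemonic needs no switch at all.
  static const char * const OpcodeStrings[] = {
    "add ", "addc ", "adde ", /* ...one entry per opcode... */
  };
  O << OpcodeStrings[MI->getOpcode()];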
This patch also allows instruction cases to be grouped together when the
instruction patterns are exactly identical (common after the opcode string
has been ripped off), and when the differing operand is a MachineInstr
operand that needs to be formatted.
The end result of this is a mean and lean generated AsmPrinter!
llvm-svn: 19759
emitting code like this:
case PPC::ADD: O << "add "; printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDC: O << "addc "; printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDE: O << "adde "; printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
...
Emit code like this:
case PPC::ADD:
case PPC::ADDC:
case PPC::ADDE:
...
  switch (MI->getOpcode()) {
  case PPC::ADD: O << "add "; break;
  case PPC::ADDC: O << "addc "; break;
  case PPC::ADDE: O << "adde "; break;
  ...
  }
  printOperand(MI, 0, MVT::i64);
  O << ", ";
  printOperand(MI, 1, MVT::i64);
  O << ", ";
  printOperand(MI, 2, MVT::i64);
  O << "\n";
  break;
This shrinks the PPC asm writer from 24785->15205 bytes (even though the new
asmwriter has much more whitespace than the old one), and the X86 printers shrink
quite a bit too. The important implication of this is that GCC no longer hits swap
when building the PPC backend in optimized mode. Thus this fixes PR448.
-Chris
llvm-svn: 19755
and more understandable. It also allows us to do simple things like fold
consecutive literal strings together. For example, instead of emitting this
for the X86 backend:
O << "adc" << "l" << " ";
we now generate this:
O << "adcl ";
*whoa* :)
This shrinks the X86 asmwriters from 62729->58267 and 65176->58644 bytes
for the intel/att asm writers respectively.
llvm-svn: 19749