A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
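As a minimal sketch (the register and index names below are invented for
illustration, not taken from an in-tree target), a register pair addressed
through named indices looks roughly like this:

  include "llvm/Target/Target.td"

  def lo_half : SubRegIndex;
  def hi_half : SubRegIndex;

  def R0 : Register<"r0">;
  def R1 : Register<"r1">;

  // Hypothetical register pair; each half is reachable through a named index.
  def R0_R1 : Register<"r0_r1"> {
    let SubRegs = [R0, R1];
    let SubRegIndices = [lo_half, hi_half];
  }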
llvm-svn: 104704
A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
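For the NEON case mentioned above, the composites are spelled out explicitly
on the Q register. A rough sketch, with illustrative names rather than the
exact in-tree ARM definitions:

  // Assumes dsub_0/dsub_1, ssub_0..ssub_3 and the D registers are already
  // defined elsewhere in the .td file.
  def Q0 : Register<"q0"> {
    let SubRegs = [D0, D1];
    let SubRegIndices = [dsub_0, dsub_1];
    // Both D subregs expose ssub_0/ssub_1, so the second D register's
    // S subregs are remapped to ssub_2 and ssub_3 at the Q level.
    let CompositeIndices = [(ssub_2 dsub_1, ssub_0),
                            (ssub_3 dsub_1, ssub_1)];
  }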
llvm-svn: 104654
This passes lit tests, but I'll give it a go through the buildbots to smoke out
any remaining places that depend on the old SubRegIndex numbering.
Then I'll remove NumberHack entirely.
llvm-svn: 104615
structure that represents a mapping without any dependencies on SubRegIndex
numbering.
This brings us closer to being able to remove the explicit SubRegIndex
numbering, and it is now possible to specify any mapping without inventing
*_INVALID register classes.
llvm-svn: 104563
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.
llvm-svn: 104492
and %rcr_, leaving just %cr_ which is what people expect.
Updated the disassembler to support this unified register set.
Added a testcase to verify that the registers continue to be
decoded correctly.
llvm-svn: 103196
and diagnostic groups. This allows the compiler to group
diagnostics together (e.g. "Logic Warning",
"Format String Warning", etc) like the static analyzer does.
This is not exposed through anything in the compiler yet.
llvm-svn: 103050
sub-register indices and outputs a single super register which is formed from
a consecutive sequence of registers.
This is used as a register allocation / coalescing aid, and it is useful to
represent instructions that output register pairs / quads. For example,
v1024, v1025 = vload <address>
where v1024 and v1025 form a register pair.
This really should be modelled as
v1024<3>, v1025<4> = vload <address>
but it would violate SSA property before register allocation is done.
Currently we use insert_subreg to form the super register:
v1026 = implicit_def
v1027 = insert_subreg v1026, v1024, 3
v1028 = insert_subreg v1027, v1025, 4
...
= use v1024
= use v1028
But this adds pseudo live interval overlap between v1024 and v1025.
We can now model it as
v1024, v1025 = vload <address>
v1026 = REG_SEQUENCE v1024, 3, v1025, 4
...
= use v1024
= use v1026
After coalescing, it will be
v1026<3>, v1026<4> = vload <address>
...
= use v1026<3>
= use v1026
llvm-svn: 102815
code. It used to #include the enhanced disassembly
information for the targets it supported straight
out of lib/Target/{X86,ARM,...} but now it uses a
new interface provided by MCDisassembler, and (so
far) implemented by X86 and ARM.
Also removed hacky #define-controlled initialization
of targets in edis. If clients only want edis to
initialize a limited set of targets, they can set
--enable-targets on the configure command line.
llvm-svn: 101179
We are bound to fail! For proper disassembly, the well-known encoding bits
of the instruction must be fully specified.
This also removes pseudo instructions from consideration for disassembly,
which is a better design and less fragile than name matching.
llvm-svn: 100899
such that the non-VFP versions have no implicit defs of VFP registers.
If any callee-saved VFP registers are marked as having been defined, the
prologue/epilogue code will try to save and restore them.
Radar 7770432.
llvm-svn: 100892
I also added a rule to the ARM target's Makefile to
build the ARM-specific instruction information table
for the enhanced disassembler.
I will add the test harness for all this stuff in
a separate commit.
llvm-svn: 100735
argument that had to be between 0 and 7 to have any value,
firing an assert later in the AsmPrinter. Now, the
disassembler rejects instructions with out-of-range values
for that immediate.
llvm-svn: 100694
When a target instruction wants to set target-specific flags, it should simply
set bits in the TSFlags bit vector defined in the Instruction TableGen class.
This works well because TableGen resolves member references late:
  class I : Instruction {
    AddrMode AM = AddrModeNone;
    let TSFlags{3-0} = AM.Value;
  }

  let AM = AddrMode4 in
  def ADD : I;
TSFlags gets the expected bits from AddrMode4 in this example.
llvm-svn: 100384
backend (ARMDecoderEmitter) which emits the decoder functions for ARM and Thumb,
and the disassembler core which invokes the decoder function and builds up the
MCInst based on the decoded Opcode.
Reviewed by Chris Lattner and Bob Wilson.
llvm-svn: 100233
doesn't need to be stable because the patterns are fully ordered.
Add a first level sort predicate that orders patterns in this
order: 1) scalar integer operations 2) scalar floating point
3) vector int 4) vector float. This is a trivial sort on their
top level pattern type so it is nice and transitive. The
benefit of doing this is that simple integer operations are
much more common than insane vector things and isel was trying
to match the big complex vector patterns before the simple
ones because the complexity of the vector operations was much
higher. Since they can't both match, it is best (for compile
time) to try the simple integer ones first.
This cuts down the # failed match attempts on real code by
quite a bit, for example, this reduces backtracks on crafty
(as a random example) from 228285 -> 188369.
llvm-svn: 99797
patterns within the generated matcher. This works great except
that the sort fails because the relation defined isn't
transitive. I have a much simpler solution coming next, but want
to archive the code.
llvm-svn: 99795
and those derived from them. These are obnoxious because
they were written as: PatLeaf<(bitconvert)>. Not having an
argument was foiling the addition of better type checking that the
operand count matches what is required (in this case, bitconvert
always requires an operand!).
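For contrast, a fragment that does give the node its required operand could
be written like this (the fragment name is purely illustrative):

  // Hypothetical fragment: the bitconvert node gets the single operand
  // it requires.
  def any_bitconvert : PatFrag<(ops node:$src), (bitconvert node:$src)>;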
llvm-svn: 99759
transforming it into (add (i32 GPR), 4). This allows us to write type-generic
multi patterns and have tblgen automatically drop the bitconvert when the
types align, which lets us fold an extra load in the changed testcase.
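A rough sketch of the kind of type-generic pattern this enables; the register
classes, instructions, and addressing-mode operand below are placeholders, not
real definitions:

  // Hypothetical multiclass: when MemTy and ValTy are instantiated to the
  // same type, tblgen now folds the bitconvert away instead of rejecting
  // the pattern.
  multiclass StoreAsInt<ValueType MemTy, ValueType ValTy,
                        RegisterClass RC, Instruction Inst> {
    def : Pat<(store (MemTy (bitconvert (ValTy RC:$val))), addr:$ptr),
              (Inst RC:$val, addr:$ptr)>;
  }

  defm StoreF32 : StoreAsInt<i32, f32, FPR, STW_F>;  // bitconvert kept
  defm StoreI32 : StoreAsInt<i32, i32, GPR, STW_I>;  // bitconvert dropped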
llvm-svn: 99756
issues to get here. We now trim the result type list of the
CompleteMatch or MorphNodeTo operation to be the same size as the
thing we're matching. This means that if you match (add GPR, GPR)
with an instruction that produces a normal result and a flag, we
now trim the result in tblgen instead of having to do it
dynamically. This exposed a bunch of inconsistencies in result
counting that happened to be getting lucky since the days of the
old isel.
llvm-svn: 99728
from two places in CodeGenDAGPatterns.cpp, and
use it in DAGISelMatcherGen.cpp instead of using
an incorrect predicate that happened to get lucky
on our current targets.
llvm-svn: 99726
results forward. We can now handle an instruction that
produces one implicit def and one result instead of one or
the other when not at the root of the pattern.
llvm-svn: 99725
bytes instead of one byte. This is important because
we're running up to too many opcodes to fit in a byte
and it is aggravated by FIRST_TARGET_MEMORY_OPCODE
making the numbering sparse. This just bites the
bullet and bloats out the table. In practice, this
increases the size of the x86 isel table from 74.5K
to 76K. I think we'll cope :)
This fixes rdar://7791648
llvm-svn: 99494
If a TableGen class has an initializer expression containing an X.Y subexpression,
AND X depends on template parameters,
AND those template parameters have defaults,
AND some parameters with defaults are beyond position 1,
THEN parts of the initializer expression are evaluated prematurely with the default values when the first explicit template parameter is substituted, before the remaining explicit template parameters have been substituted.
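A hypothetical reduction of those conditions (class and field names are
invented for illustration, not the actual failing case):

  class Inner<int n> {
    int Field = n;
  }

  // Defaulted template parameters beyond position 1, and an initializer
  // containing an X.Y subexpression (In.Field) that depends on them.
  class Outer<int a, Inner In = Inner<0>, int b = 1> {
    int Premature = In.Field;
  }

  // Before the fix, Premature could be evaluated with the default Inner<0>
  // as soon as the first explicit argument (5) was substituted.
  def Example : Outer<5, Inner<7>, 2>;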
llvm-svn: 99492
of runs without leak checking. We add -vg to the triple for non-checked runs,
or -vg_leak for checked runs. Also use this to XFAIL the TableGen tests, since
tablegen leaks like a sieve. This includes some valgrindArgs refactoring.
llvm-svn: 99103
to maintain a list of types (one for each result of
the node) instead of a single type. There are liberal
hacks added to emulate the old behavior in various
situations, but they can start dissolving now.
llvm-svn: 98999
Python 2.4 always hits this bug: http://bugs.python.org/issue1731717
when running check-lit on multi-core systems.
Setting numThreads to 1 makes it slower, but at least the results reported are
correct.
llvm-svn: 98969
record* -> instrinfo instead of std::string -> instrinfo.
This speeds up tblgen on cellcpu from 7.28 -> 5.98s with a debug
build (20%).
llvm-svn: 98916
like this:
  def : Pat<(add ...),
            (FOOINST)>;
when FOOINST only has a single implicit def (e.g. of R1). This is handled
as if it had been written as (set R1, (FOOINST ...)).
llvm-svn: 98897