registers (not as the max number of registers).
Change toSpill from a std::set into a std::vector<bool>.
Use the reverse iterator adapter to do a reverse scan of allocatable
registers.
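In sketch form, with illustrative names (this is not the actual allocator
code), the change amounts to:

    #include <vector>

    // toSpill is now indexed directly by register number, so membership
    // tests are O(1) bit lookups instead of std::set searches.
    std::vector<bool> toSpill;   // sized to the number of registers

    // Mark a register for spilling; formerly toSpill.insert(reg).
    void markForSpill(unsigned reg) { toSpill[reg] = true; }

    // Reverse scan of the allocatable registers via the reverse
    // iterator adapter.
    unsigned findSpillCandidate(const std::vector<unsigned> &allocatable) {
      for (std::vector<unsigned>::const_reverse_iterator
             I = allocatable.rbegin(), E = allocatable.rend(); I != E; ++I)
        if (toSpill[*I])
          return *I;
      return 0; // nothing marked
    }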
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11061 91177308-0d34-0410-b5e6-96231b3b80d8
is a move between two registers, at least one of the registers is
virtual, and the two live intervals do not overlap.
This results in about a 40% reduction in intervals, a 30% decrease in
the register allocator's running time, and a 20% increase in peephole
optimizations (mainly move eliminations).
The option can be enabled by passing -join-liveintervals where
appropriate.
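In sketch form, the joining test for a copy looks roughly like this
(helper names are illustrative, not the actual implementation):

    // A copy can be coalesced when it moves between two registers,
    // at least one register is virtual, and the source and destination
    // live intervals do not overlap.
    bool canJoinIntervals(unsigned SrcReg, unsigned DstReg,
                          const LiveInterval &SrcInt,
                          const LiveInterval &DstInt) {
      if (!isVirtualRegister(SrcReg) && !isVirtualRegister(DstReg))
        return false;                  // need at least one virtual register
      return !SrcInt.overlaps(DstInt); // live ranges must be disjoint
    }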
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10965 91177308-0d34-0410-b5e6-96231b3b80d8
virtReg lives on the stack. Now a virtual register has an entry in the
virtual->physical map or the virtual->stack slot map but never in
both.
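The invariant, in sketch form (map names are illustrative):

    #include <cassert>
    #include <map>

    std::map<unsigned, unsigned> Virt2PhysMap;      // virtReg -> physReg
    std::map<unsigned, int>      Virt2StackSlotMap; // virtReg -> stack slot

    // A virtual register lives on the stack iff it has a stack slot;
    // it must never appear in both maps at once.
    bool virtRegLivesOnStack(unsigned virtReg) {
      bool onStack = Virt2StackSlotMap.count(virtReg);
      assert(!(onStack && Virt2PhysMap.count(virtReg)) &&
             "virtReg mapped to both a physreg and a stack slot");
      return onStack;
    }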
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10958 91177308-0d34-0410-b5e6-96231b3b80d8
of the register allocator as follows:
          before   after
mesa      2.3790   1.5994
vpr       2.6008   1.2078
gcc       1.9840   0.5273
mcf       0.2569   0.0470
eon       1.8468   1.4359
twolf     0.9475   0.2004
burg      1.6807   1.3300
lambda    1.2191   0.3764
Speedups range anywhere from 30% to over 400% :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10712 91177308-0d34-0410-b5e6-96231b3b80d8
saved register, it has a longer free range than ECX (which is defined
every time there is a function call), which makes ECX a better register
to reserve.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10635 91177308-0d34-0410-b5e6-96231b3b80d8
which denotes the register we would like to be assigned to (virtual or
physical). In register allocation, if this hint exists and we can map
it to a physical register (it is either a physical register itself or
a virtual register that has already been assigned a physical one), we
use that register, if it is available, instead of an arbitrary one
from the free pool.
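In sketch form, the hint lookup amounts to (helper names are
illustrative, not the actual implementation):

    // Try to turn the hint into a physical register: either it already
    // is one, or it is a virtual register that has been assigned one.
    unsigned resolveHint(unsigned hintReg) {
      if (isPhysicalRegister(hintReg))
        return hintReg;
      if (unsigned phys = getPhysAssignedTo(hintReg))
        return phys;
      return 0; // hint cannot be used yet
    }

    unsigned pickRegister(unsigned hintReg) {
      unsigned hint = resolveHint(hintReg);
      if (hint && isFree(hint))
        return hint;             // prefer the hinted register
      return anyFreeRegister();  // otherwise take one from the free pool
    }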
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10634 91177308-0d34-0410-b5e6-96231b3b80d8
nesting level when computing it. Right now the allocator uses:
w = sum_over_defs_uses( 10 ^ nesting level );
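For instance, a use inside a doubly nested loop contributes 10^2 = 100
to the weight. As code, the computation is roughly (illustrative, not
the actual implementation):

    #include <cmath>
    #include <vector>

    // Each def or use contributes 10^loopDepth, so operands in deeply
    // nested loops dominate the spill decision.
    double computeWeight(const std::vector<unsigned> &defUseLoopDepths) {
      double w = 0;
      for (unsigned i = 0, e = defUseLoopDepths.size(); i != e; ++i)
        w += std::pow(10.0, (double)defUseLoopDepths[i]);
      return w;
    }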
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10569 91177308-0d34-0410-b5e6-96231b3b80d8
instruction pass. This also fixes all remaining bugs, so the new
allocator now passes all tests under test/Programs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10515 91177308-0d34-0410-b5e6-96231b3b80d8
a) remove opIsUse(), opIsDefOnly(), opIsDefAndUse()
b) add isUse(), isDef()
c) rename opHiBits32() to isHiBits32(),
opLoBits32() to isLoBits32(),
opHiBits64() to isHiBits64(),
opLoBits64() to isLoBits64().
This results in much more readable code; for example, compare
"op.opIsDefOnly() || op.opIsDefAndUse()" to "op.isDef()", a pattern
used very often in the code.
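In sketch form, the new predicates collapse the old three-way
distinction (field names are illustrative):

    // An operand can be a def, a use, or both (def-and-use); isDef()
    // is true for def-only and def-and-use alike, and likewise isUse().
    struct OperandSketch {
      bool IsDef, IsUse;
      bool isDef() const { return IsDef; }
      bool isUse() const { return IsUse; }
    };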
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10461 91177308-0d34-0410-b5e6-96231b3b80d8
bug where spill instructions were added to the next basic block
instead of the end of the current one if the instruction that required
the spill was the last in the block.
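In sketch form, the fix keeps the insertion point inside the current
block (iterator handling is illustrative, not the actual code):

    // Insert the spill code right after MI. When MI is the last
    // instruction, the insertion point is MBB.end(), which appends to
    // *this* block rather than spilling over into the next one.
    void insertSpillAfter(MachineBasicBlock &MBB,
                          MachineBasicBlock::iterator MI,
                          MachineInstr *Spill) {
      MachineBasicBlock::iterator InsertPt = MI;
      ++InsertPt;
      MBB.insert(InsertPt, Spill);
    }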
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@10272 91177308-0d34-0410-b5e6-96231b3b80d8