Clobber ranges are no longer used when joining physical registers.
Instead, all aliases are checked for interference.
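As a rough illustration (simplified stand-in types, not the LLVM API), joining with a physical register amounts to rejecting the join if the virtual register's interval overlaps the interval of the physical register or of any alias:

#include <vector>

struct LiveRange {
  int Start, End;                                 // half-open [Start, End)
  bool overlaps(const LiveRange &O) const {
    return Start < O.End && O.Start < End;
  }
};

struct SimpleInterval {
  std::vector<LiveRange> Ranges;
  bool overlaps(const SimpleInterval &O) const {
    for (const LiveRange &A : Ranges)
      for (const LiveRange &B : O.Ranges)
        if (A.overlaps(B))
          return true;
    return false;
  }
};

// Hypothetical check: the virtual register must not interfere with the
// physical register's interval or with the interval of any of its aliases.
bool canJoinWithPhysReg(
    const SimpleInterval &Virt,
    const std::vector<const SimpleInterval *> &PhysAndAliases) {
  for (const SimpleInterval *I : PhysAndAliases)
    if (Virt.overlaps(*I))
      return false;                               // interference found
  return true;
}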
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113084 91177308-0d34-0410-b5e6-96231b3b80d8
register at a time. This turns out to be slightly faster than iterating over
instructions, but more importantly, it allows us to compute spill weights for
new registers created after the spill weight pass has run.
Also compute the allocation hint at the same time as the spill weight. This
allows us to use the spill weight as a cost metric for copies, and choose the
most profitable hint if there is more than one possibility.
The new hints provide a very small (< 0.1%) but universal code size improvement.
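A hedged sketch of the combined computation (illustrative names only, not the actual pass): every copy contributes its weight to the spill weight, and the copy partner with the largest accumulated weight becomes the hint:

#include <map>
#include <utility>
#include <vector>

struct CandidateHint {
  float SpillWeight = 0;                 // total weight of all uses and copies
  unsigned HintReg = 0;                  // most profitable copy partner, if any
};

CandidateHint computeWeightAndHint(
    const std::vector<std::pair<unsigned, float>> &Copies,  // (other reg, copy weight)
    float NonCopyWeight) {
  CandidateHint Result;
  Result.SpillWeight = NonCopyWeight;
  std::map<unsigned, float> PerPartner;
  for (const auto &C : Copies) {
    Result.SpillWeight += C.second;      // copies count toward the spill weight
    PerPartner[C.first] += C.second;     // and toward that partner's hint score
  }
  float Best = 0;
  for (const auto &P : PerPartner)
    if (P.second > Best) {               // pick the most profitable partner
      Best = P.second;
      Result.HintReg = P.first;
    }
  return Result;
}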
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110631 91177308-0d34-0410-b5e6-96231b3b80d8
After heavy editing of a live interval, it is much easier to simply renumber the
live values instead of trying to keep track of the unused ones.
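A small illustration of the idea with made-up types: collect the value numbers still referenced by segments and hand out fresh consecutive ids, so unused values simply disappear:

#include <map>
#include <vector>

struct Seg { int Start, End; unsigned ValNo; };

// Returns the number of values after renumbering; rewrites ValNo in place.
unsigned renumberValues(std::vector<Seg> &Segments) {
  std::map<unsigned, unsigned> NewId;            // old value number -> new id
  for (Seg &S : Segments) {
    auto It = NewId.find(S.ValNo);
    if (It == NewId.end())
      It = NewId.insert({S.ValNo, (unsigned)NewId.size()}).first;
    S.ValNo = It->second;                        // unused old ids simply vanish
  }
  return (unsigned)NewId.size();
}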
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110463 91177308-0d34-0410-b5e6-96231b3b80d8
LiveInterval::overlapsFrom dereferences end() if it is called on an empty
interval.
It would be reasonable to just return false, since an empty interval doesn't
overlap anything, but I want to know who is doing it first.
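A tiny sketch, assuming an assert is the mechanism used here (illustrative code, not the real method):

#include <cassert>
#include <vector>

struct Rng { int Start, End; };

bool overlapsFromSketch(const std::vector<Rng> &Ranges, unsigned FromIdx) {
  assert(!Ranges.empty() && "empty interval - who is calling this?");
  // ... the real overlap search starting at FromIdx would go here ...
  (void)FromIdx;
  return false;
}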
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108264 91177308-0d34-0410-b5e6-96231b3b80d8
The VNInfo.kills vector was almost unused except for all the code keeping it
updated. The few places using it were easily rewritten to check for interval
ends instead.
The two new methods LiveInterval::killedAt and killedInRange are replacements.
This brings us down to 3 independent data structures tracking kills.
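Roughly, both queries reduce to looking at segment end points; a simplified model (not the real LiveInterval interface):

#include <algorithm>
#include <vector>

struct Segment { int Start, End; };

struct SimpleInterval {
  std::vector<Segment> Segments;        // sorted, non-overlapping

  // True if any segment ends exactly at Idx.
  bool killedAt(int Idx) const {
    for (const Segment &S : Segments)
      if (S.End == Idx)
        return true;
    return false;
  }

  // True if any segment ends inside the half-open range [Lo, Hi).
  bool killedInRange(int Lo, int Hi) const {
    return std::any_of(Segments.begin(), Segments.end(),
                       [=](const Segment &S) {
                         return S.End >= Lo && S.End < Hi;
                       });
  }
};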
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106905 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes PR7479 and PR7485. The test cases from those PRs are big, so not
included. However, PR7485 comes from self hosting on FreeBSD, so we will surely
hear about any regression.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106811 91177308-0d34-0410-b5e6-96231b3b80d8
When asked to spill an interval, this spiller will first try to
break the interval up into its component values. Single value intervals and
intervals which have already been split (or are the result of previous splits)
are spilled by the default spiller.
Splitting intervals as described above may improve the performance of generated
code in some circumstances. This work is experimental, however, and it still
miscompiles many benchmarks. It's not recommended for general use yet.
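The splitting step amounts to grouping an interval's segments by value number; a simplified sketch with made-up types:

#include <map>
#include <utility>
#include <vector>

struct Seg { int Start, End; int ValNo; };

// One component interval per value number; single-value intervals would be
// handed to the default spiller unchanged.
std::vector<std::vector<Seg>> splitByValue(const std::vector<Seg> &Interval) {
  std::map<int, std::vector<Seg>> ByValue;
  for (const Seg &S : Interval)
    ByValue[S.ValNo].push_back(S);      // group segments by their value number

  std::vector<std::vector<Seg>> Result;
  for (auto &Entry : ByValue)
    Result.push_back(std::move(Entry.second));
  return Result;
}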
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90951 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
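For a rough picture of what such a numbering provides (the 4-sub-slot layout below is an illustrative assumption, not necessarily the exact scheme):

#include <cstdio>
#include <vector>

enum Slot { LOAD = 0, USE = 1, DEF = 2, STORE = 3 };

struct SlotIdx {
  unsigned Index;                                  // base + slot
  bool operator<(SlotIdx O) const { return Index < O.Index; }
};

struct Numbering {
  std::vector<unsigned> BaseOfInstr;               // instruction -> base index

  void renumber(unsigned NumInstrs) {
    BaseOfInstr.resize(NumInstrs);
    for (unsigned i = 0; i != NumInstrs; ++i)
      BaseOfInstr[i] = i * 4;                      // 4 sub-slots per instruction
  }

  SlotIdx getIndex(unsigned Instr, Slot S) const {
    return SlotIdx{BaseOfInstr[Instr] + S};
  }
};

int main() {
  Numbering N;
  N.renumber(3);
  std::printf("def slot of instr 1 = %u\n", N.getIndex(1, DEF).Index); // 6
  return 0;
}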
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85979 91177308-0d34-0410-b5e6-96231b3b80d8
a new class, MachineInstrIndex, which hides arithmetic details from
most clients. This is a step towards allowing the register allocator
to update/insert code during allocation.
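A minimal sketch of such an index class (illustrative names and spacing, not the original MachineInstrIndex interface):

class InstrIndex {
  static constexpr unsigned SlotsPerInstr = 4;  // assumed spacing between instructions
  unsigned Index = 0;
  explicit InstrIndex(unsigned I) : Index(I) {}

public:
  InstrIndex() = default;
  static InstrIndex fromRaw(unsigned I) { return InstrIndex(I); }

  bool operator<(InstrIndex O) const { return Index < O.Index; }
  bool operator==(InstrIndex O) const { return Index == O.Index; }

  // Clients ask for "the next instruction" instead of adding constants by hand.
  InstrIndex nextInstr() const { return InstrIndex(Index + SlotsPerInstr); }
};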
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81040 91177308-0d34-0410-b5e6-96231b3b80d8
range's weight properly. This is turned off right now in the sense that
you'll get an assert if you get into a situation that can only be caused
by an iterative coalescer. All other code paths operate exactly as
before so there is no functional change with this patch. The asserts
should be disabled if/when an iterative coalescer gets added to trunk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76680 91177308-0d34-0410-b5e6-96231b3b80d8
as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.
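A minimal model of the representation (assumed names):

#include <vector>

struct KillPoint {
  unsigned Index;       // slot index of the kill
  bool IsPHIKill;       // true if the value is killed by a PHI join
};

struct ValueInfo {
  std::vector<KillPoint> Kills;

  bool hasPHIKill() const {
    for (const KillPoint &K : Kills)
      if (K.IsPHIKill)
        return true;
    return false;
  }
};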
A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test
condition has been updated to reflect this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@75097 91177308-0d34-0410-b5e6-96231b3b80d8
- Change the register allocation hint to a pair of unsigned integers. The hint type is either zero (which means prefer the register specified as the second part of the pair) or entirely target dependent. (A rough sketch of this pair follows at the end of this entry.)
- Allow targets to specify alternative register allocation orders based on the allocation hint.
Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register allocation. It has to be done this way to shorten the live intervals of the base and offset registers, e.g.:
v1025 = LDR v1024, 0
v1026 = LDR v1024, 4
=>
v1025, v1026 = LDRD v1024, 0
If this transformation isn't done before allocation, v1024 will overlap v1025, which makes it more difficult to allocate a register pair.
- Even with the register allocation hint, it may not be possible to get the desired allocation. In that case, the post-allocation load / store multiple pass must fix the ldrd / strd instructions. They can either become ldm / stm instructions or be turned back into a pair of ldr / str instructions.
This is work in progress, not yet enabled.
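A rough sketch of the hint pair and how a target might use it to reorder its allocation order (illustrative only; not the actual TargetRegisterInfo interface):

#include <utility>
#include <vector>

using RegHint = std::pair<unsigned, unsigned>;   // (hint type, register)

std::vector<unsigned>
getAllocationOrder(const std::vector<unsigned> &DefaultOrder, RegHint Hint) {
  std::vector<unsigned> Order;
  if (Hint.first == 0 && Hint.second != 0)
    Order.push_back(Hint.second);                // try the hinted register first
  for (unsigned Reg : DefaultOrder)
    if (Reg != Hint.second)
      Order.push_back(Reg);                      // then the normal order
  return Order;
}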
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73381 91177308-0d34-0410-b5e6-96231b3b80d8
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.
This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.
Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.
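A worked value-level example of why the transformation is safe when the high bits are masked off (plain C++, just demonstrating the equivalence):

#include <cassert>
#include <cstdint>

uint64_t maskedAdd64(uint64_t x, uint64_t y) {
  return (x + y) & 0xFFFFFFFFu;                     // high part masked off
}

uint64_t narrowAdd(uint64_t x, uint64_t y) {
  uint32_t lo = uint32_t(x) + uint32_t(y);          // (add (trunc x), (trunc y))
  return uint64_t(lo);                              // zext back to 64 bits
}

int main() {
  assert(maskedAdd64(0x1234567890ULL, 0xFEDCBA0987ULL) ==
         narrowAdd(0x1234567890ULL, 0xFEDCBA0987ULL));
  return 0;
}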
Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.
Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case where a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.
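The straddling case, in a simplified form (not the MergeInClobberRanges signature): the clobber range contributes a piece on each side of the existing range, so two ranges must be added:

#include <vector>

struct Range { int Start, End; };       // half-open [Start, End)

// Parts of Clobber not already covered by Existing.
std::vector<Range> uncoveredParts(Range Clobber, Range Existing) {
  std::vector<Range> Out;
  if (Clobber.Start < Existing.Start)
    Out.push_back({Clobber.Start, Existing.Start});   // piece before
  if (Clobber.End > Existing.End)
    Out.push_back({Existing.End, Clobber.End});       // piece beyond
  return Out;                           // both pieces when it straddles
}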
Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@68576 91177308-0d34-0410-b5e6-96231b3b80d8
1. Use the same value# to represent unknown values being merged into sub-registers.
2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66610 91177308-0d34-0410-b5e6-96231b3b80d8
Live interval reconstruction needs to account for this, and scour its maps to
prevent dangling references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@63558 91177308-0d34-0410-b5e6-96231b3b80d8