in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.
bb1:
         = vr1204   (use / kill)   <= new interval starts and ends here.
  ...
  ...
  vr1204 =          (new def)      <= start a new interval here.
         = vr1204   (use)
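A minimal C++ sketch of the bookkeeping, using made-up stand-in types rather
than LLVM's real live interval classes: close the current range at the kill and
open a fresh one at the new def, instead of stretching a single range across
the dead gap.

#include <cassert>
#include <vector>

// Illustrative stand-ins, not LLVM's data structures.
struct LiveRange { unsigned Start, End; };             // half-open [Start, End)
struct LiveInterval { std::vector<LiveRange> Ranges; };

// KillIdx is the last use of the old value, DefIdx the new def later in the
// same (split) block.  End the old range at the kill and begin a new one at
// the def so the interval is not live across the dead instructions in between.
void startRangeAtNewDef(LiveInterval &LI, unsigned KillIdx, unsigned DefIdx) {
  assert(DefIdx > KillIdx && "new def must follow the kill");
  if (!LI.Ranges.empty())
    LI.Ranges.back().End = KillIdx + 1;        // old range ends after the kill
  LI.Ranges.push_back({DefIdx, DefIdx + 1});   // new range starts at the def
}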
llvm-svn: 44436
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills: since no copies are inserted, the split intervals are *connected*
through spills and reloads (or rematerialization). The newly created intervals
can be spilled again; in that case, since such an interval does not span
multiple basic blocks, it is spilled in the usual manner. However, it can reuse
the same stack slot as the previously split interval.
This is currently controlled by -split-intervals-at-bb.
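Roughly, the splitting step looks like the following C++ sketch; the struct and
function names are invented for illustration and are not the spiller's actual
interfaces.

#include <algorithm>
#include <map>
#include <vector>

// Illustrative types only.
struct SplitInterval { unsigned BB, Start, End, StackSlot; };

// DefsUsesByBB maps a basic block number to the instruction indices where the
// spilled value is defined or used.  Each block gets one interval covering its
// first to last def/use; the intervals stay connected through the shared stack
// slot (reload on entry, store on exit), so no copies are inserted.
std::vector<SplitInterval>
splitAtBBBoundaries(const std::map<unsigned, std::vector<unsigned>> &DefsUsesByBB,
                    unsigned StackSlot) {
  std::vector<SplitInterval> Result;
  for (const auto &Entry : DefsUsesByBB) {
    const std::vector<unsigned> &Points = Entry.second;
    if (Points.empty())
      continue;
    unsigned First = *std::min_element(Points.begin(), Points.end());
    unsigned Last  = *std::max_element(Points.begin(), Points.end());
    Result.push_back({Entry.first, First, Last, StackSlot});
  }
  return Result;
}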
llvm-svn: 44198
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86, mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
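Sketched in C++ with invented names (the real logic lives in the coalescer and
the allocators), the two steps are approximately:

#include <map>
#include <set>

// Illustrative model of the two steps; not the allocator's interfaces.
struct CopyMI { unsigned DstVReg, SrcVReg; };

// 1. Bias allocation: record the source's physical register as the preferred
//    assignment for the copy's destination.
void setCopyPreference(std::map<unsigned, unsigned> &Preference,
                       const CopyMI &C, unsigned SrcPhysReg) {
  Preference[C.DstVReg] = SrcPhysReg;
}

// 2. After allocation: if the destination can legally be moved into the
//    source's register (e.g. GR32_ is a subset of GR32), retarget it so the
//    copy becomes an identity move that can be deleted.
bool tryEliminateCopy(std::map<unsigned, unsigned> &Assignment,
                      const std::set<unsigned> &DstLegalRegs,
                      const CopyMI &C) {
  unsigned SrcReg = Assignment[C.SrcVReg];
  if (Assignment[C.DstVReg] == SrcReg)
    return true;                    // already an identity copy
  if (!DstLegalRegs.count(SrcReg))
    return false;                   // source reg not usable by the destination
  Assignment[C.DstVReg] = SrcReg;   // now dst == src, the copy is dead
  return true;
}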
llvm-svn: 43662
(almost) a register copy. However, it was always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.
llvm-svn: 42899
Changes related modules so VNInfo's are not copied. This decreases
copy coalescing time by 45% and overall compilation time by 10% on siod.
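In effect the change is to hand around a pointer to one shared VNInfo record
instead of embedding copies of it; a simplified, illustrative layout (not the
actual class definitions):

#include <vector>

// Simplified stand-ins for illustration only.
struct VNInfo {
  unsigned Id;
  unsigned DefIndex;
  std::vector<unsigned> Kills;
};

struct LiveRange {
  unsigned Start, End;
  VNInfo *ValNo;   // share one VNInfo record instead of copying it per range
};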
llvm-svn: 41579
simultaneously. Move that pass to SimpleRegisterCoalescing.
This makes it easier to implement alternative register allocation and
coalescing strategies while maintaining reuse of the existing live
interval analysis.
llvm-svn: 37520
- A register def / use now implicitly affects sub-register liveness but does
not affect liveness information of super-registers.
- Def of a larger register (if followed by a use later) is treated as
read/mod/write of a smaller register.
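A small C++ illustration of the first rule, with an x86-style alias table made
up for the example (not the generated target descriptions): a def or use
updates the register and its sub-registers, and leaves every super-register
alone.

#include <map>
#include <set>
#include <vector>

// Illustrative alias table for the example only.
enum Reg { EAX, AX, AH, AL };
static const std::map<Reg, std::vector<Reg>> SubRegs = {
    {EAX, {AX, AH, AL}}, {AX, {AH, AL}}};

static std::vector<Reg> withSubRegs(Reg R) {
  std::vector<Reg> Out{R};
  auto It = SubRegs.find(R);
  if (It != SubRegs.end())
    Out.insert(Out.end(), It->second.begin(), It->second.end());
  return Out;
}

// One backward liveness step: a def of AX kills AX/AH/AL but says nothing
// about EAX; a use of AX makes AX/AH/AL live but, again, not EAX.
void stepLiveness(std::set<Reg> &Live, const std::vector<Reg> &Defs,
                  const std::vector<Reg> &Uses) {
  for (Reg D : Defs)
    for (Reg R : withSubRegs(D))
      Live.erase(R);
  for (Reg U : Uses)
    for (Reg R : withSubRegs(U))
      Live.insert(R);
}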
llvm-svn: 36434
long live interval that has low usage density.
1. Change order of coalescing to join physical registers with virtual
registers first before virtual register intervals become too long.
2. Check size and usage density to determine if it's worthwhile to join.
3. If joining is aborted, set the virtual register live interval's allocation
preference field to the physical register.
4. The register allocator should try to allocate the preferred register first
(if available) to create identity moves that can be eliminated.
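A hand-wavy C++ sketch of points 2-4; the thresholds and names are invented for
illustration and are not the heuristics actually used.

#include <cstddef>

// Illustrative numbers and names only.
struct VRegInterval {
  std::size_t Length;             // length of the live interval
  std::size_t NumDefsUses;        // rough usage count
  unsigned PreferredPhysReg = 0;  // allocation preference (0 = none)
};

// Point 2: joining a physical register into a long, sparsely used virtual
// interval ties the physical register up for too long, so skip the join.
bool worthJoiningWithPhysReg(const VRegInterval &VI) {
  const std::size_t LengthLimit = 1000;  // assumed threshold, illustration only
  const double MinDensity = 0.005;       // assumed threshold, illustration only
  if (VI.Length <= LengthLimit)
    return true;
  return double(VI.NumDefsUses) / double(VI.Length) >= MinDensity;
}

// Points 3-4: when the join is aborted, remember the physical register as the
// preference so the allocator can still end up creating an identity move.
void abortJoin(VRegInterval &VI, unsigned PhysReg) {
  VI.PreferredPhysReg = PhysReg;
}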
llvm-svn: 36218
- When coalescing a copy MI, if its destination is "dead", propagate the
property to the source MI's destination if there are no intervening uses.
- Detect dead function live-in's and remove them.
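The first point, sketched against a toy instruction representation (not
MachineInstr):

#include <vector>

// Illustrative instruction model for the example only.
struct Instr {
  unsigned Def = 0;             // register defined (0 = none)
  std::vector<unsigned> Uses;   // registers read
  bool DefIsDead = false;
};

// If the copy at CopyIdx ("dst = src") has a dead destination, and src has no
// uses between its def at DefIdx and the copy, then src's def is dead as well.
void propagateDeadness(std::vector<Instr> &MBB, unsigned DefIdx,
                       unsigned CopyIdx) {
  Instr &Copy = MBB[CopyIdx];
  if (!Copy.DefIsDead || Copy.Uses.size() != 1)
    return;
  unsigned Src = Copy.Uses[0];
  for (unsigned i = DefIdx + 1; i < CopyIdx; ++i)
    for (unsigned U : MBB[i].Uses)
      if (U == Src)
        return;                    // src is still read: nothing to propagate
  MBB[DefIdx].DefIsDead = true;    // no intervening uses: mark src's def dead
}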
llvm-svn: 34383
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.
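The resulting pattern, in a self-contained form with made-up names:

#include <iostream>

// Illustrative shape of the pattern; the names are not LLVM's.
struct Value {
  void print(std::ostream &OS) const { OS << "value\n"; }
};

// Callers hand in a possibly-null std::ostream*; printing is skipped when it
// is null, otherwise we dispatch to the real print() on the dereferenced
// stream.
void printValue(const Value &V, std::ostream *OS) {
  if (!OS)
    return;                     // null stream means "don't print"
  V.print(*OS);
}

int main() {
  Value V;
  printValue(V, &std::cout);    // prints
  printValue(V, nullptr);       // silently does nothing
}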
llvm-svn: 32636