Commit Graph

15 Commits

Author SHA1 Message Date
Jakob Stoklund Olesen
8976daa3cc Convert RAGreedy to LiveRegMatrix interference checking.
Stop depending on the LiveIntervalUnions in RegAllocBase, they are about
to be removed.

The changes are mostly replacing register alias iterators with regunit
iterators, and querying LiveRegMatrix instead of RegAllocBase.

InterferenceCache is converted to work with per-regunit
LiveIntervalUnions, and it checks fixed regunit interference separately,
using the fixed live intervals provided by LiveIntervalAnalysis.

The local splitting helper calcGapWeights() also considers fixed regunit
interference, which is now kept on the side.

llvm-svn: 158867
2012-06-20 22:52:26 +00:00
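
For readers unfamiliar with the query path this commit switches to, here is a minimal sketch assuming the LLVM-of-that-era signatures of LiveRegMatrix::checkInterference, LiveRegMatrix::query and MCRegUnitIterator; the wrapper functions and variable names (canAssign, hasUnitInterference, Matrix, VirtReg, PhysReg, TRI) are illustrative, not code from the commit.

```cpp
#include "llvm/CodeGen/LiveInterval.h"
#include "llvm/CodeGen/LiveRegMatrix.h"
#include "llvm/MC/MCRegisterInfo.h"
#include "llvm/Target/TargetRegisterInfo.h"

// Instead of walking the alias set of PhysReg and testing one
// LiveIntervalUnion per alias, ask the matrix, which iterates the register
// units of PhysReg and also checks fixed (reserved/regmask) interference.
bool canAssign(llvm::LiveRegMatrix &Matrix, llvm::LiveInterval &VirtReg,
               unsigned PhysReg) {
  return Matrix.checkInterference(VirtReg, PhysReg) ==
         llvm::LiveRegMatrix::IK_Free;
}

// When a caller needs per-unit detail rather than a yes/no answer, it walks
// the regunits explicitly and queries one LiveIntervalUnion per unit.
bool hasUnitInterference(llvm::LiveRegMatrix &Matrix,
                         llvm::LiveInterval &VirtReg, unsigned PhysReg,
                         const llvm::TargetRegisterInfo *TRI) {
  for (llvm::MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units)
    if (Matrix.query(VirtReg, *Units).checkInterference())
      return true;
  return false;
}
```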
Jakob Stoklund Olesen
be0b8939c0 Switch all register list clients to the new MC*Iterator interface.
No functional change intended.

Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.

This makes it possible to do so without changing all clients (again).

llvm-svn: 157854
2012-06-01 23:28:30 +00:00
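
A hedged sketch of what the churn looks like on a typical client, assuming the era's MCRegAliasIterator signature (register, MCRegisterInfo pointer, IncludeSelf flag); markAliases and the BitVector parameter are illustrative.

```cpp
#include "llvm/ADT/BitVector.h"
#include "llvm/MC/MCRegisterInfo.h"

// Mark Reg and everything aliasing it. The iterator hides the TableGen table
// layout, so clients no longer index a null-terminated alias list directly.
void markAliases(const llvm::MCRegisterInfo &MCRI, unsigned Reg,
                 llvm::BitVector &Clobbered) {
  for (llvm::MCRegAliasIterator AI(Reg, &MCRI, /*IncludeSelf=*/true);
       AI.isValid(); ++AI)
    Clobbered.set(*AI);
}
```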
Craig Topper
8cc9d75c6a Use uint16_t to store register overlaps to reduce static data.
llvm-svn: 152001
2012-03-04 10:43:23 +00:00
Jakob Stoklund Olesen
6dfa98e1c1 Fix global live range splitting regmask accuracy.
Pretend that regmask interference ends at the 'dead' slot, even when
there is other interference ending at the 'reg' slot of the same
instruction.

llvm-svn: 150531
2012-02-14 23:53:23 +00:00
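
For context on the slot names, a tiny illustration using SlotIndex's accessors (getRegSlot and getDeadSlot are real SlotIndex methods; the helper itself is hypothetical): each instruction owns a sequence of slots, normal defs and uses live at the 'reg' slot, and the 'dead' slot comes immediately after it.

```cpp
#include "llvm/CodeGen/SlotIndexes.h"

// Report a call's regmask interference as ending at the dead slot, so it does
// not collide with other interference ending at the same instruction's reg
// slot.
llvm::SlotIndex regMaskInterferenceEnd(llvm::SlotIndex CallIdx) {
  return CallIdx.getDeadSlot();
}
```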
Jakob Stoklund Olesen
b58e9ef8b1 Add a static MachineOperand::clobbersPhysReg().
It can be necessary to detach a register mask pointer from its
MachineOperand. This method is convenient for checking clobbered
physregs on a detached bitmask pointer.

llvm-svn: 150261
2012-02-10 19:23:53 +00:00
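
A sketch of the detached-pointer use case this enables; clobbersPhysReg is the static helper the commit adds, while detachRegMask and callClobbers are illustrative wrappers.

```cpp
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

// Grab the raw regmask bit vector from a call instruction. The pointer stays
// valid independently of the MachineOperand it came from.
const uint32_t *detachRegMask(const llvm::MachineInstr &Call) {
  for (unsigned i = 0, e = Call.getNumOperands(); i != e; ++i)
    if (Call.getOperand(i).isRegMask())
      return Call.getOperand(i).getRegMask();
  return nullptr;
}

// Later, test clobbers on the detached bitmask without the operand.
bool callClobbers(const uint32_t *RegMask, unsigned PhysReg) {
  return RegMask && llvm::MachineOperand::clobbersPhysReg(RegMask, PhysReg);
}
```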
Jakob Stoklund Olesen
4fe2a13535 Add register mask support to InterferenceCache.
This makes global live range splitting behave identically with and
without register mask operands.

This is not necessarily the best way of using register masks for live
range splitting.  It would be more efficient to first split global live
ranges around calls (i.e., register masks), and reserve the fine-grained
per-physreg interference guidance for global live ranges that do not
cross calls.

For now the goal is to produce identical assembly when enabling register
masks.

llvm-svn: 150259
2012-02-10 18:58:34 +00:00
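
A hedged sketch of how regmask operands can be folded into per-block interference for one physreg; getRegMaskSlotsInBlock and getRegMaskBitsInBlock are LiveIntervals accessors from this period, while collectRegMaskInterference itself is illustrative rather than the InterferenceCache code.

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/LiveIntervalAnalysis.h"
#include "llvm/CodeGen/MachineOperand.h"

// Record every slot in block MBBNum where a call's regmask clobbers PhysReg;
// these points act as interference for any live range assigned to PhysReg.
void collectRegMaskInterference(llvm::LiveIntervals &LIS, unsigned MBBNum,
                                unsigned PhysReg,
                                llvm::SmallVectorImpl<llvm::SlotIndex> &Out) {
  llvm::ArrayRef<llvm::SlotIndex> Slots = LIS.getRegMaskSlotsInBlock(MBBNum);
  llvm::ArrayRef<const uint32_t *> Bits = LIS.getRegMaskBitsInBlock(MBBNum);
  for (unsigned i = 0, e = Slots.size(); i != e; ++i)
    if (llvm::MachineOperand::clobbersPhysReg(Bits[i], PhysReg))
      Out.push_back(Slots[i]);
}
```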
Andrew Trick
796cca6eff Remove pointless mode line in .cpp file.
llvm-svn: 148143
2012-01-13 22:04:16 +00:00
Andrew Trick
97340838f5 wrong filename
llvm-svn: 148103
2012-01-13 06:30:22 +00:00
Jakob Stoklund Olesen
4b2cc68d11 Allow null interference cursors to be queried.
They always report 'no interference'.

llvm-svn: 135843
2011-07-23 03:10:17 +00:00
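
What this buys callers, sketched with the Cursor interface from lib/CodeGen/InterferenceCache.h (moveToBlock and hasInterference are its real methods; the wrapper is illustrative):

```cpp
#include "InterferenceCache.h"   // internal lib/CodeGen header

// A default-constructed Cursor is "null": it has no cache entry, yet it may
// still be moved and queried, and it always answers "no interference". This
// removes null checks from callers that only sometimes have a physreg.
bool blockIsFree(unsigned MBBNum) {
  llvm::InterferenceCache::Cursor Intf;   // null cursor, bound to no physreg
  Intf.moveToBlock(MBBNum);
  return !Intf.hasInterference();         // always true for a null cursor
}
```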
Jakob Stoklund Olesen
b0af7bda8d Reapply r135121 with a fixed copy constructor.
Original commit message:

Count references to interference cache entries.

Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.

This makes it possible to have multiple live cursors examining
interference for different physregs.

The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().

Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.

llvm-svn: 135130
2011-07-14 05:35:11 +00:00
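
The scheme, and the copy-constructor detail that broke the first landing, can be sketched generically (this is an illustration of the pattern, not the LLVM source):

```cpp
// Each cursor pins the cache entry it points at; an entry with RefCount == 0
// may be recycled for a new physreg. The copy constructor must also take a
// reference, which is the bug the reapply fixed.
struct Entry { unsigned RefCount = 0; /* cached per-block interference */ };

class Cursor {
  Entry *E = nullptr;
  void setEntry(Entry *NewE) {
    if (E) --E->RefCount;       // release the old entry
    E = NewE;
    if (E) ++E->RefCount;       // pin the new one so it is not recycled
  }
public:
  Cursor() = default;
  Cursor(const Cursor &O) { setEntry(O.E); }            // the fixed copy ctor
  Cursor &operator=(const Cursor &O) { setEntry(O.E); return *this; }
  ~Cursor() { setEntry(nullptr); }
};
// The cache only guarantees getMaxCursors() simultaneously live cursors.
```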
Jakob Stoklund Olesen
718e76d4dd Revert r135121 which broke a gcc-4.2 builder.
llvm-svn: 135122
2011-07-14 00:58:38 +00:00
Jakob Stoklund Olesen
7d17ec883e Count references to interference cache entries.
Each InterferenceCache::Cursor instance references a cache entry. A
non-zero reference count guarantees that the entry won't be reused for a
new register.

This makes it possible to have multiple live cursors examining
interference for different physregs.

The total number of live cursors into a cache must be kept below
InterferenceCache::getMaxCursors().

Code generation should be unaffected by this change, and it doesn't seem
to affect the cache replacement strategy either.

llvm-svn: 135121
2011-07-14 00:31:14 +00:00
Jakob Stoklund Olesen
b530849e81 Precompute interference for neighbor blocks as long as there is no interference.
This doesn't require seeking in the live interval union, so it is very cheap.

llvm-svn: 129187
2011-04-09 02:59:05 +00:00
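
A generic sketch of why this is cheap (names are illustrative and unsigned stands in for SlotIndex): once the union iterator is positioned at the next interfering segment, every neighbor block that ends before that segment starts can be marked interference-free without another seek.

```cpp
#include <vector>

// Mark neighbor blocks free of interference until one may overlap the next
// segment of the live interval union; only that block needs a real seek.
void markQuietNeighbors(std::vector<bool> &HasIntf, unsigned FirstBlock,
                        unsigned NextSegStart,
                        const std::vector<unsigned> &BlockEnd) {
  for (unsigned MBB = FirstBlock; MBB < BlockEnd.size(); ++MBB) {
    if (BlockEnd[MBB] >= NextSegStart)
      break;                  // this block may reach the segment: stop here
    HasIntf[MBB] = false;     // precomputed with no seek into the union
  }
}
```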
Jakob Stoklund Olesen
aace1636b6 Avoid moving iterators when the previous block was just visited.
llvm-svn: 129081
2011-04-07 17:27:50 +00:00
Jakob Stoklund Olesen
f881310607 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
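
A sketch of how a consumer such as RAGreedy walks the cache, using the Cursor methods declared in lib/CodeGen/InterferenceCache.h as they look after the later commits in this listing (setPhysReg, moveToBlock, hasInterference, first, last); the loop itself is illustrative.

```cpp
#include "InterferenceCache.h"          // internal lib/CodeGen header
#include "llvm/ADT/SmallVector.h"

// Repeated queries for the same (PhysReg, block) pair hit the cached
// per-block ranges instead of re-walking the LiveIntervalUnions.
void scanBlocks(llvm::InterferenceCache &Cache, unsigned PhysReg,
                const llvm::SmallVectorImpl<unsigned> &BlockNums) {
  llvm::InterferenceCache::Cursor Intf;
  Intf.setPhysReg(Cache, PhysReg);
  for (unsigned i = 0, e = BlockNums.size(); i != e; ++i) {
    Intf.moveToBlock(BlockNums[i]);
    if (!Intf.hasInterference())
      continue;                            // block is free for PhysReg
    llvm::SlotIndex First = Intf.first();  // first interfering point in block
    llvm::SlotIndex Last = Intf.last();    // last interfering point in block
    (void)First; (void)Last;               // drive the split decision here
  }
}
```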