Commit Graph

155 Commits

Evan Cheng
83066786b0 Forgot these files.
llvm-svn: 46896
2008-02-08 22:05:27 +00:00
Chris Lattner
f83aae613c rename TargetInstrDescriptor -> TargetInstrDesc.
Make MachineInstr::getDesc return a reference instead
of a pointer, since it can never be null.

llvm-svn: 45695
2008-01-07 07:27:27 +00:00
Chris Lattner
9d38dfa4a5 Move a bunch more accessors from TargetInstrInfo to TargetInstrDescriptor
llvm-svn: 45680
2008-01-07 03:13:06 +00:00
Chris Lattner
f7f96d818f Rename MachineInstr::getInstrDescriptor -> getDesc(), which reflects
that it is cheap and efficient to get.

Move a variety of predicates from TargetInstrInfo into 
TargetInstrDescriptor, which makes it much easier to query a predicate
when you don't have TII around.  Now you can use MI->getDesc()->isBranch()
instead of going through TII, and this is much more efficient anyway. Not
all of the predicates have been moved over yet.

Update old code that used MI->getInstrDescriptor()->Flags to use the
new predicates in many places.

llvm-svn: 45674
2008-01-07 01:56:04 +00:00
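
An illustrative before/after of the query pattern described in the commit above (a minimal sketch; the old TII-based accessor shown is an assumption, and a later commit in this log changes getDesc() to return a reference rather than a pointer):

    // Before: every predicate query had to go through TargetInstrInfo.
    //   if (TII->isBranch(MI->getOpcode())) ...
    // After: ask the descriptor attached to the instruction directly.
    //   if (MI->getDesc()->isBranch()) ...
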
Owen Anderson
2e866e9cdf Update CodeGen for MRegisterInfo --> TargetInstrInfo changes.
llvm-svn: 45673
2008-01-07 01:35:56 +00:00
Owen Anderson
e6856128ab Move some more instruction creation methods from RegisterInfo into InstrInfo.
llvm-svn: 45484
2008-01-01 21:11:32 +00:00
Owen Anderson
ae7e2c1e03 Move copyRegToReg from MRegisterInfo to TargetInstrInfo. This is part of the
Machine-level API cleanup instigated by Chris.

llvm-svn: 45470
2007-12-31 06:32:00 +00:00
Chris Lattner
96167aa93c Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner
7504adbd72 More cleanups for MachineOperand:
  - Eliminate the static "print" method for operands, moving it
    into MachineOperand::print.
  - Change various set* methods for register flags to take a bool
    for the value to set it to.  Remove unset* methods.
  - Group methods more logically by operand flavor in MachineOperand.h

llvm-svn: 45461
2007-12-30 21:56:09 +00:00
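
The second bullet amounts to the following usage change (a sketch using the kill flag as the example; the full set of affected set*/unset* methods is not spelled out in the message):

    // Before: paired methods, one to set the flag and one to clear it.
    //   MO.setIsKill();     MO.unsetIsKill();
    // After: a single setter that takes the desired value.
    //   MO.setIsKill(true); MO.setIsKill(false);
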
Chris Lattner
ad9a6ccb83 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng
283f581d51 If deleting a reload instruction due to reuse (value is available in register R and reload is targeting R), make sure to invalidate the kill information of the last kill.
llvm-svn: 44894
2007-12-11 23:36:57 +00:00
Evan Cheng
abc6ab4765 MachineInstr can change. Store indexes instead.
llvm-svn: 44612
2007-12-05 10:24:35 +00:00
Evan Cheng
33ac3dd05f If a split live interval is spilled again, remove the kill marker on its last use.
llvm-svn: 44611
2007-12-05 09:51:10 +00:00
Evan Cheng
a5f3ec9e03 Fix kill info for split intervals.
llvm-svn: 44609
2007-12-05 08:16:32 +00:00
Evan Cheng
1dc8e83707 - Mark the last use of a split interval as a kill instead of letting the spiller track it.
  This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
  low spill weight so it is not picked when further spilling is needed (this
  avoids pushing other intervals in the same BB to be spilled).

llvm-svn: 44601
2007-12-05 03:22:34 +00:00
Evan Cheng
8464a0bf00 Add an argument to storeRegToStackSlot and storeRegToAddr to specify whether
the stored register is killed.

llvm-svn: 44600
2007-12-05 03:14:33 +00:00
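
A hedged sketch of the resulting hook (parameter names and the exact signature are assumptions based on the message, not copied from the commit):

    // The added bool tells the spill code whether the stored register is
    // killed by the store, so kill flags can be kept accurate.
    void storeRegToStackSlot(MachineBasicBlock &MBB,
                             MachineBasicBlock::iterator MI,
                             unsigned SrcReg, bool isKill, int FrameIndex,
                             const TargetRegisterClass *RC) const;
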
Evan Cheng
a8e7a09fef Remove an unsafe optimization. This fixes 401.bzip2.
llvm-svn: 44587
2007-12-04 23:57:55 +00:00
Evan Cheng
63d60199aa Spiller unfold optimization bug: do not clobber a reusable stack slot value unless it can be modified.
llvm-svn: 44575
2007-12-04 19:19:45 +00:00
Evan Cheng
0b5abc5fcb Bug fixes.
llvm-svn: 44549
2007-12-03 21:31:55 +00:00
Evan Cheng
df066bc833 Update kill info for uses of split intervals.
llvm-svn: 44531
2007-12-03 09:58:48 +00:00
Evan Cheng
58b387dfb0 Remove redundant foldMemoryOperand variants and do other code cleanup.
llvm-svn: 44517
2007-12-02 08:30:39 +00:00
Evan Cheng
20801d1eb5 Fixed various live interval splitting bugs / compile time issues.
llvm-svn: 44428
2007-11-29 01:06:25 +00:00
Evan Cheng
05d30e7236 Recover compile time regression.
llvm-svn: 44386
2007-11-28 01:28:46 +00:00
Evan Cheng
5c96771102 Live interval splitting:
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.

This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted; the split intervals are *connected*
through spills and reloads (or rematerialization). A newly created interval can
be spilled again; in that case, since it does not span multiple basic blocks,
it is spilled in the usual manner. However, it can reuse the same stack slot as
the previously split interval.

This is currently controlled by -split-intervals-at-bb.

llvm-svn: 44198
2007-11-17 00:40:40 +00:00
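
A self-contained sketch of the splitting idea described above, using hypothetical simplified types (the real code operates on LiveInterval and machine instructions, not on these stand-ins):

    #include <map>
    #include <vector>

    // Simplified stand-ins for the machine-level types, for illustration only.
    struct UseOrDef { int BlockID; int InstrIndex; };
    struct Interval { std::vector<UseOrDef> Points; int StackSlot = -1; };

    // Split a spilled interval at basic-block boundaries: one new interval per
    // basic block that defines or uses the value. Each piece keeps the original
    // stack slot, so the pieces stay connected through spill stores and reloads
    // (or rematerialization) rather than through inserted copies.
    std::vector<Interval> splitAtBBBoundaries(const Interval &Spilled) {
      std::map<int, Interval> PerBlock;
      for (const UseOrDef &P : Spilled.Points) {
        Interval &Piece = PerBlock[P.BlockID];
        Piece.StackSlot = Spilled.StackSlot;  // reuse the same spill slot
        Piece.Points.push_back(P);            // cover every def/use in this BB
      }
      std::vector<Interval> Pieces;
      for (auto &Entry : PerBlock)
        Pieces.push_back(Entry.second);
      return Pieces;
    }
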
Evan Cheng
fd33cb316f Clean up the sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous, clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
Evan Cheng
65a07e73e2 One more extract_subreg coalescing bug.
llvm-svn: 43644
2007-11-02 17:35:08 +00:00
Evan Cheng
d06938ff7f - Only perform the unfolding optimization when the folding in question is modref.
- Remove a bogus assertion.

llvm-svn: 43211
2007-10-22 03:01:44 +00:00
Evan Cheng
ded6550885 Local spiller optimization:
Turn a store-folding instruction into a load-folding instruction, e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Evan Cheng
b9ab88ccd5 Local spiller optimization:
Turn this:
movswl  %ax, %eax
movl    %eax, -36(%ebp)
xorl    %edi, -36(%ebp)
into
movswl  %ax, %eax
xorl    %edi, %eax
movl    %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions but reduces the number of times memory is accessed.

Also unfold some load folding instructions and reuse the value when a similar
situation presents itself.

llvm-svn: 42947
2007-10-13 02:50:24 +00:00
Evan Cheng
d11cd4a095 EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG like
(almost) a register copy. However, it is always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.

llvm-svn: 42899
2007-10-12 08:50:34 +00:00
Evan Cheng
5f9e291240 Allow copyRegToReg to emit cross register class copies.
Tested with "make check"!

llvm-svn: 42346
2007-09-26 06:25:56 +00:00
Dan Gohman
fb60c0dfed Remove isReg, isImm, and isMBB, and change all their users to use
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent
and more popular.

llvm-svn: 41958
2007-09-14 20:33:02 +00:00
David Greene
6df3ab79be Add instruction dump output. This helps find bugs.
llvm-svn: 41744
2007-09-06 16:36:39 +00:00
Evan Cheng
f758fa5c35 If the source of a move is in a spill slot, the reload may be folded into what is essentially a load from the stack slot. It's ok to mark the stack slot value as available for reuse, but it should not be clobbered since the destination of the move is live.
llvm-svn: 41109
2007-08-15 20:20:34 +00:00
Evan Cheng
70879e8dae - If a def is dead, do not spill it.
- If the defs of a spilled rematerializable MI are dead after the spill store is deleted, delete
  the def MI as well.

llvm-svn: 41086
2007-08-14 23:25:37 +00:00
Evan Cheng
0fbe2a0ec4 If an MI's def is rematerialized as well as spilled, and the store is later deemed dead, mark the def operand as isDead.
llvm-svn: 41083
2007-08-14 20:23:13 +00:00
Evan Cheng
de24ad8897 If a spilled value is being reused and the use is a kill, that means there are
no more uses within the MBB and the spilled value isn't live out of the MBB.
Then it's safe to delete the spill store.

llvm-svn: 41069
2007-08-14 09:11:18 +00:00
Evan Cheng
886c1fe427 If a rematerializable def is not deleted, i.e. it is also spilled, check if the
spilled value is available for reuse.

llvm-svn: 41067
2007-08-14 05:42:54 +00:00
Evan Cheng
215f802b75 Re-implement trivial rematerialization. This allows def MIs whose live intervals are coalesced to be rematerialized.
llvm-svn: 41060
2007-08-13 23:45:17 +00:00
Evan Cheng
c7216b3a86 Missed a couple of places where new instructions are added due to spill / restore.
llvm-svn: 39748
2007-07-11 19:17:18 +00:00
Evan Cheng
2791a7ea6d No longer need to track last def / use.
llvm-svn: 38534
2007-07-11 08:47:44 +00:00
Evan Cheng
c42a2e03f1 Fix for PR1545: Revamp the code that updates kill information due to register reuse.
llvm-svn: 38525
2007-07-11 05:28:39 +00:00
Dan Gohman
b60d8a92c9 Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.

llvm-svn: 37644
2007-06-19 01:48:05 +00:00
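
A sketch of the kind of target hook being described (the name and signature here are assumptions for illustration, not necessarily the exact hook added by the commit):

    // On the target's instruction info: returns true if MI can be recomputed
    // at its uses instead of being spilled and reloaded. Lets a target accept
    // instructions that are only rematerializable with specific operands,
    // e.g. loads from constant pools, alongside the always-rematerializable ones.
    virtual bool isReallyTriviallyReMaterializable(const MachineInstr *MI) const;
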
Dan Gohman
35f2b4d716 Add a target hook to allow loads from constant pools to be rematerialized, and an
implementation for x86.

llvm-svn: 37576
2007-06-14 20:50:44 +00:00
Evan Cheng
eff332e3e1 Rename findRegisterUseOperand to findRegisterUseOperandIdx to avoid confusion.
llvm-svn: 36483
2007-04-26 19:00:32 +00:00
Evan Cheng
7f44e880dc Match MachineFunction::UsedPhysRegs changes.
llvm-svn: 36452
2007-04-25 22:13:27 +00:00
Evan Cheng
b7ec9433b3 Re-materialize all loads from fixed stack slots.
llvm-svn: 35660
2007-04-04 07:40:01 +00:00
Evan Cheng
d814ad9d50 Don't add the same MI to register reuse "last def/use" twice if it reads the
register more than once.

llvm-svn: 35513
2007-03-30 20:21:35 +00:00
Evan Cheng
c81bdb4100 Don't call getOperandConstraint() if the operand index is greater than
TID->numOperands.

llvm-svn: 35375
2007-03-27 00:48:28 +00:00
Evan Cheng
0ff19780f9 Fix for PR1266. Don't mark a two-address operand IsKill.
llvm-svn: 35365
2007-03-26 22:40:42 +00:00