11579 Commits

Evan Cheng
d575a99d75 Fix a couple of places where changes are made but not tracked.
llvm-svn: 129287
2011-04-11 18:47:20 +00:00
Jakob Stoklund Olesen
d224a3530a Don't add live ranges for sub-registers when clobbering a physical register.
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.

This speeds up both linear scan and the greedy register allocator.

llvm-svn: 129283
2011-04-11 18:08:10 +00:00
Jakob Stoklund Olesen
57f2eda288 Speed up LiveIntervalUnion::unify by handling end insertion specially.
This particularly helps with the initial transfer of fixed intervals.

llvm-svn: 129277
2011-04-11 15:00:44 +00:00
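
A minimal, self-contained sketch of the fast path described here (a sorted vector of ints stands in for the union's segments; the real code uses an IntervalMap): once the insertion point runs off the end of the union, the remaining segments can be bulk-appended with no further searching. An initially empty union, as in the fixed-interval transfer mentioned above, takes the append path immediately.

  #include <algorithm>
  #include <iterator>
  #include <vector>

  void unify(std::vector<int> &Union, const std::vector<int> &Reg) {
    auto R = Reg.begin();
    auto Pos = Union.begin();
    while (R != Reg.end() && Pos != Union.end()) {
      Pos = std::upper_bound(Pos, Union.end(), *R); // seek per segment
      Pos = std::next(Union.insert(Pos, *R++));     // insert keeps order
    }
    Union.insert(Union.end(), R, Reg.end());        // end insertion: cheap
  }
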
Jakob Stoklund Olesen
97bb6d4c3a Time the initial seeding of live registers
llvm-svn: 129276
2011-04-11 15:00:42 +00:00
Jakob Stoklund Olesen
5d6e68454e Don't shrink live ranges after dead code elimination unless it is going to help.
In particular, don't repeatedly recompute the PIC base live range after rematerialization.

llvm-svn: 129275
2011-04-11 15:00:39 +00:00
Jay Foad
0d5ca4cf44 Don't include Operator.h from InstrTypes.h.
llvm-svn: 129271
2011-04-11 09:35:34 +00:00
Chris Lattner
e8dfbaef19 Avoid excess precision issues that lead to generating host-compiler-specific code.
Switch lowering probably shouldn't be using FP for this.  This resolves PR9581.

llvm-svn: 129199
2011-04-09 06:57:13 +00:00
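
A hypothetical illustration of the hazard, not the actual lowering code (the names and the 0.4 threshold are invented): with x87 floating point, the quotient may be kept in an 80-bit register by one host compiler and rounded to a 64-bit double by another, so the branch can flip between hosts and the generated code becomes host-compiler-specific.

  bool useJumpTable(unsigned NumCases, unsigned Range) {
    double Density = (double)NumCases / (double)Range; // fragile
    return Density >= 0.4;
    // Integer form avoiding FP entirely: return 5 * NumCases >= 2 * Range;
  }
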
Jakob Stoklund Olesen
5add6d16b7 Build the Hopfield network incrementally when splitting global live ranges.
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.

llvm-svn: 129188
2011-04-09 02:59:09 +00:00
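
A simplified, self-contained sketch of the growth idea (all types invented; the real code grows Hopfield network nodes and additionally stops where the network settles negative): start from the blocks that actually use the register and expand only into neighboring live-through blocks. Through blocks that are never reached are never queried for interference at all.

  #include <deque>
  #include <set>
  #include <vector>

  std::set<int> growRegion(const std::vector<int> &UseBlocks,
                           const std::vector<std::vector<int>> &Neighbors,
                           const std::set<int> &LiveThrough) {
    std::set<int> Active(UseBlocks.begin(), UseBlocks.end());
    std::deque<int> Work(UseBlocks.begin(), UseBlocks.end());
    while (!Work.empty()) {
      int B = Work.front();
      Work.pop_front();
      for (int N : Neighbors[B])
        if (LiveThrough.count(N) && Active.insert(N).second)
          Work.push_back(N); // grow the network around this block
    }
    return Active;
  }
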
Jakob Stoklund Olesen
b530849e81 Precompute interference for neighbor blocks as long as there is no interference.
This doesn't require seeking in the live interval union, so it is very cheap.

llvm-svn: 129187
2011-04-09 02:59:05 +00:00
Chris Lattner
badb8ca63c have dag combine zap "store undef", which can be formed during call lowering
with undef arguments.

llvm-svn: 129185
2011-04-09 02:32:02 +00:00
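
A hedged sketch of the combine in DAGCombiner style (function name invented, not the verbatim patch): a non-volatile store of undef has no observable effect, so the store node can be folded to its input chain, deleting it.

  SDValue combineStoreOfUndef(StoreSDNode *ST) {
    if (ST->getValue().getOpcode() == ISD::UNDEF && !ST->isVolatile())
      return ST->getChain(); // uses of the store's chain are rewired past it
    return SDValue();        // no combine
  }
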
Devang Patel
21b6ef4320 Simplify array bound checks and clarify comments. A one-element array can have the same non-zero number as both its lower bound and its upper bound.
llvm-svn: 129170
2011-04-08 23:39:38 +00:00
Devang Patel
39ac307002 Do not emit DW_AT_upper_bound and DW_AT_lower_bound for an unbounded array.
If the lower bound is greater than the upper bound, consider the array unbounded.
An array is also unbounded if a non-zero lower bound is the same as the upper bound.
If both the lower bound and the upper bound are zero, the array has one element.

llvm-svn: 129156
2011-04-08 21:55:10 +00:00
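
The rules above, restated as a self-contained predicate (the helper name is invented). Note that the entry above this one (r129170) later refines the equal non-zero bounds case for one-element arrays.

  #include <cstdint>

  bool isUnboundedArray(int64_t Lo, int64_t Hi) {
    if (Lo > Hi)
      return true;              // lower bound above upper bound: unbounded
    if (Lo != 0 && Lo == Hi)
      return true;              // equal non-zero bounds: treated as unbounded
    return false;               // e.g. Lo == Hi == 0 is a one-element array
  }
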
Evan Cheng
bc053100af Change -arm-trap-func= into a non-arm specific option. Now Intrinsic::trap is lowered into a call to the specified trap function at sdisel time.
llvm-svn: 129152
2011-04-08 21:37:21 +00:00
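
A hedged sketch of the new lowering (the option plumbing and helper name are approximate): with -arm-trap-func's replacement set to a name, llvm.trap becomes a plain call to that function during SelectionDAG building; otherwise the generic ISD::TRAP node is emitted as before.

  if (!TrapFuncName.empty())
    lowerAsCallTo(TrapFuncName);  // hypothetical helper: call void @name()
  else
    DAG.setRoot(DAG.getNode(ISD::TRAP, dl, MVT::Other, getRoot()));
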
Nick Lewycky
ac1fe011df llvm.global_[cd]tor is defined to be either external, or appending with an array
of { i32, void ()* }. Teach the verifier to verify that, and delete the
duplicate checks strewn about.

llvm-svn: 129128
2011-04-08 07:30:21 +00:00
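
A hedged sketch of the structural rule being enforced, written as a free function for illustration (LLVM headers omitted; this uses the 2011-era typed-pointer API, and the second field is only checked to be some pointer):

  bool hasValidGlobalCtorDtorType(const GlobalVariable &GV) {
    if (GV.isDeclaration())                 // plain external is fine
      return true;
    if (!GV.hasAppendingLinkage())          // otherwise must be appending
      return false;
    ArrayType *ATy = dyn_cast<ArrayType>(GV.getType()->getElementType());
    if (!ATy)
      return false;
    StructType *STy = dyn_cast<StructType>(ATy->getElementType());
    return STy && STy->getNumElements() == 2 &&
           STy->getElementType(0)->isIntegerTy(32) &&
           STy->getElementType(1)->isPointerTy(); // the void ()* element
  }
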
Andrew Trick
36a1759769 Added a check in the preRA scheduler for potential interference on an
induction variable. The preRA scheduler is unaware of induction variables,
so we look for potential "virtual register cycles" instead.

Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing

llvm-svn: 129100
2011-04-07 19:54:57 +00:00
Jakob Stoklund Olesen
3e349f2950 Recompute hasPHIKill flags when shrinking live intervals.
PHI values may be deleted, causing the flags to be wrong. This fixes PR9616.

llvm-svn: 129092
2011-04-07 18:43:14 +00:00
Jakob Stoklund Olesen
aace1636b6 Avoid moving iterators when the previous block was just visited.
llvm-svn: 129081
2011-04-07 17:27:50 +00:00
Jakob Stoklund Olesen
1791098020 Prefer multiplications to divisions.
llvm-svn: 129080
2011-04-07 17:27:48 +00:00
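
An illustration of the rewrite named above (values invented): a divide inside a comparison can usually become a multiply, which is far cheaper on every target.

  bool aboveThreshold(float Count, float Total) {
    // Before: return Count / Total >= 0.5f;
    return Count >= Total * 0.5f; // same answer when Total > 0, no division
  }
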
Jakob Stoklund Olesen
402a4daae6 Extract SpillPlacement::addLinks for handling the special transparent blocks.
llvm-svn: 129079
2011-04-07 17:27:46 +00:00
Evan Cheng
1d3691e071 Remove dead code. rdar://9221736.
llvm-svn: 129044
2011-04-07 00:56:37 +00:00
Jakob Stoklund Olesen
b59d7e2dea Also account for the spill code that would be inserted in live-through blocks with interference.
llvm-svn: 129030
2011-04-06 21:32:41 +00:00
Jakob Stoklund Olesen
7bd327adbc Abort the constraint calculation early when all positive bias is lost.
Without any positive bias, there is nothing for the spill placer to do. It will
spill everywhere.

llvm-svn: 129029
2011-04-06 21:32:38 +00:00
Jakob Stoklund Olesen
7621fb6c1b Keep track of the number of positively biased nodes when adding constraints.
If there are no positive nodes, the algorithm can be aborted early.

llvm-svn: 129021
2011-04-06 19:14:00 +00:00
Jakob Stoklund Olesen
00f622b9b1 Break the spill placement algorithm into three parts: prepare, addConstraints, and finish.
This will allow us to abort the algorithm early if it is determined to be futile.

llvm-svn: 129020
2011-04-06 19:13:57 +00:00
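
A hedged sketch of the resulting call pattern (signatures approximate; the futility test follows the two entries above): splitting the interface lets the caller abort between phases instead of always running the full computation.

  SpillPlacer->prepare(LiveBundles);             // reset the network
  SpillPlacer->addConstraints(SplitConstraints); // bias nodes per block
  if (noPositiveBias())                          // hypothetical early-out
    return false;                                // it would spill everywhere
  SpillPlacer->finish();                         // iterate to a solution
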
Jakob Stoklund Olesen
10a362acbd Oops. Scary.
llvm-svn: 128986
2011-04-06 04:07:14 +00:00
Jakob Stoklund Olesen
bb79ab5ba3 Analyze blocks with uses separately from live-through blocks without uses.
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.

llvm-svn: 128985
2011-04-06 03:57:00 +00:00
Jakob Stoklund Olesen
50ab0391d7 Sign error
llvm-svn: 128963
2011-04-05 23:43:16 +00:00
Jakob Stoklund Olesen
2bba415e6f Don't crash when a value is defined after the last split point.
llvm-svn: 128962
2011-04-05 23:43:14 +00:00
Jakob Stoklund Olesen
88a0367967 Permit blocks to branch directly to a landing pad.
Treat the landing pad as a normal successor when that happens.

llvm-svn: 128961
2011-04-05 23:43:11 +00:00
Devang Patel
03d0891c10 Add support to encode function's template parameters.
llvm-svn: 128947
2011-04-05 22:52:06 +00:00
Jakob Stoklund Olesen
a819faa2f7 Run LiveDebugVariables in RegAllocBasic and RegAllocGreedy.
llvm-svn: 128935
2011-04-05 21:40:37 +00:00
Devang Patel
af7f5f4ada Refactor.
llvm-svn: 128929
2011-04-05 21:08:24 +00:00
Bob Wilson
ef86806800 Add an assertion instead of crashing when the scavenger goes past the end
of a basic block.

llvm-svn: 128925
2011-04-05 20:44:15 +00:00
Jakob Stoklund Olesen
613bcf88be When dead code elimination removes all but one use, try to fold the single def into the remaining use.
Rematerialization can leave single-use loads behind that we might as well fold whenever possible.

llvm-svn: 128918
2011-04-05 20:20:26 +00:00
Devang Patel
2be08abc94 Do not emit empty name.
llvm-svn: 128914
2011-04-05 20:14:13 +00:00
Jakob Stoklund Olesen
2bef449b52 Ensure all defs referring to a virtual register are marked dead by addRegisterDead().
There can be multiple defs for a single virtual register when they are defining
sub-registers.

The missing <dead> flag was stopping the inline spiller from eliminating dead
code after rematerialization.

llvm-svn: 128888
2011-04-05 16:53:50 +00:00
Rafael Espindola
7618e7be93 Print visibility info for external variables.
llvm-svn: 128887
2011-04-05 15:51:32 +00:00
Jakob Stoklund Olesen
731b0d77a2 Use std::unique instead of a SmallPtrSet to ensure unique instructions in UseSlots.
This allows us to always keep the smaller slot for an instruction which is what
we want when a register has early clobber defines.

Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed.

llvm-svn: 128886
2011-04-05 15:18:18 +00:00
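
A hedged sketch of the pattern, close to but not verbatim the patch: after sorting, slots belonging to the same instruction are adjacent, and std::unique keeps the first, i.e. the smaller slot, which is what early clobber defs require.

  array_pod_sort(UseSlots.begin(), UseSlots.end());
  UseSlots.erase(std::unique(UseSlots.begin(), UseSlots.end(),
                             SlotIndex::isSameInstr),
                 UseSlots.end());
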
Jakob Stoklund Olesen
6bd6e03755 Stop precomputing last split points, query the SplitAnalysis cache on demand.
llvm-svn: 128875
2011-04-05 04:20:29 +00:00
Jakob Stoklund Olesen
65c8f18b8d Cache the fairly expensive last split point computation and provide a fast
inlined path for the common case.

Most basic blocks don't contain a call that may throw, so the last split point
is simply the first terminator.

llvm-svn: 128874
2011-04-05 04:20:27 +00:00
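
A hedged sketch of the caching shape described above (member names invented): the common no-throwing-call case is answered by an inlined check, and the expensive scan runs at most once per block.

  SlotIndex getLastSplitPoint(unsigned BlockNum) {
    if (LastSplitPoint[BlockNum].isValid()) // fast inlined path
      return LastSplitPoint[BlockNum];
    return computeLastSplitPoint(BlockNum); // slow path: scan, then cache
  }
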
Bill Wendling
a8db395dc1 Revamp the SjLj "dispatch setup" intrinsic.
It needed to be moved closer to the setjmp statement, because the code directly
after the setjmp needs to know about values that are on the stack. Also, the
'bitcast' of the function context was causing a dead load. This wouldn't be too
horrible, except that at -O0 it wasn't optimized out, and because it wasn't
using the correct base pointer (if there is a VLA), it would try to access a
value from a garbage address.
<rdar://problem/9130540>

llvm-svn: 128873
2011-04-05 01:37:43 +00:00
Stuart Hastings
1635b37415 Revert 123704; it broke threaded LLVM.
llvm-svn: 128868
2011-04-05 00:37:28 +00:00
Jakob Stoklund Olesen
1454095d5e Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
Jakob Stoklund Olesen
d5ddbadc69 Extract physreg joining policy to a separate method.
llvm-svn: 128844
2011-04-04 20:59:59 +00:00
Jakob Stoklund Olesen
78d65c6632 Stop caching basic block index ranges now that SlotIndexes can keep up.
llvm-svn: 128821
2011-04-04 15:32:15 +00:00
Jakob Stoklund Olesen
6092c3d81f Delete leftover data members.
llvm-svn: 128820
2011-04-04 15:32:11 +00:00
Jakob Stoklund Olesen
e5f6956148 Use InterferenceCache in RegAllocGreedy.
llvm-svn: 128765
2011-04-02 06:03:38 +00:00
Jakob Stoklund Olesen
f881310607 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
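
A hedged sketch of the caching idea (all types invented for illustration): one entry per physical register records per-block interference plus a version tag of the LiveIntervalUnion it was computed from, so repeated queries during multi-way splitting hit the cache while stale entries are detected and recomputed.

  #include <cstdint>
  #include <utility>
  #include <vector>

  struct InterferenceEntry {
    unsigned PhysReg = 0;                      // register this entry covers
    uint64_t UnionVersion = 0;                 // tag of the data it reflects
    std::vector<std::pair<int, int>> PerBlock; // interference range per block
  };
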
Jakob Stoklund Olesen
024a1de4ae Use basic block numbers as indexes when mapping slot index ranges.
This is more compact and faster than using DenseMap.

llvm-svn: 128763
2011-04-02 06:03:31 +00:00
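
An illustration of the swap described above (types simplified): block numbers are small and dense, so a vector indexed by number is both more compact and faster than hashing block pointers in a DenseMap.

  #include <utility>
  #include <vector>

  using IndexRange = std::pair<unsigned, unsigned>;
  std::vector<IndexRange> Ranges;  // was: DenseMap<Block*, IndexRange>
  // Ranges.resize(NumBlockIDs);                // one slot per block
  // Ranges[MBB->getNumber()] = {Start, Stop};  // O(1), no hashing
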
Cameron Zwarich
2748634089 Add a RemoveFromWorklist method to DCI. This is needed to do some complicated
transformations in target-specific DAG combines without causing DAGCombiner to
delete the same node twice. If you know of a better way to avoid this (see my
next patch for an example), please let me know.

llvm-svn: 128758
2011-04-02 02:40:26 +00:00
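
A hedged sketch of the intended usage, per the description above: a target combine that manually deletes nodes first drops them from the combiner's worklist so DAGCombiner cannot visit, and delete, a dead node a second time.

  DCI.RemoveFromWorklist(N); // the new hook from this commit
  DAG.DeleteNode(N);         // now safe: the combiner won't revisit N
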