Use a BitVector instead; we didn't need the smaller memory footprint anyway.
This makes the greedy register allocator 10% faster.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129390 91177308-0d34-0410-b5e6-96231b3b80d8
UnitsSharePred was a source of randomness in the scheduler: node
priority depended on the queue data structure. I rewrote the recent
VRegCycle heuristics to completely replace the old heuristic without
any randomness. To make these heuristic adjustments to node latency work,
I also needed to do something a little more reasonable with TokenFactor. I
gave it zero latency to its consumers and now always schedule it as low as
possible.
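A minimal standalone sketch of that latency rule; Node and edgeLatency are
hypothetical names for illustration, not the real ScheduleDAG types:

  #include <cstdio>

  // Hypothetical, simplified DAG node used only for illustration.
  struct Node {
    bool IsTokenFactor;  // chain-only node that produces no machine code
    unsigned OpLatency;  // latency of the operation itself
  };

  // Latency a consumer sees from 'Def': a TokenFactor carries no
  // computation, so report zero; real operations keep their own latency.
  unsigned edgeLatency(const Node &Def) {
    return Def.IsTokenFactor ? 0u : Def.OpLatency;
  }

  int main() {
    Node Token{true, 1}, Load{false, 4};
    std::printf("%u %u\n", edgeLatency(Token), edgeLatency(Load)); // 0 4
  }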
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129383 91177308-0d34-0410-b5e6-96231b3b80d8
This merges the behavior of splitSingleBlocks into splitAroundRegion, so the
RS_Region and RS_Block register stages can be coalesced. That means the leftover
intervals after region splitting go directly to spilling instead of a second
pass of per-block splitting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129379 91177308-0d34-0410-b5e6-96231b3b80d8
mean that it has to be ConstantArray of ConstantStruct. We might have
ConstantAggregateZero, at either level, so don't crash on that.
Also, semi-deprecate the sentinel value. The linker isn't aware of sentinels, so
we end up with the two lists appended, each with their "sentinels" on them.
Different parts of LLVM treated sentinels differently, so make them all just
ignore the single entry and continue on with the rest of the list.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129307 91177308-0d34-0410-b5e6-96231b3b80d8
the 'unwind' instruction. However, later on that instruction was converted into
a jump to the basic block it was located in, causing an infinite loop when we
get there.
It turns out, we get there if the _Unwind_Resume_or_Rethrow call returns (which
it's not supposed to do). It returns if it cannot find a place to unwind
to. Thus we would get what appears to be a "hang" when in reality it's just that
the EH couldn't be propagated further along.
Instead of infinitely looping (or calling `unwind', which none of our back-ends
support (it's lowered into nothing...)), call the @llvm.trap() intrinsic
instead. This may not conform to specific rules of a particular language, but
it's rather better than infinitely looping.
<rdar://problem/9175843&9233582>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129302 91177308-0d34-0410-b5e6-96231b3b80d8
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.
This speeds up both linear scan and the greedy register allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129283 91177308-0d34-0410-b5e6-96231b3b80d8
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.
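A rough standalone sketch of that incremental growth, using simplified
stand-ins (SpillNet, Neighbors, updateNode) rather than the real
SpillPlacement machinery:

  #include <deque>
  #include <unordered_map>
  #include <unordered_set>
  #include <vector>

  // Simplified model: grow the network outward from blocks with uses and
  // only pull in live-through blocks next to a node whose preference changed.
  struct SpillNet {
    std::unordered_map<unsigned, std::vector<unsigned>> Neighbors; // CFG links
    std::unordered_set<unsigned> Active;  // blocks added to the network
    std::deque<unsigned> WorkList;

    // Placeholder: the real code recomputes the Hopfield node's bias and
    // links, returning true when the block's in/out preference changed.
    bool updateNode(unsigned) { return false; }

    void grow(const std::vector<unsigned> &UseBlocks) {
      // Seed only the blocks with uses; through blocks are added lazily.
      for (unsigned B : UseBlocks)
        if (Active.insert(B).second)
          WorkList.push_back(B);
      while (!WorkList.empty()) {
        unsigned B = WorkList.front();
        WorkList.pop_front();
        if (!updateNode(B))
          continue;                       // no change, nothing to propagate
        // A changed preference can affect adjacent live-through blocks, so
        // pull them in now; blocks never pulled in never have their
        // interference checked at all.
        for (unsigned N : Neighbors[B])
          if (Active.insert(N).second)
            WorkList.push_back(N);
      }
    }
  };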
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129188 91177308-0d34-0410-b5e6-96231b3b80d8
If the lower bound is greater than the upper bound, consider it an unbounded array.
An array is also unbounded if a non-zero lower bound is the same as the upper bound.
If the lower bound and the upper bound are both zero, the array has one element.
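A small standalone sketch of those rules; the names are illustrative, not the
actual DebugInfo/DWARF emitter code:

  #include <cstdint>
  #include <cstdio>

  enum class ArrayKind { Unbounded, OneElement, Bounded };

  // Classify an array from its subrange bounds, per the rules above.
  ArrayKind classifyArray(int64_t Lo, int64_t Hi) {
    if (Lo > Hi)
      return ArrayKind::Unbounded;   // lower bound above the upper bound
    if (Lo != 0 && Lo == Hi)
      return ArrayKind::Unbounded;   // non-zero lower bound equal to upper
    if (Lo == 0 && Hi == 0)
      return ArrayKind::OneElement;  // both zero: a single element
    return ArrayKind::Bounded;
  }

  int main() {
    std::printf("%d %d %d\n", (int)classifyArray(1, 0),   // Unbounded
                              (int)classifyArray(0, 0),   // OneElement
                              (int)classifyArray(0, 9));  // Bounded
  }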
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129156 91177308-0d34-0410-b5e6-96231b3b80d8
induction variable. The preRA scheduler is unaware of induction vars,
so we look for potential "virtual register cycles" instead.
Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129100 91177308-0d34-0410-b5e6-96231b3b80d8
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.
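A minimal sketch of the split, with illustrative names rather than the real
SplitAnalysis structures:

  #include <vector>

  // Full per-block data is kept only for blocks that contain uses; the
  // (far more numerous) live-through blocks are reduced to a bare count.
  struct BlockUseInfo {
    unsigned Number;        // basic block number
    bool LiveIn, LiveOut;   // does the range enter/leave the block live?
  };

  struct LiveRangeBlocks {
    std::vector<BlockUseInfo> UseBlocks; // ~10% of blocks: detailed info
    unsigned NumThroughBlocks = 0;       // ~90% of blocks: just a count
  };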
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128985 91177308-0d34-0410-b5e6-96231b3b80d8
There can be multiple defs for a single virtual register when they are defining
sub-registers.
The missing <dead> flag was stopping the inline spiller from eliminating dead
code after rematerialization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128888 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to always keep the smaller slot for an instruction, which is what
we want when a register has early-clobber defs.
Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128886 91177308-0d34-0410-b5e6-96231b3b80d8
inlined path for the common case.
Most basic blocks don't contain a call that may throw, so the last split point
is simply the first terminator.
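A rough sketch of that fast path, with a simplified block model instead of the
real MachineBasicBlock API:

  // Simplified block summary, for illustration only.
  struct Block {
    int FirstTerminator;    // index of the first terminator instruction
    int CallThatMayThrow;   // index of such a call, or -1 if none
  };

  // Placeholder for the rare case: with a call that may throw, the split
  // point can be no later than that call.
  int lastSplitPointSlow(const Block &B) { return B.CallThatMayThrow; }

  // Common case: no call that may throw, so the last split point is just
  // the first terminator; only the rare case leaves the inlined path.
  inline int lastSplitPoint(const Block &B) {
    if (B.CallThatMayThrow < 0)
      return B.FirstTerminator;
    return lastSplitPointSlow(B);
  }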
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128874 91177308-0d34-0410-b5e6-96231b3b80d8
It needed to be moved closer to the setjmp statement, because the code directly
after the setjmp needs to know about values that are on the stack. Also, the
'bitcast' of the function context was causing a dead load. This wouldn't be too
horrible, except that at -O0 it wasn't optimized out, and because it wasn't
using the correct base pointer (if there is a VLA), it would try to access a
value from a garbage address.
<rdar://problem/9130540>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128873 91177308-0d34-0410-b5e6-96231b3b80d8
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual registers are
usually copies of the stack pointer:
%vreg75<def> = COPY %ESP; GR32:%vreg75
MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
CALLpcrel32 ...
Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.
The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.
I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.
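A minimal sketch of the coalescing check described above, using hypothetical
types rather than the coalescer's real LiveInterval/VNInfo interfaces:

  #include <vector>

  // One entry per value number of the virtual register, illustration only.
  struct ValueDef {
    bool IsCopy;            // the value is defined by a COPY instruction
    bool SrcIsReservedReg;  // the copy source is a reserved physreg (e.g. ESP)
  };

  struct VirtRegInfo {
    std::vector<ValueDef> Values;
  };

  // Permit joining a copy of a reserved register only when the virtual
  // register has a single value and that value is the copy itself.
  bool canCoalesceReservedCopy(const VirtRegInfo &VReg) {
    return VReg.Values.size() == 1 && VReg.Values[0].IsCopy &&
           VReg.Values[0].SrcIsReservedReg;
  }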
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128845 91177308-0d34-0410-b5e6-96231b3b80d8
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.
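A small standalone sketch of the caching idea; the version tag below stands in
for detecting an unaltered LiveIntervalUnion, and all names are illustrative:

  #include <cstdint>
  #include <unordered_map>
  #include <utility>
  #include <vector>

  struct InterferenceResult {
    std::vector<std::pair<uint64_t, uint64_t>> BusyRanges; // [start, end)
  };

  class QueryCache {
    struct Entry {
      uint64_t UnionVersion = ~0ull;  // version of the union last queried
      InterferenceResult Result;
    };
    std::unordered_map<unsigned, Entry> Entries; // keyed by physical register

    // Placeholder for the expensive interference query.
    InterferenceResult recompute(unsigned /*PhysReg*/) { return {}; }

  public:
    const InterferenceResult &get(unsigned PhysReg, uint64_t CurrentVersion) {
      Entry &E = Entries[PhysReg];
      if (E.UnionVersion != CurrentVersion) {
        E.Result = recompute(PhysReg);   // done once per change
        E.UnionVersion = CurrentVersion;
      }
      return E.Result;                   // reused while the union is unaltered
    }
  };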
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128764 91177308-0d34-0410-b5e6-96231b3b80d8
transformations in target-specific DAG combines without causing DAGCombiner to
delete the same node twice. If you know of a better way to avoid this (see my
next patch for an example), please let me know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128758 91177308-0d34-0410-b5e6-96231b3b80d8
It is using a trivial rewriter that doesn't know how to insert spill code
requested by the standard spiller.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128688 91177308-0d34-0410-b5e6-96231b3b80d8
Turn them into noop KILL instructions instead. This lets the scavenger know when
super-registers are killed and defined.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128645 91177308-0d34-0410-b5e6-96231b3b80d8
This way, shrinkToUses() will ignore the instruction that is about to be
deleted, and we avoid leaving invalid live ranges that SplitKit doesn't like.
Fix a misunderstanding in MachineVerifier about <def,undef> operands. The
<undef> flag is valid on def operands where it has the same meaning as <undef>
on a use operand. It only applies to sub-register defines which also read the
full register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128642 91177308-0d34-0410-b5e6-96231b3b80d8
The real powf() cannot be expected on some hosts (while it is available on
others). For consistency, call std::pow(double,double) instead; otherwise
precision differences could show up as unstable regalloc and stack coloring.
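A minimal sketch of that portable choice (the wrapper name is illustrative):

  #include <cmath>
  #include <cstdio>

  // Always go through the double overload so every host computes the same
  // value, instead of relying on whichever powf()/pow(float) is available.
  inline float portablePowF(float Base, float Exp) {
    return static_cast<float>(
        std::pow(static_cast<double>(Base), static_cast<double>(Exp)));
  }

  int main() { std::printf("%f\n", portablePowF(0.9f, 16.0f)); }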
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128629 91177308-0d34-0410-b5e6-96231b3b80d8
The rematerialized instruction may require a more constrained register class
than the register being spilled. In the test case, the spilled register has been
inflated to the DPR register class, but we are rematerializing a load of the
ssub_0 sub-register which only exists for DPR_VFP2 registers.
The register class is reinflated after spilling, so the conservative choice is
only temporary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128610 91177308-0d34-0410-b5e6-96231b3b80d8
The rewriter can keep track of multiple stack slots in the same register if they
happen to have the same value. When an instruction modifies a stack slot by
defining a register that is mapped to a stack slot, other stack slots in that
register are no longer valid.
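A simplified model of that bookkeeping (illustrative names; the real logic
lives in VirtRegRewriter):

  #include <unordered_map>
  #include <unordered_set>

  // A physical register may be known to hold the value of several stack
  // slots at once; once an instruction redefines that register while
  // writing one slot, the other slot->register mappings become stale.
  class SlotTracker {
    std::unordered_map<unsigned, std::unordered_set<int>> SlotsInReg;

  public:
    void addAvailable(int Slot, unsigned PhysReg) {
      SlotsInReg[PhysReg].insert(Slot);
    }

    // An instruction modifies 'Slot' by defining 'PhysReg': every other
    // slot previously associated with PhysReg is no longer valid.
    void modifySlotViaReg(int Slot, unsigned PhysReg) {
      auto &Slots = SlotsInReg[PhysReg];
      Slots.clear();
      Slots.insert(Slot);   // only the freshly written slot remains valid
    }
  };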
This is a very rare problem, and I don't have a simple test case. I get the
impression that VirtRegRewriter knows it is about to be deleted, inventing a
last opaque problem.
<rdar://problem/9204040>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128562 91177308-0d34-0410-b5e6-96231b3b80d8
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.
For instance, clones may be split even when they were created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128524 91177308-0d34-0410-b5e6-96231b3b80d8
the FailBB dominator is correctly calculated. Believe it or not, there isn't a
functionality change here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128455 91177308-0d34-0410-b5e6-96231b3b80d8
The instruction to be rematerialized may not be the one defining the register
that is being spilled. The traceSiblingValue() function sees through sibling
copies to find the remat candidate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128449 91177308-0d34-0410-b5e6-96231b3b80d8
becomes reachable when it wasn't before). Check to make sure that it's not null
before trying to use it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128434 91177308-0d34-0410-b5e6-96231b3b80d8
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.
The existing interference eviction picks up the slack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128397 91177308-0d34-0410-b5e6-96231b3b80d8
The main register class may have been inflated by live range splitting, so that
register class is not necessarily valid for the snippet instructions.
Use the original register class for the stack slot interval.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128351 91177308-0d34-0410-b5e6-96231b3b80d8
It couldn't be used outside of the file because SDISelAsmOperandInfo
is local to SelectionDAGBuilder.cpp. Making it a static function avoids
a weird linkage dance.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128342 91177308-0d34-0410-b5e6-96231b3b80d8
Correctly terminate the range of register DBG_VALUEs when the register is
clobbered or when the basic block ends.
The code is now ready to deal with variables that are sometimes in a register
and sometimes on the stack. We just need to teach emitDebugLoc to say 'stack
slot'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128327 91177308-0d34-0410-b5e6-96231b3b80d8
The .loc directives don't need labels; that is a leftover from when we created
line number info manually.
Instructions following a DBG_VALUE can share its label since the DBG_VALUE
doesn't produce any code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128284 91177308-0d34-0410-b5e6-96231b3b80d8
Add an assertion to linear scan to prevent it from allocating registers outside
the register class.
<rdar://problem/9183021>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128254 91177308-0d34-0410-b5e6-96231b3b80d8
so the scheduler can't create new interferences on the copies
themselves. Prior to this fix the scheduler could get stuck in a loop
creating copies.
Fixes PR9509.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128164 91177308-0d34-0410-b5e6-96231b3b80d8
Each of these instructions may have a RegsClobberInsn entry that can't be
ignored. Consecutive ranges are coalesced later when DwarfDebug::emitDebugLoc
merges entries.
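A small sketch of that merging step, with illustrative types rather than the
real DwarfDebug entries:

  #include <utility>
  #include <vector>

  struct DebugLocEntry {
    unsigned Begin, End;   // label/offset range covered by the entry
    unsigned Location;     // encoded register or stack location
  };

  // Fold consecutive entries that describe the same location into one.
  void coalesceEntries(std::vector<DebugLocEntry> &Entries) {
    std::vector<DebugLocEntry> Merged;
    for (const DebugLocEntry &E : Entries) {
      if (!Merged.empty() && Merged.back().Location == E.Location &&
          Merged.back().End == E.Begin)
        Merged.back().End = E.End;   // extend the previous identical range
      else
        Merged.push_back(E);
    }
    Entries = std::move(Merged);
  }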
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128155 91177308-0d34-0410-b5e6-96231b3b80d8
I'm tired of doing this manually for each checkout.
If anyone knows a better way to debug isel for non-trivial tests, feel
free to revert and let me know how to do it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128132 91177308-0d34-0410-b5e6-96231b3b80d8
This will extend the ranges of debug info variables in registers until they are
clobbered.
Fix 1: Don't mistake DBG_VALUE instructions referring to incoming arguments on
the stack for DBG_VALUE instructions referring to variables in the frame
pointer. This fixes the gdb test-suite failure.
Fix 2: Don't trace through copies to physical registers setting up call
arguments. These registers are call clobbered, and the source register is more
likely to be a callee-saved register that can be extended through the call
instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128114 91177308-0d34-0410-b5e6-96231b3b80d8
Temporarily reverting these to see if we can get llvm-objdump to link. Hopefully this is not the problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128097 91177308-0d34-0410-b5e6-96231b3b80d8
These ranges get completely jumbled by the post-ra scheduler, and it is not
really reasonable to expect it to make sense of them.
Instead, teach DwarfDebug to notice when user variables in registers are
clobbered, and terminate the ranges there.
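A rough sketch of that clobber check, using simplified stand-ins for the
DwarfDebug bookkeeping:

  #include <map>
  #include <vector>

  struct Instr {
    std::vector<unsigned> Defs;   // physical registers written (clobbered)
    bool IsDbgValue = false;
    unsigned DbgReg = 0;          // register named by a DBG_VALUE
    unsigned Index = 0;           // position within the function
  };

  // Walk the instructions in order; any def of a register that currently
  // describes a user variable closes that variable's location range there.
  void terminateRangesAtClobbers(const std::vector<Instr> &Instrs) {
    std::map<unsigned, unsigned> OpenRangeStart;   // reg -> range start
    for (const Instr &I : Instrs) {
      for (unsigned R : I.Defs)
        OpenRangeStart.erase(R);   // real code records [start, I.Index) here
      if (I.IsDbgValue)
        OpenRangeStart[I.DbgReg] = I.Index;   // start (or restart) a range
    }
    // Anything still open is terminated at the end of the block/function.
  }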
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128045 91177308-0d34-0410-b5e6-96231b3b80d8
the alias of an InstAlias instead of the thing being aliased, because we need
to know the features that are valid for an InstAlias.
This is part of a work-in-progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127986 91177308-0d34-0410-b5e6-96231b3b80d8
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127951 91177308-0d34-0410-b5e6-96231b3b80d8
Proof-of-concept code that code-gens a module to an in-memory MachO object.
This will be hooked up to a run-time dynamic linker library (see: llvm-rtdyld
for similarly conceptual work for that part) which will take the compiled
object and link it together with the rest of the system, providing back to the
JIT a table of available symbols which will be used to respond to the
getPointerTo*() queries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127916 91177308-0d34-0410-b5e6-96231b3b80d8
The llvm.dbg.value intrinsic refers to SSA values, not virtual registers, so we
should be able to extend the range of a value by tracking that value through
register copies. This greatly improves the debug value tracking for function
arguments that for some reason are copied to a second virtual register at the
end of the entry block.
We only extend the debug value range where its register is killed. All original
llvm.dbg.value locations are still respected.
Copies from physical registers are ignored. That should not be a problem since
the entry block already adds DBG_VALUE instructions for the virtual registers
holding the function arguments.
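A simplified sketch of following those copies (illustrative types; the real
pass works on MachineInstr COPYs and kill flags):

  #include <unordered_map>
  #include <vector>

  struct CopyInfo {
    unsigned Src, Dst;     // virtual registers of a COPY
    bool SrcIsKilled;      // the copy kills its source register
    bool SrcIsPhysical;    // copies from physregs are ignored (see above)
  };

  // VarInReg maps a register to the user variable it currently describes.
  void extendThroughCopies(std::unordered_map<unsigned, unsigned> &VarInReg,
                           const std::vector<CopyInfo> &Copies) {
    for (const CopyInfo &C : Copies) {
      if (C.SrcIsPhysical || !C.SrcIsKilled)
        continue;                     // only extend where the register dies
      auto It = VarInReg.find(C.Src);
      if (It == VarInReg.end())
        continue;                     // source doesn't describe a variable
      unsigned Var = It->second;
      VarInReg.erase(It);
      VarInReg[C.Dst] = Var;          // the value now lives in the copy dest
    }
  }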
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127912 91177308-0d34-0410-b5e6-96231b3b80d8
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.
Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.
This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.
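A minimal sketch of the redundant-spill part (illustrative names, not the
InlineSpiller code):

  #include <unordered_map>

  // After a reload, the register and the stack slot hold the same value;
  // spilling that unmodified register back to the same slot adds nothing.
  class SpillTracker {
    std::unordered_map<unsigned, int> RegMirrorsSlot; // reg -> slot it mirrors

  public:
    void noteReload(unsigned Reg, int Slot) { RegMirrorsSlot[Reg] = Slot; }
    void noteRegModified(unsigned Reg)      { RegMirrorsSlot.erase(Reg); }

    // True if a spill of Reg to Slot would be redundant and can be deleted.
    bool isRedundantSpill(unsigned Reg, int Slot) const {
      auto It = RegMirrorsSlot.find(Reg);
      return It != RegMirrorsSlot.end() && It->second == Slot;
    }
  };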
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127868 91177308-0d34-0410-b5e6-96231b3b80d8
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127857 91177308-0d34-0410-b5e6-96231b3b80d8
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127827 91177308-0d34-0410-b5e6-96231b3b80d8
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.
This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127766 91177308-0d34-0410-b5e6-96231b3b80d8
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler, on OS X.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127763 91177308-0d34-0410-b5e6-96231b3b80d8
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.
This is only the analysis part. The information is not used for anything yet.
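A rough standalone sketch of the tracing part of that analysis; ValueOrigin,
SiblingDefs, and traceSibling are illustrative stand-ins, not the real
traceSiblingValue():

  #include <deque>
  #include <unordered_map>
  #include <unordered_set>
  #include <vector>

  struct ValueOrigin {
    bool DefinedByCopy;    // defined by a copy from another sibling register
    unsigned CopySource;   // the sibling register copied from (if so)
  };

  // Walk backwards from the register being spilled, looking through sibling
  // copies until the real (non-copy) definitions of the value are found.
  std::vector<unsigned>
  traceSibling(unsigned SpilledReg,
               const std::unordered_map<unsigned, ValueOrigin> &SiblingDefs) {
    std::vector<unsigned> RealDefs;        // candidates for spill/remat
    std::unordered_set<unsigned> Visited;
    std::deque<unsigned> Work{SpilledReg};
    while (!Work.empty()) {
      unsigned Reg = Work.front();
      Work.pop_front();
      if (!Visited.insert(Reg).second)
        continue;
      auto It = SiblingDefs.find(Reg);
      if (It == SiblingDefs.end())
        continue;                          // nothing known about this sibling
      if (It->second.DefinedByCopy)
        Work.push_back(It->second.CopySource);   // look through the copy
      else
        RealDefs.push_back(Reg);           // a real def of the original value
    }
    return RealDefs;
  }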
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127698 91177308-0d34-0410-b5e6-96231b3b80d8