The complement interval may overlap the other intervals created, so use
a separate LiveRangeCalc instance to compute its live range.
A LiveRangeCalc instance can only be shared among non-overlapping
intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139603 91177308-0d34-0410-b5e6-96231b3b80d8
SplitKit will soon need two copies of these data structures, and the
algorithms will also be useful when LiveIntervalAnalysis becomes
independent of LiveVariables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139572 91177308-0d34-0410-b5e6-96231b3b80d8
Splitting a landing pad takes considerable care because of PHIs and other
nasties. The problem is that the jump table needs to jump to the landing pad
block. However, the landing pad block can be jumped to only by an invoke
instruction. So we clone the landingpad instruction into its own basic block
and have the invoke jump there instead. The successor of the landingpad
instruction's block then becomes the target for the jump table.
But because of PHI nodes, we need to create another basic block for the jump
table to jump to. This is definitely a hack, because the values for the PHI
nodes may not be defined on the edge from the jump table. But that's okay,
because the jump table is simply a construct to mimic what is happening in the
CFG. So the values are mysteriously there, even though there is no value for the
PHI from the jump table's edge (hence calling this a hack).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139545 91177308-0d34-0410-b5e6-96231b3b80d8
SplitKit always computes a complement live range to cover the places
where the original live range was live, but no explicit region has been
allocated.
Currently, the complement live range is created to be as small as
possible - it never overlaps any of the regions. This minimizes
register pressure, but if the complement is going to be spilled anyway,
that is not very important. The spiller will eliminate redundant
spills, and hoist others by making the spill slot live range overlap
some of the regions created by splitting. Stack slots are cheap.
This patch adds the interface to enable spill modes in SplitKit. In
spill mode, SplitKit assumes that the complement is going to be spilled,
so it allows the complement to overlap regions in order to avoid
back-copies.
By doing some of the spiller's work early, the complement live range
becomes simpler. In some cases, it can become much simpler because no
extra PHI-defs are required. This will speed up both splitting and
spilling.
This is only the interface to enable spill modes, no implementation yet.
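Roughly, such an interface could take the following shape; the enum values and
the reset() signature below are illustrative assumptions, not necessarily what
the patch adds verbatim:

// Hypothetical sketch of a spill-mode knob for SplitKit-style splitting.
class SplitEditorSketch {
public:
  enum ComplementSpillMode {
    SM_Partition, // Minimal complement that never overlaps the regions.
    SM_Size,      // Complement will spill; overlap regions to minimize the
                  // number of back-copies inserted.
    SM_Speed      // Overlap regions to minimize the expected execution
                  // frequency of the inserted back-copies.
  };

  // Callers pick a mode when starting a new split. SM_Partition keeps the
  // current behavior where the complement stays as small as possible.
  void reset(ComplementSpillMode SM = SM_Partition) { SpillMode = SM; }

private:
  ComplementSpillMode SpillMode = SM_Partition;
};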
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139500 91177308-0d34-0410-b5e6-96231b3b80d8
In some cases such as interpreters using indirectbr, the CFG can be very
complicated, and live range splitting may be forced to insert a large
number of phi-defs. When that happens, traceSiblingValue can spend a
lot of time zipping around in the CFG looking for defs and reloads.
This patch causes more information to be cached in SibValues, and the
cached values are used to terminate searches early. This speeds up
spilling by 20x in one interpreter test case. For more typical code,
this is just a 10% speedup of spilling.
The previous version had bugs that caused miscompilations. They have
been fixed.
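The caching idea, as a self-contained sketch with hypothetical types (this is
not SplitKit's actual SibValues code): classify each block the first time a
trace walks through it, and let later traces stop as soon as they hit a cached
answer.

#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical CFG block used only for this illustration.
struct Block {
  bool HasDef = false;        // A def or reload of the value lives here.
  std::vector<Block *> Preds; // Predecessor blocks.
};

// Answer "is a def/reload reachable walking backwards from Start?", caching
// per-block answers so repeated queries terminate early instead of zipping
// around the whole CFG again.
bool traceValue(Block *Start, std::unordered_map<Block *, bool> &Cache) {
  std::vector<Block *> Work{Start}, Visited;
  std::unordered_set<Block *> Seen{Start};
  bool Found = false;

  while (!Work.empty()) {
    Block *B = Work.back();
    Work.pop_back();

    auto C = Cache.find(B);
    if (C != Cache.end()) {
      if (C->second) { Found = true; break; } // Cached hit: stop searching.
      continue;                               // Cached dead end: prune.
    }

    Visited.push_back(B);
    if (B->HasDef) { Found = true; break; }

    for (Block *P : B->Preds)
      if (Seen.insert(P).second)
        Work.push_back(P);
  }

  if (Found)
    Cache[Start] = true;     // Only the query root is known for certain.
  else
    for (Block *B : Visited) // Nothing reaches Start, so nothing reaches the
      Cache[B] = false;      // blocks we walked through either.

  return Found;
}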
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139378 91177308-0d34-0410-b5e6-96231b3b80d8
In some cases such as interpreters using indirectbr, the CFG can be very
complicated, and live range splitting may be forced to insert a large
number of phi-defs. When that happens, traceSiblingValue can spend a
lot of time zipping around in the CFG looking for defs and reloads.
This patch causes more information to be cached in SibValues, and the
cached values are used to terminate searches early. This speeds up
spilling by 20x in one interpreter test case. For more typical code,
this is just a 10% speedup of spilling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139247 91177308-0d34-0410-b5e6-96231b3b80d8
(The fix for the related failures on x86 is going to be nastier because we actually need Acquire memoperands attached to the atomic load instrs, etc.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139221 91177308-0d34-0410-b5e6-96231b3b80d8
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons. Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all"). Patch mostly by
Nadav Rotem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139159 91177308-0d34-0410-b5e6-96231b3b80d8
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC. While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust.trampoline is needed
in a different (nested) function. To get around this, llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function. Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!). Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC. Patch mostly by Sanjoy Das.
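For reference, a hedged sketch of how a front end might emit the two
intrinsics with IRBuilder. The header paths and helper names follow a newer
LLVM tree and are assumptions relative to the code this commit touches:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Initialize a trampoline and take its adjusted entry address. Tramp is the
// trampoline memory, NestedFn the nested function, and Nest the static-chain
// value; all three are i8* here for simplicity.
Value *emitTrampoline(IRBuilder<> &B, Module &M, Value *Tramp,
                      Value *NestedFn, Value *Nest) {
  Function *Init = Intrinsic::getDeclaration(&M, Intrinsic::init_trampoline);
  Function *Adjust =
      Intrinsic::getDeclaration(&M, Intrinsic::adjust_trampoline);

  // init.trampoline writes the trampoline code; it returns nothing.
  B.CreateCall(Init, {Tramp, NestedFn, Nest});

  // adjust.trampoline yields the callable address. Typically the parent
  // function does this part, and the nested function makes the call.
  return B.CreateCall(Adjust, {Tramp});
}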
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139140 91177308-0d34-0410-b5e6-96231b3b80d8
If we have a chain of zext -> assert_zext -> zext -> use, the first zext
would get simplified away because of the later zext, and then the later
zext would get simplified away because of the assert. The solution is to
teach SimplifyDemandedBits that assert_zext demands all of the high bits
of its input, rather than only those demanded by its users. No testcase
because the only example I have manifests as llvm-gcc miscompiling LLVM,
and I haven't found a smaller case that reproduces this problem.
Fixes <rdar://problem/10063365>.
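The essence of the rule, as a self-contained sketch using plain 64-bit masks
instead of the DAG's APInt machinery: when computing the bits demanded from
the input of an assert_zext, force all bits at and above the asserted width
to be demanded, no matter what the users ask for.

#include <cassert>
#include <cstdint>

// AssertedWidth is the width the node asserts is already zero-extended;
// UserDemanded is the mask of bits the node's users actually need.
uint64_t demandedFromAssertZextInput(unsigned BitWidth, unsigned AssertedWidth,
                                     uint64_t UserDemanded) {
  assert(AssertedWidth < BitWidth && BitWidth <= 64);

  // All bits from AssertedWidth up to BitWidth-1.
  uint64_t HighBits = ~((UINT64_C(1) << AssertedWidth) - 1);
  if (BitWidth < 64)
    HighBits &= (UINT64_C(1) << BitWidth) - 1;

  // Without the high bits, a zext feeding the assert could be simplified away
  // even though the assertion relies on those bits being zero.
  return UserDemanded | HighBits;
}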
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@139059 91177308-0d34-0410-b5e6-96231b3b80d8
to be unreliable on platforms which require memcpy calls, and it is
complicating broader legalize cleanups. It is hoped that these cleanups
will make memcpy byval easier to implement in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138977 91177308-0d34-0410-b5e6-96231b3b80d8
- On COFF the .lcomm directive has an alignment argument.
- On ELF we fall back to .local + .comm
Based on a patch by NAKAMURA Takumi.
Fixes PR9337, PR9483 and PR10128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138976 91177308-0d34-0410-b5e6-96231b3b80d8
An instruction may define part of a register where the other bits are
undefined. In that case, it is safe to rematerialize the instruction.
For example:
%vreg2:ssub_0<def> = VLDRS <cp#0>, 0, pred:14, pred:%noreg, %vreg2<imp-def>
The extra <imp-def> operand indicates that the instruction does not read
the other parts of the virtual register, so a remat is safe.
This patch simply allows multiple def operands for the virtual register.
It is MI->readsVirtualRegister() that determines whether the instruction
depends on a previous value of the register, in which case remat is
impossible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138953 91177308-0d34-0410-b5e6-96231b3b80d8
The problem is fixed for all register allocators by r138944, so this
patch is no longer necessary.
<rdar://problem/10032939>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138945 91177308-0d34-0410-b5e6-96231b3b80d8
An instruction that redefines only part of a larger register can never
be rematerialized since the virtual register value depends on the old
value in other parts of the register.
This was fixed for the inline spiller in r138794. This patch fixes the
problem for all register allocators, and includes a small test case.
<rdar://problem/10032939>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138944 91177308-0d34-0410-b5e6-96231b3b80d8
Added canClobberReachingPhysRegUse() to handle a particular pattern in
which a two-address instruction could be forced to interfere with
EFLAGS, causing a compare to be unnecessarily cloned.
Fixes rdar://problem/5875261
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138924 91177308-0d34-0410-b5e6-96231b3b80d8
Emit a repeated sequence of bytes using .zero. This saves an enormous
amount of asm file space for certain programs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138864 91177308-0d34-0410-b5e6-96231b3b80d8
X86. Modify the pass added in the previous patch to call this new
code.
The new prologues generated will call a libgcc routine (__morestack)
to allocate more stack space from the heap when required.
Patch by Sanjoy Das.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138812 91177308-0d34-0410-b5e6-96231b3b80d8
Add an instruction flag, hasPostISelHook, which tells the pre-RA scheduler to
call a target hook to adjust the instruction. For ARM, this is used to
adjust instructions which may be setting the 's' flag. ADC, SBC, RSB, and RSC
instructions have an implicit def of CPSR (required since ARM now uses a CPSR
physical register dependency rather than "glue"). If the carry flag is used,
then the
target hook will *fill in* the optional operand with CPSR. Otherwise, the hook
will remove the CPSR implicit def from the MachineInstr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138810 91177308-0d34-0410-b5e6-96231b3b80d8
I don't currently have a good testcase for this; will try to get one
tomorrow. <rdar://problem/10032939>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138794 91177308-0d34-0410-b5e6-96231b3b80d8
I don't really like the patterns, but I'm having trouble coming up with a
better way to handle them.
I plan on making other targets use the same legalization
ARM-without-memory-barriers is using... it's not especially efficient, but
if anyone cares, it's not that hard to fix for a given target if there's
some better lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138621 91177308-0d34-0410-b5e6-96231b3b80d8
A value of -1 at a call site tells the personality function that this call isn't
handled by the current function. Since the ResumeInsts are converted to calls to
_Unwind_SjLj_Resume, add a (volatile) store of -1 to its 'call site'.
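A hedged IRBuilder sketch of the store being described; the call-site slot
and its i32 type are assumptions made for illustration:

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Before the call to _Unwind_SjLj_Resume, record -1 in the function context's
// call-site slot so the personality routine knows this call is not handled in
// the current function.
void storeUnhandledCallSite(IRBuilder<> &B, Value *CallSiteSlot) {
  Value *MinusOne = B.getInt32(-1);
  // Volatile, so later passes don't delete or reorder the bookkeeping store.
  B.CreateStore(MinusOne, CallSiteSlot, /*isVolatile=*/true);
}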
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138416 91177308-0d34-0410-b5e6-96231b3b80d8
This is not necessarily the first or dominating use of the EH values. The IR
breaks if it's not. So replace the specific value in the instruction with the
new value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138406 91177308-0d34-0410-b5e6-96231b3b80d8
The invoke could be at the end of the entry block. If it's the only one, then we
won't process all of the landingpad instructions correctly. This code is
currently ugly, but should be made much nicer once the new EH switch is thrown.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138397 91177308-0d34-0410-b5e6-96231b3b80d8
value, we insert a load of the exception object and selector object from memory,
which is where it actually resides. If it's used by a PHI node, we follow that
to where it is being used. Eventually, all landingpad instructions should have
no uses. Any PHI nodes that were associated with those landingpads should be
removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138302 91177308-0d34-0410-b5e6-96231b3b80d8
the intent seems to be to terminate even in Release builds, just use abort()
directly.
If program flow ever reaches a __builtin_unreachable (which llvm_unreachable is
#define'd to on newer GCCs) then the program's behavior is undefined.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138068 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, a partial register def is treated as reading the
super-register unless it also defines the full register like this:
%vreg110:sub_32bit<def> = COPY %vreg77:sub_32bit, %vreg110<imp-def>
This patch also uses the <undef> flag on partial defs to recognize
non-reading operands:
%vreg110:sub_32bit<def,undef> = COPY %vreg77:sub_32bit
This fixes a subtle bug in RegisterCoalescer where LIS->shrinkToUses
would treat a coalesced copy as still reading the register, extending
the live range artificially.
My test case only works when I disable DCE so that a dead copy is left for
RegisterCoalescer, so I am not including it.
<rdar://problem/9967101>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138018 91177308-0d34-0410-b5e6-96231b3b80d8
The landingpad instruction is lowered into the EXCEPTIONADDR and EHSELECTION
SDNodes. The information from the landingpad instruction is harvested by the
'AddLandingPadInfo' function. The new EH uses the current EH scheme in the
back-end. This will change once we switch over to the new scheme. (Reviewed by
Jakob!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137880 91177308-0d34-0410-b5e6-96231b3b80d8
This generates the SDNodes for the new exception handling scheme. It takes the
two values coming from the landingpad instruction and assigns them to the
EXCEPTIONADDR and EHSELECTION nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137873 91177308-0d34-0410-b5e6-96231b3b80d8
Things are much saner now. We no longer need to modify the landing pads, because
of the invariants we impose upon them. The only thing DwarfEHPrepare needs to do
is convert the 'resume' instruction into a call to '_Unwind_Resume'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137855 91177308-0d34-0410-b5e6-96231b3b80d8
MDNodes graph structure so that the compile unit keeps track of important
MDNodes, and update the DWARF writer to process MDNodes top-down instead of
bottom-up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137778 91177308-0d34-0410-b5e6-96231b3b80d8
When a variable is inlined in multiple places, the abstract variable keeps the
name, location, type, etc., and all other concrete instances of the variable
refer directly to the abstract variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137637 91177308-0d34-0410-b5e6-96231b3b80d8
be illegal, even if the requested vector type is legal. Testcase is one of the
disabled ARM tests in the vector-select patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137562 91177308-0d34-0410-b5e6-96231b3b80d8
This implements the 'landingpad' instruction. It's used to indicate that a basic
block is a landing pad. There are several restrictions on its use (see
LangRef.html for more detail). These restrictions allow the exception handling
code to gather the information it needs in a much more sane way.
This patch has the definition, implementation, C interface, parsing, and bitcode
support in it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137501 91177308-0d34-0410-b5e6-96231b3b80d8
The Query class now holds two iterators instead of an InterferenceResult
instance. The iterators are used as bookmarks for repeated
collectInterferingVRegs calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137380 91177308-0d34-0410-b5e6-96231b3b80d8
The InterferenceResult iterator turned out to be less important than we
thought it would be. LiveIntervalUnion clients want higher level
information, like the list of interfering virtual registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137346 91177308-0d34-0410-b5e6-96231b3b80d8
lower XMM register gets in first. This will allow the SUBREG pattern to
eliminate the first vector insertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137310 91177308-0d34-0410-b5e6-96231b3b80d8
Coalescing can remove copy-like instructions with sub-register operands
that constrained the register class. Examples are:
x86: GR32_ABCD:sub_8bit_hi -> GR32
arm: DPR_VFP2:ssub0 -> DPR
Recompute the register class of any virtual registers that are used by
fewer instructions after coalescing.
This affects code generation for the Cortex-A8 where we use NEON
instructions for f32 operations, c.f. fp_convert.ll:
vadd.f32 d16, d1, d0
vcvt.s32.f32 d0, d16
The register allocator is now free to use d16 for the temporary, and
that comes first in the allocation order because it doesn't interfere
with any s-registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137133 91177308-0d34-0410-b5e6-96231b3b80d8
This function doesn't have anything to do with spill weights, and MRI
already has functions for manipulating the register class of a virtual
register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137123 91177308-0d34-0410-b5e6-96231b3b80d8
These methods are target-independent since they simply scan the
memory operands. They can live in TargetInstrInfoImpl.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137063 91177308-0d34-0410-b5e6-96231b3b80d8
All new local ranges are marked as RS_New now, so there is no need to
attempt splitting of RS_Spill ranges any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137002 91177308-0d34-0410-b5e6-96231b3b80d8
The local ranges created get to stay in the RS_New stage, just like for
local and region splitting.
This gives tryLocalSplit a bit more freedom the first time it sees one
of these new local ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137001 91177308-0d34-0410-b5e6-96231b3b80d8
These functions are no longer used, and they are easily replaced with a
loop calling shouldSplitSingleBlock and splitSingleBlock.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136993 91177308-0d34-0410-b5e6-96231b3b80d8
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go
through a SmallPtrSet any more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136992 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, we don't create a live range for a single instruction in a
basic block; the spiller does that anyway. However, when splitting a
live range that belongs to a proper register sub-class, inserting these
extra COPY instructions completely removes the constraints from the
remainder interval, and it may be allocated from the larger super-class.
The spiller will mop up these small live ranges if we end up spilling
anyway. It calls them snippets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136989 91177308-0d34-0410-b5e6-96231b3b80d8
Some instructions require restricted register classes, but most of the
time that doesn't affect register allocation. For example, some
instructions don't work with the stack pointer, but that is a reserved
register anyway.
Sometimes it matters; GR32_ABCD only has 4 allocatable registers. For
such a proper sub-class, the register allocator should try to enable
register class inflation since that makes more registers available for
allocation.
Make sure only legal super-classes are considered. For example, tGPR is
not a proper sub-class in Thumb mode, but in ARM mode it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136981 91177308-0d34-0410-b5e6-96231b3b80d8
The old code would look at kills and defs in one pass over the
instruction operands, causing problems with this code:
%R0<def>, %CPSR<def,dead> = tLSLri %R5<kill>, 2, pred:14, pred:%noreg
%R0<def>, %CPSR<def,dead> = tADDrr %R4<kill>, %R0<kill>, pred:14, pred:%noreg
The last instruction kills and redefines %R0, so it is still live after
the instruction.
This caused a register scavenger crash when compiling 483.xalancbmk for
armv6. I am not including a test case because it requires too much bad
luck to expose this old bug.
First you need to convince the register allocator to use %R0 twice on
the tADDrr instruction, then you have to convince BranchFolding to do
something that causes it to run the register scavenger on the bad block.
<rdar://problem/9898200>
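A self-contained sketch of the two-pass rule (hypothetical operand and
liveness types, not the scavenger's actual code): apply all kills first, then
all defs, so a register that is both killed and redefined by one instruction
stays live afterwards.

#include <set>
#include <vector>

struct Operand {
  unsigned Reg;
  bool IsDef;
  bool IsKill;
};

void updateLiveness(std::set<unsigned> &Live,
                    const std::vector<Operand> &Ops) {
  // Pass 1: remove registers whose last use (kill) is in this instruction.
  for (const Operand &Op : Ops)
    if (!Op.IsDef && Op.IsKill)
      Live.erase(Op.Reg);

  // Pass 2: add every defined register. A register like %R0 above is killed
  // as an input but redefined as an output, so it ends up live.
  for (const Operand &Op : Ops)
    if (Op.IsDef)
      Live.insert(Op.Reg);
}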
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136973 91177308-0d34-0410-b5e6-96231b3b80d8
inlined variable, based on the discussion in PR10542.
This explodes the runtime of several passes down the pipeline due to
a large number of "copies" remaining live across a large function. This
only shows up with both debug and opt, but when it does it creates
a many-minute compile when self-hosting LLVM+Clang. There are several
other cases that show these types of regressions.
All of this is tracked in PR10542, and progress is being made on fixing
the issue. Once it's addressed, the change can be re-instated, but until then
this restores the performance for self-hosting and other opt+debug builds.
Devang, let me know if this causes any trouble, or impedes fixing it in
any way, and thanks for working on this!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136953 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible to have multiple DBG_VALUEs for the same variable:
32L TEST32rr %vreg0<kill>, %vreg0, %EFLAGS<imp-def>; GR32:%vreg0
DBG_VALUE 2, 0, !"i"
DBG_VALUE %noreg, %0, !"i"
When that happens, keep the last one instead of the first.
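A tiny self-contained sketch of the keep-the-last-one rule, with hypothetical
types standing in for DBG_VALUE machine instructions:

#include <map>
#include <string>
#include <vector>

struct DebugValue {
  std::string Variable; // e.g. "i"
  int Location;         // Placeholder for the described location or constant.
};

// When several DBG_VALUEs describe the same variable at the same point, the
// later entry overwrites the earlier one, so the last description wins.
void collectLatest(const std::vector<DebugValue> &Stream,
                   std::map<std::string, DebugValue> &Latest) {
  for (const DebugValue &DV : Stream)
    Latest[DV.Variable] = DV;
}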
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136842 91177308-0d34-0410-b5e6-96231b3b80d8
This helps generate better code in functions with high register
pressure.
The previous version of compact region splitting caused regressions
because the regions were a bit too large. A stronger negative bias
applied in r136832 fixed this problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136836 91177308-0d34-0410-b5e6-96231b3b80d8
Apply twice the negative bias on transparent blocks when computing the
compact regions. This excludes loop backedges from the region when only
one of the loop blocks uses the register.
Previously, we would include the backedge in the region if the loop
preheader and the loop latch both used the register, but the loop header
didn't.
When both the header and latch blocks use the register, we still keep it
live on the backedge.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136832 91177308-0d34-0410-b5e6-96231b3b80d8
With a 'FirstDef' field right there, it is very confusing that FirstUse
refers to an instruction that may be a def.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136739 91177308-0d34-0410-b5e6-96231b3b80d8
This is either an invalid SlotIndex, or valno->def for the first value
defined inside the block. PHI values are not counted as defined inside
the block.
The FirstDef field will be used when estimating the cost of spilling
around a block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136736 91177308-0d34-0410-b5e6-96231b3b80d8
The PrefBoth constraint is used for blocks that ideally want a live-in
value both on the stack and in a register. This would be used by a block
that has a use before interference forces a spill.
Secondly, add the ChangesValue flag to BlockConstraint. This tells
SpillPlacement if a live-in value on the stack can be reused as a
live-out stack value for free. If the block redefines the virtual
register, a spill would be required for that.
This extra information will be used by SpillPlacement to more accurately
calculate spill costs when a value can exist both on the stack and in a
register.
The simplest example is a basic block that reads the virtual register,
but doesn't change its value. Spilling around such a block requires a
reload, but no spill in the block.
The spiller already knows this, but the spill placer doesn't. That can
sometimes lead to suboptimal regions.
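A sketch of what the extended per-block constraint record could look like;
the field and enumerator names follow the description above but are
assumptions, not necessarily the exact declarations:

// Per-block constraint for spill placement, extended with PrefBoth and
// ChangesValue.
struct BlockConstraint {
  enum BorderConstraint {
    DontCare,  // Block doesn't care about the value at this border.
    PrefReg,   // Block prefers the value in a register.
    PrefSpill, // Block prefers the value on the stack.
    PrefBoth,  // Block wants the live-in value both in a register and on the
               // stack, e.g. a use before interference forces a spill.
    MustSpill  // Interference forces the value onto the stack.
  };

  unsigned Number;        // Basic block number.
  BorderConstraint Entry; // Constraint on the live-in value.
  BorderConstraint Exit;  // Constraint on the live-out value.

  // True if the block redefines the virtual register, so a live-in stack
  // value cannot be reused as a free live-out stack value.
  bool ChangesValue;
};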
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136731 91177308-0d34-0410-b5e6-96231b3b80d8
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136711 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the 'resume' instruction class, IR parsing, and bitcode reading and
writing. The 'resume' instruction resumes propagation of an existing (in-flight)
exception whose unwinding was interrupted with a 'landingpad' instruction (to be
added later).
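For illustration, a hedged IRBuilder sketch of emitting a 'resume'; it assumes
the aggregate value produced by the forthcoming landingpad instruction is
already at hand:

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Re-raise an in-flight exception from a landing pad. LPadValue is the
// {exception pointer, selector} aggregate.
ResumeInst *emitResume(IRBuilder<> &B, Value *LPadValue) {
  // 'resume' is a terminator; nothing may follow it in the block.
  return B.CreateResume(LPadValue);
}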
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136589 91177308-0d34-0410-b5e6-96231b3b80d8
This includes registers like EFLAGS and ST0-ST7. We don't check for
liveness issues in the verifier and scavenger because registers will
never be allocated from these classes.
While in SSA form, we do care about the liveness of unallocatable
unreserved registers. Liveness of EFLAGS and ST0 needs to be correct for
MachineDCE and MachineSinking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136541 91177308-0d34-0410-b5e6-96231b3b80d8
This flag is true from isel to register allocation when the machine
function is required to be in SSA form. The TwoAddressInstructionPass
and PHIElimination passes clear the flag.
The SSA flag will be used by the machine code verifier to check for SSA
form, and eventually an assertion can enforce it in +Asserts builds.
This will catch the common target error of creating machine code with
multiple defs of a virtual register.
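As a hedged sketch, the check could amount to the following; the iteration
style and helper names follow a newer tree and should be treated as
assumptions rather than the verifier's actual code:

#include "llvm/ADT/DenseMap.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"

using namespace llvm;

// While the function is still flagged as SSA, no virtual register may have
// more than one def.
bool hasSingleVRegDefs(const MachineFunction &MF) {
  const MachineRegisterInfo &MRI = MF.getRegInfo();
  if (!MRI.isSSA())
    return true; // TwoAddress / PHIElimination have already cleared the flag.

  DenseMap<unsigned, unsigned> DefCount;
  for (const MachineBasicBlock &MBB : MF)
    for (const MachineInstr &MI : MBB)
      for (const MachineOperand &MO : MI.operands())
        if (MO.isReg() && MO.isDef() &&
            TargetRegisterInfo::isVirtualRegister(MO.getReg()) &&
            ++DefCount[MO.getReg()] > 1)
          return false; // Second def of a virtual register breaks SSA form.
  return true;
}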
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136532 91177308-0d34-0410-b5e6-96231b3b80d8
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136457 91177308-0d34-0410-b5e6-96231b3b80d8
specified in the same file that the library itself is created. This is
more idiomatic for CMake builds, and also allows us to correctly specify
dependencies that are missed due to bugs in the GenLibDeps perl script,
or change from compiler to compiler. On Linux, this returns CMake to
a place where it can reliably rebuild several targets of LLVM.
I have tried not to change the dependencies from the ones in the current
auto-generated file. The only places I've really diverged are in places
where I was seeing link failures, and added a dependency. The goal of
this patch is not to start changing the dependencies, merely to move
them into the correct location, and an explicit form that we can control
and change when necessary.
This also removes a serialization point in the build because we don't
have to scan all the libraries before we begin building various tools.
We no longer have a step of the build that regenerates a file inside the
source tree. A few other associated cleanups fall out of this.
This isn't really finished yet though. After talking to dgregor he urged
switching to a single CMake macro to construct libraries with both
sources and dependencies in the arguments. Migrating from the two macros
to that style will be a follow-up patch.
Also, llvm-config is still generated with GenLibDeps.pl, which means it
still has slightly buggy dependencies. The internal CMake
'llvm-config-like' macro uses the correct explicitly specified
dependencies however. A future patch will switch llvm-config generation
(when using CMake) to be based on these deps as well.
This may well break Windows. I'm getting a machine set up now to dig
into any failures there. If anyone can chime in with problems they see
or ideas of how to solve them for Windows, much appreciated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136433 91177308-0d34-0410-b5e6-96231b3b80d8
This generates the correct SDNodes for the landingpad instruction. It assumes
that the result of the landingpad instruction has at least two values: the
first value is a pointer to the exception object, and the second value is the
"selector."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136430 91177308-0d34-0410-b5e6-96231b3b80d8
AddLandingPadInfo takes a landingpad instruction and grabs all of the
information from it that it needs for EH table generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136429 91177308-0d34-0410-b5e6-96231b3b80d8
'atomicrmw' instructions, which allow representing all the current atomic
rmw intrinsics.
The allowed operands for these instructions are heavily restricted at the
moment; we can probably loosen it a bit, but supporting general
first-class types (where it makes sense) might get a bit complicated,
given how SelectionDAG works.
As an initial cut, these operations do not support specifying an alignment,
but it would be possible to add if we think it's useful. Specifying an
alignment lower than the natural alignment would be essentially
impossible to support on anything other than x86, but specifying a greater
alignment would be possible. I can't think of any useful optimizations which
would use that information, but maybe someone else has ideas.
Optimizer/codegen support coming soon.
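A hedged IRBuilder sketch of creating one of these operations, an atomic
fetch-and-add on an i32; the alignment and ordering parameters follow a newer
IRBuilder signature, so treat the exact parameter list as an assumption:

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Atomically add Amount to the i32 at Ptr; the instruction yields the value
// that was in memory before the operation. Natural 4-byte alignment and
// sequential consistency assumed.
Value *emitAtomicAdd(IRBuilder<> &B, Value *Ptr, Value *Amount) {
  return B.CreateAtomicRMW(AtomicRMWInst::Add, Ptr, Amount, MaybeAlign(4),
                           AtomicOrdering::SequentiallyConsistent);
}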
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136404 91177308-0d34-0410-b5e6-96231b3b80d8
Code like that would only be produced by bugpoint, but we should still
handle it correctly.
When a register is defined by a REG_SEQUENCE of undefs, the register
itself is undef. Previously, we would create a register with uses but no
defs.
Fixes part of PR10520.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136401 91177308-0d34-0410-b5e6-96231b3b80d8
There are two conflicting strategies in play:
- Under high register pressure, we want to assign large live ranges
first. Smaller live ranges are easier to place afterwards.
- Live range splitting is guided by interference, so splitting should be
deferred until interference is as realistic as possible.
With the recent changes to the live range stages, and with compact
regions enabled, it is less traumatic to split a live range too early.
If some of the split products were too big, they can often be split
again.
By reversing the RS_Split order, we get this queue order:
1. Normal live ranges, large to small.
2. RS_Split live ranges, large to small.
The large-to-small order improves RAGreedy's puzzle solving skills under
high register pressure. It may cause a bit more iterated splitting, but
we handle that better now.
With this change, -compact-regions is mostly an improvement on SPEC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136388 91177308-0d34-0410-b5e6-96231b3b80d8
When splitting global live ranges, it is now possible to split for
multiple destination intervals at once. Previously, we only had the main
and stack intervals.
Each edge bundle is assigned to a split candidate, and splitAroundRegion
will insert copies between the candidate intervals and the stack
interval as needed.
The multi-way splitting is used to split around compact regions when
enabled with -compact-regions. The best candidate register still gets
all the bundles it wants, but everything outside the main interval is
first split around compact regions before we create single-block
intervals.
Compact region splitting still causes some regressions, so it is not
enabled by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136186 91177308-0d34-0410-b5e6-96231b3b80d8
These copies would coalesce easily, but the resulting value would be
defined by a deleted instruction. Now we also remove the undefined value
number from the destination register.
This fixes PR10503.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136174 91177308-0d34-0410-b5e6-96231b3b80d8
When dead code elimination deletes a PHI value, the virtual register may
split into multiple connected components. In that case, revert each
component to the RS_Assign stage.
The new components are guaranteed to be smaller (the original value
numbers are distributed among the components), so this will always be
making progress. The components are now allowed to evict other live
ranges or be split again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136034 91177308-0d34-0410-b5e6-96231b3b80d8
This is just a LangRef entry and reading/writing/memory representation; optimizer+codegen support coming soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136009 91177308-0d34-0410-b5e6-96231b3b80d8
This mechanism already exists, but the RS_Split2 stage makes it clearer.
When live range splitting creates ranges that may not be making
progress, they are marked RS_Split2 instead of RS_New. These ranges may
be split again, but only in a way that can be proven to make progress.
For local ranges, that means they must be split into ranges used by
strictly fewer instructions.
For global ranges, region splitting is bypassed and the RS_Split2
ranges go straight to per-block splitting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135912 91177308-0d34-0410-b5e6-96231b3b80d8
The stage is used to control where a live range is going, not where it
is coming from. Live ranges created by splitting will usually be marked
RS_New, but some are marked RS_Spill to avoid wasting time trying to
split them again.
The old RS_Global and RS_Local stages are merged - they are really the
same thing for local and global live ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135911 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes PR10463. A two-address instruction with an <undef> use
operand was incorrectly rewritten so the def and use no longer used the
same register, violating the tie constraint.
Fix this by always rewriting <undef> operands with the register a def
operand would use.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135885 91177308-0d34-0410-b5e6-96231b3b80d8
This method computes the edge bundles that should be live when splitting
around a compact region. This is independent of interference.
The function returns false if the live range was already a compact
region, or the compact region doesn't have any live bundles - it would
be the same as splitting around basic blocks.
Compact regions are computed using the normal spill placement code. We
pretend there is interference in all live-through blocks that don't use
the live range. This removes all edges from the Hopfield network used
for spill placement, so it converges instantly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135847 91177308-0d34-0410-b5e6-96231b3b80d8
If there is no interference and no last split point, we cannot
enterIntvBefore(Stop) - that function needs a real instruction.
Use enterIntvAtEnd instead for that very easy case.
This code doesn't currently run, it is needed by multi-way splitting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135846 91177308-0d34-0410-b5e6-96231b3b80d8
A split candidate can have a null PhysReg which means that it doesn't
map to a real interference pattern. Instead, pretend that all through
blocks have interference.
This makes it possible to generate compact regions where the live range
doesn't go through blocks that don't use it. The live range will still
be live between directly connected blocks with uses.
Splitting around a compact region tends to produce a live range with a
high spill weight, so it may evict a less dense live range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135845 91177308-0d34-0410-b5e6-96231b3b80d8
This method matches addLinks - all the listed blocks are considered to
have interference, so they add a negative bias to their bundles.
This could also be done by addConstraints, but that requires building a
separate BlockConstraint array.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135844 91177308-0d34-0410-b5e6-96231b3b80d8
There is still a bit more refactoring left to do in Targets. But we are now very
close to fixing all the layering issues in MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135611 91177308-0d34-0410-b5e6-96231b3b80d8
- Introduce JITDefault code model. This tells targets to set a different
default code model for JIT. This eliminates the ugly hack in TargetMachine
where the code model is changed after construction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135580 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLoweringObjectFileImpl down to MCObjectFileInfo.
TargetAsmInfo is down to one last method. It's *almost* gone!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135569 91177308-0d34-0410-b5e6-96231b3b80d8
(including compilation, assembly). Move relocation model Reloc::Model from
TargetMachine to MCCodeGenInfo so it's accessible even without TargetMachine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135468 91177308-0d34-0410-b5e6-96231b3b80d8
errors like the one corrected by r135261. Migrate all LLVM callers of the old
constructor to the new one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135431 91177308-0d34-0410-b5e6-96231b3b80d8
to MCRegisterInfo. Also initialize the mapping at construction time.
This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135424 91177308-0d34-0410-b5e6-96231b3b80d8
When splitting a live range immediately before an LDR_POST instruction
that redefines the address register, make sure to use the correct value
number in leaveIntvBefore.
We need the value number entering the instruction.
<rdar://problem/9793765>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135413 91177308-0d34-0410-b5e6-96231b3b80d8
When trying to rematerialize a value before an instruction that has an
early-clobber redefine of the virtual register, make sure to look up the
correct value number.
Early-clobber defs are moved one slot back, so getBaseIndex is needed to
find the used value number.
Bugpoint was unable to reduce the test case for this; see PR10388.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135378 91177308-0d34-0410-b5e6-96231b3b80d8
This should unbreak the build-self-4-mingw32 tester. I have a very
complicated test case that I will try to clean up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135329 91177308-0d34-0410-b5e6-96231b3b80d8
This gets rid of some of the gory splitting details in RAGreedy and
makes them available to future SplitKit clients.
Slightly generalize the functionality to support multi-way splitting.
Specifically, SplitEditor::splitLiveThroughBlock() supports switching
between different register intervals in a block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135307 91177308-0d34-0410-b5e6-96231b3b80d8