This patch adds utilities to ORC for managing a remote JIT target. It consists
of:
1. A very primitive RPC system for making calls over a byte-stream. See
RPCChannel.h, RPCUtils.h.
2. An RPC API defined in the above system for managing memory, looking up
symbols, creating stubs, etc. on a remote target. See OrcRemoteTargetRPCAPI.h.
3. An interface for creating high-level JIT components (memory managers,
callback managers, stub managers, etc.) that operate over the RPC API. See
OrcRemoteTargetClient.h.
4. A helper class for building servers that can handle the RPC calls. See
OrcRemoteTargetServer.h.
The system is designed to work neatly with the existing ORC components and
functionality. In particular, the ORC callback API (and consequently the
CompileOnDemandLayer) is supported, enabling lazy compilation of remote code.
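As a rough illustration of the byte-stream RPC layer in item 1 above, a channel might expose an interface along the lines of the sketch below. The names are illustrative only and are not the exact declarations in RPCChannel.h.

  // Illustrative byte-stream channel. RPC calls are built on top of such a
  // channel by serializing a function id plus its arguments and then
  // deserializing the result.
  #include <system_error>

  class ByteStreamChannel {
  public:
    virtual ~ByteStreamChannel() = default;

    // Read Size bytes from the stream into Dst, blocking until they arrive.
    virtual std::error_code readBytes(char *Dst, unsigned Size) = 0;

    // Append Size bytes from Src to the outgoing stream.
    virtual std::error_code appendBytes(const char *Src, unsigned Size) = 0;

    // Flush any buffered outgoing bytes to the remote end.
    virtual std::error_code send() = 0;
  };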
Assuming this doesn't trigger any builder failures, a follow-up patch will be
committed which tests these utilities by using them to replace LLI's existing
remote-JITing demo code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257305 91177308-0d34-0410-b5e6-96231b3b80d8
This is a more generic version of the MCJITMemoryManager::notifyObjectLoaded
method: It provides only a RuntimeDyld reference (rather than an
ExecutionEngine), and so can be used with ORC JIT stacks.
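A minimal sketch of a client overriding the new hook is shown below; the signature mirrors the description above, but check RuntimeDyld.h for the authoritative declaration.

  #include "llvm/ExecutionEngine/RuntimeDyld.h"
  #include "llvm/ExecutionEngine/SectionMemoryManager.h"
  #include "llvm/Object/ObjectFile.h"

  class NotifyingMemoryManager : public llvm::SectionMemoryManager {
  public:
    void notifyObjectLoaded(llvm::RuntimeDyld &RTDyld,
                            const llvm::object::ObjectFile &Obj) override {
      // Only a RuntimeDyld reference is needed, so the same manager works in
      // ORC stacks that have no ExecutionEngine, e.g. to register debug info
      // for the just-loaded object or to query symbol addresses via RTDyld.
    }
  };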
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257296 91177308-0d34-0410-b5e6-96231b3b80d8
RuntimeDyld::MemoryManager.
The RuntimeDyld::MemoryManager::reserveAllocationSpace method is called when
object files are loaded, and gives clients a chance to pre-allocate memory for
all segments. Previously only the size of each segment (code, ro-data, rw-data)
was supplied but not the alignment. This hasn't caused any problems so far, as
most clients allocate via the MemoryBlock interface which returns page-aligned
blocks. Adding alignment arguments enables finer-grained allocation while still
satisfying alignment restrictions.
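For illustration, a client that wants to pre-allocate a single slab could now override the hook roughly as follows. The parameter list mirrors the updated hook as described above (segment sizes plus uint32_t alignments); check RuntimeDyld.h for the authoritative declaration.

  #include "llvm/ExecutionEngine/SectionMemoryManager.h"
  #include <cstdint>

  class PreallocatingMemoryManager : public llvm::SectionMemoryManager {
  public:
    bool needsToReserveAllocationSpace() override { return true; }

    void reserveAllocationSpace(uintptr_t CodeSize, uint32_t CodeAlign,
                                uintptr_t RODataSize, uint32_t RODataAlign,
                                uintptr_t RWDataSize,
                                uint32_t RWDataAlign) override {
      // With per-segment alignments known up front, one slab can be carved up
      // precisely instead of rounding every segment up to a page boundary.
      uintptr_t Total = roundUp(CodeSize, CodeAlign) +
                        roundUp(RODataSize, RODataAlign) +
                        roundUp(RWDataSize, RWDataAlign);
      (void)Total; // reserve Total bytes from a custom allocator here
    }

  private:
    static uintptr_t roundUp(uintptr_t Size, uint32_t Align) {
      return Align ? (Size + Align - 1) & ~uintptr_t(Align - 1) : Size;
    }
  };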
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257294 91177308-0d34-0410-b5e6-96231b3b80d8
In r255760, I optimized the SectionMemoryManager to make better use
of virtual memory on platforms where the allocation granularity was
bigger than the protection granularity. As part of this, fixing up
the free list became more complicated and was moved into
`applyMemoryGroupPermissions`. Unfortunately, I forgot to actually
remove the call that drops the free list for RO memory (I did
remove the corresponding one for RX memory), defeating the whole
optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257293 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Use proper dataflow ordering to speed convergence.
With this, the test case from bug 26055 converges in 2 iterations.
(Data-structure speedups will follow to make even that faster.)
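As general background (a generic illustration, not the patch itself): for a forward dataflow problem, visiting blocks in reverse post-order propagates facts from predecessors before their successors are processed, so far fewer passes over the function are needed to reach a fixed point.

  #include "llvm/ADT/PostOrderIterator.h"
  #include "llvm/IR/CFG.h"
  #include "llvm/IR/Function.h"

  static void runForwardDataflow(llvm::Function &F) {
    llvm::ReversePostOrderTraversal<llvm::Function *> RPOT(&F);
    bool Changed = true;
    while (Changed) { // iterate to a fixed point
      Changed = false;
      for (llvm::BasicBlock *BB : RPOT) {
        // Merge the facts flowing in from BB's predecessors, apply BB's
        // transfer function, and set Changed if BB's out-state was updated.
        (void)BB;
      }
    }
  }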
Reviewers: kcc, samsonov, echristo, dblaikie, tvvikram
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16039
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257292 91177308-0d34-0410-b5e6-96231b3b80d8
llvm\unittests\ExecutionEngine\Orc\ObjectLinkingLayerTest.cpp(115) : error C2327: 'llvm::OrcExecutionTest::TM' : is not a type name, static, or enumerator
llvm\unittests\ExecutionEngine\Orc\ObjectLinkingLayerTest.cpp(115) : error C2065: 'TM' : undeclared identifier
FYI, "this->TM" was valid even before moving class SectionMemoryManagerWrapper.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257290 91177308-0d34-0410-b5e6-96231b3b80d8
MSVC seems to have problems looking up Value inside the template. It's not
really clear whether that's an MSVC bug or Clang and GCC being too
permissive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257288 91177308-0d34-0410-b5e6-96231b3b80d8
type.
This makes it easy and safe to use a set of flags as one element of
a tagged union with pointers. There is quite a bit of code that has
historically done this by casting arbitrary integers to "pointers" and
assuming that this was safe and reliable. It is neither, and has started
to rear its head by triggering safety asserts in various abstractions
like PointerLikeTypeTraits when the integers chosen are invariably poor
choices for *some* platform and *some* situation. Not to mention the
(hopefully unlikely) prospect of one of these integers actually getting
allocated!
With this, it will be straightforward to build type-safe abstractions
like this without being error prone. The abstraction itself is also
remarkably simple thanks to the implicit conversion.
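As a hedged sketch of the pattern this enables (the wrapper and traits below are purely illustrative and are not the new class added by this patch): give a small flag-carrying value a PointerLikeTypeTraits specialization so it can sit in a tagged union next to real pointers, instead of casting an arbitrary integer to a pointer.

  #include "llvm/Support/PointerLikeTypeTraits.h"
  #include <cstdint>

  // A small flag set wrapped in a value type.
  struct FlagsWord {
    uintptr_t Bits;
  };

  namespace llvm {
  template <> struct PointerLikeTypeTraits<FlagsWord> {
    // Shift the value up so the low bits stay free for a tagged union's tag.
    static void *getAsVoidPointer(FlagsWord F) {
      return reinterpret_cast<void *>(F.Bits << 2);
    }
    static FlagsWord getFromVoidPointer(void *P) {
      return FlagsWord{reinterpret_cast<uintptr_t>(P) >> 2};
    }
    enum { NumLowBitsAvailable = 2 };
  };
  } // namespace llvm

  // With the traits in place, a FlagsWord value can participate in
  // PointerIntPair, PointerUnion, or the PointerSumType added alongside this
  // patch, just like a genuine pointer type.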
This use case and pattern was also independently created by the folks
working on Swift, and they're going to incrementally add any missing
functionality they find.
Differential Revision: http://reviews.llvm.org/D15844
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257284 91177308-0d34-0410-b5e6-96231b3b80d8
This is a much more general and powerful form of PointerUnion. It
provides a reasonably complete sum type (from type theory) for
pointer-like types. It has several significant advantages over the
existing PointerUnion infrastructure:
1) It allows more than two pointer types to participate without awkward
nesting structures.
2) It directly exposes the tag so that it is convenient to write
switches over the possible members.
3) It can re-use the same type for multiple tag values, something that
has been worked around by either abusing PointerIntPair or defining
nonce types and doing unsafe pointer casting.
4) It supports customization of the PointerLikeTypeTraits used for
specific member types. This means it could (in theory) be used even
with types that are over-aligned on allocation to expose larger
numbers of bits to the tag.
All in all, I think it is at least complementary to the existing
infrastructure, and a strict improvement for some use cases.
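A hedged usage sketch based on the points above; the exact API spelling lives in llvm/ADT/PointerSumType.h, and the types and tags here are invented for illustration.

  #include "llvm/ADT/PointerSumType.h"

  struct Decl { void *P; };
  struct Stmt { void *P; };

  enum NodeKind { DeclarationNode, StatementNode, CanonicalDeclNode };

  using NodeHandle = llvm::PointerSumType<
      NodeKind,
      llvm::PointerSumTypeMember<DeclarationNode, Decl *>,
      llvm::PointerSumTypeMember<StatementNode, Stmt *>,
      // The same pointee type can appear under a second tag (point 3 above).
      llvm::PointerSumTypeMember<CanonicalDeclNode, Decl *>>;

  // Values are built with, e.g., NodeHandle::create<DeclarationNode>(D) and
  // queried with is<Tag>() / get<Tag>().
  void visit(NodeHandle N) {
    // The tag is exposed directly, so switching over members is convenient
    // (point 2 above).
    switch (N.getTag()) {
    case DeclarationNode:   /* use N.get<DeclarationNode>() */   break;
    case StatementNode:     /* use N.get<StatementNode>() */     break;
    case CanonicalDeclNode: /* use N.get<CanonicalDeclNode>() */ break;
    }
  }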
Differential Revision: http://reviews.llvm.org/D15843
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257282 91177308-0d34-0410-b5e6-96231b3b80d8
JumpThreading's runOnFunction is supposed to return true if it made any
changes. JumpThreading calls removeUnreachableBlocks, which may result in
changes to the IR, but runOnFunction didn't appropriately account for this
possibility, so the pass could report that it made no changes even though the
IR had been modified.
While we are here, make sure to call LazyValueInfo::eraseBlock in
removeUnreachableBlocks; JumpThreading preserves LVI.
This fixes PR26096.
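A minimal sketch of the reporting fix described above (not the verbatim patch): the result of removeUnreachableBlocks has to be folded into the pass's return value rather than discarded.

  #include "llvm/IR/Function.h"
  #include "llvm/Transforms/Utils/Local.h"

  static bool runOnFunctionSketch(llvm::Function &F) {
    bool Changed = false;
    // ... jump threading proper, updating Changed ...

    // removeUnreachableBlocks may delete IR, so its result counts as a change
    // too; dropping it on the floor is what caused the problem above.
    Changed |= llvm::removeUnreachableBlocks(F);
    return Changed;
  }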
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257279 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The code was simply ensuring that the catchpad's predecessor is its
catchswitch, which let cases slip through where the flow edge was the unwind
edge of the catchswitch rather than one of its catch clauses.
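An illustrative form of the tightened check (not the verbatim verifier code): being a predecessor is not enough, because the catchswitch's unwind destination is also one of its successors; the catchpad's block has to appear in the handler list itself.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/IR/Instructions.h"

  static bool catchPadIsHandlerOfItsCatchSwitch(const llvm::CatchPadInst &CPI) {
    const llvm::CatchSwitchInst *CatchSwitch = CPI.getCatchSwitch();
    const llvm::BasicBlock *PadBB = CPI.getParent();
    // Walk the catch clauses only; the unwind edge is deliberately excluded.
    return llvm::any_of(CatchSwitch->handlers(),
                        [PadBB](const llvm::BasicBlock *Handler) {
                          return Handler == PadBB;
                        });
  }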
Reviewers: andrew.w.kaylor, rnk, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16011
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257275 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Funclet-based EH personalities/tables likely can't handle these, and they
can't be generated at source, so make them officially illegal in IR as
well.
Reviewers: andrew.w.kaylor, rnk, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15963
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257274 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
A funclet EH pad may be exited by an unwind edge, which may be a
cleanupret exiting its cleanuppad, an invoke exiting a funclet, or an
unwind out of a nested funclet transitively exiting its parent. Funclet
EH personalities require all such exceptional exits from a given funclet to
have the same unwind destination, and EH preparation / state numbering /
table generation implicitly depends on this. Formalize it as a rule of
the IR in the LangRef and verifier.
Reviewers: rnk, majnemer, andrew.w.kaylor
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15962
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257273 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Funclet EH personalities require a tree-like nesting among funclets
(enforced by the ParentPad linkage in the IR), and also require that
unwind edges conform to certain rules with respect to the tree:
- An unwind edge may exit 0 or more ancestor pads
- An unwind edge must enter exactly one EH pad, which must be distinct
from any exited pads
- A cleanupret's edge must exit its cleanuppad
Describe these rules in the LangRef, and enforce them in the verifier.
Reviewers: rnk, majnemer, andrew.w.kaylor
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15961
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257272 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts r257221.
This caused several build bot failures:
* It looks like some of the tests don't work correctly under Windows.
* It looks like the lit per-test timeout tests fail.
So I'm reverting for now. Once the above failures are fixed, running lit's
tests can be enabled again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257268 91177308-0d34-0410-b5e6-96231b3b80d8
AVX1 v8i32/v4i64 shuffles are bitcast to v8f32/v4f64; this patch peeks through any bitcasts to check for a load node, allowing broadcasts to occur.
This is a re-commit of r257055 after r257264 fixed 32-bit broadcast loads of i64 scalars.
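Conceptually the combine looks through the bitcasts before testing for a load; a simplified sketch of the idea (not the exact X86 lowering code):

  #include "llvm/CodeGen/SelectionDAGNodes.h"

  static llvm::SDValue peekThroughBitcastsForLoad(llvm::SDValue V) {
    // On AVX1 the integer shuffle is legalized as a float one, so the load
    // may be hidden behind one or more ISD::BITCAST nodes.
    while (V.getOpcode() == llvm::ISD::BITCAST)
      V = V.getOperand(0);
    if (llvm::ISD::isNormalLoad(V.getNode()))
      return V; // candidate for a broadcast load
    return llvm::SDValue();
  }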
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257266 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the CompileOnDemand layer was hard-coded to use a new
SectionMemoryManager for each function when it was called.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257265 91177308-0d34-0410-b5e6-96231b3b80d8
managers.
Prior to this patch, recursive finalization (where finalization of one
RuntimeDyld instance triggers finalization of another instance on which the
first depends) could trigger memory access failures: when the inner (dependent)
RuntimeDyld instance and its memory manager are finalized, memory allocated
(but not yet relocated) by the outer instance is locked, and relocation in the
outer instance fails with a memory access error.
This patch adds a latch to the RuntimeDyld::MemoryManager base class that is
checked by a new method, RuntimeDyld::finalizeWithMemoryManagerLocking, to
ensure that shared memory managers are only finalized by the outermost
RuntimeDyld instance.
This allows ORC clients to supply the same memory manager to multiple calls to
addModuleSet. In particular it enables the use of user-supplied memory managers
with the CompileOnDemandLayer which must reuse the supplied memory manager for
each function that is lazily compiled.
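A rough sketch of the latch idea follows; the member and method names approximate the description above rather than the exact in-tree code. The shared memory manager remembers that a finalization is already in flight, so only the outermost RuntimeDyld instance asks it to finalize memory.

  struct MemoryManagerWithLatch {
    bool FinalizationLocked = false;
    void finalizeMemory() { /* apply permissions, flush caches, ... */ }
  };

  struct DyldLike {
    void resolveRelocations() {}
    void registerEHFrames() {}
  };

  void finalizeWithLocking(DyldLike &Dyld, MemoryManagerWithLatch &MemMgr) {
    bool WasLocked = MemMgr.FinalizationLocked;
    MemMgr.FinalizationLocked = true; // inner, dependent instances see the latch
    Dyld.resolveRelocations();        // may recursively finalize dependencies
    Dyld.registerEHFrames();
    if (!WasLocked) {                 // only the outermost caller finalizes
      MemMgr.finalizeMemory();
      MemMgr.FinalizationLocked = false;
    }
  }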
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257263 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is analogous to r256079, which removed an overly strong assertion, and
r256812, which simplified the code by replacing three conditionals by one.
Reviewers: reames
Subscribers: sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D16019
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257250 91177308-0d34-0410-b5e6-96231b3b80d8
This patch teaches rewrite-statepoints-for-gc to relocate vectors of pointers directly rather than trying to split them. This builds on the recent lowering/IR changes to allow vector-typed gc.relocates.
The motivation for this is that we recently found a bug in the vector splitting code where, depending on visit order, a vector might not be relocated at some safepoint. Specifically, the bug is that the splitting code wasn't updating the side tables (live vector) of other safepoints. As a result, a vector which was live at two safepoints might not be updated at one of them. However, if you happened to visit safepoints in post order over the dominator tree, everything worked correctly. Weirdly, it turns out that post order is actually an incredibly common order in which to visit instructions in practice. Frustratingly, I have not managed to write a test case which actually hits this; I can only reproduce it in large IR files produced by actual applications.
Rather than continue to make this code more complicated, we can remove all of the complexity by just representing the relocation of the entire vector natively in the IR.
At the moment, the new functionality is hidden behind a flag. To use this code, you need to pass "-rs4gc-split-vector-values=0". Once I have a chance to stress test with this option and get feedback from other users, my plan is to flip the default and remove the original splitting code. I would just remove it now, but given the rareness of the bug, I figured it was better to leave it in place until the new approach has been stress tested.
Differential Revision: http://reviews.llvm.org/D15982
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@257244 91177308-0d34-0410-b5e6-96231b3b80d8