[ORC] Change the locking scheme for ThreadSafeModule.

ThreadSafeModule/ThreadSafeContext are used to manage lifetimes and locking
for LLVMContexts in ORCv2. Prior to this patch contexts were locked as soon
as an associated Module was emitted (to be compiled and linked), and were not
unlocked until the emit call returned. This could lead to deadlocks when
interdependent modules that shared a context were compiled on different
threads: once the dependence was discovered during emission of the first
module, the second module (which would provide the required symbol) could not
be emitted, because the thread emitting the first module still held the
context lock.

This patch eliminates that possibility by moving to a finer-grained locking
scheme in which each client holds the module lock only while actively
operating on the module. To make this finer-grained locking simpler and safer
to implement, the patch removes the explicit lock method, 'getContextLock',
from ThreadSafeModule and replaces it with a new method, 'withModuleDo', that
implicitly locks the context, calls a user-supplied function object to operate
on the Module, then implicitly unlocks the context before returning the
result.

ThreadSafeModule TSM = getModule(...);
size_t NumFunctions = TSM.withModuleDo(
    [](Module &M) { // <- context locked before entry to lambda.
      return M.size();
    });

Existing ORCv2 layers that operate on ThreadSafeModules are updated to use
'withModuleDo', introducing Module locking into each of them.

llvm-svn: 367686
Lang Hames, 2019-08-02 15:21:37 +00:00
commit a6587cc70d (parent f44af90ee1)
12 changed files with 227 additions and 180 deletions


@@ -153,7 +153,7 @@ Design Overview
 ORC's JIT'd program model aims to emulate the linking and symbol resolution
 rules used by the static and dynamic linkers. This allows ORC to JIT
 arbitrary LLVM IR, including IR produced by an ordinary static compiler (e.g.
-clang) that uses constructs like symbol linkage and visibility, and weak [4]_
+clang) that uses constructs like symbol linkage and visibility, and weak [3]_
 and common symbol definitions.
 
 To see how this works, imagine a program ``foo`` which links against a pair
@@ -441,7 +441,7 @@ ThreadSafeModule and ThreadSafeContext are wrappers around Modules and
 LLVMContexts respectively. A ThreadSafeModule is a pair of a
 std::unique_ptr<Module> and a (possibly shared) ThreadSafeContext value. A
 ThreadSafeContext is a pair of a std::unique_ptr<LLVMContext> and a lock.
-This design serves two purposes: providing both a locking scheme and lifetime
+This design serves two purposes: providing a locking scheme and lifetime
 management for LLVMContexts. The ThreadSafeContext may be locked to prevent
 accidental concurrent access by two Modules that use the same LLVMContext.
 The underlying LLVMContext is freed once all ThreadSafeContext values pointing
@@ -471,33 +471,49 @@ Before using a ThreadSafeContext, clients should ensure that either the context
 is only accessible on the current thread, or that the context is locked. In the
 example above (where the context is never locked) we rely on the fact that both
 ``TSM1`` and ``TSM2``, and TSCtx are all created on one thread. If a context is
-going to be shared between threads then it must be locked before the context,
-or any Modules attached to it, are accessed. When code is added to in-tree IR
-layers this locking is done automatically by the
-``BasicIRLayerMaterializationUnit::materialize`` method. In all other
-situations, for example when writing a custom IR materialization unit, or
-constructing a new ThreadSafeModule from higher-level program representations,
-locking must be done explicitly:
+going to be shared between threads then it must be locked before accessing or
+creating any Modules attached to it. E.g.
 
 .. code-block:: c++
 
-  void HighLevelRepresentationLayer::emit(MaterializationResponsibility R,
-                                          HighLevelProgramRepresentation H) {
-    // Get or create a context value that may be shared between threads.
-    ThreadSafeContext TSCtx = getContext();
-
-    // Lock the context to prevent concurrent access.
-    auto Lock = TSCtx.getLock();
-
-    // IRGen a module onto the locked Context.
-    ThreadSafeModule TSM(IRGen(H, *TSCtx.getContext()), TSCtx);
-
-    // Emit the module to the base layer with the context still locked.
-    BaseIRLayer.emit(std::move(R), std::move(TSM));
-  }
+  ThreadSafeContext TSCtx(llvm::make_unique<LLVMContext>());
+  ThreadPool TP(NumThreads);
+  JITStack J;
+
+  for (auto &ModulePath : ModulePaths) {
+    TP.async(
+      [&]() {
+        auto Lock = TSCtx.getLock();
+        auto M = loadModuleOnContext(ModulePath, TSCtx.getContext());
+        J.addModule(ThreadSafeModule(std::move(M), TSCtx));
+      });
+  }
+
+  TP.wait();
+
+To make exclusive access to Modules easier to manage the ThreadSafeModule class
+provides a convenience function, ``withModuleDo``, that implicitly (1) locks the
+associated context, (2) runs a given function object, (3) unlocks the context,
+and (4) returns the result generated by the function object. E.g.
+
+.. code-block:: c++
+
+  ThreadSafeModule TSM = getModule(...);
+
+  // Dump the module:
+  size_t NumFunctionsInModule =
+    TSM.withModuleDo(
+      [](Module &M) { // <- Context locked before entering lambda.
+        return M.size();
+      } // <- Context unlocked after leaving.
+    );
 
 Clients wishing to maximize possibilities for concurrent compilation will want
-to create every new ThreadSafeModule on a new ThreadSafeContext [3]_. For this
+to create every new ThreadSafeModule on a new ThreadSafeContext. For this
 reason a convenience constructor for ThreadSafeModule is provided that implicitly
 constructs a new ThreadSafeContext value from a std::unique_ptr<LLVMContext>:
@@ -620,13 +636,7 @@ TBD: Speculative compilation. Object Caches.
        across processes, however this functionality appears not to have been
        used.
 
-.. [3] Sharing ThreadSafeModules in a concurrent compilation can be dangerous:
-       if interdependent modules are loaded on the same context, but compiled
-       on different threads a deadlock may occur, with each compile waiting for
-       the other to complete, and the other unable to proceed because the
-       context is locked.
-
-.. [4] Weak definitions are currently handled correctly within dylibs, but if
+.. [3] Weak definitions are currently handled correctly within dylibs, but if
        multiple dylibs provide a weak definition of a symbol then each will end
        up with its own definition (similar to how weak definitions are handled
        in Windows DLLs). This will be fixed in the future.


@@ -87,20 +87,22 @@ public:
 private:
   static Expected<ThreadSafeModule>
   optimizeModule(ThreadSafeModule TSM, const MaterializationResponsibility &R) {
-    // Create a function pass manager.
-    auto FPM = llvm::make_unique<legacy::FunctionPassManager>(TSM.getModule());
+    TSM.withModuleDo([](Module &M) {
+      // Create a function pass manager.
+      auto FPM = llvm::make_unique<legacy::FunctionPassManager>(&M);
 
-    // Add some optimizations.
-    FPM->add(createInstructionCombiningPass());
-    FPM->add(createReassociatePass());
-    FPM->add(createGVNPass());
-    FPM->add(createCFGSimplificationPass());
-    FPM->doInitialization();
+      // Add some optimizations.
+      FPM->add(createInstructionCombiningPass());
+      FPM->add(createReassociatePass());
+      FPM->add(createGVNPass());
+      FPM->add(createCFGSimplificationPass());
+      FPM->doInitialization();
 
-    // Run the optimizations over all functions in the module being added to
-    // the JIT.
-    for (auto &F : *TSM.getModule())
-      FPM->run(F);
+      // Run the optimizations over all functions in the module being added to
+      // the JIT.
+      for (auto &F : M)
+        FPM->run(F);
+    });
 
     return TSM;
   }


@@ -22,6 +22,9 @@ namespace llvm {
 class Module;
 namespace orc {
 
+/// A layer that applies a transform to emitted modules.
+/// The transform function is responsible for locking the ThreadSafeContext
+/// before operating on the module.
 class IRTransformLayer : public IRLayer {
 public:
   using TransformFunction = std::function<Expected<ThreadSafeModule>(


@@ -69,7 +69,7 @@ public:
   /// instance, or null if the instance was default constructed.
   const LLVMContext *getContext() const { return S ? S->Ctx.get() : nullptr; }
 
-  Lock getLock() {
+  Lock getLock() const {
     assert(S && "Can not lock an empty ThreadSafeContext");
     return Lock(S);
   }
@@ -95,7 +95,7 @@ public:
     // We also need to lock the context to make sure the module tear-down
     // does not overlap any other work on the context.
     if (M) {
-      auto L = getContextLock();
+      auto L = TSCtx.getLock();
       M = nullptr;
     }
     M = std::move(Other.M);
@@ -117,23 +117,14 @@ public:
   ~ThreadSafeModule() {
     // We need to lock the context while we destruct the module.
     if (M) {
-      auto L = getContextLock();
+      auto L = TSCtx.getLock();
       M = nullptr;
     }
   }
 
-  /// Get the module wrapped by this ThreadSafeModule.
-  Module *getModule() { return M.get(); }
-
-  /// Get the module wrapped by this ThreadSafeModule.
-  const Module *getModule() const { return M.get(); }
-
-  /// Take out a lock on the ThreadSafeContext for this module.
-  ThreadSafeContext::Lock getContextLock() { return TSCtx.getLock(); }
-
   /// Boolean conversion: This ThreadSafeModule will evaluate to true if it
   /// wraps a non-null module.
-  explicit operator bool() {
+  explicit operator bool() const {
     if (M) {
       assert(TSCtx.getContext() &&
              "Non-null module must have non-null context");
@@ -142,6 +133,33 @@ public:
     return false;
   }
 
+  /// Locks the associated ThreadSafeContext and calls the given function
+  /// on the contained Module.
+  template <typename Func>
+  auto withModuleDo(Func &&F) -> decltype(F(std::declval<Module &>())) {
+    assert(M && "Can not call on null module");
+    auto Lock = TSCtx.getLock();
+    return F(*M);
+  }
+
+  /// Locks the associated ThreadSafeContext and calls the given function
+  /// on the contained Module.
+  template <typename Func>
+  auto withModuleDo(Func &&F) const
+      -> decltype(F(std::declval<const Module &>())) {
+    auto Lock = TSCtx.getLock();
+    return F(*M);
+  }
+
+  /// Get a raw pointer to the contained module without locking the context.
+  Module *getModuleUnlocked() { return M.get(); }
+
+  /// Get a raw pointer to the contained module without locking the context.
+  const Module *getModuleUnlocked() const { return M.get(); }
+
+  /// Returns the context for this ThreadSafeModule.
+  ThreadSafeContext getContext() const { return TSCtx; }
+
 private:
   std::unique_ptr<Module> M;
   ThreadSafeContext TSCtx;


@@ -54,11 +54,12 @@ static ThreadSafeModule extractSubModule(ThreadSafeModule &TSM,
     llvm_unreachable("Unsupported global type");
   };
 
-  auto NewTSMod = cloneToNewContext(TSM, ShouldExtract, DeleteExtractedDefs);
-  auto &M = *NewTSMod.getModule();
-  M.setModuleIdentifier((M.getModuleIdentifier() + Suffix).str());
+  auto NewTSM = cloneToNewContext(TSM, ShouldExtract, DeleteExtractedDefs);
+  NewTSM.withModuleDo([&](Module &M) {
+    M.setModuleIdentifier((M.getModuleIdentifier() + Suffix).str());
+  });
 
-  return NewTSMod;
+  return NewTSM;
 }
 
 namespace llvm {
@@ -119,32 +120,34 @@ void CompileOnDemandLayer::setPartitionFunction(PartitionFunction Partition) {
 
 void CompileOnDemandLayer::emit(MaterializationResponsibility R,
                                 ThreadSafeModule TSM) {
-  assert(TSM.getModule() && "Null module");
+  assert(TSM && "Null module");
 
   auto &ES = getExecutionSession();
-  auto &M = *TSM.getModule();
 
-  // First, do some cleanup on the module:
-  cleanUpModule(M);
-
-  // Now sort the callables and non-callables, build re-exports and lodge the
+  // Sort the callables and non-callables, build re-exports and lodge the
   // actual module with the implementation dylib.
   auto &PDR = getPerDylibResources(R.getTargetJITDylib());
 
-  MangleAndInterner Mangle(ES, M.getDataLayout());
   SymbolAliasMap NonCallables;
   SymbolAliasMap Callables;
-  for (auto &GV : M.global_values()) {
-    if (GV.isDeclaration() || GV.hasLocalLinkage() || GV.hasAppendingLinkage())
-      continue;
 
-    auto Name = Mangle(GV.getName());
-    auto Flags = JITSymbolFlags::fromGlobalValue(GV);
-    if (Flags.isCallable())
-      Callables[Name] = SymbolAliasMapEntry(Name, Flags);
-    else
-      NonCallables[Name] = SymbolAliasMapEntry(Name, Flags);
-  }
+  TSM.withModuleDo([&](Module &M) {
+    // First, do some cleanup on the module:
+    cleanUpModule(M);
+
+    MangleAndInterner Mangle(ES, M.getDataLayout());
+    for (auto &GV : M.global_values()) {
+      if (GV.isDeclaration() || GV.hasLocalLinkage() ||
+          GV.hasAppendingLinkage())
+        continue;
+
+      auto Name = Mangle(GV.getName());
+      auto Flags = JITSymbolFlags::fromGlobalValue(GV);
+      if (Flags.isCallable())
+        Callables[Name] = SymbolAliasMapEntry(Name, Flags);
+      else
+        NonCallables[Name] = SymbolAliasMapEntry(Name, Flags);
+    }
+  });
 
   // Create a partitioning materialization unit and lodge it with the
   // implementation dylib.
@@ -239,14 +242,16 @@ void CompileOnDemandLayer::emitPartition(
   // memory manager instance to the linking layer.
 
   auto &ES = getExecutionSession();
-
   GlobalValueSet RequestedGVs;
   for (auto &Name : R.getRequestedSymbols()) {
     assert(Defs.count(Name) && "No definition for symbol");
     RequestedGVs.insert(Defs[Name]);
   }
 
-  auto GVsToExtract = Partition(RequestedGVs);
+  /// Perform partitioning with the context lock held, since the partition
+  /// function is allowed to access the globals to compute the partition.
+  auto GVsToExtract =
+      TSM.withModuleDo([&](Module &M) { return Partition(RequestedGVs); });
 
   // Take a 'None' partition to mean the whole module (as opposed to an empty
   // partition, which means "materialize nothing"). Emit the whole module
@@ -265,37 +270,46 @@ void CompileOnDemandLayer::emitPartition(
   }
 
   // Ok -- we actually need to partition the symbols. Promote the symbol
-  // linkages/names.
-  // FIXME: We apply this once per partitioning. It's safe, but overkill.
-  {
-    auto PromotedGlobals = PromoteSymbols(*TSM.getModule());
-    if (!PromotedGlobals.empty()) {
-      MangleAndInterner Mangle(ES, TSM.getModule()->getDataLayout());
-      SymbolFlagsMap SymbolFlags;
-      for (auto &GV : PromotedGlobals)
-        SymbolFlags[Mangle(GV->getName())] =
-            JITSymbolFlags::fromGlobalValue(*GV);
-      if (auto Err = R.defineMaterializing(SymbolFlags)) {
-        ES.reportError(std::move(Err));
-        R.failMaterialization();
-        return;
-      }
-    }
+  // linkages/names, expand the partition to include any required symbols
+  // (i.e. symbols that can't be separated from our partition), and
+  // then extract the partition.
+  //
+  // FIXME: We apply this promotion once per partitioning. It's safe, but
+  // overkill.
+
+  auto ExtractedTSM =
+      TSM.withModuleDo([&](Module &M) -> Expected<ThreadSafeModule> {
+        auto PromotedGlobals = PromoteSymbols(M);
+        if (!PromotedGlobals.empty()) {
+          MangleAndInterner Mangle(ES, M.getDataLayout());
+          SymbolFlagsMap SymbolFlags;
+          for (auto &GV : PromotedGlobals)
+            SymbolFlags[Mangle(GV->getName())] =
+                JITSymbolFlags::fromGlobalValue(*GV);
+          if (auto Err = R.defineMaterializing(SymbolFlags))
+            return std::move(Err);
+        }
+
+        expandPartition(*GVsToExtract);
+
+        // Extract the requested partiton (plus any necessary aliases) and
+        // put the rest back into the impl dylib.
+        auto ShouldExtract = [&](const GlobalValue &GV) -> bool {
+          return GVsToExtract->count(&GV);
+        };
+
+        return extractSubModule(TSM, ".submodule", ShouldExtract);
+      });
+
+  if (!ExtractedTSM) {
+    ES.reportError(ExtractedTSM.takeError());
+    R.failMaterialization();
+    return;
   }
 
-  expandPartition(*GVsToExtract);
-
-  // Extract the requested partiton (plus any necessary aliases) and
-  // put the rest back into the impl dylib.
-  auto ShouldExtract = [&](const GlobalValue &GV) -> bool {
-    return GVsToExtract->count(&GV);
-  };
-
-  auto ExtractedTSM = extractSubModule(TSM, ".submodule", ShouldExtract);
   R.replace(llvm::make_unique<PartitioningIRMaterializationUnit>(
       ES, std::move(TSM), R.getVModuleKey(), *this));
-
-  BaseLayer.emit(std::move(R), std::move(ExtractedTSM));
+  BaseLayer.emit(std::move(R), std::move(*ExtractedTSM));
 }
 
 } // end namespace orc


@@ -22,9 +22,9 @@ void IRCompileLayer::setNotifyCompiled(NotifyCompiledFunction NotifyCompiled) {
 
 void IRCompileLayer::emit(MaterializationResponsibility R,
                           ThreadSafeModule TSM) {
-  assert(TSM.getModule() && "Module must not be null");
+  assert(TSM && "Module must not be null");
 
-  if (auto Obj = Compile(*TSM.getModule())) {
+  if (auto Obj = TSM.withModuleDo(Compile)) {
     {
       std::lock_guard<std::mutex> Lock(IRLayerMutex);
       if (NotifyCompiled)


@@ -19,7 +19,7 @@ IRTransformLayer::IRTransformLayer(ExecutionSession &ES,
 
 void IRTransformLayer::emit(MaterializationResponsibility R,
                             ThreadSafeModule TSM) {
-  assert(TSM.getModule() && "Module must not be null");
+  assert(TSM && "Module must not be null");
 
   if (auto TransformedTSM = Transform(std::move(TSM), R))
     BaseLayer.emit(std::move(R), std::move(*TransformedTSM));


@@ -41,7 +41,8 @@ Error LLJIT::defineAbsolute(StringRef Name, JITEvaluatedSymbol Sym) {
 Error LLJIT::addIRModule(JITDylib &JD, ThreadSafeModule TSM) {
   assert(TSM && "Can not add null module");
 
-  if (auto Err = applyDataLayout(*TSM.getModule()))
+  if (auto Err =
+          TSM.withModuleDo([&](Module &M) { return applyDataLayout(M); }))
     return Err;
 
   return CompileLayer->add(JD, std::move(TSM), ES->allocateVModule());
@@ -166,10 +167,14 @@ Error LLLazyJITBuilderState::prepareForConstruction() {
 Error LLLazyJIT::addLazyIRModule(JITDylib &JD, ThreadSafeModule TSM) {
   assert(TSM && "Can not add null module");
 
-  if (auto Err = applyDataLayout(*TSM.getModule()))
-    return Err;
-
-  recordCtorDtors(*TSM.getModule());
+  if (auto Err = TSM.withModuleDo([&](Module &M) -> Error {
+        if (auto Err = applyDataLayout(M))
+          return Err;
+        recordCtorDtors(M);
+        return Error::success();
+      }))
+    return Err;
 
   return CODLayer->add(JD, std::move(TSM), ES->allocateVModule());
 }


@@ -29,15 +29,17 @@ IRMaterializationUnit::IRMaterializationUnit(ExecutionSession &ES,
 
   assert(this->TSM && "Module must not be null");
 
-  MangleAndInterner Mangle(ES, this->TSM.getModule()->getDataLayout());
-  for (auto &G : this->TSM.getModule()->global_values()) {
-    if (G.hasName() && !G.isDeclaration() && !G.hasLocalLinkage() &&
-        !G.hasAvailableExternallyLinkage() && !G.hasAppendingLinkage()) {
-      auto MangledName = Mangle(G.getName());
-      SymbolFlags[MangledName] = JITSymbolFlags::fromGlobalValue(G);
-      SymbolToDefinition[MangledName] = &G;
+  MangleAndInterner Mangle(ES, this->TSM.getModuleUnlocked()->getDataLayout());
+  this->TSM.withModuleDo([&](Module &M) {
+    for (auto &G : M.global_values()) {
+      if (G.hasName() && !G.isDeclaration() && !G.hasLocalLinkage() &&
+          !G.hasAvailableExternallyLinkage() && !G.hasAppendingLinkage()) {
+        auto MangledName = Mangle(G.getName());
+        SymbolFlags[MangledName] = JITSymbolFlags::fromGlobalValue(G);
+        SymbolToDefinition[MangledName] = &G;
+      }
     }
-  }
+  });
 }
 
 IRMaterializationUnit::IRMaterializationUnit(
@@ -47,8 +49,9 @@ IRMaterializationUnit::IRMaterializationUnit(
       TSM(std::move(TSM)), SymbolToDefinition(std::move(SymbolToDefinition)) {}
 
 StringRef IRMaterializationUnit::getName() const {
-  if (TSM.getModule())
-    return TSM.getModule()->getModuleIdentifier();
+  if (TSM)
+    return TSM.withModuleDo(
+        [](const Module &M) { return M.getModuleIdentifier(); });
   return "<null module>";
 }
@@ -90,7 +93,6 @@ void BasicIRLayerMaterializationUnit::materialize(
   auto &N = R.getTargetJITDylib().getName();
 #endif // NDEBUG
 
-  auto Lock = TSM.getContextLock();
   LLVM_DEBUG(ES.runSessionLocked(
       [&]() { dbgs() << "Emitting, for " << N << ", " << *this << "\n"; }););
   L.emit(std::move(R), std::move(TSM));


@@ -23,41 +23,41 @@ ThreadSafeModule cloneToNewContext(ThreadSafeModule &TSM,
   if (!ShouldCloneDef)
     ShouldCloneDef = [](const GlobalValue &) { return true; };
 
-  auto Lock = TSM.getContextLock();
-
-  SmallVector<char, 1> ClonedModuleBuffer;
-
-  {
-    std::set<GlobalValue *> ClonedDefsInSrc;
-    ValueToValueMapTy VMap;
-    auto Tmp = CloneModule(*TSM.getModule(), VMap, [&](const GlobalValue *GV) {
-      if (ShouldCloneDef(*GV)) {
-        ClonedDefsInSrc.insert(const_cast<GlobalValue *>(GV));
-        return true;
-      }
-      return false;
-    });
-
-    if (UpdateClonedDefSource)
-      for (auto *GV : ClonedDefsInSrc)
-        UpdateClonedDefSource(*GV);
-
-    BitcodeWriter BCWriter(ClonedModuleBuffer);
-
-    BCWriter.writeModule(*Tmp);
-    BCWriter.writeSymtab();
-    BCWriter.writeStrtab();
-  }
-
-  MemoryBufferRef ClonedModuleBufferRef(
-      StringRef(ClonedModuleBuffer.data(), ClonedModuleBuffer.size()),
-      "cloned module buffer");
-  ThreadSafeContext NewTSCtx(llvm::make_unique<LLVMContext>());
-
-  auto ClonedModule =
-      cantFail(parseBitcodeFile(ClonedModuleBufferRef, *NewTSCtx.getContext()));
-  ClonedModule->setModuleIdentifier(TSM.getModule()->getName());
-  return ThreadSafeModule(std::move(ClonedModule), std::move(NewTSCtx));
+  return TSM.withModuleDo([&](Module &M) {
+    SmallVector<char, 1> ClonedModuleBuffer;
+
+    {
+      std::set<GlobalValue *> ClonedDefsInSrc;
+      ValueToValueMapTy VMap;
+      auto Tmp = CloneModule(M, VMap, [&](const GlobalValue *GV) {
+        if (ShouldCloneDef(*GV)) {
+          ClonedDefsInSrc.insert(const_cast<GlobalValue *>(GV));
+          return true;
+        }
+        return false;
+      });
+
+      if (UpdateClonedDefSource)
+        for (auto *GV : ClonedDefsInSrc)
+          UpdateClonedDefSource(*GV);
+
+      BitcodeWriter BCWriter(ClonedModuleBuffer);
+
+      BCWriter.writeModule(*Tmp);
+      BCWriter.writeSymtab();
+      BCWriter.writeStrtab();
+    }
+
+    MemoryBufferRef ClonedModuleBufferRef(
+        StringRef(ClonedModuleBuffer.data(), ClonedModuleBuffer.size()),
+        "cloned module buffer");
+    ThreadSafeContext NewTSCtx(llvm::make_unique<LLVMContext>());
+
+    auto ClonedModule = cantFail(
+        parseBitcodeFile(ClonedModuleBufferRef, *NewTSCtx.getContext()));
+    ClonedModule->setModuleIdentifier(M.getName());
+    return ThreadSafeModule(std::move(ClonedModule), std::move(NewTSCtx));
+  });
 }
 
 } // end namespace orc


@@ -695,18 +695,16 @@ int main(int argc, char **argv, char * const *envp) {
   return Result;
 }
 
-static orc::IRTransformLayer::TransformFunction createDebugDumper() {
+static std::function<void(Module &)> createDebugDumper() {
   switch (OrcDumpKind) {
   case DumpKind::NoDump:
-    return [](orc::ThreadSafeModule TSM,
-              const orc::MaterializationResponsibility &R) { return TSM; };
+    return [](Module &M) {};
 
   case DumpKind::DumpFuncsToStdOut:
-    return [](orc::ThreadSafeModule TSM,
-              const orc::MaterializationResponsibility &R) {
+    return [](Module &M) {
       printf("[ ");
 
-      for (const auto &F : *TSM.getModule()) {
+      for (const auto &F : M) {
         if (F.isDeclaration())
           continue;
@@ -718,31 +716,23 @@ static orc::IRTransformLayer::TransformFunction createDebugDumper() {
       }
 
       printf("]\n");
-      return TSM;
     };
 
   case DumpKind::DumpModsToStdOut:
-    return [](orc::ThreadSafeModule TSM,
-              const orc::MaterializationResponsibility &R) {
-      outs() << "----- Module Start -----\n"
-             << *TSM.getModule() << "----- Module End -----\n";
-
-      return TSM;
+    return [](Module &M) {
+      outs() << "----- Module Start -----\n" << M << "----- Module End -----\n";
     };
 
   case DumpKind::DumpModsToDisk:
-    return [](orc::ThreadSafeModule TSM,
-              const orc::MaterializationResponsibility &R) {
+    return [](Module &M) {
       std::error_code EC;
-      raw_fd_ostream Out(TSM.getModule()->getModuleIdentifier() + ".ll", EC,
-                         sys::fs::F_Text);
+      raw_fd_ostream Out(M.getModuleIdentifier() + ".ll", EC, sys::fs::F_Text);
       if (EC) {
-        errs() << "Couldn't open " << TSM.getModule()->getModuleIdentifier()
+        errs() << "Couldn't open " << M.getModuleIdentifier()
                << " for dumping.\nError:" << EC.message() << "\n";
         exit(1);
       }
-      Out << *TSM.getModule();
-      return TSM;
+      Out << M;
     };
   }
   llvm_unreachable("Unknown DumpKind");
@@ -756,12 +746,11 @@ int runOrcLazyJIT(const char *ProgName) {
   // Parse the main module.
   orc::ThreadSafeContext TSCtx(llvm::make_unique<LLVMContext>());
   SMDiagnostic Err;
-  auto MainModule = orc::ThreadSafeModule(
-      parseIRFile(InputFile, Err, *TSCtx.getContext()), TSCtx);
+  auto MainModule = parseIRFile(InputFile, Err, *TSCtx.getContext());
   if (!MainModule)
     reportError(Err, ProgName);
 
-  const auto &TT = MainModule.getModule()->getTargetTriple();
+  const auto &TT = MainModule->getTargetTriple();
   orc::LLLazyJITBuilder Builder;
 
   Builder.setJITTargetMachineBuilder(
@@ -794,11 +783,14 @@ int runOrcLazyJIT(const char *ProgName) {
   J->setLazyCompileTransform([&](orc::ThreadSafeModule TSM,
                                  const orc::MaterializationResponsibility &R) {
-    if (verifyModule(*TSM.getModule(), &dbgs())) {
-      dbgs() << "Bad module: " << *TSM.getModule() << "\n";
-      exit(1);
-    }
-    return Dump(std::move(TSM), R);
+    TSM.withModuleDo([&](Module &M) {
+      if (verifyModule(M, &dbgs())) {
+        dbgs() << "Bad module: " << &M << "\n";
+        exit(1);
+      }
+      Dump(M);
+    });
+    return TSM;
   });
 
   J->getMainJITDylib().setGenerator(
       ExitOnErr(orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(
@@ -809,7 +801,8 @@ int runOrcLazyJIT(const char *ProgName) {
   ExitOnErr(CXXRuntimeOverrides.enable(J->getMainJITDylib(), Mangle));
 
   // Add the main module.
-  ExitOnErr(J->addLazyIRModule(std::move(MainModule)));
+  ExitOnErr(
+      J->addLazyIRModule(orc::ThreadSafeModule(std::move(MainModule), TSCtx)));
 
   // Create JITDylibs and add any extra modules.
   {


@@ -72,7 +72,7 @@ TEST(ThreadSafeModuleTest, BasicContextLockAPI) {
   { auto L = TSCtx.getLock(); }
-  { auto L = TSM.getContextLock(); }
+  { auto L = TSM.getContext().getLock(); }
 }
 
 TEST(ThreadSafeModuleTest, ContextLockPreservesContext) {