All this was r=mccabe, r=beard, and sr=jband -- many thanks to all who
helped, especially to jband for his great stress-test setup and for his
MP and laptop machines, which were particularly helpful in reproducing
bugs in draft patches.

- Radical(*) object (scope) locking optimization: don't lock if a scope is
  accessed on the context that exclusively owns it (initially, the context
  on which the scope was created).  Once a scope becomes shared among more
  than one owner-context, give it the usual thin or fat lock, per existing
  jslock.c code.
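
  In the common case this is just a one-word compare before any atomic
  operation; condensed from the JS_LOCK_OBJ/JS_UNLOCK_OBJ macros in the
  jslock.h hunk below, minus the DEBUG-only lock-site bookkeeping:

      #define JS_LOCK_OBJ(cx,obj)   ((OBJ_SCOPE(obj)->ownercx == (cx))    \
                                     ? (void)0                            \
                                     : js_LockObj(cx, obj))
      #define JS_UNLOCK_OBJ(cx,obj) ((OBJ_SCOPE(obj)->ownercx == (cx))    \
                                     ? (void)0                            \
                                     : js_UnlockObj(cx, obj))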

  I did this at the memory cost of another word per JSScope, ownercx, which
  raised scope size from 12 to 13 words if !DEBUG.  I also added a linked
  list head pointer, rt->scopeSharingTodo, and a scopeSharingDone condition
  variable to JSRuntime, and a scopeToShare pointer to JSContext that's
  necessary for deadlock avoidance.
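
  Condensed from the jscntxt.h and jsscope.h hunks below, the new state
  looks like this:

      struct JSScope {
          /* ... */
          JSContext  *ownercx;    /* creating context, NULL if shared */
          JSThinLock lock;        /* binary semaphore protecting scope */
          union {                 /* union lockful and lock-free state: */
              jsrefcount count;   /* lock entry count for reentrancy */
              JSScope    *link;   /* next link in rt->scopeSharingTodo */
          } u;
      };

      struct JSRuntime {
          /* ... */
          PRCondVar  *scopeSharingDone;
          JSScope    *scopeSharingTodo;
      };

      struct JSContext {
          /* ... */
          JSScope    *scopeToShare;   /* weak reference, see jslock.c */
      };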

  The rt->scopeSharingTodo list links JSScopes through the scope->u.link
  union arm, which overlays the pre-existing scope->count (now u.count)
  member.  This list holds scopes still exclusively owned by a context, but
  wanted by js_LockScope calls active on other threads.  Those calls wait
  on the rt->scopeSharingDone condition, which is notified every time an
  owner-context ends the request running on it; until that request ends,
  code active on the owner-context may keep using the scope freely, without
  taking its lock.
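
  Condensed from the JS_EndRequest hunk below (error handling and metering
  omitted), the end-of-request handoff is:

      todop = &rt->scopeSharingTodo;
      nshares = 0;
      while ((scope = *todop) != NO_SCOPE_SHARING_TODO) {
          if (scope->ownercx != cx) {
              todop = &scope->u.link;
              continue;
          }
          *todop = scope->u.link;
          scope->u.link = NULL;           /* null u.link for sanity ASAP */
          if (js_DropObjectMap(cx, &scope->map, NULL)) {
              js_InitLock(&scope->lock);
              scope->u.count = 0;         /* don't assume NULL puns as 0 */
              scope->ownercx = NULL;      /* NB: set last, after lock init */
              nshares++;
          }
      }
      if (nshares)
          JS_NOTIFY_ALL_CONDVAR(rt->scopeSharingDone);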

  The code that waits on rt->scopeSharingDone must first suspend any and
  all requests active on the calling context, and resume those requests
  after the wait is notified.  This means a GC could happen while the
  thread trying to lock a scope owned by another thread's context blocks;
  every caller of JS_LOCK_OBJ must therefore first home fp->sp above any
  live operands.  The interpreter already takes care to do that.

  To avoid AB-BA deadlocks, if a js_LockScope attempt on one context finds
  that the owner-context of the scope is already waiting on a scope owned
  by the current context (or indirectly depending on such a scope lock),
  the attempt converts the scope from lock-free exclusive ownership to
  shared ownership (thin or fat lock).
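
  In sketch form (the real check lives in ClaimScope in jslock.c, whose
  diff is suppressed below, so treat this as illustrative):

      static JSBool
      WillDeadlock(JSScope *scope, JSContext *cx)
      {
          JSContext *ownercx;

          /* Follow the chain of owner-contexts and the scopes they wait
             on (published via cx->scopeToShare); if it leads back to cx,
             waiting would close a cycle.  The chain is acyclic except
             possibly at cx, because any wait that would close a cycle is
             refused and converted to sharing instead. */
          do {
              ownercx = scope->ownercx;
              if (ownercx == cx)
                  return JS_TRUE;
          } while (ownercx && (scope = ownercx->scopeToShare) != NULL);
          return JS_FALSE;
      }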

- Fix js_SetupLocks and the js_LockGlobal/js_UnlockGlobal code to avoid
  divmod instruction costs, strength-reducing to bit-mask instructions.
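
  Illustrative only: with the lock-table size forced to a power of two
  (note the js_SetupLocks(20, 32) change below), the hash-to-index step
  needs no divide:

      /* nlocks must be a power of two; then h % nlocks == h & (nlocks-1). */
      static unsigned
      global_lock_index(unsigned hash, unsigned nlocks)
      {
          return hash & (nlocks - 1);   /* was: hash % nlocks */
      }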

- The radical lock-free scope change required care in handling the 0=>1
  and 1=>0 transitions of cx->requestDepth, which until now was effectively
  thread-local, being a part of the JSContext that other threads never
  touched.  It is still updated only by cx's thread, but it is now read by
  other threads as they attempt to claim exclusive ownership of a scope for
  further lock-free JS object operations.

- The JS_SuspendRequest and JS_ResumeRequest APIs have changed incompatibly
  to require their caller to save and restore the requestCount found when
  JS_SuspendRequest is called.  This is necessary to avoid deadlock; sorry
  for the incompatible change.
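
  Callers now bracket potentially blocking native work like this (cf. the
  xpconnect change at the end of this patch):

      jsrefcount saveDepth;

      saveDepth = JS_SuspendRequest(cx);    /* ends all of cx's requests */
      /* ... blocking, non-JS work ... */
      JS_ResumeRequest(cx, saveDepth);      /* re-enters saveDepth requests */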

- Fixed various nits in jslock.[ch], including using Init/Finish rather
  than New/Destroy for the methods that take a JSThinLock and initialize
  and finish/free its members.  Another example: JS_ATOMIC_ADDREF is now
  JS_ATOMIC_INCREMENT and JS_ATOMIC_DECREMENT, so the two cases can be
  mapped to PR_AtomicIncrement and PR_AtomicDecrement.  This entailed
  changing jsrefcount from jsword to int32 (PRInt32).
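
  From the jslock.h hunks below:

      #ifdef JS_THREADSAFE
      #define JS_ATOMIC_INCREMENT(p)   PR_AtomicIncrement((PRInt32 *)(p))
      #define JS_ATOMIC_DECREMENT(p)   PR_AtomicDecrement((PRInt32 *)(p))
      #else
      #define JS_ATOMIC_INCREMENT(p)   (++*(p))
      #define JS_ATOMIC_DECREMENT(p)   (--*(p))
      #endif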

- No need to use JS_ATOMIC_INCREMENT on JSScopeProperty.nrefs, as it is
  always and everywhere protected by the property's JSScope.lock.

- Cleaned up gratuitous casts in jscntxt.c by using &cx->links, etc.

- The lock used for mutual exclusion around both request begin and end vs.
  GC synchronization is rt->gcLock, and this lock now also protects all
  scope->ownercx pointer changes from non-null (exclusive) to null (shared),
  the rt->scopeSharingTodo/scope->u.link list operations, and of course the
  rt->scopeSharingDone condition.

  But this means that js_GC cannot hold rt->gcLock across the bulk of its
  body, in particular the mark phase: during that phase, calls such as
  JS_GetPrivate may need to "promote" scope locks from lock-free to thin or
  fat, and taking rt->gcLock again to do so would double-trip.  There never
  was any good reason to hold rt->gcLock so long, of course -- locks are for
  mutual exclusion, not for waiting on or notifying a thread -- those
  operations require a condition, rt->gcDone, which we already use along
  with rt->gcLevel to keep racing GC attempts at bay.

  So now that rt->gcLock does not protect the mark phase, the enumeration
  of rt->gcRootsHash can race badly with JS_RemoveRootRT, an API that may
  legitimately be called outside of a request, without even a context.  It
  turns out that people may be cheating on the request model even with
  JS_AddRoot, JS_AddNamedRoot, and JS_RemoveRoot calls, so we must make
  all of those interlock with the GC using gcLevel and gcDone, unless they
  are called on the gcThread.
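
  Both js_AddRoot and js_RemoveRoot now interlock the same way (from the
  jsgc.c hunks below):

      JS_LOCK_GC(rt);
      #ifdef JS_THREADSAFE
      /* If the GC is running on another thread, wait for this GC
         activation to finish before touching rt->gcRootsHash. */
      if (rt->gcRunning && rt->gcThread != js_CurrentThreadId()) {
          do {
              JS_AWAIT_GC_DONE(rt);
          } while (rt->gcLevel > 0);
      }
      #endif
      /* ... JS_HashTableAdd or JS_HashTableRemove on rt->gcRootsHash ... */
      JS_UNLOCK_GC(rt);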

  Also, since bug 49816 was fixed, there has been no need for a separate
  finalize phase, or for rt->gcFinalVec.  Finalizers can no longer allocate
  newborn GC-things that might be swept (because unmarked), or double-trip
  on rt->gcLock (which is no longer held).  So js_GC finalizes as it sweeps,
  just as it did in days of old.
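
  The per-thing test in the sweep loop (condensed from the jsgc.c hunk
  below):

      if (flags & GCF_MARK) {
          *flagp &= ~GCF_MARK;          /* live: clear mark for next GC */
      } else if (!(flags & (GCF_LOCKMASK | GCF_FINAL))) {
          finalizer = gc_finalizers[flags & GCF_TYPEMASK];
          if (finalizer) {
              *flagp = (uint8)(flags | GCF_FINAL);
              finalizer(cx, thing);     /* rt->gcRunning forbids allocation */
          }
          *flagp = GCF_FINAL;           /* signify that thing is free */
      }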

- I added comments to jslock.h making it plain that callers of JS_LOCK_OBJ
  and JS_UNLOCK_OBJ must either be implementations of js_ObjectOps hooks,
  or code reachable only from those hooks; or else must be predicated on
  OBJ_IS_NATIVE tests.  It turns out jsinterp.c's CACHED_GET and CACHED_SET
  macros neglected to do such tests, limiting the ability of JS embeddings
  to implement JSObjectOps with their own non-JSScope JSObjectMap subclass.
  Fixed, at a small performance cost that the lock-free optimization should
  more than make up for.
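
  The shape of the fix (condensed from the CACHED_GET hunk below;
  CACHED_SET is analogous):

      #define CACHED_GET(call) {                                          \
          if (!OBJ_IS_NATIVE(obj)) {                                      \
              ok = call;                                                  \
          } else {                                                        \
              JS_LOCK_OBJ(cx, obj);                                       \
              /* ... property-cache fast path, as in the full hunk ... */ \
          }                                                               \
      }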

- jslock.c now gives a #error if you try to compile it on a platform that
  lacks a compare-and-swap instruction.  The #error says to use NSPR locks.
  Before this change, some platforms would emulate compare-and-swap using
  a global PRLock, which is always slower at runtime than using per-scope
  PRLocks.
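
  The architecture test now reads (from the jslock.h hunk below; the
  #error itself is in jslock.c, whose diff is suppressed here):

      #if defined(JS_USE_ONLY_NSPR_LOCKS) ||                              \
          !( (defined(_WIN32) && defined(_M_IX86)) ||                     \
             (defined(__GNUC__) && defined(__i386__)) ||                  \
             (defined(SOLARIS) && defined(sparc) && defined(ULTRA_SPARC)) || \
             defined(AIX) )
      #define NSPR_LOCK 1
      #else
      #undef NSPR_LOCK
      #endif
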
brendan%mozilla.org  2000-12-04 02:43:31 +0000
parent 7a40c13587 / commit 0e3fd5e8ba
16 changed files with 1060 additions and 591 deletions

==== changed file ====

@ -645,7 +645,7 @@ JS_NewRuntime(uint32 maxbytes)
rt->requestDone = JS_NEW_CONDVAR(rt->gcLock);
if (!rt->requestDone)
goto bad;
js_SetupLocks(20,20); /* this is asymmetric with JS_ShutDown. */
js_SetupLocks(20, 32); /* this is asymmetric with JS_ShutDown. */
rt->rtLock = JS_NEW_LOCK();
if (!rt->rtLock)
goto bad;
@ -655,6 +655,10 @@ JS_NewRuntime(uint32 maxbytes)
rt->setSlotLock = JS_NEW_LOCK();
if (!rt->setSlotLock)
goto bad;
rt->scopeSharingDone = JS_NEW_CONDVAR(rt->gcLock);
if (!rt->scopeSharingDone)
goto bad;
rt->scopeSharingTodo = NO_SCOPE_SHARING_TODO;
#endif
rt->propertyCache.empty = JS_TRUE;
JS_INIT_CLIST(&rt->contextList);
@ -701,6 +705,8 @@ JS_DestroyRuntime(JSRuntime *rt)
JS_DESTROY_CONDVAR(rt->stateChange);
if (rt->setSlotLock)
JS_DESTROY_LOCK(rt->setSlotLock);
if (rt->scopeSharingDone)
JS_DESTROY_CONDVAR(rt->scopeSharingDone);
#endif
free(rt);
}
@ -748,7 +754,9 @@ JS_BeginRequest(JSContext *cx)
/* Indicate that a request is running. */
rt->requestCount++;
cx->requestDepth = 1;
JS_UNLOCK_GC(rt);
return;
}
cx->requestDepth++;
}
@ -757,18 +765,59 @@ JS_PUBLIC_API(void)
JS_EndRequest(JSContext *cx)
{
JSRuntime *rt;
JSScope *scope, **todop;
uintN nshares;
CHECK_REQUEST(cx);
cx->requestDepth--;
if (!cx->requestDepth) {
rt = cx->runtime;
JS_LOCK_GC(rt);
JS_ASSERT(rt->requestCount > 0);
rt->requestCount--;
JS_ASSERT(cx->requestDepth > 0);
if (cx->requestDepth == 1) {
/* Lock before clearing to interlock with ClaimScope, in jslock.c. */
rt = cx->runtime;
JS_LOCK_GC(rt);
cx->requestDepth = 0;
/* See whether cx has any single-threaded scopes to start sharing. */
todop = &rt->scopeSharingTodo;
nshares = 0;
while ((scope = *todop) != NO_SCOPE_SHARING_TODO) {
if (scope->ownercx != cx) {
todop = &scope->u.link;
continue;
}
*todop = scope->u.link;
scope->u.link = NULL; /* null u.link for sanity ASAP */
/*
* If js_DropObjectMap returns null, we held the last ref to scope.
* The waiting thread(s) must have been killed, after which the GC
* collected the object that held this scope. Unlikely, because it
* requires that the GC ran (e.g., from a branch callback) during
* this request, but possible.
*/
if (js_DropObjectMap(cx, &scope->map, NULL)) {
js_InitLock(&scope->lock);
scope->u.count = 0; /* don't assume NULL puns as 0 */
scope->ownercx = NULL; /* NB: set last, after lock init */
nshares++;
#ifdef DEBUG
JS_ATOMIC_INCREMENT(&rt->sharedScopes);
#endif
}
}
if (nshares)
JS_NOTIFY_ALL_CONDVAR(rt->scopeSharingDone);
/* Give the GC a chance to run if this was the last request running. */
JS_ASSERT(rt->requestCount > 0);
rt->requestCount--;
if (rt->requestCount == 0)
JS_NOTIFY_REQUEST_DONE(rt);
JS_UNLOCK_GC(rt);
JS_NOTIFY_REQUEST_DONE(rt);
JS_UNLOCK_GC(rt);
return;
}
cx->requestDepth--;
}
/* Yield to pending GC operations, regardless of request depth */
@ -794,16 +843,22 @@ JS_YieldRequest(JSContext *cx)
JS_UNLOCK_GC(rt);
}
JS_PUBLIC_API(void)
JS_PUBLIC_API(jsrefcount)
JS_SuspendRequest(JSContext *cx)
{
JS_EndRequest(cx);
jsrefcount saveDepth = cx->requestDepth;
while (cx->requestDepth)
JS_EndRequest(cx);
return saveDepth;
}
JS_PUBLIC_API(void)
JS_ResumeRequest(JSContext *cx)
JS_ResumeRequest(JSContext *cx, jsrefcount saveDepth)
{
JS_BeginRequest(cx);
JS_ASSERT(!cx->requestDepth);
while (--saveDepth >= 0)
JS_BeginRequest(cx);
}
#endif /* JS_THREADSAFE */

==== changed file ====

@ -353,11 +353,11 @@ JS_EndRequest(JSContext *cx);
extern JS_PUBLIC_API(void)
JS_YieldRequest(JSContext *cx);
extern JS_PUBLIC_API(void)
extern JS_PUBLIC_API(jsrefcount)
JS_SuspendRequest(JSContext *cx);
extern JS_PUBLIC_API(void)
JS_ResumeRequest(JSContext *cx);
JS_ResumeRequest(JSContext *cx, jsrefcount saveDepth);
#endif /* JS_THREADSAFE */

==== changed file ====

@ -229,7 +229,7 @@ js_InitAtomState(JSContext *cx, JSAtomState *state)
return JS_FALSE;
}
#ifdef JS_THREADSAFE
js_NewLock(&state->lock);
js_InitLock(&state->lock);
state->tablegen = 0;
#endif
@ -301,7 +301,7 @@ js_FreeAtomState(JSContext *cx, JSAtomState *state)
state->table = NULL;
state->number = 0;
#ifdef JS_THREADSAFE
js_DestroyLock(&state->lock);
js_FinishLock(&state->lock);
#endif
}
@ -776,7 +776,7 @@ js_InitAtomMap(JSContext *cx, JSAtomMap *map, JSAtomList *al)
uint32 count;
#ifdef DEBUG
JS_ATOMIC_ADDREF(&js_atom_map_count, 1);
JS_ATOMIC_INCREMENT(&js_atom_map_count);
#endif
ale = al->list;
if (!ale && !al->table) {
@ -797,7 +797,7 @@ js_InitAtomMap(JSContext *cx, JSAtomMap *map, JSAtomList *al)
if (al->table) {
#ifdef DEBUG
JS_ATOMIC_ADDREF(&js_atom_map_hash_table_count, 1);
JS_ATOMIC_INCREMENT(&js_atom_map_hash_table_count);
#endif
JS_HashTableEnumerateEntries(al->table, js_map_atom, vector);
} else {

==== changed file ====

@ -76,7 +76,7 @@ js_NewContext(JSRuntime *rt, size_t stackChunkSize)
JS_LOCK_RUNTIME(rt);
for (;;) {
first = (rt->contextList.next == (JSCList *)&rt->contextList);
first = (rt->contextList.next == &rt->contextList);
if (rt->state == JSRTS_UP) {
JS_ASSERT(!first);
break;
@ -159,7 +159,7 @@ js_DestroyContext(JSContext *cx, JSGCMode gcmode)
JS_LOCK_RUNTIME(rt);
JS_ASSERT(rt->state == JSRTS_UP || rt->state == JSRTS_LAUNCHING);
JS_REMOVE_LINK(&cx->links);
last = (rt->contextList.next == (JSCList *)&rt->contextList);
last = (rt->contextList.next == &rt->contextList);
if (last)
rt->state = JSRTS_LANDING;
JS_UNLOCK_RUNTIME(rt);
@ -178,7 +178,12 @@ js_DestroyContext(JSContext *cx, JSGCMode gcmode)
}
#if JS_HAS_REGEXPS
/* Remove more GC roots in regExpStatics, then collect garbage. */
/*
* Remove more GC roots in regExpStatics, then collect garbage.
* XXX anti-modularity alert: we rely on the call to js_RemoveRoot within
* XXX this function call to wait for any racing GC to complete, in the
* XXX case where JS_DestroyContext is called outside of a request on cx
*/
js_FreeRegExpStatics(cx, &cx->regExpStatics);
#endif
@ -241,6 +246,21 @@ js_DestroyContext(JSContext *cx, JSGCMode gcmode)
free(cx);
}
JSBool
js_LiveContext(JSRuntime *rt, JSContext *cx)
{
JSCList *cl;
for (cl = rt->contextList.next; cl != &rt->contextList; cl = cl->next) {
if (cl == &cx->links)
return JS_TRUE;
}
#ifdef DEBUG
JS_ATOMIC_INCREMENT(&rt->deadContexts);
#endif
return JS_FALSE;
}
JSContext *
js_ContextIterator(JSRuntime *rt, JSContext **iterp)
{
@ -248,11 +268,11 @@ js_ContextIterator(JSRuntime *rt, JSContext **iterp)
JS_LOCK_RUNTIME(rt);
if (!cx)
cx = (JSContext *)rt->contextList.next;
if ((void *)cx == &rt->contextList)
cx = NULL;
else
*iterp = (JSContext *)cx->links.next;
cx = (JSContext *)&rt->contextList;
cx = (JSContext *)cx->links.next;
if (&cx->links == &rt->contextList)
cx = NULL;
*iterp = cx;
JS_UNLOCK_RUNTIME(rt);
return cx;
}

==== changed file ====

@ -66,11 +66,10 @@ struct JSRuntime {
/* Garbage collector state, used by jsgc.c. */
JSArenaPool gcArenaPool;
JSGCThing *gcFinalVec;
JSHashTable *gcRootsHash;
JSHashTable *gcLocksHash;
JSGCThing *gcFreeList;
jsword gcDisabled;
jsrefcount gcDisabled;
uint32 gcBytes;
uint32 gcLastBytes;
uint32 gcMaxBytes;
@ -158,18 +157,50 @@ struct JSRuntime {
/* Used to serialize cycle checks when setting __proto__ or __parent__. */
PRLock *setSlotLock;
/*
* State for sharing single-threaded scopes, once a second thread tries to
* lock a scope. The scopeSharingDone condvar is protected by rt->gcLock,
* to minimize number of locks taken in JS_EndRequest.
*
* The scopeSharingTodo linked list is likewise "global" per runtime, not
* one-list-per-context, to conserve space over all contexts, optimizing
* for the likely case that scopes become shared rarely, and among a very
* small set of threads (contexts).
*/
PRCondVar *scopeSharingDone;
JSScope *scopeSharingTodo;
/*
* Magic terminator for the rt->scopeSharingTodo linked list, threaded through
* scope->u.link. This hack allows us to test whether a scope is on the list
* by asking whether scope->u.link is non-null. We use a large, likely bogus
* pointer here to distinguish this value from any valid u.count (small int)
* value.
*/
#define NO_SCOPE_SHARING_TODO ((JSScope *) 0xfeedbeef)
#endif
#ifdef DEBUG
jsword inlineCalls;
jsword nativeCalls;
jsword nonInlineCalls;
jsword constructs;
/* Function invocation metering. */
jsrefcount inlineCalls;
jsrefcount nativeCalls;
jsrefcount nonInlineCalls;
jsrefcount constructs;
/* Scope lock metering. */
jsrefcount claimAttempts;
jsrefcount claimedScopes;
jsrefcount deadContexts;
jsrefcount deadlocksAvoided;
jsrefcount liveScopes;
jsrefcount sharedScopes;
jsrefcount totalScopes;
#endif
};
#define JS_ENABLE_GC(rt) JS_ATOMIC_ADDREF(&(rt)->gcDisabled, -1);
#define JS_DISABLE_GC(rt) JS_ATOMIC_ADDREF(&(rt)->gcDisabled, 1);
#define JS_ENABLE_GC(rt) JS_ATOMIC_DECREMENT(&(rt)->gcDisabled);
#define JS_DISABLE_GC(rt) JS_ATOMIC_INCREMENT(&(rt)->gcDisabled);
#ifdef JS_ARGUMENT_FORMATTER_DEFINED
/*
@ -248,6 +279,7 @@ struct JSContext {
#ifdef JS_THREADSAFE
jsword thread;
jsrefcount requestDepth;
JSScope *scopeToShare; /* weak reference, see jslock.c */
#endif
#if JS_HAS_LVALUE_RETURN
@ -291,6 +323,9 @@ js_NewContext(JSRuntime *rt, size_t stackChunkSize);
extern void
js_DestroyContext(JSContext *cx, JSGCMode gcmode);
extern JSBool
js_LiveContext(JSRuntime *rt, JSContext *cx);
extern JSContext *
js_ContextIterator(JSRuntime *rt, JSContext **iterp);

==== changed file ====

@ -798,7 +798,7 @@ fun_finalize(JSContext *cx, JSObject *obj)
return;
if (fun->object == obj)
fun->object = NULL;
JS_ATOMIC_ADDREF(&fun->nrefs, -1);
JS_ATOMIC_DECREMENT(&fun->nrefs);
if (fun->nrefs)
return;
if (fun->script)
@ -1680,7 +1680,7 @@ js_LinkFunctionObject(JSContext *cx, JSFunction *fun, JSObject *funobj)
fun->object = funobj;
if (!JS_SetPrivate(cx, funobj, fun))
return JS_FALSE;
JS_ATOMIC_ADDREF(&fun->nrefs, 1);
JS_ATOMIC_INCREMENT(&fun->nrefs);
return JS_TRUE;
}

==== changed file ====

@ -81,10 +81,7 @@
#define GC_ARENA_SIZE (GC_THINGS_SIZE + GC_FLAGS_SIZE)
/*
* The private JSGCThing struct, which describes a gcFreelist element. We use
* it also for things to be finalized in rt->gcFinalVec, in which case next is
* not a next-thing link, it points to the thing to be finalized. The flagp
* member points to this thing's flags, for fast recycling and finalization.
* The private JSGCThing struct, which describes a gcFreelist element.
*/
struct JSGCThing {
JSGCThing *next;
@ -268,17 +265,11 @@ js_InitGC(JSRuntime *rt, uint32 maxbytes)
JS_InitArenaPool(&rt->gcArenaPool, "gc-arena", GC_ARENA_SIZE,
sizeof(JSGCThing));
rt->gcFinalVec = malloc(GC_FINALIZE_LEN * sizeof(JSGCThing));
if (!rt->gcFinalVec)
return JS_FALSE;
rt->gcRootsHash = JS_NewHashTable(GC_ROOTS_SIZE, gc_hash_root,
JS_CompareValues, JS_CompareValues,
NULL, NULL);
if (!rt->gcRootsHash) {
free(rt->gcFinalVec);
rt->gcFinalVec = NULL;
if (!rt->gcRootsHash)
return JS_FALSE;
}
rt->gcLocksHash = NULL; /* create lazily */
rt->gcMaxBytes = maxbytes;
return JS_TRUE;
@ -339,10 +330,6 @@ js_FinishGC(JSRuntime *rt)
#endif
JS_FinishArenaPool(&rt->gcArenaPool);
JS_ArenaFinish();
if (rt->gcFinalVec) {
free(rt->gcFinalVec);
rt->gcFinalVec = NULL;
}
#if DEBUG
{
@ -382,9 +369,31 @@ js_AddRoot(JSContext *cx, void *rp, const char *name)
JSRuntime *rt;
JSBool ok;
/*
* Due to the long-standing, but now removed, use of rt->gcLock across the
* bulk of js_GC, API users have come to depend on JS_AddRoot etc. locking
* properly with a racing GC, without calling JS_AddRoot from a request.
* We have to preserve API compatibility here, now that we avoid holding
* rt->gcLock across the mark phase (including the root hashtable mark).
*
* If the GC is running and we're called on another thread, wait for this
* GC activation to finish. We can safely wait here (in the case where we
* are called within a request on another thread's context) without fear
* of deadlock because the GC doesn't set rt->gcRunning until after it has
* waited for all active requests to end.
*/
rt = cx->runtime;
JS_LOCK_GC_VOID(rt,
ok = (JS_HashTableAdd(rt->gcRootsHash, rp, (void *)name) != NULL));
JS_LOCK_GC(rt);
#ifdef JS_THREADSAFE
JS_ASSERT(!rt->gcRunning || rt->gcLevel > 0);
if (rt->gcRunning && rt->gcThread != js_CurrentThreadId()) {
do {
JS_AWAIT_GC_DONE(rt);
} while (rt->gcLevel > 0);
}
#endif
ok = (JS_HashTableAdd(rt->gcRootsHash, rp, (void *)name) != NULL);
JS_UNLOCK_GC(rt);
if (!ok)
JS_ReportOutOfMemory(cx);
return ok;
@ -393,7 +402,19 @@ js_AddRoot(JSContext *cx, void *rp, const char *name)
JSBool
js_RemoveRoot(JSRuntime *rt, void *rp)
{
/*
* Due to the JS_RemoveRootRT API, we may be called outside of a request.
* Same synchronization drill as above in js_AddRoot.
*/
JS_LOCK_GC(rt);
#ifdef JS_THREADSAFE
JS_ASSERT(!rt->gcRunning || rt->gcLevel > 0);
if (rt->gcRunning && rt->gcThread != js_CurrentThreadId()) {
do {
JS_AWAIT_GC_DONE(rt);
} while (rt->gcLevel > 0);
}
#endif
JS_HashTableRemove(rt->gcRootsHash, rp);
rt->gcPoke = JS_TRUE;
JS_UNLOCK_GC(rt);
@ -503,8 +524,8 @@ gc_hash_thing(const void *key)
return num >> JSVAL_TAGBITS;
}
#define gc_lock_get_count(he) ((jsword)(he)->value)
#define gc_lock_set_count(he,n) ((jsword)((he)->value = (void *)(n)))
#define gc_lock_get_count(he) ((jsrefcount)(he)->value)
#define gc_lock_set_count(he,n) ((jsrefcount)((he)->value = (void *)(n)))
#define gc_lock_increment(he) gc_lock_set_count(he, gc_lock_get_count(he)+1)
#define gc_lock_decrement(he) gc_lock_set_count(he, gc_lock_get_count(he)-1)
@ -922,40 +943,6 @@ js_ForceGC(JSContext *cx)
} \
JS_END_MACRO
/*
* Finalize phase.
* Don't hold the GC lock while running finalizers!
*/
static void
gc_finalize_phase(JSContext *cx, uintN len)
{
JSRuntime *rt;
JSGCThing *final, *limit, *thing;
uint8 flags, *flagp;
GCFinalizeOp finalizer;
rt = cx->runtime;
JS_UNLOCK_GC(rt);
for (final = rt->gcFinalVec, limit = final + len; final < limit; final++) {
thing = final->next;
flagp = final->flagp;
flags = *flagp;
finalizer = gc_finalizers[flags & GCF_TYPEMASK];
if (finalizer) {
*flagp = (uint8)(flags | GCF_FINAL);
finalizer(cx, thing);
}
/*
* Set flags to GCF_FINAL, signifying that thing is free, but don't
* thread thing onto rt->gcFreeList. We need the GC lock to rebuild
* the freelist below while also looking for free-able arenas.
*/
*flagp = GCF_FINAL;
}
JS_LOCK_GC(rt);
}
void
js_GC(JSContext *cx, uintN gcflags)
{
@ -965,9 +952,9 @@ js_GC(JSContext *cx, uintN gcflags)
uintN i, depth, nslots;
JSStackHeader *sh;
JSArena *a, **ap;
uintN finalpos;
uint8 flags, *flagp, *split;
JSGCThing *thing, *limit, *final, **flp, **oflp;
JSGCThing *thing, *limit, **flp, **oflp;
GCFinalizeOp finalizer;
JSBool all_clear;
#ifdef JS_THREADSAFE
jsword currentThread;
@ -1034,8 +1021,8 @@ js_GC(JSContext *cx, uintN gcflags)
* We assert, but check anyway, in case someone is misusing the API.
* Avoiding the loop over all of rt's contexts is a win in the event
* that the GC runs only on request-less contexts with 0 thread-ids,
* in a special thread such as the UI/DOM/Layout "mozilla" or "main"
* thread in Mozilla-the-browser.
* in a special thread such as might be used by the UI/DOM/Layout
* "mozilla" or "main" thread in Mozilla-the-browser.
*/
JS_ASSERT(cx->requestDepth == 0);
if (cx->requestDepth)
@ -1058,8 +1045,8 @@ js_GC(JSContext *cx, uintN gcflags)
/* Wait for the other thread to finish, then resume our request. */
while (rt->gcLevel > 0)
JS_AWAIT_GC_DONE(rt);
if (cx->requestDepth)
rt->requestCount++;
if (requestDebit)
rt->requestCount += requestDebit;
JS_UNLOCK_GC(rt);
return;
}
@ -1083,9 +1070,19 @@ js_GC(JSContext *cx, uintN gcflags)
#endif /* !JS_THREADSAFE */
/*
* Set rt->gcRunning here within the GC lock, and after waiting for any
* active requests to end, so that new requests that try to JS_AddRoot,
* JS_RemoveRoot, or JS_RemoveRootRT block in JS_BeginRequest waiting for
* rt->gcLevel to drop to zero, while request-less calls to the *Root*
* APIs block in js_AddRoot or js_RemoveRoot (see above in this file),
* waiting for GC to finish.
*/
rt->gcRunning = JS_TRUE;
JS_UNLOCK_GC(rt);
/* Reset malloc counter */
rt->gcMallocBytes = 0;
rt->gcRunning = JS_TRUE;
/* Drop atoms held by the property cache, and clear property weak links. */
js_FlushPropertyCache(cx);
@ -1173,9 +1170,11 @@ restart:
}
/*
* Sweep phase, with interleaved finalize phase.
* Sweep phase.
* Finalize as we sweep, outside of rt->gcLock, but with rt->gcRunning set
* so that any attempt to allocate a GC-thing from a finalizer will fail,
* rather than nest badly and leave the unmarked newborn to be swept.
*/
finalpos = 0;
js_SweepAtomState(&rt->atomState);
for (a = rt->gcArenaPool.first.next; a; a = a->next) {
flagp = (uint8 *) a->base;
@ -1190,13 +1189,16 @@ restart:
if (flags & GCF_MARK) {
*flagp &= ~GCF_MARK;
} else if (!(flags & (GCF_LOCKMASK | GCF_FINAL))) {
if (finalpos == GC_FINALIZE_LEN) {
gc_finalize_phase(cx, finalpos);
finalpos = 0;
/* Call the finalizer with GCF_FINAL ORed into flags. */
finalizer = gc_finalizers[flags & GCF_TYPEMASK];
if (finalizer) {
*flagp = (uint8)(flags | GCF_FINAL);
finalizer(cx, thing);
}
final = &rt->gcFinalVec[finalpos++];
final->next = thing;
final->flagp = flagp;
/* Set flags to GCF_FINAL, signifying that thing is free. */
*flagp = GCF_FINAL;
JS_ASSERT(rt->gcBytes >= sizeof(JSGCThing) + sizeof(uint8));
rt->gcBytes -= sizeof(JSGCThing) + sizeof(uint8);
}
@ -1205,12 +1207,6 @@ restart:
}
}
/*
* Last finalize phase, if needed.
*/
if (finalpos)
gc_finalize_phase(cx, finalpos);
/*
* Free phase.
* Free any unused arenas and rebuild the JSGCThing freelist.
@ -1260,9 +1256,11 @@ restart:
*flp = NULL;
out:
JS_LOCK_GC(rt);
if (rt->gcLevel > 1) {
rt->gcLevel = 1;
goto restart;
rt->gcLevel = 1;
JS_UNLOCK_GC(rt);
goto restart;
}
rt->gcLevel = 0;
rt->gcLastBytes = rt->gcBytes;

==== changed file ====

@ -566,9 +566,9 @@ ComputeThis(JSContext *cx, JSObject *thisp, JSStackFrame *fp)
}
#ifdef DEBUG
# define METER_INVOCATION(rt, which) JS_ATOMIC_ADDREF(&(rt)->which, 1)
# define METER_INVOCATION(rt, which) JS_ATOMIC_INCREMENT(&(rt)->which)
#else
# define METER_INVOCATION(rt, which) ((void)0)
# define METER_INVOCATION(rt, which) /* nothing */
#endif
/*
@ -1751,27 +1751,31 @@ js_Interpret(JSContext *cx, jsval *result)
* in case a getter or setter function is invoked.
*/
#define CACHED_GET(call) { \
JS_LOCK_OBJ(cx, obj); \
PROPERTY_CACHE_TEST(&rt->propertyCache, obj, id, prop); \
if (prop) { \
JSScope *_scope = OBJ_SCOPE(obj); \
sprop = (JSScopeProperty *)prop; \
JS_ATOMIC_ADDREF(&sprop->nrefs, 1); \
slot = (uintN)sprop->slot; \
rval = LOCKED_OBJ_GET_SLOT(obj, slot); \
JS_UNLOCK_SCOPE(cx, _scope); \
ok = SPROP_GET(cx, sprop, obj, obj, &rval); \
if (ok) { \
JS_LOCK_SCOPE(cx, _scope); \
sprop = js_DropScopeProperty(cx, _scope, sprop); \
if (sprop) \
LOCKED_OBJ_SET_SLOT(obj, slot, rval); \
JS_UNLOCK_SCOPE(cx, _scope); \
} \
} else { \
JS_UNLOCK_OBJ(cx, obj); \
if (!OBJ_IS_NATIVE(obj)) { \
ok = call; \
/* No fill here: js_GetProperty fills the cache. */ \
} else { \
JS_LOCK_OBJ(cx, obj); \
PROPERTY_CACHE_TEST(&rt->propertyCache, obj, id, prop); \
if (prop) { \
JSScope *_scope = OBJ_SCOPE(obj); \
sprop = (JSScopeProperty *)prop; \
sprop->nrefs++; \
slot = (uintN)sprop->slot; \
rval = LOCKED_OBJ_GET_SLOT(obj, slot); \
JS_UNLOCK_SCOPE(cx, _scope); \
ok = SPROP_GET(cx, sprop, obj, obj, &rval); \
if (ok) { \
JS_LOCK_SCOPE(cx, _scope); \
sprop = js_DropScopeProperty(cx, _scope, sprop); \
if (sprop) \
LOCKED_OBJ_SET_SLOT(obj, slot, rval); \
JS_UNLOCK_SCOPE(cx, _scope); \
} \
} else { \
JS_UNLOCK_OBJ(cx, obj); \
ok = call; \
/* No fill here: js_GetProperty fills the cache. */ \
} \
} \
}
@ -1782,28 +1786,32 @@ js_Interpret(JSContext *cx, jsval *result)
#endif
#define CACHED_SET(call) { \
JS_LOCK_OBJ(cx, obj); \
PROPERTY_CACHE_TEST(&rt->propertyCache, obj, id, prop); \
if ((sprop = (JSScopeProperty *)prop) && \
!(sprop->attrs & JSPROP_READONLY)) { \
JSScope *_scope = OBJ_SCOPE(obj); \
JS_ATOMIC_ADDREF(&sprop->nrefs, 1); \
JS_UNLOCK_SCOPE(cx, _scope); \
ok = SPROP_SET(cx, sprop, obj, obj, &rval); \
if (ok) { \
JS_LOCK_SCOPE(cx, _scope); \
sprop = js_DropScopeProperty(cx, _scope, sprop); \
if (sprop) { \
LOCKED_OBJ_SET_SLOT(obj, sprop->slot, rval); \
SET_ENUMERATE_ATTR(sprop); \
GC_POKE(cx, JSVAL_NULL); /* second arg ignored! */ \
} \
JS_UNLOCK_SCOPE(cx, _scope); \
} \
} else { \
JS_UNLOCK_OBJ(cx, obj); \
if (!OBJ_IS_NATIVE(obj)) { \
ok = call; \
/* No fill here: js_SetProperty writes through the cache. */ \
} else { \
JS_LOCK_OBJ(cx, obj); \
PROPERTY_CACHE_TEST(&rt->propertyCache, obj, id, prop); \
if ((sprop = (JSScopeProperty *)prop) && \
!(sprop->attrs & JSPROP_READONLY)) { \
JSScope *_scope = OBJ_SCOPE(obj); \
sprop->nrefs++; \
JS_UNLOCK_SCOPE(cx, _scope); \
ok = SPROP_SET(cx, sprop, obj, obj, &rval); \
if (ok) { \
JS_LOCK_SCOPE(cx, _scope); \
sprop = js_DropScopeProperty(cx, _scope, sprop); \
if (sprop) { \
LOCKED_OBJ_SET_SLOT(obj, sprop->slot, rval); \
SET_ENUMERATE_ATTR(sprop); \
GC_POKE(cx, JSVAL_NULL); /* second arg ignored! */ \
} \
JS_UNLOCK_SCOPE(cx, _scope); \
} \
} else { \
JS_UNLOCK_OBJ(cx, obj); \
ok = call; \
/* No fill here: js_SetProperty writes through the cache. */ \
} \
} \
}

==== changed file (diff suppressed because it is too large) ====

==== changed file ====

@ -37,6 +37,7 @@
#ifdef JS_THREADSAFE
#include "jstypes.h"
#include "pratom.h"
#include "prlock.h"
#include "prcvar.h"
#include "jshash.h" /* Added by JSIFY */
@ -48,27 +49,34 @@
#define Thin_SetWait(W) ((jsword)(W) | 0x1)
#define Thin_RemoveWait(W) ((jsword)(W) & ~0x1)
typedef struct JSFatLock {
int susp;
PRLock* slock;
PRCondVar* svar;
struct JSFatLock *next;
struct JSFatLock *prev;
} JSFatLock;
typedef struct JSFatLock JSFatLock;
struct JSFatLock {
int susp;
PRLock *slock;
PRCondVar *svar;
JSFatLock *next;
JSFatLock **prevp;
};
typedef struct JSThinLock {
jsword owner;
JSFatLock *fat;
jsword owner;
JSFatLock *fat;
} JSThinLock;
typedef PRLock JSLock;
typedef struct JSFatLockTable {
JSFatLock *free;
JSFatLock *taken;
JSFatLock *free;
JSFatLock *taken;
} JSFatLockTable;
#define JS_ATOMIC_ADDREF(p, i) js_AtomicAdd(p,i)
/*
* Atomic increment and decrement for a reference counter, given jsrefcount *p.
* NB: jsrefcount is int32, aka PRInt32, so that pratom.h functions work.
*/
#define JS_ATOMIC_INCREMENT(p) PR_AtomicIncrement((PRInt32 *)(p))
#define JS_ATOMIC_DECREMENT(p) PR_AtomicDecrement((PRInt32 *)(p))
#define CurrentThreadId() (jsword)PR_GetCurrentThread()
#define JS_CurrentThreadId() js_CurrentThreadId()
@ -76,8 +84,8 @@ typedef struct JSFatLockTable {
#define JS_DESTROY_LOCK(l) PR_DestroyLock(l)
#define JS_ACQUIRE_LOCK(l) PR_Lock(l)
#define JS_RELEASE_LOCK(l) PR_Unlock(l)
#define JS_LOCK0(P,M) js_Lock(P,M)
#define JS_UNLOCK0(P,M) js_Unlock(P,M)
#define JS_LOCK0(P,M) js_Lock(P,M)
#define JS_UNLOCK0(P,M) js_Unlock(P,M)
#define JS_NEW_CONDVAR(l) PR_NewCondVar(l)
#define JS_DESTROY_CONDVAR(cv) PR_DestroyCondVar(cv)
@ -86,47 +94,78 @@ typedef struct JSFatLockTable {
#define JS_NOTIFY_CONDVAR(cv) PR_NotifyCondVar(cv)
#define JS_NOTIFY_ALL_CONDVAR(cv) PR_NotifyAllCondVar(cv)
#ifdef DEBUG
/*
* Include jsscope.h so JS_LOCK_OBJ macro callers don't have to include it.
* Since there is a JSThinLock member in JSScope, we can't nest this include
* much earlier (see JSThinLock's typedef, above). Yes, that means there is
* an #include cycle between jslock.h and jsscope.h: moderate-sized XXX here,
* to be fixed by moving JS_LOCK_SCOPE to jsscope.h, JS_LOCK_OBJ to jsobj.h,
* and so on.
*
* We also need jsscope.h #ifdef DEBUG for SET_OBJ_INFO and SET_SCOPE_INFO,
* but we do not want any nested includes that depend on DEBUG. Those lead
* to build bustage when someone makes a change that depends in a subtle way
* on jsscope.h being included directly or indirectly, but does not test by
* building optimized as well as DEBUG.
*/
#include "jsscope.h"
#define _SET_OBJ_INFO(obj,f,l) \
_SET_SCOPE_INFO(OBJ_SCOPE(obj),f,l)
#ifdef DEBUG
#define _SET_SCOPE_INFO(scope,f,l) \
(JS_ASSERT(scope->count > 0 && scope->count <= 4), \
scope->file[scope->count-1] = f, \
scope->line[scope->count-1] = l)
#define SET_OBJ_INFO(obj_,file_,line_) \
SET_SCOPE_INFO(OBJ_SCOPE(obj_),file_,line_)
#define SET_SCOPE_INFO(scope_,file_,line_) \
((scope_)->ownercx ? (void)0 : \
(JS_ASSERT((scope_)->u.count > 0 && (scope_)->u.count <= 4), \
(void)((scope_)->file[(scope_)->u.count-1] = (file_), \
(scope_)->line[(scope_)->u.count-1] = (line_))))
#endif /* DEBUG */
#define JS_LOCK_RUNTIME(rt) js_LockRuntime(rt)
#define JS_UNLOCK_RUNTIME(rt) js_UnlockRuntime(rt)
#define JS_LOCK_OBJ(cx,obj) (js_LockObj(cx, obj), \
_SET_OBJ_INFO(obj,__FILE__,__LINE__))
#define JS_UNLOCK_OBJ(cx,obj) js_UnlockObj(cx, obj)
#define JS_LOCK_SCOPE(cx,scope) (js_LockScope(cx, scope), \
_SET_SCOPE_INFO(scope,__FILE__,__LINE__))
#define JS_UNLOCK_SCOPE(cx,scope) js_UnlockScope(cx, scope)
#define JS_TRANSFER_SCOPE_LOCK(cx, scope, newscope) js_TransferScopeLock(cx, scope, newscope)
/*
* NB: The JS_LOCK_OBJ and JS_UNLOCK_OBJ macros work *only* on native objects
* (objects for which OBJ_IS_NATIVE returns true). All uses of these macros in
* the engine are predicated on OBJ_IS_NATIVE or equivalent checks. These uses
* are for optimizations above the JSObjectOps layer, under which object locks
* normally hide.
*/
#define JS_LOCK_OBJ(cx,obj) ((OBJ_SCOPE(obj)->ownercx == (cx)) \
? (void)0 \
: (js_LockObj(cx, obj), \
SET_OBJ_INFO(obj,__FILE__,__LINE__)))
#define JS_UNLOCK_OBJ(cx,obj) ((OBJ_SCOPE(obj)->ownercx == (cx)) \
? (void)0 : js_UnlockObj(cx, obj))
#define JS_LOCK_SCOPE(cx,scope) ((scope)->ownercx == (cx) ? (void)0 : \
(js_LockScope(cx, scope), \
SET_SCOPE_INFO(scope,__FILE__,__LINE__)))
#define JS_UNLOCK_SCOPE(cx,scope) ((scope)->ownercx == (cx) ? (void)0 : \
js_UnlockScope(cx, scope))
#define JS_TRANSFER_SCOPE_LOCK(cx, scope, newscope) \
js_TransferScopeLock(cx, scope, newscope)
extern jsword js_CurrentThreadId();
extern JS_INLINE void js_Lock(JSThinLock *, jsword);
extern JS_INLINE void js_Unlock(JSThinLock *, jsword);
extern int js_CompareAndSwap(jsword *, jsword, jsword);
extern void js_AtomicAdd(jsword*, jsword);
extern void js_LockRuntime(JSRuntime *rt);
extern void js_UnlockRuntime(JSRuntime *rt);
extern void js_LockObj(JSContext *cx, JSObject *obj);
extern void js_UnlockObj(JSContext *cx, JSObject *obj);
extern void js_PromoteScopeLock(JSContext *cx, JSScope *scope);
extern void js_LockScope(JSContext *cx, JSScope *scope);
extern void js_UnlockScope(JSContext *cx, JSScope *scope);
extern int js_SetupLocks(int,int);
extern void js_CleanupLocks();
extern JS_PUBLIC_API(void) js_InitContextForLocking(JSContext *);
extern void js_InitContextForLocking(JSContext *);
extern void js_TransferScopeLock(JSContext *, JSScope *, JSScope *);
extern JS_PUBLIC_API(jsval) js_GetSlotWhileLocked(JSContext *, JSObject *, uint32);
extern JS_PUBLIC_API(void) js_SetSlotWhileLocked(JSContext *, JSObject *, uint32, jsval);
extern void js_NewLock(JSThinLock *);
extern void js_DestroyLock(JSThinLock *);
extern jsval js_GetSlotThreadSafe(JSContext *, JSObject *, uint32);
extern void js_SetSlotThreadSafe(JSContext *, JSObject *, uint32, jsval);
extern void js_InitLock(JSThinLock *);
extern void js_FinishLock(JSThinLock *);
#ifdef DEBUG
@ -148,39 +187,48 @@ extern JSBool js_IsScopeLocked(JSScope *scope);
#define JS_LOCK_OBJ_VOID(cx, obj, e) \
JS_BEGIN_MACRO \
js_LockObj(cx, obj); \
e; \
js_UnlockObj(cx, obj); \
JS_LOCK_OBJ(cx, obj); \
e; \
JS_UNLOCK_OBJ(cx, obj); \
JS_END_MACRO
#define JS_LOCK_VOID(cx, e) \
JS_BEGIN_MACRO \
JSRuntime *_rt = (cx)->runtime; \
JS_LOCK_RUNTIME_VOID(_rt, e); \
JSRuntime *_rt = (cx)->runtime; \
JS_LOCK_RUNTIME_VOID(_rt, e); \
JS_END_MACRO
#if defined(JS_USE_ONLY_NSPR_LOCKS) || \
!( (defined(_WIN32) && defined(_M_IX86)) || defined(SOLARIS) || defined(AIX) || (defined(__GNUC__) && defined(__i386__)) )
#if defined(JS_USE_ONLY_NSPR_LOCKS) || \
!( (defined(_WIN32) && defined(_M_IX86)) || \
(defined(__GNUC__) && defined(__i386__)) || \
(defined(SOLARIS) && defined(sparc) && defined(ULTRA_SPARC)) || \
defined(AIX) )
#define NSPR_LOCK 1
#undef JS_LOCK0
#undef JS_UNLOCK0
#define JS_LOCK0(P,M) JS_ACQUIRE_LOCK(((JSLock*)(P)->fat)); (P)->owner = (M)
#define JS_UNLOCK0(P,M) (P)->owner = 0; JS_RELEASE_LOCK(((JSLock*)(P)->fat))
#define NSPR_LOCK 1
#define JS_LOCK0(P,M) (JS_ACQUIRE_LOCK(((JSLock*)(P)->fat)), (P)->owner = (M))
#define JS_UNLOCK0(P,M) ((P)->owner = 0, JS_RELEASE_LOCK(((JSLock*)(P)->fat)))
#else /* arch-tests */
#undef NSPR_LOCK
#endif /* arch-tests */
#else /* !JS_THREADSAFE */
#define JS_ATOMIC_ADDREF(p,i) (*(p) += i)
#define JS_ATOMIC_INCREMENT(p) (++*(p))
#define JS_ATOMIC_DECREMENT(p) (--*(p))
#define JS_CurrentThreadId() 0
#define JS_NEW_LOCK() NULL
#define JS_DESTROY_LOCK(l) ((void)0)
#define JS_ACQUIRE_LOCK(l) ((void)0)
#define JS_RELEASE_LOCK(l) ((void)0)
#define JS_LOCK0(P,M) ((void)0)
#define JS_UNLOCK0(P,M) ((void)0)
#define JS_LOCK0(P,M) ((void)0)
#define JS_UNLOCK0(P,M) ((void)0)
#define JS_NEW_CONDVAR(l) NULL
#define JS_DESTROY_CONDVAR(cv) ((void)0)
@ -206,9 +254,9 @@ extern JSBool js_IsScopeLocked(JSScope *scope);
#define JS_LOCK_RUNTIME_VOID(rt,e) \
JS_BEGIN_MACRO \
JS_LOCK_RUNTIME(rt); \
e; \
JS_UNLOCK_RUNTIME(rt); \
JS_LOCK_RUNTIME(rt); \
e; \
JS_UNLOCK_RUNTIME(rt); \
JS_END_MACRO
#define JS_LOCK_GC(rt) JS_ACQUIRE_LOCK((rt)->gcLock)
@ -217,17 +265,17 @@ extern JSBool js_IsScopeLocked(JSScope *scope);
#define JS_AWAIT_GC_DONE(rt) JS_WAIT_CONDVAR((rt)->gcDone, JS_NO_TIMEOUT)
#define JS_NOTIFY_GC_DONE(rt) JS_NOTIFY_ALL_CONDVAR((rt)->gcDone)
#define JS_AWAIT_REQUEST_DONE(rt) JS_WAIT_CONDVAR((rt)->requestDone, \
JS_NO_TIMEOUT)
JS_NO_TIMEOUT)
#define JS_NOTIFY_REQUEST_DONE(rt) JS_NOTIFY_CONDVAR((rt)->requestDone)
#define JS_LOCK(P,CX) JS_LOCK0(P,(CX)->thread)
#define JS_UNLOCK(P,CX) JS_UNLOCK0(P,(CX)->thread)
#define JS_LOCK(P,CX) JS_LOCK0(P,(CX)->thread)
#define JS_UNLOCK(P,CX) JS_UNLOCK0(P,(CX)->thread)
#ifndef _SET_OBJ_INFO
#define _SET_OBJ_INFO(obj,f,l) ((void)0)
#ifndef SET_OBJ_INFO
#define SET_OBJ_INFO(obj,f,l) ((void)0)
#endif
#ifndef _SET_SCOPE_INFO
#define _SET_SCOPE_INFO(scope,f,l) ((void)0)
#ifndef SET_SCOPE_INFO
#define SET_SCOPE_INFO(scope,f,l) ((void)0)
#endif
#endif /* jslock_h___ */

==== changed file ====

@ -230,6 +230,7 @@ js_SetProtoOrParent(JSContext *cx, JSObject *obj, uint32 slot, JSObject *pobj)
* rt->setSlotLock < pobj's grand-proto-or-parent's scope lock;
* etc...
* (2) rt->setSlotLock < obj's scope lock < pobj's scope lock.
* rt->setSlotLock < obj's scope lock < rt->gcLock
*
* We avoid AB-BA deadlock by restricting obj from being on pobj's parent
* or proto chain (pobj may already be on obj's parent or proto chain; it
@ -272,6 +273,19 @@ js_SetProtoOrParent(JSContext *cx, JSObject *obj, uint32 slot, JSObject *pobj)
return JS_FALSE;
}
} else if (OBJ_IS_NATIVE(pobj) && OBJ_SCOPE(pobj) != scope) {
#ifdef JS_THREADSAFE
/*
* Avoid deadlock by never nesting a scope lock (for pobj, in
* this case) within a "flyweight" scope lock (for obj). Give
* scope a non-flyweight lock, allowing it to be shared among
* multiple threads. See ClaimScope in jslock.c.
*/
if (scope->ownercx) {
JS_ASSERT(scope->ownercx == cx);
js_PromoteScopeLock(cx, scope);
}
#endif
/* We can't deadlock because we checked for cycles above (2). */
JS_LOCK_OBJ(cx, pobj);
newscope = (JSScope *) js_HoldObjectMap(cx, pobj->map);
@ -1420,7 +1434,7 @@ JSObjectMap *
js_HoldObjectMap(JSContext *cx, JSObjectMap *map)
{
JS_ASSERT(map->nrefs >= 0);
JS_ATOMIC_ADDREF(&map->nrefs, 1);
JS_ATOMIC_INCREMENT(&map->nrefs);
return map;
}
@ -1428,7 +1442,7 @@ JSObjectMap *
js_DropObjectMap(JSContext *cx, JSObjectMap *map, JSObject *obj)
{
JS_ASSERT(map->nrefs > 0);
JS_ATOMIC_ADDREF(&map->nrefs, -1);
JS_ATOMIC_DECREMENT(&map->nrefs);
if (map->nrefs == 0) {
map->ops->destroyObjectMap(cx, map);
return NULL;
@ -1867,7 +1881,7 @@ js_LookupProperty(JSContext *cx, JSObject *obj, jsid id, JSObject **objp,
hash = js_HashValue(id);
for (;;) {
JS_LOCK_OBJ(cx, obj);
_SET_OBJ_INFO(obj, file, line);
SET_OBJ_INFO(obj, file, line);
scope = OBJ_SCOPE(obj);
if (scope->object == obj) {
sym = scope->ops->lookup(cx, scope, id, hash);
@ -1894,7 +1908,7 @@ js_LookupProperty(JSContext *cx, JSObject *obj, jsid id, JSObject **objp,
if (!newresolve(cx, obj, js_IdToValue(id), flags, &obj2))
return JS_FALSE;
JS_LOCK_OBJ(cx, obj);
_SET_OBJ_INFO(obj, file, line);
SET_OBJ_INFO(obj, file, line);
if (obj2) {
scope = OBJ_SCOPE(obj2);
if (MAP_IS_NATIVE(&scope->map))
@ -1905,7 +1919,7 @@ js_LookupProperty(JSContext *cx, JSObject *obj, jsid id, JSObject **objp,
if (!resolve(cx, obj, js_IdToValue(id)))
return JS_FALSE;
JS_LOCK_OBJ(cx, obj);
_SET_OBJ_INFO(obj, file, line);
SET_OBJ_INFO(obj, file, line);
scope = OBJ_SCOPE(obj);
if (MAP_IS_NATIVE(&scope->map))
sym = scope->ops->lookup(cx, scope, id, hash);
@ -1951,7 +1965,7 @@ js_FindProperty(JSContext *cx, jsid id, JSObject **objp, JSObject **pobjp,
if (prop) {
#ifdef JS_THREADSAFE
JS_ASSERT(OBJ_IS_NATIVE(obj));
JS_ATOMIC_ADDREF(&((JSScopeProperty *)prop)->nrefs, 1);
((JSScopeProperty *)prop)->nrefs++;
#endif
*objp = obj;
*pobjp = obj;
@ -2106,7 +2120,7 @@ js_GetProperty(JSContext *cx, JSObject *obj, jsid id, jsval *vp)
slot = sprop->slot;
*vp = LOCKED_OBJ_GET_SLOT(obj2, slot);
#ifndef JS_THREADSAFE
JS_ATOMIC_ADDREF(&sprop->nrefs, 1);
sprop->nrefs++;
#endif
JS_UNLOCK_SCOPE(cx, scope);
if (!SPROP_GET(cx, sprop, obj, obj2, vp)) {
@ -2199,7 +2213,7 @@ js_SetProperty(JSContext *cx, JSObject *obj, jsid id, jsval *vp)
/* Don't clone a setter or shared prototype property. */
if (attrs & (JSPROP_SETTER | JSPROP_SHARED)) {
JS_ATOMIC_ADDREF(&sprop->nrefs, 1);
sprop->nrefs++;
JS_UNLOCK_SCOPE(cx, scope);
ok = SPROP_SET(cx, sprop, obj, obj, vp);
@ -2245,7 +2259,7 @@ js_SetProperty(JSContext *cx, JSObject *obj, jsid id, jsval *vp)
goto unlocked_read_only;
}
if (attrs & (JSPROP_SETTER | JSPROP_SHARED)) {
JS_ATOMIC_ADDREF(&sprop->nrefs, 1);
sprop->nrefs++;
JS_UNLOCK_SCOPE(cx, scope);
ok = SPROP_SET(cx, sprop, obj, obj, vp);
@ -2349,7 +2363,7 @@ unlocked_read_only:
pval = LOCKED_OBJ_GET_SLOT(obj, slot);
/* Hold sprop across setter callout, and drop after, in case of delete. */
JS_ATOMIC_ADDREF(&sprop->nrefs, 1);
sprop->nrefs++;
/* Avoid deadlock by unlocking obj while calling sprop's setter. */
JS_UNLOCK_OBJ(cx, obj);

==== changed file ====

@ -131,10 +131,17 @@ struct JSObject {
#ifdef JS_THREADSAFE
/* Thread-safe functions and wrapper macros for accessing obj->slots. */
#define OBJ_GET_SLOT(cx,obj,slot) \
(OBJ_CHECK_SLOT(obj, slot), js_GetSlotWhileLocked(cx, obj, slot))
#define OBJ_SET_SLOT(cx,obj,slot,value) \
(OBJ_CHECK_SLOT(obj, slot), js_SetSlotWhileLocked(cx, obj, slot, value))
#define OBJ_GET_SLOT(cx,obj,slot) \
(OBJ_CHECK_SLOT(obj, slot), \
(!OBJ_IS_NATIVE(obj) || OBJ_SCOPE(obj)->ownercx == cx) \
? LOCKED_OBJ_GET_SLOT(obj, slot) \
: js_GetSlotThreadSafe(cx, obj, slot))
#define OBJ_SET_SLOT(cx,obj,slot,value) \
(OBJ_CHECK_SLOT(obj, slot), \
(!OBJ_IS_NATIVE(obj) || OBJ_SCOPE(obj)->ownercx == cx) \
? (void) LOCKED_OBJ_SET_SLOT(obj, slot, value) \
: js_SetSlotThreadSafe(cx, obj, slot, value))
#else /* !JS_THREADSAFE */

==== changed file ====

@ -425,26 +425,52 @@ js_NewScope(JSContext *cx, jsrefcount nrefs, JSObjectOps *ops, JSClass *clasp,
scope->data = NULL;
#ifdef JS_THREADSAFE
js_NewLock(&scope->lock);
scope->count = 0;
scope->ownercx = cx;
memset(&scope->lock, 0, sizeof scope->lock);
/*
* Set u.link = NULL, not u.count = 0, in case the target architecture's
* null pointer has a non-zero integer representation.
*/
scope->u.link = NULL;
#ifdef DEBUG
scope->file[0] = scope->file[1] = scope->file[2] = scope->file[3] = NULL;
scope->line[0] = scope->line[1] = scope->line[2] = scope->line[3] = 0;
JS_ATOMIC_INCREMENT(&cx->runtime->liveScopes);
JS_ATOMIC_INCREMENT(&cx->runtime->totalScopes);
#endif
#endif
return scope;
}
#ifdef DEBUG_SCOPE_COUNT
extern void
js_unlog_scope(JSScope *scope);
#endif
void
js_DestroyScope(JSContext *cx, JSScope *scope)
{
JS_LOCK_SCOPE(cx, scope);
scope->ops->clear(cx, scope);
JS_UNLOCK_SCOPE(cx, scope);
#ifdef DEBUG_SCOPE_COUNT
js_unlog_scope(scope);
#endif
#ifdef JS_THREADSAFE
JS_ASSERT(scope->count == 0);
js_DestroyLock(&scope->lock);
/*
* Scope must be single-threaded at this point, so set scope->ownercx.
* This also satisfies the JS_IS_SCOPE_LOCKED assertions in the _clear
* implementations.
*/
JS_ASSERT(scope->u.count == 0);
scope->ownercx = cx;
#endif
scope->ops->clear(cx, scope);
#ifdef JS_THREADSAFE
js_FinishLock(&scope->lock);
#endif
#ifdef DEBUG
JS_ATOMIC_DECREMENT(&cx->runtime->liveScopes);
#endif
JS_free(cx, scope);
}

==== changed file ====

@ -43,6 +43,10 @@
#include "jsprvtd.h"
#include "jspubtd.h"
#ifdef JS_THREADSAFE
# include "jslock.h"
#endif
#ifndef JS_DOUBLE_HASHING
struct JSScopeOps {
JSSymbol * (*lookup)(JSContext *cx, JSScope *scope, jsid id,
@ -69,8 +73,12 @@ struct JSScope {
void *data; /* private data specific to ops */
#endif
#ifdef JS_THREADSAFE
JSContext *ownercx; /* creating context, NULL if shared */
JSThinLock lock; /* binary semaphore protecting scope */
int32 count; /* entry count for reentrancy */
union { /* union lockful and lock-free state: */
jsrefcount count; /* lock entry count for reentrancy */
JSScope *link; /* next link in rt->scopeSharingTodo */
} u;
#ifdef DEBUG
const char *file[4]; /* file where lock was (re-)taken */
unsigned int line[4]; /* line where lock was (re-)taken */
@ -78,12 +86,12 @@ struct JSScope {
#endif
};
#define OBJ_SCOPE(obj) ((JSScope *)(obj)->map)
#define SPROP_GETTER(sprop,obj) SPROP_GETTER_SCOPE(sprop, OBJ_SCOPE(obj))
#define SPROP_SETTER(sprop,obj) SPROP_SETTER_SCOPE(sprop, OBJ_SCOPE(obj))
#define OBJ_SCOPE(obj) ((JSScope *)(obj)->map)
#define SPROP_GETTER(sprop,obj) SPROP_GETTER_SCOPE(sprop, OBJ_SCOPE(obj))
#define SPROP_SETTER(sprop,obj) SPROP_SETTER_SCOPE(sprop, OBJ_SCOPE(obj))
#define SPROP_INVALID_SLOT 0xffffffff
#define SPROP_HAS_VALID_SLOT(_s) ((_s)->slot != SPROP_INVALID_SLOT)
#define SPROP_INVALID_SLOT 0xffffffff
#define SPROP_HAS_VALID_SLOT(sprop) ((sprop)->slot != SPROP_INVALID_SLOT)
#ifdef JS_DOUBLE_HASHING

==== changed file ====

@ -386,12 +386,6 @@ typedef JSIntn JSBool;
************************************************************************/
typedef JSUint8 JSPackedBool;
/*
** Status code used by some routines that have a single point of failure or
** special status return.
*/
typedef enum { JS_FAILURE = -1, JS_SUCCESS = 0 } JSStatus;
/*
** A JSWord is an integer that is the same size as a void*
*/

==== changed file ====

@ -557,13 +557,10 @@ nsXPCWrappedNativeClass::CallWrappedMethod(JSContext* cx,
XPCJSRuntime* rt;
nsXPConnect* xpc;
// This is used to gate calls to JS_SuspendRequest/JS_ResumeRequest
// XXX Looking at cx->requestDepth is currently necessary because the DOM
// nsJSContexts break the nice rules and don't do their work within
// JS Requests. Calling JS_SuspendRequest with a zero requestDepth
// would cause the requestDepth to wrap around to a big number and
// Bad Things would happen.
JSBool useJSRequest = JS_GetContextThread(cx) && cx->requestDepth;
// This is used to nest calls to JS_SuspendRequest/JS_ResumeRequest
// NB: It's safe to call JS_SuspendRequest outside of any request, and
// the matching JS_ResumeRequest(cx, 0) will do no harm.
jsrefcount saveDepth;
#ifdef DEBUG_stats_jband
PRIntervalTime startTime = PR_IntervalNow();
@ -915,15 +912,15 @@ nsXPCWrappedNativeClass::CallWrappedMethod(JSContext* cx,
ccdata.init(callee, vtblIndex, wrapper, cx, argc, argv, vp);
oldccdata = cc->SetData(&ccdata);
if(useJSRequest)
JS_SuspendRequest(cx);
// avoid deadlock in case the native method blocks somehow
saveDepth = JS_SuspendRequest(cx);
// do the invoke
invokeResult = XPTC_InvokeByIndex(callee, vtblIndex,
paramCount, dispatchParams);
if(useJSRequest)
JS_ResumeRequest(cx);
// resume non-blocking JS operations now
JS_ResumeRequest(cx, saveDepth);
xpcc->SetLastResult(invokeResult);
cc->SetData(oldccdata);