gecko-dev/dom/serviceworkers/ServiceWorkerPrivate.h
Andrew Sutherland 24e291c07a Bug 1927247 - Regenerate Client Id for ServiceWorkers on termination. r=dom-worker-reviewers,webidl,smaug
Bug 1544232 changed things so that ServiceWorker globals use a
ClientInfo and Client Id created by the ServiceWorkerPrivate rather
than generating a random client id.  This allows the
ServiceWorkerManager to reliably map a ServiceWorker Client Id back to
the underlying ServiceWorker.

The problem with this is that ClientManagerService does not tolerate
multiple ClientSources using the same id, and a collision results in
an IPC_FAIL.  This did not show up in desktop testing: under fission
the window in which a ServiceWorker and its spawned successor both
have a live ClientSource is incredibly small, because the ClientSource
is torn down by the ClientManager WorkerRef on the transition to
Canceling and both SWs are spawned in the same process.  But on
Android, where there is no fission, SWs are spawned with no process
affinity, so a successor can end up in a different, more responsive
process while the predecessor's ClientSource is still alive.

The fix here is to regenerate the Client Id whenever we terminate the
SW so we are prepared for the next time we spawn the SW.
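
In rough terms the shape of the change looks like the sketch below
(this is an illustration only, not the exact patch; it assumes
ClientManager::CreateInfo() is the helper used to mint a ClientInfo
for the worker's principal, mirroring how the initial ClientInfo is
presumably created in Initialize()):

  void ServiceWorkerPrivate::RegenerateClientInfo() {
    MOZ_ASSERT(NS_IsMainThread());
    // Mint a fresh ClientInfo/id; the previous id may still be backed
    // by a live ClientSource in the process that hosted the previous
    // worker instance.
    mClientInfo = ClientManager::CreateInfo(ClientType::Serviceworker,
                                            mInfo->Principal());
  }

  void ServiceWorkerPrivate::TerminateWorker(
      Maybe<RefPtr<Promise>> aMaybePromise) {
    Shutdown(std::move(aMaybePromise));
    // Prepare the next spawn with an id that cannot collide with a
    // ClientSource the dying worker has not torn down yet.
    RegenerateClientInfo();
  }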

This patch adds an additional test case to
browser_sw_lifetime_extension.js that reproduces the crash on desktop
by artificially blocking the ServiceWorker thread with a monitor so
that the ServiceWorker can't transition to Canceling until its
successor has already been spawned.  This reliably reproduces the bug
when the fix is not in place.  Doing this required adding some new
test infrastructure to WorkerTestUtils.

The new WorkerTestUtils methods provide two ways to hang the worker in
a controlled fashion until an observer notification fires on the main
thread; both are built on a shared helper class and are sketched
below:

1. Use a monitor to completely block the thread until notified.  This
   prevents control runnables from running and thereby prevents worker
   refs from being notified.
2. Acquire a ThreadSafeWorkerRef and hold it until notified.  This lets
   the worker advance to Canceling but prevents progressing to Killing.

I added the WorkerRef mechanism first, but it wasn't sufficient on its
own, so I added the monitor mechanism and slightly generalized the
shared helper in the process.
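
For illustration, the helper and the two mechanisms could look roughly
like this (a sketch only; the names below are invented and are not the
actual WorkerTestUtils internals):

  // Shared helper: the main-thread observer for the requested topic
  // calls Notify(); the worker side either blocks on the monitor
  // (mechanism 1) or just holds a ThreadSafeWorkerRef (mechanism 2).
  class NotificationBlocker final {
   public:
    NotificationBlocker() : mMonitor("WorkerTestUtils::Blocker") {}

    // Mechanism 1, called on the worker thread: completely block the
    // thread.  No control runnables run, so WorkerRefs never get
    // notified and the worker cannot transition to Canceling.
    void BlockUntilNotified() {
      MonitorAutoLock lock(mMonitor);
      while (!mNotified) {
        lock.Wait();
      }
    }

    // Called on the main thread, e.g. from nsIObserver::Observe() for
    // the observer topic the test asked to wait for.
    void Notify() {
      MonitorAutoLock lock(mMonitor);
      mNotified = true;
      lock.NotifyAll();
    }

   private:
    Monitor mMonitor;
    bool mNotified = false;
  };

  // Mechanism 2, called on the worker thread: hold a
  // ThreadSafeWorkerRef until the notification arrives.  The worker
  // can advance to Canceling but not to Killing while the ref lives.
  RefPtr<ThreadSafeWorkerRef> HoldWorker(WorkerPrivate* aWorkerPrivate) {
    RefPtr<StrongWorkerRef> strongRef =
        StrongWorkerRef::Create(aWorkerPrivate, "WorkerTestUtils::Hold");
    if (!strongRef) {
      return nullptr;  // Worker is already shutting down.
    }
    return new ThreadSafeWorkerRef(strongRef);
  }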

A mechanism to generate an observer notification on the main thread is
also added so that the successor ServiceWorker can notify the
predecessor SW without needing to involve JSActors or other means of
running arbitrary JS in the process hosting the SWs.  This does mean
that in non-fission mode we need to limit the browser to a single
process in order to ensure both workers are spawned in the same
process.
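
Something along these lines (again, just a sketch; the topic string
and function name here are invented) is enough to fire that
notification from the successor worker's thread:

  // Bounce to the main thread and fire an observer topic that the
  // predecessor's test helper is waiting on.
  void NotifyTestObserver() {
    NS_DispatchToMainThread(NS_NewRunnableFunction(
        "WorkerTestUtils::NotifyTestObserver", []() {
          nsCOMPtr<nsIObserverService> obs =
              mozilla::services::GetObserverService();
          if (obs) {
            obs->NotifyObservers(nullptr, "sw-test-wake-predecessor",
                                 nullptr);
          }
        }));
  }

The main-thread observer registered for that topic then unblocks the
predecessor (by notifying the monitor or dropping the WorkerRef).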

Differential Revision: https://phabricator.services.mozilla.com/D227446
2024-11-05 06:26:05 +00:00

/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#ifndef mozilla_dom_serviceworkerprivate_h
#define mozilla_dom_serviceworkerprivate_h

#include <functional>
#include <type_traits>

#include "mozilla/Attributes.h"
#include "mozilla/Maybe.h"
#include "mozilla/MozPromise.h"
#include "mozilla/RefPtr.h"
#include "mozilla/TimeStamp.h"
#include "mozilla/UniquePtr.h"
#include "mozilla/dom/FetchService.h"
#include "mozilla/dom/Promise.h"
#include "mozilla/dom/RemoteWorkerController.h"
#include "mozilla/dom/RemoteWorkerTypes.h"
#include "mozilla/dom/ServiceWorkerLifetimeExtension.h"
#include "mozilla/dom/ServiceWorkerOpArgs.h"
#include "nsCOMPtr.h"
#include "nsISupportsImpl.h"
#include "nsTArray.h"

#define NOTIFICATION_CLICK_EVENT_NAME u"notificationclick"
#define NOTIFICATION_CLOSE_EVENT_NAME u"notificationclose"

class nsIInterceptedChannel;
class nsIWorkerDebugger;

namespace mozilla {

template <typename T>
class Maybe;

class JSObjectHolder;

namespace dom {

class PostMessageSource;
class RemoteWorkerControllerChild;
class ServiceWorkerCloneData;
class ServiceWorkerInfo;
class ServiceWorkerPrivate;
class ServiceWorkerRegistrationInfo;

namespace ipc {
class StructuredCloneData;
}  // namespace ipc

class LifeCycleEventCallback : public Runnable {
 public:
  LifeCycleEventCallback() : Runnable("dom::LifeCycleEventCallback") {}

  // Called on the worker thread.
  virtual void SetResult(bool aResult) = 0;
};

// Used to keep track of pending waitUntil as well as in-flight extendable
// events. When the last token is released, we attempt to terminate the worker.
class KeepAliveToken final : public nsISupports {
 public:
  NS_DECL_ISUPPORTS

  explicit KeepAliveToken(ServiceWorkerPrivate* aPrivate);

 private:
  ~KeepAliveToken();

  RefPtr<ServiceWorkerPrivate> mPrivate;
};

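// Note on usage (inferred from the declarations in this header rather than
// spelled out by it): extendable events are expected to hold a KeepAliveToken
// obtained from ServiceWorkerPrivate::CreateEventKeepAliveToken(); the token
// presumably calls AddToken() on construction and ReleaseToken() on
// destruction, so dropping the last token is what allows the idle-timeout
// machinery to terminate the worker.
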
class ServiceWorkerPrivate final : public RemoteWorkerObserver {
  friend class KeepAliveToken;

 public:
  NS_INLINE_DECL_REFCOUNTING(ServiceWorkerPrivate, override);

  using PromiseExtensionWorkerHasListener = MozPromise<bool, nsresult, false>;

 public:
  explicit ServiceWorkerPrivate(ServiceWorkerInfo* aInfo);

  Maybe<ClientInfo> GetClientInfo() { return mClientInfo; }

  nsresult SendMessageEvent(
      RefPtr<ServiceWorkerCloneData>&& aData,
      const ServiceWorkerLifetimeExtension& aLifetimeExtension,
      const PostMessageSource& aSource);

  // This is used to validate the worker script and continue the installation
  // process.
  nsresult CheckScriptEvaluation(
      const ServiceWorkerLifetimeExtension& aLifetimeExtension,
      RefPtr<LifeCycleEventCallback> aCallback);

  nsresult SendLifeCycleEvent(
      const nsAString& aEventType,
      const ServiceWorkerLifetimeExtension& aLifetimeExtension,
      const RefPtr<LifeCycleEventCallback>& aCallback);

  nsresult SendPushEvent(const nsAString& aMessageId,
                         const Maybe<nsTArray<uint8_t>>& aData,
                         RefPtr<ServiceWorkerRegistrationInfo> aRegistration);

  nsresult SendPushSubscriptionChangeEvent();

  nsresult SendNotificationEvent(const nsAString& aEventName,
                                 const nsAString& aID, const nsAString& aTitle,
                                 const nsAString& aDir, const nsAString& aLang,
                                 const nsAString& aBody, const nsAString& aTag,
                                 const nsAString& aIcon, const nsAString& aData,
                                 const nsAString& aBehavior,
                                 const nsAString& aScope);

  nsresult SendFetchEvent(nsCOMPtr<nsIInterceptedChannel> aChannel,
                          nsILoadGroup* aLoadGroup, const nsAString& aClientId,
                          const nsAString& aResultingClientId);

  Result<RefPtr<PromiseExtensionWorkerHasListener>, nsresult>
  WakeForExtensionAPIEvent(const nsAString& aExtensionAPINamespace,
                           const nsAString& aExtensionAPIEventName);

  // This will terminate the currently running worker thread and drop the
  // workerPrivate reference.
  // Called by ServiceWorkerInfo when [[Clear Registration]] is invoked
  // or whenever the spec mandates that we terminate the worker.
  // This is a no-op if the worker has already been stopped.
  //
  // Takes an optional promise that will be resolved when the worker is
  // dead, including if the worker was not running at all.
  void TerminateWorker(Maybe<RefPtr<Promise>> aMaybePromise = Nothing());

  void NoteDeadServiceWorkerInfo();

  void NoteStoppedControllingDocuments();

  void UpdateState(ServiceWorkerState aState);

  nsresult GetDebugger(nsIWorkerDebugger** aResult);

  nsresult AttachDebugger();

  nsresult DetachDebugger();

  // Return the current lifetime deadline for this ServiceWorker; this value
  // may be null or in the past.
  //
  // This value only ever reflects the explicit lifetime extensions
  // resulting from functional events and will never reflect the extra "grace
  // period".
  TimeStamp GetLifetimeDeadline() { return mIdleDeadline; }

  uint32_t GetLaunchCount() { return mLaunchCount; }

  bool IsIdle() const;

  // This promise is used to schedule clearing of the owning registration and
  // its associated Service Workers if that registration becomes "unreachable"
  // by the ServiceWorkerManager. This occurs under two conditions, which are
  // the preconditions to calling this method:
  // - The owning registration must be unregistered.
  // - The associated Service Worker must *not* be controlling clients.
  //
  // Additionally, perhaps stating the obvious, the associated Service Worker
  // must *not* be idle (whatever must be done "when idle" can just be done
  // immediately).
  RefPtr<GenericPromise> GetIdlePromise();

  void SetHandlesFetch(bool aValue);

  RefPtr<GenericPromise> SetSkipWaitingFlag();

  static void RunningShutdown() {
    // Force a final update of the number of running ServiceWorkers
    UpdateRunning(0, 0);
    MOZ_ASSERT(sRunningServiceWorkers == 0);
    MOZ_ASSERT(sRunningServiceWorkersFetch == 0);
  }

  /**
   * Update Telemetry for # of running ServiceWorkers
   */
  static void UpdateRunning(int32_t aDelta, int32_t aFetchDelta);

 private:
  // Timer callbacks
  void NoteIdleWorkerCallback(nsITimer* aTimer);
  void TerminateWorkerCallback(nsITimer* aTimer);

  void RenewKeepAliveToken(
      const ServiceWorkerLifetimeExtension& aLifetimeExtension);

  void ResetIdleTimeout(
      const ServiceWorkerLifetimeExtension& aLifetimeExtension);

  void AddToken();
  void ReleaseToken();

  already_AddRefed<KeepAliveToken> CreateEventKeepAliveToken();

  nsresult SpawnWorkerIfNeeded(
      const ServiceWorkerLifetimeExtension& aLifetimeExtension);

  ~ServiceWorkerPrivate();

  nsresult Initialize();

  void RegenerateClientInfo();

  /**
   * RemoteWorkerObserver
   */
  void CreationFailed() override;

  void CreationSucceeded() override;

  void ErrorReceived(const ErrorValue& aError) override;

  void LockNotified(bool aCreated) final {
    // no-op for service workers
  }

  void WebTransportNotified(bool aCreated) final {
    // no-op for service workers
  }

  void Terminated() override;

  // Refreshes only the parts of mRemoteWorkerData that may change over time.
  void RefreshRemoteWorkerData(
      const RefPtr<ServiceWorkerRegistrationInfo>& aRegistration);

  nsresult SendPushEventInternal(
      RefPtr<ServiceWorkerRegistrationInfo>&& aRegistration,
      ServiceWorkerPushEventOpArgs&& aArgs);

  // Set up navigation preload using the intercepted channel and the
  // RegistrationInfo.
  RefPtr<FetchServicePromises> SetupNavigationPreload(
      nsCOMPtr<nsIInterceptedChannel>& aChannel,
      const RefPtr<ServiceWorkerRegistrationInfo>& aRegistration);

  nsresult SendFetchEventInternal(
      RefPtr<ServiceWorkerRegistrationInfo>&& aRegistration,
      ParentToParentServiceWorkerFetchEventOpArgs&& aArgs,
      nsCOMPtr<nsIInterceptedChannel>&& aChannel,
      RefPtr<FetchServicePromises>&& aPreloadResponseReadyPromises);

  void Shutdown(Maybe<RefPtr<Promise>>&& aMaybePromise = Nothing());

  RefPtr<GenericNonExclusivePromise> ShutdownInternal(
      uint32_t aShutdownStateId);

  nsresult ExecServiceWorkerOp(
      ServiceWorkerOpArgs&& aArgs,
      const ServiceWorkerLifetimeExtension& aLifetimeExtension,
      std::function<void(ServiceWorkerOpResult&&)>&& aSuccessCallback,
      std::function<void()>&& aFailureCallback = [] {});

  class PendingFunctionalEvent {
   public:
    PendingFunctionalEvent(
        ServiceWorkerPrivate* aOwner,
        RefPtr<ServiceWorkerRegistrationInfo>&& aRegistration);

    virtual ~PendingFunctionalEvent();

    virtual nsresult Send() = 0;

   protected:
    ServiceWorkerPrivate* const MOZ_NON_OWNING_REF mOwner;
    RefPtr<ServiceWorkerRegistrationInfo> mRegistration;
  };

  class PendingPushEvent final : public PendingFunctionalEvent {
   public:
    PendingPushEvent(ServiceWorkerPrivate* aOwner,
                     RefPtr<ServiceWorkerRegistrationInfo>&& aRegistration,
                     ServiceWorkerPushEventOpArgs&& aArgs);

    nsresult Send() override;

   private:
    ServiceWorkerPushEventOpArgs mArgs;
  };

  class PendingFetchEvent final : public PendingFunctionalEvent {
   public:
    PendingFetchEvent(
        ServiceWorkerPrivate* aOwner,
        RefPtr<ServiceWorkerRegistrationInfo>&& aRegistration,
        ParentToParentServiceWorkerFetchEventOpArgs&& aArgs,
        nsCOMPtr<nsIInterceptedChannel>&& aChannel,
        RefPtr<FetchServicePromises>&& aPreloadResponseReadyPromises);

    nsresult Send() override;

    ~PendingFetchEvent();

   private:
    ParentToParentServiceWorkerFetchEventOpArgs mArgs;

    nsCOMPtr<nsIInterceptedChannel> mChannel;

    // The promises from FetchService. They indicate whether the preload
    // response is ready or not. The promises' resolve/reject values should be
    // handled in FetchEventOpChild, so that the preload result can be
    // propagated to the ServiceWorker through IPC. However, creation of the
    // FetchEventOpChild could still be pending here, so this member is
    // needed; it is forwarded to the FetchEventOpChild when it is created.
    RefPtr<FetchServicePromises> mPreloadResponseReadyPromises;
  };

  nsTArray<UniquePtr<PendingFunctionalEvent>> mPendingFunctionalEvents;

  /**
   * It's possible that there are still in-progress operations when a
   * termination operation is issued. In this case, it's important to keep
   * the RemoteWorkerControllerChild actor alive until all pending operations
   * have completed before destroying it with Send__delete__().
   *
   * RAIIActorPtrHolder holds a singular, owning reference to a
   * RemoteWorkerControllerChild actor and is responsible for destroying the
   * actor in its (i.e. the holder's) destructor. This implies that all
   * in-progress operations must maintain a strong reference to their
   * corresponding holders and release the reference once completed/canceled.
   *
   * Additionally, a RAIIActorPtrHolder must be initialized with a non-null
   * actor and cannot be moved or copied. Therefore, the identities of two
   * held actors can be compared by simply comparing their holders' addresses.
   */
  class RAIIActorPtrHolder final {
   public:
    NS_INLINE_DECL_REFCOUNTING(RAIIActorPtrHolder)

    explicit RAIIActorPtrHolder(
        already_AddRefed<RemoteWorkerControllerChild> aActor);

    RAIIActorPtrHolder(const RAIIActorPtrHolder& aOther) = delete;
    RAIIActorPtrHolder& operator=(const RAIIActorPtrHolder& aOther) = delete;
    RAIIActorPtrHolder(RAIIActorPtrHolder&& aOther) = delete;
    RAIIActorPtrHolder& operator=(RAIIActorPtrHolder&& aOther) = delete;

    RemoteWorkerControllerChild* operator->() const
        MOZ_NO_ADDREF_RELEASE_ON_RETURN;

    RemoteWorkerControllerChild* get() const;

    RefPtr<GenericPromise> OnDestructor();

   private:
    ~RAIIActorPtrHolder();

    MozPromiseHolder<GenericPromise> mDestructorPromiseHolder;

    const RefPtr<RemoteWorkerControllerChild> mActor;
  };

  RefPtr<RAIIActorPtrHolder> mControllerChild;

  RemoteWorkerData mRemoteWorkerData;

  Maybe<ClientInfo> mClientInfo;

  TimeStamp mServiceWorkerLaunchTimeStart;

  // Counters for Telemetry - totals running simultaneously, and those that
  // handle Fetch, plus Max values for each
  static uint32_t sRunningServiceWorkers;
  static uint32_t sRunningServiceWorkersFetch;
  static uint32_t sRunningServiceWorkersMax;
  static uint32_t sRunningServiceWorkersFetchMax;

  // We know the state after we've evaluated the worker, and we then store
  // it in the registration. The only valid state transition should be
  // from Unknown to Enabled or Disabled.
  enum { Unknown, Enabled, Disabled } mHandlesFetch{Unknown};

  // The info object owns us. It is possible to outlive it for a brief period
  // of time if there are pending waitUntil promises, in which case it
  // will be null and |SpawnWorkerIfNeeded| will always fail.
  ServiceWorkerInfo* MOZ_NON_OWNING_REF mInfo;

  nsCOMPtr<nsITimer> mIdleWorkerTimer;

  ServiceWorkerLifetimeExtension mPendingSpawnLifetime;

  // This is the time in the future at which the idle timer is set to expire
  // for keepalive purposes. It will not be updated for the
  // "dom.serviceWorkers.idle_extended_timeout" grace period after the
  // deadline first expires.
  TimeStamp mIdleDeadline;

  // We keep a token for |dom.serviceWorkers.idle_timeout| seconds to give the
  // worker a grace period after each event.
  RefPtr<KeepAliveToken> mIdleKeepAliveToken;

  uint64_t mDebuggerCount;

  uint64_t mTokenCount;

  uint32_t mLaunchCount;

  // Used by the owning `ServiceWorkerRegistrationInfo` when it wants to call
  // `Clear` after being unregistered and isn't controlling any clients, but
  // this worker (i.e. the registration's active worker) isn't idle yet. Note
  // that such an event should happen at most once in a
  // `ServiceWorkerRegistrationInfo`'s lifetime, so this promise should also
  // only be obtained at most once.
  MozPromiseHolder<GenericPromise> mIdlePromiseHolder;

#ifdef DEBUG
  bool mIdlePromiseObtained = false;
#endif
};

}  // namespace dom
}  // namespace mozilla

#endif  // mozilla_dom_serviceworkerprivate_h