Previously the PVerifySSLServerCert protocol consisted of two functions: one to
call when certificate verification succeeded, and another to call upon failure.
This split was unnecessary, as the code on either side of the protocol
boundary didn't share it. This patch unifies the protocol to better match the surrounding
code. It also takes the opportunity to make use of some IPC helpers to
serialize enums rather than manually casting to and from basic integer types.
Differential Revision: https://phabricator.services.mozilla.com/D212594
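A minimal sketch of the unified shape (all names here are illustrative, not the actual PVerifySSLServerCert interface): instead of separate success/failure callbacks, a single completion callback takes a result enum, which IPC helpers can then serialize directly instead of round-tripping through raw integers.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>

// Hypothetical sketch: one unified completion callback taking a result
// enum (cf. IPC enum serialization helpers), replacing the old
// OnSuccess/OnFailure pair.
enum class VerifyResult : uint8_t { Success, Failure };

struct VerifyRequest {
  std::function<void(VerifyResult)> mOnComplete;  // single unified callback

  void Resolve(bool aCertOk) {
    mOnComplete(aCertOk ? VerifyResult::Success : VerifyResult::Failure);
  }
};
```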
* Adjust UtilityProcess{Host,Manager} to propagate ipc::LaunchErrors for
their clients, or to create new ones where none previously existed.
* Adjust the various clients of the above to handle the additional
information -- mostly by adding the additional failure-location data
to log messages.
* Fix an unrelated bug wherein the return type of `LaunchProcess` was
declared as exclusive, despite being cached and reused.
In particular, filedialog::Error objects should now contain -- and report
to telemetry without further adjustment -- the actual error code from
`ipc::LaunchError`s.
(Reporting the original failure location as well will occur in bug
1884221.)
Differential Revision: https://phabricator.services.mozilla.com/D209715
As the intended use for LaunchError::mFunction is telemetry, avoid the
possibility of accidental exfiltration of PII by requiring that
LaunchError be constructed from `StaticString`.
Additionally, remove the Windows-specific constructor overloads in favor
of an explicit factory function, and explicitly document that `mError`
is a generic bag of bits rather than any kind of strict error type.
No functional changes.
Differential Revision: https://phabricator.services.mozilla.com/D209712
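A sketch of the PII-safety idea (class shapes and the factory name are illustrative, not the exact Gecko declarations): a `StaticString`-style wrapper only accepts string literals at compile time, so no runtime-built string containing user data can ever reach the telemetry field.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Illustrative StaticString: constructible only from string literals,
// so LaunchError::mFunction can never carry runtime (PII-bearing) data.
class StaticString {
 public:
  template <size_t N>
  constexpr StaticString(const char (&aLiteral)[N]) : mStr(aLiteral) {}
  const char* get() const { return mStr; }

 private:
  const char* mStr;
};

class LaunchError {
 public:
  LaunchError(StaticString aFunction, int64_t aError = 0)
      : mFunction(aFunction), mError(aError) {}

#ifdef _WIN32
  // Explicit factory instead of a constructor overload (name hypothetical).
  static LaunchError FromWin32Error(StaticString aFunction, uint32_t aErr) {
    return LaunchError(aFunction, int64_t(aErr));
  }
#endif

  StaticString FunctionName() const { return mFunction; }
  // mError is a generic bag of bits, not a strict error type.
  int64_t ErrorCode() const { return mError; }

 private:
  StaticString mFunction;
  int64_t mError;
};
```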
As we've added encryption scheme per content type in previous patches,
there is no need to keep this old encryption scheme.
Differential Revision: https://phabricator.services.mozilla.com/D211793
Currently our implementation tracks scheme support per key system, not
per content type. As each content type can have different supported
schemes (e.g. type A only supports cenc while type B only supports
cbcs), having schemes only per key system can't return a precise result.
Differential Revision: https://phabricator.services.mozilla.com/D211642
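The idea can be sketched as follows (types and names are illustrative, not the actual key-system configuration code): keying the supported schemes by content type lets a query answer precisely when, say, video supports only cenc while audio supports only cbcs.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Illustrative sketch: supported encryption schemes keyed by content
// type, rather than one flat set for the whole key system.
using SchemeSet = std::set<std::string>;

struct KeySystemConfig {
  std::map<std::string, SchemeSet> mSchemesByContentType;

  bool Supports(const std::string& aContentType,
                const std::string& aScheme) const {
    auto it = mSchemesByContentType.find(aContentType);
    return it != mSchemesByContentType.end() && it->second.count(aScheme);
  }
};
```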
Actually remove the check for demangle; no supported target needs that
check.
Also make library dependencies explicit instead of relying on "$LIBS".
Differential Revision: https://phabricator.services.mozilla.com/D203637
This patch adjusts ManagedContainer to have a common base class, and exposes
methods for interacting with this base class from generic code on IProtocol.
This avoids the need for some specialized methods which were previously
required in order to manipulate the managed lists, and allows the ordering
to be more precisely controlled in generic actor lifecycle methods.
Differential Revision: https://phabricator.services.mozilla.com/D209855
This changes the way that StmtCode handles the pattern `$,{list}` alone on a
line, adjusting it such that each item on the list is printed onto its own
line, and then indented. This helps the formatting of large lists such as the
ones generated in part 2.
Differential Revision: https://phabricator.services.mozilla.com/D209854
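The formatting change can be illustrated roughly like this (function and parameter names are made up; the real StmtCode logic lives in the IPDL code generator): when a comma-joined list is the only thing on a line, each element is emitted on its own indented line instead of one long line.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch of the new list layout: one element per line,
// comma-terminated except for the last, each at the given indent.
std::string FormatList(const std::vector<std::string>& aItems,
                       const std::string& aIndent) {
  std::ostringstream out;
  for (size_t i = 0; i < aItems.size(); ++i) {
    out << aIndent << aItems[i];
    if (i + 1 < aItems.size()) {
      out << ",";
    }
    out << "\n";
  }
  return out.str();
}
```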
In part 1, a fallback was added to allow message buffers which would be sent as
shmem to be sent inline if shmem allocation or mapping failed. This could
potentially lead to an increase in message-size-too-large crashes, as these
messages are now being sent inline again.
This patch adds an extra crash annotation such that failures of this kind can be
identified in socorro.
Depends on D209880
Differential Revision: https://phabricator.services.mozilla.com/D209881
This may help reduce crashes in some cases, especially on 32-bit machines which
may be suffering from severe memory fragmentation. This required serializing
extra information for large buffers which would be sent as a shmem to record if
the shmem allocation succeeded or failed on the sending side.
Differential Revision: https://phabricator.services.mozilla.com/D209880
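A hedged sketch of the fallback path (all names here are hypothetical, not the actual IPC serialization code): large buffers are preferred as shared memory, but if allocation or mapping fails, the buffer is sent inline, and whether shmem succeeded is recorded alongside the payload so the receiving side knows which path was taken.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Hypothetical sketch: record on the sending side whether the shmem
// allocation succeeded, falling back to an inline copy when it fails
// (e.g. under severe 32-bit address-space fragmentation).
struct OutgoingBuffer {
  bool mSentAsShmem;
  std::vector<std::byte> mInlineData;  // populated only on fallback
};

std::optional<std::vector<std::byte>> TryAllocShmem(size_t aSize,
                                                    bool aSimulateFailure) {
  if (aSimulateFailure) {
    return std::nullopt;  // allocation or mapping failed
  }
  return std::vector<std::byte>(aSize);
}

OutgoingBuffer PrepareBuffer(const std::vector<std::byte>& aData,
                             bool aSimulateFailure) {
  if (auto shmem = TryAllocShmem(aData.size(), aSimulateFailure)) {
    // Real code would copy into the mapped region and send the handle.
    return OutgoingBuffer{true, {}};
  }
  return OutgoingBuffer{false, aData};  // send inline instead
}
```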
When sync IPC under the top-level PCompositorManager protocol does not
reply within a certain time threshold we purposefully kill the GPU
process. While this allows the user to recover from a stuck GPU
process, we have little visibility about the underlying cause.
This patch makes it so that we generate a paired minidump for the GPU
and parent processes prior to killing the GPU process in
GPUProcessHost::KillHard(). The implementation roughly follows the
equivalent for content processes in ContentParent::KillHard().
As the GPU process can be purposefully killed during normal operation,
and because generating minidumps can be expensive, we are careful to
only do so when the new argument aGenerateMinidump is true. We
additionally remove the aReason argument as it is unused (and
currently inaccurate in some places).
As these minidumps may not be automatically submitted, we limit
minidump generation to twice per session in order to avoid
accumulating a large number of unsubmitted minidumps on disk.
Differential Revision: https://phabricator.services.mozilla.com/D202166
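The per-session cap can be sketched like this (the constant and class names are illustrative, not the actual GPUProcessHost members): because these paired minidumps may never be submitted, generation is capped at two per session to avoid piling up unsubmitted dumps on disk.

```cpp
#include <cassert>

// Illustrative rate limiter: allow at most two minidump generations
// per browser session.
constexpr int kMaxMinidumpsPerSession = 2;

class MinidumpLimiter {
 public:
  // Returns true when the caller may generate a minidump this time.
  bool MaybeGenerate() {
    if (mGenerated >= kMaxMinidumpsPerSession) {
      return false;
    }
    ++mGenerated;
    return true;  // real code would write the paired minidump here
  }

 private:
  int mGenerated = 0;
};
```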
Sensible defaults are very different for the browser and the shell, so I
added separate constants. I think the JS testing functions can get called
from the browser and so may pick the wrong default, but it's not too
serious.
Differential Revision: https://phabricator.services.mozilla.com/D210058
In the changes from bug 1879375, the conditions for entries to be added
to mPendingMessages were changed, such that it is now possible for the
broker to temporarily have empty entries in this table. This means the
assertion is no longer correct and should be removed.
Differential Revision: https://phabricator.services.mozilla.com/D209861
This is done by generating a new ID for the actor which can be successfully
registered, then explicitly tearing the actor down, improving the consistency
of error behaviour with other error cases during actor construction.
Differential Revision: https://phabricator.services.mozilla.com/D209064
Slightly restructure the script to be compatible with multiprocessing,
but in a non-intrusive way: guard execution by __main__ and use shared
containers for shared states.
Avoid a redundant parser call (even if cached).
Differential Revision: https://phabricator.services.mozilla.com/D207180
With the new changes in bug 1724083, it is possible for an actor to be
destroyed during SetManagerAndRegister, which can lead to RemoveManagee being
called before the managee is even registered, leading to an assertion failure.
The easiest fix here is to relax this assertion, as we won't insert the entry
later in this failure case.
Differential Revision: https://phabricator.services.mozilla.com/D208936
It appears that the Element member may have been creating a reference
cycle passing through the new strong WindowGlobalParent::Manager()
reference.
This patch also removes an unused member from BrowserParent which
otherwise may have needed to be cycle-collected.
Differential Revision: https://phabricator.services.mozilla.com/D207170
There are a few IPDL actors which are cycle-collected, including `PBrowser`,
`PContent`, and `PWindowGlobal`.
This patch adds support for these actors to traverse and unlink the new strong
Manager() reference added by IPDL, allowing cycles containing these actors to
be properly unlinked and avoiding leaks.
Differential Revision: https://phabricator.services.mozilla.com/D198629
While the BackgroundChildImpl actor is not safe to use from any thread other
than the one it was created on, it holds no important thread-bound members at
destruction time, and does no meaningful work in its destructor.
Actors managed by BackgroundChild are occasionally kept alive longer than the
thread they were bound to, meaning that the new Manager() strong reference
would keep BackgroundChildImpl alive past the death of its thread and IPC link.
This change makes the reference counting threadsafe, so that it's OK to destroy
these final references from another thread.
Differential Revision: https://phabricator.services.mozilla.com/D198628
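The spirit of the change can be sketched with a plain atomic counter (this is not the Gecko macro machinery, which uses helpers along the lines of `NS_INLINE_DECL_THREADSAFE_REFCOUNTING`): making the count atomic is what makes dropping the final reference safe from a thread other than the one the object was created on.

```cpp
#include <atomic>
#include <cassert>

// Illustrative sketch: a threadsafe reference count. fetch_sub with
// acquire-release ordering makes it safe for the last Release() to
// happen on any thread.
class RefCounted {
 public:
  void AddRef() { mRefCnt.fetch_add(1, std::memory_order_relaxed); }

  // Returns true when this call dropped the last reference (the caller
  // would then delete the object).
  bool Release() {
    return mRefCnt.fetch_sub(1, std::memory_order_acq_rel) == 1;
  }

 private:
  std::atomic<int> mRefCnt{1};  // creator holds the initial reference
};
```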
This makes accessing `Manager()` on an IPDL protocol safer, and replaces the
previous ActorLifecycleProxy reference, which would only keep the manager alive
until IPDL drops its reference.
Unfortunately this introduces some leaks due to reference cycles, which will be
fixed in follow-up parts.
Differential Revision: https://phabricator.services.mozilla.com/D198624
Suppose we have 3 nodes: A, B, and C. In this scenario A is the broker,
and there are two child-nodes B and C. A creates a port-pair (Ab <->
Ac), and sends one each to B (Ab => Bb) and C (Ac => Cc). Assuming both
directions of the proxy bypass occur concurrently, we'll send a number
of ObserveProxy messages between node A and nodes B/C, eventually
cleaning up the proxies in A, such that Bb's peer is Cc, and vice versa.
During this process, we never attempt to send a message directly between
nodes B and C.
In NodeController, direct connections between a pair of nodes are
established via the broker node when attempting to send a message
directly between nodes, but as we have not attempted to send any
messages directly, no such connection has been established. That means
that when one of these child nodes dies, the other node will not be
notified of the peer being lost, and the IPDL actor will appear to
remain open. Only once a message is sent will the death of the peer node
be discovered, and the corresponding actor destroyed.
To fix this, we modify the routing code, adding a couple of callbacks
when accepting ports over IPC and bypassing proxies which notify
NodeController, allowing it to attempt an introduction eagerly. This
helps ensure that actors will reliably be notified when their peer
processes die.
In addition, some tweaks to the introduction logic were made to both
make introductions happen reliably, and to ensure we clean up missing
peer nodes in error conditions.
Differential Revision: https://phabricator.services.mozilla.com/D201153
This changes the locking behaviour for IPC port mutexes in TSAN builds
to use a single shared mutex for all ports, rather than individual
mutexes per-port. This avoids the need to potentially lock a large
number of mutexes simultaneously when sending a large number of ports in
an IPC message.
I've tried to leave in the various debug assertions such that it still
acts like there are multiple mutexes under the hood. It is likely that
this could harm performance somewhat due to the increased contention,
however it should have no impact on actual release builds.
Differential Revision: https://phabricator.services.mozilla.com/D207073
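A sketch of the TSAN-only strategy (the class and member names are illustrative; only the build-define name `MOZ_TSAN` is real, and it is forced on here so the example exercises the shared path): under TSAN all ports share one mutex, so sending a message carrying many ports acquires a single lock instead of one per port, while other builds keep a per-port mutex. A recursive mutex keeps the sketch simple when several ports are "locked" at once; the real code instead tracks whether the shared mutex is already held, and keeps per-port debug state so assertions behave as if each port had its own lock.

```cpp
#include <cassert>
#include <mutex>

#define MOZ_TSAN 1  // forced on for this example; normally set by the build

class PortLock {
 public:
  void Lock() {
#if defined(MOZ_TSAN)
    SharedMutex().lock();  // one shared mutex for every port
#else
    mMutex.lock();  // ordinary per-port mutex
#endif
    ++mLockCount;  // per-port bookkeeping survives either way
  }

  void Unlock() {
#if defined(MOZ_TSAN)
    SharedMutex().unlock();
#else
    mMutex.unlock();
#endif
  }

  int LockCount() const { return mLockCount; }

 private:
  static std::recursive_mutex& SharedMutex() {
    static std::recursive_mutex sMutex;
    return sMutex;
  }
  std::mutex mMutex;  // only used in non-TSAN builds
  int mLockCount = 0;
};
```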