This adds a PrefName wrapper that lets us avoid copying the pref name
string in the fast root pref branch case, while still allowing a new
string to be created in the slower non-root pref branch case. By creating a new
string we avoid the odd behavior of mutating the pref branch name on each retrieval.
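A rough sketch of the shape such a wrapper could take, using std::string for
illustration (the actual tree code uses Mozilla string types; every name below
is made up for the example):

    #include <string>

    // Hypothetical sketch only, not the real implementation.
    class PrefName
    {
    public:
      // Root pref branch: the caller already has the full pref name, so we
      // just borrow the pointer and avoid any copy.
      explicit PrefName(const char* aName)
        : mBorrowed(aName)
      {}

      // Non-root pref branch: build a fresh "<branch root><name>" string so
      // the branch's stored root name is never mutated by a lookup.
      PrefName(const std::string& aBranchRoot, const char* aName)
        : mOwned(aBranchRoot + aName)
        , mBorrowed(mOwned.c_str())
      {}

      // Non-copyable, since mBorrowed may point into mOwned.
      PrefName(const PrefName&) = delete;
      PrefName& operator=(const PrefName&) = delete;

      const char* get() const { return mBorrowed; }

    private:
      std::string mOwned;    // empty in the root-branch case
      const char* mBorrowed; // the caller's string, or mOwned's buffer
    };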
MozReview-Commit-ID: HGCLbpGmKrr
This updates test_libPrefs.js so that it actually runs the entire observer
portion of its tests. Previously the observer callback would fire before
the next do_test_pending call could run, causing the test to end prematurely.
An additional test was added to confirm that registering for multiple prefs on
a non-root pref branch works as intended.
If 'media.playback.warnings-as-errors' is true, demuxing and decoding warnings
(i.e., non-fatal errors) will be treated as errors, causing playback to fail.
Currently set to false by default.
This could later be changed to catch and diagnose more issues.
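As a rough illustration of how the pref could gate the behavior (GetBoolPref
below stands in for the real pref lookup, and the warning handler is invented
for the sketch):

    #include <iostream>
    #include <string>

    // Stand-in for the real pref service lookup.
    static bool GetBoolPref(const char* aName, bool aDefault)
    {
      (void)aName; // a real implementation would query the pref service
      return aDefault;
    }

    // Called when the demuxer or decoder reports a non-fatal problem.
    static void HandleMediaWarning(const std::string& aDescription)
    {
      if (GetBoolPref("media.playback.warnings-as-errors", false)) {
        // Pref is true: promote the warning to an error; playback fails.
        std::cerr << "media error: " << aDescription << "\n";
      } else {
        // Default (false): just record the warning and keep playing.
        std::cerr << "media warning: " << aDescription << "\n";
      }
    }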
MozReview-Commit-ID: BTaZ6TbIbNG
--HG--
extra : rebase_source : bacc24a46f588dd344e6d46178ae2d2c58882fcb
The racing algorithm is quite simple at this point:
If racing is enabled, the request is allowed to hit the network, and the cache queue size exceeds a certain threshold, then we trigger the network request right before we query the cache.
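Roughly, the check amounts to the following (the names are invented for the
sketch and are not the actual necko identifiers):

    #include <stddef.h>

    // Sketch of the gating condition described above.
    static bool ShouldTriggerNetworkEarly(bool aRacingEnabled,
                                          bool aAllowedToHitNetwork,
                                          size_t aCacheQueueSize,
                                          size_t aQueueThreshold)
    {
      return aRacingEnabled &&                   // feature switch
             aAllowedToHitNetwork &&             // e.g. not a cache-only load
             aCacheQueueSize > aQueueThreshold;  // cache I/O is backed up
    }

    // Used just before the cache lookup, e.g.:
    //   if (ShouldTriggerNetworkEarly(...)) {
    //     TriggerNetwork();   // start the network request in parallel
    //   }
    //   QueryCache();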
MozReview-Commit-ID: JklG4P1eRyO
They have generic names and can conflict with in-tree headers with the same
name (which is the case for at least port.h). There aren't enough users of
brotli to justify avoiding LOCAL_INCLUDES in the directories that use it.
--HG--
extra : rebase_source : 82531ac5961ad80e1b3d0c1484a2f146be194411
woff_out.h includes port.h (worse, actually "./port.h"), and in
dist/include, port.h is *not* the one from woff2...
We've just been lucky it's worked so far.
--HG--
extra : rebase_source : 65537c1f6c0ba540e0c93ef2c8ba587e5903d273
This gives us simpler code and eliminates some extra frame copies.
MozReview-Commit-ID: 10N0O9Pn0Kw
--HG--
extra : rebase_source : 8c99178ac94b3f580772598766b2e701da7cfe2c
extra : intermediate-source : 6d6877aa1d1905a2bc05dbf77bb4122af1722719
extra : source : 05f80a9025abea168ca2ee649316422418d08dc0
Makes transfer of samples between the content and CDM processes use shmems.
The Chromium CDM API requires us to implement a synchronous interface to supply
buffers to the CDM for it to write decrypted samples into. We want our buffers
to be backed by shmems, in order to reduce the overhead of transferring decoded
frames. However, due to sandboxing restrictions, the CDM process cannot allocate
shmems itself. We don't want to do synchronous IPC to request shmems
from the content process, nor do we want to have to do intr IPC or make async
IPC conform to the sync allocation interface. So instead we have the content
process pre-allocate a set of shmems and give them to the CDM process
before they are needed.
When the CDM needs to allocate a buffer for storing a decrypted sample, the CDM
host gives it one of these shmems' buffers. When the decrypted sample is sent back
to the content process, we copy the result out (uploading to a GPU surface for
video frames) and send the shmem back to the CDM process so it can be reused.
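The lifecycle boils down to a small pool of pre-allocated buffers. A minimal
sketch, with std::vector<uint8_t> standing in for an IPC Shmem and all names
invented for the example:

    #include <stddef.h>
    #include <stdint.h>

    #include <deque>
    #include <utility>
    #include <vector>

    using Buffer = std::vector<uint8_t>; // stand-in for an IPC Shmem

    class BufferPool
    {
    public:
      // Content-process side: pre-allocate aCount buffers of aSize bytes
      // and hand them to the CDM process before they are needed.
      BufferPool(size_t aCount, size_t aSize)
      {
        for (size_t i = 0; i < aCount; i++) {
          mFree.emplace_back(aSize);
        }
      }

      // CDM host side: satisfy the CDM's synchronous allocation request
      // without any IPC, since the buffers were supplied in advance.
      // (A real implementation would handle running out of buffers.)
      Buffer Take()
      {
        Buffer buf = std::move(mFree.front());
        mFree.pop_front();
        return buf;
      }

      // Content-process side: after copying the decrypted result out (or
      // uploading it to a GPU surface), return the buffer for reuse.
      void Return(Buffer&& aBuf)
      {
        mFree.push_back(std::move(aBuf));
      }

    private:
      std::deque<Buffer> mFree;
    };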
We predict the size of the buffers the CDM will allocate, and prepopulate the
CDM's list of shmems with shmems of at least that size, plus a bit of padding for
safety. We pad frames out to the next multiple of 16, as we've seen some decoders
do that.
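One plausible reading of that sizing rule, as a sketch (the helper name is
invented):

    #include <stddef.h>

    // Round the predicted sample size up to a multiple of 16, matching the
    // padding we've seen some decoders apply to their output.
    static size_t PaddedBufferSize(size_t aPredictedSize)
    {
      const size_t kAlign = 16;
      return (aPredictedSize + kAlign - 1) & ~(kAlign - 1);
    }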
Normally the CDM won't allocate more than one buffer at once, but we've seen
cases where it allocates two buffers, returns one and holds onto the other. So
the minimum number of shmems we give to the CDM must be at least two, and the
default is three for safety.
MozReview-Commit-ID: 5FaWAst3aeh
--HG--
extra : rebase_source : a0cb126e72bfb2905bcdf02e864dc654e8340410