The Fennec CrashReporter class is also renamed to
CrashReporterActivity. When running in Fennec, the Activity is used,
which retains what we do today: prompting for comments, email, etc. When
used in standalone GeckoView, we report the crash without user
interaction if the appropriate GeckoRuntimeSetting is set. The app will
want to ask for user permission at least once in order to set this.
We do not collect the URL, email, or logcat with GeckoView crashes.
Logcat and URL would be nice to have, but it's not clear what the API
for those would look like, and they can be addressed in followup
patches.
MozReview-Commit-ID: C5ROsUKreRe
Right now we pass a bundle to GeckoLoader.setupGeckoEnvironment() with
magic keys representing the environment variables. Instead of this,
simply pass a list of Strings.
MozReview-Commit-ID: D6mSTnYpnGu
We need to call nsInProcessTabChildGlobal::Init immediately after creating the
nsInProcessTabChildGlobal, so that we set up the binding object eagerly. Otherwise we
might end up calling WrapObject on it.
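Roughly, the intended pattern is the sketch below (the constructor
arguments and the exact Init() signature are illustrative, not the real
ones):

    // Create the in-process message manager global and initialize it right
    // away, so the binding object exists before anything calls WrapObject().
    RefPtr<nsInProcessTabChildGlobal> global =
      new nsInProcessTabChildGlobal(docShell, ownerContent, chromeMessageManager);
    nsresult rv = global->Init();
    NS_ENSURE_SUCCESS(rv, rv);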
--HG--
extra : rebase_source : 691fa943e67727e438e4bb33f6a78ff2ea955bf7
Summary:
No bug, Automated HPKP preload list update from task XSqPd8faStCdsylVmzvQ6w
No bug, Automated blocklist update from task XSqPd8faStCdsylVmzvQ6w
Reviewers: sfraser, aki
Reviewed By: sfraser
Differential Revision: https://phabricator.services.mozilla.com/D1256
--HG--
extra : rebase_source : 855e19990c75e2613bd311976297fb6513e02b94
This adds buttons to collapse and expand the JSON tree. If the file
is larger than 100kB, the "Expand All" button is hidden for performance
reasons.
--HG--
extra : histedit_source : 3fa4d8e5523afbc423ebc5a6d803bdb84100f9d7
These were found using some ugly text searches, so it's possible some unused
atoms remain. In the future, we should enforce removing unused atoms using
static analysis. Or just generate the static atoms table based on string atom
names in our code.
This patch leaves unused RDF atoms in place, since those are being dealt with
in another bug.
MozReview-Commit-ID: 1KpH9KsHzQy
--HG--
extra : rebase_source : 8138faa2b16e847da31861abae2bbc1c7bac4e02
on a CLOSED TREE
MozReview-Commit-ID: 8neBZZJgfsp
--HG--
rename : testing/web-platform/tests/css/cssom/stylesheet-title-001.html => testing/web-platform/tests/css/cssom/stylesheet-title.html
Regardless of the size of an encoded image, SourceBuffer::Compact would
try to consolidate all of the chunks into a single chunk. If an image is
quite large, it can be actively harmful to do this, because we allocate a
very large contiguous chunk of memory for no real reason, and spend
extra time on the main thread doing the memcpy/consolidation.
Instead, we now cap the chunk size at 20MB. If we start allocating
chunks of this size, we will not perform compacting when we have
received all of the data. (Save for realloc'ing the last chunk since it
probably isn't full.)
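For illustration, the decision boils down to something like the
self-contained sketch below (the real SourceBuffer uses its own chunk
type, not std::vector):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    static const std::size_t kMaxChunkSize = 20 * 1024 * 1024;  // 20MB cap

    // Decide whether to consolidate chunks once all of the data has arrived.
    bool ShouldCompact(const std::vector<std::vector<uint8_t>>& aChunks) {
      for (const auto& chunk : aChunks) {
        if (chunk.capacity() >= kMaxChunkSize) {
          // A chunk hit the cap, so the image is large; copying everything
          // into one huge contiguous buffer would cost time and memory for
          // no real benefit. Only the last chunk gets shrunk to fit.
          return false;
        }
      }
      return true;  // small image: consolidating into a single chunk is cheap
    }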
On a related note, if we hit an out-of-memory condition in the middle of
appending data to the SourceBuffer, we would swallow the error. This is
because nsIInputStream::ReadSegments will succeed if any data was
written. This leaves the SourceBuffer out of sync. We now propagate this
error up properly to the higher levels.
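As a sketch of the pattern (the AppendState closure here is hypothetical;
the actual imagelib code differs in detail), the writer callback stashes
the append status so the caller can check it even when ReadSegments()
reports success:

    // Closure passed to nsIInputStream::ReadSegments(). ReadSegments()
    // returns NS_OK as long as it consumed any data, so the real failure
    // has to be recorded here and checked by the caller afterwards.
    struct AppendState {
      mozilla::image::SourceBuffer* mBuffer;
      nsresult mStatus = NS_OK;
    };

    static nsresult
    AppendWriter(nsIInputStream*, void* aClosure, const char* aFromSegment,
                 uint32_t, uint32_t aCount, uint32_t* aWriteCount) {
      auto* state = static_cast<AppendState*>(aClosure);
      state->mStatus = state->mBuffer->Append(aFromSegment, aCount);
      if (NS_FAILED(state->mStatus)) {
        *aWriteCount = 0;
        return state->mStatus;  // stop reading; the caller sees mStatus
      }
      *aWriteCount = aCount;
      return NS_OK;
    }

    // Caller: check the stashed status, not just ReadSegments' return value.
    //   AppendState state{this};
    //   rv = aInputStream->ReadSegments(AppendWriter, &state, aCount, &read);
    //   if (NS_FAILED(rv) || NS_FAILED(state.mStatus)) { /* propagate OOM */ }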
fixup