We were erroneously looking for the first reasonably-valued
server-timing-param for each name. However, that's not how it works. We
should really be looking for the first server-timing-param regardless
of how reasonable its value is.
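For illustration only, a minimal sketch of the corrected lookup (FirstParamValue and the Param alias are made-up names, not the actual Necko parser): the first parameter with a matching name wins, even if its value would fail validation.

  #include <optional>
  #include <string>
  #include <utility>
  #include <vector>

  using Param = std::pair<std::string, std::string>;  // (name, raw value)

  // Return the value of the first server-timing-param with the given name,
  // regardless of whether that value is "reasonable".
  std::optional<std::string> FirstParamValue(const std::vector<Param>& aParams,
                                             const std::string& aName) {
    for (const Param& p : aParams) {
      if (p.first == aName) {
        // Previously we kept scanning when this value failed validation;
        // now the first occurrence always wins.
        return p.second;
      }
    }
    return std::nullopt;
  }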
MozReview-Commit-ID: LwaHFyCpteU
--HG--
extra : rebase_source : 995f14fec3bd130e6eeada6c4cac0db0b27e618f
Make sure cookies aren't saved on channel headers in the content process.
Adds a test to verify that this works, and removes tests that expected cookie headers to be visible to the child.
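As a rough illustration only (this is not the real Necko IPC code; StripCookieHeaders is a made-up helper), the idea is that cookie headers are removed before header data is handed to the content process:

  #include <map>
  #include <string>

  // Drop cookie headers so the child process never sees them.
  void StripCookieHeaders(std::map<std::string, std::string>& aHeaders) {
    aHeaders.erase("Cookie");
    aHeaders.erase("Set-Cookie");
  }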
MozReview-Commit-ID: KOB83xpuAlF
--HG--
extra : rebase_source : 6f9a5ef570fb23200acf8d75285e67d80b7c27f0
The bug was caused by the TCP connection not sending back any data and just being closed right away.
So we get something like this:
FTPChannelChild::DoOnStartRequest
FTPChannelChild::DoOnStopRequest -> nsUnknownDecoder::OnStopRequest -> (data is empty) -> nsUnknownDecoder::FireListenerNotifications -> nsDocumentOpenInfo::OnStartRequest -> ExternalHelperAppChild::OnStartRequest -> ExternalHelperAppChild::DivertToParent -> FTPChannelChild::DivertToParent.
However, in nsIDivertableChannel.idl the description for divertToParent() is "divertToParent is called in the child process. It can only be called during OnStartRequest()."
Enforcing this condition seems to be enough to avoid an infinite loop. The crash was fixed by bug 1436311.
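A minimal sketch of the enforced rule, with simplified made-up names rather than the real FTPChannelChild: DivertToParent() only succeeds while OnStartRequest is being delivered, so a call reached from the OnStopRequest path fails instead of recursing.

  // Illustrative only; not Gecko types.
  class ChannelChildSketch {
   public:
    void DoOnStartRequest() {
      mDuringOnStartRequest = true;
      // ... notify the listener; a legitimate DivertToParent() happens here ...
      mDuringOnStartRequest = false;
    }

    bool DivertToParent() {
      if (!mDuringOnStartRequest) {
        // e.g. reached via nsUnknownDecoder from OnStopRequest on an empty
        // FTP response: refuse rather than loop.
        return false;
      }
      // ... divert the channel to the parent process ...
      return true;
    }

   private:
    bool mDuringOnStartRequest = false;
  };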
This was done by automatically replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
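For illustration, a typical call site before and after the mechanical rewrite (hypothetical code, not taken from the tree):

  #include <string>
  #include <utility>
  #include <vector>

  void AppendName(std::vector<std::string>& aNames, std::string aName) {
    // before: aNames.push_back(mozilla::Move(aName));
    aNames.push_back(std::move(aName));  // after: plain std::move from <utility>
  }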
MozReview-Commit-ID: Jxze3adipUh
Before this change, the trusted URI schemes, based on a string whitelist, were:
https, file, resource, app, moz-extension and wss.
This change removes "app" from the list (since we don't implement it),
and adds "about" to the list (because we control how that content is delivered).
This fixes the "Assertion failure: PermissionAvailable(prin, aType), at nsPermissionManager.cpp:2341 when loading FTP URLs on debug builds".
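A hypothetical sketch of the whitelist check after this change (the real check lives in the permission manager and uses Gecko string types; IsTrustedScheme is a made-up name):

  #include <algorithm>
  #include <array>
  #include <string>

  bool IsTrustedScheme(const std::string& aScheme) {
    // "app" dropped, "about" added.
    static const std::array<const char*, 6> kTrusted = {
        "https", "file", "resource", "about", "moz-extension", "wss"};
    return std::any_of(kTrusted.begin(), kTrusted.end(),
                       [&](const char* s) { return aScheme == s; });
  }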
MozReview-Commit-ID: 4eRGQ3hrUWo
--HG--
extra : rebase_source : 36516275b1fe0f266a08394484e19e0aecfbd671
Using concrete class types with static IIDs in QueryInterface methods is a
pretty common pattern which isn't supported by any existing helper macros.
That's led to separate ad-hoc implementations, with varying degrees of
dodginess, being scattered around the tree.
This patch adds a helper macro with a canonical (and safe) implementation, and
updates existing ad-hoc users to use it.
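A very loose sketch of the pattern, using made-up stand-in types rather than Gecko's nsISupports/IID machinery: QueryInterface also answers for the concrete class's own static IID, so callers can safely get the concrete object back from an interface pointer.

  #include <cstring>

  struct IID { const char* name; };
  inline bool Equals(const IID& a, const IID& b) {
    return std::strcmp(a.name, b.name) == 0;
  }

  class ISupportsSketch {
   public:
    virtual ~ISupportsSketch() = default;
    virtual void* QueryInterface(const IID& aIID) = 0;
  };

  class ConcreteThing final : public ISupportsSketch {
   public:
    static const IID kIID;  // the concrete class's own "static IID"

    void* QueryInterface(const IID& aIID) override {
      if (Equals(aIID, kIID)) {
        // Hand back the concrete object itself; the IID match guarantees the
        // caller may treat the result as a ConcreteThing*.
        return this;
      }
      return nullptr;
    }
  };

  const IID ConcreteThing::kIID = {"ConcreteThing"};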
MozReview-Commit-ID: HaTGF7MN5Cv
--HG--
extra : rebase_source : ace930129d85960d22bc3048ca3bb19bbbd4a63e
extra : histedit_source : 03a87f746d957789d41381e4e1bfcc4fd7eebaf2%2C9c5bae9feeeef7721105db67be0f83e0ded66bb7
The id was a B2G feature that was only settable via chrome-privileged XHR, and it is no
longer active in the code base.
MozReview-Commit-ID: 84GPNvhvjNb
--HG--
extra : rebase_source : ab5c2229b98e1407b8b74ef2ee00dcfea45e046a
This commit is a (rebased) backout of changeset 016bcae14073 from bug 1322610,
which simply added a diagnostic to gather more information about a crash.
We can remove that diagnostic now, hence this commit.
MozReview-Commit-ID: 6ea7SAX4PSV
--HG--
extra : rebase_source : c13d9cd5bac4761cfe2dab51f67967462b1bd962
Since URI hostnames are defined to be case-insensitive, we only ever see
lower-case hostnames when looking up substitutions. That means that
substitutions containing capital letters are inaccessible, which is a footgun
that has hit many people.
The handler should lower-case substitutions when they're added so that
look-ups are always case-insensitive.
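A hypothetical sketch of the fix (SubstitutionMapSketch is made up; the real handler's implementation differs): normalize the root key to lower case at registration time, since lookups only ever see lower-cased hosts.

  #include <algorithm>
  #include <cctype>
  #include <map>
  #include <string>
  #include <utility>

  class SubstitutionMapSketch {
   public:
    void SetSubstitution(std::string aRoot, std::string aTarget) {
      // Lower-case the key once here so later lookups are case-insensitive.
      std::transform(aRoot.begin(), aRoot.end(), aRoot.begin(),
                     [](unsigned char c) { return std::tolower(c); });
      mMap[std::move(aRoot)] = std::move(aTarget);
    }

    const std::string* GetSubstitution(const std::string& aLowerCaseRoot) const {
      auto it = mMap.find(aLowerCaseRoot);
      return it == mMap.end() ? nullptr : &it->second;
    }

   private:
    std::map<std::string, std::string> mMap;
  };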
MozReview-Commit-ID: C936hS2cSyY
--HG--
extra : rebase_source : a70e8ceb822879e51c3a40232b7dffdfb9c0a185
This also removes any redundant Ci.nsISupports elements in the interface
lists.
This was done using the following script:
acecb401b7/processors/chromeutils-generateQI.jsm
MozReview-Commit-ID: AIx10P8GpZY
--HG--
extra : rebase_source : a29c07530586dc18ba040f19215475ac20fcfb3b
When writing to the alt-data output stream fails for whatever reason, we now try to
truncate the alternative data and keep the original data instead of dooming the
whole entry. The patch also changes how the predicted size is passed to the
cache. Instead of a dedicated method, it's now an argument of the openOutputStream
and openAlternativeOutputStream methods, which fail in case the entry would
exceed the allowed limit.
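A rough sketch of the API shape described above, with made-up names rather than the real cache interfaces: the predicted size goes straight to the open call, which fails up front if the entry would exceed the limit, and a failed alt-data write drops only the alternative data.

  #include <cstdint>

  class CacheEntrySketch {
   public:
    explicit CacheEntrySketch(int64_t aMaxEntrySize) : mMaxEntrySize(aMaxEntrySize) {}

    // Refuse to open a stream that would push the entry past the limit.
    bool OpenOutputStream(int64_t aOffset, int64_t aPredictedSize) {
      return aOffset + aPredictedSize <= mMaxEntrySize;
    }

    bool OpenAlternativeOutputStream(int64_t aPredictedSize) {
      return mDataSize + aPredictedSize <= mMaxEntrySize;
    }

    // On a failed alt-data write, keep the original data; drop only the
    // (partial) alternative data instead of dooming the whole entry.
    void OnAltDataWriteFailed() { mAltDataSize = 0; }

   private:
    int64_t mMaxEntrySize;
    int64_t mDataSize = 0;
    int64_t mAltDataSize = 0;
  };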