When bug 1508393 landed, it not only enabled producing full frames on
the decoder threads, it also enabled recycling of old animated image
frame buffers that would otherwise have been discarded. Recycling reuses
the contents of an old buffer where possible, given that we know which
pixels changed between the old frame and the frame we want to produce.
However, this calculation was performed in the wrong place. We
originally computed it when we advanced the frame, but at that point
there is no guarantee that we have all of the necessary information; we
may have fallen behind on decoding. As such, we move the calculation to
the point where we actually perform the recycling. There we are
guaranteed to have all the necessary frames between the recycling and
display queues.
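A self-contained sketch of the relocated calculation (all names are
hypothetical, not the real AnimationFrameBuffer code): at recycle time,
every frame between the recycling and display queues is present, so the
union of their dirty rects is exactly the region that must be rewritten.

#include <algorithm>
#include <vector>

// Hypothetical rect type; empty when x0 >= x1 or y0 >= y1.
struct RectSketch {
  int x0 = 0, y0 = 0, x1 = 0, y1 = 0;
  bool IsEmpty() const { return x0 >= x1 || y0 >= y1; }
  void UnionWith(const RectSketch& aOther) {
    if (aOther.IsEmpty()) return;
    if (IsEmpty()) { *this = aOther; return; }
    x0 = std::min(x0, aOther.x0); y0 = std::min(y0, aOther.y0);
    x1 = std::max(x1, aOther.x1); y1 = std::max(y1, aOther.y1);
  }
};

// Union the dirty rects of every frame still pending between the
// recycled buffer and the frame about to be produced.
RectSketch ComputeRecycleRect(const std::vector<RectSketch>& aPendingDirty) {
  RectSketch dirty;
  for (const RectSketch& rect : aPendingDirty) {
    dirty.UnionWith(rect);
  }
  return dirty;
}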
Differential Revision: https://phabricator.services.mozilla.com/D12903
WebRender takes longer than OMTP to release its hold on the current
frame. This is because it runs in a separate process and holds onto the
surface between rendering frames, rather than taking a reference for
each repaint. This patch makes us less aggressive about taking the most
recently added surface out of the recycling queue, to avoid blocking
while waiting for that surface to be released.
Differential Revision: https://phabricator.services.mozilla.com/D10903
Bug 1249474 suggested that we add image/webp to the front of the Accept
header for images, to indicate to servers that we actually support WebP.
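For illustration, a hedged sketch of what the resulting header value
plausibly looks like; the exact value and the helper are assumptions,
not taken from this commit.

#include <string>

// Hypothetical helper: listing image/webp first signals to
// content-negotiating servers that we prefer WebP.
std::string ImageAcceptHeader(bool aWebPSupported) {
  return aWebPSupported ? std::string("image/webp,*/*")
                        : std::string("*/*");
}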
Differential Revision: https://phabricator.services.mozilla.com/D8120
Behavior-wise, this only removes the HasAttr(src) check and adds an
IsEmpty() check on the alt attribute value, since this function is only
called for <img> and <input>.
It also cleans up the code a bit.
Differential Revision: https://phabricator.services.mozilla.com/D11194
There were two unrelated buffering problems in nsWebPDecoder. The first
was with the decoder contract. We are expected to loop until the
iterator is unable to provide more data, and then wait for the
SourceBuffer to reschedule us, whereas nsWebPDecoder::DoDecode only did
one pass. Thus when something yielded wanting more data, we would just
wait forever.
The second was the integration with the libwebp API. We are expected to
retry when we receive SUSPENDED from the decoder, as it decided to yield
pixels instead of continuing to decode as many as possible.
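A self-contained sketch of the corrected contract (every name here is
hypothetical, not the real nsWebPDecoder code):

#include <cstddef>

enum class IterState { Ready, Waiting, Complete };
enum class DecodeResult { NeedMoreData, Done };

struct IteratorSketch {
  IterState Advance() { return IterState::Complete; }  // stub
  const char* Data() { return nullptr; }               // stub
  size_t Length() { return 0; }                        // stub
};

void ConsumeChunk(const char*, size_t) {}              // stub

// Loop until the iterator cannot provide more data; only then yield and
// wait to be rescheduled. A single pass hangs once input runs dry.
DecodeResult DoDecode(IteratorSketch& aIterator) {
  while (true) {
    switch (aIterator.Advance()) {
      case IterState::Ready:
        ConsumeChunk(aIterator.Data(), aIterator.Length());
        continue;  // keep draining; do not stop after one chunk
      case IterState::Waiting:
        return DecodeResult::NeedMoreData;  // producer reschedules us
      case IterState::Complete:
        return DecodeResult::Done;
    }
  }
}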
The tests did not cover the first problem because multi-chunk decoder
tests do not use SourceBuffer scheduling. This was an oversight. They
now write a chunk of data, let the SourceBuffer reschedule the decoder,
and repeat until all of the data has been written.
The tests did not cover the second problem because all of the reference
WebP images are too small. This patch adds a new test with a large WebP
image (converted from a Mozilla all-hands photo of lanyards). This
should actually trigger the SUSPENDED behaviour of libwebp.
Differential Revision: https://phabricator.services.mozilla.com/D10817
For decoders which produce unpaletted partial frames (APNG, WebP), the
surface format should always be BGRA. These frames, while partial, are
the same size as the output size of the animated image. When
FrameAnimator performs the blend with the compositing frame, it expects
all pixels we don't care about to be set to fully transparent. If the
format is BGRX, they end up as solid white instead.
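A hypothetical illustration of why the format matters (not the real gfx
code): the same cleared pixel value is invisible under BGRA but opaque
under BGRX, where the alpha byte is ignored.

#include <cstdint>

// A pixel cleared to "transparent white": A=0, R=G=B=0xFF.
constexpr uint32_t kClearedPixel = 0x00FFFFFF;

// BGRX has no alpha channel, so readers treat alpha as 0xFF.
bool IsVisible(uint32_t aPixel, bool aFormatHasAlpha) {
  uint8_t alpha = aFormatHasAlpha ? uint8_t(aPixel >> 24) : uint8_t(0xFF);
  return alpha != 0;
}

// IsVisible(kClearedPixel, true)  -> false: BGRA, fully transparent
// IsVisible(kClearedPixel, false) -> true:  BGRX, renders solid white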
Differential Revision: https://phabricator.services.mozilla.com/D10753
First, we did not handle the SourceBufferIterator::WAITING state, which
can happen when we get woken up but there is no data to read from the
SourceBufferIterator. StreamingLexer handled this properly by yielding
with NEED_MORE_DATA and scheduling the decoder to resume. This patch
does the same in the WebP decoder.
Second, nsWebPDecoder::GetType was not implemented. This meant it would
return DecoderType::UNKNOWN and would fail to recreate the decoder when
we are discarding frames and need to restart from the beginning. In
addition to implementing that method, this patch also corrects an assert
in DecoderFactory::CloneAnimationDecoder which failed to check for WebP
as a supported animated decoder.
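The missing override is presumably a one-liner along these lines (a
hedged sketch; the exact signature is an assumption):

// Report the correct type so DecoderFactory can recreate this decoder
// when a discarded animation restarts from the beginning.
DecoderType nsWebPDecoder::GetType() const { return DecoderType::WEBP; }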
This patch also modestly improves the logging output and library method
checks.
Differential Revision: https://phabricator.services.mozilla.com/D10624
In the next few patches, AnimationFrameBuffer will be reworked to split
the discarding behaviour from the retaining behaviour. Once they are
implemented as separate classes, it will be easier to reuse the
discarding code for recycling.
Differential Revision: https://phabricator.services.mozilla.com/D7513
When blending full frames off the main thread, FrameAnimator no longer
requires access to the raw data of the frame to advance the animation.
Now we only request a RawAccessFrameRef for the current/next frames when
we have discovered that we need to do blending on the main thread.
In addition to avoiding the mutex overhead of RawAccessFrameRef, this
also opens the door to optimizing the surfaces of individual animated
image frames for the DrawTarget.
Differential Revision: https://phabricator.services.mozilla.com/D7506
When generating display lists for WebRender, we were not caching the
draw result via nsDisplayItemGenericImageGeometry::UpdateDrawResult (or
similar) after completing CreateWebRenderCommands. This is important
because reftests use this to force sync decoding for images; it may be a
reason for image-related intermittent failures on *-qr builds.
Additionally, we may have been requesting fallback in cases where fallback
could not do anything more than WebRender could. For example, if we can't
get an image container yet, there is no point in requesting fallback
because it might just be we haven't started decoding yet. We should just
return the actual draw result in such cases.
In addition to the image container, the draw result can also be useful
for callers to know whether the surface(s) in the container are fully
decoded. This is used in subsequent parts to avoid flickering in some
cases.
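A fully hypothetical, self-contained sketch of the caching rule
described above (none of these names are the real Gecko APIs):

enum class DrawResult { Success, NotReady };

struct DisplayItemSketch {
  DrawResult mCachedResult = DrawResult::NotReady;

  DrawResult CreateWebRenderCommands(bool aHaveContainer) {
    DrawResult result = aHaveContainer ? DrawResult::Success
                                       : DrawResult::NotReady;
    // Cache the result so sync-decoding reftests can observe it...
    mCachedResult = result;
    // ...and return it rather than requesting fallback: "no container
    // yet" may just mean decoding has not started, and fallback could
    // not do any better.
    return result;
  }
};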
This patch was written entirely by the following script:
#!/bin/bash
if [ ! -d "./.hg" ]
then
echo "Not in a source tree." 1>&2
exit 1
fi
find . -regex '.*\(ref\|crash\)test.*\.list' | while read FILENAME
do
echo "Processing ${FILENAME}."
# The following has four substitutions:
# * The first one replaces the *first* argument to fuzzy() when it doesn't
# have a - in it, by replacing it with an explicit 0-N range.
# * The second one does the same for the *second* argument to fuzzy().
# * The third does the same for the *second* argument to fuzzy-if().
# * The fourth does the same for the *third* argument to fuzzy-if().
#
# Note that this is using perl rather than sed because perl doesn't
# support non-greedy matching, which is needed for the first argument to
# fuzzy-if.
perl -pi -e 's/(fuzzy\()([^ ,()-]*)(,[^ ,()]*\))/${1}0-${2}${3}/g;s/(fuzzy\([^ ,()]*,)([^ ,()-]*)(\))/${1}0-${2}${3}/g;s/(fuzzy-if\([^ ]*?,)([^ ,()-]*)(,[^ ,()]*\))/${1}0-${2}${3}/g;s/(fuzzy-if\([^ ]*?,[^ ,()]*,)([^ ,()-]*)(\))/${1}0-${2}${3}/g' "${FILENAME}"
done
Differential Revision: https://phabricator.services.mozilla.com/D2974
DocShells are associated with outer DOM Windows, rather than Documents, so
having the getter on the document is a bit odd to begin with. But it's also
considerably less convenient, since most of the times when we want a docShell
from JS, we're dealing most directly with a window, and have to detour through
the document to get it.
MozReview-Commit-ID: LUj1H9nG3QL
Same approach as the other bug: mostly replaced automatically, by removing
'using mozilla::Forward;' and then:
s/mozilla::Forward/std::forward/
s/Forward</std::forward</
The only file that required manual fixup was TestTreeTraversal.cpp, which had
a class called TestNodeForward with template parameters :)
MozReview-Commit-ID: A88qFG5AccP
This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
and removing the 'using mozilla::Move;' lines.
A few manual fixups followed; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
We can easily use Maybe<DataSourceSurface::ScopedMap> instead of
allocating the map on the heap. This does require some minor changes to
ScopedMap to properly support moves, but should be much more efficient.
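A hedged sketch of the intended usage (the types follow Gecko's
mozilla::Maybe and gfx::DataSourceSurface, but the exact calls here are
assumptions):

#include "mozilla/Maybe.h"
#include "mozilla/gfx/2D.h"

using namespace mozilla;
using namespace mozilla::gfx;

// Keep the map on the stack inside a Maybe<> rather than heap-allocating
// it: emplace() constructs (and maps) in place, reset() unmaps, and move
// support lets it be stored or returned without copying.
void SketchUse(DataSourceSurface* aSurface) {
  Maybe<DataSourceSurface::ScopedMap> map;
  map.emplace(aSurface, DataSourceSurface::MapType::READ);
  if (!map->IsMapped()) {
    map.reset();  // unmap in place; no delete required
    return;
  }
  uint8_t* data = map->GetData();  // use the mapped pixels
  (void)data;
}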
DrawableSurface only exposes DrawableFrameRef to its users. This is
sufficient for the drawing related code in general, but FrameAnimator
really needs RawAccessFrameRef to the underlying pixel data (which may
be paletted). While one can get a RawAccessFrameRef from a
DrawableFrameRef, it requires yet another lock of the imgFrame's mutex.
We can avoid this extra lock if we just allow the callers to get the
right data type in the first place.
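A self-contained sketch of the locking cost avoided (all names
hypothetical): each ref acquisition takes the frame's mutex once, so
routing raw access through a drawable ref locks twice.

#include <mutex>

struct FrameSketch {
  std::mutex mMutex;
  void AcquireDrawableRef()  { std::lock_guard<std::mutex> g(mMutex); }
  void AcquireRawAccessRef() { std::lock_guard<std::mutex> g(mMutex); }
};

void Before(FrameSketch& aFrame) {
  aFrame.AcquireDrawableRef();   // lock #1: DrawableFrameRef
  aFrame.AcquireRawAccessRef();  // lock #2: upgrade to RawAccessFrameRef
}

void After(FrameSketch& aFrame) {
  aFrame.AcquireRawAccessRef();  // one lock: ask for raw access up front
}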
Regardless of the size of an encoded image, SourceBuffer::Compact would
try to consolidate all of the chunks into a single chunk. If an image is
quite large, doing this can be actively harmful: we request a very large
contiguous chunk of memory for no real reason, and spend extra time on
the main thread doing the memcpy/consolidation.
Instead we now cap the chunk size at 20MB. If we start allocating chunks
of this size, we will not perform compacting when we have received all
of the data. (Save for realloc'ing the last chunk, since it probably
isn't full.)
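A hypothetical sketch of the capping policy (the 20MB constant matches
the text above; the names and shape are assumptions):

#include <algorithm>
#include <cstddef>

constexpr size_t kMaxChunkCapacity = 20 * 1024 * 1024;  // 20MB cap

// Chunks grow as requested, but never beyond the cap.
size_t NextChunkCapacity(size_t aRequested) {
  return std::min(aRequested, kMaxChunkCapacity);
}

// Skip the final consolidation pass once chunks are at the cap:
// compacting would demand one giant contiguous allocation and a
// main-thread memcpy for no benefit.
bool ShouldCompact(size_t aLargestChunk, size_t aChunkCount) {
  return aChunkCount > 1 && aLargestChunk < kMaxChunkCapacity;
}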
On a related note, if we hit an out-of-memory condition in the middle of
appending data to the SourceBuffer, we would swallow the error. This is
because nsIInputStream::ReadSegments will succeed if any data was
written. This leaves the SourceBuffer out of sync. We now propagate this
error up properly to the higher levels.
Many of these could probably be fuzzed, but in the interest of getting
the reftest suite turned on sooner I'm doing a blanket fails-if. This
covers all the reftests where there is more fuzz with webrender on
Windows than any of the existing annotations account for. In some cases
the fuzz is only a few pixels more than the equivalent Linux fuzz
already annotated, but I'll clean that up in a future bug.
MozReview-Commit-ID: IaKarbnL46d
After decoding the first frame we allocate the second frame, but before
it finishes we encounter an error. Decoder::PostError is called; it
aborts the second frame and decrements the frame count. But
AnimationSurfaceProvider::CheckForFrameAtTerminalState just asks the
decoder for the current frame ref (which it never cleared) and inserts
that.
The condition we use from the decoder to decide whether to report a new
frame is mFinishedNewFrame (via TakeCompleteFrameCount), but this does
not directly correspond to mFrameCount. So we create a new bool on the
Decoder to track when there is a frame that we can take.
This didn't cause any problems before, but now there is tighter coupling
between the list of frames the AnimationSurfaceProvider has and what
FrameAnimator expects.
Another possible fix would be to clear the current frame ref in
PostError, but the only place we clear the current frame is when we
allocate the new frame, and we still have the mImageData pointer around
that decoders could theoretically use to do final processing on the last
partial frame.
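A hypothetical sketch of the new flag (the member names here are
assumptions): the decoder only reports a takeable frame when one was
actually finished, so an aborted in-progress frame is never handed to
the provider.

struct DecoderSketch {
  bool mFinishedNewFrame = false;  // existing: via TakeCompleteFrameCount
  bool mHasFrameToTake = false;    // new: a finished frame awaits pickup

  void PostFrameStop() {           // a frame was completed
    mFinishedNewFrame = true;
    mHasFrameToTake = true;
  }
  void PostError() {               // the in-progress frame was aborted
    mHasFrameToTake = false;       // don't expose the stale frame ref
  }
};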
With the previous parts, for large animated images, we will now discard
previous frames after we reach the threshold. This mochitest configures
a very low threshold, such that it will trigger on a small animated
image. It then verifies that we are able to loop the animation a couple
of times.
In order to reduce the log size, increase the snapshot polling timeout
from 1ms to 20ms. Additionally use SimpleTest.requestCompleteLog() to
ensure we get everything when the test eventually fails.
This patch was autogenerated by my decomponents.py
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines, when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: either the line starts with
//, ends with */, ends with {, <![CDATA[, """ or '''. The first two
cover comments, the third one covers JS, the fourth covers JS embedded
in XUL, and the final two cover JS embedded in Python. This also
applies if the removed line was the first line of the file.
It covers the pattern matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It'll remove
the entire thing if they are all either Ci, Cr, Cc or Cu, or it will
remove the appropriate ones and leave the residue behind. If there's
only one behind, then it will turn it into a normal, non-pattern
matching variable definition. (For instance, "const { classes: Cc,
Constructor: CC, interfaces: Ci, utils: Cu } = Components" becomes
"const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
This patch adjusts tools/fuzzing/ in such a way that the relevant parts can be
reused in the JS engine. Changes in detail include:
* Various JS_STANDALONE checks to exclude parts that cannot be included in
those builds.
* Turn LibFuzzerRegistry and LibFuzzerRunner into generic FuzzerRegistry and
FuzzerRunner classes and use them for AFL as well. Previously, AFL was
piggy-backing on gtests, which was an ugly solution anyway (besides
that it cannot work in JS). Now more code, such as the registry and
harness, is shared between the two, and they follow almost the same call
paths and entry points. AFL macros in FuzzingInterface have been
rewritten accordingly.
This also required name changes in various places. Furthermore, this
unifies the way the fuzzing target is selected, using the FUZZER
environment variable rather than LIBFUZZER (using LIBFUZZER in browser
builds will give
you a deprecation warning because I know some people are using this already
and need time to switch). Previously, AFL target had to be selected using
GTEST_FILTER, so this is also much better now.
* I had to split up FuzzingInterface* such that the STREAM parts are in a
separate set of files FuzzingInterfaceStream* because they use nsStringStream
which is not allowed to be included into the JS engine even in a full browser
build (error: "Using XPCOM strings is limited to code linked into libxul.").
I also had to pull FuzzingInterface.cpp (the RAW part only) into the header
and make it static because otherwise we would have to make not only separate
files but also separate libraries to statically link to the JS engine, which
seemed overkill for a single small function. The streaming equivalent of the
function is still in a cpp file.
* LibFuzzerRegister functions are now unique by appending the module name to
avoid redefinition errors.
MozReview-Commit-ID: 44zWCdglnHr
These are all no-ops because the objects involved are already implementing one of the WebIDL interfaces that pulls in MozImageLoadingContent, and that's all script gets to see.
MozReview-Commit-ID: Io2mLHbv7qM
All of these tests have existing fuzzy annotations which cover the
differences in the WR renderings. Therefore we can remove the
fails-if(webrender) annotations and use the existing fuzzy annotations
to treat the tests as passing.
MozReview-Commit-ID: LFWha6gAP2r
If we aren't using a downscaler, we avoid this bug because the mask is
either 100% transparent or 100% opaque, and in the transparent case we
just set the whole pixel (32 bits) to 0.
But when we are using a downscaler, we just replace the alpha values in
the original surface (leaving the color values untouched).
We need to go the full premultiply route because, after downscaling the
mask, alpha can have any value instead of just 0 or 255.
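A hedged sketch of the premultiply step this implies (a generic formula,
not the real SurfacePipe code): each color channel must be scaled by
alpha/255, not just have the alpha byte overwritten.

#include <cstdint>

// Premultiply a BGRA pixel by the downscaled mask's alpha value.
uint32_t PremultiplyBGRA(uint32_t aPixel, uint8_t aAlpha) {
  uint32_t b = (aPixel >>  0) & 0xFF;
  uint32_t g = (aPixel >>  8) & 0xFF;
  uint32_t r = (aPixel >> 16) & 0xFF;
  // Scale each channel by alpha/255, rounding to nearest.
  b = (b * aAlpha + 127) / 255;
  g = (g * aAlpha + 127) / 255;
  r = (r * aAlpha + 127) / 255;
  return (uint32_t(aAlpha) << 24) | (r << 16) | (g << 8) | b;
}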