This is a mega-patch that was too hard to disentangle. Here's what it does:
-- Create infrastructure around AudioNode::UpdateOutputEnded to detect
when a node can no longer produce any output. When that becomes true,
disconnect it from the AudioNode graph.
-- Have AudioNode implement JSBindingFinalized so that it can be used as
an input to UpdateOutputEnded.
-- Give every AudioNode a MediaStream, and give every connection
a MediaInputPort.
-- Actually play the audio that reaches the AudioContext's destination node.
-- Force AudioContext to use the audio sample rate defined by MediaStreamGraph.
-- Fix AudioBufferSourceNode's start and stop methods so that they can
throw and take default 'when' parameters (see the first sketch after
this list).
-- Create an AudioNodeStream for AudioBufferSourceNode and give it an
AudioBufferSourceNodeEngine that does what's needed. Set parameters for
this engine in the start() and stop() methods.
-- Create AudioBuffer::GetThreadSharedChannelsForRate, which is responsible
for stealing the contents of any JS array buffers, and bundling them up
into a thread-shared read-only buffer object which can be used as
part of an AudioChunk (see the second sketch after this list). This
method will also be responsible for resampling and caching as necessary.
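
Here is a minimal, hypothetical sketch of the start()/stop() error
handling described above. It is not the actual Gecko code: ErrorResult
and the error constant are simplified stand-ins, and in the real
bindings the default value for 'when' comes from the WebIDL declaration
rather than a C++ default argument.

    #include <cstdint>

    // Simplified stand-in for Gecko's ErrorResult.
    struct ErrorResult {
      void Throw(uint32_t aRv) { mRv = aRv; }
      bool Failed() const { return mRv != 0; }
      uint32_t mRv = 0;
    };

    // Illustrative error code, not the real constant's value.
    const uint32_t NS_ERROR_DOM_INVALID_STATE_ERR = 1;

    class AudioBufferSourceNode {
    public:
      // The binding layer passes when = 0 if the argument is omitted.
      void Start(double aWhen, ErrorResult& aRv) {
        if (mStarted) {            // starting twice is an invalid state
          aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
          return;
        }
        mStarted = true;
        // The real node would forward aWhen to its stream engine here.
      }

      void Stop(double aWhen, ErrorResult& aRv) {
        if (!mStarted) {           // stop() before start() is an error
          aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
          return;
        }
        // The real node would forward aWhen to its stream engine here.
      }

    private:
      bool mStarted = false;
    };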
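
And a hedged sketch of the idea behind GetThreadSharedChannelsForRate:
the sample data is moved ("stolen") into a reference-counted, immutable
object that the graph thread can read without locking. The class below
is a simplified stand-in modeled on that idea, not the real Gecko type.

    #include <atomic>
    #include <vector>

    class ThreadSharedChannelBuffer {
    public:
      // Takes ownership of the channel data stolen from the JS buffers.
      explicit ThreadSharedChannelBuffer(
          std::vector<std::vector<float>>&& aChannels)
        : mChannels(std::move(aChannels)) {}

      // Safe from any thread: the data never mutates after construction.
      const std::vector<float>& GetData(size_t aChannel) const {
        return mChannels[aChannel];
      }

      void AddRef() { ++mRefCnt; }
      void Release() { if (--mRefCnt == 0) { delete this; } }

    private:
      std::atomic<int> mRefCnt{1};
      const std::vector<std::vector<float>> mChannels;
    };

Because the object is immutable, an AudioChunk on the graph thread can
hold a reference and read from it while the main thread keeps running
JS, with lifetime managed purely through the refcount.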
--HG--
rename : content/media/MediaStreamGraph.cpp => content/media/MediaStreamGraphImpl.h
extra : rebase_source : 9fa0ec0efa304acd6513e427103d6339c78efa53
We need this in order to update the MediaStreamGraph thread when an
AudioParam changes. It enables each AudioParam to be registered with a
callback from its owner node, so that the owner node can run custom
processing code whenever that AudioParam is mutated.
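
A minimal sketch of this callback pattern, with illustrative names
rather than the real Gecko API:

    #include <functional>
    #include <utility>

    class AudioParam {
    public:
      // Callback supplied by the owning node; run on every mutation.
      using ChangedCallback = std::function<void()>;

      AudioParam(float aDefaultValue, ChangedCallback aCallback)
        : mValue(aDefaultValue), mCallback(std::move(aCallback)) {}

      void SetValue(float aValue) {
        mValue = aValue;
        if (mCallback) {
          mCallback();  // let the owner push the change to the graph
        }
      }

      float Value() const { return mValue; }

    private:
      float mValue;
      ChangedCallback mCallback;
    };

An owner node would construct its params with a lambda that forwards
the new state, e.g. AudioParam gain(1.0f, [node] {
node->SendParamsToStream(); }); where SendParamsToStream is a
hypothetical helper that updates the MediaStreamGraph thread.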
Here's what this patch does:
-- Make AudioNodes mostly not use nsWrapperCache. AudioDestinationNode
still does.
-- Rename MaxNumberOfInputs/Outputs to NumberOfInputs/Outputs, and have them
default to 1 in AudioNode (see the sketch after this list).
-- Allow any number of nodes to be connected to any given input/output port.
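
A hypothetical sketch of the renamed accessors (illustrative, not the
real Gecko declarations):

    #include <cstdint>

    class AudioNode {
    public:
      virtual ~AudioNode() = default;
      // Most nodes have exactly one input and one output port.
      virtual uint32_t NumberOfInputs() const { return 1; }
      virtual uint32_t NumberOfOutputs() const { return 1; }
    };

    class AudioDestinationNode : public AudioNode {
    public:
      // A sink: audio goes in, nothing connects downstream.
      uint32_t NumberOfOutputs() const override { return 0; }
    };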
This patch implements most of the semantics of AudioParam in a
templatized base class called AudioEventTimeline. This is done in order
to make it possible to test this code from C++ without needing to link
against libxul. Basically, we take everything that could depend on
libxul or is not suitable for a generic implementation and put it in a
traits-like type passed as a template argument. Then the test creates
mock objects that conform to the argument interfaces, and tests the
logic of AudioEventTimeline.
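
A hedged sketch of that traits pattern, with made-up method bodies; the
point is only that the template argument decouples the timeline logic
from libxul:

    #include <vector>

    template <typename ErrorResult>  // traits-like template argument
    class AudioEventTimeline {
    public:
      void SetValueAtTime(float aValue, double aTime, ErrorResult& aRv) {
        if (aTime < 0) {             // reject invalid event times
          aRv.Throw();
          return;
        }
        mEvents.push_back({aValue, aTime});
      }

    private:
      struct Event { float mValue; double mTime; };
      std::vector<Event> mEvents;
    };

    // A standalone C++ test can instantiate the template with a mock:
    struct MockErrorResult {
      void Throw() { mFailed = true; }
      bool mFailed = false;
    };

    // AudioEventTimeline<MockErrorResult> timeline;  // no libxul needed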
This is the full implementation of the AudioBuffer object. There are
two ways to create these objects from an audio context and this patch
implements only one of them.
The construction of the AudioBuffer object is a two-step process: the
object should be created with operator new first, and then
InitializeBuffers should be called on it. InitializeBuffers is
fallible, because it uses the JS API to create the underlying typed
arrays, but that's fine, since the length of the buffers comes from web
content, and we don't want to use infallible allocations for those
anyway.
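
A hypothetical sketch of this two-step pattern (the real code allocates
JS typed arrays through the JS API; std::vector stands in for them
here):

    #include <cstdint>
    #include <new>
    #include <vector>

    class AudioBuffer {
    public:
      AudioBuffer(uint32_t aLength, float aSampleRate)
        : mLength(aLength), mSampleRate(aSampleRate) {}

      // Fallible second step: returns false on allocation failure
      // instead of aborting, because the requested size comes from
      // web content.
      bool InitializeBuffers(uint32_t aChannels) {
        try {
          mChannels.resize(aChannels, std::vector<float>(mLength));
        } catch (const std::bad_alloc&) {
          return false;
        }
        return true;
      }

    private:
      uint32_t mLength;
      float mSampleRate;
      std::vector<std::vector<float>> mChannels;
    };

    // Callers check the second step:
    //   AudioBuffer* buffer = new AudioBuffer(length, rate);
    //   if (!buffer->InitializeBuffers(channels)) { delete buffer; }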
We hold on to the JS objects from the C++ implementation, and trace
through all of those objects, so that a GC does not kill those objects
without us knowing.
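
An illustrative sketch of the tracing hook; JSObjectStub and TracerStub
are stand-ins for SpiderMonkey's JSObject and JSTracer, not the real
API:

    #include <vector>

    struct JSObjectStub {};
    struct TracerStub {
      void TraceEdge(JSObjectStub** /* aEdge */, const char* /* aName */) {}
    };

    class AudioBuffer {
    public:
      // Called during GC marking so the collector sees our typed
      // arrays and keeps them alive.
      void Trace(TracerStub* aTrc) {
        for (auto& channel : mJSChannels) {
          aTrc->TraceEdge(&channel, "AudioBuffer channel data");
        }
      }

    private:
      std::vector<JSObjectStub*> mJSChannels;  // one array per channel
    };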
It should be possible to manipulate the buffer from both C++ and JS,
and the C++ object probably needs to support a set of methods for the
C++ callers at some point.
This is the bare minimum that one needs in order to get those interfaces
implemented. The work to make the simplest of Web Audio test cases
actually pass will be done in bug 792649.