Mirror of https://github.com/mozilla/gecko-dev.git (synced 2024-11-13)
Commit e32b404e54
Relevant spec text:
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-suspend-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-resume-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-close-Promise
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-state
- http://webaudio.github.io/web-audio-api/#widl-AudioContext-onstatechange

In a few words, the behavior we want:
- A closed context cannot have new nodes created on it, but can still do decodeAudioData, create buffers, and so on.
- OfflineAudioContexts don't support these methods; their transitions happen at startRendering and at the end of processing. onstatechange is used to make this observable.
- Regular AudioContexts support these methods. The promises should be resolved, and onstatechange called, when the operation has actually completed on the rendering thread. Once a context has been closed, it cannot transition back to "running". An AudioContext switches to "running" when the audio callback starts running; this lets authors know how long the audio stack takes to start.
- MediaStreams that feed into a suspended graph should not buffer at the graph input, and MediaStreams that come out of one should output silence.
- A suspended context should not be doing much on the CPU, and we should try to pause audio streams if we can (this behavior is the main reason we need this in the first place: saving battery on mobile, and CPU on all platforms).

Now, the implementation:
- AudioNodeStreams are now tagged with a context id, so that all the streams of a given AudioContext can be operated on from the graph thread without having to lock every time to touch the AudioContext. This happens in the AudioNodeStream ctor; ids are of course constant for the lifetime of the node.
- When an AudioContext goes into suspended mode, the streams for that AudioContext are moved out of the mStreams array into a second array, mSuspendedStreams. Streams in mSuspendedStreams are not ordered, and are not processed.
- The MSG will automatically switch to a SystemClockDriver when it finds that there are no more AudioNodeStreams/Streams with an audio track. This is how pausing the audio subsystem and saving battery works. Subsequently, when the MSG finds that there are only streams in mSuspendedStreams, it will go to sleep (block on a monitor), so we save CPU, but it does not shut itself down. This is mostly not new behavior (it is what the MSG has done since the refactoring), but it is important to note.
- Promises are gripped (AddRef-ed) on the main thread, and then shepherded down other threads and to the GraphDriver, if needed (sometimes we can resolve them right away). Because Promises are not thread safe, we can't move them around AddRef-ed: they travel between threads as void* so no methods can be called on them. The driver executes the operation, and when it's done (initializing and closing audio streams can take some time), the promise is sent back to the main thread and resolved there, cast back to Promise* after asserting we're on the main thread. This way, a promise is resolved once its operation has completed (suspending an audio stream, starting it again on resume(), etc.) without bookkeeping to match suspend calls to their results. A sketch of this pattern follows the list below.
- The stream destruction logic now takes into account that a stream can be destroyed while not being in mStreams.
- A graph can now switch GraphDriver twice or more per iteration, for example if an author calls suspend()/resume()/suspend() in the same script.
- Some operations have to be done on suspended streams, so we now use a double for-loop over mSuspendedStreams and mStreams in some places in MediaStreamGraph.cpp.
- A tricky part was making sure everything worked at AudioContext boundaries: TrackUnionStreams that have one of their input streams suspended append null ticks instead.
- The graph ordering algorithm had to be altered to not include suspended streams.
- There are some edge cases (adding a stream to a suspended graph, calling suspend/resume when a graph has just been close()d).
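The promise hand-off is the subtlest part of the patch. Below is a minimal, self-contained sketch of that pattern, not the Gecko code itself: the Promise type, IsMainThread(), and the function names are stand-ins invented for illustration.

#include <cassert>
#include <cstdio>
#include <thread>

// Stand-in for mozilla::dom::Promise; only the main thread may touch it.
struct Promise {
  void MaybeResolve() { std::printf("resolved on the main thread\n"); }
};

static std::thread::id gMainThreadId;  // stand-in for NS_IsMainThread()
static bool IsMainThread() { return std::this_thread::get_id() == gMainThreadId; }

// Graph-thread code never calls methods on the promise: it only carries the
// opaque pointer along and hands it back once the operation has completed.
static void* ApplyOperationOnGraphThread(void* aPromise) {
  // ... suspend/resume/close the audio stream here ...
  return aPromise;
}

// Back on the main thread, the pointer is cast back and the promise resolved.
static void ResolveOnMainThread(void* aPromise) {
  assert(IsMainThread());
  static_cast<Promise*>(aPromise)->MaybeResolve();
}

int main() {
  gMainThreadId = std::this_thread::get_id();
  Promise promise;            // gripped (kept alive) by the main thread
  void* opaque = &promise;
  std::thread graph([&] { opaque = ApplyOperationOnGraphThread(opaque); });
  graph.join();               // stands in for dispatching a runnable back
  ResolveOnMainThread(opaque);
  return 0;
}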
1036 lines · 28 KiB · C++
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim:set ts=2 sw=2 sts=2 et cindent: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#include "AudioContext.h"

#include "nsPIDOMWindow.h"
#include "mozilla/ErrorResult.h"
#include "mozilla/dom/AnalyserNode.h"
#include "mozilla/dom/HTMLMediaElement.h"
#include "mozilla/dom/AudioContextBinding.h"
#include "mozilla/dom/OfflineAudioContextBinding.h"
#include "mozilla/dom/OwningNonNull.h"
#include "MediaStreamGraph.h"
#include "AudioChannelService.h"
#include "AudioDestinationNode.h"
#include "AudioBufferSourceNode.h"
#include "AudioBuffer.h"
#include "GainNode.h"
#include "MediaElementAudioSourceNode.h"
#include "MediaStreamAudioSourceNode.h"
#include "DelayNode.h"
#include "PannerNode.h"
#include "AudioListener.h"
#include "DynamicsCompressorNode.h"
#include "BiquadFilterNode.h"
#include "ScriptProcessorNode.h"
#include "StereoPannerNode.h"
#include "ChannelMergerNode.h"
#include "ChannelSplitterNode.h"
#include "MediaStreamAudioDestinationNode.h"
#include "WaveShaperNode.h"
#include "PeriodicWave.h"
#include "ConvolverNode.h"
#include "OscillatorNode.h"
#include "nsNetUtil.h"
#include "AudioStream.h"
#include "mozilla/dom/Promise.h"
namespace mozilla {
namespace dom {

// 0 is a special value that MediaStreams use to denote they are not part of
// an AudioContext.
static dom::AudioContext::AudioContextId gAudioContextId = 1;
NS_IMPL_CYCLE_COLLECTION_CLASS(AudioContext)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(AudioContext)
  NS_IMPL_CYCLE_COLLECTION_UNLINK(mDestination)
  NS_IMPL_CYCLE_COLLECTION_UNLINK(mListener)
  if (!tmp->mIsStarted) {
    NS_IMPL_CYCLE_COLLECTION_UNLINK(mActiveNodes)
  }
NS_IMPL_CYCLE_COLLECTION_UNLINK_END_INHERITED(DOMEventTargetHelper)

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN_INHERITED(AudioContext,
                                                  DOMEventTargetHelper)
  NS_IMPL_CYCLE_COLLECTION_TRAVERSE(mDestination)
  NS_IMPL_CYCLE_COLLECTION_TRAVERSE(mListener)
  if (!tmp->mIsStarted) {
    MOZ_ASSERT(tmp->mIsOffline,
               "Online AudioContexts should always be started");
    NS_IMPL_CYCLE_COLLECTION_TRAVERSE(mActiveNodes)
  }
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

NS_IMPL_ADDREF_INHERITED(AudioContext, DOMEventTargetHelper)
NS_IMPL_RELEASE_INHERITED(AudioContext, DOMEventTargetHelper)
NS_INTERFACE_MAP_BEGIN_CYCLE_COLLECTION_INHERITED(AudioContext)
NS_INTERFACE_MAP_END_INHERITING(DOMEventTargetHelper)
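// An offline context renders at the sample rate the author asked for; a
// real-time context ignores the argument and uses the hardware's preferred
// rate instead.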
static float GetSampleRateForAudioContext(bool aIsOffline, float aSampleRate)
{
  if (aIsOffline) {
    return aSampleRate;
  } else {
    CubebUtils::InitPreferredSampleRate();
    return static_cast<float>(CubebUtils::PreferredSampleRate());
  }
}
AudioContext::AudioContext(nsPIDOMWindow* aWindow,
                           bool aIsOffline,
                           AudioChannel aChannel,
                           uint32_t aNumberOfChannels,
                           uint32_t aLength,
                           float aSampleRate)
  : DOMEventTargetHelper(aWindow)
  , mId(gAudioContextId++)
  , mSampleRate(GetSampleRateForAudioContext(aIsOffline, aSampleRate))
  , mAudioContextState(AudioContextState::Suspended)
  , mNumberOfChannels(aNumberOfChannels)
  , mNodeCount(0)
  , mIsOffline(aIsOffline)
  , mIsStarted(!aIsOffline)
  , mIsShutDown(false)
  , mCloseCalled(false)
{
  aWindow->AddAudioContext(this);

  // Note: AudioDestinationNode needs an AudioContext that must already be
  // bound to the window.
  mDestination = new AudioDestinationNode(this, aIsOffline, aChannel,
                                          aNumberOfChannels, aLength,
                                          aSampleRate);
  // We skip calling SetIsOnlyNodeForContext and the creation of the
  // audioChannelAgent during mDestination's constructor, because we can only
  // call them after mDestination has been set up.
  mDestination->CreateAudioChannelAgent();
  mDestination->SetIsOnlyNodeForContext(true);
}
AudioContext::~AudioContext()
{
  nsPIDOMWindow* window = GetOwner();
  if (window) {
    window->RemoveAudioContext(this);
  }

  UnregisterWeakMemoryReporter(this);
}

JSObject*
AudioContext::WrapObject(JSContext* aCx, JS::Handle<JSObject*> aGivenProto)
{
  if (mIsOffline) {
    return OfflineAudioContextBinding::Wrap(aCx, this, aGivenProto);
  } else {
    return AudioContextBinding::Wrap(aCx, this, aGivenProto);
  }
}
/* static */ already_AddRefed<AudioContext>
AudioContext::Constructor(const GlobalObject& aGlobal,
                          ErrorResult& aRv)
{
  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aGlobal.GetAsSupports());
  if (!window) {
    aRv.Throw(NS_ERROR_FAILURE);
    return nullptr;
  }

  nsRefPtr<AudioContext> object =
    new AudioContext(window, false,
                     AudioChannelService::GetDefaultAudioChannel());

  RegisterWeakMemoryReporter(object);

  return object.forget();
}

/* static */ already_AddRefed<AudioContext>
AudioContext::Constructor(const GlobalObject& aGlobal,
                          AudioChannel aChannel,
                          ErrorResult& aRv)
{
  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aGlobal.GetAsSupports());
  if (!window) {
    aRv.Throw(NS_ERROR_FAILURE);
    return nullptr;
  }

  nsRefPtr<AudioContext> object = new AudioContext(window, false, aChannel);

  RegisterWeakMemoryReporter(object);

  return object.forget();
}

/* static */ already_AddRefed<AudioContext>
AudioContext::Constructor(const GlobalObject& aGlobal,
                          uint32_t aNumberOfChannels,
                          uint32_t aLength,
                          float aSampleRate,
                          ErrorResult& aRv)
{
  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aGlobal.GetAsSupports());
  if (!window) {
    aRv.Throw(NS_ERROR_FAILURE);
    return nullptr;
  }

  if (aNumberOfChannels == 0 ||
      aNumberOfChannels > WebAudioUtils::MaxChannelCount ||
      aLength == 0 ||
      aSampleRate < WebAudioUtils::MinSampleRate ||
      aSampleRate > WebAudioUtils::MaxSampleRate) {
    // The DOM binding protects us against infinity and NaN
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }

  nsRefPtr<AudioContext> object = new AudioContext(window,
                                                   true,
                                                   AudioChannel::Normal,
                                                   aNumberOfChannels,
                                                   aLength,
                                                   aSampleRate);

  RegisterWeakMemoryReporter(object);

  return object.forget();
}
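// A closed context cannot have new nodes created on it (see the commit
// message): every node factory method below bails out through this check,
// throwing InvalidStateError.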
bool AudioContext::CheckClosed(ErrorResult& aRv)
{
  if (mAudioContextState == AudioContextState::Closed) {
    aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
    return true;
  }
  return false;
}
already_AddRefed<AudioBufferSourceNode>
AudioContext::CreateBufferSource(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<AudioBufferSourceNode> bufferNode =
    new AudioBufferSourceNode(this);
  return bufferNode.forget();
}

already_AddRefed<AudioBuffer>
AudioContext::CreateBuffer(JSContext* aJSContext, uint32_t aNumberOfChannels,
                           uint32_t aLength, float aSampleRate,
                           ErrorResult& aRv)
{
  if (!aNumberOfChannels) {
    aRv.Throw(NS_ERROR_DOM_INDEX_SIZE_ERR);
    return nullptr;
  }

  return AudioBuffer::Create(this, aNumberOfChannels, aLength,
                             aSampleRate, aJSContext, aRv);
}
namespace {

bool IsValidBufferSize(uint32_t aBufferSize) {
  switch (aBufferSize) {
  case 0: // let the implementation choose the buffer size
  case 256:
  case 512:
  case 1024:
  case 2048:
  case 4096:
  case 8192:
  case 16384:
    return true;
  default:
    return false;
  }
}

} // namespace
already_AddRefed<MediaStreamAudioDestinationNode>
AudioContext::CreateMediaStreamDestination(ErrorResult& aRv)
{
  if (mIsOffline) {
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<MediaStreamAudioDestinationNode> node =
    new MediaStreamAudioDestinationNode(this);
  return node.forget();
}

already_AddRefed<ScriptProcessorNode>
AudioContext::CreateScriptProcessor(uint32_t aBufferSize,
                                    uint32_t aNumberOfInputChannels,
                                    uint32_t aNumberOfOutputChannels,
                                    ErrorResult& aRv)
{
  if ((aNumberOfInputChannels == 0 && aNumberOfOutputChannels == 0) ||
      aNumberOfInputChannels > WebAudioUtils::MaxChannelCount ||
      aNumberOfOutputChannels > WebAudioUtils::MaxChannelCount ||
      !IsValidBufferSize(aBufferSize)) {
    aRv.Throw(NS_ERROR_DOM_INDEX_SIZE_ERR);
    return nullptr;
  }

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<ScriptProcessorNode> scriptProcessor =
    new ScriptProcessorNode(this, aBufferSize, aNumberOfInputChannels,
                            aNumberOfOutputChannels);
  return scriptProcessor.forget();
}
already_AddRefed<AnalyserNode>
AudioContext::CreateAnalyser(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<AnalyserNode> analyserNode = new AnalyserNode(this);
  return analyserNode.forget();
}

already_AddRefed<StereoPannerNode>
AudioContext::CreateStereoPanner(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<StereoPannerNode> stereoPannerNode = new StereoPannerNode(this);
  return stereoPannerNode.forget();
}

already_AddRefed<MediaElementAudioSourceNode>
AudioContext::CreateMediaElementSource(HTMLMediaElement& aMediaElement,
                                       ErrorResult& aRv)
{
  if (mIsOffline) {
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }
#ifdef MOZ_EME
  if (aMediaElement.ContainsRestrictedContent()) {
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }
#endif

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<DOMMediaStream> stream =
    aMediaElement.MozCaptureStream(aRv, mDestination->Stream()->Graph());
  if (aRv.Failed()) {
    return nullptr;
  }
  nsRefPtr<MediaElementAudioSourceNode> mediaElementAudioSourceNode =
    new MediaElementAudioSourceNode(this, stream);
  return mediaElementAudioSourceNode.forget();
}

already_AddRefed<MediaStreamAudioSourceNode>
AudioContext::CreateMediaStreamSource(DOMMediaStream& aMediaStream,
                                      ErrorResult& aRv)
{
  if (mIsOffline) {
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<MediaStreamAudioSourceNode> mediaStreamAudioSourceNode =
    new MediaStreamAudioSourceNode(this, &aMediaStream);
  return mediaStreamAudioSourceNode.forget();
}

already_AddRefed<GainNode>
AudioContext::CreateGain(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<GainNode> gainNode = new GainNode(this);
  return gainNode.forget();
}

already_AddRefed<WaveShaperNode>
AudioContext::CreateWaveShaper(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<WaveShaperNode> waveShaperNode = new WaveShaperNode(this);
  return waveShaperNode.forget();
}

already_AddRefed<DelayNode>
AudioContext::CreateDelay(double aMaxDelayTime, ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  if (aMaxDelayTime > 0. && aMaxDelayTime < 180.) {
    nsRefPtr<DelayNode> delayNode = new DelayNode(this, aMaxDelayTime);
    return delayNode.forget();
  }

  aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
  return nullptr;
}

already_AddRefed<PannerNode>
AudioContext::CreatePanner(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<PannerNode> pannerNode = new PannerNode(this);
  mPannerNodes.PutEntry(pannerNode);
  return pannerNode.forget();
}

already_AddRefed<ConvolverNode>
AudioContext::CreateConvolver(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<ConvolverNode> convolverNode = new ConvolverNode(this);
  return convolverNode.forget();
}

already_AddRefed<ChannelSplitterNode>
AudioContext::CreateChannelSplitter(uint32_t aNumberOfOutputs, ErrorResult& aRv)
{
  if (aNumberOfOutputs == 0 ||
      aNumberOfOutputs > WebAudioUtils::MaxChannelCount) {
    aRv.Throw(NS_ERROR_DOM_INDEX_SIZE_ERR);
    return nullptr;
  }

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<ChannelSplitterNode> splitterNode =
    new ChannelSplitterNode(this, aNumberOfOutputs);
  return splitterNode.forget();
}

already_AddRefed<ChannelMergerNode>
AudioContext::CreateChannelMerger(uint32_t aNumberOfInputs, ErrorResult& aRv)
{
  if (aNumberOfInputs == 0 ||
      aNumberOfInputs > WebAudioUtils::MaxChannelCount) {
    aRv.Throw(NS_ERROR_DOM_INDEX_SIZE_ERR);
    return nullptr;
  }

  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<ChannelMergerNode> mergerNode =
    new ChannelMergerNode(this, aNumberOfInputs);
  return mergerNode.forget();
}

already_AddRefed<DynamicsCompressorNode>
AudioContext::CreateDynamicsCompressor(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<DynamicsCompressorNode> compressorNode =
    new DynamicsCompressorNode(this);
  return compressorNode.forget();
}

already_AddRefed<BiquadFilterNode>
AudioContext::CreateBiquadFilter(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<BiquadFilterNode> filterNode =
    new BiquadFilterNode(this);
  return filterNode.forget();
}

already_AddRefed<OscillatorNode>
AudioContext::CreateOscillator(ErrorResult& aRv)
{
  if (CheckClosed(aRv)) {
    return nullptr;
  }

  nsRefPtr<OscillatorNode> oscillatorNode =
    new OscillatorNode(this);
  return oscillatorNode.forget();
}

already_AddRefed<PeriodicWave>
AudioContext::CreatePeriodicWave(const Float32Array& aRealData,
                                 const Float32Array& aImagData,
                                 ErrorResult& aRv)
{
  aRealData.ComputeLengthAndData();
  aImagData.ComputeLengthAndData();

  if (aRealData.Length() != aImagData.Length() ||
      aRealData.Length() == 0 ||
      aRealData.Length() > 4096) {
    aRv.Throw(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return nullptr;
  }

  nsRefPtr<PeriodicWave> periodicWave =
    new PeriodicWave(this, aRealData.Data(), aImagData.Data(),
                     aImagData.Length(), aRv);
  if (aRv.Failed()) {
    return nullptr;
  }
  return periodicWave.forget();
}

AudioListener*
AudioContext::Listener()
{
  if (!mListener) {
    mListener = new AudioListener(this);
  }
  return mListener;
}
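// decodeAudioData() is still allowed on a closed context. It takes ownership
// of the ArrayBuffer's contents (neutering it) and decodes asynchronously,
// resolving the returned promise and/or invoking the callbacks when done.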
already_AddRefed<Promise>
AudioContext::DecodeAudioData(const ArrayBuffer& aBuffer,
                              const Optional<OwningNonNull<DecodeSuccessCallback> >& aSuccessCallback,
                              const Optional<OwningNonNull<DecodeErrorCallback> >& aFailureCallback,
                              ErrorResult& aRv)
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
  nsRefPtr<Promise> promise;
  AutoJSAPI jsapi;
  jsapi.Init();
  JSContext* cx = jsapi.cx();
  JSAutoCompartment ac(cx, aBuffer.Obj());

  promise = Promise::Create(parentObject, aRv);
  if (aRv.Failed()) {
    return nullptr;
  }

  aBuffer.ComputeLengthAndData();

  // Neuter the array buffer
  size_t length = aBuffer.Length();
  JS::RootedObject obj(cx, aBuffer.Obj());

  uint8_t* data = static_cast<uint8_t*>(JS_StealArrayBufferContents(cx, obj));

  // Sniff the content of the media.
  // Failed type sniffing will be handled by AsyncDecodeWebAudio.
  nsAutoCString contentType;
  NS_SniffContent(NS_DATA_SNIFFER_CATEGORY, nullptr, data, length, contentType);

  nsRefPtr<DecodeErrorCallback> failureCallback;
  nsRefPtr<DecodeSuccessCallback> successCallback;
  if (aFailureCallback.WasPassed()) {
    failureCallback = &aFailureCallback.Value();
  }
  if (aSuccessCallback.WasPassed()) {
    successCallback = &aSuccessCallback.Value();
  }
  nsRefPtr<WebAudioDecodeJob> job(
    new WebAudioDecodeJob(contentType, this,
                          promise, successCallback, failureCallback));
  AsyncDecodeWebAudio(contentType.get(), data, length, *job);
  // Transfer the ownership to mDecodeJobs
  mDecodeJobs.AppendElement(job.forget());

  return promise.forget();
}

void
AudioContext::RemoveFromDecodeQueue(WebAudioDecodeJob* aDecodeJob)
{
  mDecodeJobs.RemoveElement(aDecodeJob);
}
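// mActiveNodes keeps currently-playing nodes alive even if script has dropped
// its references to them; the set is released in Shutdown() below.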
void
AudioContext::RegisterActiveNode(AudioNode* aNode)
{
  if (!mIsShutDown) {
    mActiveNodes.PutEntry(aNode);
  }
}

void
AudioContext::UnregisterActiveNode(AudioNode* aNode)
{
  mActiveNodes.RemoveEntry(aNode);
}

void
AudioContext::UnregisterAudioBufferSourceNode(AudioBufferSourceNode* aNode)
{
  UpdatePannerSource();
}

void
AudioContext::UnregisterPannerNode(PannerNode* aNode)
{
  mPannerNodes.RemoveEntry(aNode);
  if (mListener) {
    mListener->UnregisterPannerNode(aNode);
  }
}

static PLDHashOperator
FindConnectedSourcesOn(nsPtrHashKey<PannerNode>* aEntry, void* aData)
{
  aEntry->GetKey()->FindConnectedSources();
  return PL_DHASH_NEXT;
}

void
AudioContext::UpdatePannerSource()
{
  mPannerNodes.EnumerateEntries(FindConnectedSourcesOn, nullptr);
}

uint32_t
AudioContext::MaxChannelCount() const
{
  return mIsOffline ? mNumberOfChannels : CubebUtils::MaxNumberOfChannels();
}

MediaStreamGraph*
AudioContext::Graph() const
{
  return Destination()->Stream()->Graph();
}

MediaStream*
AudioContext::DestinationStream() const
{
  if (Destination()) {
    return Destination()->Stream();
  }
  return nullptr;
}

double
AudioContext::CurrentTime() const
{
  MediaStream* stream = Destination()->Stream();
  return StreamTimeToDOMTime(stream->
                             StreamTimeToSeconds(stream->GetCurrentTime()));
}
void
AudioContext::Shutdown()
{
  mIsShutDown = true;

  // We mute rather than suspending, because the delay between the ::Shutdown
  // call and the CC would make us overbuffer in the MediaStreamGraph.
  // See bug 936784 for details.
  if (!mIsOffline) {
    Mute();
  }

  // Release references to active nodes.
  // Active AudioNodes don't unregister in destructors, at which point the
  // Node is already unregistered.
  mActiveNodes.Clear();

  // For offline contexts, we can destroy the MediaStreamGraph at this point.
  if (mIsOffline && mDestination) {
    mDestination->OfflineShutdown();
  }
}

AudioContextState AudioContext::State() const
{
  return mAudioContextState;
}
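// StateChangeTask shepherds the promise (as an opaque void*, since Promise is
// not thread safe) from the graph thread back to the main thread, where
// OnStateChanged() casts it back to a Promise* and resolves it.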
StateChangeTask::StateChangeTask(AudioContext* aAudioContext,
                                 void* aPromise,
                                 AudioContextState aNewState)
  : mAudioContext(aAudioContext)
  , mPromise(aPromise)
  , mAudioNodeStream(nullptr)
  , mNewState(aNewState)
{
  MOZ_ASSERT(NS_IsMainThread(),
             "This constructor should be used from the main thread.");
}

StateChangeTask::StateChangeTask(AudioNodeStream* aStream,
                                 void* aPromise,
                                 AudioContextState aNewState)
  : mAudioContext(nullptr)
  , mPromise(aPromise)
  , mAudioNodeStream(aStream)
  , mNewState(aNewState)
{
  MOZ_ASSERT(!NS_IsMainThread(),
             "This constructor should be used from the graph thread.");
}

NS_IMETHODIMP
StateChangeTask::Run()
{
  MOZ_ASSERT(NS_IsMainThread());

  if (!mAudioContext && !mAudioNodeStream) {
    return NS_OK;
  }
  if (mAudioNodeStream) {
    AudioNode* node = mAudioNodeStream->Engine()->NodeMainThread();
    if (!node) {
      return NS_OK;
    }
    mAudioContext = node->Context();
    if (!mAudioContext) {
      return NS_OK;
    }
  }

  mAudioContext->OnStateChanged(mPromise, mNewState);
  // We can't call Release() on the AudioContext on the MSG thread, so we
  // unref it here, on the main thread.
  mAudioContext = nullptr;

  return NS_OK;
}
/* This runnable fires the "statechange" event */
class OnStateChangeTask final : public nsRunnable
{
public:
  explicit OnStateChangeTask(AudioContext* aAudioContext)
    : mAudioContext(aAudioContext)
  {}

  NS_IMETHODIMP
  Run() override
  {
    nsCOMPtr<nsPIDOMWindow> parent = do_QueryInterface(mAudioContext->GetParentObject());
    if (!parent) {
      return NS_ERROR_FAILURE;
    }

    nsIDocument* doc = parent->GetExtantDoc();
    if (!doc) {
      return NS_ERROR_FAILURE;
    }

    return nsContentUtils::DispatchTrustedEvent(doc,
                                                static_cast<DOMEventTargetHelper*>(mAudioContext),
                                                NS_LITERAL_STRING("statechange"),
                                                false, false);
  }

private:
  nsRefPtr<AudioContext> mAudioContext;
};
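// The only legal transitions, enforced by the first assertion below, are:
// suspended -> running, running -> suspended, running -> closed,
// suspended -> closed, and self-transitions (which fire no "statechange").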
void
AudioContext::OnStateChanged(void* aPromise, AudioContextState aNewState)
{
  MOZ_ASSERT(NS_IsMainThread());

  MOZ_ASSERT((mAudioContextState == AudioContextState::Suspended &&
              aNewState == AudioContextState::Running) ||
             (mAudioContextState == AudioContextState::Running &&
              aNewState == AudioContextState::Suspended) ||
             (mAudioContextState == AudioContextState::Running &&
              aNewState == AudioContextState::Closed) ||
             (mAudioContextState == AudioContextState::Suspended &&
              aNewState == AudioContextState::Closed) ||
             (mAudioContextState == aNewState),
             "Invalid AudioContextState transition");

  MOZ_ASSERT(
    mIsOffline || aPromise || aNewState == AudioContextState::Running,
    "We should have a promise here if this is a real-time AudioContext, "
    "or this is the first time we switch to \"running\".");

  if (aPromise) {
    Promise* promise = reinterpret_cast<Promise*>(aPromise);
    promise->MaybeResolve(JS::UndefinedHandleValue);
    DebugOnly<bool> rv = mPromiseGripArray.RemoveElement(promise);
    MOZ_ASSERT(rv, "Promise wasn't in the grip array?");
  }

  if (mAudioContextState != aNewState) {
    nsRefPtr<OnStateChangeTask> onStateChangeTask =
      new OnStateChangeTask(this);
    NS_DispatchToMainThread(onStateChangeTask);
  }

  mAudioContextState = aNewState;
}
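// suspend() and resume() below follow the same shape: reject for offline
// contexts and for closed/closing contexts, resolve immediately when already
// in the requested state, and otherwise grip the promise in
// mPromiseGripArray and ship the operation to the graph thread via
// ApplyAudioContextOperation(); OnStateChanged() resolves it later.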
already_AddRefed<Promise>
AudioContext::Suspend(ErrorResult& aRv)
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
  nsRefPtr<Promise> promise;
  promise = Promise::Create(parentObject, aRv);
  if (aRv.Failed()) {
    return nullptr;
  }
  if (mIsOffline) {
    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return promise.forget();
  }

  if (mAudioContextState == AudioContextState::Closed ||
      mCloseCalled) {
    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
    return promise.forget();
  }

  if (mAudioContextState == AudioContextState::Suspended) {
    promise->MaybeResolve(JS::UndefinedHandleValue);
    return promise.forget();
  }

  MediaStream* ds = DestinationStream();
  if (ds) {
    ds->BlockStreamIfNeeded();
  }

  mPromiseGripArray.AppendElement(promise);
  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
                                      AudioContextOperation::Suspend, promise);

  return promise.forget();
}

already_AddRefed<Promise>
AudioContext::Resume(ErrorResult& aRv)
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
  nsRefPtr<Promise> promise;
  promise = Promise::Create(parentObject, aRv);
  if (aRv.Failed()) {
    return nullptr;
  }

  if (mIsOffline) {
    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return promise.forget();
  }

  if (mAudioContextState == AudioContextState::Closed ||
      mCloseCalled) {
    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
    return promise.forget();
  }

  if (mAudioContextState == AudioContextState::Running) {
    promise->MaybeResolve(JS::UndefinedHandleValue);
    return promise.forget();
  }

  MediaStream* ds = DestinationStream();
  if (ds) {
    ds->UnblockStreamIfNeeded();
  }

  mPromiseGripArray.AppendElement(promise);
  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
                                      AudioContextOperation::Resume, promise);

  return promise.forget();
}
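// close() is irreversible. mCloseCalled is set right away, before the
// operation completes on the graph thread, so that a suspend() or resume()
// racing with a pending close() is rejected with InvalidStateError by the
// checks above even though the state has not flipped to Closed yet.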
already_AddRefed<Promise>
AudioContext::Close(ErrorResult& aRv)
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
  nsRefPtr<Promise> promise;
  promise = Promise::Create(parentObject, aRv);
  if (aRv.Failed()) {
    return nullptr;
  }

  if (mIsOffline) {
    promise->MaybeReject(NS_ERROR_DOM_NOT_SUPPORTED_ERR);
    return promise.forget();
  }

  if (mAudioContextState == AudioContextState::Closed) {
    promise->MaybeReject(NS_ERROR_DOM_INVALID_STATE_ERR);
    return promise.forget();
  }

  mCloseCalled = true;

  mPromiseGripArray.AppendElement(promise);
  Graph()->ApplyAudioContextOperation(DestinationStream()->AsAudioNodeStream(),
                                      AudioContextOperation::Close, promise);

  MediaStream* ds = DestinationStream();
  if (ds) {
    ds->BlockStreamIfNeeded();
  }

  return promise.forget();
}
void
AudioContext::UpdateNodeCount(int32_t aDelta)
{
  bool firstNode = mNodeCount == 0;
  mNodeCount += aDelta;
  MOZ_ASSERT(mNodeCount >= 0);
  // mDestinationNode may be null when we're destroying nodes unlinked by CC
  if (!firstNode && mDestination) {
    mDestination->SetIsOnlyNodeForContext(mNodeCount == 1);
  }
}

JSObject*
AudioContext::GetGlobalJSObject() const
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());
  if (!parentObject) {
    return nullptr;
  }

  // This can also return null.
  return parentObject->GetGlobalJSObject();
}
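// For an OfflineAudioContext this is where the state machine starts: the
// context flips to "running" here and, per the commit message, flips again
// at the end of processing, with "statechange" firing for both transitions.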
already_AddRefed<Promise>
AudioContext::StartRendering(ErrorResult& aRv)
{
  nsCOMPtr<nsIGlobalObject> parentObject = do_QueryInterface(GetParentObject());

  MOZ_ASSERT(mIsOffline, "This should only be called on OfflineAudioContext");
  if (mIsStarted) {
    aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
    return nullptr;
  }

  mIsStarted = true;
  nsRefPtr<Promise> promise = Promise::Create(parentObject, aRv);
  mDestination->StartRendering(promise);

  OnStateChanged(nullptr, AudioContextState::Running);

  return promise.forget();
}

void
AudioContext::Mute() const
{
  MOZ_ASSERT(!mIsOffline);
  if (mDestination) {
    mDestination->Mute();
  }
}

void
AudioContext::Unmute() const
{
  MOZ_ASSERT(!mIsOffline);
  if (mDestination) {
    mDestination->Unmute();
  }
}

AudioChannel
AudioContext::MozAudioChannelType() const
{
  return mDestination->MozAudioChannelType();
}

AudioChannel
AudioContext::TestAudioChannelInAudioNodeStream()
{
  MediaStream* stream = mDestination->Stream();
  MOZ_ASSERT(stream);

  return stream->AudioChannelType();
}

size_t
AudioContext::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf) const
{
  // AudioNodes are measured separately: the AudioContext does not keep a
  // list of all the AudioNodes it creates, so we cannot traverse them from
  // here.

  size_t amount = aMallocSizeOf(this);
  if (mListener) {
    amount += mListener->SizeOfIncludingThis(aMallocSizeOf);
  }
  amount += mDecodeJobs.SizeOfExcludingThis(aMallocSizeOf);
  for (uint32_t i = 0; i < mDecodeJobs.Length(); ++i) {
    amount += mDecodeJobs[i]->SizeOfIncludingThis(aMallocSizeOf);
  }
  amount += mActiveNodes.SizeOfExcludingThis(nullptr, aMallocSizeOf);
  amount += mPannerNodes.SizeOfExcludingThis(nullptr, aMallocSizeOf);
  return amount;
}

NS_IMETHODIMP
AudioContext::CollectReports(nsIHandleReportCallback* aHandleReport,
                             nsISupports* aData, bool aAnonymize)
{
  int64_t amount = SizeOfIncludingThis(MallocSizeOf);
  return MOZ_COLLECT_REPORT("explicit/webaudio/audiocontext", KIND_HEAP, UNITS_BYTES,
                            amount, "Memory used by AudioContext objects (Web Audio).");
}

double
AudioContext::ExtraCurrentTime() const
{
  return mDestination->ExtraCurrentTime();
}

} // namespace dom
} // namespace mozilla