BaseAudioContext
The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces.
A BaseAudioContext can be a target of events, and therefore it implements the EventTarget interface.
Properties
BaseAudioContext.audioWorklet Read only Secure context
Returns the AudioWorklet object, which can be used to create and manage AudioNodes in which JavaScript code implementing the AudioWorkletProcessor interface is run in the background to process audio data.
BaseAudioContext.currentTime Read only
Returns a double representing an ever-increasing hardware time in seconds, used for scheduling. It starts at 0.
BaseAudioContext.destination Read only
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
BaseAudioContext.listener Read only
Returns the AudioListener object, used for 3D spatialization.
BaseAudioContext.sampleRate Read only
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample rate of an AudioContext cannot be changed.
BaseAudioContext.state Read only
Returns the current state of the AudioContext.
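As an illustrative sketch (not part of the original page), reading these properties from a freshly created context might look like this; note that in some browsers a new AudioContext starts out "suspended" until a user gesture:
const audioCtx = new AudioContext();
console.log(audioCtx.sampleRate);  // e.g. 44100, device dependent
console.log(audioCtx.currentTime); // seconds elapsed on the context's clock
console.log(audioCtx.state);       // "suspended", "running", or "closed"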
Events
statechange
Fired when the AudioContext's state changes due to a call to one of the state change methods (AudioContext.suspend, AudioContext.resume, or AudioContext.close).
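A minimal sketch of reacting to this event, assuming an existing AudioContext named audioCtx:
// Log the new state whenever a suspend/resume/close call changes it.
audioCtx.addEventListener("statechange", () => {
  console.log(`AudioContext state is now: ${audioCtx.state}`);
});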
Methods
Also implements methods from the interface EventTarget.
BaseAudioContext.createAnalyser()
Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, to create data visualisations.
BaseAudioContext.createBiquadFilter()
Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
BaseAudioContext.createBuffer()
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
BaseAudioContext.createBufferSource()
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer() or returned by AudioContext.decodeAudioData() when it successfully decodes an audio track.
BaseAudioContext.createConstantSource()
Creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.
BaseAudioContext.createChannelMerger()
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
BaseAudioContext.createChannelSplitter()
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
BaseAudioContext.createConvolver()
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
BaseAudioContext.createDelay()
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
BaseAudioContext.createDynamicsCompressor()
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
BaseAudioContext.createGain()
Creates a GainNode, which can be used to control the overall volume of the audio graph.
BaseAudioContext.createIIRFilter()
Creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter configurable as several different common filter types.
BaseAudioContext.createOscillator()
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
BaseAudioContext.createPanner()
Creates a PannerNode, which is used to spatialize an incoming audio stream in 3D space.
BaseAudioContext.createPeriodicWave()
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
BaseAudioContext.createScriptProcessor()
Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.
BaseAudioContext.createStereoPanner()
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
BaseAudioContext.createWaveShaper()
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
BaseAudioContext.decodeAudioData()
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.
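For illustration, a minimal sketch of the promise-based form of decodeAudioData, assuming a hypothetical file URL "viper.ogg":
// Fetch a complete audio file, decode it, then play it.
// "viper.ogg" is a hypothetical URL used only for illustration.
const audioCtx = new AudioContext();
fetch("viper.ogg")
  .then((response) => response.arrayBuffer())
  .then((arrayBuffer) => audioCtx.decodeAudioData(arrayBuffer))
  .then((audioBuffer) => {
    const source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start();
  });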
Examples
Basic audio context declaration:
const audioContext = new AudioContext();
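For an offline graph, a similar sketch; the OfflineAudioContext constructor takes a channel count, a length in sample-frames, and a sample rate (here, assumed values of two channels and one second of audio at 44100 Hz):
// Renders as fast as possible into an AudioBuffer instead of the speakers.
const offlineContext = new OfflineAudioContext(2, 44100, 44100);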
Cross-browser variant:
// Fall back to the prefixed constructor for older WebKit browsers.
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();
const oscillatorNode = audioContext.createOscillator();
const gainNode = audioContext.createGain();
const finish = audioContext.destination;
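The nodes above are created but not yet connected; as a sketch of how the pieces fit together, the graph could be wired up and a two-second tone scheduled against the context's currentTime clock:
// Connect the graph: oscillator -> gain -> destination (speakers).
oscillatorNode.connect(gainNode);
gainNode.connect(finish);
// Schedule start and stop times in seconds on the context's clock.
const now = audioContext.currentTime;
oscillatorNode.start(now);
oscillatorNode.stop(now + 2);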
Specifications
| Specification |
|---|
| Web Audio API # BaseAudioContext |
Browser compatibility