AudioWorkletGlobalScope
The AudioWorkletGlobalScope interface of the Web Audio API represents a global execution context for user-supplied code, which defines custom AudioWorkletProcessor-derived classes.
Each BaseAudioContext has a single AudioWorklet available under the audioWorklet property, which runs its code in a single AudioWorkletGlobalScope.
As the global execution context is shared across the current BaseAudioContext, it's possible to define any other variables and perform any actions allowed in worklets, in addition to defining AudioWorkletProcessor-derived classes.
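For instance, module-level state defined in the scope is visible to every processor registered there. Below is a minimal sketch of this, with hypothetical names (shared-state.js, WavetableProcessor), assuming the file is loaded via audioWorklet.addModule():

// shared-state.js - a hypothetical module illustrating the shared scope.
// The processor below reads sharedWavetable because both are defined in
// the same AudioWorkletGlobalScope.
const sharedWavetable = new Float32Array(512);
for (let i = 0; i < sharedWavetable.length; i++) {
  sharedWavetable[i] = Math.sin((2 * Math.PI * i) / sharedWavetable.length);
}

class WavetableProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.phase = 0;
  }
  process(inputs, outputs, parameters) {
    // Write the shared wavetable to the first channel of the first output,
    // one sample per frame.
    const channel = outputs[0][0];
    for (let i = 0; i < channel.length; i++) {
      channel[i] = sharedWavetable[this.phase];
      this.phase = (this.phase + 1) % sharedWavetable.length;
    }
    return true;
  }
}

registerProcessor('wavetable-processor', WavetableProcessor);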
Properties
currentFrame Read only
Returns an integer that represents the ever-increasing current sample-frame of the audio block being processed. It is incremented by 128 (the size of a render quantum) after the processing of each audio block.
currentTime Read only
Returns a double that represents the ever-increasing context time of the audio block being processed. It is equal to the currentTime property of the BaseAudioContext the worklet belongs to.
sampleRate Read only
Returns a float that represents the sample rate of the associated BaseAudioContext.
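These three globals are related: dividing currentFrame by sampleRate yields the context time of the block being processed. A minimal sketch (with a hypothetical processor name) that illustrates this inside a process() method:

// relation-processor.js - hypothetical example
class RelationProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    // currentFrame / sampleRate advances in lockstep with currentTime.
    const derivedTime = currentFrame / sampleRate;
    if (currentFrame % (128 * 1000) === 0) {
      // Log only every 1000 blocks to avoid flooding the console.
      console.log(`currentTime: ${currentTime}, derived: ${derivedTime}`);
    }
    return true;
  }
}

registerProcessor('relation-processor', RelationProcessor);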
Methods
registerProcessor()
Registers a class derived from the AudioWorkletProcessor interface. The class can then be used by creating an AudioWorkletNode, providing its registered name.
Examples
In this example we output all global properties into the console in the constructor of a custom AudioWorkletProcessor.
First we need to define the processor and register it. Note that this should be done in a separate file.
// test-processor.js
class TestProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // Current sample-frame and time at the moment of instantiation.
    // To see the values change, put these two lines in the process method.
    console.log(currentFrame);
    console.log(currentTime);
  }

  // The process method is required. We output silence,
  // which the outputs are already filled with.
  process(inputs, outputs, parameters) {
    return true;
  }
}

// The sample rate is never going to change,
// because it's a read-only property of a BaseAudioContext
// and is set only during its instantiation.
console.log(sampleRate);

// You can declare any variables and use them in your processors,
// for example an ArrayBuffer with a wavetable.
const usefulVariable = 42;
console.log(usefulVariable);

registerProcessor('test-processor', TestProcessor);
Next, in our main script file we'll load the processor, create an instance of AudioWorkletNode, passing it the name of the processor, and connect the node to an audio graph. We should see the output of the console.log() calls in the console:
const audioContext = new AudioContext();
// Note: top-level await only works in a module script;
// otherwise wrap these calls in an async function.
await audioContext.audioWorklet.addModule('test-processor.js');
const testNode = new AudioWorkletNode(audioContext, 'test-processor');
testNode.connect(audioContext.destination);
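The exact output depends on the environment: the module-level logs run when addModule() evaluates the file, and the constructor's logs run when the node is created. Assuming a 44.1 kHz context that has not yet started rendering when the node is constructed (for example, while suspended by the browser's autoplay policy), the output would look something like:

44100
42
0
0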
Specifications
Web Audio API # AudioWorkletGlobalScope
Browser compatibility