BaseAudioContext.createAnalyser()
The createAnalyser() method of the BaseAudioContext interface creates an AnalyserNode, which can be used to expose audio time and frequency data and create data visualizations.
Note: The AnalyserNode() constructor is the recommended way to create an AnalyserNode; see Creating an AudioNode.
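For comparison, here is a minimal sketch of the constructor-based approach. The fields shown are standard AnalyserOptions; the values are illustrative, not required.

// Creating the node with the AnalyserNode() constructor instead of
// createAnalyser(); the options object is optional.
const audioCtx = new AudioContext();
const analyser = new AnalyserNode(audioCtx, {
  fftSize: 2048,              // FFT window size (a power of 2, 32 to 32768)
  smoothingTimeConstant: 0.8, // blend with the previous frame, 0 (none) to 1
});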
Note: For more on using this node, see the AnalyserNode page.
Syntax
const analyserNode = baseAudioContext.createAnalyser();
Returns
An AnalyserNode.
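The returned node does not modify the audio passing through it, so it can be connected in line between a source and the destination. A minimal sketch, assuming a source node (here called source) has already been created:

// The analyser observes the audio without changing it; `source` is a
// hypothetical node created elsewhere, e.g. with
// audioCtx.createMediaElementSource().
source.connect(analyserNode);
analyserNode.connect(audioCtx.destination);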
Example
The following example shows basic usage of an AudioContext to create an AnalyserNode, then uses requestAnimationFrame() to collect time domain data repeatedly and draw an "oscilloscope-style" output of the current audio input. For more complete applied examples/information, check out our Voice-change-O-matic demo (see app.js lines 128–205 for relevant code).
// Create an audio context; the webkit prefix covers older Safari releases.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const analyser = audioCtx.createAnalyser();

// ... (audio source setup and its connection to the analyser omitted)

// fftSize sets the FFT window size; frequencyBinCount is half of fftSize
// and gives the number of data values available for the visualization.
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);

// Canvas setup; the <canvas> element is assumed to exist in the page markup.
const canvas = document.querySelector('canvas');
const canvasCtx = canvas.getContext('2d');
const WIDTH = canvas.width;
const HEIGHT = canvas.height;
let drawVisual;

// Draw an oscilloscope of the current audio source.
function draw() {
  drawVisual = requestAnimationFrame(draw);

  // Copy the current waveform into dataArray (values are 0–255,
  // with 128 representing silence).
  analyser.getByteTimeDomainData(dataArray);

  canvasCtx.fillStyle = 'rgb(200, 200, 200)';
  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

  canvasCtx.lineWidth = 2;
  canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
  canvasCtx.beginPath();

  const sliceWidth = WIDTH / bufferLength;
  let x = 0;

  for (let i = 0; i < bufferLength; i++) {
    // Normalize so that 128 (silence) maps to the vertical center.
    const v = dataArray[i] / 128.0;
    const y = (v * HEIGHT) / 2;

    if (i === 0) {
      canvasCtx.moveTo(x, y);
    } else {
      canvasCtx.lineTo(x, y);
    }

    x += sliceWidth;
  }

  canvasCtx.lineTo(canvas.width, canvas.height / 2);
  canvasCtx.stroke();
}

draw();
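The analyser exposes frequency data through the same pattern. The following short sketch is not part of the original demo; it reuses analyser, dataArray, bufferLength, canvasCtx, WIDTH, and HEIGHT from above to draw a simple bar graph with getByteFrequencyData().

// A minimal frequency-bar sketch; getByteFrequencyData() fills the array
// with the current frequency spectrum (0–255 per bin).
function drawBars() {
  requestAnimationFrame(drawBars);
  analyser.getByteFrequencyData(dataArray);

  canvasCtx.fillStyle = 'rgb(200, 200, 200)';
  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

  const barWidth = WIDTH / bufferLength;
  for (let i = 0; i < bufferLength; i++) {
    // Scale each bin to the canvas height and draw it from the bottom up.
    const barHeight = (dataArray[i] / 255) * HEIGHT;
    canvasCtx.fillStyle = 'rgb(0, 0, 0)';
    canvasCtx.fillRect(i * barWidth, HEIGHT - barHeight, barWidth, barHeight);
  }
}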
Specifications
Specification
Web Audio API # dom-baseaudiocontext-createanalyser
Browser compatibility