I'm trying to use the Web Audio API to decode and play MP3 file chunks streamed to the browser via Node.js and Socket.IO.
In this context, is my only option to create a new AudioBufferSourceNode for each audio data chunk received, or is it possible to create a single AudioBufferSourceNode for all chunks and simply append the new audio data to the end of the source node's buffer attribute?
This is how I'm currently receiving my MP3 chunks, decoding them, and scheduling them for playback. I have already verified that each chunk received is a valid MP3 chunk and is being successfully decoded by the Web Audio API.
var audioContext = new AudioContext();
var startTime = 0;

socket.on('chunk_received', function (data) {
    // toArrayBuffer() is my own helper that converts the incoming
    // chunk into an ArrayBuffer for decodeAudioData().
    audioContext.decodeAudioData(toArrayBuffer(data.audio), function (buffer) {
        var source = audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(audioContext.destination);
        source.start(startTime);
        startTime += buffer.duration;
    });
});
Any advice or insight into how best to 'update' Web Audio API playback with new audio data would be greatly appreciated.
Asked by Jonathan Byrne, translated from Stack Overflow.