I'd like to build a very low-level audio output node using the Web Audio API's AudioWorkletProcessor. All the examples I've found implement processors that transform input samples into output samples, but what I'd like to do is produce output only, based solely on the timestamp of each sample.
According to MDN, BaseAudioContext.currentTime is not a precise source for this timestamp:

"To offer protection against timing attacks and fingerprinting, the precision of audioCtx.currentTime might get rounded depending on browser settings. In Firefox, the privacy.reduceTimerPrecision preference is enabled by default and defaults to 20us in Firefox 59; in 60 it will be 2ms."
One hacky solution might be to use BaseAudioContext.sampleRate, a running counter, and the size of the output arrays, but that only works if we can assume every sample is computed without drops and in order, and I'm not sure those are valid assumptions.
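To make the counter idea concrete, here is a rough sketch of what I mean (the 440 Hz tone is just a placeholder, and the stubs at the top only exist so the snippet can run outside a real AudioWorkletGlobalScope, where `AudioWorkletProcessor`, `registerProcessor`, and `sampleRate` are provided by the browser):

```javascript
// Stubs for running outside an AudioWorkletGlobalScope; in a real
// worklet these globals already exist and the stubs are inert.
const AWP = globalThis.AudioWorkletProcessor ?? class {};
globalThis.sampleRate ??= 48000;

class TimedToneProcessor extends AWP {
  constructor() {
    super();
    // Running sample counter: only valid if no blocks are ever
    // dropped and process() is called strictly in order.
    this.frame = 0;
  }

  process(inputs, outputs) {
    const channel = outputs[0][0];
    for (let i = 0; i < channel.length; i++) {
      // Derive this sample's timestamp (seconds) from the counter.
      const t = (this.frame + i) / sampleRate;
      channel[i] = Math.sin(2 * Math.PI * 440 * t); // placeholder tone
    }
    this.frame += channel.length;
    return true; // keep the processor alive
  }
}

if (globalThis.registerProcessor) {
  registerProcessor('timed-tone', TimedToneProcessor);
}
```

This is the approach I'm questioning: `this.frame` is pure bookkeeping on my side, with no guarantee it stays aligned with the context's actual playback position.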
Within a processing frame, is there a reliable way to know the timestamp that corresponds to a given sample index?
question from:
https://stackoverflow.com/questions/65878106/timestamps-for-custom-output-only-audioworkletprocessor