If the task is capturing the audio data rendered inside an AUGraph, the critical part of the code boils down (more or less) to this simple one-channel demo example:
OSStatus MyRenderProc(void *inRefCon,
                      AudioUnitRenderActionFlags *ioActionFlags,
                      const AudioTimeStamp *inTimeStamp,
                      UInt32 inBusNumber,
                      UInt32 inNumberFrames,
                      AudioBufferList *ioData)
{
    Float32 buf[inNumberFrames]; // just for one channel!
    MyMIDIPlayer *player = (MyMIDIPlayer *)inRefCon;
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // stand-in for kAudioUnitRenderAction_PostRenderError,
        // which is missing from older SDK headers
        static const int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
        if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError)) {
            Float32 *data = (Float32 *)ioData->mBuffers[0].mData; // just for one channel!
            memcpy(buf, data, inNumberFrames * sizeof(Float32));
            // do something with buf - there are nice examples of ExtAudioFileWriteAsync()
        }
    }
    return noErr;
}
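The gating logic above depends on ioActionFlags being a bitmask: capture only in the post-render phase, and only when no render error is flagged. A minimal plain-C sketch of that test (using the flag values from the AudioUnit headers, with the post-render-error bit written out as in the callback above) looks like this:

```c
#include <stdio.h>

/* Flag values as defined in the AudioUnitRenderActionFlags enum */
enum {
    kPostRender      = (1 << 3),  /* kAudioUnitRenderAction_PostRender */
    kPostRenderError = (1 << 8)   /* kAudioUnitRenderAction_PostRenderError */
};

/* Returns 1 only when the unit has just rendered and reported no error */
static int should_capture(unsigned flags)
{
    return (flags & kPostRender) && !(flags & kPostRenderError);
}
```

Note that the callback also fires in the pre-render phase, so skipping the PostRender test would copy the buffer twice per render cycle (and the pre-render copy would contain stale data).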
In setupAUGraph(), this callback can be registered in the following way:
void setupAUGraph(MyMIDIPlayer *player)
{
    // the beginning follows the textbook example setup pattern
    {… … …}
    // this is the specific part: register a render-notify callback on the
    // instrument unit; the refCon must be the same MyMIDIPlayer * that
    // MyRenderProc casts inRefCon back to (no AURenderCallbackStruct is
    // needed here - that belongs to kAudioUnitProperty_SetRenderCallback)
    CheckError(AudioUnitAddRenderNotify(player->instrumentUnit,
                                        MyRenderProc,
                                        player),
               "AudioUnitAddRenderNotify failed");
    // now initialize the graph (causes resources to be allocated)
    CheckError(AUGraphInitialize(player->graph),
               "AUGraphInitialize failed");
}
Please note that the render callback "taps" the connection between the output of the instrument node and the input of the output node, capturing what comes from upstream. The callback simply copies ioData into another buffer, which can then be saved. As far as I know, this is the simplest way of accessing ioData that I know works, without breaking the API.
Please also note that there are very efficient plain-C methods for testing whether this works for a specific implementation - there is no need for Objective-C methods inside the callback. Fiddling with NSArrays, adding objects, etc. inside a plain-C real-time callback introduces the risk of priority issues which may later become difficult to debug. The Core Audio API is written in plain C for a reason. At the heart of the Obj-C runtime there is much that can't take place on the real-time thread without risking glitches (locks, memory management, etc.). So it is safer to keep Obj-C off the real-time thread.
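One real-time-safe way to get the copied samples off the render thread (to a thread that can safely call ExtAudioFileWriteAsync() or similar) is a lock-free single-producer/single-consumer ring buffer. The sketch below is an illustrative plain-C11 implementation, not part of the original answer; the names (RingBuffer, rb_write, rb_read) are my own, and a production version would want tuning of the capacity and overflow policy:

```c
#include <stdatomic.h>
#include <stddef.h>

#define RB_CAPACITY 4096  /* must be a power of two */

typedef struct {
    float buffer[RB_CAPACITY];
    atomic_size_t head;   /* write index, advanced by the producer only */
    atomic_size_t tail;   /* read index, advanced by the consumer only  */
} RingBuffer;

/* Called from the real-time thread: never blocks, never allocates. */
size_t rb_write(RingBuffer *rb, const float *src, size_t n)
{
    size_t head  = atomic_load_explicit(&rb->head, memory_order_relaxed);
    size_t tail  = atomic_load_explicit(&rb->tail, memory_order_acquire);
    size_t space = RB_CAPACITY - (head - tail);
    if (n > space) n = space;            /* drop samples rather than block */
    for (size_t i = 0; i < n; i++)
        rb->buffer[(head + i) & (RB_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&rb->head, head + n, memory_order_release);
    return n;  /* number of samples actually written */
}

/* Called from a non-real-time thread (e.g. the one writing the file). */
size_t rb_read(RingBuffer *rb, float *dst, size_t n)
{
    size_t tail  = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    size_t head  = atomic_load_explicit(&rb->head, memory_order_acquire);
    size_t avail = head - tail;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = rb->buffer[(tail + i) & (RB_CAPACITY - 1)];
    atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
    return n;  /* number of samples actually read */
}
```

Inside MyRenderProc the memcpy into buf would then become an rb_write() call, and the file-writing thread drains the buffer with rb_read() at its leisure. Nothing here locks, allocates, or touches the Obj-C runtime, so it is safe on the render thread.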
Hope this can help.
He who fights dragons too long becomes a dragon himself; gaze too long into the abyss, and the abyss gazes back into you…