The title pretty much sums up what I'm trying to achieve. I'm trying to use Michael Tyson's TPCircularBuffer inside a render callback while the circular buffer is being filled with incoming audio data. I want to send the audio from the render callback to the output element of a RemoteIO audio unit so I can hear it through the device speaker.
The audio is interleaved 16-bit stereo arriving in packets of 2048 frames. Here is how I set up the Audio Session:
#define kInputBus 1
#define kOutputBus 0
NSError *err = nil;
NSTimeInterval ioBufferDuration = 46;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:&err];
[session setPreferredIOBufferDuration:ioBufferDuration error:&err];
[session setActive:YES error:&err];
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output.");
AudioComponentInstanceNew(defaultOutput, &remoteIOUnit);
UInt32 flag = 0;
OSStatus status = AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kOutputBus, &flag, sizeof(flag));
size_t bytesPerSample = sizeof(AudioUnitSampleType);
AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate = 44100.00;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
streamFormat.mBytesPerPacket = bytesPerSample;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = bytesPerSample;
streamFormat.mChannelsPerFrame = 2;
streamFormat.mBitsPerChannel = bytesPerSample * 8;
streamFormat.mReserved = 0;
status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, sizeof(streamFormat));
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = render;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));
Here is where the audio data is loaded into the circular buffer and consumed in the render callback:
#define kBufferLength 2048
-(void)loadBytes:(Byte *)byteArrPtr{
TPCircularBufferProduceBytes(&buffer, byteArrPtr, kBufferLength);
}
OSStatus render(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
AUDIOIO *audio = (__bridge AUDIOIO *)inRefCon;
AudioSampleType *outSample = (AudioSampleType *)ioData->mBuffers[0].mData;
//Zero outSample
memset(outSample, 0, kBufferLength);
int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;
//Pull audio
int32_t availableBytes;
SInt16 *buffer = TPCircularBufferTail(&audio->buffer, &availableBytes);
memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
TPCircularBufferConsume(&audio->buffer, MIN(bytesToCopy, availableBytes));
return noErr;
}
Something is wrong with this setup, because I'm not getting any audio through the speakers, but I'm also not getting any errors when testing on a device. As far as I can tell, the TPCircularBuffer is being filled and read from correctly. I've followed Apple's documentation for setting up the Audio Session. I'm considering trying an AUGraph next, but I wanted to see if anyone could suggest a solution for what I'm trying to do here. Thanks!
Best Answer
For stereo (2 channels per frame), your bytes per frame and bytes per packet must be twice your sample size in bytes. The bits per channel stays the same, since it is per channel.
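To make that concrete, here is a minimal sketch of what an AudioStreamBasicDescription for interleaved 16-bit stereo PCM at 44.1 kHz could look like. This is my illustration rather than code from the question; the derived fields just follow the rule above (bytes per frame = sample size × channel count):

size_t bytesPerSample = sizeof(SInt16);                  // 2 bytes per 16-bit sample
AudioStreamBasicDescription streamFormat = {0};
streamFormat.mSampleRate       = 44100.0;
streamFormat.mFormatID         = kAudioFormatLinearPCM;
streamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
streamFormat.mChannelsPerFrame = 2;                      // stereo
streamFormat.mBitsPerChannel   = 8 * bytesPerSample;     // bits are per channel, so still 16
streamFormat.mFramesPerPacket  = 1;                      // always 1 for uncompressed PCM
streamFormat.mBytesPerFrame    = bytesPerSample * streamFormat.mChannelsPerFrame;              // 4: twice the sample size
streamFormat.mBytesPerPacket   = streamFormat.mBytesPerFrame * streamFormat.mFramesPerPacket;  // 4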
Added: if availableBytes / yourFrameSize isn't almost always as large as or larger than inNumberFrames, you won't get much continuous sound.
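One way to handle that in the callback is to copy only what the circular buffer actually holds and zero-fill the remainder, so an underrun produces silence instead of stale data. This is just a sketch built against the AUDIOIO type and buffer member from the question's code, not a verified fix:

static OSStatus render(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData)
{
    AUDIOIO *audio = (__bridge AUDIOIO *)inRefCon;
    // Size the copy from what the hardware asks for this cycle, not a fixed constant.
    UInt32 bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;

    int32_t availableBytes;
    SInt16 *sourceBuffer = TPCircularBufferTail(&audio->buffer, &availableBytes);

    UInt32 copied = MIN(bytesToCopy, (UInt32)availableBytes);
    memcpy(targetBuffer, sourceBuffer, copied);
    TPCircularBufferConsume(&audio->buffer, copied);

    // Underrun: fill the rest of the output buffer with silence rather than garbage.
    if (copied < bytesToCopy) {
        memset((char *)targetBuffer + copied, 0, bytesToCopy - copied);
    }
    return noErr;
}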
Original question: https://stackoverflow.com/questions/21032267/