I was going over Core Audio conversion services in the book Learning Core Audio and was struck by this example in its sample code:
while (1)
{
    // wrap the destination buffer in an AudioBufferList
    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1;
    convertedData.mBuffers[0].mNumberChannels = mySettings->outputFormat.mChannelsPerFrame;
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = packetsPerBuffer;

    // read from the extaudiofile
    CheckResult(ExtAudioFileRead(mySettings->inputFile,
                                 &frameCount,
                                 &convertedData),
                "Couldn't read from input file");

    if (frameCount == 0) {
        printf("done reading from file");
        return;
    }

    // write the converted data to the output file
    CheckResult(AudioFileWritePackets(mySettings->outputFile,
                                      FALSE,
                                      frameCount,
                                      NULL,
                                      outputFilePacketPosition / mySettings->outputFormat.mBytesPerPacket,
                                      &frameCount,
                                      convertedData.mBuffers[0].mData),
                "Couldn't write packets to file");

    // advance the output file write location
    outputFilePacketPosition += (frameCount * mySettings->outputFormat.mBytesPerPacket);
}
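For context, ExtAudioFileRead counts in frames, not bytes; its declaration (from ExtendedAudioFile.h) is:

OSStatus ExtAudioFileRead(ExtAudioFileRef inExtAudioFile,
                          UInt32 *ioNumberFrames,
                          AudioBufferList *ioData);

so after the read, frameCount holds the number of frames actually read.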
Notice how frameCount is initialized to packetsPerBuffer; packetsPerBuffer itself is defined here:
UInt32 outputBufferSize = 32 * 1024; // 32 KB is a good starting point
UInt32 sizePerPacket = mySettings->outputFormat.mBytesPerPacket;
UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;
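Plugging in the LPCM output format shown further below (mBytesPerPacket = 4), this works out to:

// outputBufferSize = 32 * 1024 = 32768 bytes
// sizePerPacket    = 4 bytes
// packetsPerBuffer = 32768 / 4 = 8192 packets per pass through the loop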
The part that stumped me is the call to AudioFileWritePackets. In the documentation, its third and sixth parameters are defined as:
inNumBytes
The number of bytes of audio data being written.
ioNumPackets
On input, a pointer to the number of packets to write. On output, a pointer to the number of packets actually written.
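For reference, here is the declaration from AudioFile.h, so the parameter positions are clear:

OSStatus AudioFileWritePackets(AudioFileID inAudioFile,
                               Boolean inUseCache,
                               UInt32 inNumBytes,
                               const AudioStreamPacketDescription *inPacketDescriptions,
                               SInt64 inStartingPacket,
                               UInt32 *ioNumPackets,
                               const void *inBuffer);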
Yet in the code, frameCount is passed for both parameters. How is this possible? I know that with PCM data 1 frame = 1 packet:
// define the output format. AudioConverter requires that one of the data formats be LPCM
audioConverterSettings.outputFormat.mSampleRate = 44100.0;
audioConverterSettings.outputFormat.mFormatID = kAudioFormatLinearPCM;
audioConverterSettings.outputFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioConverterSettings.outputFormat.mBytesPerPacket = 4;
audioConverterSettings.outputFormat.mFramesPerPacket = 1;
audioConverterSettings.outputFormat.mBytesPerFrame = 4;
audioConverterSettings.outputFormat.mChannelsPerFrame = 2;
audioConverterSettings.outputFormat.mBitsPerChannel = 16;
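The byte math on that format checks out:

// mBytesPerFrame   = mChannelsPerFrame * (mBitsPerChannel / 8) = 2 * 2 = 4 bytes
// mFramesPerPacket = 1, so mBytesPerPacket = mBytesPerFrame    = 4 bytes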
But that means the same LPCM format clearly states there are 4 bytes per packet (= 4 bytes per frame), so a frame count is not a byte count.
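Given the documentation, I would have expected the write call to pass an explicit byte count as the third argument, something like this (my sketch, not the book's code):

UInt32 packetCount = frameCount;  // 1 packet = 1 frame for this LPCM format
CheckResult(AudioFileWritePackets(mySettings->outputFile,
                                  FALSE,
                                  frameCount * mySettings->outputFormat.mBytesPerPacket, // bytes
                                  NULL,
                                  outputFilePacketPosition / mySettings->outputFormat.mBytesPerPacket,
                                  &packetCount,                                          // packets
                                  convertedData.mBuffers[0].mData),
            "Couldn't write packets to file");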
So how does the book's version work? (The same question applies to the other example in the same chapter, which uses AudioConverterFillComplexBuffer instead of ExtAudioFileRead and counts packets instead of frames, but it's the same situation.)