I am trying to reverse the audio of an AVAsset and save it to a file. To make the problem clear, I put together a simple demo app: https://github.com/ksenia-lyagusha/AudioReverse.git
The app takes an mp4 video file from the bundle, exports it as a single m4a file to a temporary folder in the sandbox, then tries to read it back from there, reverse it, and save the resulting file.
The temporary m4a file is fine.
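For context, the export step looks roughly like this (a simplified sketch of what the demo project does with AVAssetExportSession; the method name getAudioFromVideo:completion: and the file names are placeholders, not the exact code from the repository):

- (void)getAudioFromVideo:(NSURL *)videoURL completion:(void (^)(NSURL *audioURL))completion
{
    // Export only the audio track of the asset into an .m4a file in the temporary folder.
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
    AVAssetExportSession *session =
        [[AVAssetExportSession alloc] initWithAsset:asset
                                         presetName:AVAssetExportPresetAppleM4A];
    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Audio.m4a"];
    // Remove a previous export, otherwise the session fails because the file already exists.
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
    session.outputURL = [NSURL fileURLWithPath:exportPath];
    session.outputFileType = AVFileTypeAppleM4A;
    [session exportAsynchronouslyWithCompletionHandler:^{
        if (session.status == AVAssetExportSessionStatusCompleted) {
            completion(session.outputURL);
        }
    }];
}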
The only result of my reverse step is an audio file in the Sandbox that contains white noise.
Below is the piece of code responsible for reversing the AVAsset. It is based on a related question; however, it does not work for me.
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;
AudioFileID inputAudioFile;
AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);
theErr = AudioFileOpenURL((__bridge CFURLRef)[NSURL URLWithString:inputPath], kAudioFileReadPermission, 0, &inputAudioFile);
thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
UInt32 ps = sizeof(AudioStreamBasicDescription);
AudioFileGetProperty(inputAudioFile, kAudioFilePropertyDataFormat, &ps, &theFileFormat);
UInt64 dataSize = fileDataSize;
void *theData = malloc(dataSize);
// set up output file
AudioFileID outputAudioFile;
AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 44100;
myPCMFormat.mFormatID = kAudioFormatLinearPCM;
// kAudioFormatFlagsCanonical is deprecated
myPCMFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 32;
myPCMFormat.mBytesPerPacket = (myPCMFormat.mBitsPerChannel / 8) * myPCMFormat.mChannelsPerFrame;
myPCMFormat.mBytesPerFrame = myPCMFormat.mBytesPerPacket;
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"ReverseAudio.caf"];
NSURL *outputURL = [NSURL fileURLWithPath:exportPath];
theErr = AudioFileCreateWithURL((__bridge CFURLRef)outputURL,
                                kAudioFileCAFType,
                                &myPCMFormat,
                                kAudioFileFlags_EraseFile,
                                &outputAudioFile);
// Read data into the buffer.
// If readPoint started at dataSize, bytesToRead would become 0 in the while loop
// and it would never terminate.
SInt64 readPoint = dataSize - 1;
UInt64 writePoint = 0;
while (readPoint > 0)
{
    UInt32 bytesToRead = 2;
    AudioFileReadBytes(inputAudioFile, false, readPoint, &bytesToRead, theData);
    // bytesToRead is now the amount of data actually read
    UInt32 bytesToWrite = bytesToRead;
    AudioFileWriteBytes(outputAudioFile, false, writePoint, &bytesToWrite, theData);
    // bytesToWrite is now the amount of data actually written
    writePoint += bytesToWrite;
    readPoint -= bytesToRead;
}
free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);
If I change the file type in AudioFileCreateWithURL from kAudioFileCAFType to another one, no result file is created in the Sandbox at all.
Thanks for your help.
Best Answer
You get white noise because your input and output file formats are incompatible. You have different sample rates and channel counts, and probably other differences as well. To make this work you need a common (PCM) format between the read and the write. This is a reasonable job for the new(ish) AVAudio framework: we read from the file into PCM buffers, shuffle the samples, then write from PCM back to a file. This approach is not optimised for large files, because all of the data is read into the buffers in one go, but it is enough to get you started.
You can call this method from your getAudioFromVideo completion block. Error handling is omitted for clarity.
- (void)readAudioFromURL:(NSURL *)inURL reverseToURL:(NSURL *)outURL {
    // prepare the in and out files
    AVAudioFile *inFile =
        [[AVAudioFile alloc] initForReading:inURL error:nil];
    AVAudioFormat *format = inFile.processingFormat;
    AVAudioFrameCount frameCount = (AVAudioFrameCount)inFile.length;
    NSDictionary *outSettings = @{
        AVNumberOfChannelsKey : @(format.channelCount),
        AVSampleRateKey : @(format.sampleRate)};
    AVAudioFile *outFile =
        [[AVAudioFile alloc] initForWriting:outURL
                                   settings:outSettings
                                      error:nil];
    // prepare the forward and reverse buffers
    // (forwardBuffer and reverseBuffer are AVAudioPCMBuffer properties on this class)
    self.forwardBuffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                      frameCapacity:frameCount];
    self.reverseBuffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                      frameCapacity:frameCount];
    // read the whole file into forwardBuffer
    [inFile readIntoBuffer:self.forwardBuffer error:nil];
    // set frameLength of reverseBuffer to forwardBuffer's frameLength
    AVAudioFrameCount frameLength = self.forwardBuffer.frameLength;
    self.reverseBuffer.frameLength = frameLength;
    // iterate over channels
    // stride is 1 or 2 depending on the interleave format
    NSInteger stride = self.forwardBuffer.stride;
    for (AVAudioChannelCount channelIdx = 0;
         channelIdx < self.forwardBuffer.format.channelCount;
         channelIdx++) {
        float *forwardChannelData =
            self.forwardBuffer.floatChannelData[channelIdx];
        float *reverseChannelData =
            self.reverseBuffer.floatChannelData[channelIdx];
        int32_t reverseIdx = 0;
        // iterate over samples, copying them into reverseBuffer in reverse order
        for (AVAudioFrameCount frameIdx = frameLength;
             frameIdx > 0;
             frameIdx--) {
            float sample = forwardChannelData[(frameIdx - 1) * stride];
            reverseChannelData[reverseIdx * stride] = sample;
            reverseIdx++;
        }
    }
    // write reverseBuffer to outFile
    [outFile writeFromBuffer:self.reverseBuffer error:nil];
}
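For example, the call site could look roughly like this (the bundled file name video.mp4 and the getAudioFromVideo:completion: signature are assumptions; adapt them to your project):

NSURL *videoURL = [[NSBundle mainBundle] URLForResource:@"video" withExtension:@"mp4"];
NSString *reversedPath =
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"ReversedAudio.caf"];
NSURL *reversedURL = [NSURL fileURLWithPath:reversedPath];
[self getAudioFromVideo:videoURL completion:^(NSURL *audioURL) {
    // audioURL is the temporary m4a; the reversed audio is written to ReversedAudio.caf
    [self readAudioFromURL:audioURL reverseToURL:reversedURL];
}];

The output path uses a .caf extension here to match the output of your original Core Audio attempt.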
About "ios - Cannot reverse AVAsset audio correctly, the only result is white noise", we found a similar question on Stack Overflow:
https://stackoverflow.com/questions/35581531/