Open-source project (OpenSource Name): syedhali/EZAudio
Repository (OpenSource Url): https://github.com/syedhali/EZAudio
Language (OpenSource Language): Objective-C 97.9%

EZAudio

A simple, intuitive audio framework for iOS and OSX.

Deprecated

EZAudio has recently been deprecated in favor of AudioKit. However, since some people are still forking and using EZAudio, I've decided to restore the README as it was. Check out the note below.

Apps Using EZAudio

I'd really like to start creating a list of projects made using EZAudio. If you've used EZAudio to make something cool, whether it's an app, an open source visualization, or whatever, please email me at syedhali07[at]gmail.com and I'll add it to our wall of fame! To start it off:
Features

Awesome Components

I've designed six audio components and two interface components to allow you to immediately get your hands dirty recording, playing, and visualizing audio data. These components simply plug into each other and build on top of the high-performance, low-latency AudioUnits API, giving you an easy to use API written in Objective-C instead of pure C.

EZAudioDevice
A useful class for getting all the current and available inputs/outputs on any Apple device.

EZMicrophone
A microphone class that provides its delegate audio data from the default device microphone with one line of code.

EZOutput
An output class that will playback any audio it is provided by its datasource.

EZAudioFile
An audio file class that reads/seeks through audio files and provides useful delegate callbacks.

EZAudioPlayer
A replacement for AVAudioPlayer that combines an EZAudioFile and an EZOutput for robust file playback.

EZRecorder
A recorder class that provides a quick and easy way to write audio files from any datasource.

EZAudioPlot
A Core Graphics-based audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

EZAudioPlotGL
An OpenGL-based, GPU-accelerated audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

Cross Platform

EZAudio was designed to work transparently across iOS and OSX, so you write against one universal API whether you're building for mobile or desktop. For instance, EZAudioPlot knows to subclass UIView on iOS and NSView on OSX, and EZMicrophone builds on the RemoteIO AudioUnit on iOS while using the default system input on OSX.
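Since the same API works on both platforms, plugging two components together looks identical on iOS and OSX. The snippet below is an illustrative sketch only: it assumes a view controller that conforms to EZMicrophoneDelegate and has an EZAudioPlot in its view as self.audioPlot. Both pieces are covered in detail in the sections further down.

// Create the microphone and start fetching audio (e.g. in viewDidLoad)
self.microphone = [EZMicrophone microphoneWithDelegate:self];
[self.microphone startFetchingAudio];

// EZMicrophoneDelegate callback: hand the float buffers to the plot on the main queue
- (void)microphone:(EZMicrophone *)microphone
    hasAudioReceived:(float **)buffer
      withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
    });
}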
Examples & Docs

Within this repo you'll find examples for both iOS and OSX to get you up to speed using each component and plugging them into each other. With just a few lines of code you'll be recording from the microphone, generating audio waveforms, and playing audio files like a boss. See the full Getting Started guide for an interactive look into each of the components.

Example Projects

EZAudioCoreGraphicsWaveformExample: Shows how to use the EZMicrophone and EZAudioPlot to visualize the audio data from the microphone in real time.

EZAudioOpenGLWaveformExample: Shows how to use the EZMicrophone and EZAudioPlotGL to visualize the audio data from the microphone in real time using OpenGL.

EZAudioPlayFileExample: Shows how to use the EZAudioPlayer and EZAudioPlotGL to play, pause, and seek through an audio file while displaying its waveform.

EZAudioRecordWaveformExample: Shows how to use the EZMicrophone, EZRecorder, and EZAudioPlotGL to record audio from the microphone to a file while displaying the incoming waveform in real time.

EZAudioWaveformFromFileExample: Shows how to use the EZAudioFile and EZAudioPlot to animate through an audio file and display its waveform.

EZAudioPassThroughExample: Shows how to use the EZMicrophone, EZOutput, and EZAudioPlotGL to pass the microphone input straight to the output for playback while displaying the waveform in real time.

EZAudioFFTExample: Shows how to calculate the real-time FFT of the audio data coming from the EZMicrophone using the Accelerate framework.

Documentation

The official documentation for EZAudio can be found here: http://cocoadocs.org/docsets/EZAudio/1.1.4/
Getting Started

To begin using EZAudio you must first make sure you have the proper build requirements and frameworks.

Build Requirements

iOS
- 6.0 and above

OSX
- 10.8 and above

Frameworks

iOS
- AudioToolbox
- AVFoundation
- GLKit

OSX
- AudioToolbox
- AudioUnit
- CoreAudio
- QuartzCore
- OpenGL
- GLKit
Adding To Project

You can add EZAudio to your project in a few ways:

1.) The easiest way to use EZAudio is via CocoaPods. Simply add EZAudio to your Podfile like so:

pod 'EZAudio', '~> 1.1.4'
Using EZAudio & The Amazing Audio Engine

If you're also using the Amazing Audio Engine then use the EZAudio/Core subspec instead, like so:

pod 'EZAudio/Core', '~> 1.1.4'
2.) EZAudio now supports Carthage (thanks Andrew and Tommaso!). You can refer to Carthage's installation instructions for a how-to guide: https://github.com/Carthage/Carthage (a minimal Cartfile entry is sketched after this list).

3.) Alternatively, you can check out the iOS/Mac examples to see how to set up a project using the EZAudio project as an embedded project and utilizing the frameworks. Be sure to set your header search path to the folder containing the EZAudio source.
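For the Carthage route, the Cartfile entry is just the standard Carthage convention pointing at this repository (a minimal sketch; pinning to a specific version is optional and up to you):

github "syedhali/EZAudio"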
Core Components

EZAudioDevice

Provides a simple interface for obtaining the current and all available inputs and outputs for any Apple device. For instance, the iPhone 6 has three microphones available for input, while on OSX you can choose the Built-In Microphone or any available HAL device on your system. Similarly, for iOS you can choose between a pair of connected headphones or the speaker, while on OSX you can choose from the Built-In Output, any available HAL device, or AirPlay.

Getting Input Devices

To get all the available input devices use the inputDevices class method:

NSArray *inputDevices = [EZAudioDevice inputDevices];

or to just get the currently selected input device use the currentInputDevice class method:

// On iOS this will default to the headset device or bottom microphone, while on OSX this will
// be your selected input device from the Sound preferences
EZAudioDevice *currentInputDevice = [EZAudioDevice currentInputDevice];

Getting Output Devices

Similarly, to get all the available output devices use the outputDevices class method:

NSArray *outputDevices = [EZAudioDevice outputDevices];

or to just get the currently selected output device use the currentOutputDevice class method:

// On iOS this will default to the headset speaker, while on OSX this will be your selected
// output device from the Sound preferences
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];

EZMicrophone

Provides access to the default device microphone in one line of code and provides delegate callbacks to receive the audio data as an AudioBufferList and as float arrays.

Relevant Example Projects
- EZAudioCoreGraphicsWaveformExample
- EZAudioOpenGLWaveformExample
- EZAudioRecordWaveformExample
- EZAudioPassThroughExample
- EZAudioFFTExample
Creating A Microphone

Create an EZMicrophone instance by declaring a property and initializing it like so:

// Declare the EZMicrophone as a strong property
@property (nonatomic, strong) EZMicrophone *microphone;
...
// Initialize the microphone instance and assign it a delegate to receive the audio data
// callbacks
self.microphone = [EZMicrophone microphoneWithDelegate:self];

Alternatively, you could also use the shared EZMicrophone instance:

// Assign a delegate to the shared instance of the microphone to receive the audio data
// callbacks
[EZMicrophone sharedMicrophone].delegate = self;

Setting The Device

The EZMicrophone uses the default device microphone by default, but you can set any of the available input devices provided by EZAudioDevice:

NSArray *inputs = [EZAudioDevice inputDevices];
[self.microphone setDevice:[inputs lastObject]];

Anytime the EZMicrophone changes its device it will trigger the EZMicrophoneDelegate event:

- (void)microphone:(EZMicrophone *)microphone changedDevice:(EZAudioDevice *)device
{
// This is not always guaranteed to occur on the main thread so make sure you
// wrap it in a GCD block
dispatch_async(dispatch_get_main_queue(), ^{
// Update UI here
NSLog(@"Changed input device: %@", device);
});
}

Note: For iOS this can happen automatically if the AVAudioSession changes the current device.

Getting Microphone Data

To tell the microphone to start fetching audio use the startFetchingAudio method:

// Starts fetching audio from the default device microphone and sends data to the EZMicrophoneDelegate
[self.microphone startFetchingAudio];

Once the microphone is fetching audio you will receive it via the EZMicrophoneDelegate callbacks. The float array representation:

/**
The microphone data represented as non-interleaved float arrays useful for:
- Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
- Creating any number of custom visualizations that utilize audio!
*/
-(void) microphone:(EZMicrophone *)microphone
hasAudioReceived:(float **)buffer
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
__weak typeof (self) weakSelf = self;
// Getting audio data as an array of float buffer arrays that can be fed into the
// EZAudioPlot, EZAudioPlotGL, or whatever visualization you would like to do with
// the microphone data.
dispatch_async(dispatch_get_main_queue(),^{
// Visualize this data brah, buffer[0] = left channel, buffer[1] = right channel
[weakSelf.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
});
}

or the AudioBufferList representation:

/**
The microphone data represented as CoreAudio's AudioBufferList useful for:
- Appending data to an audio file via the EZRecorder
- Playback via the EZOutput
*/
-(void) microphone:(EZMicrophone *)microphone
hasBufferList:(AudioBufferList *)bufferList
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
// Getting audio data as an AudioBufferList that can be directly fed into the EZRecorder
// or EZOutput. Say whattt...
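//
// A minimal sketch of appending this data to a file (assumes an EZRecorder was created
// elsewhere, e.g. via [EZRecorder recorderWithURL:clientFormat:fileType:], and stored
// in a hypothetical self.recorder property):
//
// [self.recorder appendDataFromBufferList:bufferList
//                          withBufferSize:bufferSize];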
}

Pausing/Resuming The Microphone

Pause or resume fetching audio at any time like so:

// Stop fetching audio
[self.microphone stopFetchingAudio];
// Resume fetching audio
[self.microphone startFetchingAudio];

Alternatively, you could also toggle the microphoneOn property:

// Stop fetching audio
self.microphone.microphoneOn = NO;
// Start fetching audio
self.microphone.microphoneOn = YES;

EZOutput

Provides flexible playback to the default output device by asking its EZOutputDataSource for audio data to play. The EZOutputDataSource provides that audio through the following method:

// The EZOutputDataSource should fill out the audioBufferList with the given frame count.
// The timestamp is provided for sample accurate calculation, but for basic use cases can
// be ignored.
- (OSStatus) output:(EZOutput *)output
shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
withNumberOfFrames:(UInt32)frames
timestamp:(const AudioTimeStamp *)timestamp;

Relevant Example Projects
- EZAudioPlayFileExample
- EZAudioPassThroughExample
Creating An Output

Create an EZOutput instance by declaring a property and initializing it with a datasource:

// Declare the EZOutput as a strong property
@property (nonatomic, strong) EZOutput *output;
...
// Initialize the EZOutput instance and assign it a delegate to provide the output audio data
self.output = [EZOutput outputWithDataSource:self];

Alternatively, you could also use the shared output instance and just assign it an EZOutputDataSource:

// Assign a datasource to the shared instance of the output to provide the output audio data
[EZOutput sharedOutput].dataSource = self;

Setting The Device

The EZOutput uses the default output device by default, but you can change the output device at any time:

// By default the EZOutput uses the default output device, but you can change this at any time
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];
[self.output setDevice:currentOutputDevice];

Anytime the EZOutput changes its device it will trigger the EZOutputDelegate event:

- (void)output:(EZOutput *)output changedDevice:(EZAudioDevice *)device
{
NSLog(@"Change output device to: %@", device);
}

Playing Audio

Setting The Input Format

When providing audio data the EZOutputDataSource is expected to fill out the AudioBufferList using whatever inputFormat is set on the EZOutput. By default the input format is a stereo, non-interleaved float format, but you can set your own:

// Set a mono, float format with a sample rate of 44.1 kHz
AudioStreamBasicDescription monoFloatFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:44100.0f];
[self.output setInputFormat:monoFloatFormat];

Implementing the EZOutputDataSource

An example of implementing the EZOutputDataSource method, here reading frames from an EZAudioFile (this is essentially what the EZAudioPlayer does):

- (OSStatus) output:(EZOutput *)output
shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
withNumberOfFrames:(UInt32)frames
timestamp:(const AudioTimeStamp *)timestamp
{
if (self.audioFile)
{
UInt32 bufferSize; // amount of frames actually read
BOOL eof; // end of file
[self.audioFile readFrames:frames
audioBufferList:audioBufferList
bufferSize:&bufferSize
eof:&eof];
if (eof && [self.delegate respondsToSelector:@selector(audioPlayer:reachedEndOfAudioFile:)])
{
[self.delegate audioPlayer:self reachedEndOfAudioFile:self.audioFile];
}
if (eof && self.shouldLoop)
{
[self seekToFrame:0];
}
else if (eof)
{
[self pause];
[self seekToFrame:0];
[[NSNotificationCenter defaultCenter] postNotificationName:EZAudioPlayerDidReachEndOfFileNotification
object:self];
}
}
return noErr;
}

I created a sample project (SineExample, linked below) that uses the EZOutput as a signal generator to play sine, square, triangle, sawtooth, and noise waveforms. Here's a snippet showing the sine wave generation:

...
double const SAMPLE_RATE = 44100.0;
- (void)awakeFromNib
{
//
// Create EZOutput to play audio data with mono format (EZOutput will convert
// this mono, float "inputFormat" to a clientFormat, i.e. the stereo output format).
//
AudioStreamBasicDescription inputFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:SAMPLE_RATE];
self.output = [EZOutput outputWithDataSource:self inputFormat:inputFormat];
[self.output setDelegate:self];
self.frequency = 200.0;
self.sampleRate = SAMPLE_RATE;
self.amplitude = 0.80;
}
- (OSStatus) output:(EZOutput *)output
shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
withNumberOfFrames:(UInt32)frames
timestamp:(const AudioTimeStamp *)timestamp
{
Float32 *buffer = (Float32 *)audioBufferList->mBuffers[0].mData;
size_t bufferByteSize = (size_t)audioBufferList->mBuffers[0].mDataByteSize;
double theta = self.theta;
double frequency = self.frequency;
double thetaIncrement = 2.0 * M_PI * frequency / SAMPLE_RATE;
if (self.type == GeneratorTypeSine)
{
for (UInt32 frame = 0; frame < frames; frame++)
{
buffer[frame] = self.amplitude * sin(theta);
theta += thetaIncrement;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
self.theta = theta;
}
// ... the square, triangle, sawtooth, and noise branches are handled similarly
// (see the square-wave sketch below and the full source linked afterwards)
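//
// A minimal square-wave sketch (GeneratorTypeSquare is assumed to exist in the same
// enum as GeneratorTypeSine; the full project linked below implements these shapes
// slightly differently):
//
else if (self.type == GeneratorTypeSquare)
{
for (UInt32 frame = 0; frame < frames; frame++)
{
// A square wave is just the sign of the sine wave scaled by the amplitude
buffer[frame] = self.amplitude * (sin(theta) >= 0.0 ? 1.0 : -1.0);
theta += thetaIncrement;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
self.theta = theta;
}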
}

For the full implementation of the square, triangle, sawtooth, and noise functions see: https://github.com/syedhali/SineExample/blob/master/SineExample/GeneratorViewController.m#L220-L305

Once the EZOutput is playing you can hook into the EZOutputDelegate to receive the audio data that was just played, for example as float arrays for visualization:

/**
The output data represented as non-interleaved float arrays useful for:
- Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
- Creating any number of custom visualizations that utilize audio!
*/
- (void) output:(EZOutput *)output
playedAudio:(float **)buffer
withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
__weak typeof (self) weakSelf = self;
dispatch_async(dispatch_get_main_queue(), ^{
// Update plot, buffer[0] = left channel, buffer[1] = right channel
});
}

Pausing/Resuming The Output

Pause or resume playback on the output component at any time like so:

// Stop playback
[self.output stopPlayback];
// Resume playback
[self.output startPlayback];

Chaining Audio Unit Effects

Internally the EZOutput uses an AUGraph to chain together a format converter, a mixer, and the output audio unit. You can hook into this graph by subclassing EZOutput and overriding the method below to insert your own effect audio units between the converter and the mixer:

// By default this method connects the AUNode representing the input format converter to
// the mixer node. In subclasses you can add effects in the chain between the converter
// and mixer by creating additional AUNodes, adding them to the AUGraph provided below,
// and then connecting them together.
- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                         inAudioGraph:(AUGraph *)graph;
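As a rough illustration of the idea (a sketch only, not code from the library: the DelayOutput class name and the _delayNode ivar are made up for this example), a subclass could splice one of Apple's stock delay effect audio units between the converter and the mixer like so:

#import "EZOutput.h"
#import <AudioToolbox/AudioToolbox.h>

@interface DelayOutput : EZOutput
@end

@implementation DelayOutput
{
    AUNode _delayNode;
}

- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                         inAudioGraph:(AUGraph *)graph
{
    // Describe one of Apple's stock delay effect audio units
    AudioComponentDescription delayDescription;
    delayDescription.componentType = kAudioUnitType_Effect;
    delayDescription.componentSubType = kAudioUnitSubType_Delay;
    delayDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    delayDescription.componentFlags = 0;
    delayDescription.componentFlagsMask = 0;

    // Add the delay node to the AUGraph provided by EZOutput
    OSStatus status = AUGraphAddNode(graph, &delayDescription, &_delayNode);
    if (status != noErr)
    {
        return status;
    }

    // Connect converter -> delay -> mixer instead of the default converter -> mixer
    status = AUGraphConnectNodeInput(graph, sourceNode, sourceNodeOutputBus, _delayNode, 0);
    if (status != noErr)
    {
        return status;
    }
    return AUGraphConnectNodeInput(graph, _delayNode, 0, destinationNode, destinationNodeInputBus);
}

@end

Once the graph is running you could fetch the delay node's AudioUnit with AUGraphNodeInfo to tweak its parameters.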