syedhali/EZAudio: An iOS and macOS audio visualization framework built upon Core ...


Open source project name:

syedhali/EZAudio

Open source project URL:

https://github.com/syedhali/EZAudio

Open source language:

Objective-C 97.9%

Open source project introduction:


A simple, intuitive audio framework for iOS and OSX.

Deprecated

EZAudio has recently been deprecated in favor of AudioKit. However, since some people are still forking and using EZAudio I've decided to restore the README as it was. Check out the note below.

Apps Using EZAudio

I'd really like to start creating a list of projects made using EZAudio. If you've used EZAudio to make something cool, whether it's an app or open source visualization or whatever, please email me at syedhali07[at]gmail.com and I'll add it to our wall of fame! To start it off:

  • Detour - Gorgeous location-aware audio walks
  • Jumpshare - Incredibly fast, real-time file sharing
  • Piano Tuner (Home Edition)
  • Piano Tuner (Pro Edition)
  • Music Pitch Detector
  • Piano Prober
  • Multi Tuner

Features

Awesome Components

I've designed six audio components and two interface components to allow you to immediately get your hands dirty recording, playing, and visualizing audio data. These components simply plug into each other and build on top of the high-performance, low-latency AudioUnits API, giving you an easy-to-use API written in Objective-C instead of pure C.

EZAudioDevice

A useful class for getting all the current and available inputs/outputs on any Apple device. The EZMicrophone and EZOutput use this to direct sound in/out from different hardware components.

EZMicrophone

A microphone class that provides its delegate audio data from the default device microphone with one line of code.

EZOutput

An output class that will playback any audio it is provided by its datasource.

EZAudioFile

An audio file class that reads/seeks through audio files and provides useful delegate callbacks.

EZAudioPlayer

A replacement for AVAudioPlayer that combines an EZAudioFile and a EZOutput to perform robust playback of any file on any piece of hardware.

EZRecorder

A recorder class that provides a quick and easy way to write audio files from any datasource.

EZAudioPlot

A Core Graphics-based audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

EZAudioPlotGL

An OpenGL-based, GPU-accelerated audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

Cross Platform

EZAudio was designed to work transparently across all iOS and OSX devices. This means one universal API whether you're building for Mac or iOS. For instance, under the hood an EZAudioPlot knows that it will subclass a UIView for iOS or an NSView for OSX and the EZMicrophone knows to build on top of the RemoteIO AudioUnit for iOS, but defaults to the system defaults for input and output for OSX.

Examples & Docs

Within this repo you'll find examples for iOS and OSX to get you up to speed using each component and plugging them into each other. With just a few lines of code you'll be recording from the microphone, generating audio waveforms, and playing audio files like a boss. See the full Getting Started guide for an interactive look into each of the components.

Example Projects

EZAudioCoreGraphicsWaveformExample

CoreGraphicsWaveformExampleGif

Shows how to use the EZMicrophone and EZAudioPlot to visualize the audio data from the microphone in real-time. The waveform can be displayed as a buffer or a rolling waveform plot (traditional waveform look).

EZAudioOpenGLWaveformExample

OpenGLWaveformExampleGif

Shows how to use the EZMicrophone and EZAudioPlotGL to visualize the audio data from the microphone in real-time. The drawing uses OpenGL, so performance is much better for plots that need a lot of points.

EZAudioPlayFileExample

PlayFileExample

Shows how to use the EZAudioPlayer and EZAudioPlotGL to playback, pause, and seek through an audio file while displaying its waveform as a buffer or a rolling waveform plot.

EZAudioRecordWaveformExample

RecordWaveformExample

Shows how to use the EZMicrophone, EZRecorder, and EZAudioPlotGL to record the audio from the microphone input to a file while displaying the audio waveform of the incoming data. You can then playback the newly recorded audio file using AVFoundation and keep adding more audio data to the tail of the file.

EZAudioWaveformFromFileExample

WaveformExample

Shows how to use the EZAudioFile and EZAudioPlot to animate in an audio waveform for an entire audio file.

EZAudioPassThroughExample

PassthroughExample

Shows how to use the EZMicrophone, EZOutput, and the EZAudioPlotGL to pass the microphone input to the output for playback while displaying the audio waveform (as a buffer or rolling plot) in real-time.

EZAudioFFTExample

FFTExample

Shows how to calculate the real-time FFT of the audio data coming from the EZMicrophone using the Accelerate framework. The audio data is plotted using two EZAudioPlots for the time and frequency displays.

Documentation

The official documentation for EZAudio can be found here: http://cocoadocs.org/docsets/EZAudio/1.1.4/
You can also generate the docset yourself using appledoc by running appledoc on the EZAudio source folder.

Getting Started

To begin using EZAudio you must first make sure you have the proper build requirements and frameworks. Below you'll find explanations of each component and code snippets to show how to use each to perform common tasks like getting microphone data, updating audio waveform plots, reading/seeking through audio files, and performing playback.

Build Requirements

iOS

  • 6.0+

OSX

  • 10.8+

Frameworks

iOS

  • Accelerate
  • AudioToolbox
  • AVFoundation
  • GLKit

OSX

  • Accelerate
  • AudioToolbox
  • AudioUnit
  • CoreAudio
  • QuartzCore
  • OpenGL
  • GLKit

Adding To Project

You can add EZAudio to your project in a few ways:

1.) The easiest way to use EZAudio is via CocoaPods (http://cocoapods.org/). Simply add EZAudio to your Podfile (http://guides.cocoapods.org/using/the-podfile.html) like so:

pod 'EZAudio', '~> 1.1.4'
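For context, a complete Podfile might look like the following sketch (the MyApp target name is a placeholder for your own app target, and the platform line should match your deployment target):

# Example Podfile (MyApp is a placeholder target name)
platform :ios, '6.0'

target 'MyApp' do
  pod 'EZAudio', '~> 1.1.4'
end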

Using EZAudio & The Amazing Audio Engine

If you're also using the Amazing Audio Engine then use the EZAudio/Core subspec like so:

pod 'EZAudio/Core', '~> 1.1.4'

2.) EZAudio now supports Carthage (thanks Andrew and Tommaso!). You can refer to Carthage's installation for a how-to guide: https://github.com/Carthage/Carthage
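For reference, the corresponding Cartfile entry would typically be a single line (shown here with the same version constraint used in the CocoaPods example above):

github "syedhali/EZAudio" ~> 1.1.4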

3.) Alternatively, you can check out the iOS/Mac examples for how to set up a project using the EZAudio project as an embedded project and utilizing the frameworks. Be sure to set your header search path to the folder containing the EZAudio source.

Core Components

EZAudio currently offers six audio components that encompass a wide range of functionality. In addition to the functional aspects of these components such as pulling audio data, reading/writing from files, and performing playback they also take special care to hook into the interface components to allow developers to display visual feedback (see the Interface Components below).

EZAudioDevice

Provides a simple interface for obtaining the current and all available inputs and outputs for any Apple device. For instance, the iPhone 6 has three microphones available for input, while on OSX you can choose the Built-In Microphone or any available HAL device on your system. Similarly, on iOS you can choose between a pair of connected headphones or the speaker, while on OSX you can choose from the Built-In Output, any available HAL device, or Airplay.

EZAudioDeviceInputsExample

Getting Input Devices

To get all the available input devices use the inputDevices class method:

NSArray *inputDevices = [EZAudioDevice inputDevices];

or to just get the currently selected input device use the currentInputDevice method:

// On iOS this will default to the headset device or bottom microphone, while on OSX this will
// be your selected input device from the Sound preferences
EZAudioDevice *currentInputDevice = [EZAudioDevice currentInputDevice];

Getting Output Devices

Similarly, to get all the available output devices use the outputDevices class method:

NSArray *outputDevices = [EZAudioDevice outputDevices];

or to just get the currently selected output device use the currentOutputDevice method:

// On iOS this will default to the headset speaker, while on OSX this will be your selected
// output device from the Sound preferences
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];
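As a quick sanity check (a sketch, not from the original guide), you can enumerate every device and log it with %@:

// Log all available input and output devices
for (EZAudioDevice *device in [EZAudioDevice inputDevices])
{
    NSLog(@"Input device: %@", device);
}
for (EZAudioDevice *device in [EZAudioDevice outputDevices])
{
    NSLog(@"Output device: %@", device);
}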

EZMicrophone

Provides access to the default device microphone in one line of code and provides delegate callbacks to receive the audio data as an AudioBufferList and float arrays.

Relevant Example Projects

  • EZAudioCoreGraphicsWaveformExample (iOS)
  • EZAudioCoreGraphicsWaveformExample (OSX)
  • EZAudioOpenGLWaveformExample (iOS)
  • EZAudioOpenGLWaveformExample (OSX)
  • EZAudioRecordExample (iOS)
  • EZAudioRecordExample (OSX)
  • EZAudioPassThroughExample (iOS)
  • EZAudioPassThroughExample (OSX)
  • EZAudioFFTExample (iOS)
  • EZAudioFFTExample (OSX)

Creating A Microphone

Create an EZMicrophone instance by declaring a property and initializing it like so:

// Declare the EZMicrophone as a strong property
@property (nonatomic, strong) EZMicrophone *microphone;

...

// Initialize the microphone instance and assign it a delegate to receive the audio data
// callbacks
self.microphone = [EZMicrophone microphoneWithDelegate:self];

Alternatively, you could also use the shared EZMicrophone instance and just assign its EZMicrophoneDelegate.

// Assign a delegate to the shared instance of the microphone to receive the audio data
// callbacks
[EZMicrophone sharedMicrophone].delegate = self;

Setting The Device

The EZMicrophone uses an EZAudioDevice instance to select what specific hardware destination it will use to pull audio data. You'd use this if you wanted to change the input device like in the EZAudioCoreGraphicsWaveformExample for iOS or OSX. At any time you can change which input device is used by setting the device property:

NSArray *inputs = [EZAudioDevice inputDevices];
[self.microphone setDevice:[inputs lastObject]];

Anytime the EZMicrophone changes its device it will trigger the EZMicrophoneDelegate event:

- (void)microphone:(EZMicrophone *)microphone changedDevice:(EZAudioDevice *)device
{
    // This is not always guaranteed to occur on the main thread so make sure you
    // wrap it in a GCD block
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI here
	    NSLog(@"Changed input device: %@", device);
    });
}

Note: For iOS this can happen automatically if the AVAudioSession changes the current device.

Getting Microphone Data

To tell the microphone to start fetching audio use the startFetchingAudio function.

// Starts fetching audio from the default device microphone and sends data to EZMicrophoneDelegate
[self.microphone startFetchingAudio];

Once the EZMicrophone has started it will send the EZMicrophoneDelegate the audio back in a few ways. An array of float arrays:

/**
 The microphone data represented as non-interleaved float arrays useful for:
    - Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
    - Creating any number of custom visualizations that utilize audio!
 */
-(void)   microphone:(EZMicrophone *)microphone
    hasAudioReceived:(float **)buffer
      withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
	// Getting audio data as an array of float buffer arrays that can be fed into the
	// EZAudioPlot, EZAudioPlotGL, or whatever visualization you would like to do with
	// the microphone data.
	dispatch_async(dispatch_get_main_queue(),^{
		// Visualize this data brah, buffer[0] = left channel, buffer[1] = right channel
		[weakSelf.audioPlot updateBuffer:buffer[0] withBufferSize:bufferSize];
    });
}

or the AudioBufferList representation:

/**
 The microphone data represented as CoreAudio's AudioBufferList useful for:
    - Appending data to an audio file via the EZRecorder
    - Playback via the EZOutput

 */
-(void)    microphone:(EZMicrophone *)microphone
        hasBufferList:(AudioBufferList *)bufferList
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
	// Getting audio data as an AudioBufferList that can be directly fed into the EZRecorder
	// or EZOutput. Say whattt...
}
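For instance (a sketch that is not in the original text; self.recorder is a hypothetical EZRecorder property you would have created elsewhere), the body of that callback can forward the buffer list straight to a recorder to write it to disk:

// Inside microphone:hasBufferList:withBufferSize:withNumberOfChannels:
// Append the incoming AudioBufferList to an EZRecorder to write it to a file.
[self.recorder appendDataFromBufferList:bufferList
                         withBufferSize:bufferSize];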

Pausing/Resuming The Microphone

Pause or resume fetching audio at any time like so:

// Stop fetching audio
[self.microphone stopFetchingAudio];

// Resume fetching audio
[self.microphone startFetchingAudio];

Alternatively, you can also toggle the microphoneOn property (safe to use with Cocoa Bindings):

// Stop fetching audio
self.microphone.microphoneOn = NO;

// Start fetching audio
self.microphone.microphoneOn = YES;

EZOutput

Provides flexible playback to the default output device by asking the EZOutputDataSource for audio data to play. Doesn't care where the buffers come from (microphone, audio file, streaming audio, etc). As of 1.0.0 the EZOutputDataSource has been simplified to have only one method to provide audio data to your EZOutput instance.

// The EZOutputDataSource should fill out the audioBufferList with the given frame count.
// The timestamp is provided for sample accurate calculation, but for basic use cases can
// be ignored.
- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp;

Relevant Example Projects

  • EZAudioPlayFileExample (iOS)
  • EZAudioPlayFileExample (OSX)
  • EZAudioPassThroughExample (iOS)
  • EZAudioPassThroughExample (OSX)

Creating An Output

Create an EZOutput by declaring a property and initializing it like so:

// Declare the EZOutput as a strong property
@property (nonatomic, strong) EZOutput *output;
...

// Initialize the EZOutput instance and assign it a delegate to provide the output audio data
self.output = [EZOutput outputWithDataSource:self];

Alternatively, you could also use the shared output instance and just assign it an EZOutputDataSource if you will only have one EZOutput instance for your application.

// Assign a data source to the shared instance of the output to provide the output audio data
[EZOutput sharedOutput].dataSource = self;

Setting The Device

The EZOutput uses an EZAudioDevice instance to select what specific hardware destination it will output audio to. You'd use this if you wanted to change the output device like in the EZAudioPlayFileExample for OSX. At any time you can change which output device is used by setting the device property:

// By default the EZOutput uses the default output device, but you can change this at any time
EZAudioDevice *currentOutputDevice = [EZAudioDevice currentOutputDevice];
[self.output setDevice:currentOutputDevice];

Anytime the EZOutput changes its device it will trigger the EZOutputDelegate event:

- (void)output:(EZOutput *)output changedDevice:(EZAudioDevice *)device
{
    NSLog(@"Change output device to: %@", device);
}

Playing Audio

Setting The Input Format

When providing audio data the EZOutputDataSource will expect you to fill out the AudioBufferList provided with whatever inputFormat is set on the EZOutput. By default the input format is a stereo, non-interleaved, float format (see defaultInputFormat for more information). If you're dealing with a different input format (which is typically the case), just set the inputFormat property. For instance:

// Set a mono, float format with a sample rate of 44.1 kHz
AudioStreamBasicDescription monoFloatFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:44100.0f];
[self.output setInputFormat:monoFloatFormat];

Implementing the EZOutputDataSource

An example of implementing the EZOutputDataSource is done internally in the EZAudioPlayer using an EZAudioFile to read audio from an audio file on disk like so:

- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp
{
    if (self.audioFile)
    {
        UInt32 bufferSize; // amount of frames actually read
        BOOL eof; // end of file
        [self.audioFile readFrames:frames
                   audioBufferList:audioBufferList
                        bufferSize:&bufferSize
                               eof:&eof];
        if (eof && [self.delegate respondsToSelector:@selector(audioPlayer:reachedEndOfAudioFile:)])
        {
            [self.delegate audioPlayer:self reachedEndOfAudioFile:self.audioFile];
        }
        if (eof && self.shouldLoop)
        {
            [self seekToFrame:0];
        }
        else if (eof)
        {
            [self pause];
            [self seekToFrame:0];
            [[NSNotificationCenter defaultCenter] postNotificationName:EZAudioPlayerDidReachEndOfFileNotification
                                                                object:self];
        }
    }
    return noErr;
}
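If you just want file playback without writing your own data source, the EZAudioPlayer wraps this logic for you. A minimal sketch (the method names below reflect the EZAudio 1.1.x headers as I recall them, so verify them against your version of the framework; the file path is a placeholder):

// Declare the player as a strong property
@property (nonatomic, strong) EZAudioPlayer *player;

...

// Create an audio file and hand it to the player for playback
EZAudioFile *audioFile = [EZAudioFile audioFileWithURL:[NSURL fileURLWithPath:@"/path/to/audio/file.wav"]];
self.player = [EZAudioPlayer audioPlayerWithDelegate:self];
[self.player playAudioFile:audioFile];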

I created a sample project that uses the EZOutput to act as a signal generator to play sine, square, triangle, sawtooth, and noise waveforms. Here's a snippet of code to generate a sine tone:

...
double const SAMPLE_RATE = 44100.0;

- (void)awakeFromNib
{
    //
    // Create EZOutput to play audio data with mono format (EZOutput will convert
    // this mono, float "inputFormat" to a clientFormat, i.e. the stereo output format).
    //
    AudioStreamBasicDescription inputFormat = [EZAudioUtilities monoFloatFormatWithSampleRate:SAMPLE_RATE];
    self.output = [EZOutput outputWithDataSource:self inputFormat:inputFormat];
    [self.output setDelegate:self];
    self.frequency = 200.0;
    self.sampleRate = SAMPLE_RATE;
    self.amplitude = 0.80;
}

- (OSStatus)        output:(EZOutput *)output
 shouldFillAudioBufferList:(AudioBufferList *)audioBufferList
        withNumberOfFrames:(UInt32)frames
                 timestamp:(const AudioTimeStamp *)timestamp
{
    Float32 *buffer = (Float32 *)audioBufferList->mBuffers[0].mData;
    size_t bufferByteSize = (size_t)audioBufferList->mBuffers[0].mDataByteSize;
    double theta = self.theta;
    double frequency = self.frequency;
    double thetaIncrement = 2.0 * M_PI * frequency / SAMPLE_RATE;
    if (self.type == GeneratorTypeSine)
    {
        for (UInt32 frame = 0; frame < frames; frame++)
        {
            buffer[frame] = self.amplitude * sin(theta);
            theta += thetaIncrement;
            if (theta > 2.0 * M_PI)
            {
                theta -= 2.0 * M_PI;
            }
        }
        self.theta = theta;
    }
    else if (... other shapes in full source)
}

For the full implementation of the square, triangle, sawtooth, and noise functions, see: https://github.com/syedhali/SineExample/blob/master/SineExample/GeneratorViewController.m#L220-L305

Once the EZOutput has started it will send the EZOutputDelegate the audio back as float arrays for visualizing. These are converted inside the EZOutput component from whatever input format you may have provided. For instance, if you provide an interleaved, signed integer AudioStreamBasicDescription for the inputFormat property, it will be automatically converted to a stereo, non-interleaved, float format when sent back in the playedAudio:... delegate method below as an array of float arrays:

/**
 The output data represented as non-interleaved float arrays useful for:
    - Creating real-time waveforms using EZAudioPlot or EZAudioPlotGL
    - Creating any number of custom visualizations that utilize audio!
 */
- (void)       output:(EZOutput *)output
          playedAudio:(float **)buffer
       withBufferSize:(UInt32)bufferSize
 withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof (self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
	// Update plot, buffer[0] = left channel, buffer[1] = right channel
    });
}

Pausing/Resuming The Output

Pause or resume the output component at any time like so:

// Stop playback
[self.output stopPlayback];

// Resume playback
[self.output startPlayback];

Chaining Audio Unit Effects

Internally the EZOutput is using an AUGraph to chain together a converter, mixer, and output audio units. You can hook into this graph by subclassing EZOutput and implementing the method:

// By default this method connects the AUNode representing the input format converter to
// the mixer node. In subclasses you can add effects in the chain between the converter
// and mixer by creating additional AUNodes, adding them to the AUGraph provided below,
// and then connecting them together.
- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                              inGraph:(AUGraph)graph;
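As an illustration of the idea (a sketch, not code from the EZAudio repo; the ReverbOutput name and the choice of reverb unit are my own), a subclass could insert an Apple reverb Audio Unit between the converter and mixer nodes like so:

@interface ReverbOutput : EZOutput
@end

@implementation ReverbOutput

- (OSStatus)connectOutputOfSourceNode:(AUNode)sourceNode
                  sourceNodeOutputBus:(UInt32)sourceNodeOutputBus
                    toDestinationNode:(AUNode)destinationNode
              destinationNodeInputBus:(UInt32)destinationNodeInputBus
                              inGraph:(AUGraph)graph
{
    // Describe an Apple reverb effect Audio Unit
    // (kAudioUnitSubType_Reverb2 on iOS; use kAudioUnitSubType_MatrixReverb on OSX)
    AudioComponentDescription reverbDescription;
    reverbDescription.componentType = kAudioUnitType_Effect;
    reverbDescription.componentSubType = kAudioUnitSubType_Reverb2;
    reverbDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    reverbDescription.componentFlags = 0;
    reverbDescription.componentFlagsMask = 0;

    // Add the reverb node to the AUGraph and wire it in between the source
    // (format converter) node and the destination (mixer) node. Error handling
    // is omitted for brevity.
    AUNode reverbNode;
    AUGraphAddNode(graph, &reverbDescription, &reverbNode);
    AUGraphConnectNodeInput(graph, sourceNode, sourceNodeOutputBus, reverbNode, 0);
    AUGraphConnectNodeInput(graph, reverbNode, 0, destinationNode, destinationNodeInputBus);
    return noErr;
}

@end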
