When setting up the AVCaptureVideoDataOutput that returns the raw camera frames, you can set the format of the frames using code like the following:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
In this case a BGRA pixel format is specified (I used this to match the color format of an OpenGL ES texture). Each pixel in that format has one byte for blue, green, red, and alpha, in that order. Choosing BGRA makes it easy to pull out color components, but you do sacrifice a little performance because the frames have to be converted from the camera-native YUV colorspace.
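For context, here's a rough sketch of how that line might fit into the rest of the output configuration (the videoOutput, captureSession, and videoQueue names are just placeholders for whatever your own capture setup uses):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

// Drop frames that arrive while the delegate is still busy, rather than queueing them up
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

// Frames are delivered to the delegate callback on this serial dispatch queue
dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoqueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}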
Other supported colorspaces are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and kCVPixelFormatType_420YpCbCr8BiPlanarFullRange on newer devices, and kCVPixelFormatType_422YpCbCr8 on the iPhone 3G. The VideoRange or FullRange suffix simply indicates whether the bytes are returned between 16 - 235 for Y and 16 - 240 for UV, or the full 0 - 255 range for each component.
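If you'd rather avoid the BGRA conversion and work in YUV directly, you can request one of those formats through the same videoSettings dictionary; for example, this is just the line from above with the full-range biplanar constant swapped in:

[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];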
I believe the default colorspace used by an AVCaptureVideoDataOutput instance is the YUV 4:2:0 biplanar colorspace (except on the iPhone 3G, where it's YUV 4:2:2 interleaved). This means that there are two planes of image data contained within the video frame, with the Y plane coming first. For every pixel in your resulting image, there is one byte for the Y value at that pixel.
You would get at this raw Y data by implementing something like this in your delegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer before touching its memory
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Do something with the raw pixels here

    // Unlock as soon as you're done reading
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
You could then figure out the location in the frame data for each X, Y coordinate on the image and pull the byte out that corresponds to the Y component at that coordinate.
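As a rough sketch of that indexing (this assumes one of the biplanar YUV formats, where plane 0 is the Y plane, and that x and y are coordinates you already have in hand):

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Plane 0 holds the Y (luminance) bytes in the biplanar formats
unsigned char *yPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Rows can be padded, so index by the plane's bytes-per-row rather than the image width
unsigned char yValue = yPlane[y * bytesPerRow + x];

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CVPixelBufferGetBaseAddressOfPlane is the per-plane counterpart of the CVPixelBufferGetBaseAddress call in the callback above.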
Apple's FindMyiCone sample from WWDC 2010 (accessible along with the session videos) shows how to process raw BGRA data from each frame. I also created a sample application, whose code you can download here, that performs color-based object tracking using the live video from the iPhone's camera. Both show how to process raw pixel data, but neither of them works in the YUV colorspace.