Unfortunately, there really isn't any documentation on these new functions. The best you're going to find right now is in the CVOpenGLESTextureCache.h header file, where you'll see a basic description of the function parameters:
/*!
    @function   CVOpenGLESTextureCacheCreate
    @abstract   Creates a new Texture Cache.
    @param      allocator The CFAllocatorRef to use for allocating the cache. May be NULL.
    @param      cacheAttributes A CFDictionaryRef containing the attributes of the cache itself. May be NULL.
    @param      eaglContext The OpenGLES 2.0 context into which the texture objects will be created. OpenGLES 1.x contexts are not supported.
    @param      textureAttributes A CFDictionaryRef containing the attributes to be used for creating the CVOpenGLESTexture objects. May be NULL.
    @param      cacheOut The newly created texture cache will be placed here
    @result     Returns kCVReturnSuccess on success
*/
CV_EXPORT CVReturn CVOpenGLESTextureCacheCreate(
    CFAllocatorRef allocator,
    CFDictionaryRef cacheAttributes,
    void *eaglContext,
    CFDictionaryRef textureAttributes,
    CVOpenGLESTextureCacheRef *cacheOut) __OSX_AVAILABLE_STARTING(__MAC_NA,__IPHONE_5_0);
The trickier elements are the attributes dictionaries, which unfortunately aren't documented, so you need to find working examples in order to use these functions properly. Apple's GLCameraRipple and RosyWriter examples show how to use the fast texture upload path with BGRA and YUV input color formats, and the ChromaKey example from WWDC (which may still be accessible along with the session videos) demonstrates how to use these texture caches to pull information back out of an OpenGL ES texture.
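The only cache attribute I'm aware of from the header is the maximum texture age. As a purely illustrative sketch (passing NULL works fine if you just want the defaults), the cacheAttributes dictionary could be built like this:

// kCVOpenGLESTextureCacheMaximumTextureAgeKey is declared in CVOpenGLESTextureCache.h;
// the one-second value here is an arbitrary choice of mine, not a recommendation
double maxTextureAge = 1.0;
CFNumberRef ageValue = CFNumberCreate(kCFAllocatorDefault, kCFNumberDoubleType, &maxTextureAge);
const void *keys[] = { kCVOpenGLESTextureCacheMaximumTextureAgeKey };
const void *values[] = { ageValue };
CFDictionaryRef cacheAttributes = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                                     &kCFTypeDictionaryKeyCallBacks,
                                                     &kCFTypeDictionaryValueCallBacks);
CFRelease(ageValue);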
I just got this fast texture uploading working in my GPUImage framework (the source code for which is available at that link), so I'll lay out what I was able to parse out of this. First, I create a texture cache using the following code:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}
where the context referred to is an EAGLContext configured for OpenGL ES 2.0.
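For context, the cameraFrame used below comes from the AVCaptureVideoDataOutput delegate callback. A minimal sketch of that plumbing, with -processCameraFrame: as a hypothetical method wrapping the code that follows:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Assumes the video output is configured for kCVPixelFormatType_32BGRA,
    // matching the GL_BGRA upload format used below
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processCameraFrame:cameraFrame]; // hypothetical wrapper around the processing code
}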
I use this to keep video frames from the iOS device camera in video memory, and I use the following code to do this:
CVPixelBufferLockBaseAddress(cameraFrame, 0);

// The texture dimensions come straight from the pixel buffer
int bufferWidth = (int)CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int)CVPixelBufferGetHeight(cameraFrame);

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL,
                                                            GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight,
                                                            GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
if (!texture || err)
{
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
    return;
}

outputTexture = CVOpenGLESTextureGetName(texture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Do processing work on the texture data here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
CFRelease(texture);
outputTexture = 0;
This creates a new CVOpenGLESTextureRef from the texture cache, representing an OpenGL ES texture backed by the CVImageBufferRef passed in by the camera. The texture name is then retrieved from that CVOpenGLESTextureRef via CVOpenGLESTextureGetName(), and the appropriate filtering and wrap parameters are set on it (which seemed to be necessary in my processing). Finally, I do my work on the texture and clean up when I'm done.
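The same function also handles planar YUV input, which is the path GLCameraRipple takes. As a hedged sketch (assuming the capture output is configured for a biplanar YUV format like kCVPixelFormatType_420YpCbCr8BiPlanarFullRange; the variable names are mine), the luminance plane would be pulled out like this:

// Plane 0 is the Y (luminance) plane; plane 1 (CbCr) would use GL_LUMINANCE_ALPHA instead
CVOpenGLESTextureRef lumaTexture = NULL;
CVReturn lumaErr = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL,
                                                                GL_TEXTURE_2D, GL_LUMINANCE,
                                                                (int)CVPixelBufferGetWidthOfPlane(cameraFrame, 0),
                                                                (int)CVPixelBufferGetHeightOfPlane(cameraFrame, 0),
                                                                GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);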
This fast upload process makes a real difference on iOS devices: it cut the upload and processing of a single 640x480 frame of video on an iPhone 4S from 9.0 ms down to 1.8 ms.
I've heard that this works in reverse, as well, which might allow for the replacement of glReadPixels() in certain situations, but I've yet to try this.
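If you want to experiment with that reverse direction, the rough shape (based on what the ChromaKey example demonstrated; I haven't verified this myself, so treat it as an untested sketch with illustrative names) is to render into a texture backed by an IOSurface-based CVPixelBuffer and then read the buffer directly:

// Untested sketch: width and height are assumed to be your render target dimensions
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                           &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                         &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
// IOSurface backing is what lets the GPU and CPU share this buffer
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &renderTarget);

CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA, width, height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach the texture to a framebuffer and render into it as usual
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

// ... draw your scene here ...

// After rendering (and a glFinish() or similar synchronization), the pixels
// should be reachable directly from the buffer without glReadPixels()
CVPixelBufferLockBaseAddress(renderTarget, 0);
void *pixels = CVPixelBufferGetBaseAddress(renderTarget);
// ... use pixels ...
CVPixelBufferUnlockBaseAddress(renderTarget, 0);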