This is a two-part question. I have the following code working, which grabs the current display surface and creates a video out of the captured surfaces (everything happens in the background).
// Note: IOMobileFramebuffer* and IOSurfaceAccelerator* are private APIs, so this
// needs the corresponding private headers and will not pass App Store review.
for (int i = 0; i < 100; i++) {
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;

    // Find the CLCD framebuffer service (the name varies by device generation).
    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));

    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);

    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    m_width = width;
    m_height = height;

    // Describe the destination surface: 32 bits per pixel, tightly packed.
    int pitch = width * 4, size = width * height * 4;
    int bPE = 4;
    // Read as a native-endian SInt32 below, these four bytes form the fourcc
    // 'BGRA' on little-endian ARM, matching kCVPixelFormatType_32BGRA.
    char pixelFormat[4] = {'A', 'R', 'G', 'B'};
    CFMutableDictionaryRef dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    // CFDictionarySetValue retains its value, so release each CFNumber after adding it.
    CFNumberRef num;
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, num); CFRelease(num);
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, num); CFRelease(num);
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width);
    CFDictionarySetValue(dict, kIOSurfaceWidth, num); CFRelease(num);
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height);
    CFDictionarySetValue(dict, kIOSurfaceHeight, num); CFRelease(num);
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat);
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, num); CFRelease(num);
    num = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size);
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, num); CFRelease(num);

    // Blit the (locked) screen surface into our own surface.
    IOSurfaceRef destSurf = IOSurfaceCreate(dict);
    IOSurfaceAcceleratorRef outAcc;
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);
    CFRelease(dict);
    IOObjectRelease(framebufferService);

    // MOST RELEVANT PART OF CODE: wrap the destination surface's memory (no copy
    // is made) and hand it to the pixel buffer adaptor at 5 fps.
    CVPixelBufferRef sampleBuffer = NULL;
    CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);
    CMTime frameTime = CMTimeMake(frameCount, 5);
    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime];
    CFRelease(sampleBuffer);
    CFRelease(destSurf);
    frameCount++;
}
P.S.: The last 4-5 lines of code are the most relevant (if you need to filter).
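In case it is relevant for part 1: the loop above never checks readyForMoreMediaData before appending, which AVFoundation expects callers to do. A minimal guard (only a sketch, assuming adaptor is an AVAssetWriterInputPixelBufferAdaptor wrapping its AVAssetWriterInput in the usual way) would be:

while (!adaptor.assetWriterInput.readyForMoreMediaData)
    usleep(10000); // crude polling; requestMediaDataWhenReadyOnQueue:usingBlock: is the proper mechanism
if (![adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime])
    NSLog(@"appendPixelBuffer failed at frame %d", frameCount);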
1) The video that is produced has artefacts. I have worked with video before and have run into this kind of issue as well. I suspect there are two possible reasons:
i. The pixel buffer that is passed to the adaptor is modified or released before the processing (encoding + writing) is complete. This could be due to the asynchronous nature of the append call, but I am not sure whether this itself is the problem, nor how to resolve it (a sketch of the copy-based workaround I am considering follows this list).
ii. The timestamps that are passed are inaccurate (e.g. two frames sharing the same timestamp, or a frame having a lower timestamp than the previous frame). I logged the timestamp values, and this does not seem to be the problem.
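To test hypothesis (i), the change I am considering is to stop wrapping the IOSurface memory with CVPixelBufferCreateWithBytes (which leaves the encoder reading memory that CFRelease(destSurf) can pull out from under it) and instead copy each frame into a pixel buffer that owns its own backing store. A rough sketch (the helper name is mine, not an SDK call):

// Sketch only. Needs CoreVideo.framework plus the (then-private) IOSurface headers.
#import <CoreVideo/CoreVideo.h>
#import <IOSurface/IOSurface.h>
#include <string.h>

static CVPixelBufferRef copyFrameToOwnedPixelBuffer(IOSurfaceRef surf) {
    size_t width = IOSurfaceGetWidth(surf);
    size_t height = IOSurfaceGetHeight(surf);
    CVPixelBufferRef pb = NULL;
    // Let CoreVideo allocate the backing memory; the writer can then keep the
    // buffer alive for as long as the encode takes without us touching it again.
    if (CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, NULL, &pb) != kCVReturnSuccess)
        return NULL;
    IOSurfaceLock(surf, kIOSurfaceLockReadOnly, NULL);
    CVPixelBufferLockBaseAddress(pb, 0);
    uint8_t *src = (uint8_t *)IOSurfaceGetBaseAddress(surf);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pb);
    size_t srcStride = IOSurfaceGetBytesPerRow(surf);
    size_t dstStride = CVPixelBufferGetBytesPerRow(pb);
    for (size_t row = 0; row < height; row++) // strides may differ, so copy row by row
        memcpy(dst + row * dstStride, src + row * srcStride, width * 4);
    CVPixelBufferUnlockBaseAddress(pb, 0);
    IOSurfaceUnlock(surf, kIOSurfaceLockReadOnly, NULL);
    return pb; // caller releases with CFRelease after appending
}

I would then append the returned buffer and CFRelease it immediately, which should be safe even if the encode finishes later.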
2) The code above is not able to grab surfaces when a video is playing or when games are running; all I get is a blank screen in the output. This might be due to the hardware-accelerated decoding that happens in those cases.
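For part 2, the code only ever asks for layer 0. If hardware-decoded video or game content lives on a different framebuffer layer, probing the other indices with the same private call might show something; a purely exploratory sketch (the layer count of 4 is a guess):

for (int layer = 0; layer < 4; layer++) {
    IOSurfaceRef s = NULL;
    if (IOMobileFramebufferGetLayerDefaultSurface(connect, layer, &s) == KERN_SUCCESS && s != NULL)
        printf("layer %d: %zu x %zu\n", layer, IOSurfaceGetWidth(s), IOSurfaceGetHeight(s));
}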
Any input on either of the two parts of the question would be really helpful. Also, if you have any good links to read on IOSurface in general, please do post them here.