Apple recently added a new constant to the CIDetector class called CIDetectorTracking, which appears to be able to track faces between frames in a video. This would be very beneficial for me if I could manage to figure out how it works.
I've tried adding this key to the detector's options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, etc.
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:
    CIDetectorAccuracyHigh, CIDetectorAccuracy,
    myAVCaptureStillImageOutput, CIDetectorTracking,
    nil];
But no matter what value I try to pass, it either crashes (obviously I'm just guessing here) or the debugger outputs:
Unknown CIDetectorTracking specified. Ignoring.
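One variant I haven't managed to verify yet is wrapping the boolean in an NSNumber; that's purely a guess on my part, since the documentation never says what type the value should be:

// Pure guesswork: maybe the key wants a boxed BOOL rather than some
// object from the capture pipeline. I can't confirm this from the docs.
NSDictionary *trackingOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyLow,
                                   CIDetectorTracking : @YES };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:trackingOptions];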
Normally, I wouldn't be guessing at this, but resources on this topic are virtually nonexistent. Apple's class reference states:
A key used to enable or disable face tracking for the detector. Use this option when you want to track faces across frames in a video.
Other than availability being iOS 6+ and OS X 10.8+, that's it.
Comments inside CIDetector.h:
/* The key in the options dictionary used to specify that feature tracking should be used. */
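The same header at least hints at how the results might come back: iOS 6 also adds hasTrackingID and trackingID properties to CIFaceFeature, so I'd expect that once the option is accepted, faces could be correlated across frames roughly like this (speculative, since I can't get that far; frameImage is a stand-in for a CIImage built from each video frame):

// Speculative usage: if tracking is enabled, each CIFaceFeature should,
// per the iOS 6 header, carry a trackingID that stays stable across frames.
NSArray *faces = [faceDetector featuresInImage:frameImage];
for (CIFaceFeature *face in faces) {
    if (face.hasTrackingID) {
        NSLog(@"Face %d at %@", face.trackingID,
              NSStringFromCGRect(face.bounds));
    }
}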
If that wasn't bad enough, a Google search provides 7 results (8 when they find this post), all of which are either Apple class references, API diffs, an SO post asking how to achieve this in iOS 5, or 3rd-party copies of the former.
All that being said, any hints or tips for the proper usage of CIDetectorTracking would be greatly appreciated!