I want to display a CALayer on top of video captured with AVCapture. I can display the layer, but for each new frame the previous frame's layer should be removed.
My code is:
[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
for (int i = 0; i < faces.size(); i++) {
    CGRect faceRect;
    faceRect.origin.x = xyPoints.x;
    faceRect.origin.y = xyPoints.y;
    faceRect.size.width  = 50; // faces[i].width;
    faceRect.size.height = 50; // faces[i].height;
    CALayer *featureLayer = nil;
    // faceRect = CGRectApplyAffineTransform(faceRect, t);
    if (!featureLayer) {
        featureLayer = [[CALayer alloc] init];
        featureLayer.borderColor = [[UIColor redColor] CGColor];
        featureLayer.borderWidth = 10.0f;
        [self.view.layer addSublayer:featureLayer];
    }
    featureLayer.frame = faceRect;
    NSLog(@"frame-x - %f, frame-y - %f, frame-width - %f, frame-height - %f",
          featureLayer.frame.origin.x, featureLayer.frame.origin.y,
          featureLayer.frame.size.width, featureLayer.frame.size.height);
}
// [featureLayer removeFromSuperlayer];
[CATransaction commit];
where faces is a const std::vector (the element type is cut off in the original).
I have also tried:
[featureLayer removeFromSuperlayer];
Note: "faces" is not used for face detection here... each one is just a rectangle.
I have found the solution... featureLayer is the CALayer object, and I give it a name as an identifier, like this:
featureLayer.name = @"earLayer";
Whenever I detect an object in a frame, I fetch the sublayers of the main view, like this:
NSArray *sublayers = [NSArray arrayWithArray:[self.view.layer sublayers]];
and loop over the sublayers to check each one, like this:
NSUInteger sublayersCount = [sublayers count];
NSUInteger currentSublayer = 0;
for (CALayer *layer in sublayers) {
    NSString *layerName = [layer name];
    if ([layerName isEqualToString:@"earLayer"])
        [layer setHidden:YES];
}
Now I get the correct layers for the detected objects.
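Since hiding the layers leaves them in the layer tree (they keep accumulating frame after frame), an alternative sketch removes the previous frame's layers outright before drawing the new ones. It assumes the layers were tagged with the name "earLayer" as above:

```objc
// Iterate over a copy, because removeFromSuperlayer mutates the
// sublayers array we would otherwise be enumerating.
for (CALayer *layer in [self.view.layer.sublayers copy]) {
    if ([layer.name isEqualToString:@"earLayer"]) {
        [layer removeFromSuperlayer];
    }
}
```

Run this at the start of each frame's drawing pass, so the sublayer count stays bounded instead of growing with every captured frame.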
A similar question on Stack Overflow: "iphone - How to add a CALayer rectangle over AVCapture video?": https://stackoverflow.com/questions/16860325/