iphone - How do I add a CALayer rectangle over AVCapture video?
<p><p>I want to display a CALayer on top of video captured with AVCapture.
I can display the layer, but for each new frame the layer drawn for the previous frame should be removed.</p>
<p>My code is:</p>
<pre><code>;
;
for (int i = 0; i < faces.size(); i++) {
CGRect faceRect;
// Get the Graphics Context
faceRect.origin.x = xyPoints.x;
faceRect.origin.y = xyPoints.y;
faceRect.size.width =50; //faces.width;
faceRect.size.height =50;// faces.height;
CALayer *featureLayer=nil;
// faceRect = CGRectApplyAffineTransform(faceRect, t);
if (!featureLayer) {
featureLayer = [init];
featureLayer.borderColor = [ CGColor];
featureLayer.borderWidth = 10.0f;
;
}
featureLayer.frame = faceRect;
NSLog(@"frame-x - %f, frame-y - %f, frame-width - %f, frame-height - %f",featureLayer.frame.origin.x,featureLayer.frame.origin.y,featureLayer.frame.size.width,featureLayer.frame.size.height);
}
//;
;
</code></pre>
<p>where <code>faces</code> is a <code>(const std::vector&lt;cv::Rect&gt;)faces</code> in OpenCV format.
I need to know where to put the code that removes the previous frame's layer.</p>
<p>Note: "faces" is not used for face detection here... it is just a rectangle.</p></p>
<br><hr><h1><strong>Best Answer</strong></h1><br>
<p><p>I have found the solution...
featureLayer is the CALayer object, and I give it a name as an identity, like:</p>
<pre><code>featureLayer.name = @"earLayer";
</code></pre>
<p>Whenever I detect objects in a frame, I get the sublayers from the main view, like:</p>
<pre><code>// the call on the right was stripped by the site's HTML filter; reconstructed
NSArray *sublayers = [NSArray arrayWithArray:[self.view.layer sublayers]];
</code></pre>
<p>and count the sublayers so I can check them in a for loop, like this:</p>
<pre><code>int sublayersCount = [sublayers count];
int currentSublayer = 0;
for (CALayer *layer in sublayers) {
    NSString *layerName = [layer name];
    if ([layerName isEqualToString:@"earLayer"])
        [layer setHidden:YES]; // or [layer removeFromSuperlayer];
}
</code></pre>
</code></pre>
<p>Now I get the correct layers for the detected objects.</p></p>
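<p>Putting the answer's two steps together (tag each rectangle layer with a name, then remove the stale ones before drawing the next frame), a minimal per-frame sketch could look like the following. This is an assumption-laden illustration, not the poster's exact code: the method name <code>drawRectsForFaces:</code>, the use of <code>self.view.layer</code> as the host layer, and the red border color are all hypothetical; only the <code>@"earLayer"</code> naming trick comes from the answer.</p>
<pre><code>// Sketch only: all names except @"earLayer" are assumptions.
- (void)drawRectsForFaces:(const std::vector&lt;cv::Rect&gt; &amp;)faces
{
    // 1. Remove the rectangles left over from the previous frame.
    NSArray *sublayers = [NSArray arrayWithArray:[self.view.layer sublayers]];
    for (CALayer *layer in sublayers) {
        if ([[layer name] isEqualToString:@"earLayer"])
            [layer removeFromSuperlayer];
    }

    // 2. Add one named layer per rectangle detected in the current frame.
    for (size_t i = 0; i &lt; faces.size(); i++) {
        CALayer *featureLayer = [CALayer layer];
        featureLayer.name = @"earLayer"; // identity used for cleanup next frame
        featureLayer.borderColor = [[UIColor redColor] CGColor];
        featureLayer.borderWidth = 10.0f;
        featureLayer.frame = CGRectMake(faces[i].x, faces[i].y,
                                        faces[i].width, faces[i].height);
        [self.view.layer addSublayer:featureLayer];
    }
}
</code></pre>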
<p style="font-size: 20px;">Regarding iphone - How do I add a CALayer rectangle over AVCapture video?, we found a similar question on Stack Overflow:
<a href="https://stackoverflow.com/questions/16860325/" rel="noreferrer noopener nofollow" style="color: red;">
https://stackoverflow.com/questions/16860325/
</a>
</p>