opencv - OpenCV iOS video processing

I'm trying to follow the tutorial found here for iOS video processing with the OpenCV framework.

I've successfully added the iOS OpenCV framework to my project, but there seems to be a mismatch between my framework and the one presented in the tutorial, and I am hoping someone can help me.

OpenCV uses the cv::Mat type to represent images. When using an AVFoundation delegate to process frames from the camera, I need to convert each CMSampleBufferRef to that type.

It seems that the OpenCV framework presented in the tutorial provides a camera interface that is imported with

#import <opencv2/highgui/cap_ios.h>

along with a new delegate callback.
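If I'm reading cap_ios.h correctly, that callback is declared by the CvVideoCameraDelegate protocol roughly as follows (quoted from memory, so treat it as approximate):

- (void)processImage:(cv::Mat &)image;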

Can anyone point me to where I can find this framework, or alternatively to a fast conversion between CMSampleBufferRef and cv::Mat?

EDIT

There is a lot of fragmentation in the OpenCV framework (at least for iOS). I've downloaded it through various "official" sites and also with tools such as fink and brew, following their instructions. I even compared the header files that were installed to /usr/local/include/opencv/; they were different each time. When downloading an OpenCV project, there are various CMake files and conflicting README files in the same project. I think I succeeded in building a good version for iOS with AVCapture functionality built into the framework (with the header <opencv2/highgui/cap_ios.h>) through this link, and then building the library using the Python script in the ios directory with the command python opencv/ios/build_framework.py ios. A sketch of how I expect to use the resulting camera class follows below. I will try to update.
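For completeness, this is roughly how I expect to drive the camera once that framework is in place, adapted from my reading of the tutorial (treat the class name, property names, and presets as placeholders from my own project, not something the tutorial mandates):

// ViewController.mm -- Objective-C++ so cv::Mat is visible
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>

@interface ViewController : UIViewController <CvVideoCameraDelegate>
@property (nonatomic, strong) CvVideoCamera *videoCamera;
@property (nonatomic, strong) UIImageView *previewView;
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.previewView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:self.previewView];

    // CvVideoCamera wraps the AVFoundation capture pipeline and renders into the parent view
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.previewView];
    self.videoCamera.defaultAVCaptureDevicePosition   = AVCaptureDevicePositionBack;
    self.videoCamera.defaultAVCaptureSessionPreset    = AVCaptureSessionPreset640x480;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;
    self.videoCamera.delegate = self;
    [self.videoCamera start];
}

// Each frame arrives here already wrapped in a cv::Mat
- (void)processImage:(cv::Mat &)image
{
    // per-frame OpenCV processing on image goes here
}

@end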


1 Answer


Here is the conversion that I use: lock the pixel buffer, create a cv::Mat, process the cv::Mat, then unlock the pixel buffer.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address so the pixel data stays valid while we read it
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int bufferWidth    = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight   = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Wrap the buffer in OpenCV; no memory is copied.
    // CV_8UC4 assumes the capture output delivers 32-bit BGRA frames.
    cv::Mat image(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);

    // Processing here

    // End processing
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

The above method does not copy any memory, so you do not own it; the pixelBuffer will free it for you. If you want your own copy of the buffer, just do

cv::Mat copied_image = image.clone();
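Note that the CV_8UC4 type above assumes the capture output is delivering 32-bit BGRA frames. If you configure the AVCaptureVideoDataOutput yourself, that would look roughly like this (a sketch, not part of the original answer):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// ask AVFoundation for BGRA so the CV_8UC4 wrap above matches the memory layout
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };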
