
avfoundation - How to apply a Vignette CIFilter to a live camera feed in iOS?

While trying to apply a simple vignette filter to the raw camera feed of an iPhone 6, using Metal and Core Image, I see a lot of lag between the frames being processed and those rendered in the MTKView.

The approach I have followed is (MetalViewController.swift):

  1. Get the raw camera output using AVCaptureVideoDataOutputSampleBufferDelegate.
  2. Convert CMSampleBuffer > CVPixelBuffer > CGImage.
  3. Create an MTLTexture with this CGImage.

Steps 2 and 3 are inside a method named fillMTLTextureToStoreTheImageData (a simplified sketch of it follows the code below).

  4. Apply a CIFilter to the CIImage fetched from the MTLTexture in the MTKViewDelegate:
    func draw(in view: MTKView) {

        if let currentDrawable = view.currentDrawable {
            let commandBuffer = self.commandQueue.makeCommandBuffer()

            if let myTexture = self.sourceTexture {

                // Wrap the texture produced by fillMTLTextureToStoreTheImageData in a CIImage
                let inputImage = CIImage(mtlTexture: myTexture, options: nil)

                self.vignetteEffect.setValue(inputImage, forKey: kCIInputImageKey)

                // Render the filtered image straight into the drawable's texture
                self.coreImageContext.render(self.vignetteEffect.outputImage!, to: currentDrawable.texture, commandBuffer: commandBuffer, bounds: inputImage!.extent, colorSpace: self.colorSpace)

                commandBuffer?.present(currentDrawable)

                commandBuffer?.commit()
            }
        }
    }
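
Roughly, fillMTLTextureToStoreTheImageData does something like the sketch below (simplified, not the exact code; the cgImageContext and device properties are illustrative names). The point is that both conversions go through the CPU:

    func fillMTLTextureToStoreTheImageData(from sampleBuffer: CMSampleBuffer) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // CVPixelBuffer -> CGImage: pulls the frame into CPU memory and converts it there
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = self.cgImageContext.createCGImage(ciImage, from: ciImage.extent) else { return }

        // CGImage -> MTLTexture: uploads the converted pixels back to the GPU
        let textureLoader = MTKTextureLoader(device: self.device)
        self.sourceTexture = try? textureLoader.newTexture(cgImage: cgImage, options: nil)
    }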

The performance is nowhere near what Apple describes in this doc: https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-TPXREF101

Am I missing something?


1 Answer


Your step 2 is way too slow to support real-time rendering... and it looks like you're missing a couple of steps. For your purpose, you would typically:

Setup:

  1. Create a pool of CVPixelBuffers using CVPixelBufferPoolCreate.
  2. Create a Metal texture cache using CVMetalTextureCacheCreate (see the sketch after this list).
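
A minimal setup sketch of those two points (the property names and the BGRA pixel-buffer attributes are assumptions, not part of the original answer):

    import CoreVideo
    import Metal

    var pixelBufferPool: CVPixelBufferPool?
    var textureCache: CVMetalTextureCache?

    func setUpBufferSharing(device: MTLDevice, width: Int, height: Int) {
        // A pool of Metal-compatible BGRA pixel buffers that can be reused every frame
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height,
            kCVPixelBufferMetalCompatibilityKey as String: true
        ]
        CVPixelBufferPoolCreate(kCFAllocatorDefault, nil,
                                attributes as CFDictionary, &pixelBufferPool)

        // A texture cache that wraps CVPixelBuffers as MTLTextures without copying
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }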

For each frame:

  1. Convert CMSampleBuffer > CVPixelBuffer > CIImage.
  2. Pass that CIImage through your filter pipeline.
  3. Render the output image into a CVPixelBuffer from the pool created in setup step 1.
  4. Use CVMetalTextureCacheCreateTextureFromImage to create a Metal texture from your filtered CVPixelBuffer (see the sketch after this list).
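
A per-frame sketch continuing the setup above (filter and ciContext stand for a CIFilter and a CIContext created once, outside the capture callback; CoreImage and AVFoundation are imported in addition):

    func makeFilteredTexture(from sampleBuffer: CMSampleBuffer) -> MTLTexture? {
        guard let pool = pixelBufferPool,
              let cache = textureCache,
              let sourceBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

        // 1. CMSampleBuffer -> CVPixelBuffer -> CIImage (no pixel copy)
        let inputImage = CIImage(cvPixelBuffer: sourceBuffer)

        // 2. Run the filter pipeline
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else { return nil }

        // 3. Render the filtered image into a pixel buffer taken from the pool
        var outputBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &outputBuffer)
        guard let renderTarget = outputBuffer else { return nil }
        ciContext.render(outputImage, to: renderTarget)

        // 4. Wrap the filtered pixel buffer in a Metal texture via the cache
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, renderTarget, nil,
                                                  .bgra8Unorm,
                                                  CVPixelBufferGetWidth(renderTarget),
                                                  CVPixelBufferGetHeight(renderTarget),
                                                  0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }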

If set up correctly, all of these steps keep your image data on the GPU, as opposed to travelling from GPU to CPU and back to GPU for display.

The good news is that all of this is demonstrated in Apple's AVCamPhotoFilter sample code: https://developer.apple.com/library/archive/samplecode/AVCamPhotoFilter/Introduction/Intro.html#//apple_ref/doc/uid/TP40017556. In particular, see the RosyCIRenderer class and its superclass FilterRenderer.

