iOS8 Core Image In Swift: Automatically Improving Images and Using Built-in Filters
iOS8 Core Image In Swift: More Complex Filters
iOS8 Core Image In Swift: Face Detection and Mosaic
iOS8 Core Image In Swift: Real-Time Video Filters

Video capture

The prerequisite for applying filters in real time is full control over the camera and the UI, which means we cannot use the system-provided controller; we have to draw everything ourselves.
First create a Single View Application project (I named it RealTimeFilter), and, as before, turn off Auto Layout and Size Classes in the Storyboard. Then drop a Button in and connect its action to the VC's openCamera method. Next, add two properties to the VC:
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    var captureSession: AVCaptureSession!
    var previewLayer: CALayer!
    ......
override func viewDidLoad() {
    super.viewDidLoad()

    previewLayer = CALayer()
    previewLayer.bounds = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width)
    previewLayer.position = CGPointMake(self.view.frame.size.width / 2.0, self.view.frame.size.height / 2.0)
    previewLayer.setAffineTransform(CGAffineTransformMakeRotation(CGFloat(M_PI / 2.0)))

    self.view.layer.insertSublayer(previewLayer, atIndex: 0)

    setupCaptureSession()
}

func setupCaptureSession() {
    captureSession = AVCaptureSession()
    captureSession.beginConfiguration()
    captureSession.sessionPreset = AVCaptureSessionPresetLow
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    let deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: nil) as AVCaptureDeviceInput
    if captureSession.canAddInput(deviceInput) {
        captureSession.addInput(deviceInput)
    }

    let dataOutput = AVCaptureVideoDataOutput()
    dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    dataOutput.alwaysDiscardsLateVideoFrames = true

    if captureSession.canAddOutput(dataOutput) {
        captureSession.addOutput(dataOutput)
    }

    let queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL)
    dataOutput.setSampleBufferDelegate(self, queue: queue)
    captureSession.commitConfiguration()
}

Everything really begins with this method.
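Before starting the session it is also worth confirming that the app actually has camera access; a session started without permission silently delivers no frames. A minimal sketch using the iOS 8-era AVFoundation authorization API (the helper name and the handling of the denied case are my own, not from the article's project):

```swift
import AVFoundation

// Ask for camera access before starting the capture session.
func checkCameraAuthorization(completion: (granted: Bool) -> Void) {
    let status = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
    switch status {
    case .Authorized:
        completion(granted: true)
    case .NotDetermined:
        // The system alert is shown only once; the callback may arrive
        // on an arbitrary queue, so hop back to the main queue.
        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo) { granted in
            dispatch_async(dispatch_get_main_queue()) { completion(granted: granted) }
        }
    default:
        // .Denied or .Restricted - the session would deliver no frames.
        completion(granted: false)
    }
}
```

With a helper like this, openCamera could bail out early (or point the user at Settings) instead of starting a dead session.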
We have now finished building a session, but it is not doing any work yet. It is like accessing a database: open the database, establish a connection, access the data, close the connection, close the database. We start the session in the openCamera method:

@IBAction func openCamera(sender: UIButton) {
    sender.enabled = false
    captureSession.startRunning()
}

Since the VC already conforms to AVCaptureVideoDataOutputSampleBufferDelegate, all that remains is to implement this callback:

optional func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)

The pre-Core Image way

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    let width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    let lumaBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
    let grayColorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGBitmapContextCreate(lumaBuffer, width, height, 8, bytesPerRow, grayColorSpace, CGBitmapInfo.allZeros)
    let cgImage = CGBitmapContextCreateImage(context)

    dispatch_sync(dispatch_get_main_queue(), {
        self.previewLayer.contents = cgImage
    })
}
Build and run on a real device, and you should see a real-time grayscale effect like the one below:

(This picture is a screenshot taken from the phone with a shaky hand, so it is not very sharp.)
Processing with Core Image

As the steps above show, there is not much code, and the frames can be processed without Core Image. But it is laborious, hard to understand, and hard to maintain. If we want to add more effects (this is only a single grayscale effect), the code will become very bloated, so it does not scale well either.

In fact, converting the code above to Core Image is easy. We start by adding a CIFilter and a CIContext, the two core pieces of Core Image.

Add two new properties to the VC:
var filter: CIFilter!
lazy var context: CIContext = {
    let eaglContext = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
    let options = [kCIContextWorkingColorSpace : NSNull()]
    return CIContext(EAGLContext: eaglContext, options: options)
}()
Then make one small change to the session configuration: replace

kCVPixelFormatType_420YpCbCr8BiPlanarFullRange

with

kCVPixelFormatType_32BGRA

Next, rework the session callback into the form we are familiar with, like this:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // CVPixelBufferLockBaseAddress(imageBuffer, 0)
    // let width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    // let height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)
    // let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    // let lumaBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
    //
    // let grayColorSpace = CGColorSpaceCreateDeviceGray()
    // let context = CGBitmapContextCreate(lumaBuffer, width, height, 8, bytesPerRow, grayColorSpace, CGBitmapInfo.allZeros)
    // let cgImage = CGBitmapContextCreateImage(context)
    var outputImage = CIImage(CVPixelBuffer: imageBuffer)

    if filter != nil {
        filter.setValue(outputImage, forKey: kCIInputImageKey)
        outputImage = filter.outputImage
    }

    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())

    dispatch_sync(dispatch_get_main_queue(), {
        self.previewLayer.contents = cgImage
    })
}

This version is much easier to extend and maintain:
On this basis, all we need to do is add some filters.
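If you are unsure which filter names to use, Core Image can enumerate its built-in filters at runtime. A minimal sketch (the printing is just for illustration):

```swift
import CoreImage

// List every built-in Core Image filter name available on this OS version.
let names = CIFilter.filterNamesInCategory(kCICategoryBuiltIn)
for name in names as [String] {
    println(name)
}
```

Each returned name can be passed straight to CIFilter(name:), which is exactly what applyFilter below does.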
First add a UIView in the Storyboard to act as a container, put four buttons inside it, set their tags from 0 to 3, and connect all of their actions to the VC's applyFilter method. The UI looks like this:

Connect this UIView (the buttons' container) to the VC's filterButtonsContainer outlet, and add a string array holding some filter names. The VC's final properties look like this:
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet var filterButtonsContainer: UIView!
    var captureSession: AVCaptureSession!
    var previewLayer: CALayer!
    var filter: CIFilter!
    lazy var context: CIContext = {
        let eaglContext = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
        let options = [kCIContextWorkingColorSpace : NSNull()]
        return CIContext(EAGLContext: eaglContext, options: options)
    }()
    lazy var filterNames: [String] = {
        return ["CIColorInvert", "CIPhotoEffectMono", "CIPhotoEffectInstant", "CIPhotoEffectTransfer"]
    }()
    ......

The filter buttons start out hidden:

    ......
    filterButtonsContainer.hidden = true
    ......

They are shown once the camera is opened, and applyFilter simply looks up a name by the tapped button's tag:

@IBAction func openCamera(sender: UIButton) {
    sender.enabled = false
    captureSession.startRunning()
    self.filterButtonsContainer.hidden = false
}

@IBAction func applyFilter(sender: UIButton) {
    var filterName = filterNames[sender.tag]
    filter = CIFilter(name: filterName)
}

Saving to the photo library

Next we add a photo-taking feature.
First add a button titled "Take Picture" to the VC and connect it to the VC's takePicture method. Before implementing that method, there is some groundwork to finish.

The first issue is image metadata. An image may carry metadata such as location, format, and orientation, and orientation is the part we care about most. In the viewDidLoad method above, I rotated previewLayer so that we see a correctly oriented preview, but if we save the image straight to the photo library or a file, it will come out with the wrong orientation. To end up with a correctly oriented image, I remove the rotation from previewLayer:
......
previewLayer = CALayer()
// previewLayer.bounds = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width)
// previewLayer.position = CGPointMake(self.view.frame.size.width / 2.0, self.view.frame.size.height / 2.0)
// previewLayer.setAffineTransform(CGAffineTransformMakeRotation(CGFloat(M_PI / 2.0)))
previewLayer.anchorPoint = CGPointZero
previewLayer.bounds = view.bounds
......

If you run now, you will see a wrongly oriented preview. Instead, we handle orientation in one place, the captureSession callback, by modifying the earlier implementation:

......
var outputImage = CIImage(CVPixelBuffer: imageBuffer)
let orientation = UIDevice.currentDevice().orientation
var t: CGAffineTransform!
if orientation == UIDeviceOrientation.Portrait {
    t = CGAffineTransformMakeRotation(CGFloat(-M_PI / 2.0))
} else if orientation == UIDeviceOrientation.PortraitUpsideDown {
    t = CGAffineTransformMakeRotation(CGFloat(M_PI / 2.0))
} else if orientation == UIDeviceOrientation.LandscapeRight {
    t = CGAffineTransformMakeRotation(CGFloat(M_PI))
} else {
    t = CGAffineTransformMakeRotation(0)
}
outputImage = outputImage.imageByApplyingTransform(t)

if filter != nil {
    filter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = filter.outputImage
}
......

Run it, and the result looks the same as before. With orientation handled, we also keep outputImage in an instance variable, because it carries the image metadata and we do not want to discard it. Add a CIImage property to the VC:

var ciImage: CIImage!

......
if filter != nil {
    filter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = filter.outputImage
}
let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
ciImage = outputImage
......

Finally, the takePicture implementation:

@IBAction func takePicture(sender: UIButton) {
    sender.enabled = false
    captureSession.stopRunning()

    var cgImage = context.createCGImage(ciImage, fromRect: ciImage.extent())
    ALAssetsLibrary().writeImageToSavedPhotosAlbum(cgImage, metadata: ciImage.properties()) {
        (url: NSURL!, error: NSError!) -> Void in
        if error == nil {
            println("Saved successfully")
            println(url)
        } else {
            let alert = UIAlertView(title: "Error",
                                    message: error.localizedDescription,
                                    delegate: nil,
                                    cancelButtonTitle: "OK")
            alert.show()
        }
    }
}