iOS - Swift: video records at one size but renders at wrong size

The goal is to capture full-screen video on a device with Swift. In the code below, capture appears to happen at full screen (while recording, the camera preview fills the screen), but the video renders at a different resolution. On an iPhone 5S specifically, capture appears to happen at 320x568 while rendering comes out at 320x480.

How can you capture and render full screen video?

Code for video capture:

private func initPBJVision() {
    // Store the PBJVision singleton in a constant for convenience
    let vision = PBJVision.sharedInstance()

    // Configure PBJVision
    vision.delegate = self
    vision.cameraMode = PBJCameraMode.Video
    vision.cameraOrientation = PBJCameraOrientation.Portrait
    vision.focusMode = PBJFocusMode.ContinuousAutoFocus
    vision.outputFormat = PBJOutputFormat.Preset
    vision.cameraDevice = PBJCameraDevice.Back

    // Let taps start/pause recording
    let tapHandler = UITapGestureRecognizer(target: self, action: "doTap:")
    view.addGestureRecognizer(tapHandler)

    // Log status
    print("Configured PBJVision")
}


private func startCameraPreview() {
    // Store the PBJVision singleton in a constant for convenience
    let vision = PBJVision.sharedInstance()

    // Connect PBJVision camera preview to <videoView>
    // -- Get preview dimensions
    let deviceWidth = CGRectGetWidth(view.frame)
    let deviceHeight = CGRectGetHeight(view.frame)

    // -- Configure PBJVision's preview layer
    let previewLayer = vision.previewLayer
    previewLayer.frame = CGRectMake(0, 0, deviceWidth, deviceHeight)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    ...
}

Video rendering code:

func exportVideo(fileUrl: NSURL) {
    // Create main composition object
    let videoAsset = AVURLAsset(URL: fileUrl, options: nil)
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // -- Extract and apply video & audio tracks to composition
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Video error: (error).")
    }
    do {
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Audio error: (error).")
    }

    // Add text to video
    // -- Create video composition object
    let renderSize = compositionVideoTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTimeMake(Int64(1), Int32(videoFrameRate))

    // -- Add instruction to video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Define video frame
    let videoFrame = CGRectMake(0, 0, renderSize.width, renderSize.height)
    print("Video Frame: (videoFrame)")  // <-- Prints frame of 320x480 so render size already wrong here 
    ...
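
One way to narrow this down (a minimal diagnostic sketch, not part of the original code) is to inspect the recorded file's video track before building the composition. If the source track is already 320x480, the size is wrong at capture time rather than in the composition code:

let diagnosticAsset = AVURLAsset(URL: fileUrl, options: nil)
if let track = diagnosticAsset.tracksWithMediaType(AVMediaTypeVideo).first {
    // naturalSize ignores rotation metadata, so apply preferredTransform
    // to get the size as a player would display it
    let rect = CGRect(origin: CGPointZero, size: track.naturalSize)
    let displaySize = CGRectApplyAffineTransform(rect, track.preferredTransform).size
    print("Source track size: \(displaySize.width)x\(displaySize.height)")
}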
See Question&Answers more detail:os

与恶龙缠斗过久,自身亦成为恶龙;凝视深渊过久,深渊将回以凝视…
Welcome To Ask or Share your Answers For Others

1 Answer


If I understand you correctly, it seems you have misunderstood the fact that the device's screen size isn't equal to the camera preview (and capture) size.
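
For example (a quick sketch, not from the original post, assuming a device with a default video camera), you can log both sizes to see the mismatch:

import UIKit
import AVFoundation
import CoreMedia

func logScreenAndCaptureSizes() {
    // Screen size in points, e.g. 320x568 on an iPhone 5S
    let screen = UIScreen.mainScreen().bounds.size
    print("Screen (points): \(screen.width)x\(screen.height)")

    // The capture dimensions come from the device's active format
    // (e.g. 1920x1080) and are unrelated to the screen's point size
    if let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) {
        let dims = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
        print("Capture (pixels): \(dims.width)x\(dims.height)")
    }
}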

The videoGravity property of your previewLayer only controls how the preview is stretched/fitted inside the layer. It doesn't affect the capture output.
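
To make that concrete (illustrative only, using the previewLayer from the question's code): switching the gravity mode changes how the preview is drawn on screen, never the size of the recorded file:

// AspectFill crops the preview so it fills the layer edge to edge
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
// Aspect would letterbox the preview instead; either way, the file
// written to disk keeps the capture session's native dimensions
// previewLayer.videoGravity = AVLayerVideoGravityResizeAspect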

The actual frame size of the output depends on the sessionPreset property of your current AVCaptureSession. As far as I can tell from the PBJVision GitHub repository, its singleton has a setter for this (called captureSessionPreset). You can change it inside your initPBJVision method.

The possible session preset values (e.g. AVCaptureSessionPresetHigh or AVCaptureSessionPreset1280x720) are listed in the AVCaptureSession documentation.
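
A minimal sketch of that change (the 1280x720 preset here is an assumption; pick whichever AVCaptureSessionPreset matches the output size you need):

private func initPBJVision() {
    let vision = PBJVision.sharedInstance()
    // ... existing configuration from the question ...

    // Ask the underlying AVCaptureSession for a specific output size;
    // this, not the preview layer's frame, determines the recorded size
    vision.captureSessionPreset = AVCaptureSessionPreset1280x720
}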

