
Objective-C - How to get a real-time video stream from the iPhone camera and send it to a server?

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexed with the audio, and how can I use FFmpeg to accomplish this? If anyone has a tutorial on FFmpeg or an example, please share it here.



1 Answer


The way I'm doing it is to set up an AVCaptureSession whose video data output has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.

Here's the flow:

http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2

And here's some code:

#import <AVFoundation/AVFoundation.h>

// make input device
NSError *deviceError = nil;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];

// make output device
// (the delegate callback runs on the queue given here; a dedicated serial queue
// is usually preferable to the main queue so frame handling doesn't block the UI)
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// initialize the capture session and wire up input and output
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];

// make a preview layer and add it so the camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];

// go!
[captureSession startRunning];
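
One detail the snippet above leaves implicit: if you intend to read the raw pixels in the delegate callback, it helps to pin the output to a known pixel format. This is not part of the original answer; a minimal addition, assuming you want 32-bit BGRA frames:

// assumption (not in the original answer): request BGRA so the bytes read in the
// delegate callback have a predictable layout
outputDevice.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
        forKey:(id)kCVPixelBufferPixelFormatTypeKey];

// drop frames that arrive while the previous one is still being processed
outputDevice.alwaysDiscardsLateVideoFrames = YES;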

Then the output device's delegate (here, self) has to implement the callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CGSize imageSize = CVImageBufferGetEncodedSize(imageBuffer);

    // the size is also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog(@"frame captured at %.fx%.f", imageSize.width, imageSize.height);
}
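
The callback above only logs the frame size. As a rough illustration of the "send each frame over the network" step, here is a sketch of pulling the raw bytes out of the pixel buffer inside that callback, assuming the BGRA format configured earlier; sendFrameData: is a hypothetical method standing in for whatever networking code you use:

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// lock the pixel buffer before touching its memory
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

void  *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);

// copy the frame out so the buffer can be unlocked right away
NSData *frameData = [NSData dataWithBytes:baseAddress length:bytesPerRow * height];

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// hypothetical networking call -- as noted below, raw BGRA frames are far too
// large to stream as-is, so in practice you would encode/compress first
[self sendFrameData:frameData];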

Sending raw frames or individual images will never work well enough for you (because of the amount of data and the number of frames). Nor can you reasonably serve anything from the phone itself (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). The iPhone 3GS and later have a hardware H.264 encoder chip. The problem is that it is not stream-oriented: it outputs the metadata required to parse the video last. This leaves you with a few options.

1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).

2) Write your own parser for the H.264/AAC output (very hard).

3) Record and process in chunks (this adds latency equal to the chunk length, and drops around 1/4 second of video between chunks as you stop and restart recording); see the sketch after this list.
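
For completeness, here is a rough sketch of option 3 using AVCaptureMovieFileOutput, stopping and restarting the recording on a timer so each finished chunk can be uploaded. This is an assumed structure, not code from the original answer; note that on iOS a movie file output is used in place of (not alongside) the video data output above, self is assumed to adopt AVCaptureFileOutputRecordingDelegate, and uploadChunkAtURL: is a hypothetical upload method:

// assumes self.movieOutput is an AVCaptureMovieFileOutput already added to the session

- (void)startNextChunk
{
    NSString *fileName = [NSString stringWithFormat:@"chunk-%.0f.mov",
                          [[NSDate date] timeIntervalSince1970]];
    NSURL *chunkURL = [NSURL fileURLWithPath:
                       [NSTemporaryDirectory() stringByAppendingPathComponent:fileName]];
    [self.movieOutput startRecordingToOutputFileURL:chunkURL recordingDelegate:self];

    // stop this chunk after a few seconds; the delegate callback below fires
    // once the file is finished, and immediately starts the next chunk
    [self performSelector:@selector(finishCurrentChunk) withObject:nil afterDelay:5.0];
}

- (void)finishCurrentChunk
{
    [self.movieOutput stopRecording];
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    // hypothetical upload method -- send the finished chunk to the server here
    [self uploadChunkAtURL:outputFileURL];

    // restart recording; roughly 1/4 second of video is lost across the gap
    [self startNextChunk];
}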

