The correct approach to recording the video without frame drops is to isolate the two tasks (frame acquisition and frame serialization) so that they don't influence each other; specifically, fluctuations in serialization time must not eat into the time available for capturing frames, which has to happen without delay to prevent frame loss.
This can be achieved by delegating the serialization (encoding the frames and writing them into a video file) to separate threads, and using some kind of synchronized queue to feed the data to the worker threads.
Following is a simple example showing how this could be done. Since I only have one camera and not the kind you have, I will simply use a webcam and duplicate the frames, but the general principle applies to your scenario as well.
Sample Code
First, we need some includes:
#include <opencv2/opencv.hpp>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
// ============================================================================
using std::chrono::high_resolution_clock;
using std::chrono::duration_cast;
using std::chrono::microseconds;
// ============================================================================
Synchronized Queue
The first step is to define our synchronized queue, which we will use to communicate with the worker threads that write the video.
The primary functionality we need is the ability to:
- Push new images into the queue.
- Pop images from the queue, waiting when it's empty.
- Cancel all pending pops when we're finished.
We use std::queue to hold the cv::Mat instances, and std::mutex to provide synchronization. A std::condition_variable is used to notify the consumer when an image has been inserted into the queue (or the cancellation flag has been set), and a simple boolean flag is used to signal cancellation.
Finally, we use the empty struct cancelled as an exception thrown from pop(), so we can cleanly terminate the worker by cancelling the queue.
// ============================================================================
class frame_queue
{
public:
    struct cancelled {};

public:
    frame_queue();

    void push(cv::Mat const& image);
    cv::Mat pop();

    void cancel();

private:
    std::queue<cv::Mat> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_;
};
// ----------------------------------------------------------------------------
frame_queue::frame_queue()
    : cancelled_(false)
{
}
// ----------------------------------------------------------------------------
void frame_queue::cancel()
{
    std::unique_lock<std::mutex> mlock(mutex_);
    cancelled_ = true;
    cond_.notify_all();
}
// ----------------------------------------------------------------------------
void frame_queue::push(cv::Mat const& image)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(image);
    cond_.notify_one();
}
// ----------------------------------------------------------------------------
cv::Mat frame_queue::pop()
{
    std::unique_lock<std::mutex> mlock(mutex_);

    // Block until a frame is available or the queue has been cancelled.
    // Cancellation is only reported once all queued frames have been popped.
    while (queue_.empty()) {
        if (cancelled_) {
            throw cancelled();
        }
        cond_.wait(mlock);
        if (cancelled_) {
            throw cancelled();
        }
    }

    cv::Mat image(queue_.front());
    queue_.pop();

    return image;
}
// ============================================================================
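In isolation, the queue can be exercised with a small sketch like the following (this is a hypothetical stand-alone demo, not part of the recording program; the frame size and count are arbitrary). Note that a cancelled queue still lets the consumer drain any frames that were pushed beforehand, and only then does pop() start throwing:

// ============================================================================
// Hypothetical stand-alone sketch exercising frame_queue with one producer
// and one consumer thread
void frame_queue_demo()
{
    frame_queue q;

    // Consumer: keep popping until the queue is cancelled (and drained)
    std::thread consumer([&q]() {
        try {
            for (;;) {
                cv::Mat image(q.pop());
                std::cout << "Popped a " << image.cols << "x" << image.rows
                    << " frame" << std::endl;
            }
        } catch (frame_queue::cancelled&) {
            std::cout << "Consumer finished." << std::endl;
        }
    });

    // Producer: push a few dummy frames, then signal that no more will come
    for (int32_t i(0); i < 5; ++i) {
        q.push(cv::Mat(480, 640, CV_8UC3, cv::Scalar(0, 0, 0)));
    }
    q.cancel();

    consumer.join();
}
// ============================================================================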
Storage Worker
The next step is to define a simple storage_worker, which will be responsible for taking frames from the synchronized queue and encoding them into a video file until the queue has been cancelled.
I've added simple timing, so we have some idea of how much time is spent encoding the frames, as well as simple logging to the console, so we can see what is happening in the program.
// ============================================================================
class storage_worker
{
public:
    storage_worker(frame_queue& queue
        , int32_t id
        , std::string const& file_name
        , int32_t fourcc
        , double fps
        , cv::Size frame_size
        , bool is_color = true);

    void run();

    double total_time_ms() const { return total_time_ / 1000.0; }

private:
    frame_queue& queue_;
    int32_t id_;

    std::string file_name_;
    int32_t fourcc_;
    double fps_;
    cv::Size frame_size_;
    bool is_color_;

    double total_time_;
};
// ----------------------------------------------------------------------------
storage_worker::storage_worker(frame_queue& queue
    , int32_t id
    , std::string const& file_name
    , int32_t fourcc
    , double fps
    , cv::Size frame_size
    , bool is_color)
    : queue_(queue)
    , id_(id)
    , file_name_(file_name)
    , fourcc_(fourcc)
    , fps_(fps)
    , frame_size_(frame_size)
    , is_color_(is_color)
    , total_time_(0.0)
{
}
// ----------------------------------------------------------------------------
void storage_worker::run()
{
    cv::VideoWriter writer(file_name_, fourcc_, fps_, frame_size_, is_color_);

    try {
        int32_t frame_count(0);
        for (;;) {
            // pop() blocks until a frame arrives, and throws once the queue
            // has been cancelled and fully drained
            cv::Mat image(queue_.pop());
            if (!image.empty()) {
                high_resolution_clock::time_point t1(high_resolution_clock::now());

                ++frame_count;
                writer.write(image);

                high_resolution_clock::time_point t2(high_resolution_clock::now());
                double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
                total_time_ += dt_us;

                std::cout << "Worker " << id_ << " stored image #" << frame_count
                    << " in " << (dt_us / 1000.0) << " ms" << std::endl;
            }
        }
    } catch (frame_queue::cancelled& /*e*/) {
        // Nothing more to process, we're done
        std::cout << "Queue " << id_ << " cancelled, worker finished." << std::endl;
    }
}
// ============================================================================
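Before putting everything together, here is a minimal sketch of how a single worker could be wired to a single queue; the function name, file name, codec, frame rate and frame size used here are just placeholders:

// ============================================================================
// Hypothetical wiring of one frame_queue to one storage_worker
void single_worker_demo(cv::Mat const& frame)
{
    frame_queue q;
    storage_worker worker(q, 0
        , std::string("demo.avi")
        , CV_FOURCC('M', 'J', 'P', 'G')
        , 30.0
        , frame.size()
        , true);

    std::thread worker_thread(&storage_worker::run, &worker);

    q.push(frame.clone()); // Feed frames as they are captured...
    q.cancel();            // ...then tell the worker no more are coming

    worker_thread.join();  // run() returns once the queue is drained

    std::cout << "Time spent encoding: " << worker.total_time_ms() << " ms" << std::endl;
}
// ============================================================================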
Processing
Finally, we can put this all together.
We begin by initializing and configuring our video source. Then we create two frame_queue instances, one for each stream of images, followed by two instances of storage_worker, one for each queue. To keep things interesting, I've set a different codec for each.
The next step is to create and start the worker threads, which will execute the run() method of each storage_worker. Having our consumers ready, we can start capturing frames from the camera and feeding them to the frame_queue instances. As mentioned above, I have only a single source, so I insert copies of the same frame into both queues.
NB: I need to use the clone() method of cv::Mat to make a deep copy; otherwise I would be inserting references to the single buffer that OpenCV's VideoCapture uses for performance reasons. That would mean the worker threads would all be getting references to that one shared image, with no synchronization protecting access to it. You need to make sure this does not happen in your scenario as well.
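To make the distinction concrete, a hypothetical fan-out helper would have to push clones rather than the captured frame itself:

// ============================================================================
// Hypothetical helper fanning one captured frame out to several queues
void fan_out(cv::Mat const& image, std::vector<frame_queue>& queues)
{
    for (auto& q : queues) {
        // q.push(image);       // Unsafe: every queue would share the same pixel buffer
        q.push(image.clone());  // Safe: each queue gets its own deep copy
    }
}
// ============================================================================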
Once we have read the appropriate number of frames (you can implement any other kind of stop-condition you desire), we cancel the work queues, and wait for the worker threads to complete.
Finally we write some statistics about the time required for the different tasks.
// ============================================================================
int main()
{
    // The video source -- for me this is a webcam, you use your specific camera API instead
    // I only have one camera, so I will just duplicate the frames to simulate your scenario
    cv::VideoCapture capture(0);

    // Let's make it decent sized, since my camera defaults to 640x480
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
    capture.set(CV_CAP_PROP_FPS, 20.0);

    // And fetch the actual values, so we can create our video correctly
    int32_t frame_width(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_WIDTH)));
    int32_t frame_height(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_HEIGHT)));
    double video_fps(std::max(10.0, capture.get(CV_CAP_PROP_FPS))); // Some default in case it's 0

    std::cout << "Capturing images (" << frame_width << "x" << frame_height
        << ") at " << video_fps << " FPS." << std::endl;

    // The synchronized queues, one per video source/storage worker pair
    std::vector<frame_queue> queue(2);

    // Let's create our storage workers -- let's have two, to simulate your scenario
    // and to keep it interesting, have each one write a different format
    std::vector<storage_worker> storage;
    storage.emplace_back(std::ref(queue[0]), 0
        , std::string("foo_0.avi")
        , CV_FOURCC('I', 'Y', 'U', 'V')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    storage.emplace_back(std::ref(queue[1]), 1
        , std::string("foo_1.avi")
        , CV_FOURCC('D', 'I', 'V', 'X')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    // And start the worker threads for each storage worker
    std::vector<std::thread> storage_thread;
    for (auto& s : storage) {
        storage_thread.emplace_back(&storage_worker::run, &s);
    }

    // Now the main capture loop
    int32_t const MAX_FRAME_COUNT(10);
    double total_read_time(0.0);
    int32_t frame_count(0);
    for (; frame_count < MAX_FRAME_COUNT; ++frame_count) {
        high_resolution_clock::time_point t1(high_resolution_clock::now());

        // Try to read a frame
        cv::Mat image;
        if (!capture.read(image)) {
            std::cerr << "Failed to capture image.\n";
            break;
        }

        // Insert a copy into all queues
        for (auto& q : queue) {
            q.push(image.clone());
        }

        high_resolution_clock::time_point t2(high_resolution_clock::now());
        double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
        total_read_time += dt_us;

        std::cout << "Captured image #" << frame_count << " in "
            << (dt_us / 1000.0) << " ms" << std::endl;
    }

    // We're done reading, cancel all the queues
    for (auto& q : queue) {
        q.cancel();
    }

    // And join all the worker threads, waiting for them to finish
    for (auto& st : storage_thread) {
        st.join();
    }

    if (frame_count == 0