Is there any way with OpenCV to read frames from a video file in parallel or speed up reading in some other way?
I have tried calling cap.read(frame) on one VideoCapture from multiple threads, but the application crashes.
I also tried an array caps of VideoCapture objects, all referencing the same video file; each thread can then call caps[i].read(frame), so I can read in parallel, but every thread just reads the same frames.
I have not found any other way to speed up reading other than changing the video format. I changed it to HapQ (the original format was Apple ProRes 422) and the performance was noticeably better, about 30% faster (20-25 ms to read a frame compared to 30-35 ms before).
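For reference, here is a stripped-down sketch of the per-segment splitting I am trying to get working: each thread gets its own VideoCapture and seeks to its own start frame (processSegment and the thread count are just placeholders, and seeking via CV_CAP_PROP_POS_FRAMES may be slow or keyframe-inexact with some codecs):

#include <opencv2/opencv.hpp>
#include <thread>
#include <vector>

// Each thread opens its OWN VideoCapture and seeks to its own segment,
// so no capture object is shared between threads.
void processSegment(const std::string& path, int firstFrame, int numFrames)
{
    cv::VideoCapture cap(path);
    cap.set(CV_CAP_PROP_POS_FRAMES, firstFrame);  // seek; may be slow/inexact
    cv::Mat frame;
    for(int i = 0; i < numFrames && cap.read(frame); ++i){
        // per-frame work goes here
    }
}

int main()
{
    const std::string path = "film.mov";
    cv::VideoCapture probe(path);
    int total = (int)probe.get(CV_CAP_PROP_FRAME_COUNT);
    int nThreads = 4, per = total / nThreads;

    std::vector<std::thread> workers;
    for(int t = 0; t < nThreads; ++t)
        workers.emplace_back(processSegment, path, t * per, per);
    for(auto& w : workers)
        w.join();
}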
I am working on a program that does simple analysis (color, etc.) of videos, specifically films. Since films are often thousands of frames long, I figured it would be inefficient to simply iterate through the video with cap.read() and capture each frame individually. I would like to use multiple cores or processes to read and save the pertinent information from each frame for certain sections of the video; for example, have one core read and analyze frames from the first quarter of the video while other cores do the same for other parts, and combine the information after all frames have been read. How might this be done?
I am relatively new to OpenCV and the practice of multithreading, but I am eager to learn. Even if you can only point me to some resources to look at, I would appreciate it a lot!
I'm writing a program that reads multiple webcams, stitches the pictures together and saves them into a video file.
Should I use threads to capture the images and to write the resulting large image to a file?
If yes, should I use C++11 or Boost?
Do you maybe have some simple example code that shows how to avoid race conditions?
I already found this thread, which uses threads for writing, but it doesn't seem to avoid race conditions.
Pseudocode of my program:

camVector                // vector with cameras
VideoWriter outputVideo  // outputs the video

while(true){
    for(camera : camVector){
        camera.grab()    // send signal to grab a picture
    }
    picVector = getFrames(camVector)  // collect frames of all cameras into a vector
    Mat bigImg = stitchPicTogether(picVector)
    outputVideo.write(bigImg)
}
That's what I'd do:
camVector                // vector with cameras
VideoWriter outputVideo  // outputs the video

for(camera : camVector) camera.grab();  // request first frame

while(true){
    parallel for(camera : camVector){
        frame = getFrame(camera);
        pasteFrame(/* destination */ bigImg,
                   /* geometry    */ geometry[camera],
                   /* source      */ frame);
        camera.grab();   // request next frame, so the webcam driver can start working
                         // while you write to disk
    }
    outputVideo.write(bigImg)
}
This way stitching is done in parallel, and if your webcam drivers have different timing, you can begin stitching what you received from one webcam while you wait for the other webcam's frame.
About implementation: you could simply go with OpenMP, which is already used in OpenCV, with something like #pragma omp parallel. If you prefer something more C++-like, give Intel TBB a try; it's cool.
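For illustration, a minimal OpenMP sketch of that parallel loop (geometry, bigImg and the camera handling are assumptions about your setup, not a tested implementation):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::VideoCapture> cams;  // already opened
std::vector<cv::Rect> geometry;      // where each camera's frame goes in the layout
cv::Mat bigImg;                      // preallocated to the stitched size

void stitchOnce()
{
    // One iteration per camera; OpenMP runs them on separate threads.
    // Each thread writes to a disjoint ROI of bigImg, so no lock is needed.
    #pragma omp parallel for
    for(int i = 0; i < (int)cams.size(); ++i){
        cv::Mat frame;
        cams[i].retrieve(frame);            // get the frame grabbed earlier
        frame.copyTo(bigImg(geometry[i]));  // paste into this camera's ROI
        cams[i].grab();                     // request the next frame right away
    }
}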
Or, as you said, you could go with native C++11 threads or Boost. Choosing between the two mainly depends on your work environment: if you intend to work with older compilers, Boost is safer.
In all cases, no locking is required, except a join at the end to wait for all threads to finish their work.
If your bottleneck is writing the output, the limiting factor is either disk I/O or video compression speed. For disk, you can try changing the codec and/or compressing more. For compression speed, have a look at GPU codecs like this.
I am writing a program that involves real-time processing of video from a network camera using OpenCV. I want to be able to capture, at any point during processing, the previous images (say, ten seconds' worth) and save them to a video file.
I am currently doing this using a queue as a buffer (pushing cv::Mat data), but this is obviously not efficient, as a few seconds' worth of images soon uses up all the PC's memory. I tried compressing the images using cv::imencode, but that doesn't make much difference with PNG; I need a solution that uses hard-drive storage and is efficient enough for real-time operation.
Can anyone suggest a very simple and efficient solution?
EDIT:
Just so that everyone understands what I'm doing at the moment, here's the code for a 10-second buffer:
void run()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(CV_CAP_PROP_FPS);
    int buffer_length = 10;      // in seconds
    int wait = 1000.0/fps;       // target ms per frame
    QTime time;
    forever {
        time.restart();
        cv::Mat image;
        bool read = cap.read(image);
        if(!read)
            break;
        bool locked = _mutex.tryLock(10);
        if(locked){
            if(image.data){
                _buffer.push(image);
                if((int)_buffer.size() > (fps*buffer_length))
                    _buffer.pop();   // drop the oldest frame
            }
            _mutex.unlock();
        }
        int time_taken = time.elapsed();
        if(time_taken < wait)
            msleep(wait - time_taken);
    }
    cap.release();
}
queue<cv::Mat> _buffer and QMutex _mutex are global variables. If you're familiar with Qt, signals and slots, etc.: I've got a slot that grabs the buffer and saves it as a video using cv::VideoWriter.
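The slot itself is roughly this (simplified; the filename, codec and fps value are placeholders):

void saveBuffer()   // Qt slot, simplified sketch
{
    QMutexLocker lock(&_mutex);
    if(_buffer.empty())
        return;
    cv::VideoWriter writer("clip.avi", CV_FOURCC('D','I','V','X'),
                           25 /* fps */, _buffer.front().size(), true);
    while(!_buffer.empty()){
        writer.write(_buffer.front());
        _buffer.pop();
    }
}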
EDIT:
I think the ideal solution would be for my queue<cv::Mat> _buffer to use hard-drive storage rather than PC memory. Not sure on which planet that's possible? :/
I suggest looking into real-time compression with x264 or similar. x264 is regularly used for real-time encoding of video streams and, with the right settings, can encode multiple streams or a 1080p video stream on a moderately powered processor.
I suggest asking in doom9's forum or similar forums.
x264 is a free H.264 encoder which can achieve 100:1 or better compression (versus raw). The output of x264 can be stored in your memory queue with much greater efficiency than uncompressed (or losslessly compressed) video.
UPDATED
One thing you can do is store images to the hard disk using imwrite and push their filenames onto the queue. When the queue is full, delete the images as you pop their filenames.
In your video-writing slot, load the images as their filenames are popped from the queue and write them to your VideoWriter instance.
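A minimal sketch of that idea (the filename pattern and queue size are assumptions):

#include <opencv2/opencv.hpp>
#include <cstdio>   // std::snprintf, std::remove
#include <queue>
#include <string>

std::queue<std::string> _fileQueue;  // filenames instead of cv::Mat
int _maxFrames = 250;                // e.g. 10 s at 25 fps

void pushFrame(const cv::Mat& image, int frameIndex)
{
    char name[64];
    std::snprintf(name, sizeof(name), "buffer_%06d.png", frameIndex);
    cv::imwrite(name, image);        // frame lives on disk, not in RAM
    _fileQueue.push(name);
    if((int)_fileQueue.size() > _maxFrames){
        std::remove(_fileQueue.front().c_str());  // delete the oldest image
        _fileQueue.pop();
    }
}

void saveVideo(cv::VideoWriter& record)
{
    while(!_fileQueue.empty()){
        record.write(cv::imread(_fileQueue.front()));
        std::remove(_fileQueue.front().c_str());
        _fileQueue.pop();
    }
}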
You mentioned you needed to use hard-drive storage.
In that case, consider using the OpenCV HighGUI VideoWriter. You can create an instance of VideoWriter as below:
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'),
                   30, frame.size(), true);
And write captured images to it as below:
record.write(image);
Find the documentation and the sample program on the website.
We're currently developing some functionality for our program that needs OpenCV. One of the ideas being tossed around the table is the use of a "buffer" which saves a minute of video data to memory; for every event trigger, we then need to extract something like a 13-second video file from that buffer.
Currently we don't have enough experience with OpenCV, so we don't know whether this is possible. Looking at the documentation, the only functions that write to memory are imencode and imdecode, but those work on images. If we can find a way to write sequences of images to a video file that would be neat, but for now our idea is to use a video buffer.
We're also using the OpenCV version 2 API.
TL;DR: We want to know if it is possible to write a portion of a video to memory.
In OpenCV, every video is treated as a collection of frames (images). Depending on your camera's FPS, you can capture frames periodically and fill the buffer with them, meanwhile destroying the oldest frame (the one taken a minute earlier). So a FIFO data structure can be implemented to achieve your goal. Getting a 13-second sample is easy: just jump to a frame and write 13*FPS frames sequentially to a video file.
But there will be some sync and timing problems, AFAIK and as far as I've used OpenCV.
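A rough sketch of such a FIFO buffer with the OpenCV 2 API (the FPS value and codec are assumptions):

#include <opencv2/opencv.hpp>
#include <deque>

const int FPS = 25;
std::deque<cv::Mat> buffer;           // FIFO holding the last 60 s of frames

void onNewFrame(const cv::Mat& frame)
{
    buffer.push_back(frame.clone());  // clone: the capture reuses its buffer
    if((int)buffer.size() > 60 * FPS) // keep one minute
        buffer.pop_front();
}

void onEventTrigger()                 // dump a 13-second clip
{
    cv::VideoWriter out("event.avi", CV_FOURCC('D','I','V','X'),
                        FPS, buffer.front().size(), true);
    int start = (int)buffer.size() - 13 * FPS;  // e.g. the most recent 13 s
    if(start < 0) start = 0;
    for(int i = start; i < (int)buffer.size(); ++i)
        out.write(buffer[i]);
}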
Here is the link to the OpenCV documentation about video I/O. In particular, the last chunk of code on that page is what you will use for writing.
TL;DR: There is no video; there are only sequential images with small differences between them. So you need to treat them as such.
I'm writing a video player. For the audio part I'm using XAudio2: I have a separate thread that waits for the BufferEnd event, then fills the buffer with new data and calls SubmitSourceBuffer.
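In outline, the audio thread does this (FillNextChunk is a stand-in for my decode and synchronization step):

#include <windows.h>
#include <xaudio2.h>

IXAudio2SourceVoice* voice;   // created elsewhere with a 48 kHz stereo PCM format
HANDLE bufferEndEvent;        // signalled from my IXAudio2VoiceCallback::OnBufferEnd
void FillNextChunk(BYTE* dst, size_t n);  // my decoder (placeholder)

void audioThread()
{
    static BYTE pcm[1024];                 // the small chunk in question
    for(;;){
        WaitForSingleObject(bufferEndEvent, INFINITE);
        FillNextChunk(pcm, sizeof(pcm));   // decode + clock sync, measured at 0-1 ms
        XAUDIO2_BUFFER buf = {0};
        buf.AudioBytes = sizeof(pcm);      // 1024 bytes of raw 48 kHz stereo PCM
        buf.pAudioData = pcm;
        voice->SubmitSourceBuffer(&buf);   // only one buffer is ever in flight,
                                           // so reusing pcm here is safe
    }
}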
The problem is that XAudio2 (the driver or the sound card) has huge delays before playing the next buffer if the buffer size is small (1024 bytes). I made measurements, and XAudio2 takes up to twice as long to play such a chunk. (A 1024-byte chunk of raw 48 kHz 2-channel PCM should play in roughly 5 ms, but on my computer it takes up to 10 ms.) There are nearly no delays if I make the buffer 4 kB or larger.
I need such a small buffer to be able to synchronize with the video clock or an external clock (like ffplay does). If I make my buffer too big, the end user will hear a lot of noise in the output due to the synchronization adjustments.
I have also measured all my functions that decode, synchronize audio, or do anything else that could block or produce delays; they take 0 or 1 ms to execute, so they are definitely not the problem.
Does anybody know what this could be and why it's happening? Can anyone check whether they see the same delay problems with a small buffer?
I've not experienced any delay or pause using .wav files. If you are using the MP3 format, the compression operation may add silence at the beginning and end of the sound, causing a delay in your sound playing. See this post for more information.