OpenCV - VideoCapture of webcam - latency issue - C++

So I'm working on an augmented reality project.
I use OpenCV to capture images from two cameras.
Those cameras aren't very capable; I think their maximum frame rate is around 30 fps.
If I refresh the camera frames (with the read method) on every iteration, the application runs at about 25 fps. If I don't refresh them, it runs at about 55 fps.
I suspect this latency occurs because OpenCV waits for the camera to produce a new frame before moving on to the next step of the program.
But I need all the virtual objects to be rendered at at least 55 fps for immersion. Is there a way to tell OpenCV to skip the call if there is no new frame in the VideoCapture object?
And if not, is there another, more efficient cross-platform API for camera control?
Thanks!

I have never used OpenCV in C++, but I think the behaviour is the same. I am using OpenCV4Android, and when I need to do work on an incoming frame, putting that work inside the onCameraFrame() callback (which I believe is analogous to read() in C++) actually lowers the fps, because the next frame only arrives once the callback returns.
My solution is to process the frame in another thread. In your read() path, just set a flag indicating whether a new frame is available in the VideoCapture object; the processing thread checks the flag and, if a frame is there, processes it. The fps will be better.
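In C++, a minimal sketch of that pattern might look like the following. The camera index, window name and Esc-key handling are illustrative, not taken from the question; the point is that read() blocks in the grabber thread while the render loop runs at its own rate.

#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // illustrative camera index
    cv::Mat latest;
    std::mutex mtx;
    std::atomic<bool> running{true}, hasFrame{false};

    std::thread grabber([&] {
        cv::Mat frame;
        while (running) {
            if (cap.read(frame)) {           // blocks here, not in the render loop
                std::lock_guard<std::mutex> lock(mtx);
                frame.copyTo(latest);
                hasFrame = true;
            }
        }
    });

    cv::Mat display;
    while (true) {                           // render loop runs at its own rate
        if (hasFrame) {
            std::lock_guard<std::mutex> lock(mtx);
            latest.copyTo(display);
        }
        if (!display.empty())
            cv::imshow("AR view", display);  // render virtual objects over `display` here
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    running = false;
    grabber.join();
    return 0;
}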

Related

how to programmatically modify the FPS of a video

I'm using OpenCV's cv::VideoCapture class to read frames from videos.
My guess is I could drop every 2nd frame to go from 30 FPS to 15 FPS, or I could drop every 3rd frame to go from 30 FPS to 20 FPS, etc.
...but I suspect this is not the right way to do it. Isn't there some sort of interpolation or re-interpretation of frames that needs to happen to smoothly modify the FPS?
Assuming there is, what would this be called so I can search for it? I believe projects like VLC can re-encode videos to use a different FPS, but I'm more curious to know how to do this programmatically in C++ with OpenCV.
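This doesn't address the interpolation part of the question, but for the simple integer-ratio case guessed at above (keeping one frame out of every N), a rough re-encoding sketch with OpenCV could look like this. The file names, codec and drop factor are placeholders.

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture in("input.mp4");
    double srcFps = in.get(cv::CAP_PROP_FPS);      // e.g. 30
    int drop = 2;                                  // keep every 2nd frame -> 15 FPS
    cv::Size size((int)in.get(cv::CAP_PROP_FRAME_WIDTH),
                  (int)in.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter out("output.mp4",
                        cv::VideoWriter::fourcc('m', 'p', '4', 'v'),
                        srcFps / drop, size);
    cv::Mat frame;
    for (int i = 0; in.read(frame); ++i)
        if (i % drop == 0)                         // write 1 frame out of every `drop`
            out.write(frame);
    return 0;
}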

C++ OpenCV VideoWriter frame rate synchronization

I am capturing frames from a video grabber card. Those frames are processed and written to the HDD.
The whole setup is multithreaded: the grabber writes the images to a queue, another thread processes them, and yet another thread writes them to the HDD. If an image is judged good by the processor, it gets written to the HDD. If 10 images in a row are "bad", the file is finalized. If 9 or fewer images are "bad", they are all written together with the next good image, and the file writer is informed accordingly.
Here is the problem: if I don't do it this way and instead write each frame directly after it is processed, the video file is fine, but the 9 "bad" images get written too. If I do it as described above, the speed/frame rate of the video is wrong. This description is a bit convoluted, so here is a simplified example that shows the problem:
void FrameWriter::writeFrameLoop() {
    string path = getPath();
    cv::Size2i size(1350, 1080);
    cv::VideoWriter videoWriter(path, fourcc, 30, size);

    while (this->isRunning) {
        while (!this->frames.empty()) {
            usleep(100000);                  // this affects the speed/frame rate
            videoWriter.write(this->pop());
        }
        std::this_thread::sleep_for(10ms);
    }
    videoWriter.release();
}
The example is pretty simple: here I "block" the writing process with a sleep (remember this runs in a separate thread), which means that after capturing stops, writing the file takes a bit longer.
But I would expect this not to affect the video itself, because the frame rate is 30 and the images are still in the same order. Yet it does seem to affect the video file when I don't call videoWriter.write in time; in that case the video plays much faster than expected.
I thought that only the configured frame rate of 30 and the number of written images would affect the video speed, but apparently not. Can anyone help me understand what is going on here?
I am using OpenCV 4.4.0 on Ubuntu 18.04.
Thank you for your help.
BR Michael
I think I know why the resulting videos play too fast.
In the constructor cv::VideoWriter videoWriter(path, fourcc, 30, size); you set the frame rate (FPS) of the resulting video to 30. That means the library expects exactly 30 frames to be written via write() for each second of the resulting video stream.
Also, the library doesn't care how fast you call write() with a new frame; you may call it 5 times per second, or 10, or even 1000. The only thing that matters is that you provide exactly 30 frames for each second of video; how quickly you provide those 30 frames is irrelevant.
In other words, all of your sleep(...) calls make no difference to the VideoWriter class, and the same is true of video rendering/conversion libraries in general. Pausing the thread changes nothing.
But in your case you say you grab 10 frames per second of real-time video from your grabber card, which means your actual FPS is 10. So to solve the task correctly:
Remove all of the pausing functionality, i.e. the sleep() calls. It is not needed and doesn't change the behaviour of VideoWriter.
The first way to solve the task is to change the value 30 in your constructor cv::VideoWriter videoWriter(path, fourcc, 30, size); to 10. That alone solves the problem, provided you really do grab 10 frames per second, no more and no fewer. Your video will then play at the correct speed, with a frame rate of 10 frames per second. This is the simplest solution; a video doesn't need to be 30 FPS to play correctly later, a 10 FPS video plays back fine in any player.
The other solution, if you really want the resulting video to be exactly 30 frames per second, is to duplicate each grabbed frame three times, turning your 10 grabbed frames into 30 written frames. By duplicating I simply mean calling videoWriter.write(...) three times in a small loop with the same frame, without any pause (such as sleep). The resulting video will then have exactly 30 frames per second, as shown in the sketch below.
I think you just misunderstood how cv::VideoWriter works. You assumed that write() renders the resulting video in real time, so that feeding it 10 frames within exactly one second would produce video at the correct speed. But the writer does not render in real time: it simply treats those 10 frames as 1/3 of a second of the resulting video, because it expects 30 frames per resulting second.
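A minimal sketch of that second option. The grabber source, output file name and MJPG codec are placeholders for the questioner's actual setup; only the frame size and the 30 FPS writer setting come from the question.

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture grabber10fps(0);                 // pretend this delivers ~10 fps
    cv::Size size(1350, 1080);                        // size from the question
    cv::VideoWriter videoWriter("out.avi",
                                cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                                30, size);            // output declared as 30 FPS
    cv::Mat frame;
    while (grabber10fps.read(frame)) {
        cv::resize(frame, frame, size);               // make the frame match the writer size
        for (int i = 0; i < 3; ++i)                   // 10 source frames * 3 = 30 frames/sec
            videoWriter.write(frame);                 // no sleeps; timing comes from the FPS value
    }
    videoWriter.release();
    return 0;
}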
If you have a camera that cannot provide a constant frame rate of, say, 30 frames per second, you can also consider limiting the frame rate yourself, e.g. to 25, by measuring the time elapsed since the last frame was written. You can change the frame rate to arbitrary values as long as the camera can keep up with it. An example implementation:
m_fps = 25;
std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
std::chrono::steady_clock::time_point now;

while (1) {
    now = std::chrono::steady_clock::now();
    long duration = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start).count();
    if ((double)duration > (1e9 / m_fps)) {
        start = std::chrono::steady_clock::now();
        m_duration += duration;
        // Capture frame-by-frame
        if (m_cap->grab()) { m_cap->retrieve(m_frame); }
        // write frame
        m_writer->write(m_frame);
        cv::waitKey(1);
    }
}
I figured out my problem, and it was me, not OpenCV.
Because of the multithreaded environment, I was writing the images (cv::Mat) into a queue. Since there are a lot of transformations (YUV -> RGB -> BGR -> crop, etc.), I expected the cv::Mat object I pushed into the queue to be a deep copy. Instead, it was always the same underlying object, which means that even with 20 cv::Mat entries in my queue, all of them were identical and all of them "changed" whenever a new image arrived.
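A tiny illustration of that pitfall: pushing `frame` itself only copies the cv::Mat header (all queue entries then share one pixel buffer), while clone() makes the deep copy that was expected. The camera index and loop bound are illustrative.

#include <queue>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0);
    std::queue<cv::Mat> frames;
    cv::Mat frame;
    for (int i = 0; i < 20 && capture.read(frame); ++i) {
        frames.push(frame.clone());   // deep copy: later writes to `frame` don't touch it
        // frames.push(frame);        // shallow copy: every entry would end up identical
    }
    return 0;
}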

Change FPS on video capture from file with opencv

I'm reading a video file, and reading it is slower than the file's actual FPS (59 FPS at 1080p), even though I'm not doing any processing on the image:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

// Global variables
UMat frame; // current frame

int main(int argc, char** argv)
{
    VideoCapture capture("myFile.MP4");
    namedWindow("Frame");
    capture.set(CAP_PROP_FPS, 120);          // not changing anything
    cout << capture.get(CAP_PROP_FPS);
    int charCheckForEscKey = 0;
    while (charCheckForEscKey != 27) {
        capture >> frame;
        if (frame.empty())
            break;
        imshow("Frame", frame);
        charCheckForEscKey = waitKey(1);
    }
}
Even though I tried to set CAP_PROP_FPS to 120, it doesn't change the FPS of the file, and when I call get(CAP_PROP_FPS) I still get 59.9.
When I read the video, the actual throughput is roughly 54 FPS (even using UMat).
Is there a way to read the file at a higher FPS rate?
I also asked the question on the OpenCV Q&A site: http://answers.opencv.org/question/117482/change-fps-on-video-capture-from-file/
Or is it just that my computer is too slow?
TL;DR: FPS is irrelevant to the problem; this is probably a performance issue.
What is FPS used for? Before you can display a single frame of video, you have to read the data (from an HDD, DVD, network, the Internet, or whatever) and decode it. Both of these operations take time, the amount of which differs from system to system, depending on HDD/Internet speed, processor speed, etc. If we simply displayed each frame as soon as it was ready, the resulting playback speed would therefore vary from system to system. That is usually not what we want, so along with the sequence of video frames we get the "frames per second" value (a.k.a. FPS), which tells us how soon to display each consecutive frame (once every 1/30th of a second for 30 FPS, once every 1/60th of a second for 60 FPS, and so on). If a frame is ready to be displayed but it's too early, we can wait until its time comes. If it's time to display a frame but it isn't ready (on an underpowered or too-busy system), there's not much we can do (maybe drop frames in some situations).
To see the effect for yourself, try doubling the FPS value in the file, saving it and playing it with VLC: for the same amount of data and the same number of frames, you will notice that the speed of the video has doubled and its duration has halved. Then try writing each frame twice with your doubled FPS and you will see that the playback speed is back to normal (with double the number of frames and a pointless increase in file size).
What is FPS not used for? When processing (not displaying) a video, we are not limited by the original FPS; the processing goes as fast as it can. If your PC can process 1000 frames per second, good; if 1500, even better. Needless to say, changing the FPS value in the file won't make your CPU/HDD any faster, so if you could only process 54 frames per second before, you will still only be able to process 54 frames per second.
But how can VLC display it faster? Assuming you didn't forget to switch from a Debug to a Release build before measuring, there are still a number of possibilities: VLC is probably better optimized for the particular task of video playback (OpenCV is not especially fast at some tasks, and it has to convert each frame into the more general Mat/UMat structure); multithreading (including the "double buffering" mentioned in the comments) is another possible reason; and perhaps caching as well (e.g. reading a block of data containing many frames from the HDD at once instead of reading and processing frames one by one).
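One way to check where the ceiling comes from is to measure raw decode throughput without any display at all. A rough sketch, reusing the file name from the question; if this still reports around 54 FPS, decoding is the bottleneck rather than imshow/waitKey.

#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture("myFile.MP4");
    cv::Mat frame;
    int frames = 0;
    auto start = std::chrono::steady_clock::now();
    while (capture.read(frame))
        ++frames;                                     // decode only, no display
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::cout << "decoded " << frames << " frames in " << elapsed.count()
              << " s (" << frames / elapsed.count() << " FPS)" << std::endl;
    return 0;
}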

OpenCV is missing frames while face detection takes place

I am using the
haarcascade_frontalface_alt2.xml
file for face detection in OpenCV 2.4.3 under the Visual Studio 10 framework.
I am using
Mat frame;
cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
capture.set(CV_CAP_PROP_FPS, 30);

for (;;)
{
    capture >> frame;
    // face detection code
}
The problem I'm facing is that, because Haar face detection is computationally heavy, OpenCV misses a few frames at the
capture >> frame;
instruction. To check this, I wrote a counter to a txt file and found only 728 frames out of 900 for a 30-second, 30 fps video.
Please, can someone tell me how to fix this?
I am not an experienced OpenCV user, but you could try flushing the output stream of the capture to disk. Unfortunately, I don't think the VideoCapture class supports such an operation. Note that flushing to disk would hurt performance, since everything is flushed first and only then does execution continue, so it might not be the best solution, even though it would be the easiest one if it were possible.
Another approach, which requires more work but should fix it, is to create a separate low-priority thread that writes each frame to disk. Your current thread then only needs to hand each frame to this low-priority thread whenever it wants the data captured. Depending on whether the higher-priority thread might modify the data while the low-priority thread is still writing it, you may want to copy the data into a separate buffer first.
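An illustrative sketch of that threading idea, adapted to this question: one thread only captures (so it never falls behind), another runs the heavy Haar detection. The unbounded queue and the absence of real thread priorities are simplifications; the file paths are taken from the question.

#include <chrono>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
    std::queue<cv::Mat> pending;
    std::mutex mtx;
    bool done = false;

    std::thread detector([&] {
        cv::CascadeClassifier face("haarcascade_frontalface_alt2.xml");
        while (true) {
            cv::Mat frame;
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (!pending.empty()) {
                    frame = pending.front();
                    pending.pop();
                } else if (done) {
                    break;
                }
            }
            if (frame.empty()) {                       // nothing queued yet
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
                continue;
            }
            cv::Mat gray;
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> faces;
            face.detectMultiScale(gray, faces);        // the heavy work happens here
        }
    });

    cv::Mat frame;
    while (capture.read(frame)) {                      // capture loop stays lightweight
        std::lock_guard<std::mutex> lock(mtx);
        pending.push(frame.clone());                   // deep copy so the queue owns its data
    }
    {
        std::lock_guard<std::mutex> lock(mtx);
        done = true;
    }
    detector.join();
    return 0;
}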

How to average all the frames of a video file in which objects are not moving in OpenCV?

I have the individual frames of a video file. Looking at each frame separately, I noticed that there are many frames in which the objects have not moved. I need to average all of those frames into a single frame using OpenCV.
I am totally new to OpenCV, so it would be a great help if I could get some code for frame averaging.
One simple technique:
Subtract the previous frame from the present frame using OpenCV's frame-difference function. Keep only those frames whose difference (positive or negative) is above a threshold. In the frames where the object is almost static, the frame difference gives a low value, so you can skip those frames. And when only the object is moving and the rest of the background is more or less static (like a man walking in a park), you only need to store the position of the man, since the background is just duplicated.
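A rough sketch combining the frame-difference idea above with the questioner's goal of averaging the near-static frames. The input file name and the motion threshold are placeholders; cv::absdiff is one common choice for the frame difference, and cv::accumulate keeps the running sum.

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.avi");
    cv::Mat frame, prev, diff, gray;
    cv::Mat acc;                                         // running sum of the static frames
    int count = 0;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (!prev.empty()) {
            cv::absdiff(gray, prev, diff);               // per-pixel frame difference
            double motion = cv::mean(diff)[0];           // low value -> almost no movement
            if (motion < 2.0) {                          // placeholder threshold
                if (acc.empty())
                    acc = cv::Mat::zeros(frame.size(), CV_32FC3);
                cv::accumulate(frame, acc);              // add this frame to the running sum
                ++count;
            }
        }
        gray.copyTo(prev);
    }
    if (count > 0) {
        cv::Mat average;
        acc.convertTo(average, CV_8UC3, 1.0 / count);    // sum / N -> averaged frame
        cv::imwrite("average.png", average);
    }
    return 0;
}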