Change FPS on video capture from file with opencv - c++

I'm reading a video file, and playback is slower than the file's actual FPS (59 FPS at 1080p), even though I'm not doing any processing on the image:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

// Global variables
UMat frame; // current frame

int main(int argc, char** argv)
{
    VideoCapture capture("myFile.MP4");
    namedWindow("Frame");
    capture.set(CAP_PROP_FPS, 120); // not changing anything
    cout << capture.get(CAP_PROP_FPS);
    char charCheckForEscKey = 0;
    while (charCheckForEscKey != 27) {
        capture >> frame;
        if (frame.empty())
            break;
        imshow("Frame", frame);
        charCheckForEscKey = (char)waitKey(1); // imshow needs waitKey to refresh
    }
    return 0;
}
Even though I tried to set CAP_PROP_FPS to 120, it doesn't change the FPS of the file, and when I call get(CAP_PROP_FPS) I still get 59.9...
When I read the video, the actual outcome is more or less 54 FPS (even using UMat).
Is there a way to read the file at a higher FPS rate?
I asked the question on the OpenCV Q&A website as well: http://answers.opencv.org/question/117482/change-fps-on-video-capture-from-file/
Is it just because my computer is too slow?

TL;DR: the FPS value is irrelevant to the problem; this is probably a performance issue.
What is FPS used for? Before you can display a single frame of video, you have to read the data (from an HDD, DVD, network, the Internet or whatever) and decode it. Both of these operations take time, the amount of which differs from system to system, depending on HDD/Internet speed, processor speed, etc. If we just display each frame as soon as it's ready, the resulting movie speed will therefore vary from system to system. That is usually not what we want, so along with the sequence of video frames we get the "frames per second" value (a.k.a. FPS), which tells us how soon to display each consecutive frame (once every 1/30th of a second for 30 FPS, once every 1/60th of a second for 60 FPS, etc.). If a frame is ready to be displayed but it's too early, we can wait till its time comes. If it's time to display a frame but it's not ready (on an underpowered/too busy system), there's not much we can do (maybe drop frames in some situations). To see the effect for yourself, try changing the FPS value to 2x, saving the file and playing it with VLC: for the same amount of data and the same number of frames, you will notice that the speed of your video has doubled and the duration halved. Try writing each frame twice for your 2x FPS, and you will see that the playback speed is back to normal (with double the number of frames and a meaningless increase in file size).
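To illustrate that first point, here is a minimal sketch (not from the original answer) that paces playback at the file's reported FPS by subtracting the decode/display time from each frame period; the file name is a placeholder:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cstdint>

    int main()
    {
        cv::VideoCapture capture("myFile.MP4"); // placeholder file name
        double fps = capture.get(cv::CAP_PROP_FPS);
        int frameMs = (fps > 0) ? static_cast<int>(1000.0 / fps) : 33;

        cv::Mat frame;
        while (true) {
            int64_t start = cv::getTickCount();
            capture >> frame;
            if (frame.empty())
                break;
            cv::imshow("Frame", frame);
            // Time spent reading/decoding/displaying this frame, in milliseconds.
            int elapsedMs = static_cast<int>(
                (cv::getTickCount() - start) * 1000.0 / cv::getTickFrequency());
            // Wait out the remainder of the frame period (ESC quits).
            if (cv::waitKey(std::max(1, frameMs - elapsedMs)) == 27)
                break;
        }
        return 0;
    }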
What is FPS not used for? When processing (not displaying) a video, we are not limited by the original FPS; the processing goes as fast as it can. If your PC can process 1000 frames per second, good; if 1500, even better. Needless to say, changing the FPS value in the file won't improve your CPU/HDD speed, so if you were only able to process 54 frames per second before, you will still only be able to process 54 frames per second.
But how can VLC display it faster? Assuming you didn't forget to switch from Debug to Release build before measuring, there are still a number of possibilities: VLC is probably better optimized for the particular task of video playback (OpenCV is not really that fast at some tasks, plus it has to convert each frame to a more general Mat/UMat structure); multithreading (including "double buffering", as mentioned in the comments) is another possible reason; maybe caching as well (e.g. reading a block of data containing many frames from the HDD at once, instead of reading and processing frames one by one).

Related

C++ OpenCV VideoWriter frame rate synchronization

I am about to capture frames from a video grabber card. Those frames are processed and written to the HDD.
The whole setup is a multithreaded environment: the grabber writes the images to a queue; in another thread the images are processed; and a third thread writes to the HDD. If an image is good by the processor's definition, it gets written to the HDD. If 10 images in a row are "bad", the file is completed. If 9 or fewer images are "bad", all of them get written together with the next good image, so the file writer is informed.
Here is the problem: if I do not do it this way and instead write each frame directly after it is processed, the video file is fine, but the 9 "bad" images get written too. If I do it the way described above, the speed/frame rate of the video is not right. This description is a bit convoluted, so here is a simplified example so you can see the problem:
void FrameWriter::writeFrameLoop() {
    string path = getPath();
    cv::Size2i size(1350, 1080);
    cv::VideoWriter videoWriter(path, fourcc, 30, size); // fourcc is a member
    while (this->isRunning) {
        while (!this->frames.empty()) {
            usleep(100000); // this affects the speed/frame rate
            videoWriter.write(this->pop());
        }
        std::this_thread::sleep_for(10ms);
    }
    videoWriter.release();
}
The example is pretty simple: here I "block" the writing process with a sleep; remember this runs in a different thread. This means that after capturing stops, the file writing takes a bit longer.
But I would expect that this does not affect the video itself, because the frame rate is 30 and the images are still in the same order. Yet it does seem to affect the video file when I don't call videoWriter.write in time: in that case the video plays much faster than expected.
I thought only the configured frame rate of 30 and the count of written images would affect the video speed, but apparently not. Can anyone help me understand what is going on here?
I am using openCV 4.4.0 with Ubuntu 18.04.
Thank you for your help.
BR Michael
I think I know the reason for the fast-playing result videos.
In the constructor cv::VideoWriter videoWriter(path, fourcc, 30, size); you set the frame rate (FPS) of the resulting video to 30. It means that the CV library expects exactly 30 frames to be written via the write() function for each 1 second of the resulting video stream.
Also, for the CV library there's no difference in how fast you call write() with a new frame; you may call it 5 times per second, or 10, or even 1000. The only thing that matters is that you have to provide exactly 30 frames for each second of video; how fast you provide those 30 frames doesn't matter.
I mean that all your sleep(...) functionality doesn't matter to the CV VideoWriter class, and this is true for all video rendering/conversion libraries. So pausing the thread doesn't change anything at all.
But in your case you're saying that you grab 10 frames per second of real-time video data from your grabber card. It means that your FPS is really 10 frames per second. So, to solve your task correctly, the following should be done:
Remove all pausing functionality, like calling sleep(). It is not needed at all, and it doesn't change the behavior of VideoWriter.
The first way to solve the task is then to change the value 30 to 10 in your constructor cv::VideoWriter videoWriter(path, fourcc, 30, size);. This alone will solve your task, but you have to be sure that you really grab 10 frames per second, no more, no less. Then your video will play at the correct speed with a frame rate of 10 frames per second. This is the simplest solution: a video doesn't need to be 30 FPS to play correctly later; a 10 FPS video will be played correctly by any player.
Another solution, if you really want your resulting video to play at 30 frames per second, no less, no more, is to duplicate each frame of your grabbed video three times, thus getting 30 frames out of the 10 frames of your grabbed video. By duplicating I just mean that you should call videoWriter.write(...) three times (in a small loop) with the same single frame, without any pause (like sleep) between the calls. Then your resulting video will again have exactly 30 frames per second.
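A minimal sketch of that duplication idea (not from the original answer; grabNextFrame and the running flag are placeholders for however you receive grabbed frames):

    // Assumed: 'videoWriter' was opened with FPS = 30, and the source delivers
    // roughly 10 frames per second via the hypothetical grabNextFrame().
    while (running) {
        cv::Mat frame = grabNextFrame(); // one grabbed frame (~10 per second)
        for (int i = 0; i < 3; ++i)
            videoWriter.write(frame);    // 10 source frames x 3 = 30 written frames/sec
    }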
I think you just misunderstood how cv::VideoWriter works. You thought that write() renders the resulting video in real time, meaning that if you feed it 10 frames within exactly one second, it should render the video at the correct speed. But this writer does not render video in real time; it simply assumes that the 10 frames passed constitute 1/3 of a second of the resulting video, and hence expects 30 frames to be written for each resulting second.
If you have a camera that is not able to provide a constant frame rate of, let's say, 30 frames, you can also consider limiting the frame rate yourself to e.g. 25 and measuring the time elapsed since the last frame was written. You can also change your frame rate to arbitrary values, as long as the camera is able to provide them. An example implementation:
m_fps = 25;
std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
std::chrono::steady_clock::time_point now;
while (1) {
    now = std::chrono::steady_clock::now();
    long long duration = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start).count();
    if ((double)duration > (1e9 / m_fps)) {
        start = std::chrono::steady_clock::now();
        m_duration += duration;
        // Capture frame-by-frame
        if (m_cap->grab()) { m_cap->retrieve(m_frame); }
        // write frame
        m_writer->write(m_frame);
        cv::waitKey(1);
    }
}
I figured out my problem, and it was me, not OpenCV.
Because of the multithreading environment, I was writing the images (cv::Mat) to a queue. Since there are a lot of transformations (YUV -> RGB -> BGR -> crop, etc.), I was expecting the cv::Mat object I put in the queue to be a deep copy. Instead, the cv::Mat was always the same object, which means that even if there were already 20 cv::Mat entries in my queue, all of them were the same and all of them "changed" whenever there was a new image.
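A minimal sketch of that pitfall (not from the original post): cv::Mat's copy constructor only copies the header and shares the underlying pixel buffer, so queued entries alias each other unless they are cloned.

    #include <opencv2/opencv.hpp>
    #include <queue>

    int main()
    {
        cv::VideoCapture cap(0);
        std::queue<cv::Mat> frames;

        cv::Mat grabbed;
        cap >> grabbed;
        frames.push(grabbed);         // shallow copy: shares pixel data with 'grabbed'
        frames.push(grabbed.clone()); // deep copy: safe against later writes

        cap >> grabbed; // may reuse the same buffer, silently changing the first entry
        return 0;
    }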

C++ Opencv: Performance in multithreaded environment

In my C++ program I create 3 pthreads. The first 2 threads create sockets and receive data (integers) from different ports. At the same time, the main() function writes these received values into a text file. The third thread captures images from a camera using the OpenCV libraries, and I want to save each frame as a PNG file. I know my camera can go up to a maximum of 60 FPS, which corresponds to a sampling period of about 16 milliseconds. I use the function gettimeofday() before and after capturing the image to measure the image acquisition time.
// Thread number 3
Mat IMG;
unsigned long ms;
VideoCapture cap(0);
struct timeval tp1, tp2;
while (1)
{
    gettimeofday(&tp1, NULL);
    cap >> IMG;
    gettimeofday(&tp2, NULL);
    // Elapsed time in microseconds (end minus start).
    ms = 1000000 * (tp2.tv_sec - tp1.tv_sec) + (tp2.tv_usec - tp1.tv_usec);
    cout << ms / 1000 << endl; // print milliseconds
}
My main program starts writing to the text file once the data is made available by the other 2 threads, which happens roughly 10 seconds after the first image is captured. During these 10 seconds, by monitoring the time, each frame takes no longer than 16 ms to capture. However, as soon as the text-file writing starts, even if I don't save the images, the timing I get is not always 16 ms and sometimes goes up to 35 milliseconds (although most of the time it is still 16 ms or less). If, in addition, I start saving the images, this time increases further. I tried to put the imwrite() function for image saving into another thread, but it didn't help. How can I avoid this? Is memory access so privileged that even writing into a text file can slow down other threads that are only responsible for capturing images? Is there any way to preserve this timing and save the images while the other threads and the main function do their own jobs?
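For reference, a hedged sketch (not from the question) of what offloading imwrite() to a worker thread via a mutex-protected queue might look like; all names are placeholders:

    #include <opencv2/opencv.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>

    std::queue<cv::Mat> saveQueue;
    std::mutex queueMutex;
    std::condition_variable queueCv;
    bool done = false; // set by the capture side on shutdown

    // Worker: drain the queue and write PNGs, so the capture loop never blocks on disk.
    void saveWorker()
    {
        int index = 0;
        for (;;) {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCv.wait(lock, [] { return !saveQueue.empty() || done; });
            if (saveQueue.empty() && done)
                break;
            cv::Mat img = saveQueue.front();
            saveQueue.pop();
            lock.unlock();
            cv::imwrite("frame_" + std::to_string(index++) + ".png", img);
        }
    }

    // Capture loop: clone() so the queued Mat doesn't alias the capture buffer.
    void captureLoop(cv::VideoCapture& cap)
    {
        cv::Mat frame;
        while (cap.read(frame)) {
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                saveQueue.push(frame.clone());
            }
            queueCv.notify_one();
        }
    }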

DirectShow returns wrong frame rate FPS

I want to get the frame rate of a media file using DirectShow.
Currently, I use the following method, which seems inaccurate in some cases:
I add a source filter to my graph, enumerate its pins, then call pPin->ConnectionMediaType(&compressedMediaFormat) and extract AvgTimePerFrame from it. As far as I understand, this is the average time per frame expressed in 100-nanosecond units. So I just divide 10,000,000 / AvgTimePerFrame to get the average FPS of the file.
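That computation, roughly, as a hedged sketch (not from the question; it assumes the pin's format block is a VIDEOINFOHEADER):

    AM_MEDIA_TYPE mt = {};
    if (SUCCEEDED(pPin->ConnectionMediaType(&mt))) {
        if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER)) {
            const VIDEOINFOHEADER* vih =
                reinterpret_cast<const VIDEOINFOHEADER*>(mt.pbFormat);
            if (vih->AvgTimePerFrame > 0) {
                // AvgTimePerFrame is in 100 ns units, so 10,000,000 units = 1 second.
                double avgFps = 10000000.0 / static_cast<double>(vih->AvgTimePerFrame);
            }
        }
        FreeMediaType(mt); // helper from the DirectShow base classes
    }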
For those media files which have almost the same frame time for all frames, I get a correct FPS. But for those which have different frame times for different frames, this method returns very inaccurate results.
A correct way would be to get the duration and the frame count of the file and calculate the average FPS from them (frameCount / duration). However, as I understand it, this is a costly operation, because calculating the exact number of frames requires passing through the whole file.
I wonder if there is a way to get that frame rate information more accurately?
The media files don't have to be of fixed frame rate; in general, there might be a variable frame rate. The metadata of the file still has some frame-rate-related information which, in this case, might be inaccurate. When you start accessing the file, you have the quickly available metadata about the frame rate. Indeed, to get the full picture you are supposed to read all frames and process their time stamps.
Even though in many cases it is technically possible to quickly read just the time stamps of frames without reading the actual data, DirectShow demultiplexers/parsers have no method defined to obtain this information, so you would have to read and count the frames to get an accurate figure.
You don't need to decompress the video for that, though, and you can also remove the clock from the filter graph when doing this, so that counting frames does not require streaming the data in real time (frames will be streamed at the maximal rate in that case).
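Removing the clock, as a hedged sketch (pGraph is assumed to be your existing graph):

    // With no reference clock set, the graph streams samples as fast as it can
    // instead of pacing them in real time.
    IMediaFilter* pMediaFilter = nullptr;
    if (SUCCEEDED(pGraph->QueryInterface(IID_IMediaFilter, (void**)&pMediaFilter))) {
        pMediaFilter->SetSyncSource(NULL); // NULL clock = unthrottled streaming
        pMediaFilter->Release();
    }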

openh264 Repeating frame

I'm using the openh264 lib in a C++ program to convert a set of images into an H.264-encoded mp4 file. These images represent updates to the screen during a session recording.
Let's say a set contains 2 images: one initial screen grab of the desktop, and another one 30 seconds later, when the clock changes.
Is there a way for the stream to represent a 30-second-long video using only these 2 images?
Right now, I'm brute-forcing this by encoding the first frame multiple times to fill the gap. Is there a more efficient and/or faster way of doing this?
Of course. Set a frame rate of 1/30 fps and you end up with 1 frame every 30 seconds. It doesn't even have to be in the H.264 stream; it can also be done when it gets muxed into an mp4 file afterwards, for example.
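A hedged sketch of setting such a rate through openh264's basic encoder parameters (not from the original answer; the dimensions and bitrate are placeholders):

    #include <cstring>
    #include "wels/codec_api.h" // openh264 public header

    ISVCEncoder* encoder = nullptr;
    WelsCreateSVCEncoder(&encoder);

    SEncParamBase param;
    memset(&param, 0, sizeof(param));
    param.iUsageType     = SCREEN_CONTENT_REAL_TIME; // screen-recording use case
    param.iPicWidth      = 1920;                     // placeholder dimensions
    param.iPicHeight     = 1080;
    param.iTargetBitrate = 500000;                   // placeholder bitrate
    param.fMaxFrameRate  = 1.0f / 30.0f;             // one frame per 30 seconds
    encoder->Initialize(&param);

Whether such a low fractional rate survives end-to-end also depends on the muxer writing matching timestamps into the mp4, as the answer notes.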

Sampling rate deviation and sound playing position

When you set the soundcard rate to, for example, 44100, you cannot guarantee the actual rate will be equal to 44100. In my case, traffic measurements between the application and ALSA (in samples/sec) gave me values of 44066...44084.
This should not be related to resampling issues: even 48000-only hardware must "eat" data at the 44100 rate in "44100" mode.
The problem occurs when I try to draw a cursor over the waveform while that waveform is playing. I calculate the cursor position using the "ideal" sampling rate read from the WAV file (22050, ..., 44100, ..., 48000) and the milliseconds elapsed since playback started, using the following C++ function:
long long getCurrentTimeMs(void)
{
    boost::posix_time::ptime now = boost::posix_time::microsec_clock::local_time();
    boost::posix_time::ptime epoch_start(boost::gregorian::date(1970, 1, 1));
    boost::posix_time::time_duration dur = now - epoch_start;
    return dur.total_milliseconds();
}
QTimer is used to generate frames for the cursor animation, but I do not depend on QTimer precision, because I query the time via getCurrentTimeMs() (assuming it is precise enough) every frame, so I can work with a varying frame rate.
After 2-3 minutes of playing I see a small difference between what I hear and what I see: the cursor position is ahead of the playing position by something like 1/20 of a second or so.
When I measure the traffic that goes through ALSA's callback, I get a mean value of 44083.7 samples/sec. When I then use this value in the screen drawing function as the actual rate, the problem disappears. The program is cross-platform, so I will test these measurements on Windows and with another soundcard later.
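A minimal sketch of that workaround (not from the question; playbackStartMs is an assumed timestamp taken when playback began):

    // Map elapsed wall-clock time to a sample index using the measured effective
    // rate instead of the nominal rate from the WAV header.
    const double effectiveRate = 44083.7; // measured via the ALSA callback traffic
    long long elapsedMs = getCurrentTimeMs() - playbackStartMs;
    long long cursorSample = static_cast<long long>(elapsedMs / 1000.0 * effectiveRate);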
But is there a better way to sync sound and screen? Is there some not-very-CPU-consuming way of asking the soundcard for the actual playing sample number, for example?
This is a known effect, which is, for example, addressed in Windows by Rate Matching, described here: Live Sources.
On playback, the effect is typically addressed by using the audio hardware as the "clock" and synchronizing to audio playback instead of the "real" clock. That is, for example, with an audio sampling rate of 44100, the next frame of a 25 fps video is presented in sync with the playback of sample 44100/25 rather than after a 1/25 system-time increment. This compensates for the imprecise effective playback rate.
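A hedged sketch of that audio-master scheme (getPlayedSampleCount is a hypothetical query for how many samples the soundcard has actually played):

    // Derive the due video frame from the audio position, not the system clock.
    long long samplesPlayed = getPlayedSampleCount();  // hypothetical API
    double audioClockSec = samplesPlayed / 44100.0;    // position per the audio clock
    long long videoFrameDue = static_cast<long long>(audioClockSec * 25.0); // 25 fps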
On capture, the hardware itself acts as if it is delivering data at exactly the requested rate. I think the best you can do is to measure the effective rate and resample the audio from the effective to the correct sampling rate.
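On the capture side, the resampling step could look like this hedged sketch using libsamplerate (an assumed library choice; the answer doesn't name one):

    #include <samplerate.h> // libsamplerate
    #include <vector>

    // Stretch mono audio captured at the measured effective rate (44083.7 here)
    // to the nominal 44100 so downstream consumers see the correct rate.
    std::vector<float> resampleToNominal(const std::vector<float>& in)
    {
        double ratio = 44100.0 / 44083.7; // correct rate / measured effective rate
        std::vector<float> out(static_cast<size_t>(in.size() * ratio) + 16);

        SRC_DATA data = {};
        data.data_in = in.data();
        data.input_frames = static_cast<long>(in.size()); // mono: frames == samples
        data.data_out = out.data();
        data.output_frames = static_cast<long>(out.size());
        data.src_ratio = ratio;

        src_simple(&data, SRC_SINC_FASTEST, 1); // 1 channel, one-shot conversion
        out.resize(static_cast<size_t>(data.output_frames_gen));
        return out;
    }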