I want to get the frame rate of a video but I always get -nan on Linux.
VideoCapture video(input);
if (!video.isOpened()) // exit the program if the video cannot be opened
{
    exit(0);
}
double fps = video.get(CV_CAP_PROP_FPS);
My OpenCV version is 2.4.7. The same code works fine on Windows.
My guess is that it's camera-dependent. Some (API) functions are sometimes not implemented in OpenCV and/or not supported by your camera. The best thing would be to check the code on GitHub.
Concerning your problem: I am able to get the frame rate with a normal webcam and a XIMEA camera using your code.
Tested on:
Ubuntu 15.04 64bit
OpenCV 3.0.0 compiled with Qt and XIMEA camera support
You could measure your frame rate yourself:
double t1 = (double)cv::getTickCount();
// do something
t1 = ((double)cv::getTickCount() - t1)/cv::getTickFrequency();
This gives you the time, in seconds, spent in // do something.
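For instance, to turn that into a measured frame rate, you could time a fixed number of reads (a rough sketch; numFrames and video are my own names):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture video(0);      // or open your video file here instead
    if (!video.isOpened()) return 1;

    cv::Mat frame;
    const int numFrames = 120;      // number of frames to time

    double t = (double)cv::getTickCount();
    for (int i = 0; i < numFrames; ++i)
        video.read(frame);
    double seconds = ((double)cv::getTickCount() - t) / cv::getTickFrequency();

    std::cout << "measured fps: " << numFrames / seconds << std::endl;
    return 0;
}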
Since it is from a file, you can try to estimate it yourself:
VideoCapture video("name_of_video.format");
int frameCount = (int)video.get(CV_CAP_PROP_FRAME_COUNT) ;
//some times frame count is wrong, so you can verify
video.set(CV_CAP_PROP_POS_FRAMES , frameCount-1);
//try to read the last frame, if not decrement frame count
while(!(video.read(nextFrame))){
frameCount--;
video.set(CV_CAP_PROP_POS_FRAMES , frameCount-1);
}
//it is already set above, but just for clarity
video.set(CV_CAP_PROP_POS_FRAMES , frameCount-1);
double fps = (double)(1000*frameCount)/( video.get(CV_CAP_PROP_POS_MSEC));
cout << "fps: " << fps << endl;
This is how I get the frame rate when using CV_CAP_PROP_FPS fails.
The question doesn't clarify whether this refers to video from a live source (a webcam) or from a video file.
If the latter, the capabilities of OpenCV depend on the format and codecs used in the file. For some file formats, expect to get 0 or NaN.
If the former, the real fps of the source may not be returned, especially if the requested frame rate is not supported by the hardware and a different one is used instead. For this case I would suggest an approach similar to #holzkohlengrill's, but only do the calculation after an initial delay of, say, 300 ms (YMMV), since grabbing the first frames and the initialisation that happens then can skew the calculation.
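A minimal sketch of that warm-up idea, assuming cap is an already-opened cv::VideoCapture (the 300 ms value is just the example figure from above):
// Discard frames for an initial warm-up period (~300 ms; adjust as needed)
// before starting the actual fps measurement.
cv::Mat frame;
double t0 = (double)cv::getTickCount();
while (((double)cv::getTickCount() - t0) / cv::getTickFrequency() < 0.3)
    cap.read(frame);
// ...now start timing frames as in the measurement snippet above...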
Related
I'm using OpenCV 4 to read from a camera, similar to a webcam. It works great; the code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
cv::Mat mat;
// wait for some external event here so I know it is time to take a picture...
cap >> mat;
process_image(mat);
}
The problem is that this gives me many video frames, not a single image. This matters because in my case I don't want or need to process 30 FPS; I have specific physical events that trigger reading an image from the camera at certain times. Because OpenCV expects the caller to want video (not surprising, considering the class is called cv::VideoCapture), it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
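For what it's worth, a hedged sketch of one common workaround (not from the original thread): ask the backend for a tiny buffer (not all backends honor CAP_PROP_BUFFERSIZE) and keep grabbing frames while idle, so the frame you retrieve when the event fires is recent. The trigger and processing functions below are hypothetical stand-ins:
#include <opencv2/opencv.hpp>

// Hypothetical stand-ins for the question's trigger and processing code.
bool external_event_fired() { return true; }
void process_image(const cv::Mat &) {}

int main()
{
    cv::VideoCapture cap(0);
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1600);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
    cap.set(cv::CAP_PROP_BUFFERSIZE, 1);   // request a tiny buffer; some backends ignore this

    cv::Mat mat;
    while (true)
    {
        cap.grab();                        // keep draining the driver queue while idle

        if (!external_event_fired())       // wait for the physical trigger
            continue;

        cap.retrieve(mat);                 // decode only the frame we actually want
        process_image(mat);
    }
}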
I am trying to capture video with a Logitech C920 webcam at full HD resolution. It provides 30 fps at this resolution.
It works with the Windows camera application at 30 fps, but whatever I try, I cannot get this frame rate with the OpenCV VideoCapture.
Note: I use Windows 10 and VS15.
I tried different USB ports, OpenCV versions and codecs. The result is always the same: ~5 fps.
I measured the fps ignoring the first 10 frames. Here are my measurements: only read = 5.04 fps, read + imshow = 4.97 fps and read + imshow + write = 4.91 fps.
VideoCapture mainStream;
VideoWriter mainWriter;
int frameW = 1920, frameH = 1080;   // requested capture size

int main()
{
    mainStream.open(0);
    mainStream.set(CV_CAP_PROP_FRAME_WIDTH, frameW);
    mainStream.set(CV_CAP_PROP_FRAME_HEIGHT, frameH);
    mainStream.set(CV_CAP_PROP_FPS, 30);
    mainStream.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    mainWriter.open("outputnew2.avi", CV_FOURCC('M', 'J', 'P', 'G'), 30, cv::Size(frameW, frameH), true);
    namedWindow("frame", 1);
    while (true) {
        Mat frame;
        mainStream >> frame;
        imshow("frame", frame);
        if (waitKey(5) == 27)
            break;
        mainWriter << frame;
    }
    mainStream.release();
    mainWriter.release();
    return 0;
}
First of all:
The imshow call is quite slow (relatively speaking). Try to measure the real fps while you neither show the image nor write it to a file.
Once that is done, you can check the real fps and determine which of the two operations (showing or writing) is slowing down the frame rate you achieve.
Please post the results of the achieved fps rate without showing or writing the image; something like the sketch below should do.
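A rough sketch of that measurement, assuming mainStream is already opened as in your code and the usual OpenCV namespace is in scope:
// Time N plain reads only: no imshow/waitKey and no VideoWriter involved.
const int N = 100;
Mat frame;
double t = (double)getTickCount();
for (int i = 0; i < N; ++i)
    mainStream >> frame;
double seconds = ((double)getTickCount() - t) / getTickFrequency();
std::cout << "read-only fps: " << N / seconds << std::endl;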
Edit:
Alright, you nearly always get 5 fps, which is quite slow. Does the saved video (or the saved images) match the resolution you wanted? Are they really 1920x1080?
Also: do the measured times differ between a release build and a debug build?
Edit2:
If the same code works with other USB cams (and they produce more fps than the C920), my immediate suspect is the C920 itself (or at least its driver). Does it help if you uninstall its driver (rebooting if necessary) and install the newest driver again?
Another thing: do the measured fps change if you request not 30 but, say, 20 fps?
Edit3:
It seems it was a driver issue (merged from the comments). Reinstalling the driver is one way to address this.
Currently I’m working on a project to mirror a camera for a blind spot.
The camera outputs a 640 x 480 NTSC signal.
The output screen is 854 x 480 NTSC.
I grab the camera with an EasyCAP video grabber.
On the Banana Pi I installed OpenCV 2.4.9.
The critical point of this project is that the video on the display needs to be real-time.
Whenever I comment out the line that puts the window into fullscreen, a small window pops up and the footage runs without delay or lag.
But when I set the video to fullscreen, the footage becomes slow and lags.
Part of the code:
namedWindow("window",0);
setWindowProperty("window",CV_WND_PROP_FULLSCREEN,CV_WINDOW_FULLSCREEN);
while(1){
cap>>image;
flip(image, destination,1);
imshow("window",destination);
waitKey(33); //delay 33 ms
}
How can I fill the screen with the camera footage without losing speed and frames?
Is it possible to output the footage directly to the composite output?
The problem is that the upscaling and drawing are done in software here. The Banana Pi processor is not powerful enough to handle the needed throughput at 30 frames per second.
This is an educated guess on my side, as even desktop systems can run into lag problems when processing and simultaneously displaying video.
A common solution in the computer vision community for this problem is to use OpenGL for display. Here, the upscaling and display is offloaded to the graphics processor. You can do the same thing on a Banana Pi.
If you compiled OpenCV with OpenGL support, you can try it like this:
namedWindow("window", WINDOW_OPENGL);
imshow("window", destination);
Note that if you use OpenGL, you can also save the flip operation by using an appropriate modelview matrix. For this, however, you would probably need to dive into GL code yourself instead of using imshow.
I fixed the whole problem by using:
namedWindow("window",1);
where the flag 1 stands for WINDOW_AUTOSIZE.
The footage is more real-time now.
I’m using a small monitor, so the window size is nearly the same as the monitor.
I'm attempting to synchronize the frames decoded from an MP4 video. I'm using the FFMPEG libraries. I've decoded and stored each frame and successfully displayed the video over an OPENGL plane.
I've started a timer just before cycling through the frames; the aim being to synchronize the Video correctly. I then compare the PTS of each frame against this timer. I stored the PTS received from the packet during decoding.
What is displayed within my application does not seem to play at the rate I expect; it plays faster than the original video file does in a media player.
I am inexperienced with FFMPEG and programming video in general. Am I tackling this the wrong way?
Here is an example of what I'm attempting to do
FrameObject frameObject = frameQueue.front();
AVFrame frame = *frameObject.pFrame;
videoClock += dt;
if(videoClock >= globalPTS)
{
//Draw the Frame to a texture
DrawFrame(&frame, frameObject.m_pts);
frameQueue.pop_front();
globalPTS = frameObject.m_pts;
}
Please note that I'm using C++, Windows, OpenGL, FFMPEG and the VS2010 IDE.
First off, use int64_t pts = av_frame_get_best_effort_timestamp(pFrame) to get the pts. Second, you must make sure both streams you are syncing use the same time base. The easiest way to do this is to convert everything to AV_TIME_BASE_Q: pts = av_rescale_q(pts, formatCtx->streams[videoStream]->time_base, AV_TIME_BASE_Q); In this format, pts is in microseconds.
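A minimal sketch of that conversion (formatCtx, videoStream and pFrame are assumed to be the usual demuxing/decoding objects from your code):
extern "C" {
#include <libavformat/avformat.h>
}

// Rescale a decoded frame's best-effort pts into seconds.
// (Newer FFmpeg versions expose this directly as pFrame->best_effort_timestamp.)
double FramePtsSeconds(AVFormatContext *formatCtx, int videoStream, AVFrame *pFrame)
{
    int64_t pts = av_frame_get_best_effort_timestamp(pFrame);

    // AV_TIME_BASE_Q is a C compound literal that MSVC's C++ compiler rejects,
    // so build the equivalent AVRational by hand.
    AVRational microseconds = { 1, AV_TIME_BASE };
    pts = av_rescale_q(pts, formatCtx->streams[videoStream]->time_base, microseconds);

    return pts / (double)AV_TIME_BASE;    // microseconds -> seconds
}
The playback loop from the question can then compare videoClock (elapsed playback time in seconds) against this value before drawing the frame.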
I know the title is a bit vague but I'm not sure how else to describe it.
CentOS with ffmpeg + OpenCV 2.4.9. I'm working on a simple motion detection system which uses a stream from an IP camera (h264).
Once in a while the stream hiccups and throws in a "bad frame" (see the pic-bad.png link below). The problem is that these frames differ greatly from the previous frames and cause a "motion" event to be triggered even though no actual motion occurred.
The pictures below will explain the problem.
Good frame (motion captured):
Bad frame (no motion, just a broken frame):
The bad frame gets caught randomly. I guess I could make a bad-frame detector by looping through the pixels going down from a certain position to see whether they are all the same, but I'm wondering if there is another, more efficient, "by the book" approach to detecting these kinds of bad frames and simply skipping over them.
Thank You!
EDIT UPDATE:
The frame is grabbed by a C++ motion detection program via cvQueryFrame(camera), so I do not directly interface with ffmpeg; OpenCV does it on the backend. I'm using the latest version of ffmpeg compiled from the git source. All of the libraries are also up to date (h264, etc., all downloaded and compiled yesterday). The data is coming from an RTSP stream (ffserver). I've tested with multiple cameras (Dahua 1-3 MP models) and the frame glitch is fairly persistent across all of them, although it doesn't happen continuously, just once in a while (e.g. once every 10 minutes).
My first idea is to check the dissimilarity between an example of a valid frame and the frame we are checking, by counting the pixels that are not the same. Dividing this number by the frame area gives a fraction that measures dissimilarity. I would guess that above 0.5 we can say the tested frame is invalid, because it differs too much from the example of a valid one.
This assumption is only appropriate if you have a static camera (it does not move) and the objects that can move in front of it do not come too close (this depends on the focal length, but with e.g. a wide lens, objects should not appear closer than about 30 cm to the camera, to prevent a situation where an object "jumps" into the frame from nowhere and covers more than 50% of the frame area).
Here is an OpenCV function which does what I described. You can increase the dissimilarity threshold if you expect motion changes to be more rapid. Note that the first parameter should be an example of a valid frame.
bool IsBadFrame(const cv::Mat &goodFrame, const cv::Mat &nextFrame) {
    // assert(goodFrame.size() == nextFrame.size())
    cv::Mat g, g2;
    cv::cvtColor(goodFrame, g, CV_BGR2GRAY);
    cv::cvtColor(nextFrame, g2, CV_BGR2GRAY);
    cv::Mat diff = g2 != g;
    // fraction of pixels that differ between the two frames
    float dissimilarity = (float)cv::countNonZero(diff) / (goodFrame.size().height * goodFrame.size().width);
    return dissimilarity > 0.5f;
}
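A minimal usage sketch of the function above (the capture loop, URL and variable names are illustrative):
int main() {
    cv::VideoCapture cap("rtsp://camera/stream");   // illustrative URL
    cv::Mat goodFrame, frame;
    cap >> goodFrame;                                // first known-good reference frame

    while (cap.read(frame)) {
        if (IsBadFrame(goodFrame, frame))
            continue;                  // skip the glitched frame entirely
        goodFrame = frame.clone();     // keep the latest valid frame as the reference
        // ...run the motion detection on frame here...
    }
    return 0;
}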
You do not mention whether you use the ffmpeg command line or the libraries, but in the latter case you can check the bad-frame flag (I forget its exact description) and simply ignore those frames.
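If I had to guess, this refers to AV_FRAME_FLAG_CORRUPT and/or AVFrame::decode_error_flags (that is my assumption, not the original author's wording); a hedged sketch of checking them on a decoded frame:
extern "C" {
#include <libavutil/frame.h>
}

// Skip frames the decoder itself marks as damaged. Whether corrupt frames are
// delivered to you at all can depend on decoder flags and the FFmpeg version.
bool IsDecoderFlaggedBad(const AVFrame *frame) {
    return (frame->flags & AV_FRAME_FLAG_CORRUPT) || frame->decode_error_flags != 0;
}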
Remove waitKey(50) or change it to waitKey(1). I think OpenCV does not spawn a new thread to perform the capture, so when there is a pause, it confuses the buffer management routines, causing bad frames... maybe?
I have Dahua cameras and have observed that with a higher delay, bad frames appear, and they go away completely with waitKey(1). The pause does not necessarily have to come from waitKey; other routines you call also cause such pauses and result in bad frames if they take long enough.
This means there should be a minimal pause between consecutive frame grabs. The solution would be to use two threads, performing capture and processing separately, as in the sketch below.
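A minimal sketch of that two-thread idea (using C++11 threads; the shared-frame handling and URL are my own illustration):
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

std::mutex frameMutex;
cv::Mat latestFrame;
std::atomic<bool> running(true);

// Capture thread: grab frames as fast as the camera delivers them,
// so the backend's buffer never stalls between grabs.
void captureLoop(cv::VideoCapture &cap) {
    cv::Mat frame;
    while (running && cap.read(frame)) {
        std::lock_guard<std::mutex> lock(frameMutex);
        frame.copyTo(latestFrame);
    }
}

int main() {
    cv::VideoCapture cap("rtsp://camera/stream");    // illustrative URL
    if (!cap.isOpened()) return 1;

    std::thread grabber(captureLoop, std::ref(cap));

    while (true) {
        cv::Mat frame;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            latestFrame.copyTo(frame);
        }
        if (!frame.empty()) {
            // ...run the (possibly slow) motion detection on frame here...
            cv::imshow("frame", frame);
        }
        if (cv::waitKey(1) == 27) break;              // Esc to quit
    }
    running = false;
    grabber.join();
    return 0;
}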