I want to get the FPS property from a video that was recorded by a camera.
I use:
CvCapture* flujo_video = cvCreateFileCapture(argv[1]);
double parametro = cvGetCaptureProperty(flujo_video, CV_CAP_PROP_FPS);
The result of this is -nan, and if I cast it to an int the result is -2147483648.
Try it without using the deprecated C API:
VideoCapture cap(0); // open the default camera (pass a filename instead to read a video file)
double fps = cap.get(CV_CAP_PROP_FPS); // get the frames per second of the video
If you look around the web, you can see lots of people having problems with this property. It turns out that, with thousands of cameras/codecs/formats, OpenCV can't handle them all, so you often get 0, NaN (not a number), or some other illogical value. This generally means that you cannot get the FPS for your camera.
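If you still need a value to work with, here is a minimal sketch that checks what cap.get() returned and falls back to a default (the 25.0 fallback and the newer cv::CAP_PROP_FPS constant name are assumptions; adjust them to your setup):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;

    cv::VideoCapture cap(argv[1]);           // open the video file given on the command line
    if (!cap.isOpened())
        return 1;

    double fps = cap.get(cv::CAP_PROP_FPS);  // CV_CAP_PROP_FPS in the old constant names
    if (fps <= 0.0 || std::isnan(fps))       // 0 or NaN means the backend could not report it
        fps = 25.0;                          // assumed fallback value
    std::cout << "Using FPS: " << fps << std::endl;
    return 0;
}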
Related
I'm trying to capture a frame from a video. If I input 00:00:10, I want to seek to that point in the video and capture the frame there.
Right now, I read the current position of the video, and if it matches the input time, I stop and capture the frame. But that takes too much time.
int timing = cap.get(CAP_PROP_POS_MSEC);
How can I solve this problem?
You can use the function
bool cv::VideoCapture::set(int propId, double value)
with
propId = CAP_PROP_POS_MSEC //Current position of the video file in milliseconds.
Example:
VideoCapture openCVCapture("video1.mp4");
openCVCapture.set(CAP_PROP_POS_MSEC, 20000); // jump to 20 sec
openCVCapture >> image;
More information is in the OpenCV docs.
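A minimal sketch of that approach, assuming the target time is the 00:00:10 from the question and the file is video1.mp4 (keep in mind that millisecond seeking is only as accurate as the backend and codec allow):
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("video1.mp4");     // assumed input file
    if (!cap.isOpened())
        return 1;

    cap.set(cv::CAP_PROP_POS_MSEC, 10000);  // jump straight to 00:00:10
    cv::Mat frame;
    cap >> frame;                           // read the frame at (roughly) that position
    if (!frame.empty())
        cv::imwrite("frame_at_10s.png", frame);
    return 0;
}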
I am using C++ and OpenCV to capture camera images. As shown in my code below, I also measure the capture duration in milliseconds by calling gettimeofday() before and after capturing the image.
Mat IMG;
unsigned long ms;
VideoCapture cap(0);
struct timeval tp1, tp2;
while (1)
{
    gettimeofday(&tp1, NULL);
    cap >> IMG;
    gettimeofday(&tp2, NULL);
    ms = 1000000 * (tp2.tv_sec - tp1.tv_sec) + (tp2.tv_usec - tp1.tv_usec); // elapsed time in microseconds
    cout << ms / 1000 << endl;
}
I know my camera can go up to a maximum of 60 frames per second, so this code outputs values of 15~17 ms. Now I want to save my images, so I use the imwrite() function and add it after the second call to gettimeofday(), as shown below:
Mat IMG;
unsigned long ms;
VideoCapture cap(0);
int cc = 0;
struct timeval tp1, tp2;
while (1)
{
    gettimeofday(&tp1, NULL);
    cap >> IMG;
    gettimeofday(&tp2, NULL);
    ms = 1000000 * (tp2.tv_sec - tp1.tv_sec) + (tp2.tv_usec - tp1.tv_usec); // elapsed time in microseconds
    cc = cc + 1;
    imwrite("IMG_" + std::to_string(cc) + ".png", IMG);
    cout << ms / 1000 << endl;
}
Now in this case the output is 5~6 ms! And if I move the second call to gettimeofday() to after the image writing, I again get values of 15~17 ms. How is that possible? Thanks in advance.
This happens because you only measure the time waiting on the VideoCapture.
In the first example, extracting the next frame will always block until it is ready (and only spend time there), meaning that you will see values around the inverse of your frame rate.
In the second example, the first frame should take equally long to read. However, then you spend time writing the image to the file. While this happens, the camera will start recording the next frame - meaning that when you next ask it to give you an image, part of the time needed to do that will already have elapsed, so your waiting period is shorter.
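To see both effects side by side, here is a small sketch that times the grab and the write separately (it uses std::chrono instead of gettimeofday(), which is just a convenience assumption):
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat img;
    int cc = 0;
    while (cap.isOpened())
    {
        auto t0 = std::chrono::steady_clock::now();
        cap >> img;                          // time spent waiting on the camera
        auto t1 = std::chrono::steady_clock::now();
        if (img.empty())
            break;
        cv::imwrite("IMG_" + std::to_string(++cc) + ".png", img);
        auto t2 = std::chrono::steady_clock::now();

        using fms = std::chrono::duration<double, std::milli>;
        std::cout << "grab: " << fms(t1 - t0).count()
                  << " ms, write: " << fms(t2 - t1).count() << " ms" << std::endl;
    }
    return 0;
}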
I have a very basic question about frame capturing using OpenCV. My code looks like this:
VideoCapture cap(0);
cv::Mat mat;
int i = 0;
while (cap.read(mat) == true) {
    // some code here
    i = i + 1;
}
It works well. However, when I look at the logcat logs from OpenCV, it says
FRAMES Received 225, grabbed 123.
and this grabbed count (123) usually matches the variable 'i' (the number of loop iterations) in my code.
Ideally my code should be able to read all received frames, shouldn't it? Can someone explain this behavior?
Calling cap.read(mat) takes a certain amount of time, as it has to grab a frame from the video feed, decode it, and convert it to the cv::Mat format. This amount of time appears to be greater than the interval between captured frames. You can determine the frames per second of the video capture with the following:
double frames_per_second = cap.get(CV_CAP_PROP_FPS);
Try timing how long your cap.read(mat) call takes and see if there is a relationship between the ratio of frames received to frames grabbed and the ratio of the capture period (1/frames_per_second) to the time cap.read(mat) takes to execute.
Source:
http://opencv-srf.blogspot.ca/2011/09/capturing-images-videos.html
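As a rough sketch of that measurement (using std::chrono and the newer cv::CAP_PROP_FPS name, both assumptions; the comparison against 1/frames_per_second only makes sense for a fixed-rate source):
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(cv::CAP_PROP_FPS);  // capture rate reported by the backend (may be 0)
    cv::Mat mat;
    int i = 0;
    while (true)
    {
        auto t0 = std::chrono::steady_clock::now();
        if (!cap.read(mat))                  // time how long a single read() takes
            break;
        auto t1 = std::chrono::steady_clock::now();
        i = i + 1;

        double read_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        if (fps > 0)
            std::cout << "frame " << i << ": read took " << read_ms
                      << " ms (capture period " << 1000.0 / fps << " ms)" << std::endl;
    }
    return 0;
}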
How can I retrieve the current frame number of a video using OpenCV? Does OpenCV have any built-in function for getting the current frame, or do I have to do it manually?
You can use the "get" method of your capture object like this:
capture.get(CV_CAP_PROP_POS_FRAMES); // retrieves the current frame number
and also :
capture.get(CV_CAP_PROP_FRAME_COUNT); // returns the number of total frames
Btw, these methods return a double value.
You can also use the cvGetCaptureProperty function (if you use the old C interface):
double cvGetCaptureProperty(CvCapture* capture, int property_id);
property_id options are below with definitions:
CV_CAP_PROP_POS_MSEC 0
CV_CAP_PROP_POS_FRAMES 1
CV_CAP_PROP_POS_AVI_RATIO 2
CV_CAP_PROP_FRAME_WIDTH 3
CV_CAP_PROP_FRAME_HEIGHT 4
CV_CAP_PROP_FPS 5
CV_CAP_PROP_FOURCC 6
CV_CAP_PROP_FRAME_COUNT 7
POS_MSEC is the current position in a video file, measured in
milliseconds.
POS_FRAMES is the position of the current frame in the video (e.g., the 55th frame of the video).
POS_AVI_RATIO is the current position given as a number between 0 and 1
(this is actually quite useful when you want to position a trackbar
to allow folks to navigate around your video).
FRAME_WIDTH and FRAME_HEIGHT are the dimensions of the individual
frames of the video to be read (or to be captured at the camera’s
current settings).
FPS is specific to video files and indicates the number of frames
per second at which the video was captured. You will need to know
this if you want to play back your video and have it come out at the
right speed.
FOURCC is the four-character code for the compression codec to be
used for the video you are currently reading.
FRAME_COUNT should be the total number of frames in video, but
this figure is not entirely reliable.
(from the Learning OpenCV book)
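Putting the two properties together, a minimal sketch that prints the playback position while reading a file could look like this (the cv::CAP_PROP_* names assume a reasonably recent OpenCV; older versions use the CV_CAP_PROP_* constants above):
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;

    cv::VideoCapture capture(argv[1]);                      // assumed video file path
    double total = capture.get(cv::CAP_PROP_FRAME_COUNT);   // total frames (not always reliable)

    cv::Mat frame;
    while (capture.read(frame))
    {
        double pos = capture.get(cv::CAP_PROP_POS_FRAMES);  // index of the frame to be read next
        std::cout << "frame " << pos << " of " << total << std::endl;
    }
    return 0;
}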
In OpenCV version 3.4, the correct flag is:
cap.get(cv2.CAP_PROP_POS_FRAMES)
The way to do it in the OpenCV Python bindings is like this:
import cv2
cam = cv2.VideoCapture(<filename>)
print cam.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
I want to find the length of a video capture in OpenCV:
int frameNumbers = (int) cvGetCaptureProperty(video2, CV_CAP_PROP_FRAME_COUNT);
int fps = (int) cvGetCaptureProperty(video2, CV_CAP_PROP_FPS);
int videoLength = frameNumbers / fps;
but this gives me a result that is less than the real length. What do I have to do?
Actually, I am not sure there is any issue with the functions you tried as of today. However, there is an issue with this snippet: it assumes that the frames per second is an integer value, which is not always the case. For example, many videos are encoded at 29.97 FPS, and this code would truncate that to int(29.97) = 29, which obviously results in a larger value in seconds for the video length.
The calculation seems to work fine for me if I use floating point values (float) without truncating them.
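A minimal sketch of the floating-point version, using the C++ interface rather than the old CvCapture API (an assumption on my part, since the question used cvGetCaptureProperty):
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;

    cv::VideoCapture video2(argv[1]);
    double frameNumbers = video2.get(cv::CAP_PROP_FRAME_COUNT);
    double fps = video2.get(cv::CAP_PROP_FPS);   // e.g. 29.97; do not truncate to int
    if (fps > 0)
        std::cout << "length: " << frameNumbers / fps << " seconds" << std::endl;
    return 0;
}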
See this similar post. OpenCV cannot (yet) capture the number of frames correctly.
OpenCV captures only a fraction of the frames from a video file