I am working on a project that detects faces in video. The detection works, but it processes every frame, so I end up with many images within a single second (many frames get captured per second).
Problem: I want to reduce that; capturing a frame every 3 seconds would be enough. I tried the wait() and sleep() functions, but they only pause the video for a while, nothing else happens. Can anyone help me solve this?
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>
#include <iostream>

using namespace std;

IplImage *frame;
int frames;

void facedetect(IplImage* image);
void saveImage(IplImage *img, char *ex);
IplImage* resizeImage(const IplImage *origImg, int newWidth, int newHeight, bool keepAspectRatio);

const char* cascade_name = "haarcascade_frontalface_default.xml"; // classifier cascade
int k;

int main(int argc, char** argv)
{
    // OpenCV capture object to grab frames
    //CvCapture *capture = cvCaptureFromCAM(0);
    CvCapture *capture = cvCaptureFromFile("video.flv");
    //int frames = cvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 0.5);
    //double res1 = cvGetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES);
    //cout<<"res"<<res<<endl;

    // start and end times
    time_t start, end;
    // fps calculated using number of frames / seconds
    double fps;
    // frame counter
    int counter = 0;
    // start the clock
    time(&start);

    //while(cvGrabFrame(capture))
    while(1)
    {
        //if(capture.get(CV_CAP_PROP_POS_FRAMES) % 2 == 0)
        frames = cvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 0.5);
        if(frames % 2 == 0)
        {
            frame = cvQueryFrame(capture);
            cout << "Frame" << frame << endl;
            facedetect(frame);
        }
    }
    cvReleaseCapture(&capture);
    return 0;
}
I also tried adding cvWaitKey(2000) after every captured frame.
This would be my approach: it processes one frame out of every 30. When you say you get too many images per second, I understand you are referring to the saved faces.
int counter = 0;
// start the clock
time(&start);
//while(cvGrabFrame(capture))
while(1)
{
    frame = cvQueryFrame(capture);
    cout << "Frame" << frame << endl;
    if(counter % 30 == 0)
    {
        facedetect(frame);
    }
    counter++;
}
If you really meant skipping frames entirely, then try this; roughly one processed frame per second might be the outcome of the code below.
while(1)
{
    if(counter % 30 == 0)
    {
        frame = cvQueryFrame(capture);
        cout << "Frame" << frame << endl;
        facedetect(frame);
    }
    counter++;
}
You can try calling waitKey(2000) after each capture.
Note that the function will not wait for exactly 2000 ms; it will wait at least 2000 ms, depending on what else is running on your computer at the time.
To achieve an accurate frame rate, you can set the capture frame rate with:
cap.set(CV_CAP_PROP_FPS, 0.5);
Personally, I would recommend using the modulo operator on the current frame index; for example, % 2 == 0 checks every second frame.
if((int)capture.get(CV_CAP_PROP_POS_FRAMES) % 2 == 0)
    // your code to save
By changing 2 to 3 or 5 you can define the interval.
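Since the original goal was one frame every 3 seconds rather than every Nth frame, a time-based variant may be closer to what is needed. Below is a minimal sketch of that idea (my own illustration, assuming an OpenCV 2.x-era C++ VideoCapture API); the file name and the facedetect call are placeholders taken from the question.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("video.flv");
    if (!cap.isOpened())
        return -1;

    double nextMsec = 0.0;               // next video timestamp we want to keep
    cv::Mat frame;
    while (cap.read(frame))
    {
        double posMsec = cap.get(CV_CAP_PROP_POS_MSEC);
        if (posMsec >= nextMsec)
        {
            nextMsec = posMsec + 3000.0; // jump ahead 3 seconds of video time
            // facedetect(frame) would go here
            std::cout << "Kept frame at " << posMsec << " ms" << std::endl;
        }
    }
    return 0;
}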
I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;

cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }

while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, the code does not seem to capture and show the current frame; instead it shows all the frames captured at the previous position and only then the ones from the new position.
So when I point the camera at the wall it shows the correct frames (the wall itself) with the expected delay, but when I turn the camera towards my computer, I first see about 3 frames of the wall and only then the computer; it seems the frames are stuck.
I've tried the VideoCapture::set() functions and set the FPS to 1, and I tried switching the capture method to cam >> matTestingNumbers (adapting the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
By the way, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed yet another of the frames previously captured at the other spot.
It is as if you photograph one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to properly freeze the window for X amount of milliseconds. Get rid of usleep()!
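As a related workaround for the stale-frame symptom described above, here is a minimal sketch (my own illustration, not from the answer, assuming OpenCV 3.x): keep reading from the camera so the driver buffer never fills with old frames, and only display one frame every ~5 seconds. The window name is taken from the question.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cam(0);
    if (!cam.isOpened())
        return -1;

    cv::Mat frame;
    int64 lastShown = cv::getTickCount();
    for (;;)
    {
        if (!cam.read(frame))       // keep draining the buffer continuously
            break;

        double elapsed = (cv::getTickCount() - lastShown) / cv::getTickFrequency();
        if (elapsed >= 5.0)         // show only the most recent frame
        {
            cv::imshow("matTestingNumbers", frame);
            lastShown = cv::getTickCount();
        }
        if (cv::waitKey(1) == 27)   // short wait keeps the window responsive
            break;
    }
    return 0;
}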
I want to subtract two successive images taken from the webcam.
As you can see, I am doing this inside a while loop. In the last line of the loop I set frame2 = frame so that I can subtract them in the next iteration, but cv::subtract prints the above error in the terminal.
What am I doing wrong?
#include <iostream>
#include "core.hpp"
#include "highgui.hpp"
#include "imgcodecs.hpp"
#include "cv.h"

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    VideoCapture cap(0); /// open the video camera no. 0 (laptop's default camera)

    /// make a writer object:
    cv::VideoWriter writer;

    if (!cap.isOpened()) /// if not success, exit program
    {
        cout << "ERROR INITIALIZING VIDEO CAPTURE" << endl;
        return -1;
    }

    char* windowName = "Webcam Feed(diff image)";
    namedWindow(windowName, WINDOW_NORMAL); /// create a window to display our webcam feed

    /// we need to define 4 arguments for initializing the writer object:
    // filename string:
    string filename = "C:\\Users\\PEYMAN\\Desktop\\ForegroundExtraction\\openCV_tutorial\\2.writing from video to file\\Payman.avi";
    // fourcc integer:
    int fcc = CV_FOURCC('D','I','V','3');
    // frames per second:
    int fps = 10;
    // frame size:
    cv::Size framesize(cap.get(CV_CAP_PROP_FRAME_WIDTH), cap.get(CV_CAP_PROP_FRAME_HEIGHT));

    /// initialize the writer object:
    writer = VideoWriter(filename, fcc, fps, framesize);

    if(!writer.isOpened()){
        cout << "Error opening the file" << endl;
        getchar();
        return -1;
    }

    int counter = 0;

    while (1) {
        Mat frame, frame2, diff_frame;

        /// read a new frame from the camera feed and save it to the variable frame:
        bool bSuccess = cap.read(frame);
        if (!bSuccess) /// test if frame successfully read
        {
            cout << "ERROR READING FRAME FROM CAMERA FEED" << endl;
            break;
        }

        /// now the last read frame is stored in the variable frame and here it is written to the file:
        writer.write(frame);

        if (counter > 0){
            cv::subtract(frame2, frame, diff_frame);
            imshow(windowName, diff_frame); /// show the difference image in the window
        }

        /// wait 1 ms for a key to be pressed
        switch(waitKey(1)){
            /// the writing from the webcam feed will go on until the user presses "esc":
            case 27:
                /// 'esc' has been pressed (ASCII value for 'esc' is 27)
                /// exit program.
                return 0;
        }

        frame2 = frame;
        counter++;
    }
    return 0;
}
Every time the while loop executes, frame2 is created and default-initialized. When you call
cv::subtract(frame2,frame,diff_frame);
you are subtracting a default-constructed Mat from a Mat that holds an image. The two Mats are not the same size, so you get the error.
You need to move the declaration of frame and frame2 outside of the while loop if you want them to retain their values after each execution of the while loop. You also need to initialize frame2 to the same size or capture a second image into it so you can use subtract the first time through.
You need to declare frame2 outside the scope of the while loop like you did with counter. Right now, you get a fresh, empty frame2 with each iteration of the loop.
You might as well move all the Mats outside the while loop so that memory doesn't have to be de-allocated at the end of each iteration and re-allocated the next, although this isn't an error and you likely won't see the performance penalty in this case.
Also, @rhcpfan is right that you need to be careful about shallow vs. deep copies. Use cv::swap(frame, frame2).
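Putting those suggestions together, here is a minimal sketch of the fixed loop (my own illustration, not the original poster's code); it assumes the same default camera and uses copyTo for a deep copy of the previous frame.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, prev, diff;          // declared outside the loop so prev survives iterations
    while (cap.read(frame))
    {
        if (!prev.empty())              // skip the very first iteration
        {
            cv::subtract(prev, frame, diff);
            cv::imshow("diff", diff);
        }
        frame.copyTo(prev);             // deep copy; prev = frame would only share the data
        if (cv::waitKey(1) == 27)
            break;
    }
    return 0;
}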
I am trying to get the fps from my camera so that I can pass it to the VideoWriter for outputting the video. However, I am getting 0 fps by calling VideoCapture::get(CV_CAP_PROP_FPS) from my camera. If I hardcode it, my video may be too slow or too fast.
#include "opencv2/opencv.hpp"
#include <stdio.h>
#include <stdlib.h>
using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
cv::VideoCapture cap;
int key = 0;
if(argc > 1){
cap.open(string(argv[1]));
}
else
{
cap.open(CV_CAP_ANY);
}
if(!cap.isOpened())
{
printf("Error: could not load a camera or video.\n");
}
Mat frame;
cap >> frame;
waitKey(5);
namedWindow("video", 1);
double fps = cap.get(CV_CAP_PROP_FPS);
CvSize size = cvSize((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),(int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));
int codec = CV_FOURCC('M', 'J', 'P', 'G');
if(!codec){ waitKey(0); return 0; }
std::cout << "CODEC: " << codec << std::endl;
std::cout << "FPS: " << fps << std::endl;
VideoWriter v("Hello.avi",-1,fps,size);
while(key != 'q'){
cap >> frame;
if(!frame.data)
{
printf("Error: no frame data.\n");
break;
}
if(frame.empty()){ break; }
v << frame;
imshow("video", frame);
key = waitKey(5);
}
return(0);
}
How can I get VideoCapture::get(CV_CAP_PROP_FPS) to return the right fps, or provide an fps to the VideoWriter that works universally for all webcams?
CV_CAP_PROP_FPS only works on video files as far as I know. If you want to capture video data from a webcam you have to time it correctly yourself, for example by using a timer to capture a frame from the webcam every 40 ms and then saving it as 25 fps video.
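One way that timing idea might look (my own sketch, assuming the same OpenCV 2.x-era setup and CV_FOURCC constants as the question; "timed.avi" is a placeholder name): grab continuously to keep the buffer fresh, but only write a frame every 40 ms of wall-clock time, so a 25 fps output file plays back at real speed.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    cap >> frame;                                   // first frame fixes the output size
    cv::VideoWriter out("timed.avi",
                        CV_FOURCC('M', 'J', 'P', 'G'),
                        25.0, frame.size());

    const int64 step = (int64)(0.040 * cv::getTickFrequency());   // 40 ms in ticks
    int64 next = cv::getTickCount();
    while (out.isOpened())
    {
        cap >> frame;                               // keep the camera buffer drained
        if (frame.empty())
            break;
        if (cv::getTickCount() >= next)             // keep roughly one frame per 40 ms
        {
            out.write(frame);
            next = cv::getTickCount() + step;
        }
        if (cv::waitKey(1) == 27)
            break;
    }
    return 0;
}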
You can use VideoCapture::set(CV_CAP_PROP_FPS) to set the desired FPS for a webcam. However, you can't use get for some reason.
Note that sometimes the driver will choose a different FPS than what you have requested depending on the limitations of the webcam.
My workaround: capture frames for a few seconds (4 is fine in my tests, with 0.5 seconds of initial delay) and estimate the fps the camera actually outputs.
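Here is a rough sketch of how such an estimate might look (my own illustration, not the answerer's code, assuming a webcam on index 0): discard frames for about half a second while the camera warms up, then count frames for about four seconds and divide by the elapsed time.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;

    // warm-up: discard frames for ~0.5 s while the camera adjusts
    int64 t0 = cv::getTickCount();
    while ((cv::getTickCount() - t0) / cv::getTickFrequency() < 0.5)
        cap.read(frame);

    // measure for ~4 s
    int frames = 0;
    double elapsed = 0.0;
    t0 = cv::getTickCount();
    while (elapsed < 4.0)
    {
        if (cap.read(frame))
            ++frames;
        elapsed = (cv::getTickCount() - t0) / cv::getTickFrequency();
    }
    std::cout << "Estimated FPS: " << frames / elapsed << std::endl;
    return 0;
}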
I've never observed CV_CAP_PROP_FPS to work. I have tried with various flavors of OpenCV 2.4.x (currently 2.4.11) using file inputs.
As a workaround in one scenario, I directly used libavformat (from ffmpeg) to get the frame rate, which I can then use in my other OpenCV code:
static double get_frame_rate(const char *filePath) {
    AVFormatContext *gFormatCtx = avformat_alloc_context();
    av_register_all();

    if (avformat_open_input(&gFormatCtx, filePath, NULL, NULL) != 0) {
        return -1;
    } else if (avformat_find_stream_info(gFormatCtx, NULL) < 0) {
        avformat_close_input(&gFormatCtx);
        return -1;
    }

    for (int i = 0; i < gFormatCtx->nb_streams; i++) {
        if (gFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            AVRational rate = gFormatCtx->streams[i]->avg_frame_rate;
            double fps = av_q2d(rate);
            avformat_close_input(&gFormatCtx);
            return fps;
        }
    }

    avformat_close_input(&gFormatCtx);
    return -1;
}
Aside from that, undoubtedly one of the slowest (although sure to work) methods of getting the average fps would be to step through each frame and divide the current frame number by the current time:
for (;;) {
    currentFrame = cap.get(CV_CAP_PROP_POS_FRAMES);
    currentTime = cap.get(CV_CAP_PROP_POS_MSEC);
    fps = currentFrame / (currentTime / 1000);
    // ... code ...
    // stop this loop when you're satisfied ...
}
You'd probably only want to do the latter if the other methods of directly finding the fps failed, and further, there were no better way to summarily get overall duration and frame count information.
The example above works on a file -- to adapt to a camera, you could use elapsed wallclock time since beginning of capture, instead of getting CV_CAP_PROP_POS_MSEC. Then the average fps for the session would be the elapsed wall clock time divided by the current frame number.
For live video from a webcam with the Python API, use cap.get(cv2.CAP_PROP_FPS).
I'm capturing frames from a webcam using OpenCV in a C++ app, both on my Windows machine and on a Raspberry Pi (ARM, Debian Wheezy). The problem is CPU usage. I only need to process a frame about every 2 seconds - no real-time live view. But how do I achieve that? Which of the following would you suggest?
Grab each frame but process only some: this helps a bit. I get the most recent frames, but this option has no significant impact on CPU usage (less than 25%).
Grab/process each frame but sleep: good impact on CPU usage, but the frames I get are old (5-10 seconds).
Create/destroy VideoCapture in each cycle: after some cycles the application crashes, even though VideoCapture is cleaned up correctly.
Any other idea?
Thanks in advance
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <unistd.h>
#include <stdio.h>

using namespace std;

int main(int argc, char *argv[])
{
    cv::VideoCapture cap(0); // 0=default, -1=any camera, 1..99=your camera
    if(!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        return 0;
    }

    // set resolution & frame rate (FPS)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS, 5);

    int i = 0;
    cv::Mat frame;
    for(;;)
    {
        if (!cap.grab())
            continue;

        // Version 1: dismiss frames
        i++;
        if (i % 50 != 0)
            continue;

        if( !cap.retrieve(frame) || frame.empty() )
            continue;

        // ToDo: manipulate your frame (image processing)

        if(cv::waitKey(255) == 27)
            break; // stop on ESC key

        // Version 2: sleep
        //sleep(1);
    }
    return 0;
}
Regarding creating/destroying VideoCapture in each cycle: I haven't tested it yet.
It may be troublesome on Windows (and maybe on other operating systems too): the first frame grabbed after creating a VideoCapture is usually black or gray, and only the second frame should be fine :)
Other ideas:
- Modified idea nr 2: after the sleep, grab 2 frames. The first frame may be old, but the second should be new. This is untested and I'm generally not sure about it, but it's easy to check (see the sketch after this list).
- Alternatively, after the sleep you could grab frames in a while loop (without sleeping) until you grab the same frame twice, but that may be hard to achieve, especially on a Raspberry Pi.
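Here is an untested sketch of modified idea nr 2 (my own illustration, assuming the POSIX sleep() already included in the question's code): the process stays idle between captures, and the second grab should discard the possibly stale buffered frame.

#include <opencv2/opencv.hpp>
#include <unistd.h>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 0;

    cv::Mat frame;
    for (;;)
    {
        sleep(2);                 // idle for ~2 seconds, almost no CPU use
        cap.grab();               // first grab may return a stale buffered frame
        cap.grab();               // second grab should be (closer to) current
        if (!cap.retrieve(frame) || frame.empty())
            continue;

        // ToDo: process the frame here
        cv::imshow("frame", frame);
        if (cv::waitKey(30) == 27)
            break;                // stop on ESC key
    }
    return 0;
}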
I am an OpenCV and C++ beginner and I've got a problem with my student project. My tutor wants to grab frames from a camera and save them as JPEGs. At first I used cvCreateCameraCapture, cvQueryFrame, and cvSaveImage, and it worked OK, but the frames are relatively big (about 2500x2000) and it takes about 1 second to save each one, while my tutor requires saving at least 10 frames per second.
Then I came up with the idea of saving the raw data first and converting it to JPEG after the grabbing is done, so I wrote the following test code. The problem is that all the saved images are the same; they seem to contain only the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
    IplImage* m_Frame = NULL;
    TRACE("m_Frame initialed");
    CvCapture* m_Video = NULL;
    m_Video = cvCreateCameraCapture(0);

    IplImage** Temp_Frame = (IplImage**)new IplImage*[100];
    for(int j = 0; j < 100; j++){
        Temp_Frame[j] = new IplImage[112];
    }
    TRACE("Tempt_Frame initialed\n");

    cvNamedWindow("video", 1);
    int t = 0;
    while(m_Frame = cvQueryFrame(m_Video)){
        for(int k = 0; k < m_Frame->nSize; k++){
            Temp_Frame[t][k] = m_Frame[k];
        }
        cvWaitKey(30);
        t++;
        if(t == 100){
            break;
        }
    }

    for(int i = 0; i < 30; i++){
        CString ImagesName;
        ImagesName.Format(_T("Image%.3d.jpg"), i);
        if(cvWaitKey(20) == 27) {
            break;
        }
        else{
            cvSaveImage(ImagesName, Temp_Frame[i]);
        }
    }

    cvReleaseCapture(&m_Video);
    cvDestroyWindow("video");
    TRACE("cvDestroy works\n");
    delete [] Temp_Frame;
}
If you are using C++, why not use the C++ OpenCV interface?
The reason you get the same image N times is that the capture reuses the same memory for each frame; if you want to store the frames, you need to copy them. Example using the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("image", 1);

    std::vector<cv::Mat> images(100);
    for(int i = 0; i < 100; ++i) {
        // this is optional: preallocation so there's no allocation
        // during capture
        images[i].create(480, 640, CV_8UC3);
    }

    for(int i = 0; i < 100; ++i)
    {
        Mat frame;
        cap >> frame; // get a new frame from the camera
        frame.copyTo(images[i]);
    }
    cap.release();

    for(int i = 0; i < 100; ++i)
    {
        imshow("image", images[i]);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
Do you have a multicore/multi-CPU system? Then you could farm out the 1-second saving tasks across 16 cores and save 16 frames per second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert it to JPEG. 5 Mpixel * 3 color channels * 10 fps is 150 MB/s (thanks etarion!), which you can manage with two disks and Windows RAID.
Edit: if you only need 10 frames, just buffer them in memory and then write them out, as the other answer shows.
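As an illustration of the raw-dump idea (a hypothetical sketch, not the answerer's code), the capture loop below writes only raw pixel bytes, assuming the frame size and type stay constant; a second pass reads them back and encodes to JPEG afterwards. The file name "frames.raw" is a placeholder.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    if (!cap.read(frame))           // first frame fixes width/height/type
        return -1;
    const int w = frame.cols, h = frame.rows, type = frame.type();
    const size_t bytes = frame.total() * frame.elemSize();

    // capture phase: append raw pixel bytes only (no JPEG encoding cost)
    FILE* raw = std::fopen("frames.raw", "wb");
    if (!raw)
        return -1;
    std::fwrite(frame.data, 1, bytes, raw);
    for (int i = 1; i < 100 && cap.read(frame); ++i)
        std::fwrite(frame.data, 1, bytes, raw);
    std::fclose(raw);

    // offline phase: read the bytes back and encode to JPEG at leisure
    raw = std::fopen("frames.raw", "rb");
    cv::Mat img(h, w, type);
    char name[64];
    for (int i = 0; std::fread(img.data, 1, bytes, raw) == bytes; ++i)
    {
        std::sprintf(name, "Image%03d.jpg", i);
        cv::imwrite(name, img);
    }
    std::fclose(raw);
    return 0;
}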
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
This question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to the disk is the same!