OpenCV slowing down WebCam capture - c++

I'm capturing frames from a webcam using OpenCV in a C++ app, both on my Windows machine and on a Raspberry Pi (ARM, Debian Wheezy). The problem is the CPU usage. I only need to process a frame roughly every 2 seconds - no real-time live view. How can I achieve that, and which of the options below would you suggest?
- Grab every frame but process only some: this helps a bit and I get the most recent frames, but it has no significant impact on CPU usage (less than 25%).
- Grab/process every frame but sleep: good impact on CPU usage, but the frames I get are old (5-10 seconds).
- Create/destroy the VideoCapture in each cycle: after some cycles the application crashes, even though the VideoCapture is cleaned up correctly.
Any other idea?
Thanks in advance
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <unistd.h>
#include <stdio.h>
using namespace std;
int main(int argc, char *argv[])
{
    cv::VideoCapture cap(0); // 0=default, -1=any camera, 1..99=your camera
    if (!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        return 0;
    }

    // set resolution & frame rate (FPS)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS, 5);

    int i = 0;
    cv::Mat frame;
    for (;;)
    {
        if (!cap.grab())
            continue;

        // Version 1: dismiss frames
        i++;
        if (i % 50 != 0)
            continue;

        if (!cap.retrieve(frame) || frame.empty())
            continue;

        // ToDo: manipulate your frame (image processing)

        if (cv::waitKey(255) == 27)
            break; // stop on ESC key

        // Version 2: sleep
        //sleep(1);
    }
    return 0;
}

Create/destroy VideoCapture in each cycle: I haven't tested this yet.
It may be a bit troublesome on Windows (and maybe on other operating systems too) - the first frame grabbed after creating a VideoCapture is usually black or gray; the second frame should be fine :)
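If you do re-create the capture each cycle, a minimal sketch of working around that warm-up frame could look like this (untested, only illustrating the note above):
cv::VideoCapture cap(0);
if (cap.isOpened())
{
    cv::Mat dummy, frame;
    cap.read(dummy); // discard the first (often black or gray) frame
    if (cap.read(frame) && !frame.empty())
    {
        // process the frame here
    }
    cap.release(); // destroy the capture until the next cycle
}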
Other ideas:
- Modified idea no. 2: after the sleep, grab 2 frames. The first frame may be old, but the second should be fresh. I haven't tested this and I'm not entirely sure about it, but it's easy to check (see the sketch after this list).
- Alternatively, after the sleep you could grab frames in a while loop (without sleeping) until you grab the same frame twice (but that may be hard to achieve, especially on a Raspberry Pi).
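A minimal sketch of that modified idea no. 2 (sleep, then grab twice and keep the second frame), reusing cap and frame from the question's code; it is untested and only illustrates the suggestion:
for (;;)
{
    sleep(2); // idle instead of burning CPU on frames we don't need
    cap.grab(); // the first grab may still return an old, buffered frame
    if (!cap.grab()) // the second grab should be a fresh one
        continue;
    if (!cap.retrieve(frame) || frame.empty())
        continue;
    // process the (hopefully) fresh frame here
    if (cv::waitKey(30) == 27)
        break; // stop on ESC key
}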

Related

Is there a way to avoid buffering images when using RTSP camera in OpenCV?

Let's say we have a time-consuming process that takes about 2 seconds to complete on any frame.
As we capture frames with VideoCapture, they are stored in a buffer behind the scenes (it may even happen in the camera itself), and when processing of the nth frame completes, the next buffered frame (the (n+1)th) is grabbed. However, I would like to grab frames in real time, not in order (i.e. skip the intermediate frames).
For example, you can test the example below:
cv::Mat frame;
cv::VideoCapture cap("rtsp url");
while (true) {
    cap.read(frame);
    cv::imshow("s", frame);
    cv::waitKey(2000); // artificial delay
}
Run the example above and you will see that the frame being shown belongs to the past, not the present. How can I skip those frames?
NEW METHOD
After you open the VideoCapture, you can set the following property in OpenCV.
cv::VideoCapture cap(0);
cap.set(CV_CAP_PROP_BUFFERSIZE, 1); // now OpenCV buffers only one frame
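Note that not every capture backend honors CV_CAP_PROP_BUFFERSIZE. As a fallback, here is a rough sketch of draining whatever is buffered right before you actually need a frame (the loop count of 5 is an arbitrary assumption):
cv::Mat frame;
cv::VideoCapture cap("rtsp url");
cap.set(CV_CAP_PROP_BUFFERSIZE, 1); // may be ignored by some backends
while (true) {
    for (int i = 0; i < 5; ++i) // throw away stale buffered frames
        cap.grab();
    if (!cap.retrieve(frame) || frame.empty())
        continue;
    cv::imshow("s", frame);
    cv::waitKey(2000); // artificial delay standing in for the slow processing
}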
USING THREAD
I completed the task using multi-threaded programming. What is the point? If you are using a weak processor such as a Raspberry Pi, or you have a large algorithm that takes a long time to run, you may experience this problem, which leads to a large lag in your frames.
I solved this in Qt using the QtConcurrent class, which can run a thread easily and with very little code. Basically, the main thread acts as the processing unit and a second thread continuously captures frames, so when the main thread finishes processing one frame, it asks the second thread for another. In the second thread it is very important to capture frames without delay; therefore, if the processing takes two seconds and a frame is captured every 0.2 seconds, the intermediate frames are simply lost.
The project is as follows:
1.main.cpp
#include <opencv2/opencv.hpp>
#include <new_thread.h> // custom class used to grab frames (we dedicate a new thread to it)
#include <QtConcurrent/QtConcurrent>
#include <QThread> // needed to generate the artificial delay, which in a real situation would come from a weak processor or a long-running algorithm
using namespace cv;

int main()
{
    Mat frame;
    new_thread th; // create an object of the new_thread class
    QtConcurrent::run(&th, &new_thread::get_frame); // start a thread with QtConcurrent
    QThread::msleep(2000); // give the camera some time to open

    while (true)
    {
        frame = th.return_frame();
        imshow("frame", frame);
        waitKey(20);
        QThread::msleep(2000); // artificial delay, or a big processing step
    }
}
2.new_thread.h
#include <opencv2/opencv.hpp>
using namespace cv;

class new_thread
{
public:
    new_thread();         // constructor
    void get_frame(void); // grab frames continuously
    Mat return_frame();   // return the latest frame to main.cpp
private:
    VideoCapture cap;
    Mat frame;            // latest captured frame
};
3.new_thread.cpp
#include <new_thread.h>
#include <QThread>

new_thread::new_thread()
{
    cap.open(0); // open the camera when the class is constructed
}

void new_thread::get_frame() // grab frames
{
    while (1) // while(1) because we want to grab frames continuously
    {
        cap.read(frame);
    }
}

Mat new_thread::return_frame()
{
    return frame; // whenever main.cpp needs an updated frame, it can request the last one via this method
}
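For readers who don't use Qt, the same grab-in-a-background-thread idea can be sketched with std::thread and a mutex. This is only a sketch, not the answerer's code; unlike the Qt version above, it guards the shared Mat so the processing loop never reads a half-written frame:
#include <opencv2/opencv.hpp>
#include <thread>
#include <mutex>
#include <atomic>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat latest;
    std::mutex m;
    std::atomic<bool> running{true};

    // grabber thread: keep overwriting 'latest' with the newest frame
    std::thread grabber([&]() {
        cv::Mat f;
        while (running) {
            if (cap.read(f)) {
                std::lock_guard<std::mutex> lock(m);
                f.copyTo(latest);
            }
        }
    });

    while (true) {
        cv::Mat frame;
        {
            std::lock_guard<std::mutex> lock(m);
            latest.copyTo(frame);
        }
        if (!frame.empty())
            cv::imshow("frame", frame);
        if (cv::waitKey(2000) == 27) // stand-in for the slow processing step
            break;
    }
    running = false;
    grabber.join();
    return 0;
}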

OpenCV 3.2 reads MPEG files too slowly

I am going through the book Learning OpenCV 3 and testing out video example 2.3. I could edit, compile, and run it, but the problem is that it closes down immediately.
// DisplayPicture.cpp : Defines the entry point for the console application.
//
//#include "opencv2/opencv.hpp" // Include file for every supported OpenCV function
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/videoio.hpp>
#include <stdio.h>
#include <string.h>
using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    namedWindow("video3", WINDOW_AUTOSIZE);
    VideoCapture cap;
    cap.open(string(argv[1]));
    int tell = 0;
    Mat frame;
    for (;;) {
        cap >> frame;
        //waitKey(30);
        if (frame.empty())
        {
            break;
            // end of film
        }
        imshow("video3", frame);
    }
    return 0;
}
I found that my computer processed the data too fast: it could not read the next frame fast enough, if (frame.empty()) became true, and the program reached the break statement and ended.
By adding a waitKey of 30 milliseconds before viewing the image frame, the video program works very well - at least I can view the video. Since this example is from the 'bible' it should work, but it doesn't on my computer.
I am running an MSI GT72 2PE with an Nvidia GTX 880M; not sure if that matters.
I assume that adding a waitKey(30) is not the proper fix, so I am seeking suggestions as to what could be done differently.
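For reference, here is a sketch of the playback loop with the delay the asker describes, but with the wait derived from the file's reported frame rate instead of a hard-coded 30 ms (CAP_PROP_FPS can return 0 for some containers, hence the fallback; this is only a sketch, not a verified fix):
double fps = cap.get(CV_CAP_PROP_FPS);
int delay = (fps > 0) ? cvRound(1000.0 / fps) : 30; // fall back to ~33 fps pacing

Mat frame;
for (;;) {
    cap >> frame;
    if (frame.empty())
        break; // end of film
    imshow("video3", frame);
    if (waitKey(delay) == 27)
        break; // pace playback; ESC quits early
}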

OpenCV: Getting black frame when trying to get camera output

I am currently working on a camera capture project. The problem is that, even though the camera opens correctly, sometimes cap.read(frame) returns a black frame. After some tests, I noticed a workaround: putting a Sleep of 1-2 seconds after cap.open() solves the problem. Without the sleep, it seems that cap.read() is performed before the camera is actually ready for capture; but instead of failing and returning false, it just grabs a black frame. Also, checking frame.empty() does not help, since the frame is not empty, it is just all black.
Any idea how to solve this, of course without having to sleep for a while after opening the camera?
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <Windows.h>

// .......... previous not relevant code
VideoCapture cap;
cap.open(0); // open the default camera
if (cap.isOpened()) // check if we succeeded
{
    // Sleep(2000);
    Mat frame;
    while (!cap.read(frame))
    {
        MessageBoxA(0, "unable to read frame", "CamDll", 0);
    }
    // Process frame data...
}
I suggest using cv::countNonZero(InputArray src) to check whether the frame is black. Note that countNonZero expects a single-channel matrix, so flatten the BGR frame first, e.g. with reshape:
if (cv::countNonZero(frame.reshape(1)) == 0)
{
    // discard frame
}
else
{
    // proceed with frame
}
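Folded into the capture start-up, that check might look roughly like this (an untested sketch; the reshape(1) is there because countNonZero works on single-channel matrices):
cap.open(0);
if (cap.isOpened())
{
    Mat frame;
    // keep reading until we get a frame that is neither missing nor all black
    while (cap.read(frame))
    {
        if (!frame.empty() && cv::countNonZero(frame.reshape(1)) > 0)
            break; // camera has warmed up and delivered real data
    }
    // Process frame data...
}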

OpenCV very slow - webcam

I'm using OpenCV to send camera frames as bytes via UDP to another computer.
The problem is that the frame rate from the camera is only 15 fps. If I send a picture, it works at over 200 fps.
My camera supports 30 fps, so does anyone know why this happens?
Here's my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <fstream>
#include <arpa/inet.h>
#include <unistd.h>
using namespace std;
using namespace cv;

void createSocket(int port, const char* ip);
void sendData(vector<uchar> buff);

int _socket;
struct sockaddr_in serverAdress, clientAdress;

// argv[1] = image
int main(int argc, char** argv)
{
    if (argc != 5)
    {
        cout << "Error\n";
        return -1;
    }
    createSocket(atoi(argv[2]), argv[1]);

    vector<uchar> buff;
    vector<int> param = vector<int>(2);
    param[0] = CV_IMWRITE_JPEG_QUALITY;
    param[1] = atoi(argv[3]);

    // VideoCapture
    VideoCapture cap(1);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);
    if (!cap.isOpened())
    {
        cout << "Error\n";
        return -1;
    }

    Mat frame;
    while (true) {
        cap >> frame;
        cvtColor(frame, frame, CV_8U);
        imencode(".jpeg", frame, buff, param);
        cout << "coded file size (jpg): " << buff.size() << endl;
        sendData(buff);
    }
    return 0;
}

void createSocket(int port, const char* ip) {
    _socket = socket(AF_INET, SOCK_DGRAM, 0);
    serverAdress.sin_family = AF_INET;
    serverAdress.sin_addr.s_addr = inet_addr(ip);
    serverAdress.sin_port = htons(port);
    if (_socket == -1) {
        cout << "Error";
        return;
    }
}

void sendData(vector<uchar> buff) {
    char data[buff.size()];
    for (int i = 0; i < buff.size(); i++) {
        data[i] = buff.at(i);
    }
    sendto(_socket, data, buff.size(), 0, (struct sockaddr *)&serverAdress, sizeof(serverAdress));
}
While 30 FPS might be a theoretical maximum for your camera, it might only be reachable on a sunny day, outside. I have a sneaking suspicion that the sub-par acquisition frequency is caused by insufficient lighting: the camera simply needs more time to catch enough photons to produce an image of the desired quality. I'm not sure what your acquisition conditions are, but things might improve significantly with more intense light. If you have access to the internal settings of the camera, you could also try reducing the exposure time. That would, however, require an increase in gain to keep proper contrast, and the images will exhibit more noise.
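If the driver exposes these controls through OpenCV, adjusting them could look roughly like the lines below. The property names exist in OpenCV, but support and value ranges vary a lot between cameras and backends, and the numbers here are placeholders only:
cap.set(CV_CAP_PROP_AUTO_EXPOSURE, 0.25); // on V4L2 this often switches to manual exposure
cap.set(CV_CAP_PROP_EXPOSURE, -6);        // shorter exposure time (scale is driver specific)
cap.set(CV_CAP_PROP_GAIN, 80);            // compensate with more gain (adds noise)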
I believe UDP has a maximum theoretical data size of approximately 65 KB per packet, but in practice datagrams are usually kept much smaller (around the network MTU) for safer transmission. Seeing as the video is captured at 640x360 resolution, each encoded frame can still easily exceed what fits in a single datagram, so it is unrealistic to expect the frame rate of your transmission over a constrained pipe to equal the frame rate at which the camera records. If you want a faster frame rate at the receiver, either reduce the resolution of your video or use an appropriate library like GStreamer, which offers H.264 compression of the video before transmission.
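To stay under a safe datagram size you could, for example, split the encoded buffer into chunks before calling sendto. The 1400-byte chunk size is just an MTU-friendly guess, and you would still need your own sequencing/reassembly on the receiver; this is only a sketch:
void sendData(const vector<uchar>& buff) {
    const size_t chunkSize = 1400; // assumed MTU-friendly payload size
    for (size_t offset = 0; offset < buff.size(); offset += chunkSize) {
        size_t len = min(chunkSize, buff.size() - offset);
        sendto(_socket, reinterpret_cast<const char*>(buff.data()) + offset, len, 0,
               (struct sockaddr *)&serverAdress, sizeof(serverAdress));
    }
}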
Another option would be to perform some processing on each frame via OpenCV (such as a simple filter); this should help reduce the size of each frame.
Also, what do you mean by "if I send a picture, it works at over 200 fps"? That statement does not appear to make sense.

Can I capture at 20 fps with C++ and OpenCV?

I have a Raspberry Pi with OpenCV and Guvcview installed on it. When I open Guvcview, I get roughly 17-21 fps, but when I run a simple C++ program with OpenCV (which only captures from the webcam and displays the frame), I get only 6 fps.
What is wrong? Do I need to configure OpenCV to use Guvcview's configuration? Why does Guvcview get 20 fps? What can I do?
Thanks.
P.S. I've done the same on my computer and I get 29 fps in both cases.
//********************************* this is the C++ code:
#include <iostream>
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;

time_t start, end; // variables of type time_t, hold the time in seconds
int counter = 0;   // initialize the counter at declaration

int main()
{
    time(&start);
    VideoCapture cap(1);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    if (!cap.isOpened())
    {
        cout << "could not capture";
        return 0;
    }

    Mat frame;
    namedWindow("camera", 1);
    char key = 'a';
    while (key != 27)
    {
        cap.read(frame);
        imshow("camera", frame);

        //##################
        // time at the end of one show; stop the clock and show FPS
        time(&end);
        ++counter;
        cout << "fps: " << counter / difftime(end, start) << endl << endl;
        //##################

        key = waitKey(3);
    }
    destroyAllWindows();
    return 0;
}
OpenCV is a heavyweight API, and the following tips may bring minor improvements:
you can disable RGB conversion:
cap.set(CV_CAP_PROP_CONVERT_RGB , false);
you can increase frame rate if its default frame rate is low:
cap.set(CV_CAP_PROP_FPS , 60);
I'd suggest doing direct video capture via V4L, since OpenCV may do YUYV-to-RGB transformations and other stuff that involves floating-point calculations, which are expensive on this kind of hardware. We have done many robotics projects on embedded systems, and the rule of thumb is that you will always be better off either using V4L directly or using small third-party libraries like CMVision (http://www.cs.cmu.edu/~jbruce/cmvision/) to do image processing on embedded systems.
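Before dropping down to raw V4L, one thing that is sometimes worth trying from OpenCV itself is to request MJPG from the camera; it mainly helps when USB bandwidth is the bottleneck, and the JPEG decode still costs CPU, so results on a Raspberry Pi may vary (treat this as an untested sketch):
VideoCapture cap(1);
cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G')); // ask the camera for MJPG frames
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
cap.set(CV_CAP_PROP_FPS, 30);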