OpenCV - Haar cascade - Face tracking is very slow - C++

I have developed a project that tracks faces from a camera feed using the OpenCV library.
I use a Haar cascade with haarcascade_frontalface_alt.xml to detect faces.
My problem: when a frame captured from the webcam doesn't contain any faces, face detection becomes very slow, so the images from the camera, which are shown to the user continuously, are delayed.
My source code:
void camera()
{
    String face_cascade_name = "haarcascade_frontalface_alt.xml";
    String eye_cascade_name = "haarcascade_eye_tree_eyeglasses.xml";
    CascadeClassifier face_cascade;
    CascadeClassifier eyes_cascade;
    String window_name = "Capture - Face detection";
    VideoCapture cap(0);
    if (!face_cascade.load(face_cascade_name))
        printf("--(!)Error loading\n");
    if (!eyes_cascade.load(eye_cascade_name))
        printf("--(!)Error loading\n");
    if (!cap.isOpened())
    {
        cerr << "Capture Device ID " << 0 << " cannot be opened." << endl;
    }
    else
    {
        Mat frame;
        vector<Rect> faces;
        vector<Rect> eyes;
        Mat original;
        Mat frame_gray;
        Mat face;
        Mat processedFace;
        for (;;)
        {
            cap.read(frame);
            original = frame.clone();
            cvtColor(original, frame_gray, CV_BGR2GRAY);
            equalizeHist(frame_gray, frame_gray);
            face_cascade.detectMultiScale(frame_gray, faces, 2, 0,
                0 | CASCADE_SCALE_IMAGE, Size(200, 200));
            if (faces.size() > 0)
                rectangle(original, faces[0], Scalar(0, 0, 255), 2, 8, 0);
            namedWindow(window_name, CV_WINDOW_AUTOSIZE);
            imshow(window_name, original);
            if (waitKey(30) == 27)
                break;
        }
    }
}

The Haar classifier is relatively slow by nature. Furthermore, there is not much optimization you can do to the algorithm itself, because detectMultiScale is already parallelized in OpenCV.
One note about your code: do you ever actually get faces detected with a minSize of Size(200, 200)? Though certainly, the bigger the minSize, the better the performance.
Try scaling the image before detecting anything:
const int scale = 3;
cv::Mat resized_frame_gray( cvRound( frame_gray.rows / scale ), cvRound( frame_gray.cols / scale ), CV_8UC1 );
cv::resize( frame_gray, resized_frame_gray, resized_frame_gray.size() );
face_cascade.detectMultiScale(resized_frame_gray, faces, 1.1, 3, 0 | CASCADE_SCALE_IMAGE, Size(20, 20));
(don't forget to change minSize to a more reasonable value and to convert the detected face locations back to the original scale)
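For example, a minimal sketch of mapping the detections back to the full-resolution frame, assuming the same scale factor as above:
// Scale every detected rectangle back up to the original image coordinates.
for (cv::Rect& face : faces)
{
    face.x *= scale;
    face.y *= scale;
    face.width *= scale;
    face.height *= scale;
}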
Reducing the image size by a factor of 2, 3, or 5 is a great performance relief for any image processing algorithm, especially for costly operations like detection.
As mentioned before, if resizing doesn't do the trick, try finding other bottlenecks with a profiler.
You can also switch to an LBP classifier, which is considerably faster though less accurate.
Hope this helps.

Maybe it will be useful for you:
There is the Simd Library, which has an implementation of HAAR and LBP cascade classifiers. It can use the standard HAAR and LBP cascades from OpenCV. This implementation has SIMD optimizations using SSE4.1, AVX2 and NEON (ARM), so it works 2-3 times faster than the original OpenCV implementation.

I use Haar cascade classifiers regularly, and easily get 15 frames/second for face detection on 640x480 images on an Intel PC/Mac (Windows/Ubuntu/OS X) with 4 GB RAM and a 2 GHz CPU. What is your configuration?
Here are a few things that you can try.
You don't have to create the window (namedWindow(window_name, CV_WINDOW_AUTOSIZE);) for every frame. Just create it once before the loop and update the image.
You can try how fast it runs without histogram equalization. It is not always required with a webcam.
As suggested by Micka above, you should check whether your program runs in Debug mode or Release mode.
Use a profiler to see where the bottleneck is.
In case you haven't done it yet, have you measured the frame rate you get if you comment out face detection and the rectangle drawing? A sketch of these last checks follows.
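For example, a rough sketch that creates the window once and times each iteration with cv::getTickCount (it reuses the cap, frame, and window_name variables from your code):
namedWindow(window_name, CV_WINDOW_AUTOSIZE); // create the window once, before the loop
for (;;)
{
    int64 start = cv::getTickCount();
    cap.read(frame);
    // face detection and rectangle drawing commented out to measure the baseline
    imshow(window_name, frame);
    if (waitKey(1) == 27)
        break;
    double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
    cout << "FPS: " << (1.0 / seconds) << endl;
}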

You can use an LBP cascade to detect faces. It is much more lightweight. You can find lbpcascade_frontalface.xml in the OpenCV source directory.
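Loading it works the same way as with the Haar cascade; a minimal sketch (the path will depend on your installation):
CascadeClassifier lbp_cascade;
if (!lbp_cascade.load("lbpcascade_frontalface.xml"))
    printf("--(!)Error loading LBP cascade\n");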

Related

OpenCV FAST Algorithm creating skewed keypoints on only part of an image

I'm trying to use OpenCV's FAST corner detection algorithm to get an outline of an image of a ball (not my final project; I'm using it as a simple example). For some reason, it only works on a third of the input Mat and stretches the keypoints across the image. I'm not sure what could be going wrong here to make the FAST algorithm not apply to the entire Mat.
Code:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
    int fast_threshold = 20;
    bool nonmaxSuppression = true;
    FAST(imgIn, pointsOut, fast_threshold, nonmaxSuppression);
}

int main(int argc, char** argv) {
    Mat out = imread("ball.jpg", IMREAD_COLOR);
    // Detect features
    std::vector<KeyPoint> keypoints;
    featureDetection(out.clone(), keypoints);
    Mat out2 = out.clone();
    // Draw features (Normal, missing right side)
    for(KeyPoint p : keypoints) {
        drawMarker(out, Point(p.pt.x / 3, p.pt.y), Scalar(0, 255, 0));
    }
    imwrite("out.jpg", out, std::vector<int>(0));
    // Draw features (Stretched)
    for(KeyPoint p : keypoints) {
        drawMarker(out2, Point(p.pt.x, p.pt.y), Scalar(127, 0, 255));
    }
    imwrite("out2.jpg", out2, std::vector<int>(0));
}
Input image
Output 1 (keypoint.x multiplied by a factor of 1/3, but missing right side)
Output 2 (Coordinates untouched)
I'm using OpenCV 4.5.4 on MinGW.
Most keypoint detectors expect grayscale images as input.
If you interpret the memory of a BGR image as grayscale, you will have 3 times the number of pixels per row, so the x coordinates of the keypoints appear stretched by a factor of 3 and only the left third of the markers lands on the visible image, which matches your two outputs. The y axis is still OK if the algorithm uses the width offset per row, which most algorithms do (because this is useful when subimaging or padding is used).
I don't know whether it is a bug or a feature that FAST doesn't check the number of channels and doesn't throw an exception if the wrong number of channels is given.
You can convert the image to grayscale with cv::cvtColor using the flag cv::COLOR_BGR2GRAY.
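A minimal sketch of the fix inside the question's helper function:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
    // FAST expects a single-channel image, so convert BGR to grayscale first.
    Mat gray;
    cv::cvtColor(imgIn, gray, cv::COLOR_BGR2GRAY);
    int fast_threshold = 20;
    bool nonmaxSuppression = true;
    FAST(gray, pointsOut, fast_threshold, nonmaxSuppression);
}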

Why is Haar cascade very slow - OpenCV C++

I am using Haar cascades to detect frontal faces. I have the code below:
int main()
{
    Mat image;
    cv::VideoCapture cap;
    cap.open(1);
    int frame_idx = 0;
    time_t fpsStartTime, fpsEndTime;
    time(&fpsStartTime);
    for (;;)
    {
        frame_idx = frame_idx + 1;
        cap.read(image);
        CascadeClassifier face_cascade;
        face_cascade.load("<PATH");
        std::vector<Rect> faces;
        face_cascade.detectMultiScale(image, faces, 1.1, 2, 0 | cv::CASCADE_SCALE_IMAGE, Size(30, 30));
        // Draw circles on the detected faces
        for (int i = 0; i < faces.size(); i++)
        {
            Point center(faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5);
            ellipse(image, center, Size(faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
        }
        cv::imshow("Detected Face", image);
        char k = cv::waitKey(1);
        if (k == 27)
            break;
        time(&fpsEndTime);
        double seconds = difftime(fpsEndTime, fpsStartTime);
        double fps = frame_idx / seconds;
        std::string fps_txt = "FPS: " + std::to_string(fps);
        cout << "FPS : " << fps_txt << endl;
    }
    return 0;
}
This code works but gives very low FPS, ~1 fps, which is very slow. I am running it on a Windows 10 laptop with an Intel i5 CPU. I believe it should not be this slow.
In debug mode it gives ~1 fps, but in release mode it is 4-5 fps, which is again very slow. I have run some OpenVINO demos, such as pedestrian detection, which uses two OpenVINO models on the same hardware and gives ~17-20 fps, which is very good.
I am using a USB 3.0 Logitech Brio 4K camera, so the camera cannot be the reason for the low fps. My question is: why are Haar cascades performing so slowly, and is there any way to enhance the speed and make it more usable? Please help. Thanks.
You should not (re)load the classifier on every frame. It should be loaded once, before processing frames.
Move the following statements out of the for loop.
CascadeClassifier face_cascade;
face_cascade.load("<PATH");
See a demo on OpenCV Docs.
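A minimal sketch of the corrected structure (the cascade path is left as the placeholder from the question):
CascadeClassifier face_cascade;
face_cascade.load("<PATH"); // load once, before the capture loop
for (;;)
{
    cap.read(image);
    std::vector<Rect> faces;
    face_cascade.detectMultiScale(image, faces, 1.1, 2, 0 | cv::CASCADE_SCALE_IMAGE, Size(30, 30));
    // ... draw, display, and measure FPS as before ...
}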
Can you confirm that you are using the right .lib and .dll files?
I have checked and seen that opencv_world440.lib & opencv_world440.dll provide much better speed than opencv_world440d.lib & opencv_world440d.dll.
My guess is that opencv_world440d.lib & opencv_world440d.dll are the debug builds, hence the slow speed.
Note: your lib name may vary, i.e., opencv_world<SomeNumber>d.lib vs. opencv_world<SomeNumber>.lib.

Compare a predefined image with an image captured in real time in OpenCV C++

I am doing a project on automatic fabric defect detection. For it I developed an algorithm using the FFT (Fast Fourier Transform), and it works fine in OpenCV C++ on Ubuntu 14.04. Now I want to make it run in real time: I have to capture an image every 2 s and process it with my algorithm. I need ideas on how to capture images with a webcam in OpenCV C++ and how to process the image being captured. Please help if anyone knows how to do this. Thank you in advance.
You can follow the guidance given by OpenCV; they provide plenty of examples, such as the following sample code from the OpenCV dev team:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
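For the capture-every-2-seconds requirement, a minimal sketch along the same lines; processFabricImage is a hypothetical stand-in for your FFT-based algorithm:
#include "opencv2/opencv.hpp"
using namespace cv;

// Hypothetical placeholder for the FFT-based defect detection.
static void processFabricImage(const Mat& img)
{
    // ... run the defect-detection algorithm here ...
    imshow("captured", img);
}

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;
    Mat frame;
    for (;;)
    {
        cap >> frame; // grab the latest frame
        if (frame.empty())
            break;
        processFabricImage(frame);
        if (waitKey(2000) >= 0) // simple ~2 second delay between captures
            break;
    }
    return 0;
}
Note that waitKey(2000) is only a crude delay; some camera drivers buffer frames, so the next read may return a slightly stale image.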

OpenCV 2.4.10 Face detection works with video but fails to detect in a static image

I'm using OpenCV's Cascade Classifier in order to detect faces. I followed the webcam tutorial, and I was able to use detectMultiScale to find and track my face while it was streaming video from my laptop's webcam.
But when I take a photo of myself with my laptop's webcam, load that image into OpenCV, and apply detectMultiScale to it, for some reason the Cascade Classifier can't detect any faces in that static image!
That static image would definitely have been detected if it were one frame from my webcam stream, but when I use that one individual image alone, nothing is detected.
Here's the code I use (just picked out the relevant lines):
Code in Common:
String face_cascade_name = "/path/to/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;

Mat imagePreprocessing(Mat frame) {
    Mat processed_frame;
    cvtColor( frame, processed_frame, COLOR_BGR2GRAY );
    equalizeHist( processed_frame, processed_frame );
    return processed_frame;
}
For Web-cam streaming face detection:
int detectThroughWebCam() {
    VideoCapture capture;
    Mat frame;
    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; };
    //-- 2. Read the video stream
    capture.open( -1 );
    if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
    while ( capture.read(frame) )
    {
        if(frame.empty()) {
            printf(" --(!) No captured frame -- Break!");
            break;
        }
        //-- 3. Apply the classifier to the frame
        Mat processed_image = imagePreprocessing( frame );
        vector<Rect> faces;
        face_cascade.detectMultiScale( processed_image, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
        if (faces.size() > 0) cout << "SUCCESS" << endl;
        int c = waitKey(10);
        if( (char)c == 27 ) { break; } // escape
    }
    return 0;
}
For my static image face detection:
void staticFaceDetection() {
    Mat image = imread("path/to/jpg/image");
    Mat processed_frame = imagePreprocessing(image);
    std::vector<Rect> faces;
    //-- Detect faces
    face_cascade.detectMultiScale( processed_frame, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
    if (faces.size() > 0) cout << "SUCCESS" << endl;
}
In my eyes, both of these processes are identical (the only difference being where I acquire the original image), but the video stream version regularly detects faces, while the static method never seems to find one.
Am I missing something here?
There are a few possible reasons for that.
You save the image in a low resolution. Try saving it in the original resolution.
Lossy compression. Do you save images as .jpg files? Maybe your compression is too strong. Try saving as a BMP file (it preserves the original quality).
Format of the image. I don't know what your imagePreprocessing() method does, but you might introduce the following problem: the camera captures video in a specific format (most cameras use YUV). Typically face detection is performed on the first plane, Y. When you save the image and read it from disk as RGB, you must not run the face detection on the first plane: this would be the 'B' plane, and the blue channel stores very little information about the face. Make sure that you correctly convert the image to grayscale before you run the face detection.
Range of the image. This is a common mistake. Make sure that the dynamic range of the image is correct. Sometimes by mistake you might multiply all the values by 255, effectively turning the entire image white.
Maybe face detection on images works fine but you somehow clear the faces vector after detection. Another mistake might be that you read a different image file. For example, you save images to directory 'A' but accidentally read from directory 'B'.
If none of the above helps. Do the following debugging.
For a video frame 'i', store it in memory. Then save it to disk and read it back into memory. Now the most important part: compare the images. If they are different, that is the reason for the different face detection results. If not, further investigation is needed. I am pretty sure the images will not be identical, and that is the problem. You can see where the images differ by taking the difference between pixel values and displaying the diff image. You can also compare images with the memcmp() function, which compares two memory blocks.
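A minimal sketch of that round-trip comparison (the file name is illustrative):
// Save a captured frame, read it back, and show where the two versions differ.
cv::imwrite("frame_i.jpg", frame);
cv::Mat reloaded = cv::imread("frame_i.jpg", cv::IMREAD_COLOR);
cv::Mat diff;
cv::absdiff(frame, reloaded, diff);
std::cout << "non-zero diff values: " << cv::countNonZero(diff.reshape(1)) << std::endl;
cv::imshow("diff", diff);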
Good luck
Solved it!
Really stupid mistake. I didn't call face_cascade.load to load the Haar cascade in the static image version, but I did in the webcam version.
It's all working now.

Reducing lag during blob detection of real-time, binary b/w webcam feed using cvblobslib and opencv(c++)

I'm building a skin-detection algorithm that takes a constant, real-time feed from a webcam, converts it to a binary image (based on the skin color of the person's face), and filters out the noise by focusing only on the largest blobs (using CvBlobsLib). The output of my code, however, shows a lot of lag, and I'm not sure what to change to make it faster.
Here's (the important part of) my code:
Mat frame;
IplImage ipl, *res = new IplImage;
CBlobResult blobs;
CBlob *currentBlob;
cvNamedWindow("output");
for(;;){
    cap >> frame; // get a new frame from camera
    cvtColor(frame, lab, CV_BGR2Lab); // frame now in L*a*b*
    inRange(lab, BW_MIN, BW_MAX, bw); // frame now only shows "skin values"...BW_MIN/BW_MAX determined earlier
    ipl = bw; // IplImage header
    blobs = CBlobResult(&ipl, NULL, 0);
    blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10000);
    res = cvCreateImage(cvGetSize(&ipl), IPL_DEPTH_8U, 3);
    cvMerge(&ipl, &ipl, &ipl, NULL, res);
    cvShowImage("output", res);
    if(waitKey(5) >= 0) break;
}
cvDestroyWindow("output");
I convert Mat to IplImage because CvBlobsLib only works with the IplImage type.
Does anyone see a way that I could make this faster? I've just recently heard other blob detection libraries do a better job with real-time video, but I'd be interested to see if there's something I'm simply overlooking in my code.
You can decrease the resolution of the camera capture using the set method:
cap.set(CV_CAP_PROP_FRAME_WIDTH, width);
and
cap.set(CV_CAP_PROP_FRAME_HEIGHT, height);
If your default capture resolution is too high, this can increase the detection speed considerably.
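A minimal sketch, assuming a 320x240 target resolution (pick whatever still works for your blob sizes):
cv::VideoCapture cap(0);
// Request a lower capture resolution before entering the processing loop.
cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);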