Face detection with OpenCV + Qt + cv::Mat - C++

I'm trying to migrate my old OpenCV face detection code from the IplImage structure to the Mat class.
The issue is that the code only works when there is no Qt code involved.
Here is the plain C++ code in CodeLite:
void detectFace(Mat& img)
{
    std::vector<Rect> faces;
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    equalizeHist(gray, gray);

    // Face detect
    face_cascade.detectMultiScale(gray, faces, 1.1, 2, 1 | CV_HAAR_SCALE_IMAGE, Size(100, 100));

    for (unsigned int i = 0; i < faces.size(); i++)
    {
        // Face detect rectangle
        Point upperLeftFace(faces[i].x, faces[i].y);
        Point lowerRightFace(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
        rectangle(/*Matrix*/img, /*Rect*/upperLeftFace, /*Rect*/lowerRightFace,
                  /*BGR color*/Scalar(255, 255, 0), /*Line thickness*/1, /*Line type*/8);
    }

    // Show window
    namedWindow("Alexey Eye", CV_WINDOW_AUTOSIZE);
    imshow("Alexey Eye", img);
}
And here is the code when I use Qt:
void Neski::m_faceDetect()
{
    *att_CamCapture >> att_MatCamera;

    std::vector<Rect> faces;
    Mat grayMat;
    cvtColor(att_MatCamera, grayMat, CV_BGR2GRAY);
    equalizeHist(grayMat, grayMat);

    // Face detect
    face_cascade.detectMultiScale(grayMat, faces, 1.1, 2, 1 | CV_HAAR_SCALE_IMAGE, Size(150, 150));

    for (unsigned int i = 0; i < faces.size(); i++)
    {
        // Face detect rectangle
        Point upperLeftFace(faces[i].x, faces[i].y);
        Point lowerRightFace(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
        rectangle(/*Matrix*/att_MatCamera, /*Rect*/upperLeftFace, /*Rect*/lowerRightFace,
                  /*BGR color*/Scalar(255, 255, 0), /*Line thickness*/1, /*Line type*/8);
    }

    cvtColor(att_MatCamera, att_MatCamera, CV_BGR2RGB);
    QImage att_QImageCamera((uchar*) att_MatCamera.data, att_MatCamera.cols, att_MatCamera.rows,
                            att_MatCamera.step, QImage::Format_RGB888);
    *att_PixImageCamera = QPixmap::fromImage(att_QImageCamera.scaled(640, 480), Qt::AutoColor);
    att_ui->lab_image->setPixmap(*att_PixImageCamera);
}
Both versions are almost the same, but I'm lost as to why there is no face detection when I launch the program. It does show the video from the webcam, but there is no face-detect rectangle.
Does anyone have any ideas?

I got it working after hours of sleep and fresh ideas.
It was a silly mistake: I forgot to load the cascade used to detect the face.
That's why there was no error and no rectangle.
face_cascade.load(face_cascade_name);
Just put it before the line:
face_cascade.detectMultiScale(grayMat, faces, 1.1, 2, 1 | CV_HAAR.......
And it works now.
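For anyone hitting the same symptom, a minimal sketch of loading the cascade with an error check before calling detectMultiScale (the file name and path are examples only; point it at wherever your cascade XML lives):
// Example path only: adjust to your own haarcascade_frontalface_alt2.xml
cv::CascadeClassifier face_cascade;
if (!face_cascade.load("haarcascade_frontalface_alt2.xml"))
{
    std::cerr << "Error: could not load face cascade" << std::endl;
    return;   // bail out instead of silently detecting nothing
}
// ...now face_cascade.detectMultiScale(...) can actually find faces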
But still, thank you for the great site.
Here is a screenshot of the final result.

std::algorithm Read access violation

I'm setting up face detection with a delayed camera feed in OpenCV/C++. How can I do it without errors?
For detection I'm using CascadeClassifier::detectMultiScale.
void detectAndDraw(Mat& img, CascadeClassifier& cascade, double scale)
{
    vector<Rect> faces;
    Mat gray;
    cvtColor(img, gray, COLOR_BGR2GRAY); // Convert to grayscale
    // Resize the grayscale image
    equalizeHist(gray, gray);

    // Detect faces of different sizes using the cascade classifier
    cascade.detectMultiScale(gray, faces);

    // Draw circles around the faces
    for (int i = 0; i <= faces.size(); i++) {
        // and cout of x, y, width, height
    }
}
I expected it to print the details, but I get a read access violation error inside <algorithm>.
Looks like you have an off-by-one error in this loop:
for (int i = 0; i <= faces.size(); i++) {
    ...
}
That should probably be a < rather than <=, since otherwise on the last iteration your value of i will be out of bounds.
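A minimal corrected version of the loop (the printout in the body is an assumption based on the question's comment about cout of x, y, width, height):
for (size_t i = 0; i < faces.size(); i++)   // '<' keeps i inside the vector bounds
{
    const Rect& face = faces[i];
    cout << face.x << " " << face.y << " " << face.width << " " << face.height << endl;
}
Using size_t for the index also avoids the signed/unsigned comparison warning.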

How do I store the captured image in OpenCV (saving the picture to the computer)?

I am still quite new to OpenCV and C++ programming in general. I am working on a project that stores images from my webcam. I was able to display the camera image and detect faces, but I don't know how to save the images.
What should I do so that the faces my webcam detects get captured and stored on my computer?
Using my code below, how can I edit it to:
capture an image from the live cam 5 seconds after detecting a face
save the images to a folder in JPG format
Thanks so much for any help you can provide!
My code:
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Capture from web camera init
    VideoCapture cap(0);
    cap.open(0);

    Mat img;

    // Initialize the built-in Haar cascade frontal face detection
    // Below, mention the path of where your haarcascade_frontalface_alt2.xml file is located
    CascadeClassifier face_cascade;
    face_cascade.load("C:/OpenCV/sources/data/haarcascades/haarcascade_frontalface_alt2.xml");
    // I tried changing this line to match my folder on the C drive

    for (;;)
    {
        // Image from camera to Mat
        cap >> img;

        // Just resize the input image if you want
        resize(img, img, Size(1000, 640));

        // Container of faces
        vector<Rect> faces;

        // Detect faces
        face_cascade.detectMultiScale(img, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(140, 140));
        // Error message appears here

        // Show the results
        // Draw circles on the detected faces
        for (int i = 0; i < faces.size(); i++)
        {
            Point center(faces[i].x + faces[i].width * 0.5, faces[i].y + faces[i].height * 0.5);
            ellipse(img, center, Size(faces[i].width * 0.5, faces[i].height * 0.5), 0, 0, 360,
                    Scalar(255, 0, 255), 4, 8, 0);
        }

        // To draw rectangles around detected faces
        /* for (unsigned i = 0; i < faces.size(); i++)
            rectangle(img, faces[i], Scalar(255, 0, 0), 2, 1); */

        imshow("wooohooo", img);
        int key2 = waitKey(20);
    }
    return 0;
}
Answering this so we can close the question:
The function to use to save an image in C++ OpenCV is cv::imwrite("image_name.jpg", img).
Add this before or after your cv::imshow() call if you want to save the image after every detection.
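For the timed capture the question asks about, here is a rough sketch built on the loop above, assuming std::chrono (needs <chrono>) for the 5-second delay; the "captures" folder name is only an example and must already exist:
// Before the for (;;) loop:
auto firstSeen = std::chrono::steady_clock::time_point{};
bool saved = false;

// Inside the loop, after detectMultiScale(...):
if (!faces.empty())
{
    if (firstSeen == std::chrono::steady_clock::time_point{})
        firstSeen = std::chrono::steady_clock::now();        // first frame with a face

    if (!saved && std::chrono::steady_clock::now() - firstSeen >= std::chrono::seconds(5))
    {
        cv::imwrite("captures/face.jpg", img);                // whole frame as JPG
        // or save just the face region: cv::imwrite("captures/face.jpg", img(faces[0]));
        saved = true;
    }
}
else
{
    firstSeen = std::chrono::steady_clock::time_point{};      // reset if the face disappears
}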

Error extracting the pixels inside a triangle from an image

I'm new to image processing and development. I need to take the pixels inside a triangle from the image. In order to do it I used the following code. Unfortunately I obtain unwanted black pixels. To get rid of that problem I tried to remove the background (0) pixels by giving them an alpha value (transparent background), but it gives the following error. Any help is appreciated.
My code:
Mat img = cv::imread("/home/fabio/code/lena.jpg", cv::IMREAD_GRAYSCALE);
Mat alpha(img.size(), CV_8UC1, Scalar(0));
//triangle definition (example points)
vector<Point> points;
points.push_back(Point(200, 70));
points.push_back(Point(60, 150));
points.push_back(Point(500, 500));
//apply triangle to mask
fillConvexPoly(alpha, points, Scalar(255));
cv::Mat finalImage = cv::Mat::zeros(img.size(), img.type());
img.copyTo(finalImage, alpha);
imshow("image", finalImage);
Mat dst;
Mat rgb[1];
split(finalImage, rgb);
Mat rgba[2] = { finalImage, alpha };
merge(rgba, 2, dst);
imshow("dst", dst);
Error: OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\utils.cpp, line 611
Use this instead of your last block:
std::vector<cv::Mat> channels;
cv::split(finalImage, channels);
if (channels.size() == 0)
{
    std::cout << "unexpected error" << std::endl;
    return 1;
}

// Fill up to reach 3 channels
while (channels.size() < 3)
{
    channels.push_back(channels[0]);
}

// Add alpha channel
channels.push_back(alpha);
cv::merge(channels, dst);
I didn't test it, but this should be what you want.
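One follow-up worth noting: cv::imshow does not honor the alpha channel, so to actually check the transparency you would typically write the 4-channel result to a PNG (JPEG would drop the alpha):
// PNG preserves the alpha channel created above
cv::imwrite("triangle_rgba.png", dst);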

Drawing a rectangle around the difference area

I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and getting the output in a separate Mat. The difference method used is absdiff. Here is the code.
char imgName[15];
Mat img1 = imread(image_path1, COLOR_BGR2GRAY);
Mat img2 = imread(image_path2, COLOR_BGR2GRAY);
/*cvtColor(img1, img1, CV_BGR2GRAY);
cvtColor(img2, img2, CV_BGR2GRAY);*/

cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);

cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC3);

float threshold = 30.0f;
float dist;
for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);
        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}

sprintf(imgName, "D:/outputer/d.jpg");
imwrite(imgName, diffImage);
I want to bound the difference region with a rectangle. findContours draws too many contours, but I only need a particular portion. My diff image is shown below.
I want to draw a single rectangle around all five dials.
Please point me in the right direction.
Regards,
I would search for the highest value of the i index giving a non-black pixel; that's the right border.
The lowest non-black i is the left border. Similarly for j.
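A minimal sketch of that scan, assuming foregroundMask from the question is a single-channel CV_8UC1 mask (variable names reused from the question's code; the color and thickness in the final rectangle call are arbitrary):
int minX = foregroundMask.cols, maxX = -1;
int minY = foregroundMask.rows, maxY = -1;
for (int j = 0; j < foregroundMask.rows; ++j)
{
    for (int i = 0; i < foregroundMask.cols; ++i)
    {
        if (foregroundMask.at<unsigned char>(j, i) > 0)   // non-black pixel
        {
            if (i < minX) minX = i;
            if (i > maxX) maxX = i;
            if (j < minY) minY = j;
            if (j > maxY) maxY = j;
        }
    }
}
if (maxX >= 0)   // at least one foreground pixel was found
    cv::rectangle(diffImage, cv::Point(minX, minY), cv::Point(maxX, maxY), cv::Scalar(255), 2);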
You can:
Binarize the image with a threshold; the background will be 0.
Use findNonZero to retrieve all points that are not 0, i.e. all foreground points.
Use boundingRect on the retrieved points.
Code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Binarize image
    Mat1b bin = img > 70;

    // Find non-black points
    std::vector<Point> points;
    findNonZero(bin, points);

    // Get bounding rect
    Rect box = boundingRect(points);

    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0, 255, 0), 3);

    // Show
    imshow("Result", out);
    waitKey();

    return 0;
}
Find contours; it will output a set of contours as std::vector<std::vector<cv::Point>>. Let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};
for (const auto& contour : contours)
{
    points_count += contour.size();
    all_points.reserve(points_count);
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}
auto bounding_rectangle = cv::boundingRect(all_points);
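To actually draw the resulting box on the image, something like the following would do (diffImage is the question's difference image; the color and thickness are arbitrary):
cv::rectangle(diffImage, bounding_rectangle, cv::Scalar(255), 2);   // white box, 2 px thick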

Slow webcam stream when running "detectMultiScale"

I'm new to OpenCV. I'm building a sample console face-detector application, using a Haar cascade to detect faces from the webcam.
I wrote the following code:
int main(int, char**)
{
    CascadeClassifier face_cascade;
    face_cascade.load("haarcascade_frontalface_alt2.xml");

    vector<Rect> faces;
    Mat frame_gray;
    const double scale_factor = 1.1;
    const int min_neighbours = 2;
    const int flags = 0 | CV_HAAR_SCALE_IMAGE;

    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat frame;
    for (;;)
    {
        cap >> frame;    // grab the next frame from the webcam
        cvtColor(frame, frame_gray, CV_BGR2GRAY);
        equalizeHist(frame_gray, frame_gray);
        face_cascade.detectMultiScale(frame_gray, faces, scale_factor, min_neighbours, flags, Size(30, 30));

        if (faces.size() == 0)
        {
            cout << "No face detected" << endl;
        }
        else
        {
            for (unsigned int i = 0; i < faces.size(); i++)
            {
                Point pt1(faces[i].x, faces[i].y);
                Point pt2(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
                rectangle(frame, pt1, pt2, Scalar(0, 255, 0), 1.5, 8, 0);
            }
        }

        imshow("frame", frame);   // display the annotated frame
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
I tested it and the webcam stream is slow. I imagine that could be because of the image resolution (640x480). I want to know if there is any way to keep the resolution while improving the per-frame speed of the detection.
Thanks!
You can:
Increase the minimal face size from Size(30, 30) to Size(50, 50) (it improves performance by 2-3x).
Change the value of scale_factor from 1.1 to 1.2 (it improves performance by about 2x).
Use an LBP detector instead of the Haar detector (it is 2-3x faster); see the sketch below.
Check your compiler options (maybe you are building in Debug mode).
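A rough sketch of the LBP variant, reusing the names from the question's code (lbpcascade_frontalface.xml ships with OpenCV under data/lbpcascades; the path is an example, adjust it to your install):
CascadeClassifier face_cascade;
if (!face_cascade.load("lbpcascade_frontalface.xml"))   // LBP cascade instead of Haar
    return -1;

// A bigger minimum size and a larger scale step both reduce the work per frame
face_cascade.detectMultiScale(frame_gray, faces, /*scaleFactor*/1.2,
                              /*minNeighbors*/2, /*flags*/0, Size(50, 50));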