C++/OpenCV Segmentation Fault: 11

I am attempting to do a small project with C++ and OpenCV (I am fairly new to each). I studied the face detection code provided by OpenCV here to get a basic understanding. I was able to compile and run this face detection code with no issues. I then attempted to modify that code to perform full body detection as follows:
#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void detectAndDisplay( Mat frame );
/** Global variables */
String body_cascade_name = "haarcascade_fullbody.xml";
CascadeClassifier body_cascade;
String window_name = "Capture - Body detection";
/** @function main */
int main( void )
{
VideoCapture capture;
Mat frame;
//-- 1. Load the cascades
if( !body_cascade.load( body_cascade_name ) ){ printf("--(!)Error loading body cascade\n"); return -1; };
//-- 2. Read the video stream
capture.open( -1 );
if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
while ( capture.read(frame) )
{
if( frame.empty() )
{
printf(" --(!) No captured frame -- Break!");
break;
}
//-- 3. Apply the classifier to the frame
detectAndDisplay( frame );
int c = waitKey(10);
if( (char)c == 27 ) { break; } // escape
}
return 0;
}
/** @function detectAndDisplay */
void detectAndDisplay( Mat frame )
{
std::vector<Rect> bodies;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect bodies
body_cascade.detectMultiScale( frame_gray, bodies, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(45, 80) );
//body_cascade.detectMultiScale( frame_gray, bodies, 1.1, 3, 3, cv::Size(5,13), cv::Size(45,80));
for( size_t i = 0; i < bodies.size(); i++ )
{
Point center( bodies[i].x + bodies[i].width/2, bodies[i].y + bodies[i].height/2 );
rectangle( frame, bodies[i], Scalar( 255, 0, 255), 4, 8, 0);
//ellipse ( frame, center, Size( bodies[i].width/2, bodies[i].height/2), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
}
//-- Show what you got
imshow( window_name, frame );
}
Mostly, I just renamed variables from "face" to "body", removed the eye cascade portions and changed the "haarcascade_xxxx".
I am able to compile this using:
g++ bodyDetect.cpp -o bodyDetect `pkg-config --cflags --libs opencv`
but when I attempt to run it I just get a "Segmentation Fault: 11"
I have been able to determine that it is due to "haarcascade_fullbody.xml" being assigned to body_cascade_name. If I change this value back to "haarcascade_frontalface_alt.xml", it runs just fine as face detection software again. I do have the xml files copied to the same directory as the .cpp files.
Also, the upper body cascade does not work either, but the profile face cascade works with the same code (I haven't tested all of the cascades).
I have OpenCV 3.0.0 installed (checked using pkg-config --modversion opencv) and am running Mac OS X 10.9.2, if this is relevant. I did have trouble compiling/running slightly different face detection code that appeared to be for OpenCV 2.4.?.
My question is, why would the "haarcascade_fullbody.xml" file cause a segmentation fault, while other cascades do not? Is there a way to correct this issue?
UPDATE: When I run the program in gdb I get the following error:
Program received signal SIGSEGV, Segmentation fault.
0x0000000100d0b707 in ?? ()
I believe this indicates that the seg fault is not within one of my own functions. I still suspect haarcascade_fullbody.xml to be the issue; when this is set to a different cascade file I do not receive the seg fault.
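A stripped-down test that separates loading the cascade from the actual detection call would look something like this (a minimal sketch; "test.jpg" is just a placeholder image name, not a file from the project). If it crashes here, with no camera involved, the cascade file itself (or how OpenCV 3.0.0 parses it) is the likely culprit:
#include "opencv2/objdetect.hpp"
#include "opencv2/imgcodecs.hpp"
#include <stdio.h>
#include <vector>
using namespace cv;
int main( void )
{
    CascadeClassifier cascade;
    // Step 1: does the cascade even load?
    if( !cascade.load( "haarcascade_fullbody.xml" ) ) { printf("load failed\n"); return -1; }
    printf( "loaded OK, old-format cascade: %d\n", (int)cascade.isOldFormatCascade() );
    // Step 2: does a single detectMultiScale call crash on a plain image?
    Mat gray = imread( "test.jpg", IMREAD_GRAYSCALE );   // placeholder image name
    if( gray.empty() ) { printf("no test image\n"); return -1; }
    std::vector<Rect> bodies;
    cascade.detectMultiScale( gray, bodies, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(45, 80) );
    printf( "detections: %d\n", (int)bodies.size() );
    return 0;
}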

Related

Opencv 3.0 Error loading face cascade

Hi, I have set up the OpenCV library. It works for some code, such as capturing video or playing video from a file, but not when I implement code for face detection, object detection, or motion detection. Currently I have implemented this program.
#include "opencv2/objdetect.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/opencv.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void detectAndDisplay(Mat frame);
/** Global variables */
String face_cascade_name = "haarcascade_frontalface_alt.xml";
String eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
String window_name = "Capture - Face detection";
/** @function main */
int main(void)
{
VideoCapture capture;
Mat frame;
//-- 1. Load the cascades
if (!face_cascade.load(face_cascade_name)){ printf("--(!)Error loading face cascade\n"); return -1; };
if (!eyes_cascade.load(eyes_cascade_name)){ printf("--(!)Error loading eyes cascade\n"); return -1; };
//-- 2. Read the video stream
capture.open(-1);
if (!capture.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }
while (capture.read(frame))
{
if (frame.empty())
{
printf(" --(!) No captured frame -- Break!");
break;
}
//-- 3. Apply the classifier to the frame
detectAndDisplay(frame);
int c = waitKey(10);
if ((char)c == 27) { break; } // escape
}
return 0;
}
/** @function detectAndDisplay */
void detectAndDisplay(Mat frame)
{
std::vector<Rect> faces;
Mat frame_gray;
cvtColor(frame, frame_gray, COLOR_BGR2GRAY);
equalizeHist(frame_gray, frame_gray);
//-- Detect faces
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
for (size_t i = 0; i < faces.size(); i++)
{
Point center(faces[i].x + faces[i].width / 2, faces[i].y + faces[i].height / 2);
ellipse(frame, center, Size(faces[i].width / 2, faces[i].height / 2), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
Mat faceROI = frame_gray(faces[i]);
std::vector<Rect> eyes;
//-- In each face, detect eyes
eyes_cascade.detectMultiScale(faceROI, eyes, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
for (size_t j = 0; j < eyes.size(); j++)
{
Point eye_center(faces[i].x + eyes[j].x + eyes[j].width / 2, faces[i].y + eyes[j].y + eyes[j].height / 2);
int radius = cvRound((eyes[j].width + eyes[j].height)*0.25);
circle(frame, eye_center, radius, Scalar(255, 0, 0), 4, 8, 0);
}
}
//-- Show what you got
imshow(window_name, frame);
}
When I try to debug it, it gives me the error The program '[7912] ConsoleApplication1.exe' has exited with code -1 (0xffffffff).
When I start it without debugging, it gives me the error Error loading face cascade.
I also found one warning message while debugging:
'C:\Users\rushikesh\Documents\Visual Studio 2013\Projects\ConsoleApplication1\x64\Debug\opencv_world300d.dll'. Cannot find or open the PDB file.
but I checked, and opencv_world300d.dll is there.
Some OpenCV 3.0.0 programs are running, so I guess I have configured it right, but a few programs, especially those that track objects or motion or detect faces, are not running and give me the same error.
Edit
After trying the suggestion of @srslynow I got the following error.
Your program cannot find the .xml files. Note that the default working directory when running your program from the Visual Studio IDE is NOT where your .exe is. It is the root directory of your project.
Possible solutions:
Move the xml files to your project root
Change the working directory (under project > properties > debugging) to $(SolutionDir)$(Platform)\$(Configuration)\
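Either way, a quick way to confirm whether the cascade file is reachable from the directory the program actually runs in is to test for it before loading (just a sketch; it only checks that the file can be opened, and needs #include <fstream>):
std::ifstream cascadeFile( face_cascade_name.c_str() );
if( !cascadeFile.good() )
    printf( "Cannot find %s relative to the current working directory\n", face_cascade_name.c_str() );
While debugging you can also hard-code an absolute path, e.g. "C:/opencv/sources/data/haarcascades/haarcascade_frontalface_alt.xml" (a hypothetical location; adjust it to wherever your OpenCV data folder actually is).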
edit:
Capturing the default video device is done by using:
capture.open(0);
This might be the cause of the program exiting with a -1 status. I'm assuming you do actually have a webcam on your machine?
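If you are not sure which device index is correct, a small fallback like this (just a sketch) makes the failure explicit instead of silently exiting:
capture.open(0);                 // default camera
if( !capture.isOpened() )
    capture.open(-1);            // let OpenCV pick any available device
if( !capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }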
One suggestion related to haar cascades. The best detector file I found is this one:
haarcascade_frontalface_alt2.xml
I did thousands of tests and this was the best file.
If you are using Visual Studio, the problem could also be a mismatch between the compiler version used for your application and the one used for the OpenCV binaries.
Example: if you are using VS 2013, which corresponds to the "vc120" compiler toolset, but you are linking OpenCV binaries built with Visual Studio 2010 ("vc100"), you could get that error.
In this case go to the project properties: Project -> Properties -> select the General section on the left under "Configuration Properties", and on the right change the "Platform Toolset" property to "Visual Studio 2010 (v100)".
This should work!
Maybe you should use an absolute path to the XML file.

C++ OpenCV Reading HaarCascades Slowing Down Computer

I'm writing a program using C++ and OpenCV. It's actually my first so what I'm asking is probably something very basic I've overlooked. Much of it is copied - not copy+pasted mind you, but copied by hand, going line by line, understanding what each line was doing as I wrote it - from some of OpenCV's tutorials. I'll paste the code below.
The problem I'm encountering is that as soon as the webcam starts trying to implement facial recognition, everything just SLOWS. DOWN. As I understand it, it's because the .exe is trying to read from two MASSIVE .xml files every frame update, but I don't have any idea how to fix it. It was worse before I constrained the height, width, and framerate of the video.
If anyone has any ideas at this point, I'd love to hear them. I'm very new to software programming - until now I've mostly done web development, so I'm not used to worrying about system memory and other factors.
Thanks in advance!
EDIT: Here are my system specs: Mac, OSX 10.9.4, 2.5 GHz Intel Core i5, 4 GB 1600 MHz DDR3 RAM.
#include "opencv2/objdetect.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void detectAndDisplay( Mat frame );
/** Global variables */
String face_cascade_name = "haarcascade_frontalface_alt.xml";
String eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
String window_name = "Capture - Face detection";
/** @function main */
int main( void )
{
cv::VideoCapture capture;
Mat frame;
//-- 1. Load the cascades
if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; };
if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading eyes cascade\n"); return -1; };
//-- 2. Read the video stream
capture.open( -1 );
if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
capture.set(CV_CAP_PROP_FRAME_WIDTH,640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT,480);
capture.set(CV_CAP_PROP_FPS, 15);
while ( capture.read(frame) )
{
if( frame.empty() )
{
printf(" --(!) No captured frame -- Break!");
break;
}
//-- 3. Apply the classifier to the frame
detectAndDisplay( frame );
int c = waitKey(10);
if( (char)c == 27 ) { break; } // escape
}
return 0;
}
/** @function detectAndDisplay */
void detectAndDisplay( Mat frame )
{
std::vector<Rect> faces;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect faces
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
for ( size_t i = 0; i < faces.size(); i++ )
{
Point center( faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2 );
ellipse( frame, center, Size( faces[i].width/2, faces[i].height/2 ), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
Mat faceROI = frame_gray( faces[i] );
std::vector<Rect> eyes;
//-- In each face, detect eyes
eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CASCADE_SCALE_IMAGE, Size(30, 30) );
for ( size_t j = 0; j < eyes.size(); j++ )
{
Point eye_center( faces[i].x + eyes[j].x + eyes[j].width/2, faces[i].y + eyes[j].y + eyes[j].height/2 );
int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
circle( frame, eye_center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 );
}
}
//-- Show what you got
imshow( window_name, frame );
}
A quick solution would be to replace:
eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CASCADE_SCALE_IMAGE, Size(30, 30) );
by
eyes_cascade.detectMultiScale( faceROI, eyes, 1.3, 2, 0 |CASCADE_SCALE_IMAGE, Size(60, 60), Size(350, 350) );
1.3 is the scale factor, Size(60, 60) the minimum window size and Size(350, 350) the maximum one. It basically means that it will start searching for 60x60 faces, then increase the window size by oldWindowSize*1.3 until it reaches 350x350. It is assumed there that your faces are at minimum 60x60 and at maximum 350x350.
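To see roughly how much work that saves, a small stand-alone sketch like this prints the window sizes a given scale factor will sweep over that range (the numbers are only illustrative):
#include <stdio.h>
int main( void )
{
    double scaleFactor = 1.3;                  // compare with 1.1
    double minSize = 60.0, maxSize = 350.0;    // the range suggested above
    int steps = 0;
    for( double w = minSize; w <= maxSize; w *= scaleFactor )
    {
        printf( "scale %d: %.0f x %.0f\n", steps, w, w );
        steps++;
    }
    printf( "total scales: %d\n", steps );     // about 7 with 1.3, versus about 19 with 1.1
    return 0;
}
Fewer scales means fewer passes over the frame, which is where most of the detection time goes.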
You can tune it even more depending on what you want. The minSize will have the most impact on performance, as will the scale factor (but 1.3 is already high). The maxSize will have less impact.
After this update, your program should be roughly twice as fast, or CPU usage should drop by about half. However, I am still surprised that you have performance problems with your current settings and your computer...
Give us feedback if it works.

Contours opencv c++ Assertion failed (scn == 3 || scn == 4) [duplicate]

I read a lot of other solutions but I'm still confused about what I should do with mine...
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main(int argc, const char** argv)
{
//create the cascade classifier object used for the face detection
CascadeClassifier face_cascade;
//use the haarcascade_frontalface_alt.xml library
face_cascade.load("haarcascade_frontalface_alt.xml");
//setup video capture device and link it to the first capture device
VideoCapture captureDevice;
captureDevice.open(0);
//setup image files used in the capture process
Mat captureFrame;
Mat grayscaleFrame;
//create a window to present the results
namedWindow("outputCapture", 1);
//create a loop to capture and find faces
while (true)
{
//capture a new image frame
captureDevice >> captureFrame;
//convert captured image to gray scale and equalize
cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
equalizeHist(grayscaleFrame, grayscaleFrame);
//create a vector array to store the face found
std::vector<Rect> faces;
//find faces and store them in the vector array
face_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, Size(30, 30));
//draw a rectangle for all found faces in the vector array on the original image
for (int i = 0; i < faces.size(); i++)
{
Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
Point pt2(faces[i].x, faces[i].y);
rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
}
//print the output
imshow("outputCapture", captureFrame);
//pause for 33ms
waitKey(33);
}
return 0;
}
This is the error
OpenCV Error: Assertion failed in unknown function
It seems the error happens after "captureDevice >> captureFrame;". Please guide me; this is supposed to take an image from the camera.
It seems like VideoCapture can't grab a frame from your camera. Add this code to check the result of frame grabbing:
//create a loop to capture and find faces
while (true)
{
//capture a new image frame
captureDevice >> captureFrame;
if (captureFrame.empty())
{
cout << "Failed to grab frame" << endl;
break;
}
...
If it is a problem with VideoCapture, check that you have installed the drivers for your camera.
Okay, I think I know what happened. I tried running the program and it worked perfectly.
You have probably not linked the required DLLs. Make sure (in case you are using Windows) that your opencv/bin directory is included in your PATH environment variable. This is my CMakeLists.txt file, to make things easier for you:
cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
project(SO_example)
FIND_PACKAGE(OpenCV REQUIRED)
SET (SOURCES
#VTKtoPCL.h
main.cpp
)
add_executable(so_example ${SOURCES})
target_link_libraries(so_example ${OpenCV_LIBS})
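With that file next to main.cpp, a typical out-of-source build and run would be something like this (assuming a Unix-like shell; on Windows you would let CMake generate a Visual Studio solution instead):
mkdir build && cd build
cmake ..
make
./so_example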

How can we detect faces more accurately?

I am doing face detection on video, so I wrote a small piece of code to detect faces.
#include<opencv2/objdetect/objdetect.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
#include<cv.h>
using namespace std;
using namespace cv;
CvCapture *capture=cvCaptureFromFile("foot.mp4");
double min_face_size=30;
double max_face_size=400;
Mat detectFace(Mat src);
int main( )
{
namedWindow( "window1", 1 );
while(1)
{
Mat frame,frame1;
frame1=cvQueryFrame(capture);;
frame=detectFace(frame1);
imshow( "window1", frame );
if(waitKey(1) == 'c') break;
}
waitKey(0);
return 0;
}
Mat detectFace(Mat image)
{
CascadeClassifier face_cascade( "haarcascade_frontalface_alt2.xml" );
CvPoint ul,lr;
std::vector<Rect> faces;
face_cascade.detectMultiScale( image, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(min_face_size, min_face_size),Size(max_face_size, max_face_size) );
for( int i = 0; i < faces.size(); i++ )
{
min_face_size = faces[i].width*0.8;
max_face_size = faces[i].width*1.2;
ul.x=faces[i].x;
ul.y=faces[i].y;
lr.x=faces[i].x + faces[i].width;
lr.y=faces[i].y + faces[i].height;
rectangle(image,ul,lr,CV_RGB(1,255,0),3,8,0);
}
return image;
}
I took one video for face detection which contains both small and large faces. My problem is that, using my code, it detects only the small faces and also shows some unwanted detections.
I need to detect both small and large faces in a video. How shall I do this?
Is there any problem with the scaling factor?
Please help me understand this problem.
Try increasing 'double max_face_size', which controls how large the faces you want to detect can be.
You can also increase the '2' (the minNeighbors parameter) in 'detectMultiScale()', which controls the quality of the detections.
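For example, something along these lines (the numbers are only a starting point to experiment with, not values tuned for the poster's video):
face_cascade.detectMultiScale( image, faces, 1.1, 4, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30), Size(800, 800) );
Here minNeighbors is raised from 2 to 4 to suppress the unwanted detections, and the maximum window size is raised to 800x800 so that large faces are not skipped.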

cvCaptureFromCAM program creates segmentation faults only some of the time

I've been kicking around with OpenCV 2.4.3 and a Logitech C920 camera hoping to get a primitive sort of facial recognition scheme going. Very simple, not very sophisticated.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void grabcurrentuser();
void capturecurrentuser( Mat vsrc );
/** Global Variables **/
string face_cascade_name = "haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;
int main( void ){//[main]
grabcurrentuser();
}//]/main]
void grabcurrentuser(){//[grabcurrentuser]
CvCapture* videofeed;
Mat videoframe;
//Load face cascade
if( !face_cascade.load( face_cascade_name ) ){
printf("Can't load haarcascade_frontalface_alt.xml\n");
}
//Read from source video feed for current user
videofeed = cvCaptureFromCAM( 1 );
if( videofeed ){
for(int i=0; i<10;i++){//Change depending on platform
videoframe = cvQueryFrame( videofeed );
//Debug source videofeed with display
if( !videoframe.empty() ){
imshow( "Source Video Feed", videoframe );
//Perform face detection on videoframe
capturecurrentuser( videoframe );
}else{
printf("Videoframe is empty or error!!!"); break;
}
int c = waitKey(33);//change to increase or decrease delay between frames
if( (char)c == 'q' ) { break; }
}
}
}//[/grabcurrentuser]
void capturecurrentuser( Mat vsrc ){//[capturecurrentuser]
std::vector<Rect> faces;
Mat vsrc_gray;
Mat currentuserface;
//Preprocess frame for face detection
cvtColor( vsrc, vsrc_gray, CV_BGR2GRAY );
equalizeHist( vsrc_gray, vsrc_gray );
//Find face
face_cascade.detectMultiScale( vsrc_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30,30) );
//Take face and crop out into a Mat
currentuserface = vsrc_gray( faces[0] );
//Save the mat into a jpg file on disk
imwrite( "currentuser.jpg", currentuserface );
//Show saved image in a window
imshow( "Current User Face", currentuserface );
}//[/capturecurrentuser]
The above code is the first component of this system. Its job is to accept the video feed, take 10 frames or so (hence the for loop), and run a Haar cascade on the frames to obtain a face. Once a face is acquired, it cuts that face out into a Mat and saves it as a jpg in the working directory.
It has worked so far, but it seems to be a very temperamental piece of code. It gives me the desired output most of the time (I don't intend to ask here how I can make things more accurate or precise - but feel free to tell me :D), but other times it ends in a segmentation fault. The following is an example of normal output (I've looked around and seen that the VIDIOC invalid argument messages can be ignored - again, if it's an easy fix feel free to tell me) followed by the segmentation fault.
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
init done
opengl support available
Segmentation fault (core dumped)
Can anyone tell me why some runs of this program end in a single segmentation fault, or a series of them like the above, while other runs do not? This program is designed to create output that's shunted off to another program I wrote, so I can't have it seizing up on me like this.
Much appreciated!
Your problem is in the following line:
currentuserface = vsrc_gray( faces[0] );
In my experience, segmentation faults arise when you are accessing something that does not exist.
The program works fine if a face is detected, because faces[0] contains data. However, when no face is detected (for example, if you cover the camera), no rectangle is stored in faces[0], and thus the error occurs.
Try initialising it like this, so that imshow and imwrite still work when nothing is detected:
cv::Mat currentuserface = cv::Mat::zeros(vsrc.size(), CV_8UC1);
and then check if faces is empty before you initialize currentuserface with it:
if( !faces.empty() )
currentuserface = vsrc_gray( faces[0] );
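Put together, the end of capturecurrentuser() would then look roughly like this (a sketch of the fix, not the poster's exact code):
cv::Mat currentuserface = cv::Mat::zeros( vsrc.size(), CV_8UC1 );
if( !faces.empty() )
    currentuserface = vsrc_gray( faces[0] );   // only index into faces when a face was actually found
imwrite( "currentuser.jpg", currentuserface );
imshow( "Current User Face", currentuserface );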