cvCaptureFromCAM program creates segmentation faults only some of the time - c++

I've been kicking around with OpenCV 2.4.3 and a Logitech C920 camera hoping to get a primitive sort of facial recognition scheme going. Very simple, not very sophisticated.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/** Function Headers */
void grabcurrentuser();
void capturecurrentuser( Mat vsrc );
/** Global Variables **/
string face_cascade_name = "haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;
int main( void ){//[main]
    grabcurrentuser();
}//[/main]
void grabcurrentuser(){//[grabcurrentuser]
    CvCapture* videofeed;
    Mat videoframe;
    //Load face cascade
    if( !face_cascade.load( face_cascade_name ) ){
        printf("Can't load haarcascade_frontalface_alt.xml\n");
    }
    //Read from source video feed for current user
    videofeed = cvCaptureFromCAM( 1 );
    if( videofeed ){
        for(int i = 0; i < 10; i++){//Change depending on platform
            videoframe = cvQueryFrame( videofeed );
            //Debug source videofeed with display
            if( !videoframe.empty() ){
                imshow( "Source Video Feed", videoframe );
                //Perform face detection on videoframe
                capturecurrentuser( videoframe );
            }else{
                printf("Videoframe is empty or error!!!"); break;
            }
            int c = waitKey(33);//change to increase or decrease delay between frames
            if( (char)c == 'q' ) { break; }
        }
    }
}//[/grabcurrentuser]
void capturecurrentuser( Mat vsrc ){//[capturecurrentuser]
    std::vector<Rect> faces;
    Mat vsrc_gray;
    Mat currentuserface;
    //Preprocess frame for face detection
    cvtColor( vsrc, vsrc_gray, CV_BGR2GRAY );
    equalizeHist( vsrc_gray, vsrc_gray );
    //Find face
    face_cascade.detectMultiScale( vsrc_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30,30) );
    //Take face and crop out into a Mat
    currentuserface = vsrc_gray( faces[0] );
    //Save the mat into a jpg file on disk
    imwrite( "currentuser.jpg", currentuserface );
    //Show saved image in a window
    imshow( "Current User Face", currentuserface );
}//[/capturecurrentuser]
The above code is the first component of this system. Its job is to accept the video feed, take 10 frames or so (hence the for loop), and run a Haar cascade on the frames to obtain a face. Once a face is acquired, it crops that face out into a Mat and saves it as a JPG in the working directory.
It has worked so far, but it seems to be a very temperamental piece of code. It gives me the desired output most of the time (I don't intend to ask here how I can make things more accurate or precise - but feel free to tell me :D), but other times it ends in a segmentation fault. The following is an example of normal output ending in the segmentation fault (I've looked around and seen that the VIDIOC invalid argument messages can be ignored - again, if it's an easy fix, feel free to tell me).
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
init done
opengl support available
Segmentation fault (core dumped)
Can anyone tell me why some runs of this program end in a single segmentation fault, or a series of them like the above, while other runs don't? This program is designed to produce output that's handed off to another program I wrote, so I can't have it seizing up on me like this.
Much appreciated!

Your problem is in the following line:
currentuserface = vsrc_gray( faces[0] );
In my experience, segmentation faults arise when you access something that does not exist.
The program works fine when a face is detected, because faces[0] then contains data. However, when no face is detected (cover the camera to reproduce this), no rectangle is stored in faces, so indexing faces[0] causes the error.
Try initialising like this, so that imshow and imwrite work even when nothing is detected:
cv::Mat currentuserface = cv::Mat::zeros(vsrc.size(), CV_8UC1);
and then check if faces is empty before you initialize currentuserface with it:
if( !faces.empty() )
    currentuserface = vsrc_gray( faces[0] );
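Putting both changes together, a guarded version of capturecurrentuser might look like this (a sketch against the OpenCV 2.4 code above; the zero-filled fallback Mat is simply a black placeholder so the subsequent imwrite and imshow calls always receive valid data):

void capturecurrentuser( Mat vsrc ){
    std::vector<Rect> faces;
    Mat vsrc_gray;
    //Fallback: a black placeholder image in case no face is found
    Mat currentuserface = Mat::zeros( vsrc.size(), CV_8UC1 );
    //Preprocess frame for face detection
    cvtColor( vsrc, vsrc_gray, CV_BGR2GRAY );
    equalizeHist( vsrc_gray, vsrc_gray );
    face_cascade.detectMultiScale( vsrc_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30,30) );
    //Only index faces[0] when the detector actually found a face
    if( !faces.empty() )
        currentuserface = vsrc_gray( faces[0] );
    imwrite( "currentuser.jpg", currentuserface );
    imshow( "Current User Face", currentuserface );
}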

Related

Using custom kernel in opencv 2DFilter - causing crash ... convolution how?

Thought I'd try my hand at a little (auto)correlation/convolution today in OpenCV and make my own 2D filter kernel.
Following OpenCV's 2D Filter Tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
    //Loading the source image
    Mat src;
    src = imread( "1.png" );
    //Output image of the same size and the same number of channels as src.
    Mat dst;
    //Mat dst = src.clone(); //didn't help...
    //desired depth of the destination image
    //negative so dst will be the same as src.depth()
    int ddepth = -1;
    //the convolution kernel, a single-channel floating point matrix:
    Mat kernel = imread( "kernel.png" );
    kernel.convertTo(kernel, CV_32F); //<<not working
    //normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
    //cout << kernel.size() << endl; // ... gives 11, 11
    //however, the example from tutorial that does work:
    //kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);
    //default value (-1,-1) here means that the anchor is at the kernel center.
    Point anchor = Point(-1,-1);
    //value added to the filtered pixels before storing them in dst.
    double delta = 0;
    //alright, let's do this...
    filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT );
    imshow("Source", src); //<< unhandled exception here
    imshow("Kernel", kernel);
    imshow("Destination", dst);
    waitKey(1000000);
    return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking the size, and commenting out lots of blocks to see where it fails, but I haven't cracked it yet.
The image is, '1.png':
And the kernel I want 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (Autocorrelation, I think that's called?)
Direct questions:
Why the crash?
Is the crash indicating a fundamental conceptual mistake?
Or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error itself; that would help someone answer you rather than guess at why it crashes. Anyway, I have posted below the likely error and the solution for the filter2D convolution.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero,
file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0)) to read an image as grayscale. If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that could crash. The kernel size should be an odd number, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so we can help you out.
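For reference, here is a minimal sketch of the grayscale route (the file names are taken from the question; normalizing the kernel so its weights sum to 1 is an extra assumption of mine, added so the filtered result stays in a displayable range):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main() {
    // Read both images as single-channel grayscale (flag 0)
    Mat src    = imread( "1.png", 0 );
    Mat kernel = imread( "kernel.png", 0 );
    if( src.empty() || kernel.empty() ) return -1;

    // Convert the 8-bit kernel to float and normalize its weights to sum to 1
    kernel.convertTo( kernel, CV_32F, 1.0 / 255.0 );
    kernel /= sum( kernel )[0];

    // Same filter2D call as in the question, now with single-channel inputs
    Mat dst;
    filter2D( src, dst, -1, kernel, Point(-1,-1), 0, BORDER_DEFAULT );

    imshow( "Source", src );
    imshow( "Destination", dst );
    waitKey(0);
    return 0;
}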

Accessing Ximea camera and setting a predefined resolution with OpenCV shows puzzled output, due to Mat in size of camera’s default resolution

Problem description:
With the simple code below I can access my internal webcam and also change the resolution. Displaying the frame at the default resolution and at a predefined resolution (e.g. from 640x480 down to 320x240, via cap.set(CV_CAP_PROP_FRAME_WIDTH, 320) and cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240)) both work fine.
Using the same code with a slight adaptation so that it works with a Ximea camera (VideoCapture cap(CV_CAP_XIAPI)) also works at the default resolution.
It is an MU9PC_MH with a default resolution of 2592x1944, so 648x486 is the lower resolution I require.
For a manually set resolution, the displayed window/grabbed frame keeps the size of the default resolution (2592x1944) but contains only the lower pixel count of the capture, so that the upper fifth of this huge display is filled with scrambled pixels and the rest of the window is black.
This effect happens both for C++ with VideoCapture and Mat, and for C with cvCaptureFromCAM and IplImage*.
When I set the CV_CAP_PROP_XI_DOWNSAMPLING flag, I can see the output image with its pixels in the correct order, but the displayed frame still has the default high resolution and the output image is shown multiple times (the factor depends on the downsampling factor I use).
I'm not even able to force the Mat or IplImage to a predefined size, because then an assertion error or access violation occurs (Mat image(XRES, YRES, CV_8UC3, Scalar(0,0,255)) or frame = cvCreateImage(cvSize(XRES, YRES), IPL_DEPTH_8U, 3)). I've checked via frame.type() that the output is CV_8UC3.
What I want is a Ximea camera output of 648x486 shown in an (ideally) Mat of same size.
Did anyone experience the same problem?
Probably it is due to a gap in my knowledge about industrial camera configuration/development, but any help is appreciated.
Situation:
Windows 7(x64)
Visual Studio Professional 2012
OpenCV Version 2.4.10 compiled for x86 with following flags checked: WITH_XIMEA and WITH_OPENGL
Simple VS2012 Project in x86 (both release and debug) for camera streaming and displaying of camera frame in window.
Simple code snippets (not mine, copied from the OpenCV tutorials and Ximea):
C++-Style:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char **argv) {
    VideoCapture cap(0); // open the default camera
    //VideoCapture cap(CV_CAP_XIAPI);
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    cout<<" "<<cap.get(CV_CAP_PROP_FRAME_WIDTH)<<" "<<cap.get(CV_CAP_PROP_FRAME_HEIGHT)<<endl; //output: 640, 480
    cap.set(CV_CAP_PROP_FRAME_WIDTH,XRES);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT,YRES);
    cout<<" "<<cap.get(CV_CAP_PROP_FRAME_WIDTH)<<" "<<cap.get(CV_CAP_PROP_FRAME_HEIGHT)<<endl; //output: 320, 240
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("camera frame", frame);
        if(waitKey(30) == 27) //wait for 'esc' key press
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
C-Style:
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
// A Simple Camera Capture Framework
int main()
{
    CvCapture* capture = cvCaptureFromCAM( CV_CAP_XIAPI );
    if ( !capture ) {
        fprintf( stderr, "ERROR: capture is NULL \n" );
        getchar();
        return -1;
    }
    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 648 );
    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 486 );
    // Create a window in which the captured images will be presented
    cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
    // Show the image captured from the camera in the window and repeat
    while ( 1 ) {
        // Get one frame
        IplImage* frame = cvQueryFrame( capture );
        cout<<cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH); //output: 648
        cout<<cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT); //output: 486
        cout<<frame->height<<frame->width<<endl; //output: 1944, 2592
        if ( !frame ) {
            fprintf( stderr, "ERROR: frame is null...\n" );
            getchar();
            break;
        }
        cvShowImage( "mywindow", frame );
        // Do not release the frame!
        //If ESC key pressed, Key=0x10001B under OpenCV 0.9.7(linux version),
        //remove higher bits using AND operator
        if ( (cvWaitKey(10) & 255) == 27 ) break;
    }
    // Release the capture device housekeeping
    cvReleaseCapture( &capture );
    cvDestroyWindow( "mywindow" );
    return 0;
}
Thank you karlphillip for your answer. Unfortunately, you are right that this was not an ideal way. So I took yesterday's snowy weather as an opportunity and found the solution myself:
There's a mistake in the resetCvImage() method of cap_ximea.cpp. Line 207 reads:
if( (int)image.width != width || (int)image.height != height || image.frm != (XI_IMG_FORMAT)format)
It has to be:
if( (int)image.width != frame->width || (int)image.height != frame->height || image.frm != (XI_IMG_FORMAT)format)
VideoCapture::set() returns a bool to indicate the success of the method. You shouldn't let your application continue to run without checking the success/failure of this call and readjusting the size of the capture when necessary.
The fact is that some camera drivers don't accept arbitrary dimensions, and there's simply nothing you can do about it. However, you can retrieve the frame using the default size and then cv::resize() it to your needs.
It's not ideal, but it will get the job done.
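A sketch of that fallback (an illustration only; the 648x486 target size is taken from the question, and everything else is standard OpenCV 2.4 API):

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

static const int XRES = 648, YRES = 486; // target size from the question

int main() {
    VideoCapture cap(CV_CAP_XIAPI);
    if( !cap.isOpened() ) return -1;

    // set() reports whether the driver accepted the request
    bool okW = cap.set( CV_CAP_PROP_FRAME_WIDTH,  XRES );
    bool okH = cap.set( CV_CAP_PROP_FRAME_HEIGHT, YRES );

    for(;;) {
        Mat frame, shown;
        cap >> frame;
        if( frame.empty() ) break;
        // If the driver kept its default size, scale the frame down ourselves
        if( !okW || !okH || frame.cols != XRES || frame.rows != YRES )
            resize( frame, shown, Size(XRES, YRES) );
        else
            shown = frame;
        imshow( "camera frame", shown );
        if( waitKey(30) == 27 ) break;
    }
    return 0;
}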

C/C++ OpenCV video processing

Good day everyone! I'm currently working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few sample programs and test them out. The first one uses the C OpenCV interface and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;
    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");
    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new image at half size
        cvResize(frame, temp, CV_INTER_CUBIC); // Resize
        cvSaveImage("test.jpg", temp, 0); // Save this image
        cvShowImage("Capture", frame); // Display the frame
        cvReleaseImage(&temp);
        if (cvWaitKey(5000) == 27) // Escape key and wait, 5 sec per capture
            break;
    }
    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores image to hard drive nicely. But problems begin with next sample, which uses C++ OpenCV:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    Mat edges;
    //namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_RGB2XYZ);
        imshow("edges", edges);
        //imshow("edges2", frame);
        //imwrite("test1.jpg", frame);
        if(waitKey(1000) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but some problems arise when it comes to using the im** functions.
cvSaveImage() works nicely, but the moment I try to use imwrite(), an unhandled exception arises with an 'access violation reading location' message. The same goes for imread() when I try to load an image.
So, the thing I wanted to ask: is it possible to use most of the functionality with C OpenCV? Or is it necessary to use C++ OpenCV? If yes, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But the BGR2XYZ conversion seems to invert colors, while RGB2XYZ preserves them.
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect.
Quoted from there.
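As a small illustration of that channel order (a sketch; the output file name is made up): a frame from VideoCapture already arrives in BGR order, so it can be handed to imwrite() directly, and color conversions should start from the CV_BGR2* codes.

#include "opencv2/opencv.hpp"

using namespace cv;

int main() {
    VideoCapture cap(0);
    if( !cap.isOpened() ) return -1;

    Mat frame, xyz;
    cap >> frame; // frame arrives in BGR order
    if( frame.empty() ) return -1;
    imwrite( "frame_bgr.jpg", frame ); // imwrite expects BGR, so no conversion needed

    // Since the source is BGR, the matching conversion code is CV_BGR2XYZ
    cvtColor( frame, xyz, CV_BGR2XYZ );
    return 0;
}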

how is PCA implemented on a camera captured image?

I have successfully implemented the face detection part of my face recognition project. Now I have a rectangular face region in an image, and I have to implement PCA on this detected rectangular region to extract important features. I have looked at examples that implement PCA on face databases. I want to know how we can pass our detected face to the function implementing PCA. Do we pass the rectangle frame?
This is the code for my face detection.
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>
// Create a string that contains the exact cascade name
const char* cascade_name =
"haarcascade_frontalface_alt.xml";
/* "haarcascade_profileface.xml";*/
// Function prototype for detecting and drawing an object from an image
void detect_and_draw( IplImage* image );
// Main function, defines the entry point for the program.
int main( int argc, char** argv )
{
    // Load a sample image
    IplImage *img = cvLoadImage("Image018.jpg");
    if(!img)
    {
        printf("could not load image");
        return -1;
    }
    // Call the function to detect and draw the face positions
    detect_and_draw(img);
    // Wait for user input before quitting the program
    cvWaitKey();
    // Release the image
    cvReleaseImage(&img);
    // Destroy the window previously created with the name "result"
    cvDestroyWindow("result");
    // Return 0 to indicate successful execution of the program
    return 0;
}
// Function to detect and draw any faces present in an image
void detect_and_draw( IplImage* img )
{
    // Memory for calculations
    static CvMemStorage* storage = 0;
    // The Haar classifier
    static CvHaarClassifierCascade* cascade = 0;
    int scale = 1;
    // Create a new image based on the input image
    IplImage* temp = cvCreateImage( cvSize(img->width/scale, img->height/scale), 8, 3 );
    // Two points to represent the face locations
    CvPoint pt1, pt2;
    int i;
    // Load the HaarClassifierCascade
    cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 );
    // Check whether the cascade has loaded successfully; else report an error and quit
    if( !cascade )
    {
        fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
        return;
    }
    // Allocate the memory storage
    storage = cvCreateMemStorage(0);
    // Create a new named window with title: result
    cvNamedWindow( "result", 1 );
    // Clear the memory storage which was used before
    cvClearMemStorage( storage );
    // If the cascade is loaded, find the faces:
    if( cascade )
    {
        // There can be more than one face in an image, so create a growable sequence of faces.
        // Detect the objects and store them in the sequence
        CvSeq* faces = cvHaarDetectObjects( img, cascade, storage,
                                            1.1, 2, CV_HAAR_DO_CANNY_PRUNING,
                                            cvSize(40, 40) );
        // Loop over the faces found
        for( i = 0; i < (faces ? faces->total : 0); i++ )
        {
            // Get the rectangle of the detected face
            CvRect* r = (CvRect*)cvGetSeqElem( faces, i );
            // Find the dimensions of the face, and scale them if necessary
            pt1.x = r->x*scale;
            pt2.x = (r->x + r->width)*scale;
            pt1.y = r->y*scale;
            pt2.y = (r->y + r->height)*scale;
            // Draw the rectangle in the input image
            cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 );
        }
    }
    // Show the image in the window named "result"
    cvShowImage( "result", img );
    // Release the temp image created
    cvReleaseImage( &temp );
}
Edit:
Just to notify anyone visiting this site. I have written some sample code to perform face recognition in videos using my libfacerec library:
https://github.com/bytefish/libfacerec/blob/master/samples/facerec_video.cpp
Original post:
I assume your problem is the following: you've used the cascade classifier cv::CascadeClassifier coming with OpenCV to detect and extract faces from images, and now you want to perform face recognition on them.
You want to use Eigenfaces for face recognition, so the first thing you have to do is learn the Eigenfaces from the images you've gathered. I rewrote the Eigenfaces class to make it simpler for you. To learn the Eigenfaces, simply pass a vector with your face images and the corresponding labels (the subject) either to Eigenfaces::Eigenfaces or Eigenfaces::compute. Make sure all your images have the same size; you can use cv::resize to ensure this.
Once you have computed the Eigenfaces, you can get predictions from your model: simply call Eigenfaces::predict on a computed model. The main.cpp shows you how to use the class and its methods (for prediction, projection, and reconstruction of images), including how to get a prediction for an image.
Now I see where your problem is: you are using the old OpenCV C API, which makes it hard to interface with the new OpenCV2 C++ API my code is written in. No offense, but if you want to interface with my code, you had better use the OpenCV2 C++ API. I can't give a guide to learning C++ and the OpenCV2 API here; there's a lot of documentation coming with OpenCV. A good start is the OpenCV C++ Cheat Sheet (also available at http://opencv.willowgarage.com/) or the OpenCV Reference Manual.
For recognizing faces from the cascade detector, I repeat: first learn the Eigenfaces model with the persons you want to recognize; this is shown in the example coming with my code. Then you need to get the Region Of Interest (ROI), that is, the face rectangle the cascade detector outputs. Finally, you can get a prediction for the ROI from the Eigenfaces model you computed above; this is also shown in the example. You probably have to convert your image to grayscale, but that's all. That's how it's done.
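Schematically, the detection-to-prediction hand-off looks like this (a sketch built on the Eigenfaces class from libfacerec as described above; the header path, the training data, and the 100x100 working size are placeholder assumptions):

#include "opencv2/opencv.hpp"
#include "facerec.hpp" // Eigenfaces from libfacerec; the path is an assumption
#include <iostream>

using namespace cv;

int main() {
    // Placeholder training data: grayscale faces, all resized to the same size
    std::vector<Mat> images; // fill with your gathered face images
    std::vector<int> labels; // one subject id per image
    Eigenfaces model(images, labels);

    // Detect a face in a new image with the C++ cascade classifier
    CascadeClassifier cascade("haarcascade_frontalface_alt.xml");
    Mat img = imread("Image018.jpg", 0); // grayscale, as noted above
    std::vector<Rect> faces;
    cascade.detectMultiScale(img, faces);
    if (faces.empty()) return -1;

    // Crop the ROI, bring it to the training size, and predict the subject
    Mat face = img(faces[0]).clone();
    resize(face, face, Size(100, 100)); // must match the training image size
    std::cout << "predicted label: " << model.predict(face) << std::endl;
    return 0;
}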

OpenCV Gives an error when using the imgproc functions

When I compile this code it builds fine, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>
int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000/rate;
    cv::Mat corners;
    while(!stop){
        if(!c.read(frame))
            break;
        cv::cornerHarris(frame,corners,3,3,0.1);
        cv::imshow("Hi!",corners);
        if(cv::waitKey(delay)>=0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but there is no memory allocated to it. The same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready, and the image is ok:
if(!c.isOpened()) // the capture object in the code above is c
    break;
if(!c.read(frame))
    break;
if(frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_BGR2GRAY); // camera frames are BGR, so use CV_BGR2GRAY
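Putting both answers together, a corrected version of the loop might look like this (a sketch; the normalize step is an addition of mine so the float corner response is actually visible in imshow, since cornerHarris writes small CV_32FC1 values):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture c(0);
    if (!c.isOpened())
        return -1;

    double rate = 10;
    int delay = static_cast<int>(1000 / rate);
    cv::namedWindow("Hi!");
    cv::Mat frame, frameGray, corners, shown;

    while (true) {
        if (!c.read(frame) || frame.empty())
            break;
        // cornerHarris expects a single-channel input image
        cv::cvtColor(frame, frameGray, CV_BGR2GRAY);
        cv::cornerHarris(frameGray, corners, 3, 3, 0.1);
        // Stretch the float response to 0..255 so it can be displayed
        cv::normalize(corners, shown, 0, 255, cv::NORM_MINMAX, CV_8UC1);
        cv::imshow("Hi!", shown);
        if (cv::waitKey(delay) >= 0)
            break;
    }
    return 0;
}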