Cannot detect Faces using Offline Affectiva SDK - c++

I'm new to the Affectiva Emotion Recognition SDK. I have been following the example from the video at this link, but when I feed it some pictures, for example this image, the face cannot be detected.
My code looks like this:
Listener
class Listener : public affdex::ImageListener {
    void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) {
        std::string pronoun = "they";
        std::string emotion = "neutral";
        for (auto pair : faces) {
            affdex::FaceId faceId = pair.first;
            affdex::Face face = pair.second;
            if (face.appearance.gender == affdex::Gender::Male) {
                pronoun = "Male";
            } else if (face.appearance.gender == affdex::Gender::Female) {
                pronoun = "Female";
            }
            if (face.emotions.joy > 25) {
                emotion = "Happy :)";
            } else if (face.emotions.sadness > 25) {
                emotion = "Sad :(";
            }
            cout << faceId << " : " << pronoun << " looks " << emotion << endl;
        }
    }
    void onImageCapture(affdex::Frame image) {
        cout << "Image captured" << endl;
    }
};
Main code
Mat img;
img = imread(argv[1], CV_LOAD_IMAGE_COLOR);

affdex::Frame frame(img.size().width, img.size().height, img.data, affdex::Frame::COLOR_FORMAT::BGR);

affdex::PhotoDetector detector(3);
detector.setClassifierPath("/xxx/xxx/affdex-sdk/data");

affdex::ImageListener * listener(new Listener());
detector.setImageListener(listener);

detector.setDetectAllEmotions(true);
detector.setDetectAllExpressions(true);

detector.start();
detector.process(frame);
detector.stop();
Where am I making a mistake? Or is it that the SDK cannot detect faces in some images? Can anybody help me?
Edit
I used the following images:

Sometimes the SDK cannot detect faces in an image. There is no detector that can detect all faces all the time. Did you check with different images?
Edit:
Those two images are 250x250 and 260x194, and really low quality. I recommend testing the app with higher-resolution images. As Affectiva states on their webpage, the minimum recommended resolution is 320x240 and faces should be at least 30x30.
https://developer.affectiva.com/obtaining-optimal-results/
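If you want to guard against that in code, a small check before building the affdex::Frame could look like this (a minimal sketch based on the numbers above; the helper name is made up):
#include <opencv2/opencv.hpp>
#include <iostream>

// Hypothetical helper: warn if the input is below Affectiva's recommended
// minimum resolution of 320x240 (faces should also be at least 30x30).
bool meetsRecommendedResolution(const cv::Mat &img) {
    const int minWidth = 320, minHeight = 240;
    if (img.cols < minWidth || img.rows < minHeight) {
        std::cerr << "Image is " << img.cols << "x" << img.rows
                  << "; Affectiva recommends at least "
                  << minWidth << "x" << minHeight << std::endl;
        return false;
    }
    return true;
}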

Related

lag in opencv videocapture when i use rtsp camera stream

So I'm currently working on a project that needs to do facial recognition on an RTSP IP cam. I managed to get the RTSP feed with no problems, but when it comes to applying the face recognition the video feed gets too slow and shows a great delay. I even used multithreading to make it better, but with no success. Here is my code; I'm still a beginner in multithreading matters, so any help would be appreciated.
#include <iostream>
#include <thread>
#include <vector>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

void detect(Mat img, String strCamera) {
    string cascadeName1 = "C:\\ocv3.2\\Build\\install\\etc\\haarcascades\\haarcascade_frontalface_alt.xml";
    CascadeClassifier facedetect;
    bool loaded1 = facedetect.load(cascadeName1);

    Mat original;
    img.copyTo(original);
    vector<Rect> human;
    cvtColor(img, img, CV_BGR2GRAY);
    equalizeHist(img, img);
    facedetect.detectMultiScale(img, human, 1.1, 2, 0 | 1, Size(40, 80), Size(400, 480));
    if (human.size() > 0)
    {
        for (int gg = 0; gg < human.size(); gg++)
        {
            rectangle(original, human[gg].tl(), human[gg].br(), Scalar(0, 0, 255), 2, 8, 0);
        }
    }
    imshow("Detect " + strCamera, original);
    int key6 = waitKey(40);
    // End of the detect
}

void stream(String strCamera) {
    VideoCapture cap(strCamera);
    if (cap.isOpened()) {
        while (true) {
            Mat frame;
            cap >> frame;
            resize(frame, frame, Size(640, 480));
            detect(frame, strCamera);
        }
    }
}

int main() {
    thread cam1(stream, "rtsp://admin:password@ipaddress:554/live2.sdp?tcp");
    thread cam2(stream, "rtsp://admin:password@ipaddress/live2.sdp?tcp");
    cam1.join();
    cam2.join();
    return 0;
}
I had similar issues and was able to resolve them by completely isolating the frame capturing from processing of the images. I also updated OpenCV to the latest (3.2.0) available, but I think this will also resolve problems with earlier versions.
void StreamLoop(String strCamera, LFQueue1P1C<Mat> *imageQueue, bool *shutdown) {
    VideoCapture cap(strCamera, CV_CAP_FFMPEG);
    Mat image;
    while (!(*shutdown) && cap.isOpened()) {
        cap >> image;
        imageQueue->Produce(image);
    }
}

int main() {
    Mat aImage1;
    bool shutdown(false);
    LFQueue1P1C<Mat> imageQueue;
    string rtsp("rtsp://admin:password@ipaddress:554/live2.sdp?tcp");

    thread streamThread(StreamLoop, rtsp, &imageQueue, &shutdown);
    ...
    while (!shutdownCondition) {
        if (imageQueue.Consume(aImage1)) {
            // Process Image
            resize(aImage1, aImage1, Size(640, 480));
            detect(aImage1, rtsp);
        }
    }
    shutdown = true;
    if (streamThread.joinable()) streamThread.join();
    ...
    return 0;
}
It seems that there is some issue with RTSP in OpenCV where it easily hangs up if there are even slight pauses while picking up the frames. As long as I pick up frames without much pause I have not seen a problem.
Also, I didn't have this issue when the video cameras were directly connected to my local network. It was not until we deployed them at a remote site that I started getting the hang-ups. Separating frame retrieval and processing into separate threads resolved my issues; hopefully someone else might find this solution useful.
Note: The queue I used is a custom queue for passing items from one thread to another. The code I posted is modified from my original code to make it more readable and applicable to this problem.
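If you don't have such a queue to hand, a minimal mutex-based stand-in with the same Produce/Consume interface could look like this (a sketch only; unlike the original it is not lock-free, but that is fine for one producer and one consumer):
#include <mutex>
#include <queue>

// Minimal single-producer/single-consumer queue mirroring the
// Produce/Consume interface used above. Mutex-based, not lock-free.
template <typename T>
class SimpleQueue1P1C {
public:
    void Produce(const T &item) {
        std::lock_guard<std::mutex> lock(mtx_);
        q_.push(item);
    }
    // Returns true and fills 'out' if an item was available.
    bool Consume(T &out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop();
        return true;
    }
private:
    std::mutex mtx_;
    std::queue<T> q_;
};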
i'm still a beginner in multi threading matters so any help would be appreciated
Having threads that have no way of exiting will cause you issues in the future. Even if it is test code, get in the habit of making sure the code has an exit path. As an example: You might copy and paste a section of code later on and forget there is an infinite loop in there and it will cause great grief later trying to track down why you have mysterious crashing or your resources are locked up.
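For example, the stream() loop from the question could be given an exit path with a shared flag (a sketch, not the poster's code; detect() is the function defined in the question):
#include <atomic>
#include <opencv2/opencv.hpp>

// Shared flag so every capture thread has a way to leave its loop.
std::atomic<bool> stopRequested(false);

void stream(cv::String strCamera) {
    cv::VideoCapture cap(strCamera);
    while (!stopRequested && cap.isOpened()) {
        cv::Mat frame;
        cap >> frame;
        if (frame.empty()) break;                  // also exit if the stream drops
        cv::resize(frame, frame, cv::Size(640, 480));
        detect(frame, strCamera);                  // detect() as in the question
    }
}

// In main(): set stopRequested = true (e.g. on a key press) before joining the threads.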
I am not a C++ developer but I had the same problem in Java. I solved my issue by calling the VideoCapture.grab() function before reading the camera frame. According to the OpenCV docs, the use of the grab function is:
The primary use of the function is in multi-camera environments,
especially when the cameras do not have hardware synchronization.
Besides that, in a Java application, you should release your frame's Mat objects every time you read new frames.
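The C++ equivalent of that pattern is grab() followed by retrieve() (a sketch; whether it helps will depend on your camera and OpenCV build):
#include <opencv2/opencv.hpp>

void streamWithGrab(const cv::String &url) {
    cv::VideoCapture cap(url);
    cv::Mat frame;
    while (cap.isOpened()) {
        if (!cap.grab())          // grab the next frame quickly...
            break;
        if (!cap.retrieve(frame)) // ...then decode it
            continue;
        // process/display the frame here
    }
}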

OpenCV 2.4.10 Face detection works with video but fails to detect in a static image

I'm using OpenCV's Cascade Classifier in order to detect faces. I followed the webcam tutorial, and I was able to use detectMultiScale to find and track my face while it was streaming video from my laptop's webcam.
But when I take a photo of myself from my laptop's webcam, I load that image into OpenCV, and apply detectMultiScale on that image, and for some reason, the Cascade Classifier can't detect any faces on that static image!
That static image would definitely have been detected if it were one frame from my webcam stream, but when I just take that one individual image alone, nothing is detected.
Here's the code I use (just picked out the relevant lines):
Code in Common:
String face_cascade_name = "/path/to/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;

Mat imagePreprocessing(Mat frame) {
    Mat processed_frame;
    cvtColor( frame, processed_frame, COLOR_BGR2GRAY );
    equalizeHist( processed_frame, processed_frame );
    return processed_frame;
}
For Web-cam streaming face detection:
int detectThroughWebCam() {
    VideoCapture capture;
    Mat frame;
    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; }

    //-- 2. Read the video stream
    capture.open( -1 );
    if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }

    while ( capture.read(frame) )
    {
        if(frame.empty()) {
            printf(" --(!) No captured frame -- Break!");
            break;
        }

        //-- 3. Apply the classifier to the frame
        Mat processed_image = imagePreprocessing( frame );
        vector<Rect> faces;
        face_cascade.detectMultiScale( processed_image, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
        if (faces.size() > 0) cout << "SUCCESS" << endl;

        int c = waitKey(10);
        if( (char)c == 27 ) { break; } // escape
    }
    return 0;
}
For my static image face detection:
void staticFaceDetection() {
    Mat image = imread("path/to/jpg/image");
    Mat processed_frame = imagePreprocessing(image);
    std::vector<Rect> faces;
    //-- Detect faces
    face_cascade.detectMultiScale( processed_frame, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
    if (faces.size() > 0) cout << "SUCCESS" << endl;
}
In my eyes, both of these processes are identical (the only difference being where I'm acquiring the original image), but the video stream version regularly detects faces, while the static method never seems to be able to find a face.
Am I missing something here?
There are a few possible reasons for that.
You save the image at a low resolution. Try saving it at the original resolution.
Lossy compression. Do you save the images as .jpg files? Maybe your compression is too strong. Try saving as a BMP file (it preserves the original quality).
Format of the image. I don't know what your imagePreprocessing() method does, but you might introduce the following problem. The camera captures video in a specific format (most cameras use YUV). Typically face detection is performed on the first plane, Y. When you save the image and read it from disk as RGB, you must not run the face detection on the first plane: that would be the 'B' plane, and the blue channel stores very little information about the face. Make sure that you correctly convert the image to grayscale before you run the face detection.
Range of the image. This is a common mistake. Make sure that the dynamic range of the image is correct. Sometimes by mistake you might multiply all the values by 255, effectively turning the entire image white.
Maybe face detection on images works fine but you somehow clear the faces vector after the detection. Another mistake might be that you read a different image file. For example, you save images to directory 'A' but accidentally read from directory 'B'.
If none of the above helps, do the following debugging.
For a video frame 'i', store it in memory. Then save it to disk and read it back from the file into memory. Now the most important part: compare the images. If they are different, that is the reason for the different face detection results. If not, then further investigation is needed. I am pretty sure that the images will not be identical, and that is the problem. You can see where the images differ by taking differences between pixel values and displaying the diff image. You can compare the images with the memcmp() function, which compares two memory blocks.
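If memcmp() reports a difference, a quick way to see and quantify where the images differ with OpenCV (a sketch):
#include <opencv2/opencv.hpp>
#include <iostream>

// Compare the in-memory frame with the copy that went through disk.
void compareFrames(const cv::Mat &inMemory, const cv::Mat &fromDisk) {
    CV_Assert(inMemory.size() == fromDisk.size() && inMemory.type() == fromDisk.type());
    cv::Mat diff;
    cv::absdiff(inMemory, fromDisk, diff);
    std::cout << "Non-zero diff values: "
              << cv::countNonZero(diff.reshape(1)) << std::endl;
    cv::imshow("diff", diff);
    cv::waitKey(0);
}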
Good luck
Solved it!
Really stupid mistake. I didn't call face_cascade.load() to load the Haar cascade for the static image version, but I did for the webcam version.
It's all working now.
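For completeness, a sketch of the fixed static path, reusing the identifiers from the question:
void staticFaceDetection() {
    // This load call was the missing step.
    if (!face_cascade.load(face_cascade_name)) {
        printf("--(!)Error loading face cascade\n");
        return;
    }
    Mat image = imread("path/to/jpg/image");
    Mat processed_frame = imagePreprocessing(image);
    std::vector<Rect> faces;
    face_cascade.detectMultiScale(processed_frame, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30));
    if (faces.size() > 0) cout << "SUCCESS" << endl;
}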

Real Time Multiple object detection from mulitple videos, while correcting image distortion

I'm working on a project where I'm trying to detect multiple objects from multiple videos at the same time and I am also correcting the distortion in the videos so I can get an accurate reading for the bearing of the detections relative to the camera.
I'm working with OpenCV, in C++ using Visual Studio 2010
The code works, but it is very slow, not real time. I am hoping someone may have suggestions as to how it can be sped up, if it can. I'm not much of a coder at present, still learning about image processing, and don't know many tricks.
What the code does in a general sense is:
-Opens a video file
-Applies distortion correction to each image frame (for bearings)
-Crops the image (improves detections, time stamps give false positives)
-Applies a Haar type cascade to the cropped image to detect objects
-Draws a bounding box around the detections
-Displays the images
-It calculates the angle of the detections of the objects and prints to terminal
-It also draws a radar-like image and displays the angle relative to the camera of each detection in each frame; the idea is to have a single display mapping out the detections from each camera in the surrounding area.
I've included the main code that runs the video and detections for a single video; this is still pretty slow and takes approx. 18 seconds for each second of video. When I have 4 videos attempting to run, it takes about 3 times longer.
Video dimensions are 704x576.
Any help or advice would be much appreciated, or even just knowing that it can only be sped up with purpose designed hardware.
Cheers,
Dave
int main(){
    /////////////////////////////
    //**Distortion Correction**//
    /////////////////////////////
    std::cout << endl << "Reading:" << endl;

    // stores a file
    FileStorage fs;
    // reads and stores the xml file with the camera calibration
    fs.open("cal2.xml", FileStorage::READ);
    if (fs.isOpened())
    {
        cout << "File is opened\n";
    }

    // Mat objects to store the camera matrix and distortion coefficients
    Mat CamMat, DistCoeff;
    FileNode n = fs.root();
    // takes the parameters from the xml file and stores them in the Mat objects
    fs["Camera_Matrix"] >> CamMat;
    fs["Distortion_Coefficients"] >> DistCoeff;

    /////////////////////
    //**Video Display**//
    /////////////////////
    // Mat objects to store the images
    Mat Original, Vid1, Vid1Crop;
    // Cropping image to exclude time/camera stamps to improve detections
    Rect roi(0, 35, 704, 490);
    // for reading video or webcam
    VideoCapture cap;
    // for opening a video file, give the location and name of the file, separating folders with "\\" instead of a single "\"
    cap.open("C:\\Users\\Desktop\\Run_01_005 two containers\\Video\\ch04_20140219124355.mp4");
    // Windows for the images
    namedWindow("New", CV_WINDOW_NORMAL);
    namedWindow("Display", CV_WINDOW_NORMAL);

    ///////////////////////
    //**Detection Setup**//
    ///////////////////////
    // Cascade Classifier object
    CascadeClassifier boat_cascade;
    // loads the xml file for the classifier, put the address and name of the xml file in the brackets
    boat_cascade.load( "boatcascadeAttp3.xml" );

    /////////////////////////////////////
    //**Single Source Display Image**////
    /////////////////////////////////////
    Mat Output;

    // loop to continually capture/update images
    while (1){
        Output = Display();
        cap >> Original;
        // applies the distortion correction to the input image and outputs to the New image
        undistort(Original, Vid1, CamMat, DistCoeff, noArray());
        // image excluding the time/camera stamps in the video, which caused a lot of false positives
        Vid1Crop = Vid1(roi);
        //Set.NewCrop(New, roi);

        // Detect boats
        std::vector<Rect> boats;
        // Parameters may need some further adjustment, currently seems to work well
        // Detection performed on a region of interest that excludes the time stamp, which caused a number of false positives
        boat_cascade.detectMultiScale( Vid1Crop, boats, 1.1, 15, 0|CV_HAAR_SCALE_IMAGE, Size(25, 25), Size(75,75) );

        // Draw circles on the detected boats
        for( int i = 0; i < boats.size(); i++ )
        {
            // Draws a box around the detected object
            rectangle( Vid1Crop, Point(boats[i].x, boats[i].y), Point(boats[i].x+boats[i].width, boats[i].y+boats[i].height), Scalar( 0, 255, 0), 2, 8);
            // finds the position of the detection along the X axis
            int centreX = boats[i].x + boats[i].width*0.5;
            int fromCent = Vid1Crop.cols - centreX;
            float angle;
            // calls Angle function
            angle = Angle(centreX, fromCent, Vid1Crop);
            // calls DisplayPoints function
            Point XYpoints = DisplayPoints(angle);
            // prints out the result, angle for cam ranges
            cout << angle;
            cout << " degrees" << endl;
            // Draws red circles on the single source display corresponding to the detections
            circle( Output, XYpoints, 5.0, Scalar( 0, 0, 255 ), 4, 8 );
        }

        // shows the New output image after correction
        imshow("New", Vid1);
        imshow("Display", Output);
        // delay of 1 ms between frames - note 25 fps in video
        waitKey(1);
    }
    fs.release();
    return 0;
}

Issue with stitching images using openCV for iOS

I'm trying to adopt the code from here:
https://github.com/foundry/OpenCVStitch
into my program. However, I've run up against a wall. This code stitches together images that already exist, while the program I'm trying to make will stitch together images that the user took. The error I'm getting is that when I pass the images to the stitch function, it says they are of invalid size (0 x 0).
Here is the stitching function:
- (IBAction)stitchImages:(UIButton *)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray* imageArray = [NSArray arrayWithObjects: chosenImage, chosenImage2, nil];
        UIImage* stitchedImage = [CVWrapper processWithArray:imageArray]; // error occurring within processWithArray function
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog (@"stitchedImage %@", stitchedImage);
            UIImageView *imageView = [[UIImageView alloc] initWithImage:stitchedImage];
            self.imageView = imageView;
            [self.scrollView addSubview:imageView];
            self.scrollView.backgroundColor = [UIColor blackColor];
            self.scrollView.contentSize = self.imageView.bounds.size;
            self.scrollView.maximumZoomScale = 4.0;
            self.scrollView.minimumZoomScale = 0.5;
            self.scrollView.contentOffset = CGPointMake(-(self.scrollView.bounds.size.width-self.imageView.bounds.size.width)/2, -(self.scrollView.bounds.size.height-self.imageView.bounds.size.height)/2);
            [self.spinner stopAnimating];
        });
    });
}
chosenImage and chosenImage2 are images the user has taken using these two functions:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    savedImage = info[UIImagePickerControllerOriginalImage];

    // display photo in the correct UIImageView
    switch (image_location) {
        case 1:
            chosenImage = info[UIImagePickerControllerOriginalImage];
            self.imageView2.image = chosenImage;
            image_location++;
            break;
        case 2:
            chosenImage2 = info[UIImagePickerControllerOriginalImage];
            self.imageView3.image = chosenImage2;
            image_location--;
            break;
    }

    // if user clicked "take photo", it should save photo
    // if user clicked "select photo", it should not save photo
    /*if (should_save){
        UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    }*/

    [picker dismissViewControllerAnimated:YES completion:NULL];
}

- (IBAction)takePhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.allowsEditing = NO;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;

    //last_pressed = 1;
    should_save = 1;
    [self presentViewController:picker animated:YES completion:NULL];
}
The stitchImages function passes an array of these two images to this function:
+ (UIImage*) processWithArray:(NSArray*)imageArray
{
    if ([imageArray count] == 0){
        NSLog (@"imageArray is empty");
        return 0;
    }
    cv::vector<cv::Mat> matImages;

    for (id image in imageArray) {
        if ([image isKindOfClass: [UIImage class]]) {
            cv::Mat matImage = [image CVMat3];
            NSLog (@"matImage: %@", image);
            matImages.push_back(matImage);
        }
    }
    NSLog (@"stitching...");
    cv::Mat stitchedMat = stitch (matImages); // error occurring within stitch function
    UIImage* result = [UIImage imageWithCVMat:stitchedMat];
    return result;
}
This is where the program is running into a problem. When it is passed images that are saved locally in the application file, it works fine. However, when it is passed images that are saved in variables (chosenImage and chosenImage2), it doesn't work.
Here is the stitch function that is being called in the processWithArray function and is causing the error:
cv::Mat stitch (vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    Stitcher::Status status = stitcher.stitch(imgs, pano);

    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
The error is "Can't stitch images, error code = 1".
You are hitting memory limits. The four demo images included are 720 x 960 px, whereas you are using the full resolution image from the device camera.
Here is an Allocations trace in instruments leading up to the crash, stitching two images from the camera...
The point of this github sample is to illustrate a few things...
(1) how to integrate openCV with iOS;
(2) how to separate Objective-C and C++ code using a wrapper;
(3) how to implement the most basic stitching function in openCV.
It is best regarded as a 'hello world' project for iOS+openCV, and was not designed to work robustly with camera images. If you want to use my code as-is, I would suggest first reducing your camera images to a manageable size (e.g. max 1000 on the long side).
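For instance, once the UIImages have been converted to cv::Mat in the wrapper, something like this could cap the size before stitching (a sketch; the helper name and the 1000 px limit come from the suggestion above):
#include <algorithm>
#include <opencv2/opencv.hpp>

// Downscale so the longer side is at most maxSide, keeping the aspect
// ratio. No-op for images that are already small enough.
cv::Mat capLongSide(const cv::Mat &src, int maxSide = 1000) {
    int longSide = std::max(src.cols, src.rows);
    if (longSide <= maxSide) return src;
    double scale = static_cast<double>(maxSide) / longSide;
    cv::Mat dst;
    cv::resize(src, dst, cv::Size(), scale, scale, cv::INTER_AREA);
    return dst;
}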
In any case the openCV framework you are using is as old as the project. Thanks to your question, I have just updated it (now arm64-friendly), although the memory limitations still apply.
V2, OpenCVSwiftStitch may be a more interesting starting-point for your experiments - the interface is written in Swift, and it uses cocoaPods to keep up with openCV versions (albeit currently fixed to 2.4.9.1 as 2.4.10 breaks everything). So it still illustrates the three points, and also shows how to use Swift with C++ using an Objective-C wrapper as an intermediary.
I may be able to improve memory handling (by passing around pointers). If so I will push an update to both v1 and v2. If you can make any improvements, please send a pull request.
Update: I've had another look and I am fairly sure it won't be possible to improve the memory handling without getting deeper into the openCV stitching algorithms. The images are already allocated on the heap, so there are no improvements to be made there. I expect the best bet would be to tile and cache the intermediate images which openCV seems to be creating as part of the process. I will post an update if I get any further with this. Meanwhile, resizing the camera images is the way to go.
Update 2:
Some while later, I found the underlying cause of the issue. When you use images from the iOS camera as your inputs, if those images are in portrait orientation they will have the incorrect input dimensions (and orientation) for openCV. This is because all iOS camera photos are taken natively as 'landscape left'. The pixel dimensions are landscape, with the home button on the right. To display portrait, the 'imageOrientation' flag is set to UIImageOrientationRight. This is only an indication to the OS to rotate the image 90 degrees to the right for display.
The image is stored unrotated, landscape left. The incorrect pixel orientation leads to higher memory requirements and unpredictable/broken results in openCV.
I have fixed this in the latest version of openCVSwiftStitch: when necessary images are rotated pixelwise before adding to the openCV pipeline.
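In OpenCV 2.x terms the pixel-wise rotation amounts to a transpose plus a flip before the Mat enters the stitching pipeline (a sketch; checking UIImage.imageOrientation has to happen on the Objective-C side):
#include <opencv2/opencv.hpp>

// Rotate a landscape-left camera buffer 90 degrees clockwise so a portrait
// photo gets portrait pixel dimensions before stitching.
// (Newer OpenCV versions can use cv::rotate instead.)
cv::Mat rotateForPortrait(const cv::Mat &landscapeLeft) {
    cv::Mat rotated;
    cv::transpose(landscapeLeft, rotated);
    cv::flip(rotated, rotated, 1); // flip around the y-axis => 90 degrees clockwise
    return rotated;
}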

how to save identical faces only once

I am doing a project on face detection from surveillance cameras. I am now at the stage of face detection and I can detect faces in each frame. After detecting a face I need to store it in a local folder, and currently I can save each face in the specified folder.
Problem: Right now it is saving every face, but I need to save identical faces only once. That means that if I have saved one face as a JPEG image and, as face detection progresses, the same face comes up again, I don't want to save that particular face a second time.
This is my code:
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>
#include <iostream>
#include <string>
#include <cstring>

using namespace std;

int ct = 1;
int ct1 = 0;
IplImage *frame;
int frames;
void facedetect(IplImage* image);
void saveImage(IplImage *img, char *ex);
IplImage* resizeImage(const IplImage *origImg, int newWidth, int newHeight, bool keepAspectRatio);
const char* cascade_name = "haarcascade_frontalface_default.xml";
int k = 1;

int main(int argc, char** argv)
{
    CvCapture *capture = cvCaptureFromFile("Arnab Goswami on Pepper spary rajagopal Complete NewsHour Debate (Mobile).3gp");
    int count = 1;
    while (1)
    {
        frame = cvQueryFrame(capture);
        if (count % 30 == 0)
        {
            facedetect(frame);
        }
        count++;
    }
    cvReleaseCapture(&capture);
    return 0;
}

void facedetect(IplImage* image)
{
    ct1++;
    cvNamedWindow("output");
    int j = 0, i, count = 0, l = 0, strsize;
    char numstr[50];
    int arr[100], arr1[100];
    CvPoint ul, lr, w, h, ul1, lr1;
    CvRect *r;
    //int i=0;
    IplImage* image1; IplImage* tmpsize; IplImage* reimg;
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*) cvLoad(cascade_name);
    CvMemStorage* storage = cvCreateMemStorage(0);
    const char *extract;
    if (!cascade)
    {
        cout << "Could not load classifier cascade" << endl;
    }
    if (cascade)
    {
        // function used for detecting faces; output is the list of detected faces
        CvSeq* faces = cvHaarDetectObjects(image, cascade, storage, 1.1, 1, CV_HAAR_DO_CANNY_PRUNING, cvSize(10, 10));
        for (int i = 0; i < (faces ? faces->total : 0); i++)
        {
            string s1 = "im", re, rename, ex = ".jpeg";
            sprintf(numstr, "%d", k);
            re = s1 + numstr;
            rename = re + ex;
            char *extract1 = new char[rename.size() + 1];
            extract1[rename.size()] = 0;
            // Copies rename.size() bytes from the location pointed to by the source
            // (rename.c_str()) directly to the memory block pointed to by the destination (extract1).
            memcpy(extract1, rename.c_str(), rename.size());
            strsize = rename.size();
            r = (CvRect*) cvGetSeqElem(faces, i); // rectangle outline around each detected face
            ul.x = r->x;
            ul.y = r->y;
            w.x = r->width;
            h.y = r->height;
            lr.x = (r->x + r->width);
            lr.y = (r->y + r->height);
            cvSetImageROI(image, cvRect(ul.x, ul.y, w.x, h.y));
            image1 = cvCreateImage(cvGetSize(image), image->depth, image->nChannels);
            cvCopy(image, image1, NULL);
            reimg = resizeImage(image1, 40, 40, true);
            saveImage(reimg, extract1);
            cvResetImageROI(image);
            cvRectangle(image, ul, lr, CV_RGB(1, 255, 0), 3, 8, 0);
            j++, count++;
            k++;
            cout << "frame" << ct1 << " " << "face" << ct << ":" << "x: " << ul.x << endl;
            cout << "frame" << ct1 << " " << "face" << ct << ":" << "y: " << ul.y << endl;
            cout << "" << endl;
            ct++;
            //cvShowImage("output",image);
        }
        //return image;
        //cvNamedWindow("output"); // creating a window.
        cvShowImage("output", image); // showing the resized image.
        cvWaitKey(0);
    }
}

void saveImage(IplImage *img, char *ex)
{
    int i = 0;
    char path[255] = "/home/athira/Image/OutputImage";
    char *ext[200];
    char buff[1000];
    ext[i] = ex;
    sprintf(buff, "%s/%s", path, ext[i]); // build the full output path in buff
    strcat(path, buff); // concat path & buff
    cvSaveImage(buff, img);
    i++;
}
You are using the Haar feature-based cascade classifier for object detection. As far as I know, these XML files are only trained to detect specific objects based on hundreds of evaluated pictures (see cascade classifier training).
So to compare saved images you will need another kind of "detection", because you have to determine whether two faces are identical with respect to view angle and so on (keyword: biometric data).
The keyword you're looking for is "face recognition", I think. Just build up a database of your detected faces and use it for face recognition after that.
Edit:
Another link that may be helpful: www.shervinemami.info/faceRecognition.html
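As a rough idea of what that recognition step could look like with OpenCV 2.4's contrib module (a sketch, not a drop-in solution; collecting training data and choosing the distance threshold are the hard parts):
#include <opencv2/contrib/contrib.hpp> // FaceRecognizer lives here in OpenCV 2.4
#include <opencv2/opencv.hpp>
#include <vector>

// Train an LBPH recognizer on the grayscale thumbnails already saved, then
// ask whether a newly detected face matches one of them before saving it.
bool isKnownFace(const std::vector<cv::Mat> &savedFaces, const cv::Mat &newFaceGray,
                 double maxDistance = 70.0) {
    if (savedFaces.empty()) return false;
    std::vector<int> labels(savedFaces.size());
    for (size_t i = 0; i < savedFaces.size(); ++i) labels[i] = static_cast<int>(i);
    cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();
    model->train(savedFaces, labels);
    int predicted = -1;
    double confidence = 0.0; // for LBPH, lower means a closer match
    model->predict(newFaceGray, predicted, confidence);
    return confidence < maxDistance;
}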
If I understood correctly, what you want is to detect faces in one frame and save a thumbnail of each face. Then, in the following frame, you want to detect faces again but only save the thumbnails of those that were not present in the first frame.
This problem is hard, because the faces captured in a video always change from one frame to the next. This is due to noise in the images, to the fact that the persons may be moving, etc. As a consequence, no two faces are ever identical in a surveillance video.
Hence, in order to achieve what you asked, you need to determine whether the face you are considering has already been observed in previous frames. In its general form, this is not an obvious problem and is still the topic of a lot of research related to biometrics, pedestrian tracking and re-identification, etc. Therefore, you will have a hard time achieving 100% effectiveness in detecting that a given face has already been observed.
However, if you can accept a method that is not 100% effective, you could try the following approach:
1. Detect faces F_i(0) in frame 0, with associated image positions (x_i(0), y_i(0)), and store the thumbnails.
2. Compute sparse optical flow (e.g. using KLT, see this link) at the positions (x_i(n-1), y_i(n-1)) of the faces in the previous frame n-1, in order to estimate their positions (xx_i(n), yy_i(n)) in the current frame n.
3. Detect faces F_i(n) in the current frame n, with associated image positions (x_i(n), y_i(n)), and save only the thumbnails of those which are not close to one of the predicted positions (xx_i(n), yy_i(n)).
4. Increment n and repeat steps 2-3 using the next frame.
This is a simple algorithm using tracking to determine whether a given face was already observed previously. It should be easier to implement than biometrics-based approaches, and also probably more appropriate in the context of video surveillance. However, it is not 100% accurate, due to the limited effectiveness of the optical-flow estimation and of the face detector.
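A sketch of steps 2-3 with OpenCV's pyramidal Lucas-Kanade tracker (the 30-pixel threshold is an arbitrary example):
#include <cmath>
#include <opencv2/opencv.hpp>
#include <vector>

// Predict where last frame's face centres moved to, then treat a new
// detection as "already seen" if it lands near a predicted position.
std::vector<cv::Rect> keepOnlyNewFaces(const cv::Mat &prevGray, const cv::Mat &currGray,
                                       const std::vector<cv::Point2f> &prevCenters,
                                       const std::vector<cv::Rect> &detections,
                                       double maxDist = 30.0) {
    std::vector<cv::Point2f> predicted;
    std::vector<uchar> status;
    std::vector<float> err;
    if (!prevCenters.empty())
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevCenters, predicted, status, err);

    std::vector<cv::Rect> newFaces;
    for (size_t i = 0; i < detections.size(); ++i) {
        cv::Point2f c(detections[i].x + detections[i].width * 0.5f,
                      detections[i].y + detections[i].height * 0.5f);
        bool alreadySeen = false;
        for (size_t j = 0; j < predicted.size(); ++j) {
            float dx = predicted[j].x - c.x, dy = predicted[j].y - c.y;
            if (status[j] && std::sqrt(dx * dx + dy * dy) < maxDist) { alreadySeen = true; break; }
        }
        if (!alreadySeen) newFaces.push_back(detections[i]); // only these get saved
    }
    return newFaces;
}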