Why does OpenCV face detection/recognition recognize untrained faces? - python-2.7

I trained 472 unique images of person A for face recognition, using "haarcascade_frontalface_default.xml" for detection.
When I run it on person A with the same images I trained on, I get 20% to 80% confidence, which is fine for me.
But I am also getting 20% to 80% confidence for person B, whom I did not include in the training images. Why is this happening for person B when I do face detection?
I am using Python 2.7 and OpenCV 3.2.0-dev.

This is because a Haar cascade detects objects that share the same set of features.
Even though face B is different from face A, the two faces share the same features: two eyes, a nose, and a mouth, so the confidence is similar for A and B. Haar cascades alone are not enough for the task of distinguishing different faces.
I recommend reading the original Viola-Jones paper.
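As an illustration, here is a minimal detection-only sketch (Python/OpenCV assumed; the image path is a placeholder). The cascade returns a bounding box for any face-like region, with no identity attached, so it fires on person A and person B alike.

    import cv2

    # Cascade path is an assumption; use the XML shipped with your OpenCV install.
    cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

    img = cv2.imread("some_photo.jpg")  # hypothetical input image
    if img is None:
        raise SystemExit("image not found")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # One bounding box per face-like region; no identity information is produced here.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("detected.jpg", img)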

I guess that in your problem you are not actually referring to detection but to recognition; you must know the difference between these two things:
1. Detection does not distinguish between persons; it just detects face-shaped regions based on the previously trained Haar cascade.
2. Recognition is the case where you first detect a person and then try to distinguish that person against your cropped and aligned database of pictures. I suggest you follow Philipp Wagner's tutorial for that; a minimal sketch follows below.
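A hedged sketch of that detect-then-recognize flow, using the LBPH recognizer from opencv-contrib (the same family of recognizers Philipp Wagner's tutorial covers). The factory name varies across OpenCV 3.x builds, so this is an assumption: on 3.2 it may be cv2.face.createLBPHFaceRecognizer() instead of LBPHFaceRecognizer_create(). The image paths, label values and the distance threshold are placeholders.

    import cv2
    import numpy as np

    def train_recognizer(image_paths, labels):
        """Train an LBPH recognizer on cropped, aligned grayscale face images."""
        recognizer = cv2.face.LBPHFaceRecognizer_create()  # or createLBPHFaceRecognizer() on 3.2
        images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in image_paths]
        recognizer.train(images, np.array(labels))
        return recognizer

    def identify(recognizer, detector, image_path, threshold=70.0):
        """Detect the largest face and decide 'person A' vs 'unknown' by LBPH distance."""
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return "no face"
        x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # For LBPH a *lower* distance means a closer match; the threshold must be
        # tuned on your own data to reject untrained faces such as person B.
        return "person A" if distance < threshold else "unknown"

    # Hypothetical usage: 472 images of person A, all labelled 0.
    # detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    # recognizer = train_recognizer(person_a_paths, [0] * len(person_a_paths))
    # print(identify(recognizer, detector, "person_b.jpg"))  # should ideally print "unknown"

With a distance threshold in place, an untrained face like person B's should come back as "unknown" rather than being matched to A.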

Related

Real-time object tracking in OpenCV

I have written an object classification program using BoW clustering and SVM classification algorithms. The program runs successfully. Now that I can classify the objects, I want to track them in real time by drawing a bounding rectangle/circle around them. I have researched and came up with the following ideas.
1) Use homography with the images from the training data directory. The problem with this approach is that the training image should be essentially the same as the test image. Since I'm not detecting specific objects, the test images are closely related to the training images but not necessarily an exact match. In homography we find a known object in a test scene. Please correct me if I am wrong about homography.
2) Use feature tracking. I'm planning to extract the SIFT features computed in the test images that are similar to the training images and then track them by drawing a bounding rectangle/circle. But the issue here is: how do I know which features belong to the object and which belong to the environment? Is there any member function in the SVM class which can return the keypoints or region of interest used to classify the object?
Thank you
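Not an answer to the SVM-internals question, but one hedged way to handle point 2 is to track only the keypoints that lie inside the classified bounding box, so environment features are never tracked. A minimal Lucas-Kanade optical-flow sketch in Python (the video source and the initial box are placeholders assumed to come from your classifier):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                   # hypothetical video source
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no video source")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    x, y, w, h = 100, 100, 200, 200             # placeholder box from the classifier
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255                # restrict features to the object region
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7, mask=mask)

    while True:
        ok, frame = cap.read()
        if not ok or pts is None or len(pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = new_pts[status.flatten() == 1]
        if len(good) > 0:
            # Draw a bounding rectangle around the surviving tracked points.
            xs, ys = good[:, 0, 0], good[:, 0, 1]
            cv2.rectangle(frame, (int(xs.min()), int(ys.min())),
                          (int(xs.max()), int(ys.max())), (0, 255, 0), 2)
        cv2.imshow("track", frame)
        if cv2.waitKey(1) & 0xFF == 27:         # Esc quits
            break
        prev_gray, pts = gray, good.reshape(-1, 1, 2)

The SVM itself will not tell you which keypoints belong to the object; restricting the feature search to the classified region is a workaround, not something the SVM class exposes.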

Is Face Detection needed before doing annotation - Image Processing

I need to annotate frontal (or near-frontal) images using OpenCV. I'm currently going through the OpenCV manual and the book "Mastering OpenCV". This is the first time I'm using OpenCV, so I'm a little bit confused about annotation and face detection.
I need to mark about 25 points on the human face. The required points are on the eyes, eyebrows, mouth, nose, and ears. My question is:
Is it necessary to detect the face first, and then the eyes, eyebrows, mouth, nose, and ears, and only then proceed with annotation? The reason I'm asking is that I'll be doing the annotation manually, so obviously I can see where the face is, and then the eyes, nose, etc. I don't see the point of detecting the face first.
Can someone explain whether face detection is really needed in this case?
According to the book "Mastering OpenCV", I need to do the following steps:
(1) Load the Haar detector for face detection
(2) Convert to grayscale
(3) Shrink the image
(4) Apply histogram equalization
(5) Detect the face
(6) Preprocess the face to detect the eyes, mouth, nose, etc.
(7) Annotate
Face detection allows an algorithm to search an image much faster for features like the eyes and mouth.
If you are annotating the image yourself, then it is of course much quicker just to annotate the wanted features and ignore the unwanted ones.
No, you don't need to annotate landmarks for face detection. OpenCV provides functions to detect faces using already-trained Haar cascade classifiers, shipped with the OpenCV package as XML files; you just need to call them as explained here.
Annotating images with predefined landmarks is used to detect facial expressions and facial details such as head pose estimation; AAM and ASM models are used for these purposes.
Annotating images is also a step in training a model; for that you can use one of the many annotated databases available on the internet, whereas your test images don't need to be annotated.
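For reference, a minimal Python sketch of steps (1) through (5) from the list in the question, showing that detection itself is a few library calls and needs no manual landmarks (the file names and the target width are assumptions):

    import cv2

    face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")  # (1)

    img = cv2.imread("portrait.jpg")            # hypothetical input image
    if img is None:
        raise SystemExit("image not found")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)               # (2) grayscale conversion

    scale = 320.0 / gray.shape[1]                              # (3) shrink for speed
    small = cv2.resize(gray, None, fx=scale, fy=scale)

    equalized = cv2.equalizeHist(small)                        # (4) histogram equalization

    faces = face_cascade.detectMultiScale(equalized, 1.1, 4)   # (5) detect the face
    for (x, y, w, h) in faces:
        # Rescale coordinates back to the original image size.
        x, y, w, h = [int(v / scale) for v in (x, y, w, h)]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)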

Feature detection in profile face images

I am using OpenCV 2.4.2 and C++. I am trying to detect the eyes, nose and mouth of a profile face using Haar cascade XML files. The eyes are detected correctly most of the time using haarcascade_mcs_righteye and haarcascade_mcs_lefteye. However, the nose and mouth cascades mostly fail on profile faces [as shown below]. I understand that those were made for frontal faces, but is there any other "not-so-complicated" open-source method I can use to detect the tip of the nose and the corner of the mouth in profile images? Basically, I need their coordinates, but first I need to detect them. Anybody, please?
Recently, Zhu and Ramanan (CVPR 2012) introduced "Face detection, pose estimation and landmark localization in the wild", which is by far the best I've seen. OpenCV is great by all means, but it's not state of the art for every application out there nowadays.
I hope this helps.
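As a simpler, hedged fallback (not part of the Zhu and Ramanan approach): detect the profile face first with OpenCV's profile-face cascade and run the nose cascade only inside the middle band of that ROI, which sometimes cuts down false positives, although the mcs cascades remain unreliable on profiles, as the question notes. File names and sub-window fractions are assumptions, and the sketch is in Python rather than C++:

    import cv2

    profile = cv2.CascadeClassifier("haarcascade_profileface.xml")
    nose = cv2.CascadeClassifier("haarcascade_mcs_nose.xml")

    img = cv2.imread("profile.jpg")             # hypothetical input image
    if img is None:
        raise SystemExit("image not found")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in profile.detectMultiScale(gray, 1.1, 4):
        # Restrict the nose search to the middle band of the detected face.
        roi = gray[y + h // 3 : y + 2 * h // 3, x : x + w]
        for (nx, ny, nw, nh) in nose.detectMultiScale(roi, 1.1, 4):
            # Approximate nose-tip coordinate back in full-image coordinates.
            tip = (x + nx + nw // 2, y + h // 3 + ny + nh // 2)
            print("approximate nose tip: (%d, %d)" % tip)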

Detect and extract face from an image

I have been trying to do the following -
When a user uploads an Image in my web app, I'd like to detect his/her face in it and extract face (from forehead to chin and cheek to cheek) from it.
I tried OpenCV/C++ face detection using a Haar cascade, but the problem is that it only gives an approximate location of the face, so either background from the image ends up inside the ROI or the complete face doesn't fit inside the ROI.
I also want to detect the eyes inside the face, and with the above technique the eye detection isn't very accurate.
I've read up on a new technique called the Active Appearance Model (AAM). The blogs I read about it suggest that this is exactly what I want, but I am lost on how to implement it.
My queries are -
Is using AAM a good idea for face detection and face feature detection?
Are there any other techniques for doing the same?
Any help on any of these is much appreciated.
Thanks !
As you noticed, OpenCV's implementation of face detection is not state of the art. It is a very good and robust implementation, but you can do better.
Recently, Zhu and Ramanan (CVPR 2012) introduced "Face detection, pose estimation and landmark localization in the wild", which is considered one of the leading algorithms for face detection in recent years.
Their algorithm is capable of detecting faces in both frontal and profile views AND identifying keypoints on the detected face, such as the eyes, nose and mouth.
The authors were kind enough to publish their code along with learned models. It is a Matlab implementation, but the main computations are done in C++, so it should not be too difficult to make a standalone C++ implementation of their method.
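As a simpler Haar-based baseline for the original crop-and-detect-eyes request (not the Zhu and Ramanan method), a hedged Python sketch: detect the face, tighten the box a little so less background ends up in the crop, and search for eyes only in the upper half of the face ROI. The margin fractions and file names are assumptions:

    import cv2

    face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

    img = cv2.imread("upload.jpg")              # hypothetical uploaded image
    if img is None:
        raise SystemExit("image not found")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for i, (x, y, w, h) in enumerate(face_cascade.detectMultiScale(gray, 1.1, 5)):
        # Shrink the box by ~10% on each side to cut background from the crop.
        mx, my = int(0.1 * w), int(0.1 * h)
        face_crop = img[y + my : y + h - my, x + mx : x + w - mx]
        cv2.imwrite("face_crop_%d.jpg" % i, face_crop)

        # Search for eyes only in the upper half of the detected face.
        upper_face = gray[y : y + h // 2, x : x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(upper_face, 1.1, 5):
            cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                          (0, 0, 255), 2)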

face tracking or "dynamic recognition"

What is the best approach for face detection/tracking considering the following scenario:
When a person enters the scene/frame, they should be detected and recognized in every subsequent frame until they leave the scene.
It should also work for multiple people at once.
I have experience with Viola-Jones detection and Fisherfaces recognition, but I've only used Fisherfaces recognition on a previously prepared training set, and now I need something that works for any user who enters the scene.
I am also interested in different solutions.
I used OpenCV face detection for multiple faces together with the Rekognition API (http://rekognition.com), pushed the detected faces to it, and retrained the dataset frequently. Lightweight on our side, but I am sure there are more robust solutions for this.
Have you tried VideoSurveillance? It is also known as the OpenCV blob tracker.
It's a motion-based tracker with across-frame data association (1). If you want to replace motion with face detection, you must adjust the code by replacing the foreground mask with the detection responses. This approach is called track-by-detect in the literature.
(1) "Appearance Models for Occlusion Handling", Andrew Senior et al.