I need to annotate frontal (or near-frontal) images using OpenCV. I'm currently going through the OpenCV manual and the book "Mastering OpenCV". This is the first time I'm using OpenCV, and because of that I'm a little confused about annotation and face detection.
I need to mark about 25 points on the human face. The required points are on the eyes, eyebrows, mouth, nose, and ears. My question is:
Is it necessary to detect the face first, and then the eyes, eyebrows, mouth, nose, and ears? Is it the case that only then can I proceed with annotation? The reason I'm asking is that I'll be doing the annotation manually, so obviously I can see where the face is, and then the eyes, nose, etc. I don't see the point of detecting the face first.
Can someone explain whether face detection is really needed in this case?
According to the book "Mastering OpenCV", I need to do the following steps:
(1) Loading a Haar detector for face detection
(2) Grayscale colour conversion
(3) Shrinking the image
(4) Histogram equalization
(5) Detecting the face
(6) Face preprocessing to detect eyes, mouth, nose, etc.
(7) Annotation
Face detection allows a computer algorithm to search an image much faster for features like the eyes and mouth, because it narrows the search down to the detected face region instead of the whole image.
If you are annotating the image yourself then it is of course much quicker just to annotate the wanted features and ignore unwanted ones.
No, you don't need to annotate landmarks for face detection. OpenCV provides functions to detect faces using already-trained models (Haar cascade classifiers), shipped in the OpenCV package as XML files; you just need to call them as explained here.
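For illustration, here is a rough Python sketch of that detection pipeline (it also walks through steps 1-5 from the book quoted above). The cascade file is the one shipped with OpenCV; the image path and the 0.5 shrink factor are placeholder assumptions:

    import cv2

    # (1) Load the pre-trained Haar detector shipped with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("face.jpg")  # placeholder input image

    # (2) Convert to grayscale; the cascade works on single-channel images.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # (3) Shrink the image to speed up the sliding-window search.
    small = cv2.resize(gray, (0, 0), fx=0.5, fy=0.5)

    # (4) Equalize the histogram to reduce the effect of lighting.
    equalized = cv2.equalizeHist(small)

    # (5) Detect faces; each hit is an (x, y, w, h) rectangle in the small image.
    faces = face_cascade.detectMultiScale(equalized, scaleFactor=1.1, minNeighbors=4)

    for (x, y, w, h) in faces:
        # Scale coordinates back up to the original image size.
        cv2.rectangle(img, (2 * x, 2 * y), (2 * (x + w), 2 * (y + h)),
                      (0, 255, 0), 2)

    cv2.imwrite("detected.jpg", img)

Note that cv2.data.haarcascades is only available in recent opencv-python builds; with an older install you would pass the path to the XML file directly.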
Annotating images with predefined landmarks is used to detect facial expressions and other facial details, such as estimating the head pose in space; AAM and ASM models are used for these purposes.
Annotating images is also a step in training a model; for that you can use one of the many annotated databases available on the internet, whereas your test images don't need to be annotated.
I wish to stitch two or more images using OpenCV and C++. The images have regions of overlap, but they are not being detected. I tried using a homography-based detector. Can someone please suggest what other methods I should use? Also, I wish to use the ORB algorithm, not SIFT or SURF.
The images can be found at:
https://drive.google.com/open?id=133Nbo46bgwt7Q4IT2RDuPVR67TX9xG6F
This is a very common problem. Images like these actually do not have much in common: the overlap region is not rich in features. What you can do is dig into the OpenCV stitcher code; it uses a confidence factor for feature matching, and you can play with that confidence factor to get matches in this case. But this will only work if your feature detector is able to detect some features in the overlapping region.
You can also look at this post:
Related Question
It might be helpful for you.
"OpenCV stitching code"
This is the full pipeline of the OpenCV stitching code. You can see that there are a lot of parameters you can change to make your code give a good stitching result. I would also suggest using a small image (640 x 480) for the feature detection step; using small images works better than using very large ones.
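Since the question asks for ORB rather than SIFT or SURF, here is a rough Python sketch of the feature-matching step on its own, with the 640 x 480 resize suggested above; the file names are placeholders:

    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg")
    img2 = cv2.imread("right.jpg")

    # Work on small grayscale images for the feature-detection step.
    gray1 = cv2.cvtColor(cv2.resize(img1, (640, 480)), cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(cv2.resize(img2, (640, 480)), cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    # Hamming distance is the right metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Estimate a homography from the best matches; RANSAC discards outliers.
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inliers:", int(mask.sum()) if mask is not None else 0)

If the inlier count stays near zero, the overlap region simply isn't producing enough ORB features, and lowering the stitcher's confidence factor won't help.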
I trained 472 unique images of a person A for face recognition using "haarcascade_frontalface_default.xml".
When I try to detect the face of the same person A in the same images I trained on, I get 20% to 80% confidence; that's fine for me.
But I am also getting 20% to 80% confidence for a person B whose images I did not include in training. Why is this happening for person B when I do face detection?
I am using Python 2.7 and OpenCV 3.2.0-dev.
This is because Haar cascade detection is used for detecting objects that share the same set of features.
Even though face B is different from face A, they share the same features: two eyes, a nose, and a mouth. Therefore the confidence for A and B is similar. Using Haar cascades alone is not enough for the task of distinguishing different faces.
I recommend reading the original paper by Viola and Jones.
I guess in your problem you are not actually referring to detection but to recognition; you must know the difference between these two things:
1. Detection does not distinguish between persons; it just detects the facial shape of a person based on the previously trained Haar cascade.
2. Recognition is the case where you first detect a person, then try to distinguish that person against your cropped and aligned database of pictures. I suggest you follow the Philipp Wagner tutorial for that matter.
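To make the split concrete, here is a minimal Python sketch of both stages; it assumes the opencv-contrib package (which provides cv2.face) and placeholder image files of two known people:

    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(path):
        # Detection: find *a* face, without knowing whose it is.
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        x, y, w, h = face_cascade.detectMultiScale(gray, 1.1, 4)[0]
        # Crop and normalize the size, as a stand-in for proper alignment.
        return cv2.resize(gray[y:y + h, x:x + w], (100, 100))

    # Recognition: train a model on labeled crops, then predict an identity.
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train([detect_face("personA.jpg"), detect_face("personB.jpg")],
                     np.array([0, 1]))  # label 0 = person A, 1 = person B

    label, distance = recognizer.predict(detect_face("unknown.jpg"))
    print("predicted label:", label, "distance:", distance)

Note that the "confidence" reported by the recognizer is actually a distance, so lower values mean a better match.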
I am doing a project on face recognition from CCTV cameras, and I want to recognize each individual face. I think the eigenface method is best for face recognition, but is there any problem when using the eigenface method on moving subjects? Can we recognize individuals perfectly? Since the input is not a still image, I am really confused about selecting a method.
Please help me determine whether this method is OK; otherwise, please suggest a better alternative.
Short answer: typically, the computer vision techniques used in image analysis can be used in video analysis too. Videos just give you more information (especially temporal information). For example, you could do face recognition using multiple frames, with object tracking between each frame. Associating multiple frames typically gives you higher accuracy.
IMO, the most difficult problems are that you're more likely to face viewing-angle, calibration, and lighting-condition problems, for which you will need an accurate face detection technique, or more training data, in order to recognize faces under varying viewing angles and lighting conditions. An eigenface-based approach relies on accurate positions of the face, eyes, and so on; otherwise, you are likely to mix different features in the same vector. But again, this problem also exists in face recognition on still images.
To sum up, video content only gives you more information. If you don't really want to associate frames and consider temporal information, video is just a collection of still images :)
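As a sketch of that last point, here is how per-frame eigenface recognition might look in Python; the model file, video path, and 100 x 100 crop size are placeholder assumptions, and cv2.face requires the opencv-contrib package:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.read("eigenfaces.yml")  # hypothetical pre-trained model file

    cap = cv2.VideoCapture("cctv.mp4")  # placeholder video path
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
            # Eigenfaces require every input to match the training crop size.
            crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
            label, distance = recognizer.predict(crop)
            print("label", label, "distance", distance)
    cap.release()

Tracking the detected face across frames and voting over the per-frame labels is the simplest way to exploit the temporal information mentioned above.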
I have been trying to do the following:
When a user uploads an Image in my web app, I'd like to detect his/her face in it and extract face (from forehead to chin and cheek to cheek) from it.
I tried OpenCV/C++ face detection using a Haar cascade, but the problem is that it gives only an approximate location of the face, so either background from the image ends up inside the ROI or the complete face doesn't fit in the ROI.
I also want to detect the eyes inside the face, and with the above technique the eye detection isn't very accurate.
I've read up on a new technique called Active Appearance Model (AAM). The blogs where I read up about this show that this is exactly what I want but I am lost on how to implement this.
My queries are:
Is using AAM a good idea for face detection and facial feature detection?
Are there any other techniques for doing the same?
Any help on any of these is much appreciated.
Thanks!
As you noticed, OpenCV's implementation of face detection is not state-of-the-art. It is a very good and robust implementation, but you can do better.
Recently, Zhu and Ramanan (CVPR 2012) introduced "Face detection, pose estimation and landmark localization in the wild", which is considered one of the leading algorithms for face detection in recent years.
Their algorithm is capable of detecting faces in both frontal and profile views and of identifying keypoints on the detected face, such as the eyes, nose, and mouth.
The authors were kind enough to publish their code along with learned models. It is a Matlab implementation, but the main computations are done in C++, so it should not be too difficult to make a standalone C++ implementation of their method.
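In the meantime, the usual way to make the Haar-based approach from the question less noisy is to search for eyes only inside the detected face rectangle. A minimal Python sketch, assuming the standard cascade files shipped with OpenCV and a placeholder image path:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Restrict the eye search to the upper half of the face ROI.
        roi = gray[y:y + h // 2, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                          (0, 255, 0), 2)

    cv2.imwrite("faces_and_eyes.jpg", img)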
I am trying to find a way to determine the correctness of edge detection. I want it to have little markers showing where the program determines the edges to be with something like x's or dots or lines. I am looking for something that does this: http://en.wikipedia.org/wiki/File:Corner.png
OpenCV has an edge detector and is usable in C++. As it happens, the image you linked to is used in the article describing (one of) the built-in algorithms.
The image you link to isn't edge detection.
Edge detection is normally just finding abrupt brightness changes in a greyscale image; you do this with differentiation, e.g. the Sobel operator.
Finding corners specifically is done either with SIFT or with something like the Laplacian of Gaussian.
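For concreteness, a small Python sketch of gradient-based edge detection as just described: differentiate the grayscale image with the Sobel operator and threshold the gradient magnitude (the input path and the threshold value are placeholders):

    import cv2
    import numpy as np

    gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

    # Horizontal and vertical derivatives pick up abrupt brightness changes.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    edges = (magnitude > 100).astype(np.uint8) * 255  # simple fixed threshold

    cv2.imwrite("edges.png", edges)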
That image is not the result of edge detection operations! It's corner detection. They have entirely different purposes:
Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image matching, tracking, image mosaicing, panorama stitching, 3D modelling and object recognition. Corner detection overlaps with the topic of interest point detection.
OpenCV has corner detection algorithms. The latter link includes a source code example for VS 2008. You can also check this link for another example. Google can provide much more.
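To get the little markers the question asks for, here is a minimal Python sketch that draws a dot on each detected corner; goodFeaturesToTrack wraps the Shi-Tomasi corner measure, and the file path and parameter values are placeholder assumptions:

    import cv2
    import numpy as np

    img = cv2.imread("input.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find up to 50 strong corners, at least 10 px apart.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)

    for corner in corners:
        x, y = corner.ravel()
        cv2.circle(img, (int(x), int(y)), 4, (0, 0, 255), -1)  # red dot marker

    cv2.imwrite("corners.png", img)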