How to detect lip edges using OpenCV in iOS? - c++

I'm new to OpenCV, so please help me out. I want to detect the edges of the lips using OpenCV. Can you give me some links and a solution?
I've tried the usual procedure of detecting the face and mouth with OpenCV, but the accuracy is not there. I used the "haarcascade_mcs_mouth" cascade to detect the mouth in a picture, but the result was not good. I have also heard about the AAM method, but could not find any documents on it. Please help me.

Lip recognition is a problem in computer vision that is not completely solved. The Haar-like classifiers you have been using (included in OpenCV) perform well for face detection, but better techniques have been developed for lip recognition. You will have to build different algorithms and choose the best one for your purpose. The fact that you are developing for iOS makes the task harder because of additional constraints (memory footprint, CPU, etc.). I have condensed a brief overview of the state of the art in lip recognition so you can research further:
Methods for recognising lips can be classified into three broad categories:
Image based techniques: These are based on the hypothesis that skin and lips have different colors. The paper [2] is an example of this kind of approach, applied to sign language recognition. Color clustering has also been explored [3]; it assumes that there are two pixel classes in the image, skin and lips. This method is not appropriate if the person has a beard or is showing his/her teeth, for example.
Model based techniques: These methods are more robust than the previous ones because they use prior information about the lip shape. However, they are computationally more expensive, so they may not be suitable for implementation on mobile devices. AAM (Active Appearance Models) belong to this group and learn the shape of lips from manually annotated data. In the "External links" section of the Wikipedia article you can find some open-source implementations and libraries that can be ported to C++/OpenCV.
Hybrid techniques: These methods are a combination of image based methods and model based methods. Typically, a color based technique is first applied to the image in order to estimate the lip region position and size; then, a model based technique (like AAM) is applied to the region of interest to extract lip contours. [4] is an example of this technique.
[2] U. Canzler and T. Dziurzyk, "Extraction of Non-Manual Features for Video-based Sign Language Recognition." In Proceedings of MVA, 2002, 318-321.
[3] Leung, Shu-Hung, Shi-Lin Wang, and Wing-Hong Lau. "Lip image segmentation using fuzzy clustering incorporating an elliptic shape function." Image Processing, IEEE Transactions on 13.1 (2004): 51-62.
[4] Bouvier, Christian, P-Y. Coulon, and Xavier Maldague. "Unsupervised lips segmentation based on ROI optimisation and parametric model." Image Processing, 2007. ICIP 2007. IEEE International Conference on. Vol. 4. IEEE, 2007.
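To make the image-based idea concrete, here is a minimal sketch in Python/NumPy rather than the C++ API, for readability. The pseudo-hue r/(r+g) is a common cue because lip pixels tend to be redder than skin; the 0.6 threshold is an illustrative assumption you would have to tune on real data:

```python
import numpy as np

def pseudo_hue(rgb):
    """Pseudo-hue r/(r+g): typically higher for lip pixels than for skin."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    return r / (r + g + 1e-9)

def segment_lips(rgb, threshold=0.6):
    """Binary lip mask by thresholding pseudo-hue (threshold is illustrative)."""
    return pseudo_hue(rgb) > threshold

# Toy 1x2 image: one reddish "lip" pixel, one skin-toned pixel.
img = np.array([[[200, 80, 90], [200, 160, 140]]], dtype=np.uint8)
mask = segment_lips(img)   # mask[0] -> [True, False]
```

As the answer notes, a threshold like this breaks down with beards or visible teeth, which is why the model-based and hybrid methods above exist.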

How to extract LBP features from a hand contour using OpenCV C++

I am currently working on a hand recognition system. I have been able to detect the hand and draw a contour for it. Now I have to extract features from the hand region. What is the best feature extraction method that I can use?
I was thinking of using Local Binary Patterns, but since I am new to computer vision I don't know how to use them.
Perhaps you should look at the histogram of oriented gradients (HOG), which can be considered a more general version of LBP. You can take multiple images of hands; by extracting HOG features from each image and using an SVM or neural network classifier, you can learn a statistical model of hand poses. This will help in recognizing an unseen hand. Also look at the current literature on deep learning.
A C++ implementation of HOG is available in the VLFeat library [1], which can be called from OpenCV. HOG can also be computed with OpenCV [2].
[1] http://www.vlfeat.org/overview/hog.html
[2] http://goo.gl/8jTetR
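Since you asked about LBP specifically, here is a minimal sketch of the basic 3x3 LBP operator in Python/NumPy for clarity (a simplification of the real descriptor: no uniform patterns, no circular interpolation, borders skipped):

```python
import numpy as np

def lbp_basic(gray):
    """Basic 3x3 LBP: compare each interior pixel to its 8 neighbours
    and pack the comparison bits into one byte per pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; bit i set if neighbour >= centre.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray):
    """256-bin normalized LBP histogram, usable as a feature vector
    for an SVM or other classifier."""
    codes = lbp_basic(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1)
```

In practice you would compute such histograms over a grid of cells inside the hand region and concatenate them, just as with HOG.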

Surface approximation based on RBF

I am looking for a way of approximating a surface based on a set of 3D data points. For this purpose I would like to use a method based on radial basis functions, but I cannot find a free implementation in C++.
I looked in ITK, VTK and OpenCV but did not find anything.
Does anyone know of a free implementation of such an algorithm?
Any suggestion about reconstructing a surface from a set of 3D data points is also more than welcome! :)
3D surface reconstruction can be challenging. I would first recommend taking a look at PCL. The Point Cloud Library has grown into a nice set of tools for 3D point management and interpretation, and its license and API sound compatible with your needs. The surface reconstruction features of the library appear to be most applicable. In fact, RBF reconstruction is supported.
If PCL doesn't work, there are other options:
MeshLab,
This SO post provides a nice summary, and
of course, Wikipedia provides some links
Finally, you might search CiteSeerX, Google Scholar, etc. for papers like this one. As an example, a search for "3D Surface Reconstruction" at CiteSeerX yields many hits. RBF-based reconstruction is just one of many methods: is your application truly limited to radial basis functions? If not, there are many choices (e.g. the Ball Pivoting Algorithm). See this survey paper for some comparisons.
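If you end up rolling your own, the core of RBF interpolation for scattered data is just a kernel matrix and a linear solve. Here is a minimal sketch in Python/NumPy with a Gaussian kernel; a production implementation would add a polynomial term, regularization, and a smarter solver:

```python
import numpy as np

def rbf_fit(points, values, eps=1.0):
    """Fit weights w so that sum_j w_j * phi(|x - x_j|) interpolates
    the sample values exactly at the sample points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)          # Gaussian radial basis function
    return np.linalg.solve(phi, values)

def rbf_eval(points, weights, query, eps=1.0):
    """Evaluate the fitted RBF surface at arbitrary query points."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights

# Scattered (x, y) samples of the height field z = x + y.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = pts[:, 0] + pts[:, 1]
w = rbf_fit(pts, z)
z_hat = rbf_eval(pts, w, pts)   # reproduces z at the sample points
```

This is for a height field z = f(x, y); full 3D reconstruction (as in PCL) fits an implicit function instead, but the linear algebra is the same shape.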

Generate an image that can be most easily detected by Computer Vision algorithms

I'm working on a small side project related to computer vision, mostly to play around with OpenCV. It led me to an interesting question:
Using feature detection to find known objects in an image isn't always easy: objects can be hard to find, especially if the features of the target object aren't distinctive.
But if I could choose ahead of time what it is I'm looking for, then in theory I could generate for myself an optimal image for detection. Any quality that makes feature detection hard would be absent, and all the qualities that make it easy would exist.
I suspect this sort of thought went into things like QR codes, but with the limitations that they wanted QR codes to be simple, and small.
So my question for you: How would you generate an optimal image for later recognition by a camera? What if you already know that certain problems like skew, or partial obscuring would occur?
Thanks very much
I think you need something like AR markers.
Take a look at the ARToolKit, ARToolKitPlus or ArUco libraries; they have marker generators and detectors.
And a paper about marker generation: http://www.uco.es/investiga/grupos/ava/sites/default/files/GarridoJurado2014.pdf
If you plan to use feature detection, the marker should be tailored to the feature detector used. Common practice in detector design is a good response to "corners" or regions with high x,y gradients. You should also note the scale of the target.
The simplest detection can be performed with blobs. It can be faster and more robust than feature points. For example, you can detect circular or rectangular blobs.
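For illustration, blob extraction from a binary mask boils down to connected-component labelling. Here is a minimal flood-fill sketch in Python/NumPy; in practice you would use OpenCV's SimpleBlobDetector or connectedComponents:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labelling of a binary mask via BFS flood fill.
    Returns (labels, count); labels[y, x] == 0 means background."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                      # already assigned to a blob
        current += 1
        labels[y, x] = current
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
labels, n = label_blobs(mask)   # n == 2
```

Once you have the components, per-blob area and bounding-box aspect ratio are enough to filter for circular vs. rectangular targets.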
The images/targets you choose should depend on the distance you want to see your markers from, the viewing conditions/backgrounds you typically use, and the camera resolution/noise. Under moderate perspective from a longer distance, a color target is pretty unique; see this:
https://surf-it.soe.ucsc.edu/sites/default/files/velado_report.pdf
At close distances, various bar/QR codes may be a good choice. Other than that, any flat textured object will be easy to track using a homography, as opposed to 3D objects.
http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html
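The homography estimation behind that tutorial can be sketched compactly. The following Python/NumPy DLT recovers H from four point correspondences; OpenCV's findHomography adds coordinate normalization and RANSAC on top of this idea:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src from >= 4 point correspondences (no normalization,
    no RANSAC -- a bare-bones sketch)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=np.float64)
    _, _, vt = np.linalg.svd(A)           # null-space vector = flattened H
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to Nx2 points (homogeneous divide included)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# A unit square seen under some perspective skew.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
skewed = np.array([[0.0, 0.0], [2.0, 0.1], [2.1, 1.2], [0.2, 1.1]])
H = dlt_homography(square, skewed)
```

With exactly four correspondences the fit is exact; with more, the SVD gives the least-squares solution, and RANSAC handles outlier matches.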
Even different views of 3d objects can be quickly learned and tracked by such systems as Predator:
https://www.youtube.com/watch?v=1GhNXHCQGsM
Then comes the whole field of hardware: structured light, synchronized markers, etc. Kinect, for example, uses a predefined pattern projected on the surface to do stereo. This means it recognizes and matches millions of micro-patterns per second, creating a depth map from the matched correspondences. Note that one camera sees the pattern while another device, a projector, generates it, working as a virtual camera; see
http://article.wn.com/view/2013/11/17/Apple_to_buy_PrimeSense_technology_from_the_360s_Kinect/
The quickest way to demonstrate good tracking of a standard checkerboard pattern is to use the PnP (solvePnP) functionality of OpenCV:
http://www.juergenwiki.de/work/wiki/lib/exe/fetch.php?media=public:cameracalibration_detecting_fieldcorners_of_a_chessboard.gif
This can literally be done by calling just two functions:
found = findChessboardCorners(src, chessboardSize, corners, camFlags);
drawChessboardCorners(dst, chessboardSize, corners, found);
To sum up, your question is very broad and there are multiple answers and solutions. Formulate your viewing conditions, camera specs, backgrounds, distances, and the amount of motion and perspective change you expect indoors vs. outdoors. There is no such thing as a general average case in computer vision!

Object Annotation in images with OpenCV

I am trying to develop an automatic(or semi-automatic) image annotator for my final year project with OpenCV. I have been studying many OpenCV resources and have come across cascade classification for training and detection purposes. I understood that part, and also tried the Face Detection tutorial provided with OpenCV. So, now I know how to train and detect objects.
However, I still cannot understand how I can annotate the objects present in the image.
For example, the system will show that this is an object, but I want the system to show that it is a ball. How can I accomplish that?
Thanks in advance.
One binary classifier (detector) can separate objects into two classes:
positive - the object type the classifier was trained for,
and negative - all others.
If you need to detect several distinct classes, you should use one detector per class, or you can train a multiclass classifier (a "one vs. all" ensemble of classifiers, for example), but that usually works more slowly and with less accuracy (because a detector is better at searching for similar objects). You can also take a look at convolutional networks (by Yann LeCun).
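To make the "one vs. all" idea concrete, here is a tiny Python sketch; the class names and confidence scores are hypothetical stand-ins for the outputs of real binary detectors:

```python
import numpy as np

CLASSES = ["ball", "car", "face"]   # one binary detector trained per class

def classify_one_vs_all(scores, reject_threshold=0.5):
    """Pick the class whose detector scored highest; report "unknown"
    when every detector is below the rejection threshold."""
    scores = np.asarray(scores, dtype=np.float64)
    best = int(np.argmax(scores))
    if scores[best] < reject_threshold:
        return "unknown"
    return CLASSES[best]

# Hypothetical detector confidences for one detected image region.
print(classify_one_vs_all([0.9, 0.2, 0.4]))   # ball
print(classify_one_vs_all([0.1, 0.2, 0.3]))   # unknown
```

The annotation step is then just attaching the winning class name to the detection's bounding box.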
This is a very hard task. I suggest simplifying it by using the latent SVM detector and limiting yourself to the models it supplies:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html

A Color/Shape detection mechanism for Augmented Reality

Is there a very basic color/shape detection mechanism through which one could detect a specific color or shape in a webcam feed? I want to use the color or shape as a symbolic marker for an AR application.
The ideal case would be NFT (natural feature tracking), but I am not much of a coder and have no experience with OpenCV (I have read a lot about it in previous discussions here). So far I have worked only with the SLARToolkit, and that offers only basic black-and-white marker detection.
And the more easily usable NFT libraries are, well, not freeware :/
Any guidance on integrating the above-mentioned detection routines in a .NET/Flash environment would be of great help.
Color detection is very easy: take your video stream images and convert them to binary images by treating each pixel's RGB value as a vector (e.g. RGB = [0, 255, 0] = green) and marking pixels whose vectors lie within a given distance of the target as positive hits. This is one of the easiest forms of computer vision, and a couple of early CV-based PS2 games involved detecting brightly colored props.
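That thresholding step can be written in a few lines; this Python/NumPy sketch stands in for the .NET/Flash code, and the tolerance of 80 is an arbitrary value you would tune to your lighting:

```python
import numpy as np

def color_mask(image, target, tolerance=80.0):
    """True where a pixel's RGB vector lies within `tolerance`
    (Euclidean distance) of the target color."""
    diff = image.astype(np.float64) - np.asarray(target, dtype=np.float64)
    return np.linalg.norm(diff, axis=-1) <= tolerance

# Toy frame: one bright green "prop" pixel, one grey background pixel.
frame = np.array([[[10, 240, 20], [120, 120, 120]]], dtype=np.uint8)
mask = color_mask(frame, target=[0, 255, 0])   # mask[0] -> [True, False]
```

For robustness under changing lighting, the same test is usually done in HSV space on hue alone rather than on raw RGB.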
This is my favorite paper on shape recognition - if you want to detect simple 2D outlines on flat surfaces, this is a great technique.
I'm neither a .NET nor a Flash programmer, so I can't offer any help there.