I am developing an eye-tracking application in Emgu CV. To track the eyes I need to detect the iris accurately, so I used Hough circles, but in some cases this fails because the iris is not a perfect circle. So I decided to convert the eye image to binary and detect the iris from that.
To convert it to binary I used:
    grayframeright_1 = grayframeright_1.ThresholdBinary(new Gray(threshold_value), new Gray(220));
and the result is a binary image of the eye.
Now how can I detect the iris in this binary image? Can I run a blob detector to detect the iris?
Please help me figure this out; any help will be highly appreciated, as I am running up against a deadline. A code sample would be useful.
Thanks in advance.
You can try erosion. I've used it in an image processing class at university to find the visual center of airplanes in the sky, and it worked surprisingly well.
Erosion is a fairly simple operator used in broader practice, for example in blob detection, which you already mentioned.
Erosion removes the border pixels of an irregular shape, so just before the shape vanishes completely, only a few pixels are left. The geometrical center of those pixels is c, the visual center of the irregular shape. Starting from c, draw the largest circle of radius r that is completely inscribed in the irregular shape. The circle at c with radius r is an approximation of the iris. Or at least so the story goes.
When I say erosion I mean this: example of erosion
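If it helps, here is a minimal sketch of that idea in C++ with OpenCV; the input filename, the stopping threshold of ten pixels, and the 3x3 kernel are all assumptions:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Binary iris mask, white shape on black background (assumed input).
        cv::Mat mask = cv::imread("eye_binary.png", cv::IMREAD_GRAYSCALE);
        if (mask.empty() || cv::countNonZero(mask) == 0) return 1;

        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
        cv::Mat eroded = mask.clone(), prev = mask.clone();

        // Erode until the shape is just about to vanish.
        while (cv::countNonZero(eroded) > 10)
        {
            prev = eroded.clone();
            cv::erode(eroded, eroded, kernel);
        }
        if (cv::countNonZero(eroded) == 0)
            eroded = prev; // back up one step if everything was eroded away

        // Geometrical center c of the surviving pixels.
        cv::Moments m = cv::moments(eroded, true);
        cv::Point c(int(m.m10 / m.m00), int(m.m01 / m.m00));

        // Largest inscribed radius r at c: the distance transform gives the
        // distance from each foreground pixel to the nearest background pixel.
        cv::Mat dist;
        cv::distanceTransform(mask, dist, cv::DIST_L2, 5);
        float r = dist.at<float>(c);

        cv::Mat vis;
        cv::cvtColor(mask, vis, cv::COLOR_GRAY2BGR);
        cv::circle(vis, c, int(r), cv::Scalar(0, 0, 255), 2); // iris estimate
        cv::imwrite("iris_estimate.png", vis);
        return 0;
    }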
This was just my personal idea based on university work; I've never done this in industry.
Maybe you should also look at a more serious approach to the problem, one based on wavelets rather than erosion: Iris recognition
I'm very curious. If you try this could you please share your results/findings? A quick comment would suffice!
I am currently working on a robotics project: a robot must grab a cube using a Kinect camera that handles cube detection and computes its coordinates.
I am new to computer vision. I first worked on a static image of a square in order to get a basic understanding. Using C++ and OpenCV, I managed to get the corners (and their x, y pixel coordinates) of the square using smoothing (to remove noise), edge detection (the Canny function), line detection (the Hough transform), and line intersection (mathematical calculation) on a simplified picture (uniform background).
By adjusting some thresholds I can achieve corner detection, assuming that I have only one square and no line features in the background.
Now my question: do you have any directions/recommendations/advice/literature about cube recognition algorithms?
What I have found so far involves shape detection combined with texture detection and/or a learning stage. Moreover, in their applications they often use GPU/parallelisation computing, which I don't have...
The Kinect also provides a depth camera, which gives the distance of each pixel from the camera. Maybe I can use this to bypass "complicated" image processing?
Thanks in advance.
OpenCV 3.0 with contrib includes the surface_matching module:
Cameras and similar devices with the capability of sensation of 3D structure are becoming more common. Thus, using depth and intensity information for matching 3D objects (or parts) is of crucial importance for computer vision. Applications range from industrial control to guiding everyday actions for visually impaired people. The task in recognition and pose estimation in range images aims to identify and localize a queried 3D free-form object by matching it to the acquired database.
http://docs.opencv.org/3.0.0/d9/d25/group__surface__matching.html
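For reference, a minimal sketch along the lines of the module's ppf_load_match sample; the file names and parameter values are placeholders, and both point clouds are assumed to carry normals:

    #include <opencv2/surface_matching.hpp>
    #include <opencv2/surface_matching/ppf_helpers.hpp>
    #include <vector>

    int main()
    {
        using namespace cv::ppf_match_3d;

        // Cube model and Kinect scene as PLY point clouds with normals.
        cv::Mat model = loadPLYSimple("cube_model.ply", 1);
        cv::Mat scene = loadPLYSimple("kinect_scene.ply", 1);

        // Train a Point Pair Feature detector on the model...
        PPF3DDetector detector(0.025, 0.05); // relative sampling / distance step
        detector.trainModel(model);

        // ...then match it against the scene and report the best pose.
        std::vector<Pose3DPtr> results;
        detector.match(scene, results, 1.0 / 40.0, 0.05);
        if (!results.empty())
            results[0]->printPose();
        return 0;
    }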
I am using OpenCV to detect the shapes and sizes of materials (like discs, washers, nuts, and bolts of different sizes) that will be carried on a running belt. What functions would be best to distinguish between them?
I am planning to use cvFindContours (to find the shapes) and cvArcLength & cvContourArea to get their perimeter and area.
Any better approach ?
This is a simple approach to shape matching:
Convert to grayscale.
Smooth the image.
Apply some morphological operations (if necessary).
Detect edges.
Find contours (the same function you mentioned). The contour function is hierarchical, hence segmenting the required (in most cases outer) contour(s) should be easy. Discs and washers can be distinguished by the hole in the contour hierarchy.
Use ApproxPolyDP to reduce your contour to a rough regular shape. You might be able to distinguish the shapes based on the vertex count of the approximated contour.
Use moments to distinguish the shapes if ApproxPolyDP is not sufficient.
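A rough sketch of this pipeline with the C++ API (the old cv* functions you mentioned have direct equivalents); the filename and all thresholds are placeholders:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat img = cv::imread("belt.png");
        cv::Mat gray, edges;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);     // 1. grayscale
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0); // 2. smoothing
        cv::Canny(gray, edges, 50, 150);                 // 4. edge detection

        std::vector<std::vector<cv::Point>> contours;
        std::vector<cv::Vec4i> hierarchy;
        cv::findContours(edges, contours, hierarchy,
                         cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE); // 5. contours

        for (size_t i = 0; i < contours.size(); ++i)
        {
            if (hierarchy[i][3] != -1) continue; // outer contours only

            std::vector<cv::Point> approx;
            double eps = 0.02 * cv::arcLength(contours[i], true);
            cv::approxPolyDP(contours[i], approx, eps, true); // 6. polygon fit

            bool hasHole = hierarchy[i][2] != -1; // child contour => washer/nut
            std::cout << "contour " << i
                      << ": vertices=" << approx.size()
                      << " area=" << cv::contourArea(contours[i])
                      << " hole=" << (hasHole ? "yes" : "no") << std::endl;
        }
        return 0;
    }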
It works for most cases. Always provide sample images to help us assess the complexity of the problem :D.
Check out the Haar cascade object detection technique in OpenCV.
Here are some links:
http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
http://www.technolabsz.com/2011/08/how-to-do-opencv-haar-training.html
To work with Haar cascades you need the Haar kit for training:
http://kineme.net/files/haar.zip
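Once you have a trained cascade XML file, using it for detection is straightforward; here is a minimal sketch (the cascade and image file names are placeholders):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::CascadeClassifier cascade;
        if (!cascade.load("my_object_cascade.xml"))
            return 1;

        cv::Mat img = cv::imread("scene.png");
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray); // improves contrast for the detector

        std::vector<cv::Rect> objects;
        cascade.detectMultiScale(gray, objects, 1.1, 3);

        for (const cv::Rect& r : objects)
            cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("detections.png", img);
        return 0;
    }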
I've been working on this for some time now and can't find a decent solution.
I use OpenCV for image processing and my workflow is something like this:
Take a picture of a TV.
Split the image into R, G, B planes. (I'm starting to test using H, S, V too, and it seems a bit promising.)
For each plane, threshold the image over a range of values between 0 and 255.
Reduce noise, detect edges with Canny, find the contours, and approximate them.
Select contours that contain the center of the image (I can assume that the center of the image is inside the TV screen).
Use convexHull and HoughLines to filter out and refine invalid contours.
Select contours with a certain area (between 10% and 90% of the image).
Keep only contours that have exactly 4 points.
But this is too slow (a loop over each channel (R, G, B), then a loop over the thresholds, etc.) and not good enough, as it fails to detect many TVs.
My base code is the squares.cpp example of the OpenCV framework.
The main problems of TV screen detection are:
Images that are half dark and half bright, or that have many dark/bright items on screen.
Elements on the screen that have the same color as the TV frame.
Blurry tv edges (in some cases).
I also have searched many SO questions/answers on Rectangle detection, but all are about detecting a white page on a dark background or a fixed color object on a contrast background.
My final goal is to implement this on Android/iOS for near-real-time TV screen detection. My code takes up to 4 seconds on a Galaxy Nexus.
I hope someone can help. Thanks in advance!
Update 1: Just using Canny and HoughLines does not work, because there can be very many lines, and selecting the correct ones can be very difficult. I think some sort of "cleaning" of the image should be done first.
Update 2: This question is one of the closest to my problem, but for the TV screen its approach didn't work.
Hopefully these points provide some insight:
1)
If you can properly segment the image via foreground and background, then you can easily set a bounding box around the foreground. Graph cuts are very powerful methods of segmenting images. It appears that OpenCV provides easy to use implementations for it. So, for example, you provide some brush strokes which cover "foreground" and "background" pixels, and your image is converted into a digraph which is sliced optimally to split the two. Here is a fun example:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
This is a quick something I put together to illustrate its effectiveness:
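And here is a minimal sketch of rectangle-seeded GrabCut in OpenCV (the seed rectangle, iteration count, and file names are assumptions; brush-stroke seeding would use GC_INIT_WITH_MASK instead):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img = cv::imread("tv_photo.jpg");
        cv::Mat mask, bgModel, fgModel;

        // Initial guess: the screen lies somewhere inside this rectangle.
        cv::Rect roi(50, 50, img.cols - 100, img.rows - 100);
        cv::grabCut(img, mask, roi, bgModel, fgModel, 5, cv::GC_INIT_WITH_RECT);

        // Keep definite and probable foreground, then bound it with a box.
        cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
        cv::Rect box = cv::boundingRect(fg);
        cv::rectangle(img, box, cv::Scalar(0, 255, 0), 2);
        cv::imwrite("tv_box.jpg", img);
        return 0;
    }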
2)
If you decide to continue down the edge detection route, then consider using Mathematical Morphology to "clean up" the lines you detect before trying to fit a bounding box or contour around the object.
http://en.wikipedia.org/wiki/Mathematical_morphology
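As a concrete (hypothetical) example, a morphological closing can bridge small gaps in a Canny edge map before you fit contours, and a small opening afterwards drops isolated noise pixels; the kernel sizes here are arbitrary:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat edges = cv::imread("edges.png", cv::IMREAD_GRAYSCALE);
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));

        cv::Mat cleaned;
        // Closing = dilation followed by erosion: joins broken edge segments.
        cv::morphologyEx(edges, cleaned, cv::MORPH_CLOSE, kernel);
        // A small opening afterwards removes isolated noise pixels.
        cv::morphologyEx(cleaned, cleaned, cv::MORPH_OPEN,
                         cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

        cv::imwrite("edges_cleaned.png", cleaned);
        return 0;
    }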
3)
You could train across a dataset containing TVs and use the Viola-Jones algorithm for object detection. Traditionally it is used for face detection, but you can adapt it to TVs given enough data. For example, you could script the downloading of images of living rooms with TVs as your positive class and living rooms without TVs as your negative class.
http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html
4)
You could perform image registration using cross correlation, like this nice MATLAB example demonstrates:
http://www.mathworks.com/help/images/examples/registering-an-image-using-normalized-cross-correlation.html
As for the template TV image that would be slid across the search image, you could obtain a bunch of pictures of TVs and create "eigenscreens", similar to how eigenfaces are used for facial recognition, to generate an average TV image:
http://jeremykun.com/2011/07/27/eigenfaces/
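A minimal sketch of the OpenCV equivalent, sliding an (assumed) average-TV template over the scene with normalized cross-correlation:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat scene = cv::imread("room.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat templ = cv::imread("average_tv.png", cv::IMREAD_GRAYSCALE);

        cv::Mat response;
        cv::matchTemplate(scene, templ, response, cv::TM_CCORR_NORMED);

        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);

        // Best match location; in practice you would repeat over several scales.
        cv::Rect match(maxLoc, templ.size());
        cv::rectangle(scene, match, cv::Scalar(255), 2);
        cv::imwrite("best_match.png", scene);
        return 0;
    }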
5)
It appears OpenCV has plenty of fun tools for describing shape and structure features, which appears to be mainly what you're interested in. Worth a look if you haven't seen this already:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Best of luck.
Does anyone know how to set an ROI based on the image below?
I used the Hough transform to detect the white lines and drew the red lines onto the image.
What I need to do is set the ROI to each rectangle.
The Hough transform cannot give me the location of each rectangle, and the main problem is that I cannot define the (x, y) locations manually.
Is there any solution that can automatically detect the rectangles and set the ROIs?
Can anyone give me some ideas, or code that I can use?
Please forgive my poor English, and thank you.
This blog post is very good at explaining how to find a rectangle with the Hough transform, and it also has some C++ code using the OpenCV 2 API.
The approach is to find lines, intersect them, and derive the rectangle. In your case you will have several rectangles, so it's a little bit more complicated.
But if you manage to obtain such an image, why not just apply a threshold and find connected regions (a.k.a. blobs)?
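For instance, something like this (connectedComponentsWithStats needs OpenCV 3.x; with the 2.x API you could use findContours plus boundingRect instead; the thresholds are placeholders):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat gray = cv::imread("marks.png", cv::IMREAD_GRAYSCALE);
        cv::Mat bin;
        cv::threshold(gray, bin, 200, 255, cv::THRESH_BINARY);

        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

        for (int i = 1; i < n; ++i) // label 0 is the background
        {
            cv::Rect roi(stats.at<int>(i, cv::CC_STAT_LEFT),
                         stats.at<int>(i, cv::CC_STAT_TOP),
                         stats.at<int>(i, cv::CC_STAT_WIDTH),
                         stats.at<int>(i, cv::CC_STAT_HEIGHT));
            if (roi.area() > 100) // ignore tiny blobs; threshold is arbitrary
                cv::rectangle(gray, roi, cv::Scalar(128), 2);
        }
        cv::imwrite("rois.png", gray);
        return 0;
    }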
I'm trying to detect a shape (a cross) in my input video stream with the help of OpenCV. Currently I'm thresholding to get a binary image of my cross, which works pretty well. Unfortunately, my algorithm for deciding whether the extracted blob is a cross or not doesn't perform very well. As you can see in the image below, not all corners are detected under certain perspectives.
I'm using findContours() and approxPolyDP() to get an approximation of my contour. If I detect 12 corners/vertices in this approximated curve, the blob is assumed to be a cross.
Is there any better way to solve this problem? I thought about SIFT, but the algorithm has to perform in real time, and I've read that SIFT is not really suitable for real-time use.
I have a couple of suggestions that might provide some interesting results although I am not certain about either.
If the cross is always near the center of your image and always lies on a planar surface, you could try to find a homography between the camera and the plane on which the cross lies. This would enable you to transform a sample image of the cross (at a selection of different in-plane rotations) into the coordinate system of the visualized cross. You could then generate templates to match against the image, using some simple pixel-agreement tests to determine whether you have a match.
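Here is a minimal sketch of that warp-and-compare idea; the four point correspondences, file names, and the 0.8 agreement threshold are made-up placeholders (in practice the homography would come from your camera/plane calibration):

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    int main()
    {
        cv::Mat templ = cv::imread("cross_template.png", cv::IMREAD_GRAYSCALE);
        cv::Mat frame = cv::imread("frame_binary.png", cv::IMREAD_GRAYSCALE);

        // Correspondences between template corners and their image positions.
        std::vector<cv::Point2f> src = { {0, 0}, {100, 0}, {100, 100}, {0, 100} };
        std::vector<cv::Point2f> dst = { {210, 80}, {330, 95}, {320, 220}, {200, 205} };
        cv::Mat H = cv::findHomography(src, dst);

        // Warp the template into the camera view.
        cv::Mat warped;
        cv::warpPerspective(templ, warped, H, frame.size());

        // Simple pixel-agreement score: fraction of matching foreground pixels.
        cv::Mat both;
        cv::bitwise_and(warped, frame, both);
        double score = double(cv::countNonZero(both)) /
                       std::max(1, cv::countNonZero(warped));
        return score > 0.8 ? 0 : 1; // "match" if agreement is high enough
    }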
Alternatively you could try to train a Haar-based classifier to recognize the cross. This type of classifier is often used in face detection and detects oriented edges in images, classifying faces by the relative positions of several oriented edges. It has good classification accuracy on faces and is extremely fast. Although I cannot vouch for its accuracy in this particular situation it might provide some good results for simple shapes such as a cross.
Computing the convex hull and then taking advantage of the convexity defects might work.
All crosses should have four convexity defects, making up four sets of two points, or four vectors. Furthermore, if your shape is a cross, these four vectors will form two pairs of supplementary angles.
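A minimal sketch of the defect-counting part (the depth threshold of 10 pixels is arbitrary, and the angle check between the defect vectors is left out):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat bin = cv::imread("cross_binary.png", cv::IMREAD_GRAYSCALE);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return 1;

        // convexityDefects needs the hull as indices into the contour.
        std::vector<int> hull;
        cv::convexHull(contours[0], hull, false, false);

        std::vector<cv::Vec4i> defects;
        cv::convexityDefects(contours[0], hull, defects);

        int deep = 0;
        for (const cv::Vec4i& d : defects)
            if (d[3] / 256.0 > 10.0) // depth is stored as fixed-point * 256
                ++deep;

        return (deep == 4) ? 0 : 1; // four deep defects => plausible cross
    }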