I am a newbie to OpenCV, but I want to create an iris recognition program. The system can detect the eyes through the webcam, but it cannot detect the circular iris. I am using the Hough Circle Transform, but when the iris in an image is not circular enough, the system can't detect it. Is there any solution for this?
The algorithm used is the Hough Circle Transform.
IplImage *capturedImg = cvLoadImage("circle.jpg", 1);
IplImage *grayscaleImg = cvCreateImage(cvGetSize(capturedImg), 8, 1);
cvCvtColor(capturedImg, grayscaleImg, CV_BGR2GRAY);

// Gaussian filter for less noise
cvSmooth(grayscaleImg, grayscaleImg, CV_GAUSSIAN, 9, 9);

// Memory storage for the circle detection results
CvMemStorage* storage = cvCreateMemStorage(0);

// Detect the circles in the image
CvSeq* circles = cvHoughCircles(grayscaleImg,
                                storage,
                                CV_HOUGH_GRADIENT,
                                2,
                                grayscaleImg->height / 4,
                                200,
                                100);

for (int i = 0; i < circles->total; i++)
{
    float* p = (float*)cvGetSeqElem(circles, i);
    // circle center
    cvCircle(capturedImg, cvPoint(cvRound(p[0]), cvRound(p[1])),
             3, CV_RGB(0, 255, 0), -1, 8, 0);
    // circle outline
    cvCircle(capturedImg, cvPoint(cvRound(p[0]), cvRound(p[1])),
             cvRound(p[2]), CV_RGB(0, 0, 255), 3, 8, 0);
}
// cvCircle( img,cvPoint( r->x, r->y ),67, CV_RGB(255,0,0), 3, 8, 0 );

cvNamedWindow("circles", 1);
cvShowImage("circles", capturedImg);
cvWaitKey(0);
Add a call to cvCanny() between cvSmooth() and cvHoughCircles(). This will execute an edge detection algorithm which is going to provide a better input image for cvHoughCircles() and will probably improve your results.
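For illustration, here is a minimal sketch of how that section would look with the edge-detection step added, reusing grayscaleImg and storage from the snippet above; the Canny thresholds (100/200) are only example values you would need to tune for your own images:

// Sketch: run Canny on the smoothed image and feed the edge map to cvHoughCircles().
// The 100/200 thresholds are illustrative, not tuned values.
IplImage *edgeImg = cvCreateImage(cvGetSize(grayscaleImg), 8, 1);

cvSmooth(grayscaleImg, grayscaleImg, CV_GAUSSIAN, 9, 9);
cvCanny(grayscaleImg, edgeImg, 100, 200, 3);   // edge map as input for the circle detection

CvSeq* circles = cvHoughCircles(edgeImg,
                                storage,
                                CV_HOUGH_GRADIENT,
                                2,
                                edgeImg->height / 4,
                                200,
                                100);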
There are a lot of similar questions on Stack Overflow; I suggest you use the search box.
Related
1. Some information: I would like to develop a kind of circle recognition with the help of OpenCV. I successfully set up a bridge between Swift and Objective-C++, but strangely I have some problems with the circle recognition algorithm: not all of the circles in my image get detected!
2. Have a look at my code:
+(UIImage *)ConvertImage:(UIImage *)image {
    cv::Mat matImage;
    UIImageToMat(image, matImage);

    cv::Mat modImage;
    cv::medianBlur(matImage, matImage, 5);
    cv::cvtColor(matImage, modImage, CV_RGB2GRAY);
    cv::GaussianBlur(modImage, modImage, cv::Size(9, 9), 2, 2);

    // Detect circles on the blurred grayscale image
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);

    // Print the raw detections for debugging
    for (auto i = circles.begin(); i != circles.end(); ++i)
        std::cout << *i << ' ';

    // Draw the detected centers and outlines on the original image
    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(matImage, center, 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        cv::circle(matImage, center, radius, cv::Scalar(0, 0, 255), 3, 8, 0);
    }

    UIImage *binImg = MatToUIImage(matImage);
    return binImg;
}
As you can see in the image [click], this issue appears:
Only 3 of 7 circles get detected!
In the docs I found the parameter explanations for this line:
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
dp = 1: The inverse ratio of resolution.
min_dist = modImage.rows/8: Minimum distance between detected centers.
param_1 = 200: Upper threshold for the internal Canny edge detector.
param_2 = 100: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as default.
3. My question
How do I get rid of the issue mentioned above?
Any help would be very much appreciated :)
For issue number 2: the outline should be colored, not white!
What color should it be? At any rate, you draw that circle in your code with this line:
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
If you want to change the color, you can change the values you have declared in Scalar(0,0,255).
If you don't want the circle there at all, you can remove that line of code.
Your images seem to be noise-free. If the image always contains a circle, you can extract the contours and fit circles using least squares.
You can get the circle fit equations here. It is a straightforward implementation: create a structure for the circle parameters (center and radius), fit the circle, store the parameters in the structure, and use it to draw the circle with OpenCV.
You can also generate points on the circle using the "ellipse2poly" function.
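If it helps, here is one possible sketch of such a fit (the algebraic form, not necessarily the exact equations from the linked page): solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) in the least-squares sense, then the center is (-a/2, -b/2) and the radius is sqrt(a^2/4 + b^2/4 - c).

// Sketch of an algebraic least-squares circle fit; the structure name and
// function name are made up for illustration.
#include <opencv2/core/core.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

struct CircleFit { cv::Point2f center; float radius; };

static bool fitCircleLeastSquares(const std::vector<cv::Point>& pts, CircleFit& out)
{
    if (pts.size() < 3) return false;

    // Build the overdetermined linear system A * (a, b, c)^T = rhs
    cv::Mat A((int)pts.size(), 3, CV_64F);
    cv::Mat rhs((int)pts.size(), 1, CV_64F);
    for (int i = 0; i < (int)pts.size(); i++)
    {
        double x = pts[i].x, y = pts[i].y;
        A.at<double>(i, 0) = x;
        A.at<double>(i, 1) = y;
        A.at<double>(i, 2) = 1.0;
        rhs.at<double>(i, 0) = -(x * x + y * y);
    }

    cv::Mat p;                               // p = (a, b, c)
    cv::solve(A, rhs, p, cv::DECOMP_SVD);    // least-squares solution

    double a = p.at<double>(0), b = p.at<double>(1), c = p.at<double>(2);
    out.center = cv::Point2f((float)(-a / 2.0), (float)(-b / 2.0));
    out.radius = (float)std::sqrt(std::max(0.0, a * a / 4.0 + b * b / 4.0 - c));
    return true;
}

You would feed it the points of a contour from cv::findContours (or points generated with ellipse2Poly) and draw the result with cv::circle.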
I have an image with one circle-like shape that contains another similar shape. I am trying to find the areas of those two shapes. I am using OpenCV C++ Hough circle detection, but it does not detect the shapes. Are there any other functions in OpenCV that can be used to detect the shapes and find their areas?
[EDIT] The image has been added.
Here is my sample code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src, gray;
    src = imread("detect_circles_simple.jpg", 1);
    resize(src, src, Size(640, 480));
    cvtColor(src, gray, CV_BGR2GRAY);

    // Reduce the noise so we avoid false circle detection
    GaussianBlur(gray, gray, Size(9, 9), 2, 2);

    vector<Vec3f> circles;

    // Apply the Hough Transform to find the circles
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 30, 200, 50, 0, 0);
    cout << "No. of circles : " << circles.size() << endl;

    // Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);      // circle center
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);  // circle outline
        cout << "center : " << center << "\nradius : " << radius << endl;
    }

    // Show your results
    namedWindow("Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE);
    imshow("Hough Circle Transform Demo", src);
    waitKey(0);
    return 0;
}
I have a similar approach.
import cv2
import numpy as np

img1 = cv2.imread('disc1.jpg', 1)
img2 = img1.copy()
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

#--- Blur the gray scale image
img = cv2.GaussianBlur(img, (5, 5), 0)

#--- Perform Canny edge detection (in my case lower = 84 and upper = 255, because I resized the image, may vary in your case)
lower, upper = 84, 255
edges = cv2.Canny(img, lower, upper)
cv2.imshow('Edges', edges)

#--- Find and draw all existing contours
_, contours, _ = cv2.findContours(edges, cv2.RETR_TREE, 1)
rep = cv2.drawContours(img1, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', rep)
Since you are analyzing the shape of a circular edge, determining the eccentricity of your contours will help in this case.
#--- Determine eccentricity
cnt = contours
for i in range(0, len(cnt)):
    ellipse = cv2.fitEllipse(cnt[i])
    (center, axes, orientation) = ellipse
    majoraxis_length = max(axes)
    minoraxis_length = min(axes)
    eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length)**2)
    cv2.ellipse(img2, ellipse, (0, 0, 255), 2)

cv2.imshow('Detected ellipse', img2)
Now, based on the value of the eccentricity variable, you can decide whether your contour is circular or not. The threshold depends on what you consider to be circular or an approximate circle.
If you have complete shapes (the edge completely or very nearly joins), it is generally easier to edge detect -> contour -> analyse the contour shape.
Hough lines or circles are very useful when you only have small fragments of a line or circle, but they can be tricky to tune.
Edit: Try cv::adaptiveThreshold to get the edges, then cv::findContours.
For each contour, compare the area to the perimeter to see if it is the right size to be your target. Then do cv::fitEllipse to check if it is a circle and get the accurate center. findContours also has a mode which tells you which contours are inside which others, so you can easily find one circle inside another.
You might (depending on lighting) find the same circle with 2 or more contours, i.e. for the inner and outer edge.
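A rough sketch of that pipeline, in case it helps; the adaptive-threshold block size and offset, the minimum area, and the circularity cut-off are placeholder values, not tuned numbers:

// Sketch: adaptive threshold -> contours -> area/perimeter circularity check -> fitEllipse
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

void findCircularContours(const cv::Mat& gray, cv::Mat& drawOn)
{
    cv::Mat bin;
    cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, 51, 5);

    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;   // hierarchy tells you which contour lies inside which
    cv::findContours(bin, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        double area      = cv::contourArea(contours[i]);
        double perimeter = cv::arcLength(contours[i], true);
        if (area < 100.0 || contours[i].size() < 5)
            continue;                   // too small, or too few points for fitEllipse

        // For a perfect circle 4*pi*area / perimeter^2 equals 1
        double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
        if (circularity < 0.8)
            continue;

        cv::RotatedRect e = cv::fitEllipse(contours[i]);
        cv::ellipse(drawOn, e, cv::Scalar(0, 0, 255), 2);
    }
}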
I want to find how to get the difference between two similar grayscale images, for implementation in a system for security purposes. I want to check whether any difference has occurred between them. For object tracking, I have implemented Canny detection in the program below. I get the outline of structured objects easily, which can later be subtracted to give only the outline of the difference in the delta image. But what if there is a non-structural difference such as smoke or fire in the second image? I have increased the contrast for clearer edge detection and have also modified the threshold values in the Canny function parameters, yet got no suitable results.
Canny edge detection also detects shadow edges. If my two similar images were taken at different times of the day, the shadows will vary, so the edges will vary and will give an undesirable false alarm.
How should I work around this? Can anyone help? Thanks!
I am using the C language API of OpenCV 2.4 in Visual Studio 2010.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include <math.h>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
int main()
{
IplImage* img1 = NULL;
if ((img1 = cvLoadImage("libertyH1.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray1 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1); // contains greyscale image
CvMemStorage* storage1 = cvCreateMemStorage(0); // struct for storage
cvCvtColor(img1, gray1, CV_BGR2GRAY); // convert to greyscale
cvSmooth(gray1, gray1, CV_GAUSSIAN, 7, 7); // This is done so as to prevent a lot of false circles from being detected
IplImage* canny1 = cvCreateImage(cvGetSize(gray1), IPL_DEPTH_8U, 1);
IplImage* rgbcanny1 = cvCreateImage(cvGetSize(gray1), IPL_DEPTH_8U, 3);
cvCanny(gray1, canny1, 50, 100, 3); // cvCanny( const CvArr* image, CvArr* edges (output edge map), double threshold1, double threshold2, int aperture_size CV_DEFAULT(3) );
cvNamedWindow("Canny before hough");
cvShowImage("Canny before hough", canny1);
CvSeq* circles1 = cvHoughCircles(gray1, storage1, CV_HOUGH_GRADIENT, 1, gray1->height/3, 250, 100);
cvCvtColor(canny1, rgbcanny1, CV_GRAY2BGR);
cvNamedWindow("Canny after hough");
cvShowImage("Canny after hough", rgbcanny1);
for (size_t i = 0; i < circles1->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles1, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny1, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny1, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
IplImage* img2 = NULL;
if ((img2 = cvLoadImage("liberty_wth_obj.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
CvMemStorage* storage = cvCreateMemStorage(0);
cvCvtColor(img2, gray2, CV_BGR2GRAY);
// This is done so as to prevent a lot of false circles from being detected
cvSmooth(gray2, gray2, CV_GAUSSIAN, 7, 7);
IplImage* canny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,1);
IplImage* rgbcanny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,3);
cvCanny(gray2, canny2, 50, 100, 3);
CvSeq* circles2 = cvHoughCircles(gray2, storage, CV_HOUGH_GRADIENT, 1, gray2->height/3, 250, 100);
cvCvtColor(canny2, rgbcanny2, CV_GRAY2BGR);
for (size_t i = 0; i < circles2->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles2, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny2, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny2, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}

cvWaitKey(0);
return 0;
}
You want code help here? This is not an easy task. There are a few algorithms available on the internet, or you can try to invent a new one; a lot of research is going on in this area. I have some idea of a process: you can find the edges using the Y channel of the YCbCr color system. Subtract this Y value from the blurred image's Y value, and you will get the edges. Then make an array representation: divide the image into blocks and compare block against block. The content may slide, rotate, twist, etc., so compare with array matching. Object tracking is difficult because of the background, so take care to omit unnecessary objects.
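One possible reading of the Y-channel idea above, just as a sketch: take the luma plane, blur it, and take the difference; large differences mark edges. The blur size and the threshold are placeholder values.

// Sketch of a luma-based edge map; the function name is made up for illustration.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

cv::Mat lumaEdges(const cv::Mat& bgr)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, CV_BGR2YCrCb);

    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);                  // planes[0] is the Y (luma) channel

    cv::Mat blurred, diff, edges;
    cv::GaussianBlur(planes[0], blurred, cv::Size(9, 9), 0);
    cv::absdiff(planes[0], blurred, diff);     // |Y - blurred Y|
    cv::threshold(diff, edges, 10, 255, cv::THRESH_BINARY);
    return edges;
}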
I think the way to go could be background subtraction. It lets you cope with lighting condition changes.
See the Wikipedia entry for an intro. The basic idea is that you build a model of the scene background; all differences are then computed relative to the background.
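As a starting point, here is a minimal sketch of that idea using a running-average background model (cv::accumulateWeighted); the learning rate and the difference threshold are illustrative, not tuned values.

// Sketch: running-average background model plus frame differencing.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void updateAndSubtract(const cv::Mat& grayFrame, cv::Mat& background32f, cv::Mat& foregroundMask)
{
    if (background32f.empty())
        grayFrame.convertTo(background32f, CV_32F);   // initialise the model with the first frame

    // Slowly adapt the model so gradual lighting changes are absorbed
    cv::accumulateWeighted(grayFrame, background32f, 0.05);

    cv::Mat background8u, diff;
    background32f.convertTo(background8u, CV_8U);
    cv::absdiff(grayFrame, background8u, diff);

    // Anything that differs strongly from the background is treated as foreground
    cv::threshold(diff, foregroundMask, 30, 255, cv::THRESH_BINARY);
}

OpenCV also ships ready-made background models (for example BackgroundSubtractorMOG2) that adapt to gradual lighting changes more gracefully than a simple running average.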
I have done some analysis on image differencing, but the code was written for Java. Kindly look into the link below; it may help:
How to find rectangle of difference between two images
Cheers!
I have been working on a basic hand/finger tracking code using OpenCV and the ConvexHull and ConvexityDefects method.
Basically I am able to create a contour of the hand. I now need to be able to count the number of fingers. I know that the start and the end points of the Convex Hull are the finger tips but I am unsure how to count them and also how to highlight them by drawing circles on them or something.
I want my code to perform something like this.
This is a sample part of my code so far:
cvFindContours( hsv_mask, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0) );
CvSeq* contours2 = NULL;
CvRect rect = cvBoundingRect( contours2, 0 );
cvRectangle( bitImage, cvPoint(rect.x, rect.y + rect.height), cvPoint(rect.x + rect.width, rect.y), CV_RGB(200, 0, 200), 1, 8, 0 );
CvSeq* hull = cvConvexHull2( contours2, 0, CV_CLOCKWISE, 0 );
CvSeq* defect = cvConvexityDefects( contours2, hull, dftStorage );
CvBox2D box = cvMinAreaRect2( contours2, minStorage );
cvDrawContours( bg, contours2, CV_RGB( 0, 200, 0), CV_RGB( 0, 100, 0), 1, 1, 8, cvPoint(0,0));
I have played around with it and I can now draw the fingertip points using this code:
for (; defect; defect = defect->h_next)
{
    int nomdef = defect->total;
    if (nomdef == 0)
        continue;

    defectArray = (CvConvexityDefect*)malloc(sizeof(CvConvexityDefect) * nomdef);
    cvCvtSeqToArray(defect, defectArray, CV_WHOLE_SEQ);

    for (i = 0; i < nomdef; i++)
    {
        cvCircle(bg, *(defectArray[i].end), 5, CV_RGB(255, 0, 0), -1, 8, 0);
        cvCircle(bg, *(defectArray[i].start), 5, CV_RGB(0, 0, 255), -1, 8, 0);
        cvCircle(bg, *(defectArray[i].depth_point), 5, CV_RGB(0, 255, 255), -1, 8, 0);
    }
    j++;
    free(defectArray);
}
However, I am still getting a lot of false positives. Also, if anyone could suggest any methods to count the fingers, that would be wonderful.
One of the possibilities you have is to count the number of defects. If you have done it right, the defects are supposed to be located in the bottom section between two fingers: http://img27.imageshack.us/img27/6532/herpz.jpg
To make sure you don't get any "unwanted" defects, you can use the 'depth' field of the CvConvexityDefect structure to filter out the low-depth defects. A better description of the "depth" parameter can be found here:
opencv.itseez.com defect description
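To make that concrete, here is a sketch of the depth filter in the same style as the loop above, reusing nomdef, defectArray and bg from that code; the 20-pixel minimum depth is a made-up threshold you would tune to your hand size and camera distance.

// Sketch: keep only deep defects (the valleys between fingers) and count them.
int fingerGaps = 0;
for (int i = 0; i < nomdef; i++)
{
    if (defectArray[i].depth > 20.0f)          // ignore shallow defects along the contour
    {
        fingerGaps++;
        cvCircle(bg, *(defectArray[i].depth_point), 5, CV_RGB(0, 255, 255), -1, 8, 0);
    }
}
// Gaps between fingers are one fewer than the extended fingers, so:
int fingerCount = (fingerGaps > 0) ? fingerGaps + 1 : 0;

The idea is that the deep defects are the valleys between fingers, so with an open hand you would typically expect four of them.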
I need to detect the Sun from the space sky.
These are examples of the input images:
I've got these results after morphological filtering (the open operation applied twice).
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I just look for white points, I'll also find the white points of the big Earth (top left corner in the first example image).
Please advise me on further steps to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And where the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
// getting first found circle
float* circle = (float*)cvGetSeqElem( circles, 0 );
// Drawing:
// green center dot
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
3, CV_RGB(0,255,0), -1, 8, 0 );
// wrapping red circle
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
First example: bingo, but the second: no ;(
I've tried different configurations of cvHoughCircles() but couldn't find a configuration that fits every one of my example photos.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's response). It worked with a big number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of 3 of the sun images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
/// Load image and template
string inputName = "sun2.png";
string outputName = "sun2_detect.png";
Mat img = imread( inputName, 1 );
Mat templ = imread( "sun_templ.png", 1 );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
Mat result( result_rows, result_cols, CV_32FC1 );
/// Do the Matching and Normalize
matchTemplate(img, templ, result, CV_TM_CCOEFF);
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
Point maxLoc;
minMaxLoc(result, NULL, NULL, NULL, &maxLoc);
rectangle(img, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
rectangle(result, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
imshow("img", img);
imshow("result", result);
imwrite(outputName, img);
waitKey(0);
return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the sun according to its area (given that this uniquely identifies it and doesn't vary largely across images).
A more sophisticated approach could compute image moments, e.g. Hu moments, of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds, i.e. value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, as for the circular sun the position is equal to the center of mass:
Centroid: {x, y } = { M10/M00, M01/M00 }
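As a sketch of that approach: threshold, take the contours, keep the blob whose area is in the expected range for the sun, and read the centroid off its raw moments. The threshold and area limits below are placeholders, and the function name is made up for illustration.

// Sketch: segment bright objects, pick one by area, compute its centroid from moments.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

bool findSunCentroid(const cv::Mat& gray, cv::Point2f& centroid)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        if (area < 200.0 || area > 5000.0)      // expected sun-area range (placeholder)
            continue;

        cv::Moments m = cv::moments(contours[i]);
        centroid = cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
        return true;
    }
    return false;
}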
Edge Map Approach
Another option would be a circle hough transformation of the edge map, this will hopefully return some candidate circles (by position and radius). You may select the sun-circle according to the radius you expect (if you are lucky there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the earth to be much bigger than the sun, or the sun to have almost the same area in each picture, you can filter it by area.
Try a blob detector for this task.
And note that it may be good to apply a morphological opening/closing instead of simple erode or dilate, so your sun will have almost the same area before and after processing.
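For reference, a sketch of the blob-detector idea against the OpenCV 2.x API used in this thread (newer versions construct the detector with SimpleBlobDetector::create(params) instead); the area and circularity limits are placeholders to tune.

// Sketch: SimpleBlobDetector tuned for a bright, roughly circular blob.
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

std::vector<cv::KeyPoint> detectSunBlobs(const cv::Mat& gray)
{
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;                    // look for bright blobs
    params.filterByArea = true;
    params.minArea = 200.0f;
    params.maxArea = 5000.0f;
    params.filterByCircularity = true;
    params.minCircularity = 0.7f;

    cv::SimpleBlobDetector detector(params);   // OpenCV 2.x style construction
    std::vector<cv::KeyPoint> blobs;
    detector.detect(gray, blobs);              // each keypoint holds the blob center and size
    return blobs;
}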