I am trying to detect the circles inside a traffic light. I am only able to detect 1 of the 2 circles, and the circle I do get seems to be far too big.
Input Image: https://i.imgur.com/VkNDt2B.png
Output image: https://i.imgur.com/BBq5tE0.png
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src, gray;
    src = imread("C:/test_image2.png", 1);
    resize(src, src, Size(640, 480));
    cvtColor(src, gray, CV_BGR2GRAY);
    // Reduce the noise so we avoid false circle detection
    GaussianBlur(gray, gray, Size(9, 9), 2, 2);
    vector<Vec3f> circles;
    // Apply the Hough Transform to find the circles
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 60, 200, 20, 0, 35);
    // Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);       // circle center
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);   // circle outline
        cout << "center : " << center << "\nradius : " << radius << endl;
    }
    // Show your results
    namedWindow("Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE);
    imshow("Hough Circle Transform Demo", src);
    waitKey(0);
    return 0;
}
HoughCircles works best if you know in advance the approximate size of the circles you're looking for. I suggest you give better values for the min_radius and max_radius parameters.
In any case, you need to play with the param1 and param2 parameters. If the circles are not perfect circles, you can try lowering the image resolution using the dp parameter (e.g. with dp = 2 the image is downscaled to half its resolution).
Basically: play with param1 and param2 until your circles are detected, no matter whether other circles are detected as well. Use this result to find out what radius your circles have, then fix min_radius and max_radius to remove most of the circles you don't want, and finally play with param1 and param2 again until only your circles are left.
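As a rough illustration of that tuning loop, here is a minimal sketch; the file name, radius window and thresholds below are placeholder assumptions, not values known to work for your image:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("traffic_light.png", 1);   // hypothetical file name
    cv::resize(src, src, cv::Size(640, 480));

    cv::Mat gray;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);

    // Step 1: keep the radius limits loose and lower param2 until circles show up.
    // Step 2: read off the printed radii, then tighten min/max radius (assumed
    //         here to be roughly 5..15 px) and raise param2 again.
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                     1,      // dp
                     20,     // min_dist between centers
                     100,    // param1: upper Canny threshold
                     15,     // param2: accumulator threshold (lower = more circles)
                     5, 15); // min/max radius once the true size is known

    for (size_t i = 0; i < circles.size(); i++)
        std::cout << "radius " << circles[i][2] << " at ("
                  << circles[i][0] << ", " << circles[i][1] << ")" << std::endl;
    return 0;
}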
This is a pretty huge image.
Try cropping it to the traffic-light part first (to get something to begin with). Then, by trying different combinations of the min_dist, param1 and param2 parameters, find the values that detect the most circles (even wrong ones) and the combination that detects the fewest (or none), and fine-tune between those until only the circles you want are left.
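A hedged sketch of that crop-then-detect idea; the ROI coordinates below are made up and would have to be replaced with the actual traffic-light region in your frame:
cv::Mat src = cv::imread("test_image2.png", 1);
cv::Rect lightRoi(300, 50, 120, 300);          // assumed location of the traffic light
cv::Mat cropped = src(lightRoi).clone();

cv::Mat gray;
cv::cvtColor(cropped, gray, CV_BGR2GRAY);
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);

// Start permissive (low param2, wide radius range) so every candidate shows up,
// then tighten min_dist, param1 and param2 as described above.
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 20, 100, 15, 0, 60);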
Related
1. Some information: I would like to develop a kind of circle recognition with the help of OpenCV. I successfully set up a bridge between Swift and Objective-C++, but strangely I have some problems with the circle recognition algorithm: not all of the circles in my image get detected!
2. Have a look at my code:
+ (UIImage *)ConvertImage:(UIImage *)image {
    cv::Mat matImage;
    UIImageToMat(image, matImage);

    cv::Mat modImage;
    cv::medianBlur(matImage, matImage, 5);
    cv::cvtColor(matImage, modImage, CV_RGB2GRAY);
    cv::GaussianBlur(modImage, modImage, cv::Size(9, 9), 2, 2);

    vector<Vec3f> circles;
    cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);

    for (auto i = circles.begin(); i != circles.end(); ++i)
        std::cout << *i << ' ';

    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(matImage, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        circle(matImage, center, radius, Scalar(0, 0, 255), 3, 8, 0);
    }

    UIImage *binImg = MatToUIImage(matImage);
    return binImg;
}
As you can see in the image [click], this issue appears:
Only 3 of the 7 circles get detected!
So in the docs I found the parameter explanations for this line:
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
dp = 1: The inverse ratio of the accumulator resolution to the image resolution.
min_dist = modImage.rows/8: Minimum distance between detected centers.
param_1 = 200: Upper threshold for the internal Canny edge detector.
param_2 = 100: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as default.
3. My question
How can I get rid of the issue mentioned above?
Any help would be much appreciated :)
For issue number 2: the outline should be colored, not white!
What color should it be? At any rate, you draw that circle in your code with this line:
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
If you want to change the color, you can change the values you have declared in Scalar(0,0,255).
If you don't want the circle there at all, you can remove that line of code.
Your images seem to be noise free. If the image always contains a circle, you can extract the contours and fit circles to them using least squares.
You can get the circle-fit equations here. It is a straightforward implementation: create a structure for the circle parameters (center and radius), fit the circle, store the parameters in the structure, and use them to draw the circle with OpenCV.
You can also generate points on the fitted circle using the "ellipse2poly" function.
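A minimal sketch of the algebraic least-squares fit described above (the fitCircle helper and the CircleFit struct are hypothetical names, not part of OpenCV):
#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

struct CircleFit { cv::Point2f center; float radius; };

// Fit x^2 + y^2 + D*x + E*y + F = 0 to the contour points in a least-squares sense.
CircleFit fitCircle(const std::vector<cv::Point>& pts)
{
    cv::Mat A((int)pts.size(), 3, CV_64F), b((int)pts.size(), 1, CV_64F);
    for (int i = 0; i < (int)pts.size(); i++)
    {
        double x = pts[i].x, y = pts[i].y;
        A.at<double>(i, 0) = x;
        A.at<double>(i, 1) = y;
        A.at<double>(i, 2) = 1.0;
        b.at<double>(i, 0) = -(x * x + y * y);
    }
    cv::Mat sol;
    cv::solve(A, b, sol, cv::DECOMP_SVD);   // least-squares solution for D, E, F
    double D = sol.at<double>(0), E = sol.at<double>(1), F = sol.at<double>(2);

    CircleFit c;
    c.center = cv::Point2f((float)(-D / 2.0), (float)(-E / 2.0));
    c.radius = (float)std::sqrt(D * D / 4.0 + E * E / 4.0 - F);
    return c;
}
Extract contours with cv::findContours, pass each contour to fitCircle, and draw the result with cv::circle.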
I have an image with one circle-like shape that contains another similar shape. I am trying to find the areas of those two shapes. I am using OpenCV's C++ Hough circle detection, but it does not detect the shapes. Are there any other functions in OpenCV that can be used to detect the shapes and find their areas?
[EDIT] The image has been added.
Here is my sample code
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src, gray;
    src = imread("detect_circles_simple.jpg", 1);
    resize(src, src, Size(640, 480));
    cvtColor(src, gray, CV_BGR2GRAY);
    // Reduce the noise so we avoid false circle detection
    GaussianBlur(gray, gray, Size(9, 9), 2, 2);
    vector<Vec3f> circles;
    // Apply the Hough Transform to find the circles
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 30, 200, 50, 0, 0);
    cout << "No. of circles : " << circles.size() << endl;
    // Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);       // circle center
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);   // circle outline
        cout << "center : " << center << "\nradius : " << radius << endl;
    }
    // Show your results
    namedWindow("Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE);
    imshow("Hough Circle Transform Demo", src);
    waitKey(0);
    return 0;
}
I have a similar approach.
import cv2
import numpy as np

img1 = cv2.imread('disc1.jpg', 1)
img2 = img1.copy()
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

#--- Blur the gray scale image
img = cv2.GaussianBlur(img, (5, 5), 0)

#--- Perform Canny edge detection (in my case lower = 84 and upper = 255, because I resized the image, may vary in your case)
lower, upper = 84, 255
edges = cv2.Canny(img, lower, upper)
cv2.imshow('Edges', edges)

#--- Find and draw all existing contours
_, contours, _ = cv2.findContours(edges, cv2.RETR_TREE, 1)
rep = cv2.drawContours(img1, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', rep)
Since you are analyzing the shape of a circular edge, determining the eccentricity of your contours will help in this case.
#--- Determine eccentricity
cnt = contours
for i in range(0, len(cnt)):
    if len(cnt[i]) < 5:          # fitEllipse needs at least 5 points
        continue
    ellipse = cv2.fitEllipse(cnt[i])
    (center, axes, orientation) = ellipse
    majoraxis_length = max(axes)
    minoraxis_length = min(axes)
    eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length)**2)
    cv2.ellipse(img2, ellipse, (0, 0, 255), 2)

cv2.imshow('Detected ellipse', img2)
cv2.waitKey(0)
Now, based on the value of the eccentricity variable, you can decide whether your contour is circular or not. The threshold depends on what you consider to be circular or an approximate circle.
If you have complete shapes (the edge completely or very nearly joins), it is generally easier to edge detect -> find contours -> analyse the contour shape.
Hough lines or circles are very useful when you only have small fragments of a line or circle, but they can be tricky to tune.
Edit: Try cv::adaptiveThreshold to get the edges, then cv::findContours.
For each contour, compare the area to the perimeter to see if it is the right size to be your target. Then use cv::fitEllipse to check whether it is a circle and to get an accurate center. findContours also has a mode that tells you which contours are inside which others, so you can easily find one circle inside another.
You might (depending on lighting) find the same circle with 2 or more contours, i.e. for the inner and outer edge.
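A rough sketch of that contour route with the C++ API; the file name, threshold block size, minimum area and the 0.8 circularity cut-off are assumptions to tune:
cv::Mat src = cv::imread("circle_in_circle.jpg", 1);   // hypothetical file name
cv::Mat gray, bin;
cv::cvtColor(src, gray, CV_BGR2GRAY);
cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                      cv::THRESH_BINARY_INV, 31, 5);

std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;                      // hierarchy[i][3] = parent index
cv::findContours(bin, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i < contours.size(); i++)
{
    double area  = cv::contourArea(contours[i]);
    double perim = cv::arcLength(contours[i], true);
    if (area < 50 || perim <= 0 || contours[i].size() < 5)
        continue;                                      // too small or degenerate

    // For a perfect circle 4*pi*area / perimeter^2 == 1; accept values near 1.
    double circularity = 4.0 * CV_PI * area / (perim * perim);
    if (circularity < 0.8)
        continue;

    cv::RotatedRect e = cv::fitEllipse(contours[i]);   // accurate center and axes
    bool isInner = hierarchy[i][3] >= 0;               // contour has a parent -> inner circle
    cv::ellipse(src, e, isInner ? cv::Scalar(255, 0, 0) : cv::Scalar(0, 0, 255), 2);
}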
I'm trying to process the following images from a maze. My question is about how to process the edges. I'm using OpenCV 2.4 with C++.
I'd like to know if there is any way to distinguish the edges between the floor and the wall from the lines painted on the floor.
The floor is black, the walls are white and the lines painted on the floor are white too.
What I am trying to do is distinguish between walls and marks on the floor. The lines on the floor give me a distance reference and tell me whether I can turn in the maze, while the walls just mark the limits of the halls of the maze.
Here you'll find the processed images I've produced.
I'm using the Canny and HoughLinesP functions to detect and save the lines, but as you can see in the images, the program doesn't separate the painted lines from the wall edges.
The code:
vector<Vec4i> get_lines(Mat dst, Mat cdst)
{
    vector<Vec4i> lines;
    HoughLinesP(dst, lines, 1, CV_PI/180, 100, 50, 10);
    for (size_t i = 0; i < lines.size(); i++)
    {
        Vec4i l = lines[i];
        double size = norm(Mat(Point(l[0], l[1])), Mat(Point(l[2], l[3])));
        if (size > 100)
            line(cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0, 0, 255), 3, CV_AA);
    }
    return lines;
}
And the main function is:
int main(int argc, char** argv)
{
    const char* filename = argc >= 2 ? argv[1] : "pic1.jpg";
    Mat src = imread(filename, 0);
    if (src.empty())
    {
        help();
        cout << "can not open " << filename << endl;
        return -1;
    }
    Mat dst, cdst;
    Canny(src, dst, 50, 200, 3);
    cvtColor(dst, cdst, CV_GRAY2BGR);
    vector<Vec4i> lines = get_lines(dst, cdst);
    imshow("source W&B", src);
    imshow("edges", dst);
    imshow("detected lines", cdst);
    imwrite("lines.jpg", cdst);
    imwrite("src.jpg", src);
    imwrite("canny.jpg", dst);
    waitKey();
    return 0;
}
The obvious thing to try would be to compare the brightness of pixels on either side of the line.
Make three regions: pixels a little distance to one side of the line, pixels a little distance to the other side of the line, and pixels close to the line. Calculate the average brightness in each region.
The walls are light grey, the floor is black and the lines are white, so
if one side is significantly brighter than the other side, it is probably an edge (and you can even tell which side is the floor),
if both sides are significantly darker than the middle it is probably a marking on the floor.
(and if the line is vertical, it is a wall-wall edge)
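A minimal sketch of that test, assuming gray is the original grayscale frame and l is one segment returned by HoughLinesP; the classifySegment name, the 8-pixel offset and the 40-level margins are assumptions to tune:
#include <opencv2/core/core.hpp>
#include <cmath>

enum LineKind { FLOOR_MARK, WALL_EDGE, UNKNOWN };

LineKind classifySegment(const cv::Mat& gray, const cv::Vec4i& l)
{
    cv::Point2f p1((float)l[0], (float)l[1]), p2((float)l[2], (float)l[3]);
    cv::Point2f dir = p2 - p1;
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (len < 1.f) return UNKNOWN;
    cv::Point2f normal(-dir.y / len, dir.x / len);   // unit normal to the segment
    const float offset = 8.f;                        // "a little distance", in pixels

    double sideA = 0, sideB = 0, middle = 0;
    int n = 0;
    cv::Rect bounds(0, 0, gray.cols, gray.rows);
    for (float t = 0.f; t <= 1.f; t += 0.05f)        // sample along the segment
    {
        cv::Point2f p = p1 + t * dir;
        cv::Point2f pa = p + offset * normal, pb = p - offset * normal;
        cv::Point a(cvRound(pa.x), cvRound(pa.y));
        cv::Point b(cvRound(pb.x), cvRound(pb.y));
        cv::Point m(cvRound(p.x), cvRound(p.y));
        if (!bounds.contains(a) || !bounds.contains(b) || !bounds.contains(m))
            continue;
        sideA  += gray.at<uchar>(a);
        sideB  += gray.at<uchar>(b);
        middle += gray.at<uchar>(m);
        n++;
    }
    if (n == 0) return UNKNOWN;
    sideA /= n; sideB /= n; middle /= n;

    if (middle > sideA + 40 && middle > sideB + 40)   // bright line, dark floor on both sides
        return FLOOR_MARK;
    if (std::fabs(sideA - sideB) > 40)                // one side clearly brighter: wall/floor edge
        return WALL_EDGE;
    return UNKNOWN;
}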
I have a face tracking program that reads video from a camera and draws a rectangle around the person's face. What I want to do is have the program recognise when the face enters a particular region of the frame and initiate some other action. What commands would I need to do this? (I am using C++ and OpenCV 2.4.3.)
E.g.
detect face;
if (face is in ROI)
{
close video feed;
}
So you have a rectangle enclosing your face and a rectangle defining the ROI of the image. To check if the face enters the ROI, you just have to check whether the two rects intersect. The easiest way to do this is to use the overloaded operator & of cv::Rect_, as described here (http://docs.opencv.org/modules/core/doc/basic_structures.html#rect), and then check whether the area of the resulting rect is > 0.
An example code would look as follows:
cv::Rect r1(0, 0, 10, 10);
cv::Rect r2(5, 5, 10, 10);

if ( (r1 & r2).area() )
{
    // rects intersect
}
If you want the face to have entered the ROI by a certain percentage, you can compare the intersection area with the minimum of the two input areas:
cv::Rect r1(0, 0, 10, 10);
cv::Rect r2(5, 5, 10, 10);
double minFraction(0.1);

if ( (r1 & r2).area() > minFraction * std::min(r1.area(), r2.area()) )
{
    // rects intersect by at least minFraction
}
I need to detect the Sun in images of the sky taken from space.
These are examples of the input images:
I've got these results after morphological filtering (the open operation applied twice):
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too much processing for such a simple task? And how do I find the center of the Sun? If I simply look for white points, then I'll also find the white points of the big Earth (top-left corner in the first example image).
Please advise me on how to proceed to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And where the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
    // getting first found circle
    float* circle = (float*)cvGetSeqElem( circles, 0 );

    // Drawing:
    // green center dot
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              3, CV_RGB(0,255,0), -1, 8, 0 );
    // wrapping red circle
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
First example: bingo, but the second: no ;(
I've tried different configurations of cvHoughCircles() but couldn't find one that fits every one of my example photos.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's answer). It held up over a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected the sun in 3 out of the 3 images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
    /// Load image and template
    string inputName = "sun2.png";
    string outputName = "sun2_detect.png";
    Mat img = imread( inputName, 1 );
    Mat templ = imread( "sun_templ.png", 1 );

    /// Create the result matrix
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;
    Mat result( result_rows, result_cols, CV_32FC1 );

    /// Do the Matching and Normalize
    matchTemplate(img, templ, result, CV_TM_CCOEFF);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());

    Point maxLoc;
    minMaxLoc(result, NULL, NULL, NULL, &maxLoc);

    rectangle(img, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
    rectangle(result, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);

    imshow("img", img);
    imshow("result", result);

    imwrite(outputName, img);

    waitKey(0);
    return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the sun according to its area (provided the area uniquely identifies it and does not vary largely across images).
A more sophisticated approach could compute image moments, e.g. Hu moments, of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds or value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, since for the circular sun the position is equal to the center of mass:
Centroid: {x, y} = {M10/M00, M01/M00}
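A sketch of these steps with the C++ API; the file name, threshold and the expected sun-area range are assumptions that depend on your images:
cv::Mat image = cv::imread("sun1.png", 1);           // hypothetical file name
cv::Mat gray, bin;
cv::cvtColor(image, gray, CV_BGR2GRAY);
cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY);
cv::morphologyEx(bin, bin, cv::MORPH_OPEN, cv::Mat(), cv::Point(-1, -1), 2);

std::vector<std::vector<cv::Point> > contours;
cv::findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

const double minSunArea = 200, maxSunArea = 5000;    // assumed range for the sun
for (size_t i = 0; i < contours.size(); i++)
{
    double area = cv::contourArea(contours[i]);
    if (area < minSunArea || area > maxSunArea)      // rejects noise and the much larger Earth
        continue;
    cv::Moments m = cv::moments(contours[i]);
    cv::Point centroid(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
    cv::circle(image, centroid, 50, cv::Scalar(0, 125, 125), 4);
}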
Edge Map Approach
Another option would be a circular Hough transform of the edge map; this will hopefully return some candidate circles (by position and radius). You may select the sun circle according to the radius you expect (if you are lucky there is at most one).
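A hedged fragment of that selection step, assuming image is the loaded input frame; the 10 to 40 px radius window and the Hough thresholds are guesses that depend on how large the sun appears:
cv::Mat gray;
cv::cvtColor(image, gray, CV_BGR2GRAY);
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);

std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows / 4, 200, 50, 0, 0);

for (size_t i = 0; i < circles.size(); i++)
{
    float r = circles[i][2];
    if (r < 10 || r > 40)                            // keep only sun-sized candidates
        continue;
    cv::circle(image, cv::Point(cvRound(circles[i][0]), cvRound(circles[i][1])),
               cvRound(r), cv::Scalar(0, 0, 255), 3);
}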
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the Sun, or the Sun to have almost the same area in each picture, you can filter by area.
Try a blob detector for this task.
Also note that it may be better to apply a morphological opening/closing instead of a simple erode or dilate, so your sun will have almost the same area before and after processing.
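A minimal SimpleBlobDetector sketch along those lines (OpenCV 2.4-style construction; the area and circularity limits are assumptions to tune for your images):
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("sun1.png", 0);        // hypothetical file name

    // Opening followed by closing keeps blob areas roughly unchanged,
    // unlike a bare erode or dilate.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(gray, gray, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(gray, gray, cv::MORPH_CLOSE, kernel);

    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;              // look for bright blobs, not dark ones
    params.filterByArea = true;
    params.minArea = 100;                // assumed: excludes small noise
    params.maxArea = 5000;               // assumed: excludes the much larger Earth
    params.filterByCircularity = true;
    params.minCircularity = 0.8f;        // the sun should be close to a perfect disc

    cv::SimpleBlobDetector detector(params);         // OpenCV 2.4 API; 3.x uses ::create()
    std::vector<cv::KeyPoint> sunCandidates;
    detector.detect(gray, sunCandidates);            // keypoint pt = center, size ~ diameter

    for (size_t i = 0; i < sunCandidates.size(); i++)
        cv::circle(gray, cv::Point(cvRound(sunCandidates[i].pt.x), cvRound(sunCandidates[i].pt.y)),
                   50, cv::Scalar(255), 4);

    cv::imshow("sun candidates", gray);
    cv::waitKey(0);
    return 0;
}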