Hough Circle on binary images - C++

I'm trying to create a generic function that always finds my 3 colored balls (red, yellow and white). I've spent a lot of time searching for a solution, and it's pretty hard...
For the moment, I first apply the Canny filter (using the Otsu method to determine the lower and higher thresholds) and then call the Hough circle method, incrementing param2 until I find 3 circles.
while (!findCircles) {
    Imgproc.HoughCircles(hough, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 100, 200, low, 20, 100);
    if (circles.cols() == 3) {
        findCircles = true;
    }
    low++;
}
It doesn't work very well...
If someone votes up my question, I could post images (I don't have enough reputation points...). Please, if someone has found a solution, it would be nice to share it.

I think you should base your method on finding colors, not shapes, or at least you should start by finding colors and then find shapes. There is a good article (it uses the old OpenCV API, but everything else is fine) describing how to perform color-based object tracking in OpenCV. The general idea is simple - convert the image to the HSV color space, use the inRange function to find pixels which might be your objects, and then track them (most likely you will have to filter the pixels - find the biggest contour or the contour whose shape is closest to a circle). Note that you will need to call the inRange function 3 times (once for each ball).
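A minimal C++ sketch of that approach; the function and the HSV ranges below are my own rough guesses and will need tuning for your lighting:

#include <opencv2/opencv.hpp>
#include <vector>

// Finds the largest blob within an HSV range and returns its center.
cv::Point2f findBall(const cv::Mat& bgr, const cv::Scalar& lo, const cv::Scalar& hi)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, lo, hi, mask);                  // pixels that might be the ball

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // keep the biggest contour and return the center of its enclosing circle
    double bestArea = 0;
    cv::Point2f center(-1.f, -1.f);
    float radius = 0.f;
    for (const auto& c : contours) {
        double a = cv::contourArea(c);
        if (a > bestArea) { bestArea = a; cv::minEnclosingCircle(c, center, radius); }
    }
    return center;
}

// One call per ball; the ranges are assumptions to calibrate:
// red:    findBall(img, cv::Scalar(0, 120, 70),   cv::Scalar(10, 255, 255));
// yellow: findBall(img, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255));
// white:  findBall(img, cv::Scalar(0, 0, 200),    cv::Scalar(180, 40, 255));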

Related

cv::detail::MultiBandBlender strange white streaks at the end of the photo

I'm working with OpenCV 3.4.8 with C++11 and I'm trying to blend images together.
In this example I have 2 images (their masks are shown in the screenshot below). I have georeferencing, so I can easily calculate the corners of these images in the final image.
The data outside the masks are black.
My code looks something like this:
std::vector<cv::UMat> inputImages;
std::vector<cv::UMat> masks;
std::vector<cv::Point> corners;
std::vector<cv::Size> imgSizes;
/*
here is the code where I load the images, create their masks
(like in the screenshot above) and calculate the corners.
*/
cv::Ptr<cv::detail::SeamFinder> seamFinder = new cv::detail::DpSeamFinder();
seamFinder->find(inputImages, corners, masks);
cv::Ptr<cv::detail::Blender> blender = new cv::detail::MultiBandBlender(false);
blender->prepare(corners, imgSizes);
for (size_t i = 0; i < inputImages.size(); i++)
{
    blender->feed(inputImages[i], masks[i], corners[i]);
}
cv::UMat blendedImg, outMask;
blender->blend(blendedImg, outMask);
SeamFinder gives me a result like in the screenshot above. The seam lines it finds look good and I'm very satisfied with them. But another problem occurs in the next step: the MultiBandBlender produces strange white streaks where the seam line runs along the edge of the data.
This is an example:
When I don't use the blender, but just use the masks to cut the original images and add them together (cv::add()) with an additional alpha channel (made from the masks), I get very good results without any holes or strange colors, but I need a smoother transition :/
Can anyone help me? When I create the MultiBandBlender with a smaller num_bands the white streaks are smaller, and with num_bands = 0 the result looks like just adding the images.
I looked at the feed() and blend() methods of MultiBandBlender and I think it is connected with the Gaussian or Laplacian pyramid and the final restoration of the image from the Laplacian pyramid in the blend() method.
EDIT1:
When the Gaussian and Laplacian pyramids are created, copyMakeBorder() is called, which prevents the MultiBandBlender from making these white streaks when the images are fully filled with data. So in my case I think I need to create a blender almost the same as MultiBandBlender, but change the copyMakeBorder() call in the feed() method to something that will "extend" my image inside the mask, like @AlexanderKondratskiy suggested.
Now I don't know how to achieve a correct "extend" similar to BORDER_REFLECT or BORDER_REFLECT_101.
I suspect your input images contain white pixels outside those masks. The white banding occurs around the areas where the seam follows the mask exactly. With a Laplacian pyramid, for example, pixels outside the mask do influence the final result, as each layer of the pyramid is essentially some blurring kernel applied to the image.
If you have some kind of good data outside the mask, keep it. If you do not, I suggest "extending" your image beyond the mask to maintain a smooth transition.
Edit:
Here's two things you could try, unless someone with more experience with OpenCV comes along.
To prove/disprove my hypothesis, fill the black region with just the average or median color within the mask. This should make the transition to the outside region less sharp, and hopefully reduce the artefacts. If that does not happen, my answer is wrong.
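A minimal sketch of that test, assuming 8-bit images and same-sized masks (cv::mean respects the mask):

#include <opencv2/opencv.hpp>

// Replace everything outside the mask with the mean color inside it,
// to check whether the streaks come from the (white) out-of-mask pixels.
void fillOutsideWithMean(cv::Mat& img, const cv::Mat& mask)
{
    cv::Scalar meanColor = cv::mean(img, mask); // mean over masked pixels only
    cv::Mat outside;
    cv::bitwise_not(mask, outside);             // pixels outside the mask
    img.setTo(meanColor, outside);
}

Run the blender on images pre-processed this way; if the streaks shrink, the hypothesis holds.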
In terms of what is probably a good generalization of "BORDER_REFLECT" when the edge is arbitrary, you could try something like this:
Find the centroid c of the mask polygon
For each pixel p outside the mask, think of the line between it and c
Calculate point p' along this line that is the same distance inside the mask area, as p is from the mask edge. (i.e. you're reflecting along the mask edge)
Linearly interpolate the color from the neighbors of p' (as its position may not fall exactly in the middle of a pixel). That's the color of pixel p
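A rough sketch of that reflection, assuming a CV_8UC3 image and a CV_8U mask; it uses nearest-neighbour lookup instead of the linear interpolation in the last step, and marches pixel by pixel toward the centroid to find the mask edge:

#include <opencv2/opencv.hpp>
#include <cmath>

cv::Mat reflectIntoMask(const cv::Mat& img, const cv::Mat& mask)
{
    CV_Assert(mask.type() == CV_8U && img.type() == CV_8UC3 && img.size() == mask.size());
    cv::Moments m = cv::moments(mask, true);
    cv::Point2f c((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)); // centroid of the mask

    cv::Mat out = img.clone();
    for (int y = 0; y < mask.rows; ++y) {
        for (int x = 0; x < mask.cols; ++x) {
            if (mask.at<uchar>(y, x)) continue;                // inside the mask: keep
            cv::Point2f p((float)x, (float)y);
            cv::Point2f dir = c - p;
            float len = std::sqrt(dir.dot(dir));
            if (len < 1e-3f) continue;
            dir *= 1.f / len;
            // march toward the centroid until we enter the mask: that's the edge point e
            float t = 0.f;
            cv::Point2f e = p;
            while (t < len && !mask.at<uchar>(cvRound(e.y), cvRound(e.x))) {
                t += 1.f;
                e = p + dir * t;
            }
            // p' lies as far inside the mask as p is outside of it
            cv::Point2f pr = e + dir * t;
            int rx = cvRound(pr.x), ry = cvRound(pr.y);
            if (rx >= 0 && ry >= 0 && rx < mask.cols && ry < mask.rows && mask.at<uchar>(ry, rx))
                out.at<cv::Vec3b>(y, x) = img.at<cv::Vec3b>(ry, rx);
        }
    }
    return out;
}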

Extract one object from bunch of objects and detect edges

For my college project I need to identify the species of a plant from its leaf shape by detecting the edges of a leaf (I use OpenCV 2.4.9 and C++). However, the source image was taken in the plant's real environment and contains more than one leaf. See the example image below. So I need to extract the edge pattern of just one leaf to process further.
Using the Canny edge detector I can identify the edges of the whole image.
But I don't know how to proceed from here to extract the edge pattern of just one leaf, ideally the clearest and most complete one. I don't even know if this is possible. Can anyone tell me whether it is, and which image processing steps I need to apply? I don't want any code samples. I'm new to image processing and OpenCV, and I'm learning by doing experiments.
Thanks in advance.
Edit
As Luis said, I have applied a morphological close to the image after doing edge detection with Canny, but it still seems difficult to find the largest contour in the image.
Here are the steps I have taken to process the image
Apply Bilateral Filter to reduce noise
bilateralFilter(img_src, img_blur, 31, 31 * 2, 31 / 2);
Convert to grayscale and adjust contrast by histogram equalization
cvtColor(img_blur, img_equalized, CV_BGR2GRAY);
equalizeHist(img_equalized, img_equalized);
Apply Canny edge detector
Canny(img_equalized, img_edge_detected, 20, 60, 3);
Threshold binary image to remove some background data
threshold(img_edge_detected, img_threshold, 1, 255,THRESH_BINARY_INV);
Morphological close of the image
morphologyEx(img_threshold, img_closed, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(2, 2)));
Following are the resulting images I'm getting.
This is the result I'm getting for the original image above.
Source image and result for the second image:
Source:
Result:
Is there any way to detect the largest contour and extract it from the image?
Note that my final target is to create a plant identification system using real environmental images, but here I cannot use template matching or masking, because the user has to take an image and upload it, so the system has no prior idea about the leaf.
Here is the full code
#include <opencv/cv.h>
#include <opencv/highgui.h>

using namespace cv;

int main()
{
    Mat img_src, img_blur, img_equalized, img_edge_detected, img_threshold, img_closed;
    //Load original image
    img_src = imread("E:\\IMAG0196.jpg");
    //Apply bilateral filter to reduce noise
    bilateralFilter(img_src, img_blur, 31, 31 * 2, 31 / 2);
    //Convert to grayscale and adjust contrast by histogram equalization
    cvtColor(img_blur, img_equalized, CV_BGR2GRAY);
    equalizeHist(img_equalized, img_equalized);
    //Apply Canny edge detector
    Canny(img_equalized, img_edge_detected, 20, 60, 3);
    //Threshold binary image to remove some background data
    threshold(img_edge_detected, img_threshold, 15, 255, THRESH_BINARY_INV);
    //Morphological close of the image
    morphologyEx(img_threshold, img_closed, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(2, 2)));
    imshow("Result", img_closed);
    waitKey(0);
    return 0;
}
Thank you.
Well there is a similar question that was asked here:
opencv matching edge images
It seems that edge information is not a good descriptor for the image; still, if you want to try it, I'd do the following steps:
Load image and convert it to grayscale
Detect edges - Canny, Sobel try them and find what it suits you best
Set threshold to a given value that eliminates most background - Binarize image
Close the image - morphological close, don't close the window!
Count and identify objects in the image (Blobs, Watershed)
Check each object for a shape (assuming you have described leaf shapes you could find beforehand, or a standard shape like an ellipse), using features like:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
http://www.math.uci.edu/icamp/summer/research_11/park/shape_descriptors_survey.pdf
If a given object has a shape that you described as a leaf, then you've detected the leaf!
I believe that, given the images are taken in the real world, these algorithms will perform poorly, but it's a start. Well, hope it helps :).
-- POST EDIT 06/07
Well since you have no prior information about the leaf, I think the best we could do is the following:
Load image
Bilateral filter
Canny
Extract contours
Assume that the contour with the largest perimeter is the leaf
Compute the convex hull of the 2 or 3 largest contours (the blue line shows the convex hull)
Use this convex hull to do a graph cut on the image and segment it (a rough sketch follows this list)
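In C++, a rough sketch of those steps might look like this, with grabCut standing in for the graph-cut segmentation (all parameters are guesses):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("leaf.jpg");
    cv::Mat smooth, gray, edges;
    cv::bilateralFilter(img, smooth, 9, 75, 75);
    cv::cvtColor(smooth, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 20, 60);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    if (contours.empty()) return 1;

    // assume the contour with the largest perimeter is the leaf
    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::arcLength(a, false) < cv::arcLength(b, false);
        });

    std::vector<cv::Point> hull;
    cv::convexHull(*largest, hull);

    // mark the hull interior as probable foreground and let grabCut refine it
    cv::Mat mask(img.size(), CV_8U, cv::Scalar(cv::GC_BGD));
    cv::fillConvexPoly(mask, hull, cv::Scalar(cv::GC_PR_FGD));
    cv::Mat bgModel, fgModel;
    cv::grabCut(img, mask, cv::Rect(), bgModel, fgModel, 5, cv::GC_INIT_WITH_MASK);

    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::Mat leaf;
    img.copyTo(leaf, fg);
    cv::imwrite("leaf_segmented.jpg", leaf);
    return 0;
}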
If you do those steps, you'll end up with images like these:
I won't post my full code here, but you can check it out on my messy GitHub. I hope you don't mind that it was made in Python.
Leaf - Github
Still, I have a couple of things to finish that could improve the result. The roadmap would be:
Define the mask in the graph cut (as described in the doc)
Applying region grow may give a better convex hull
Removing all edges that touch the border of the image can help to identify larger edges
Well, again, I hope it helps

OpenCV, how to use arrays of points for smoothing and sampling contours?

I'm having trouble getting my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance, applied to this image):
Ultimately, I want
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like :
I also considered drawing my contour into a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I could do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector (of cv::Point) to a cv::Mat, and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
int main(int argc, char** argv)
{
    cv::Mat conv, ori;
    ori = cv::imread(argv[1]);
    ori.copyTo(conv);
    cv::cvtColor(ori, ori, CV_BGR2GRAY);

    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(ori, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

    for (int k = 0; k < 100; k += 2) {
        cv::Mat smoothCont;
        smoothCont = cv::Mat(contours[0]);
        std::cout << smoothCont.rows << "\t" << smoothCont.cols << std::endl;
        /* Try smoothing: no modification of the array */
        // cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1), k);
        /* Try sampling: "Assertion failed (func != 0) in resize" */
        // cv::resize(smoothCont, smoothCont, cv::Size(0,0), 1, 1);
        std::vector<std::vector<cv::Point> > v(1);
        smoothCont.copyTo(v[0]);
        cv::drawContours(conv, v, 0, cv::Scalar(255,0,0), 2, CV_AA);
        std::cout << k << std::endl;
        cv::imshow("conv", conv);
        cv::waitKey();
    }
    return 1;
}
Could anyone explain how to do this ?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach would deal with border effects (e.g. when smoothing, since contours are circular, the last elements of the sequence must be used to calculate the new values of the first elements...).
Thank you very much for your advice,
Edit:
I also tried cv::approxPolyDP() but, as you can see, it tends to preserve extremal points (which I want to remove):
Epsilon=0
Epsilon=6
Epsilon=12
Epsilon=24
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. It looks much closer to my expectation. Here are my results using it:
k=13
k=53
k=103
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.
Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have this strange resize() in your code), but everything works fine using cv::blur(); no need to resize(). Try Size(0,41) for example. Using cv::BORDER_WRAP for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing your contour this way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose k too big.
Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around it. It should do exactly what you want, with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
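A small C++ sketch of that kind of filter: a circular moving average over the contour points, where windowSize plays the role of the smoothing kernel and indices wrap because the contour is closed:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> smoothContour(const std::vector<cv::Point>& in, int windowSize)
{
    const int n = (int)in.size();
    const int half = windowSize / 2;
    std::vector<cv::Point> out(n);
    for (int i = 0; i < n; ++i) {
        cv::Point2f acc(0.f, 0.f);
        for (int k = -half; k <= half; ++k) {
            const cv::Point& p = in[((i + k) % n + n) % n]; // wrap-around indexing
            acc += cv::Point2f((float)p.x, (float)p.y);
        }
        acc *= 1.f / (2 * half + 1);
        out[i] = cv::Point(cvRound(acc.x), cvRound(acc.y));
    }
    return out;
}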
How about approxPolyDP()?
It uses this algorithm to 'smooth' a contour (basically getting rid of most of the contour's points, leaving only the ones that represent a good approximation of your contour).
From the OpenCV 2.1 doc, section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set the 2nd param to true, i.e.:
smoothCont = cv::Mat(contours[0], true);
and try again (this way cv::GaussianBlur should be able to modify the data).
I know this was written a long time ago, but have you tried a big erode followed by a big dilate (an opening), and then finding the contours? It looks like a simple and fast solution, and I think it could work, at least to some degree.
Basically, sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour is to compute the Fourier coefficients, treating the coordinates as points in the complex plane x + iy, and then eliminate the high-frequency coefficients.
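A sketch of that idea in C++, assuming a closed contour: treat each point as x + iy, take the DFT, zero the high-frequency band and invert (keepRatio is the fraction of low frequencies to keep):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::Point> fourierSmooth(const std::vector<cv::Point>& contour, double keepRatio)
{
    const int n = (int)contour.size();
    cv::Mat z(n, 1, CV_32FC2);                  // complex signal x + iy
    for (int i = 0; i < n; ++i)
        z.at<cv::Vec2f>(i) = cv::Vec2f((float)contour[i].x, (float)contour[i].y);

    cv::Mat Z;
    cv::dft(z, Z, cv::DFT_COMPLEX_OUTPUT);

    // zero the middle of the spectrum: low frequencies sit at both ends
    int keep = std::max(1, (int)(n * keepRatio / 2));
    for (int i = keep; i < n - keep; ++i)
        Z.at<cv::Vec2f>(i) = cv::Vec2f(0.f, 0.f);

    cv::Mat back;
    cv::dft(Z, back, cv::DFT_INVERSE | cv::DFT_SCALE);

    std::vector<cv::Point> out(n);
    for (int i = 0; i < n; ++i) {
        cv::Vec2f v = back.at<cv::Vec2f>(i);
        out[i] = cv::Point(cvRound(v[0]), cvRound(v[1]));
    }
    return out;
}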
My take ... many years later ...!
Maybe two easy ways to do it:
Loop a few times with dilate, blur, erode, and find the contours on that updated shape. I found 6-7 iterations give good results.
Create a bounding box of the contour, and draw an ellipse inside the bounding rectangle.
Adding the visual results below:
This worked for me. The edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is opencv4android code.

Robustly find N circles with the same diameter: alternative to bruteforcing Hough transform threshold

I am developing application to track small animals in Petri dishes (or other circular containers).
Before any tracking takes place, the first few frames are used to define areas.
Each dish will match a circular, independent, static area (i.e. it will not be updated during tracking).
The user can request the program to try to find dishes from the original image and use them as areas.
Here are examples:
In order to perform this task, I am using Hough Circle Transform.
But in practice, different users will have very different settings and images and I do not want to ask the user to manually define the parameters.
I cannot just guess all the parameters either.
However, I have additional information that I would like to use:
I know the exact number of circles to be detected.
All the circles have almost the same dimensions.
The circles cannot overlap.
I have a rough idea of the minimal and maximal size of the circles.
The circles must be entirely in the picture.
I can therefore narrow the number of parameters to define down to one: the threshold.
Using this information, and considering that I have N circles to find, my current solution is to test many values of the threshold and keep the set of circles whose radii have the smallest standard deviation (since all the circles should have a similar size):
//at this point, minRad and maxRad were calculated from the size of the image and the number of circles to find.
//assuming circles should altogether fill more than 1/3 of the image but cannot altogether be larger than the image.
//N is the integer number of circles to find.
//img is the picture of the scene (filtered).
//the vectors containing the detected circles and the --so far-- best circles found.
std::vector<cv::Vec3f> circles, bestCircles;
//the score of the --so far-- best set of circles
double bestSsem = 0;
for (int t = 5; t < 400; t = t + 2) {
    //apply Hough circles with the threshold t
    cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad*2, t, 3, minRad, maxRad);
    if ((int)circles.size() >= N) {
        //score this set of circles according to the similarity of their radii
        double ssem = scoreSetOfCircles(circles, N);
        //if no circles are recorded yet, or if this set scores higher than the former best
        if ((int)bestCircles.size() < N || ssem > bestSsem) {
            //this set becomes the temporary best set of circles
            bestCircles = circles;
            bestSsem = ssem;
        }
    }
}
With:
//the method to assess how good a set of circles is (the more similar the radii, the higher ssem)
double scoreSetOfCircles(const std::vector<cv::Vec3f>& circles, int N){
    double ssem = 0, sum = 0;
    double mean;
    for (int j = 0; j < N; j++) {
        sum = sum + circles[j][2];
    }
    mean = sum / N;
    for (int j = 0; j < N; j++) {
        double em = mean - circles[j][2];
        ssem = ssem + em * em;  //accumulate the squared deviations from the mean radius...
    }
    return 1 / (1 + ssem);      //...and invert, so that similar radii give a high score
}
I have achieved higher accuracy by performing a second pass, repeating this algorithm with a narrowed [minRad:maxRad] interval based on the result of the first pass.
For instance, minRad2 = 0.95 * average radius of the best circles and maxRad2 = 1.05 * average radius of the best circles.
I have had fairly good results using this method so far. However, it is slow and rather dirty.
My questions are:
Can you think of any alternative algorithm to solve this problem in a cleaner/faster manner?
Or what would you suggest to improve this algorithm?
Do you think I should investigate the generalised Hough transform?
Thank you for your answers and suggestions.
The following approach should work pretty well for your case:
Binarize your image (you might need to do this at several threshold levels to make the algorithm independent of the lighting conditions)
Find contours
For each contour, calculate the moments
Filter them by area to remove contours that are too small
Filter contours by circularity:
double area = moms.m00;
double perimeter = arcLength(Mat(contours[contourIdx]), true);
double ratio = 4 * CV_PI * area / (perimeter * perimeter);
ratio close to 1 will give you circles.
Calculate radius and center of each circle
center = Point2d(moms.m10 / moms.m00, moms.m01 / moms.m00);
And you can add more filters to improve the robustness.
Actually, you can find an implementation of the whole procedure in OpenCV. Look at how the SimpleBlobDetector class and the findCirclesGrid function are implemented.
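For reference, a minimal end-to-end sketch of the steps above; the Otsu binarization, the minArea default and the 0.8 circularity cutoff are assumptions to tune:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<cv::Vec3f> findCirclesByContours(const cv::Mat& gray, double minArea = 100.0)
{
    std::vector<cv::Vec3f> found;
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    for (const auto& c : contours) {
        cv::Moments moms = cv::moments(c);
        double area = moms.m00;
        if (area < minArea) continue;               // too small
        double perimeter = cv::arcLength(c, true);
        double ratio = 4 * CV_PI * area / (perimeter * perimeter);
        if (ratio < 0.8) continue;                  // not circular enough
        cv::Point2d center(moms.m10 / moms.m00, moms.m01 / moms.m00);
        double radius = std::sqrt(area / CV_PI);    // radius implied by the area
        found.push_back(cv::Vec3f((float)center.x, (float)center.y, (float)radius));
    }
    return found;
}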
Within the current algorithm, the biggest thing that sticks out is the for(int t=5; t<400; t=t+2) loop. Try recording score values for some test images, and graph score(t) versus t. With any luck, it will either suggest a smaller range for t or be a smoothish curve with a single maximum. In the latter case you can change your loop over all t values into a smarter search using hill climbing methods.
Even if it's fairly noisy, you can first loop over multiples of, say, 30, and then for the best 1 or 2 of those, loop over nearby multiples of 2.
Also, in your score function, you should disqualify any results with overlapping circles and maybe penalize overly spaced out circles.
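For instance, a minimal sketch of that overlap check (two circles overlap or contain one another when the distance between their centers is less than the sum of their radii):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

bool hasOverlap(const std::vector<cv::Vec3f>& circles)
{
    for (size_t i = 0; i < circles.size(); ++i)
        for (size_t j = i + 1; j < circles.size(); ++j) {
            double dx = circles[i][0] - circles[j][0];
            double dy = circles[i][1] - circles[j][1];
            if (std::sqrt(dx * dx + dy * dy) < circles[i][2] + circles[j][2])
                return true; // overlapping or contained: disqualify this set
        }
    return false;
}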
You don't explain why you are using a black background. Unless you are using a telecentric lens (which seems unlikely, given the apparent field of view), and ignoring radial distortion for the moment, the images of the dishes will be ellipses, so estimating them as circles may lead to significant errors.
All in all, it doesn't seem to me that you are following a good approach. If the goal is simply to remove the background so you can track the bugs inside the dishes, then your goal should be just that: find which pixels are background and mark them. The easiest way to do that is to take a picture of the background without the dishes, under the same illumination and camera, and directly detect differences with the picture containing the dishes. A colored background would be preferable for that, with a color unlikely to appear in the dishes (e.g. green or blue velvet). You'd then have reduced the problem to bluescreening (or chroma keying), a classic technique in machine vision as applied to visual effects. Do a Google search for "matte petro vlahos assumption" to find classic algorithms for solving this problem.

Writing robust (color and size invariant) circle detection with OpenCV (based on Hough transform or other features)

I wrote the following very simple python code to find circles in an image:
import cv
import numpy as np

WAITKEY_DELAY_MS = 10
STOP_KEY = 'q'

cv.NamedWindow("image - press 'q' to quit", cv.CV_WINDOW_AUTOSIZE)
cv.NamedWindow("post-process", cv.CV_WINDOW_AUTOSIZE)

key_pressed = False
while key_pressed != STOP_KEY:
    # grab image
    orig = cv.LoadImage('circles3.jpg')

    # create tmp images
    grey_scale = cv.CreateImage(cv.GetSize(orig), 8, 1)
    processed = cv.CreateImage(cv.GetSize(orig), 8, 1)

    cv.Smooth(orig, orig, cv.CV_GAUSSIAN, 3, 3)
    cv.CvtColor(orig, grey_scale, cv.CV_RGB2GRAY)

    # do some processing on the grey scale image
    cv.Erode(grey_scale, processed, None, 10)
    cv.Dilate(processed, processed, None, 10)
    cv.Canny(processed, processed, 5, 70, 3)
    cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 15, 15)

    storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)

    # these parameters need to be adjusted for every single image
    HIGH = 50
    LOW = 140

    try:
        # extract circles
        cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 32.0, HIGH, LOW)

        for i in range(0, len(np.asarray(storage))):
            print "circle #%d" % i
            Radius = int(np.asarray(storage)[i][0][2])
            x = int(np.asarray(storage)[i][0][0])
            y = int(np.asarray(storage)[i][0][1])
            center = (x, y)

            # green dot on center and red circle around
            cv.Circle(orig, center, 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
            cv.Circle(orig, center, Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

            cv.Circle(processed, center, 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
            cv.Circle(processed, center, Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

    except:
        print "nothing found"

    # show images
    cv.ShowImage("image - press 'q' to quit", orig)
    cv.ShowImage("post-process", processed)

    cv_key = cv.WaitKey(WAITKEY_DELAY_MS)
    key_pressed = chr(cv_key & 255)
As you can see from the following two examples, the 'circle finding quality' varies quite a lot:
CASE1:
CASE2:
Case1 and Case2 are basically the same image, but the algorithm still detects different circles. If I present the algorithm with an image of differently sized circles, the circle detection might even fail completely. This is mostly due to the HIGH and LOW parameters, which need to be adjusted individually for each new picture.
Therefore my question: What are the various possibilities of making this algorithm more robust? It should be size and color invariant so that different circles with different colors and in different sizes are detected. Maybe using the Hough transform is not the best way of doing things? Are there better approaches?
The following is based on my experience as a vision researcher. From your question you seem to be interested in possible algorithms and methods rather than only a working piece of code. First I give a quick and dirty Python script for your sample images, and some results are shown to prove it could possibly solve your problem. After getting these out of the way, I try to answer your questions regarding robust detection algorithms.
Quick Results
Some sample images (all the images apart from yours are downloaded from flickr.com and are CC licensed) with the detected circles (without changing/tuning any parameters, exactly the following code is used to extract the circles in all the images):
Code (based on the MSER Blob Detector)
And here is the code:
import cv2
import math
import numpy as np

d_red = cv2.cv.RGB(150, 55, 65)
l_red = cv2.cv.RGB(250, 200, 200)

orig = cv2.imread("c.jpg")
img = orig.copy()
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detector = cv2.FeatureDetector_create('MSER')
fs = detector.detect(img2)
fs.sort(key = lambda x: -x.size)

def supress(x):
    for f in fs:
        distx = f.pt[0] - x.pt[0]
        disty = f.pt[1] - x.pt[1]
        dist = math.sqrt(distx*distx + disty*disty)
        if (f.size > x.size) and (dist < f.size/2):
            return True

sfs = [x for x in fs if not supress(x)]

for f in sfs:
    cv2.circle(img, (int(f.pt[0]), int(f.pt[1])), int(f.size/2), d_red, 2, cv2.CV_AA)
    cv2.circle(img, (int(f.pt[0]), int(f.pt[1])), int(f.size/2), l_red, 1, cv2.CV_AA)

h, w = orig.shape[:2]
vis = np.zeros((h, w*2+5), np.uint8)
vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
vis[:h, :w] = orig
vis[:h, w+5:w*2+5] = img

cv2.imshow("image", vis)
cv2.imwrite("c_o.jpg", vis)
cv2.waitKey()
cv2.destroyAllWindows()
As you can see, it's based on the MSER blob detector. The code doesn't preprocess the image apart from the simple mapping to grayscale. Thus, missing those faint yellow blobs in your images is expected.
Theory
In short: you don't tell us what you know about the problem apart from giving only two sample images with no description of them. Here I explain why, in my humble opinion, it is important to have more information about the problem before asking about efficient methods to attack it.
Back to the main question: what is the best method for this problem?
Let's look at this as a search problem. To simplify the discussion assume we are looking for circles with a given size/radius. Thus, the problem boils down to finding the centers. Every pixel is a candidate center, therefore, the search space contains all the pixels.
P = {p1, ..., pn}
P: search space
p1...pn: pixels
To solve this search problem two other functions should be defined:
E(P) : enumerates the search space
V(p) : checks whether the item/pixel has the desirable properties, the items passing the check are added to the output list
Assuming the complexity of the algorithm doesn't matter, the exhaustive or brute-force search can be used in which E takes every pixel and passes to V. In real-time applications it's important to reduce the search space and optimize computational efficiency of V.
We are getting closer to the main question. How could we define V? To be more precise, which properties of the candidates should be measured, and how should we solve the dichotomy problem of splitting them into desirable and undesirable? The most common approach is to find some properties which can be used to define simple decision rules based on the measurement of those properties. This is what you're doing by trial and error. You're programming a classifier by learning from positive and negative examples. This is because the methods you're using have no idea what you want to do. You have to adjust / tune the parameters of the decision rule and/or preprocess the data so that the variation in the properties (of the desirable candidates) used by the method for the dichotomy problem is reduced. You could use a machine learning algorithm to find the optimal parameter values for a given set of examples. There's a whole host of learning algorithms, from decision trees to genetic programming, you can use for this problem. You could also use a learning algorithm to find the optimal parameter values for several circle detection algorithms and see which one gives better accuracy. This puts the main burden on the learning algorithm; you just need to collect sample images.
The other approach to improving robustness, which is often overlooked, is to utilize extra readily available information. If you know the color of the circles, with virtually zero extra effort you could improve the accuracy of the detector significantly. If you knew the positions of the circles on the plane and you wanted to detect the imaged circles, you should remember that the transformation between these two sets of positions is described by a 2D homography, and the homography can be estimated using only four points. Then you could improve the robustness to get a rock-solid method. The value of domain-specific knowledge is often underestimated. Look at it this way: in the first approach we try to approximate some decision rules based on a limited number of samples. In the second approach we know the decision rules and only need to find a way to effectively utilize them in an algorithm.
Summary
To summarize, there are two approaches to improve the accuracy / robustness of the solution:
Tool-based: finding an easier-to-use algorithm / one with fewer parameters / tweaking the algorithm / automating this process by using machine learning algorithms
Information-based: are you using all the readily available information? In the question you don't mention what you know about the problem.
For these two images you have shared, I would use a blob detector, not the HT method. For background subtraction I would suggest trying to estimate the color of the background, as in the two images it does not vary while the color of the circles does. And most of the area is bare.
This is a great modelling problem. I have the following recommendations/ideas:
Split the image into RGB, then process each channel.
Pre-processing.
Dynamic parameter search.
Add constraints.
Be sure about what you are trying to detect.
In more detail:
1: As noted in other answers, converting straight to grayscale discards too much information - any circles with a similar brightness to the background will be lost. Much better to consider the colour channels either in isolation or in a different colour space. There are pretty much two ways to go here: perform HoughCircles on each pre-processed channel in isolation and then combine the results, or process the channels, combine them, and then run HoughCircles. In my attempt below, I've tried the second method: splitting into RGB channels, processing, then combining. Be wary of over-saturating the image when combining; I use cv.And to avoid this issue (at this stage my circles are always black rings/discs on a white background).
2: Pre-processing is quite tricky, and something it's often best to play around with. I've made use of AdaptiveThreshold, which is a really powerful convolution method that can enhance edges in an image by thresholding pixels based on their local average (similar processes also occur in the early pathway of the mammalian visual system). This is also useful as it reduces some noise. I've used dilate/erode with only one pass, and I've kept the other parameters as you had them. It seems using Canny before HoughCircles does help a lot with finding 'filled circles', so it's probably best to keep it in. This pre-processing is quite heavy and can lead to false positives with somewhat more 'blobby circles', but in our case this is perhaps desirable?
3: As you've noted HoughCircles parameter param2 (your parameter LOW) needs to be adjusted for each image in order to get an optimal solution, in fact from the docs:
The smaller it is, the more false circles may be detected.
Trouble is, the sweet spot is going to be different for every image. I think the best approach here is to set a condition and search through different param2 values until it is met. Your images show non-overlapping circles, and when param2 is too low we typically get loads of overlapping circles. So I suggest searching for the:
maximum number of non-overlapping, and non-contained circles
So we keep calling HoughCircles with different values of param2 until this is met. I do this in my example below, just by incrementing param2 until it reaches the threshold assumption. It would be much faster (and fairly easy to do) if you performed a binary search to find when this is met, but you need to be careful with exception handling, as OpenCV often throws errors for innocent-looking values of param2 (at least on my installation). A different condition that would be very useful to match against would be the number of circles.
4: Are there any more constraints we can add to the model? The more things we can tell our model, the easier we make the task of detecting circles. For example, do we know:
The number of circles - even an upper or lower bound is helpful.
Possible colours of the circles, or of the background, or of 'non-circles'.
Their sizes.
Where they can be in an image.
5: Some of the blobs in your images could only loosely be called circles! Consider the two 'non-circular blobs' in your second image; my code can't find them (good!), but... if I 'photoshop' them so they are more circular, my code can find them... Maybe if you want to detect things that are not circles, a different approach, such as Tim Lukins', may be better.
Problems
By doing heavy pre-processing (AdaptiveThreshold and Canny) there can be a lot of distortion to features in an image, which may lead to false circle detection or incorrect radius reporting. For example, a large solid disc can appear as a ring after processing, so HoughCircles may find the inner ring. Furthermore, even the docs note that:
...usually the function detects the circles’ centers well, however it may fail to find the correct radii.
If you need more accurate radii detection, I suggest the following approach (not implemented):
On the original image, ray-trace from the reported centre of the circle, in an expanding cross (4 rays: up/down/left/right)
Do this separately in each RGB channel
Combine this info for each channel for each ray in a sensible fashion (i.e. flip, offset, scale, etc. as necessary)
Take the average of the first few pixels on each ray, and use this to detect where a significant deviation on the ray occurs.
These 4 points are estimates of points on the circumference.
Use these four estimates to determine a more accurate radius, and centre position(!).
This could be generalised by using an expanding ring instead of four rays.
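A rough C++ sketch of the four-ray idea, simplified to a single grayscale channel (the 5-pixel baseline and the deviation threshold are assumptions):

#include <opencv2/opencv.hpp>
#include <cmath>

double refineRadius(const cv::Mat& gray, cv::Point center, int maxR, double devThresh = 40.0)
{
    const cv::Point dirs[4] = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
    const cv::Rect bounds(0, 0, gray.cols, gray.rows);
    double sum = 0;
    int hits = 0;
    for (const cv::Point& d : dirs) {
        // baseline: average the first few pixels of the ray
        double base = 0; int nBase = 0;
        for (int r = 0; r < 5; ++r) {
            cv::Point p = center + d * r;
            if (bounds.contains(p)) { base += gray.at<uchar>(p); ++nBase; }
        }
        if (nBase == 0) continue;
        base /= nBase;
        // walk outwards until the intensity deviates significantly from the baseline
        for (int r = 5; r < maxR; ++r) {
            cv::Point p = center + d * r;
            if (!bounds.contains(p)) break;
            if (std::abs(gray.at<uchar>(p) - base) > devThresh) { sum += r; ++hits; break; }
        }
    }
    return hits ? sum / hits : -1.0; // average of the circumference estimates
}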
Results
The code at the end does pretty well quite a lot of the time; these examples were produced with the code as shown:
Detects all circles in your first image:
How the pre-processed image looks before the Canny filter is applied (the circles of different colours are highly visible):
Detects all but two (blobs) in second image:
Altered second image (blobs are circle-ified and the large oval made more circular, thus improving detection), all detected:
Does pretty well in detecting centres in this Kandinsky painting (I cannot find the concentric rings due to the boundary condition).
Code:
import cv
import numpy as np

output = cv.LoadImage('case1.jpg')
orig = cv.LoadImage('case1.jpg')

# create tmp images
rrr = cv.CreateImage((orig.width, orig.height), cv.IPL_DEPTH_8U, 1)
ggg = cv.CreateImage((orig.width, orig.height), cv.IPL_DEPTH_8U, 1)
bbb = cv.CreateImage((orig.width, orig.height), cv.IPL_DEPTH_8U, 1)
processed = cv.CreateImage((orig.width, orig.height), cv.IPL_DEPTH_8U, 1)
storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)

def channel_processing(channel):
    cv.AdaptiveThreshold(channel, channel, 255, adaptive_method=cv.CV_ADAPTIVE_THRESH_MEAN_C, thresholdType=cv.CV_THRESH_BINARY, blockSize=55, param1=7)
    # mop up the dirt
    cv.Dilate(channel, channel, None, 1)
    cv.Erode(channel, channel, None, 1)

def inter_centre_distance(x1, y1, x2, y2):
    return ((x1-x2)**2 + (y1-y2)**2)**0.5

def colliding_circles(circles):
    for index1, circle1 in enumerate(circles):
        for circle2 in circles[index1+1:]:
            x1, y1, Radius1 = circle1[0]
            x2, y2, Radius2 = circle2[0]
            # collision or containment:
            if inter_centre_distance(x1, y1, x2, y2) < Radius1 + Radius2:
                return True

def find_circles(processed, storage, LOW):
    try:
        cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 32.0, 30, LOW)  #, 0, 100) great to add circle constraint sizes.
    except:
        LOW += 1
        print 'try'
        return find_circles(processed, storage, LOW)
    circles = np.asarray(storage)
    print 'number of circles:', len(circles)
    if colliding_circles(circles):
        LOW += 1
        storage = find_circles(processed, storage, LOW)
    print 'c', LOW
    return storage

def draw_circles(storage, output):
    circles = np.asarray(storage)
    print len(circles), 'circles found'
    for circle in circles:
        Radius, x, y = int(circle[0][2]), int(circle[0][0]), int(circle[0][1])
        cv.Circle(output, (x, y), 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
        cv.Circle(output, (x, y), Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

# split image into RGB components
cv.Split(orig, rrr, ggg, bbb, None)

# process each component
channel_processing(rrr)
channel_processing(ggg)
channel_processing(bbb)

# combine images using logical 'And' to avoid saturation
cv.And(rrr, ggg, rrr)
cv.And(rrr, bbb, processed)
cv.ShowImage('before canny', processed)
# cv.SaveImage('case3_processed.jpg', processed)

# use canny, as HoughCircles seems to prefer ring like circles to filled ones.
cv.Canny(processed, processed, 5, 70, 3)
# smooth to reduce noise a bit more
cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 7, 7)
cv.ShowImage('processed', processed)

# find circles, with parameter search
storage = find_circles(processed, storage, 100)
draw_circles(storage, output)

# show images
cv.ShowImage("original with circles", output)
cv.SaveImage('case1.jpg', output)

cv.WaitKey(0)
Ah, yes… the old colour/size invariants for circles problem (AKA the Hough transform is too specific and not robust)...
In the past I have relied much more on the structural and shape analysis functions of OpenCV instead. You can get a very good idea of what is possible from the "samples" folder - particularly fitellipse.py and squares.py.
For your elucidation, I present a hybrid version of these examples and based on your original source. The contours detected are in green and the fitted ellipses in red.
It's not quite there yet:
The pre-processing steps need a bit of tweaking to detect the more faint circles.
You could test the contour further to determine if it is a circle or not...
Good luck!
import cv
import numpy as np

# grab image
orig = cv.LoadImage('circles3.jpg')

# create tmp images
grey_scale = cv.CreateImage(cv.GetSize(orig), 8, 1)
processed = cv.CreateImage(cv.GetSize(orig), 8, 1)

cv.Smooth(orig, orig, cv.CV_GAUSSIAN, 3, 3)
cv.CvtColor(orig, grey_scale, cv.CV_RGB2GRAY)

# do some processing on the grey scale image
cv.Erode(grey_scale, processed, None, 10)
cv.Dilate(processed, processed, None, 10)
cv.Canny(processed, processed, 5, 70, 3)
cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 15, 15)

#storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)
storage = cv.CreateMemStorage(0)

contours = cv.FindContours(processed, storage, cv.CV_RETR_EXTERNAL)
# N.B. 'processed' image is modified by this!

#contours = cv.ApproxPoly(contours, storage, cv.CV_POLY_APPROX_DP, 3, 1)
# If you wanted to reduce the number of points...

cv.DrawContours(orig, contours, cv.RGB(0,255,0), cv.RGB(255,0,0), 2, 3, cv.CV_AA, (0, 0))

def contour_iterator(contour):
    while contour:
        yield contour
        contour = contour.h_next()

for c in contour_iterator(contours):
    # Number of points must be more than or equal to 6 for cv.FitEllipse2
    if len(c) >= 6:
        # Copy the contour into an array of (x, y)s
        PointArray2D32f = cv.CreateMat(1, len(c), cv.CV_32FC2)
        for (i, (x, y)) in enumerate(c):
            PointArray2D32f[0, i] = (x, y)

        # Fit ellipse to current contour.
        (center, size, angle) = cv.FitEllipse2(PointArray2D32f)

        # Convert ellipse data from float to integer representation.
        center = (cv.Round(center[0]), cv.Round(center[1]))
        size = (cv.Round(size[0] * 0.5), cv.Round(size[1] * 0.5))

        # Draw ellipse
        cv.Ellipse(orig, center, size, angle, 0, 360, cv.RGB(255,0,0), 2, cv.CV_AA, 0)

# show images
cv.ShowImage("image - press 'q' to quit", orig)
#cv.ShowImage("post-process", processed)

cv.WaitKey(-1)
EDIT:
Just an update to say that I believe a major theme in all these answers is that there are a host of further assumptions and constraints that can be applied to what you seek to recognise as circular. My own answer makes no pretence at this - neither in the low-level pre-processing nor the high-level geometric fitting. The fact that many of the circles are not really that round, due to the way they are drawn or the non-affine/projective transforms of the image, together with the other properties of how they are rendered/captured (colour, noise, lighting, edge thickness), all result in any number of possible candidate circles within just one image.
There are much more sophisticated techniques. But they will cost you. Personally I like @fraxel's idea of using the adaptive threshold. That is fast, reliable and reasonably robust. You can then further test the final contours (e.g. using Hu moments) or the fittings, with a simple ratio test on the ellipse axes - e.g. if ((min(size)/max(size)) > 0.7).
As ever with computer vision, there is a tension between pragmatism, principle and parsimony. As I am fond of telling people who think that CV is easy, it is not - it is in fact famously an AI-complete problem. The best you can often hope for outside of this is something that works most of the time.
Looking through your code, I noticed the following:
Greyscale conversion. I understand why you're doing it, but realize that you're throwing away information there. As you can see in the "post-process" images, your yellow circles have the same intensity as the background, just in a different color.
Edge detection after noise removal (erode/dilate). This shouldn't be necessary; Canny ought to take care of this.
Canny edge detection. Your "open" circles have two edges, an inner and an outer edge. Since they're fairly close, the Canny gauss filter might add them together. If it doesn't, you'll have two edges close together. I.e. before Canny, you have open and filled circles; afterwards, you have 0/2 and 1 edge(s), respectively. Since Hough calls Canny again, in the first case the two edges might be smoothed together (depending on the initial width), which is why the core Hough algorithm can treat open and filled circles the same.
So, my first recommendation would be to change the grayscale mapping. Don't use intensity, but use hue/saturation/value. Also, use a differential approach - you're looking for edges. So, compute a HSV transform, smooth a copy, and then take the difference between the original and smoothed copy. This will get you dH, dS, dV values (local variation in Hue, Saturation, Value) for each point. Square and add to get a one-dimensional image, with peaks near all edges (inner and outer).
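A sketch of that first recommendation in C++, under stated assumptions (the blur size is arbitrary, and this simple version ignores that hue wraps around 180 in OpenCV):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat hsvEdgeSignal(const cv::Mat& bgr, int blurSize = 7)
{
    cv::Mat hsv, smooth, diff;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    hsv.convertTo(hsv, CV_32F);
    cv::GaussianBlur(hsv, smooth, cv::Size(blurSize, blurSize), 0);
    diff = hsv - smooth;                  // (dH, dS, dV) per pixel
    cv::multiply(diff, diff, diff);       // square each channel

    std::vector<cv::Mat> ch;
    cv::split(diff, ch);
    cv::Mat edge = ch[0] + ch[1] + ch[2]; // dH^2 + dS^2 + dV^2: peaks near edges
    cv::normalize(edge, edge, 0, 255, cv::NORM_MINMAX);
    edge.convertTo(edge, CV_8U);
    return edge;
}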
My second recommendation would be local normalization, but I'm not sure if that's even necessary. The idea is that you don't care particularly much about the exact value of the edge signal you got out, it should really be binary anyway (edge or not). Therefore, you can normalize each value by dividing by a local average (where local is in the order of magnitude of your edge size).
The Hough transform uses a "model" to find certain features in a (typically) edge-detected image, as you may know. In the case of HoughCircles that model is a perfect circle. This means there probably doesn't exist a combination of parameters that will make it detect the more erratically and ellipse shaped circles in your picture without increasing the number of false positives. On the other hand, due to the underlying voting mechanism, a non-closed perfect circle or a perfect circle with a "dent" might consistently show up. So depending on your expected output you may or may not want to use this method.
That said, there are a few things I see which might help you on your way with this function:
HoughCircles calls Canny internally, so I guess you can leave that call out.
param1 (which you call HIGH) is typically initialised around a value of 200. It is used as a parameter to the internal call to Canny: cv.Canny(processed, cannied, HIGH, HIGH/2). It might help to run Canny yourself like this to see how setting HIGH affects the image being worked with by the Hough transform.
param2 (which you call LOW) is typically initialised around a value 100. It is the voting threshold for the Hough transform's accumulators. Setting it higher means more false negatives, lower more false positives. I believe this is the first one you want to start fiddling around with.
Ref: http://docs.opencv.org/3.0-beta/modules/imgproc/doc/feature_detection.html#houghcircles
Update re: filled circles: after you've found the circle shapes with the Hough transform, you can test if they are filled by sampling the boundary colour and comparing it to one or more points inside the supposed circle. Alternatively, you can compare one or more points inside the supposed circle to a given background colour. The circle is filled if the former comparison succeeds, or, in the case of the alternative comparison, if it fails.
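A tiny C++ sketch of that test; the sample point just inside the boundary and the distance threshold are assumptions, and the caller must ensure both points lie inside the image:

#include <opencv2/opencv.hpp>
#include <cmath>

bool isFilled(const cv::Mat& bgr, cv::Point center, int radius, double maxDist = 60.0)
{
    cv::Point nearBoundary(center.x + (int)(radius * 0.95), center.y);
    cv::Vec3b b = bgr.at<cv::Vec3b>(nearBoundary);
    cv::Vec3b c = bgr.at<cv::Vec3b>(center);
    double d = 0;
    for (int k = 0; k < 3; ++k) {
        double diff = (double)b[k] - (double)c[k];
        d += diff * diff;
    }
    return std::sqrt(d) < maxDist; // boundary and interior colors match -> filled
}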
OK, looking at the images, I suggest using active contours.
Active contours
The good thing about active contours is that they almost perfectly fit any given shape, be it squares or triangles; in your case they are the perfect candidates.
If you are able to extract the centres of the circles, that is great. Active contours always need a point to start from, which they can either grow or shrink to fit. It's not necessary that the centres are always aligned to the centre; a little offset will still be OK.
And in your case, if you let the contours grow from the centre outwards, they shall rest at the circle boundaries.
Note that active contours that grow or shrink use balloon energy which means you can set the direction of contours, inwards or outwards.
You would probably need to use the gradient image in grey scale, but you can still try it in colour as well - if it works!
And if you do not provide centres, throw in lots of active contours and make them grow/shrink. Contours that settle down are kept, unsettled ones are thrown away. This is a brute-force approach; it will be CPU intensive. And it will require more careful work to make sure you keep the correct contours and throw out the bad ones.
I hope this way you can solve the problem.