Detecting Many Small Circles in Close Proximity with cv::HoughCircles() - c++

I am trying to detect a large number of small circles that are in relatively close proximity to one another (only about 20 pixels apart) using OpenCV. I have managed to create this mask using cv::inRange() and cv::Canny().
Original Image
Mask
However, when I use cv::HoughCircles() only some of the circles are being detected accurately. Currently, I am using cv::HoughCircles() with the following parameters:
cv::HoughCircles(mat, circles, CV_HOUGH_GRADIENT, 2, mat.rows / 256, 100, 8, 2, 8);
Is this method not effective enough to detect circles that are this small and close together, or do I simply need to modify the parameters of cv::HoughCircles()?
Also, it would be useful to get rid of the "noise" surrounding the array of circles in the middle of the mask because some "false circles" are being detected around the edges of the mask. Is there a simple way to do this?

Get rid of the noise:
If you can make sure to always have the same environment parameters (e.g. distance from the circles, luminosity...), then you could mask your image just after the Canny edge detection with cvAnd; here is what the mask would look like:
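A minimal sketch of that masking step, using cv::bitwise_and (the C++ counterpart of the old cvAnd); the file names and the ROI shape here are purely illustrative:

#include <opencv2/opencv.hpp>
#include <algorithm>

int main() {
    cv::Mat edges = cv::imread("canny_output.png", cv::IMREAD_GRAYSCALE);

    // Fixed region-of-interest mask: white over the circle array, black
    // elsewhere. A hand-drawn mask also works if the setup never changes.
    cv::Mat roi = cv::Mat::zeros(edges.size(), CV_8U);
    cv::circle(roi, cv::Point(edges.cols / 2, edges.rows / 2),
               std::min(edges.cols, edges.rows) / 3, cv::Scalar(255), cv::FILLED);

    // Keep only the edge pixels inside the ROI (what cvAnd did in the C API).
    cv::Mat cleaned;
    cv::bitwise_and(edges, roi, cleaned);
    cv::imwrite("cleaned.png", cleaned);
}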
Hough circles detection:
Now, about HoughCircles. First, this function performs its own Canny edge detection internally. You are also running Canny yourself just before the call, which may distort the shapes of your circles because of the way Canny works (it computes an intensity gradient, here on an already-binary image).
Speaking of the shape of your circles, just below is a close-up of what your "circles" look like; I would have been very impressed if HoughCircles had actually detected all, or even just some, of those. They can't give anything good in Hough space. Just to make sure, set the last two parameters to 0 (min/max radius) and try lowering the minimum distance between centers; a sketch of that call follows. But honestly, I think you need to find another approach to your problem.
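Concretely, that sanity check could look like the following sketch (dp and the two thresholds are kept from your call, minDist is lowered, and cv::HOUGH_GRADIENT is the newer name for CV_HOUGH_GRADIENT):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat mat = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(mat, circles, cv::HOUGH_GRADIENT,
                     2,       // dp, kept from the question
                     2,       // minDist between centers; experiment with lowering it
                     100, 8,  // Canny high threshold and accumulator threshold, kept
                     0, 0);   // min/max radius left unconstrained

    std::printf("detected %zu circles\n", circles.size());
}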
[EDIT]
A possible approach would be to perform connected component labeling (e.g. blob detection). As far as I know it is not possible to do this simply with OpenCV alone, you will need something like cvblob, which is a very good OpenCV-based blob library. In particular, you might be interested in cvCentroid(CvBlob *blob).
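(Note: newer OpenCV releases do ship this as cv::connectedComponentsWithStats, so an external blob library is no longer strictly required. A minimal sketch, assuming a binary mask as input:)

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    // Label 0 is the background; every other label is a blob.
    for (int i = 1; i < n; ++i) {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        double cx = centroids.at<double>(i, 0);
        double cy = centroids.at<double>(i, 1);
        if (area > 10)  // drop tiny noise blobs; threshold is a guess
            std::printf("blob %d: centroid (%.1f, %.1f), area %d\n", i, cx, cy, area);
    }
}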
Cheers

Hum, do you really need to detect them as circles? (as opposed to model them as circles).
If this is some kind of calibration pattern and you are only interested in estimating the image positions of the centers, it may be a lot more efficient to detect them as point-like features first, then process each detected one individually, e.g. by fitting a circle to the blob of white pixels in the neighborhood of each detected feature.
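One way to realize that idea with stock OpenCV is cv::SimpleBlobDetector; a sketch with guessed thresholds, assuming small white blobs on a dark mask:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat img = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // Tune the detector for small bright blobs; these values are guesses.
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;   // detect white blobs
    params.filterByArea = true;
    params.minArea = 5;
    params.maxArea = 200;

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(img, keypoints);

    // Each keypoint gives a center estimate and an approximate diameter,
    // which can seed a per-blob circle fit afterwards.
    for (const cv::KeyPoint& kp : keypoints)
        std::printf("center (%.1f, %.1f), diameter %.1f\n", kp.pt.x, kp.pt.y, kp.size);
}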

Related

OpenCV edge based object detection C++

I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge detection algo), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale- and rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this, or a code snippet if possible?
This is my sample input image (the parts to detect are marked in red).
Here are some software packages that do this, and this is also how I want it to work:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and be sure you have cleaned the noise from the image.
Note: In my approach, I collect data from contours on a reference image, which shows my desired object. Then I compare these data with the other contours in the big image.
1. Use Canny edge detection and find the contours on the reference image. You need to be sure it does not miss parts of the contours; if it does, the preprocessing probably has problems. The other important point is to find an appropriate mode for findContours: every mode has different properties, so you need to pick the right one for your case. At the end, eliminate everything except the contours that matter for you.
2. After getting contours from the reference, you can compute the length of every contour from the output of findContours() (e.g. with arcLength()). Compare these values with the contours in the big image and eliminate the ones that differ too much.
3. minAreaRect computes a fitted, minimum-area enclosing rectangle for each contour. In my case this function was very good to use. I extract two parameters from it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours in the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get a percentage of how many pixels are close to white or black) and compare at the end.
4. matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well for me (a combined sketch of these steps follows this list).
5. I think matchTemplate is not good to use directly. I draw every contour onto its own zeroed Mat (a blank black canvas) as a template image and then compare those with the others; using the reference template image directly does not give good results.
6. OpenCV has some good algorithms for finding circles, convexity, etc. If your case involves those, you can use them as an additional step.
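A combined sketch of steps 1-4 under some simplifying assumptions (exactly one contour of interest in the reference; thresholds, tolerances, and file names are illustrative):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

static std::vector<std::vector<cv::Point>> contoursOf(const cv::Mat& img) {
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);  // step 1; thresholds are guesses
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}

int main() {
    cv::Mat ref = cv::imread("reference.png", cv::IMREAD_GRAYSCALE);
    cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // Assume the reference yields exactly one contour of interest.
    std::vector<cv::Point> target = contoursOf(ref).at(0);
    double refLen = cv::arcLength(target, true);
    cv::RotatedRect refBox = cv::minAreaRect(target);
    double refShort = std::min(refBox.size.width, refBox.size.height);
    double refLong  = std::max(refBox.size.width, refBox.size.height);

    for (const auto& c : contoursOf(scene)) {
        // Step 2: drop contours whose length differs too much (30% tolerance).
        double len = cv::arcLength(c, true);
        if (std::abs(len - refLen) / refLen > 0.3) continue;

        // Step 3a: compare short/long edges of the fitted rectangles.
        cv::RotatedRect box = cv::minAreaRect(c);
        double s = std::min(box.size.width, box.size.height);
        double l = std::max(box.size.width, box.size.height);
        if (std::abs(s - refShort) / refShort > 0.3 ||
            std::abs(l - refLong)  / refLong  > 0.3) continue;

        // Step 4: final shape comparison on the survivors (lower = closer).
        double score = cv::matchShapes(target, c, cv::CONTOURS_MATCH_I1, 0);
        if (score < 0.1)  // acceptance threshold is a guess
            std::printf("match at (%.0f, %.0f), score %.3f\n",
                        box.center.x, box.center.y, score);
    }
}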
At the end, you just gather all the data and values and can make a table in your mind; the rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing, so be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, this is totally the wrong way. I tried the YOLO and Haar-cascade training algorithms several times and trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be entirely accurate even if your calibration is correct. On top of that, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> find_contours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours, and it is hard to filter them.
2. Haarcascades -> cut bounding box -> check the shape inside
All "special points/edge matching " algorithms will not work in such bad conditions.

Detecting Rectangular Shapes in edge image with OpenCV

I want to detect multiple (similar) rectangular objects in an image that have a lot of substructure within them. So my current plan is to use Gaussian blur, morphology, and edge detection (Canny). After edge detection I get this (with very low threshold parameters):
What I want in the end is the outline of the greater rectangles. See:
Currently I try to get this by using HoughLines and findContours afterwards. For this to work, I need to fiddle a lot with the threshold parameters for Canny and HoughLines.
When I get it right for one image, the parameters most likely will not work for the next one (e.g. the edges in the previous image were less dominant, leading to too many lines detected by the Hough transformation). Another problem is that sometimes the inner structures are just as dominant as, or more dominant than, one side of the outer edges.
I tried using a stronger blur or morphology, but at some point this blurred away the small gap between the rectangles.
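For reference, the current pipeline looks roughly like this sketch (all parameter values and file names are illustrative, which is exactly the per-image tuning problem described above):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // Blur plus a small closing, trying not to smear the gap between rectangles.
    cv::Mat work;
    cv::GaussianBlur(img, work, cv::Size(5, 5), 0);
    cv::morphologyEx(work, work, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

    // The Canny and Hough thresholds that need retuning per image.
    cv::Mat edges;
    cv::Canny(work, edges, 30, 90);

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
}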
Can I extract the bigger rectangles somehow else given the edge image (preferably with opencv)?
Getting the 4 corner points would be enough.

Dynamic background separation and reliable circle detection with OpenCV

I am attempting to detect coloured tennis balls on a similarly coloured background. I am using OpenCV and C++.
This is the test image I am working with:
http://i.stack.imgur.com/yXmO4.jpg
I have tried using multiple edge detectors: Sobel, Laplace, and Canny. All three detect the white line, but when the threshold is at a value where it can detect the edge of the tennis ball, there is too much noise in the output.
I have also tried the Hough circle transform, but as it is based on Canny, it isn't effective.
I cannot use background subtraction because the background can move. I also cannot tune the threshold values, as lighting conditions may create gradients within the tennis ball.
I feel my only options are to template match or to detect the white line; however, I would like to avoid these if possible.
Do you have any suggestions ?
I had to tilt my screen to spot the tennis ball myself. It's a hard image.
That said, the default OpenCV implementation of the Hough transform uses the Canny edge detector, but it's not the only possible implementation. For these harder cases, you might need to reimplement it yourself.
You can certainly run the Hough algorithm repeatedly with different settings for the edge detection to generate multiple candidates. Besides comparing candidates directly, you can also check that each candidate has a dominant texture (after local shading corrections) and possibly a stripe. But that might be very tricky if those tennis balls are actually captured in flight, i.e. moving.
What are you doing to the color image BEFORE the edge detection? Simply converting it to gray?
In my experience colorful balls pop out best when you use the HSV color space. Then you would have to decide which channel gives the best results.
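A minimal sketch of that inspection step (file names are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("tennis.jpg");

    // Convert to HSV and write out the channels separately; for a coloured
    // ball, hue or saturation often separates it better than grayscale.
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);  // ch[0] = H, ch[1] = S, ch[2] = V

    cv::imwrite("hue.png", ch[0]);
    cv::imwrite("sat.png", ch[1]);
    cv::imwrite("val.png", ch[2]);
}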
Perhaps transforming the image to a different feature space would work better than relying on color. Maybe try LBP, which responds to texture. Then do PCA on the result to reduce the feature space to a single-channel image, and try the Hough transform on that.

OpenCV C++ extract features from binary image

I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest point is, i.e. where the y coordinate is greatest (this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner that looks like pixelated lines; methods such as dilation and erosion would remove it but lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect and otherwise there is no enclosed black area. Can I fill the image starting from the corner with say, red, and then calculate how much of the image is still black?
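A rough sketch of that fill-and-count idea, using cv::floodFill in place of the "red" fill and assuming white features on a black background with a background pixel in the corner:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat img = cv::imread("features.png", cv::IMREAD_GRAYSCALE);
    img = img > 127;  // force a clean 0/255 binary image

    int blackBefore = (int)img.total() - cv::countNonZero(img);

    // Fill the background region reachable from the corner with white.
    // If the features intersect, they enclose a black triangle the fill
    // cannot reach, so black pixels remain afterwards.
    cv::Mat filled = img.clone();
    cv::floodFill(filled, cv::Point(0, 0), cv::Scalar(255));

    int blackAfter = (int)filled.total() - cv::countNonZero(filled);
    std::printf("black pixels before: %d, after fill: %d\n", blackBefore, blackAfter);
    if (blackAfter > 0)
        std::puts("enclosed black area remains -> the features intersect");
}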
Does anyone have any suggestions?
Thanks
Your suggestion is way slower than finding contours. For binary images, finding contours is very easy and quick, because you just need to find a black pixel followed by a white pixel or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile) to see whether the objects intersect or not.
For example, in the following image check the letter "n", which is a little like your non-intersecting objects, and the letter "o", which is like your intersecting objects:
By analyzing these column histograms you can recognize which case is intersecting; a sketch of the projection follows.
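A sketch of that projection with cv::reduce, assuming white features on a black background:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat img = cv::imread("features.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bin = img > 127;  // 0 or 255 per pixel
    bin = bin / 255;          // 0 or 1 per pixel

    // Column-wise sum: a 1 x cols "vertical projection" of the white pixels.
    cv::Mat proj;
    cv::reduce(bin, proj, 0, cv::REDUCE_SUM, CV_32S);

    // Compare the resulting profiles: intersecting and non-intersecting
    // shapes (like the "o" vs the "n" above) give visibly different histograms.
    for (int x = 0; x < proj.cols; ++x)
        std::printf("%d ", proj.at<int>(0, x));
    std::printf("\n");
}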

Finding Circle Edges

Here are the two sample images that I have posted.
I need to find the edges of the circle:
Is it possible to develop one generic circle algorithm that could find all possible circles in all the scenarios below?
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circles may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest an approach for an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine your second example picture will be very hard to detect, though.
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could try the edge-detect effect too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect: in the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there is more than one option. Here is a paper describing an algorithm.
Searching the web for "image processing circle recognition" gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, results in a scalar image that can segment out the circle. If other features define the circle in other scenarios (different colors for background and foreground, etc.), you might need other explicit filters that give a scalar difference for those cases.
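For instance, a local-variance feature image can be computed with box filters alone; a sketch, with a guessed window size:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("circle.png", cv::IMREAD_GRAYSCALE);

    // Local variance: E[x^2] - (E[x])^2 over a sliding window.
    cv::Mat f, mean, meanSq;
    gray.convertTo(f, CV_32F);
    cv::blur(f, mean, cv::Size(9, 9));
    cv::blur(f.mul(f), meanSq, cv::Size(9, 9));
    cv::Mat variance = meanSq - mean.mul(mean);

    // Normalize for viewing; textured regions (like the circle edge in the
    // second example) should stand out as bright areas.
    cv::Mat vis;
    cv::normalize(variance, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("variance.png", vis);
}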
When you have scalar images where the circles stand out, you can use the circular Hough transform to find them. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines, etc.), an alternative approach is to design a more specific algorithm, e.g. filter out the noise and find the center of gravity.
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how your specific algorithm works, but presumably it could take a binary or grayscale image where high values mean a pixel is part of a circle and low values mean it is not; the present algorithm also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent the stage(s) at the end of the complete pipeline. You would then add a first stage that generates feature images for every kind of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps. For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size, etc.), just iterate over the possible values and make sure your later stages return confidence values.