I am attempting to detect coloured tennis balls on a similarly coloured background. I am using OpenCV and C++.
This is the test image I am working with:
http://i.stack.imgur.com/yXmO4.jpg
I have tried multiple edge detectors: Sobel, Laplacian, and Canny. All three detect the white line, but when the threshold is set low enough to detect the edge of the tennis ball, there is too much noise in the output.
I have also tried the Hough circle transform, but as it is based on Canny, it isn't effective.
I cannot use background subtraction because the background can move. I also cannot modify the threshold values as lighting conditions may create gradients within the tennis ball.
I feel my only options are to template match or to detect the white line; however, I would like to avoid these if possible.
Do you have any suggestions?
I had to tilt my screen to spot the tennis ball myself. It's a hard image.
That said, the default OpenCV implementation of the Hough transform uses the Canny edge detector, but it's not the only possible implementation. For these harder cases, you might need to reimplement it yourself.
You can certainly run the Hough algorithm repeatedly with different edge-detection settings to generate multiple candidates (a sketch of this follows below). Besides comparing candidates directly, you can also check that each candidate has a dominant texture (after local shading corrections) and possibly a stripe. But that might be very tricky if those tennis balls are actually captured in flight, i.e. moving.
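A hedged sketch of that multi-candidate idea, pooling cv::HoughCircles results over a few Canny thresholds (all numeric values below are guesses to be tuned):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Collect circle candidates over several Canny thresholds (param1 of
    // cv::HoughCircles, which drives its internal edge detector).
    // Assumes an 8-bit grayscale input; all numbers are illustrative.
    std::vector<cv::Vec3f> collectCandidates(const cv::Mat& gray)
    {
        std::vector<cv::Vec3f> all;
        for (double cannyHigh : {50.0, 100.0, 200.0}) {
            std::vector<cv::Vec3f> circles;
            cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                             1,              // accumulator resolution
                             gray.rows / 8,  // min distance between centres
                             cannyHigh,      // upper Canny threshold
                             30,             // accumulator threshold
                             5, 50);         // min/max radius (guesses)
            all.insert(all.end(), circles.begin(), circles.end());
        }
        return all;  // merge/filter these candidates afterwards
    }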
What are you doing to the color image BEFORE the edge detection? Simply converting it to gray?
In my experience colorful balls pop out best when you use the HSV color space. Then you would have to decide which channel gives the best results.
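For example, a minimal sketch of that inspection step (the function name is illustrative):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>
    #include <vector>

    // Convert a BGR frame to HSV and show each channel separately to decide
    // which one separates the ball from the background best.
    void inspectHsvChannels(const cv::Mat& bgr)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        std::vector<cv::Mat> channels;
        cv::split(hsv, channels);  // channels[0]=H, [1]=S, [2]=V

        cv::imshow("Hue", channels[0]);
        cv::imshow("Saturation", channels[1]);
        cv::imshow("Value", channels[2]);
        cv::waitKey(0);
    }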
Perhaps transforming the image to a different feature space might work better than relying on color. Maybe try LBP, which responds to texture. Then do PCA on the result to reduce the feature space to a single-channel image and try the Hough transform on that.
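If you want to experiment with LBP, a naive 3x3 sketch might look like this (it assumes an 8-bit grayscale input; as far as I know OpenCV does not expose a standalone LBP function, so this is hand-rolled):

    #include <opencv2/core.hpp>

    // Naive 3x3 local binary pattern: each output pixel encodes which of the
    // 8 neighbours are >= the centre pixel.
    cv::Mat lbp3x3(const cv::Mat& gray)
    {
        cv::Mat out = cv::Mat::zeros(gray.size(), CV_8U);
        for (int y = 1; y < gray.rows - 1; ++y) {
            for (int x = 1; x < gray.cols - 1; ++x) {
                const uchar c = gray.at<uchar>(y, x);
                uchar code = 0;
                code |= (gray.at<uchar>(y - 1, x - 1) >= c) << 7;
                code |= (gray.at<uchar>(y - 1, x    ) >= c) << 6;
                code |= (gray.at<uchar>(y - 1, x + 1) >= c) << 5;
                code |= (gray.at<uchar>(y,     x + 1) >= c) << 4;
                code |= (gray.at<uchar>(y + 1, x + 1) >= c) << 3;
                code |= (gray.at<uchar>(y + 1, x    ) >= c) << 2;
                code |= (gray.at<uchar>(y + 1, x - 1) >= c) << 1;
                code |= (gray.at<uchar>(y,     x - 1) >= c) << 0;
                out.at<uchar>(y, x) = code;
            }
        }
        return out;
    }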
Related
I'm trying to use the Hough transform during the number plate localization process. I have seen some articles and ideas about finding rectangles with it, but almost every example was quite simple: one rectangle in the image, usually a game card or a TV. When I try to implement that in my system, it doesn't work well. I usually find more than 3000 lines, and many more intersections. I'm using the Canny edge filter. I have tested it with various parameters (both for the Canny filter and the HoughLinesP function) and always get a very large number of points. Is it possible to find the plate when there is a lot of environment information in the image? Or are there any other options for achieving good results? I would appreciate any answers and ideas. Some code samples in OpenCV would be very useful too.
Detecting many line segments is typical for the Hough transform. For example, the letters on the plate might contain straight line segments, as might the surroundings of the plate (a car?) and so on.
So you should try using more context information in your plate detection, such as:
The background color of the plate (e.g. is it white? black? yellow? something else? Is your image data in color?). If so, try filtering for that color.
What size is a typical plate in the image? Is it always roughly the same size? Then you could filter the detected Hough segments by their length. Also look for sets of collinear line segments, which might be parts of a single but broken line.
What orientation do the plates have? Parallel to the main image axes? Or can they be rotated or even warped by perspective projection? In the first case of axis-parallel plates, restrict yourself to Hough lines with orientations of 0° or 90° (see the filtering sketch after this list).
Have you applied contrast normalization to the original image? What do the Canny edge images look like? Are they already suited to finding plates? Can you see the plates in the edge images, or are they hidden among too many edgels or split apart too much? What about the thresholds for the Canny detector?
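As an illustrative sketch of the length/orientation filtering mentioned above (the length threshold and angle tolerance are arbitrary placeholders):

    #include <opencv2/imgproc.hpp>
    #include <cmath>
    #include <vector>

    // Keep only HoughLinesP segments that are long enough and roughly
    // horizontal or vertical. Thresholds are guesses to be tuned.
    std::vector<cv::Vec4i> filterSegments(const std::vector<cv::Vec4i>& lines,
                                          double minLength = 40.0,
                                          double angleTolDeg = 10.0)
    {
        std::vector<cv::Vec4i> kept;
        for (const cv::Vec4i& l : lines) {
            const double dx = l[2] - l[0], dy = l[3] - l[1];
            const double len = std::hypot(dx, dy);
            double angle = std::abs(std::atan2(dy, dx)) * 180.0 / CV_PI;  // 0..180
            if (angle > 90.0) angle = 180.0 - angle;                      // fold to 0..90
            const bool axisAligned = angle < angleTolDeg || angle > 90.0 - angleTolDeg;
            if (len >= minLength && axisAligned)
                kept.push_back(l);
        }
        return kept;
    }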
Finally, have you googled for papers about plate-finding algorithms?
I am trying to detect a ball in a filtered image.
In this image I've already removed the stuff that can't be part of the object.
Of course I tried the HoughCircle function, but I did not get the expected output.
Either it didn't find the ball or there were too many circles detected.
The problem is that the ball isn't completely round.
Screenshots:
I had the idea that it could work if I identify single objects, calculate their centers and check whether the radius is about the same in different directions.
But it would be nice if it also detected the ball when it isn't completely visible.
And with that method I can't detect semicircles or anything like that.
EDIT: These images are from a video stream (real time).
What other method could I try?
Looks like you've used difference imaging or something similar to obtain the images you have? Instead of looking for circles, look for a more generic loop. Suggestions:
Separate all connected components.
For every connected component -
Walk around the contour and collect all contour pixels in a list
Suggestion 1: Use least squares to fit an ellipse to the contour points (a sketch using cv::fitEllipse follows this list)
Suggestion 2: Study the curvature of every contour pixel and check whether it fits a circle or ellipse. This check may be done by computing a histogram of edge orientations for the contour pixels, or by checking the gradients of orientations from contour pixel to contour pixel. In the second case, for a circle or ellipse, the gradients should be almost uniform (ask me if this isn't very clear).
Apply constraints on perimeter, area, lengths of major and minor axes, etc. of the ellipse or loop. Collect these properties as features.
You can either use hard-coded heuristics/thresholds to classify a set of features as ball/non-ball, or use a machine learning algorithm. I would keep it simple at first and just use thresholds obtained after studying some images.
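A rough sketch of the contour walk plus suggestion 1 and the constraint check, assuming the filtered image can be turned into a binary mask (the function name and all thresholds are placeholders):

    #include <opencv2/imgproc.hpp>
    #include <algorithm>
    #include <vector>

    // Find contours (one per connected component), fit an ellipse to each,
    // and keep those whose size and axis ratio look ball-like.
    std::vector<cv::RotatedRect> findBallCandidates(const cv::Mat& binaryMask)
    {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binaryMask.clone(), contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_NONE);

        std::vector<cv::RotatedRect> candidates;
        for (const auto& contour : contours) {
            if (contour.size() < 5)               // fitEllipse needs >= 5 points
                continue;
            cv::RotatedRect e = cv::fitEllipse(contour);
            const float major = std::max(e.size.width, e.size.height);
            const float minor = std::min(e.size.width, e.size.height);
            if (major < 10.f || major > 200.f)    // size constraint (guess)
                continue;
            if (minor / major < 0.6f)             // keep roughly circular shapes
                continue;
            candidates.push_back(e);
        }
        return candidates;
    }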
Hope this helps.
I am trying to detect a large number of small circles that are in relatively close proximity to one another (only about 20 pixels apart) using OpenCV. I have managed to create this mask using cv::inRange() and cv::Canny().
Original Image
Mask
However, when I use cv::HoughCircles() only some of the circles are being detected accurately. Currently, I am using cv::HoughCircles() with the following parameters:
cv::HoughCircles(mat, circles, CV_HOUGH_GRADIENT, 2, mat.rows / 256, 100, 8, 2, 8);
Is this method not effective enough to detect circles that are this small and close together, or do I simply need to modify the parameters of cv::HoughCircles()?
Also, it would be useful to get rid of the "noise" surrounding the array of circles in the middle of the mask because some "false circles" are being detected around the edges of the mask. Is there a simple way to do this?
Get rid of the noise:
If you can make sure you always have the same environment parameters (e.g. distance from the circles, luminosity...), then you could mask your image just after the Canny edge detection with cvAnd; here is what the mask would look like:
Hough circle detection:
Now, about HoughCircles. First, this function performs its own Canny edge detection, and you are running one yourself just before the call to HoughCircles. That may have an impact on the shapes of your circles, because of the way Canny works (i.e. computing intensity gradients on an already binary image...).
Speaking of the shape of your circles, just below is a close-up of what your "circles" look like; I would have been very impressed if HoughCircles had actually detected all, or even just some, of those. They can't give anything good in Hough space. Just to make sure, set the last two parameters (min/max radius) to 0, and try lowering the minimum distance between centers. But honestly, I think you need to find another approach to your problem.
[EDIT]
A possible approach would be to perform connected component labeling (e.g. blob detection). As far as I know, it is not possible to do this simply with OpenCV alone; you will need something like cvblob, which is a very good OpenCV-based blob library. In particular, you might be interested in cvCentroid(CvBlob *blob).
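For a rough sense of the centroid idea (not a substitute for cvblob's richer blob properties), a minimal sketch that approximates blob centroids with plain OpenCV contours and image moments might look like this; the function name is illustrative:

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Approximate blob centroids from a binary mask using contours + moments.
    std::vector<cv::Point2f> blobCentroids(const cv::Mat& binaryMask)
    {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binaryMask.clone(), contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point2f> centroids;
        for (const auto& c : contours) {
            const cv::Moments m = cv::moments(c);
            if (m.m00 > 0)  // skip degenerate contours
                centroids.emplace_back(static_cast<float>(m.m10 / m.m00),
                                       static_cast<float>(m.m01 / m.m00));
        }
        return centroids;
    }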
Cheers
Hmm, do you really need to detect them as circles (as opposed to modelling them as circles)?
If this is some kind of calibration pattern and you are only interested in estimating the image positions of the centers, it may be a lot more efficient to detect them as point-like features first, then process each detection individually, e.g. by fitting a circle to the blob of white pixels in the neighborhood of each detected feature.
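One hedged illustration of the point-like-features idea uses OpenCV's SimpleBlobDetector; the parameter values below are guesses for small bright dots:

    #include <opencv2/features2d.hpp>
    #include <vector>

    // Detect small, roughly circular bright blobs as keypoints; each keypoint's
    // neighborhood can then be processed individually (e.g. circle fitting).
    std::vector<cv::KeyPoint> detectDots(const cv::Mat& gray)
    {
        cv::SimpleBlobDetector::Params params;
        params.filterByColor = true;
        params.blobColor = 255;            // bright blobs on dark background
        params.filterByArea = true;
        params.minArea = 10;               // guesses for small circles
        params.maxArea = 200;
        params.filterByCircularity = true;
        params.minCircularity = 0.7f;

        cv::Ptr<cv::SimpleBlobDetector> detector =
            cv::SimpleBlobDetector::create(params);

        std::vector<cv::KeyPoint> keypoints;
        detector->detect(gray, keypoints);
        return keypoints;
    }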
Finding Circle Edges:
Here are the two sample images that I have posted.
I need to find the edges of the circle.
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? For example:
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for writing an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
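A minimal usage sketch (the blur size, thresholds, and radius bounds are placeholders to be tuned):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Minimal HoughCircles call with explicit radius bounds.
    std::vector<cv::Vec3f> findCircles(const cv::Mat& gray)
    {
        cv::Mat blurred;
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2);  // reduce noise first

        std::vector<cv::Vec3f> circles;  // each entry: (x, y, radius)
        cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                         1,                 // accumulator resolution
                         blurred.rows / 8,  // min distance between centres
                         100,               // upper Canny threshold
                         30,                // accumulator threshold
                         20, 100);          // min and max radius
        return circles;
    }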
I'd imagine your second example picture will be very hard to detect, though.
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could test edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect. In the first image, do you want to detect the white ring or the disc inside it? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles, you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for image processing circle recognition gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If other features define the circle in other scenarios (different colors for background and foreground, etc.) you might need other explicit filters that give a scalar difference for those cases.
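For instance, a local-variance image can be computed with two box filters, using var = E[x²] - (E[x])² over a sliding window (the window size below is an arbitrary assumption):

    #include <opencv2/imgproc.hpp>

    // Local variance per pixel: E[x^2] - (E[x])^2 over a small window.
    cv::Mat localVariance(const cv::Mat& gray, int windowSize = 15)
    {
        cv::Mat f;
        gray.convertTo(f, CV_32F);

        cv::Mat mean, meanOfSquares;
        cv::boxFilter(f, mean, CV_32F, cv::Size(windowSize, windowSize));
        cv::boxFilter(f.mul(f), meanOfSquares, CV_32F,
                      cv::Size(windowSize, windowSize));

        return meanOfSquares - mean.mul(mean);  // high where texture varies strongly
    }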
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines, etc.), an alternative approach is to design a more specific algorithm, e.g. filter out the noise and find the center of gravity.
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how the specific algorithm you have works, but presumably it could take a binary or grayscale image where high values mean a pixel is part of the circle and low values mean it is not; the present algorithm also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent some stage(s) at the end of the complete algorithm. You will then have to add the first stage, which is to generate feature images for every kind of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and perhaps use the hue value? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size, etc.), just iterate over the possible values and make sure your later stages return confidence values.
I am trying to think of the best method to detect rectangles in an image.
My initial thought is to use the Hough transform for lines and then select combinations of lines where two lines are intersected at both their lower and upper portions by the same two other lines, but this is not sufficient.
Would using a corner detector along with the Hough transform work?
Check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start.
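For a rough sense of what that sample does (this is a simplified sketch, not the sample itself; the Canny thresholds and area floor are placeholders): extract contours from an edge map, approximate each contour with a polygon, and keep convex quadrilaterals.

    #include <opencv2/imgproc.hpp>
    #include <cmath>
    #include <vector>

    // Keep convex 4-vertex polygons of reasonable area as rectangle candidates.
    std::vector<std::vector<cv::Point>> findQuads(const cv::Mat& gray)
    {
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        std::vector<std::vector<cv::Point>> quads;
        for (const auto& c : contours) {
            std::vector<cv::Point> approx;
            cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
            if (approx.size() == 4 &&
                std::fabs(cv::contourArea(approx)) > 500 &&   // area floor (guess)
                cv::isContourConvex(approx))
                quads.push_back(approx);
        }
        return quads;
    }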
My answer here also applies.
I don't think that currently there exists a simple and robust method to detect rectangles in an image. You have to deal with many problems such as the rectangles not being exactly rectangular but only approximately, partial occlusions, lighting changes, etc.
One possible direction is to do a segmentation of the image and then check how close each segment is to being a rectangle. Since you can't trust your segmentation algorithm, you can run it multiple times with different parameters.
Another direction is to try to parametrically fit a rectangle to the image such that the image gradient magnitude along the contour will be maximized.
If you choose to work on a parametric approach, notice that while the trivial way to parameterize a rectangle is by the locations of its four corners, which is 8 parameters, there are a few other representations that require fewer parameters (for example, a center, width, height, and rotation angle give only 5).
There is an extension of Hough that can be useful.
http://en.wikipedia.org/wiki/Generalised_Hough_transform