I am attempting to use OpenCV to detect "splotchy" lines in a binary image. I have an image like the one below and would like to robustly detect the three roughly vertical lines of splotches. HoughLines does a decent job, but often fails if only a few of the white pixels in the splotches are perfectly collinear. I've also tried the Generalized Hough transform functions, but they aren't much better.
What I would like to be able to do is have HoughLines perform the Hough transform and give me the image in the Hough space (where each pixel value represents the votes for a particular rho and theta). That way I could look for high density regions instead of peaks, as those regions may better represent the lines I'm looking for.
I can't seem to find a way to get the raw Hough space image however. Is it possible?
Are there better ways to detect splotchy lines like this?
As a side note, I would like to do this all on a GPU, so functions with a `cv::cuda::` API would be best.
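For clarity, this is roughly the kind of accumulator I would like OpenCV to hand back. A minimal CPU sketch of building it manually (the rho/theta resolutions are arbitrary and the input is assumed to be an 8-bit binary image):

#include <opencv2/core.hpp>
#include <cmath>

// Build a Hough accumulator (votes over rho/theta) for a binary image.
// rhoRes and thetaRes are arbitrary resolutions chosen for illustration.
cv::Mat houghAccumulator(const cv::Mat& binaryImage,
                         double rhoRes = 1.0, double thetaRes = CV_PI / 180.0)
{
    const double maxRho = std::hypot(binaryImage.rows, binaryImage.cols);
    const int numRho = cvRound(2 * maxRho / rhoRes) + 1;
    const int numTheta = cvRound(CV_PI / thetaRes);
    cv::Mat acc = cv::Mat::zeros(numRho, numTheta, CV_32S);

    for (int y = 0; y < binaryImage.rows; ++y)
        for (int x = 0; x < binaryImage.cols; ++x)
        {
            if (binaryImage.at<uchar>(y, x) == 0) continue;      // only white pixels vote
            for (int t = 0; t < numTheta; ++t)
            {
                const double theta = t * thetaRes;
                const double rho = x * std::cos(theta) + y * std::sin(theta);
                const int r = cvRound((rho + maxRho) / rhoRes);  // shift so the rho index is non-negative
                acc.at<int>(r, t) += 1;                          // each cell holds the votes for one (rho, theta) pair
            }
        }
    return acc;
}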
I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge detection algorithm), and then match the edges to find the position, orientation, and size of the match.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale and rotation invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this, or a code snippet if possible?
This is my sample input image (the parts to detect are marked in red):
Here are some software packages that do this, and how I want it to behave:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and that you have cleaned the noise from the image.
Note: In my approach, I collect data from the contours of a reference image, which contains my desired object. Then I compare this data with the other contours in the big image.
1. Use Canny edge detection and find the contours on the reference image. You need to make sure that it does not miss any parts of the contours; if it does, the preprocessing probably has problems. The other important point is that you need to choose an appropriate mode for findContours, because each mode has different properties, so find the one that suits your case. At the end, filter the contours so that only the ones relevant to your object remain.
2. After getting the contours from the reference, you can find the length of every contour (e.g. with arcLength) from the output array of findContours(). You can compare these values against the contours in your big image and eliminate the ones that differ too much.
3. minAreaRect computes a tightly fitted, enclosing (rotated) rectangle for each contour. In my case this function is very good to use. I take two measurements from it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours in the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get the percentage of pixels that are close to white or black) and compare it at the end.
4. matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side. (A rough sketch of steps 1-4 follows below.)
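A rough sketch of steps 1-4, assuming grayscale inputs; the thresholds and tolerances are placeholders to tune for your own images:

#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Steps 1-4 condensed: edges -> contours -> length filter -> minAreaRect filter -> matchShapes.
std::vector<std::vector<cv::Point>> findCandidates(const cv::Mat& refGray, const cv::Mat& sceneGray)
{
    auto contoursOf = [](const cv::Mat& gray)
    {
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);                                       // step 1: edges
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    };

    const auto refContours = contoursOf(refGray);
    const auto sceneContours = contoursOf(sceneGray);
    if (refContours.empty()) return {};

    // Simplification: take the longest reference contour as the model.
    const auto& model = *std::max_element(refContours.begin(), refContours.end(),
        [](const auto& a, const auto& b) { return cv::arcLength(a, true) < cv::arcLength(b, true); });
    const double modelLen = cv::arcLength(model, true);                        // step 2: reference length
    const cv::RotatedRect modelRect = cv::minAreaRect(model);                  // step 3: fitted rectangle
    const double modelRatio = std::max(modelRect.size.width, modelRect.size.height) /
                              std::max(1.f, std::min(modelRect.size.width, modelRect.size.height));

    std::vector<std::vector<cv::Point>> candidates;
    for (const auto& c : sceneContours)
    {
        const double len = cv::arcLength(c, true);
        if (std::abs(len - modelLen) > 0.3 * modelLen) continue;               // step 2: length too different
        const cv::RotatedRect r = cv::minAreaRect(c);
        const double ratio = std::max(r.size.width, r.size.height) /
                             std::max(1.f, std::min(r.size.width, r.size.height));
        if (std::abs(ratio - modelRatio) > 0.5) continue;                      // step 3a: edge ratio too different
        if (cv::matchShapes(model, c, cv::CONTOURS_MATCH_I1, 0) > 0.5) continue; // step 4: shape mismatch
        candidates.push_back(c);
    }
    return candidates;
}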
I think matchTemplate is not good to use directly. I draw every contour onto a separate zeroed Mat (a blank black canvas) as a template image and then compare it with the others; using a reference template image directly does not give good results.
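For instance, a small sketch of that idea (the canvas size and drawing choices are only illustrative):

#include <opencv2/imgproc.hpp>
#include <vector>

// Render a single contour onto a blank black canvas so it can be compared
// as a small image (e.g. with matchTemplate) instead of as a raw point list.
cv::Mat contourToCanvas(const std::vector<cv::Point>& contour, cv::Size canvasSize)
{
    // Note: in practice you would first shift the contour so it fits inside the canvas.
    cv::Mat canvas = cv::Mat::zeros(canvasSize, CV_8UC1);
    std::vector<std::vector<cv::Point>> wrapper{contour};   // drawContours expects a list of contours
    cv::drawContours(canvas, wrapper, 0, cv::Scalar(255), cv::FILLED);
    return canvas;
}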
OpenCV has some good algorithms for finding circles, convexity, etc. If your case involves those shapes, you can also use them as an extra step.
At the end, you just gather all the data and values, and you can build a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So make sure you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, it is totally the wrong way. I tried the YOLO and Haar cascade training pipelines several times and also trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be entirely accurate even if your calibration is correct. On top of that, training and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours and it is hard to filter them.
2. Haar cascades -> cut out the bounding box -> check the shape inside.
All "special points/edge matching " algorithms will not work in such bad conditions.
Below is a binary image in which I would like to detect the "hills", semicircles and cutouts marked with the red circle in the image below. The detection does not have to be precise; I just need to know that something like this is in the picture. I am thinking about an algorithm that would use a kind of line-sweep approach, count the black pixels on the line and evaluate that with some kind of heuristic, but before that I would like to know if I am missing any technique that would be easier or more robust. I have tried HoughCircles, but with no good results, because the circles have quite a large radius and there are many of them (HoughCircles takes a grayscale image as input).
Counting pixels should be fine in this simple situation. If you face more complex scenarios with other things you don't want to count in, you should consider a blob detector.
This method searches for regions of connected pixels. Once you have blobs, you can easily sort them by size, shape and position, which helps to get rid of unwanted things.
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
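A minimal sketch with OpenCV's SimpleBlobDetector (the parameter values are only examples to tune, and binaryImage stands for your input mask):

#include <opencv2/features2d.hpp>
#include <vector>

// Detect blobs in a binary/grayscale image and filter them by area.
std::vector<cv::KeyPoint> detectBlobs(const cv::Mat& binaryImage)
{
    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 100;               // example values; tune for your image
    params.maxArea = 100000;
    params.filterByCircularity = false; // could be enabled to prefer round blobs

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> blobs;    // each KeyPoint holds the blob center (pt) and an approximate size
    detector->detect(binaryImage, blobs);
    return blobs;
}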
This is a very basic technique. Please read a book on image processing fundamentals before you continue; it will make life much easier.
As @piglet said, this is a case of blob analysis.
If you want to further characterize and classify the defects, you have to compute some geometrical features of the shapes, such as area, diameter, elongation..., and feed them to a classifier/neural net.
I'm trying to use the Hough transform during the number plate localization process. I have seen some articles and ideas about finding rectangles with it, but almost every example was quite simple: one rectangle in the image, usually a game card or a TV. When I try to implement that in my system, it does not work well. I usually find more than 3000 lines, and far more intersections. I'm using the Canny edge filter. I tested this with several different parameters (for both the Canny filter and the HoughLinesP function) and always got a very large number of points. Is it possible to find the plate when there is a lot of environment information in the image? Or are there any other options to achieve good results? I would appreciate any answers and ideas. Some OpenCV code samples would be very useful too.
Detecting many line segments is typical for the Hough transform: the letters on the plates may contain straight line segments, as may the surroundings of the plate (a car?) and whatever else is in the scene.
So you should try utilizing more context information in your plate detection, such as:
The background color of the plate (e.g. is it white? or black or yellow or something else? is your image data colored?). So, try filtering by color.
What size is a typical plate in the image? Is it always roughly the same size? Then you could filter the found Hough segments by their length. Also look for sets of collinear line segments, which might be parts of a single but broken line (a small filtering sketch follows at the end of this answer).
What orientation do the plates have? Parallel to the main image axes? Or can they be rotated or even warped by depth projection? In the first case of axis-parallel plates, restrict yourself to Hough lines with angle orientations of 0° or 90°.
Have you applied contrast normalization to the original image? What do the Canny edge images look like? Are they already suited for finding plates? Can you see the plates in the edge images, or are they hidden among too many edgels or split apart too much? What about the thresholds for the Canny detector?
Finally, have you googled for papers about plate-finding algorithms?
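To illustrate the length and orientation filtering mentioned above, a small sketch over HoughLinesP output (cv::Vec4i segments); the length bounds and angle tolerance are arbitrary:

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Keep only roughly horizontal or vertical segments within a plausible length range.
std::vector<cv::Vec4i> filterSegments(const std::vector<cv::Vec4i>& segments,
                                      double minLen, double maxLen, double angleTolDeg = 10.0)
{
    std::vector<cv::Vec4i> kept;
    for (const cv::Vec4i& s : segments)
    {
        const double dx = s[2] - s[0], dy = s[3] - s[1];
        const double len = std::hypot(dx, dy);
        if (len < minLen || len > maxLen) continue;                   // wrong length for a plate edge
        double angle = std::abs(std::atan2(dy, dx)) * 180.0 / CV_PI;  // 0..180 degrees
        if (angle > 90.0) angle = 180.0 - angle;                      // fold to 0..90
        if (angle < angleTolDeg || angle > 90.0 - angleTolDeg)        // near-horizontal or near-vertical
            kept.push_back(s);
    }
    return kept;
}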
I am trying to detect a large number of small circles that are in relatively close proximity to one another (only about 20 pixels apart) using OpenCV. I have managed to create this mask using cv::inRange() and cv::Canny().
Original Image
Mask
However, when I use cv::HoughCircles() only some of the circles are being detected accurately. Currently, I am using cv::HoughCircles() with the following parameters:
cv::HoughCircles(mat, circles, CV_HOUGH_GRADIENT, 2, mat.rows / 256, 100, 8, 2, 8);
Is this method not effective enough to detect circles that are this small and close together, or do I simply need to modify the parameters of cv::HoughCircles()?
Also, it would be useful to get rid of the "noise" surrounding the array of circles in the middle of the mask because some "false circles" are being detected around the edges of the mask. Is there a simple way to do this?
Get rid of the noise:
If you can make sure to always have the same environment parameters (e.g. distance from the circle, luminosity...), then you could mask your image just after the Canny edge detection with cvAnd; here is what the mask would look like:
Hough circle detection:
Now, about HoughCircles. First, this function performs its own Canny edge detection, and you are doing one too just before the call to HoughCircles. That may have an impact on the shapes of your circles, because of the way Canny works (i.e. intensity gradient on a binary image...).
Speaking of the shape of your circles, just below is a close-up of what your "circles" look like; I would have been very impressed if HoughCircles had actually detected all, or even just some, of those. It can't give anything good in Hough space. Just to be sure, set the last two parameters (min/max radius) to 0 and try to lower the minimum distance between centers. But honestly, I think you need to find another approach to your problem.
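Concretely, that change would look something like this; everything except the radius limits and the minimum distance is carried over from your original call:

cv::HoughCircles(mat, circles, CV_HOUGH_GRADIENT,
                 2,        // dp, unchanged from your call
                 4,        // minDist between centers, lowered to a small fixed value
                 100, 8,   // Canny high threshold and accumulator threshold, unchanged
                 0, 0);    // minRadius, maxRadius set to 0, i.e. unrestricted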
[EDIT]
A possible approach would be to perform connected component labeling (i.e. blob detection). As far as I know it is not possible to do this simply with OpenCV alone; you will need something like cvblob, which is a very good OpenCV-based blob library. In particular, you might be interested in cvCentroid(CvBlob *blob).
Cheers
Hmm, do you really need to detect them as circles (as opposed to modeling them as circles)?
If this is some kind of calibration pattern, and you are only interested in estimating the image positions of the centers, it may be a lot more efficient to detect them as point-like features first, then process each detection individually, e.g. by fitting a circle to the blob of white pixels in the neighborhood of each detected feature.
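As a minimal sketch of that second stage, assuming you already have an 8-bit mask containing a single blob (the function and variable names are just for illustration):

#include <opencv2/imgproc.hpp>
#include <vector>

// Fit a circle to the white pixels of one blob region.
bool fitCircleToBlob(const cv::Mat& blobMask, cv::Point2f& center, float& radius)
{
    std::vector<cv::Point> whitePixels;
    cv::findNonZero(blobMask, whitePixels);
    if (whitePixels.empty()) return false;
    cv::minEnclosingCircle(whitePixels, center, radius);  // smallest circle containing the blob
    return true;
}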
I am trying to think of the best method to detect rectangles in an image.
My initial thought is to use the Hough transform for lines, and then select combinations where two lines are intersected, at both their lower and upper portions, by the same two other lines; but this is not sufficient.
Would using a corner detector along with the Hough transform work?
Check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start.
My answer here also applies.
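The core idea in that sample is contour approximation followed by a four-corner convexity check; a rough sketch along those lines (the Canny thresholds and the area cutoff are placeholders):

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Return contours whose polygonal approximation has four corners and is convex,
// in the spirit of the squares.c sample.
std::vector<std::vector<cv::Point>> findRectangles(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);
    cv::dilate(edges, edges, cv::Mat());                 // close small gaps in the edge map

    std::vector<std::vector<cv::Point>> contours, rects;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours)
    {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        if (approx.size() == 4 && cv::isContourConvex(approx) &&
            std::fabs(cv::contourArea(approx)) > 100)    // drop tiny detections
            rects.push_back(approx);
    }
    return rects;
}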
I don't think that currently there exists a simple and robust method to detect rectangles in an image. You have to deal with many problems such as the rectangles not being exactly rectangular but only approximately, partial occlusions, lighting changes, etc.
One possible direction is to do a segmentation of the image and then check how close each segment is to being a rectangle. Since you can't trust your segmentation algorithm, you can run it multiple times with different parameters.
Another direction is to try to parametrically fit a rectangle to the image such that the image gradient magnitude along the contour will be maximized.
If you choose to work on a parametric approach, note that while the trivial way to parameterize a rectangle is by the locations of its four corners, which is 8 parameters, there are a few other representations that require fewer parameters (e.g. the center, the orientation, and the two side lengths, which is only 5).
There is an extension of Hough that can be useful.
http://en.wikipedia.org/wiki/Generalised_Hough_transform
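OpenCV ships an implementation of this. A minimal sketch with the Ballard variant, which searches over translation only; the Guil variant (cv::createGeneralizedHoughGuil) also handles rotation and scale but is noticeably slower:

#include <opencv2/imgproc.hpp>

// Locate a template shape in an image with the Generalized Hough transform (Ballard variant).
// Both inputs are assumed to be 8-bit grayscale images.
cv::Mat detectWithGeneralizedHough(const cv::Mat& templGray, const cv::Mat& imageGray)
{
    cv::Ptr<cv::GeneralizedHoughBallard> gh = cv::createGeneralizedHoughBallard();
    gh->setTemplate(templGray);        // edges of the template are extracted internally
    cv::Mat positions;                 // 4-channel float entries: x, y, scale, angle per detection
    gh->detect(imageGray, positions);
    return positions;
}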