I have extracted the contours of an image using C++ and OpenCV. Now I need to check the linearity of every contour (just checking, not detecting lines), so that I can eliminate contours whose linearity falls above or below a threshold.
I found this great paper: http://milos.stojmenovic.com/Publications_files/P0386.pdf
However, the methods are too complex to implement in a robust way. It is possible to perform the linearity check on a set of N random points, which works well but is not robust.
Any suggested solutions? Many thanks.
Use the minAreaRect function to get a RotatedRect. For a line, one of the dimensions of the RotatedRect will be very small.
Related
I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale and rotation invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they're either using an old version of OpenCV, or just not working with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this, or a code snippet if possible?
This is my sample input image ( the parts to detect are marked in red )
Here are some software packages that do this, and also how I want it to work:
This topic is what I have actually been dealing with for a year on a project. So I will try to explain my approach and how I am doing it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and be sure you have cleaned the noise from the image.
Note: In my approach, I am collecting data from contours on a reference image, which is my desired object. Then I compare this data with the other contours on the big image.
Use Canny edge detection and find the contours on the reference image. You need to be sure that it doesn't miss any parts of the contours; if it does, the preprocessing step probably has problems. The other important point is that you need to choose an appropriate mode for findContours, because every mode has different properties, so you need to find the one appropriate for your case. At the end, filter the contours down to the ones that are okay for you.
After getting contours from the reference, you can find the length of every contour using the output array of findContours(). You can compare these values with those on your big image and eliminate the contours which are very different.
minAreaRect fits a precise, enclosing rotated rectangle to each contour. In my case, this function is very good to use. I extract two parameters using it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with those of the other contours on the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get a percentage of how many pixels are close to white or black) and compare at the end.
matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side.
I think matchTemplate is not good to use directly. I draw every contour onto a separate blank black image (a Mat of zeros) as a template image and then compare with the others. Using the reference template image directly doesn't give good results.
OpenCV has some good algorithms for finding circles, convexity, etc. If they are relevant to your situation, you can also use them as a step.
At the end, you just collect all the data and values, and you can make a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, it is totally the wrong way. I tried the YOLO and Haar cascade training algorithms several times and also trained some objects with them. The experience I got is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On the other hand, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours, and it's hard to filter them.
2. Haar cascades -> cut the bounding box -> check the shape inside.
All "special points / edge matching" algorithms will not work in such bad conditions.
I have tried to use edge detection to find the contours of images and compare their similarity with the matchShapes function. However, the results are not as good as expected. I think this may be because the images are not aligned before calculating the similarity. Therefore, I am asking for a way to align two contours in OpenCV. My idea is to first find the smallest bounding box or circle, then work out the translation, rotation, or resizing needed to align those boxes, apply that transformation to the contour, and then test their similarity. Does this method work? Is there any method to align images? Thanks for your help. For reference, attached are two contours to be tested. They should be very similar, but the distance found is quite large: the first two images have a larger distance than that between the first and the last one, which seems to contradict what it looks like (the last one should be the worst). Thanks.
These kinds of problems are known as registration problems. CPD, BCPD, and ICP would be your best shot.
https://github.com/neka-nat/probreg
I have performed the thinning operation on a binary image with the code provided here. The source image which I used was this one.
And the result image which I obtained after applying the thinning operation on the source image was this one.
The problem I am facing is how to remove the noise in the image, which is visible around the thinned white lines.
In this particular case, the easiest and safest solution is to label the connected components (union-find algorithm) and delete the ones with an area of only one or two pixels.
FiReTiTi and kcc__ have already provided good answers, but I thought I'd provide another perspective. Having looked through some of your previous posts, it appears that you're trying to build software that uses vascular patterns on the hand to identify people. So at some point, you will need to build some kind of classification algorithm.
I bring this up because many such algorithms are quite robust in the presence of this kind of noise. For example, if you intend to use supervised learning to train a convolutional neural net (which would be a reasonable approach assuming you can collect a decent amount of training samples), you may find that extensive pre-processing of this sort is unnecessary, and may even degrade the performance.
Just some thoughts to consider. Cheers!
Another simple, but perhaps not so robust, option is to use the contour area to remove small connected regions, then use erode/dilate before applying the thinning process.
However, you can also process your thinned image directly by using cv::findContours() and masking out contours with a small area. This is similar to what FiReTiTi answered.
You can use the findContours example from OpenCV to build contour detection with an edge detector such as Canny. The example can be ported directly for your requirement.
Once you have the contours in vector<vector<Point>> contours;, you can iterate over each contour and use cv::contourArea to find the area of each region. Using a predefined threshold, you can remove unwanted areas.
In my opinion, why don't you use a distance transform on the first image and then apply a size filter to the resulting image to de-speckle it?
In Matlab, there is a function contour (Matlab contour). If I use this on my image, I get what I want. But my goal is to implement such a function in my image editor myself. I read Matlab's documentation for the contour function and, based on that, used the Marching Squares algorithm. However, my result looks ugly: contours cross each other, and I get a very high number of nested contours, which are eliminated in Matlab.
Does anyone know of a solution for generating contours from a grayscale image at, let's say, every 10th brightness value?
The OpenCV source for their contouring algorithm is available.
One of the simplest serious algorithms is Paul Bourke's CONREC (with source available), and there is a simple discussion of popular approaches at imageprocessingplace.
I am just starting to use OpenCV to detect specific curves in an image. First, I want to verify whether there is a curve, and next I would like to identify the type of curve: vertically or horizontally convex or concave. Is there an available function in OpenCV? If not, can you give me some ideas about how I could write such a function? Thanks! By the way, I'm using C++.
Template matching is not a robust way to solve this problem (it's like looking at an object through a small pinhole), and edge detectors don't necessarily return the true edges in the image; false edges, such as those due to shadows, are returned too. Further, you have to deal with the problem of incomplete edges and other problems that scale up with the complexity of the scene in your image.
The problem you posed, in general, is a very challenging one and, except for toy examples, there are no good solutions.
A rough attempt could be to first try to detect plausible edges using an edge detector (e.g. the canny edge detector suggested). Next, use RANSAC to try to fit a subset of the points in the detected edges to your curve model.
For example, let's say you are trying to detect a curve of the form f(x) = ax^2 + bx + c. RANSAC will basically try to find, from among the points in the detected edges, a subset that best fits this curve model. To detect different curves, change f(x) accordingly and run RANSAC for each of them. You can then try to determine whether the curve represented by f(x) really exists in your image using some heuristic applied to the points that RANSAC assigned to it (e.g. if too few points were fitted to the model, the curve is likely not there; but how do you determine a good threshold for the number of points?). Your model will get more complex when you have to account for allowable transformations such as rotation.
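A minimal, self-contained RANSAC sketch for that parabola model (the iteration count, inlier tolerance, and fixed seed are assumptions for the demo; three points determine the model exactly, and inliers are points within vertical distance `tol` of the curve):

```cpp
#include <array>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Parabola { double a, b, c; int inliers; };

// RANSAC for f(x) = a*x^2 + b*x + c: repeatedly sample three points,
// solve for the interpolating parabola, and keep the model with the
// largest inlier count.
Parabola ransacParabola(const std::vector<std::array<double,2>>& pts,
                        int iterations = 200, double tol = 1.0)
{
    Parabola best{0, 0, 0, -1};
    std::srand(42);                            // fixed seed for the demo
    int n = int(pts.size());
    for (int it = 0; it < iterations; ++it)
    {
        int i = std::rand() % n, j = std::rand() % n, k = std::rand() % n;
        if (i == j || j == k || i == k) continue;
        double x1 = pts[i][0], y1 = pts[i][1];
        double x2 = pts[j][0], y2 = pts[j][1];
        double x3 = pts[k][0], y3 = pts[k][1];
        double d = (x1 - x2) * (x1 - x3) * (x2 - x3);
        if (std::fabs(d) < 1e-9) continue;     // degenerate sample
        // Closed form for the parabola through the three sampled points.
        double a = (x3*(y2 - y1) + x2*(y1 - y3) + x1*(y3 - y2)) / d;
        double b = (x3*x3*(y1 - y2) + x2*x2*(y3 - y1) + x1*x1*(y2 - y3)) / d;
        double c = y1 - a * x1 * x1 - b * x1;
        int inliers = 0;
        for (const auto& p : pts)
            if (std::fabs(a * p[0] * p[0] + b * p[0] + c - p[1]) < tol)
                ++inliers;
        if (inliers > best.inliers) best = {a, b, c, inliers};
    }
    return best;
}
```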
The problem with this approach is that you are basically trying to fit what you think should be in the image to the points, and sometimes, even if what you are looking for is not there, it will return you the "best possible" fit. For example, say you have a whole bunch of points detected from a concentric circle. If you try to detect straight lines from these points, RANSAC will return you the best-fit line! In fact, it could give you many different lines from different runs, depending on which points it selected during its random initialization stage.
For more details on how to use RANSAC on this sort of problem, have a look at RANSAC for Dummies by Marco Zuliani. He also has a nice MATLAB toolbox to accompany this tech report, which you can probably port to the language of your choice.
Unless you know what your background looks like, or you are in control of it (e.g. by forcing a clean background), this is a very difficult problem to solve.