Finding the middle point of 2 parallel contours - c++

Couldn't find this with search, but I'm not sure how to describe it anyway, which is one reason the problem has been so hard to find the right approach for. Sorry if it's already been asked or if the problem description is too vague.
The problem: I am detecting the contours of a curving, painted physical path of consistent width from an image using OpenCV's findContours. I need to build a vector of points in the middle of those two edges along the length of the path, so as to trace the painted path with a single vector.
I was wondering if there is a way to find points within a certain pixel distance in image space, in various other contours. I can iterate through them all and find the closest ones that way, but that's time-intensive.
If there were, I could search the other contour vectors for points about the right width away, use the two points to estimate a midpoint, and add that to the growing middle vector.
Or, if there's a better approach to converting the detected path into a single, workable vector, that would work too.

Sounds like you're looking for the "skeleton" - it helps if you know the terms.
If you can transform the image into a black-and-white (binary) image that still shows the path, it's trivial: iterate the erosion procedure until no more points are eroded.
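For example, a minimal sketch of that idea as a morphological skeleton in OpenCV, assuming `binary` is an 8-bit single-channel image with the path in white:

    #include <opencv2/opencv.hpp>

    // Morphological skeleton: repeatedly erode, and collect the points that
    // an opening would remove, until the image is fully eroded away.
    cv::Mat skeletonize(const cv::Mat& binary)   // binary: path in white (255)
    {
        cv::Mat img = binary.clone();
        cv::Mat skel = cv::Mat::zeros(img.size(), CV_8UC1);
        cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
        cv::Mat eroded, opened;

        while (cv::countNonZero(img) > 0) {
            cv::erode(img, eroded, element);
            cv::dilate(eroded, opened, element);    // opening of img
            cv::subtract(img, opened, opened);      // points the opening removes
            cv::bitwise_or(skel, opened, skel);     // accumulate into the skeleton
            eroded.copyTo(img);
        }
        return skel;
    }

The resulting skeleton is a one-pixel-wide midline of the path, which you can then collect into a single vector of points.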
Another approach is to accept that finding the first pair of points is expensive; once you've found one pair, though, the next pair is easily found in the respective 3x3 neighborhoods.
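A hypothetical sketch of that idea; `a` and `b` stand for the two edge contours from findContours, and the ±2 search window around the previous match is an assumption:

    #include <opencv2/opencv.hpp>
    #include <vector>

    static double sqDist(const cv::Point& p, const cv::Point& q)
    {
        double dx = p.x - q.x, dy = p.y - q.y;
        return dx * dx + dy * dy;
    }

    // a, b: the two edge contours (assumed roughly parallel).
    std::vector<cv::Point2f> traceMidline(const std::vector<cv::Point>& a,
                                          const std::vector<cv::Point>& b)
    {
        std::vector<cv::Point2f> mid;
        if (a.empty() || b.empty()) return mid;

        // Expensive brute-force search, done only once, for the first point of a.
        size_t j = 0;
        for (size_t k = 1; k < b.size(); ++k)
            if (sqDist(a[0], b[k]) < sqDist(a[0], b[j])) j = k;

        const long n = static_cast<long>(b.size());
        for (size_t i = 0; i < a.size(); ++i) {
            // Cheap local search: only neighbors of the previous match in b.
            size_t best = j;
            for (long off = -2; off <= 2; ++off) {
                size_t k = static_cast<size_t>(((static_cast<long>(j) + off) % n + n) % n);
                if (sqDist(a[i], b[k]) < sqDist(a[i], b[best])) best = k;
            }
            j = best;
            mid.emplace_back((a[i].x + b[j].x) * 0.5f, (a[i].y + b[j].y) * 0.5f);
        }
        return mid;
    }

This assumes the two contours run in the same direction; if one is reversed, iterate `b` backwards.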

Related

OpenCV edge based object detection C++

I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge-detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale- and rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following but wasn't able to get them to work (they're either using an old version of OpenCV, or just not working with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this? Or a code snippet for the same, if possible?
This is my sample input image (the parts to detect are marked in red).
These are some software packages that do this, and they also show how I want it to work:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I am doing it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and be sure you have cleaned the noise from the image.
Note: In my approach, I collect data from the contours of a reference image, which shows my desired object. Then I compare these data with the other contours in the big image.
Use Canny edge detection and find the contours on the reference image. You need to be sure that it doesn't miss parts of the contours; if it does, the preprocessing probably has problems. The other important point is that you need to find an appropriate mode for findContours, because every mode has different properties, so you need to find the one appropriate for your case. At the end, keep only the contours that are okay for you.
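For reference, a minimal sketch of this step in C++ (the Canny thresholds and the retrieval mode are placeholder choices to tune for your images):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<std::vector<cv::Point>> referenceContours(const cv::Mat& refGray)
    {
        cv::Mat edges;
        cv::Canny(refGray, edges, 50, 150);        // thresholds: tune for your images
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours,
                         cv::RETR_EXTERNAL,        // pick the mode that fits your case
                         cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }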
After getting the contours from the reference, you can find the length of every contour using the output array of findContours(). You can compare these values against your big image and eliminate the contours that differ too much.
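A hedged sketch of that filter, assuming `ref` is a reference contour and `scene` holds the big image's contours (the 20% tolerance is an assumption):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    std::vector<std::vector<cv::Point>> filterByLength(
        const std::vector<cv::Point>& ref,
        const std::vector<std::vector<cv::Point>>& scene)
    {
        double refLen = cv::arcLength(ref, /*closed=*/true);
        std::vector<std::vector<cv::Point>> kept;
        for (const auto& c : scene)
            if (std::abs(cv::arcLength(c, true) - refLen) / refLen < 0.2)
                kept.push_back(c);     // keep only contours of similar length
        return kept;
    }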
minAreaRect computes a tight, rotated enclosing rectangle for each contour. In my case, this function is very good to use. I get two parameters using this function:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours in the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get a percentage of how many pixels are close to white or black) and compare it at the end.
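A hedged sketch combining (a) and (b) for a single contour; `gray` and the variable names are assumptions:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    void contourFeatures(const std::vector<cv::Point>& contour, const cv::Mat& gray,
                         double& shortEdge, double& longEdge, double& whiteness)
    {
        cv::RotatedRect rect = cv::minAreaRect(contour);      // (a) fitted rectangle
        shortEdge = std::min(rect.size.width, rect.size.height);
        longEdge  = std::max(rect.size.width, rect.size.height);

        cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);  // (b) mean intensity
        std::vector<std::vector<cv::Point>> one{contour};
        cv::drawContours(mask, one, 0, 255, cv::FILLED);
        whiteness = cv::mean(gray, mask)[0] / 255.0;          // 1.0 = pure white
    }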
matchShapes can be applied at the end to the remaining contours, or you can also apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side.
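A minimal sketch of that comparison (the 0.1 threshold is an assumed value to calibrate on your own data):

    #include <opencv2/opencv.hpp>
    #include <vector>

    bool shapesSimilar(const std::vector<cv::Point>& refContour,
                       const std::vector<cv::Point>& candidate)
    {
        // Lower scores mean more similar shapes; 0.1 is an assumed threshold.
        double score = cv::matchShapes(refContour, candidate,
                                       cv::CONTOURS_MATCH_I1, 0.0);
        return score < 0.1;
    }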
I think matchTemplate is not good to use directly. I draw every contour onto a separate zeroed Mat (a blank black surface) as a template image and then compare it with the others. Using the reference template image directly doesn't give good results.
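For example, a sketch of that template-per-contour idea (the 1-pixel line width is an assumption):

    #include <opencv2/opencv.hpp>
    #include <climits>
    #include <vector>

    // Render one contour onto its own black canvas, so later comparisons are
    // not polluted by the rest of the image.
    cv::Mat contourTemplate(const std::vector<cv::Point>& contour)
    {
        cv::Rect box = cv::boundingRect(contour);
        cv::Mat tmpl = cv::Mat::zeros(box.size(), CV_8UC1);
        std::vector<std::vector<cv::Point>> one{contour};
        cv::drawContours(tmpl, one, 0, 255, 1, cv::LINE_8, cv::noArray(),
                         INT_MAX, cv::Point(-box.x, -box.y)); // shift into the canvas
        return tmpl;
    }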
OpenCV has some good algorithms for finding circles, convexity, etc. If your situation involves them, you can also use them as a step.
At the end, you just gather all the data and values, and you can make a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So be sure you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, this is totally the wrong way. I tried YOLO and Haar cascade training algorithms several times and also trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On top of that, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality. You will get a lot of wrong contours, and it's hard to filter them.
2. Haar cascades -> cut out the bounding box -> check the shape inside.
All "special points/edge matching " algorithms will not work in such bad conditions.

Efficient way of computing slopes of edge lines

I performed edge detection on images (with Python 2.7 and OpenCV 3.2) and have results like the following picture, i.e. one-pixel-wide edges, not necessarily closed (they can have "loose ends"), and with possible holes:
Now I would like to get the "derivative" of these edges, meaning the "slope" at each point, as in the following image:
For the moment, the only way I have managed to do it is very local. For each point of the edge (in red in the next "zoomed" picture), I create a circle around it (in pink), mask the circle with the edge to get the red point's neighbors, then compute the slope from these two neighbors.
However, it can be quite messy if edges have holes (which they often do) or are close to other edges (which they often are), and masking all the points is pretty computationally intensive, so I wonder if there is a better way.
My first idea was spline interpolation, but you need to give an ordered list of points as input, which you can't have for a given edge unless you use a pixel-neighbor tracking algorithm, which can also get quite messy in the case of not-that-good edges.
I also thought of findContours, but it needs closed edges, or else it yields the contour of a one-pixel-wide edge, i.e. two lines on both sides of the edge, starting at an arbitrary location on the edge; in short, it's a mess.
Is there a cleaner and more efficient way than my current method to achieve what I want? Does OpenCV have any resources for this, or is its job done after edge detection (I think the latter is more probable!)?
P.S.: "I don't think there is a better way" is an answer I'm ready to accept!
So, if I understood everything correctly, what you need is an ordered list of your points, free from holes, because after that it seems you know how to proceed to obtain your result. So you should concentrate on getting an ordered, gapless list.
findContours does output an ordered list, but probably not in the order you need. It groups connected pixels with a top-down / left-right priority: it sweeps each row sequentially, and when it hits a white pixel, it has found the first contour. So, in your image, the first contour it finds is actually the one on the right, since it has the Y value closest to 0.
In the case of this particular image, if you rotate it 90 degrees, you'll realize that it will actually order your contours and points in the way you need. But will this always be the case? Only you can tell. If there is a pre-processing method you can apply to your images that will guarantee that findContours orders your pixels in the correct way, the rest will be easy. If not, I suggest you create your own pixel-connectivity algorithm that works as you need it to, since all your problems depend on getting an ordered list.
Once you have the ordered list, just interpolate the missing pixels.
If you have an ordered set of pixels, "closing the gaps" is easy: you just need to find the gaps and interpolate across them, an approximation that probably won't hurt your algorithm.
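A minimal sketch of that interpolation, assuming `pts` is the ordered list of edge pixels (the name is a placeholder):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Wherever consecutive points are more than one pixel apart, insert
    // linearly interpolated points to close the gap.
    std::vector<cv::Point> closeGaps(const std::vector<cv::Point>& pts)
    {
        std::vector<cv::Point> out;
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            out.push_back(pts[i]);
            cv::Point d = pts[i + 1] - pts[i];
            int steps = std::max(std::abs(d.x), std::abs(d.y));
            for (int s = 1; s < steps; ++s)      // points inside the gap
                out.push_back(pts[i] + cv::Point(d.x * s / steps, d.y * s / steps));
        }
        if (!pts.empty()) out.push_back(pts.back());
        return out;
    }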

opencv - Contour Alignment and comparison

I have tried using edge detection to find the contours of images and to compare the similarity of the contours with the matchShapes function. However, the results are not as good as expected. I think it may be because the images are not aligned before the similarity is calculated. Therefore, I am asking for a way of aligning two contours in OpenCV. I am thinking of aligning them by first finding the smallest bounding box or circle, then finding the translation, rotation, or resizing needed to align those boxes, applying those transformations to the contour, and testing their similarity. Does this method work? Is there any method to align images? Thanks for your help. For your reference, attached are two contours to be tested. They should be very similar, but the distance found is quite large. The first two images have a larger distance between them than between the first and the last one, which seems to contradict what it looks like (the last one should be the worst). Thanks.
These kinds of problems are known as registration problems. CPD (Coherent Point Drift), BCPD (Bayesian Coherent Point Drift), and ICP (Iterative Closest Point) would be your best shot.
https://github.com/neka-nat/probreg

How to use marching squares for multiple contours?

How do I make marching squares proceed after it finds the first contour?
The contours in the image I am working on are going to change quite often, and because I am in an embedded environment (Android/iOS), I would like a high-performance solution above all.
Using an external library isn't an option.
I tried connected-component labeling but never got it to work, as I have a PNG which isn't black and white (isn't thresholded), and if I'm not mistaken, CCL only works on black-and-white (binary) images.
I thought about saving the blob information to another vector and checking whether newly found pixels fall within the earlier-found blobs, but I don't think that's fast enough: as the vector fills with more and more blobs, it gets more and more expensive to check every blob inside it.
That leaves me with my almost-finished current approach, which is erasing the contours I find and repeating until there's nothing left, but that seems expensive too.
If there is no fast solution, can anyone suggest a different approach, even if that means a different algorithm?
Mark 1: I chose marching squares because I only need the outline of the contours, even if there are holes in them.
I solved my problem by using an implementation of the marching-squares algorithm that I found in the Chipmunk2D physics library.

Counting objects on a grid with OpenCV

I'm relatively new to OpenCV, and I'm working on a project where I need to count the number of objects on a grid. The grid is the background of the image, and in each space there either is an object or there isn't; I need to count the number present, and I don't really know where to start. I've searched here and elsewhere but can't seem to find what I'm looking for. I will need to be tracking the space numbers of the grid in the future, so I will also eventually need to know whether each grid space is occupied or empty. I'm not going so far as to ask for a coded example, but does anybody know of any sources or tutorials to accomplish this task or one similar to it? Thanks for your help!
Further Details: images will come from a stable-mounted camera, objects are of relatively uniform shape, but varying size and color.
I would first answer a few questions:
Will an object be completely enclosed in a grid cell? Or can it be placed on top of a grid line? (In other words, will the object hide a line from the camera?)
Will more than one object be in one cell?
Can an object occupy more than one cell? (closely related to question 1)
Given reasonable answers to those questions, I believe the problem can be broken into two parts: first, identify the center of each grid space. To count objects, you can then sample that region to see if anything "not background" is there.
You can then assume that a grid space is defined by four strong, regularly placed corner features. (For the sake of discussion, I'll assume you've performed the initial image preparation as needed: histogram equalization, Gaussian blur for noise reduction, etc.) From there, you might try some of OpenCV's methods for finding corners (the Harris corner detector, cvGoodFeaturesToTrack, etc.). It's likely that you can borrow some of the techniques found in OpenCV's square-finding example (samples/c/square.c). For this task, it's probably sufficient to assume that the grid center is just the centroid of each set of "adjacent" (or sufficiently near) corners.
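For instance, a minimal sketch using the C++ counterpart of cvGoodFeaturesToTrack (the parameter values are placeholders to tune):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // gray: preprocessed grayscale image of the grid (assumed).
    std::vector<cv::Point2f> gridCorners(const cv::Mat& gray)
    {
        std::vector<cv::Point2f> corners;
        cv::goodFeaturesToTrack(gray, corners,
                                /*maxCorners=*/500,
                                /*qualityLevel=*/0.01,
                                /*minDistance=*/10);
        // Grid centers could then be estimated as the centroids of each
        // group of four sufficiently-near corners.
        return corners;
    }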
Alternatively, you might use the Hough transform to identify the principal horizontal and vertical lines in the image. You can then determine the intersection points to identify the extents of each grid cell. This implementation might be more challenging since inferring structure (or adjacency) from "nearby" vertices in order to find a grid center seems more difficult.
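A hedged sketch of that alternative, assuming `edges` is a Canny edge image (the vote threshold and angle tolerance are placeholders):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Split the detected Hough lines into near-horizontal and near-vertical.
    void principalLines(const cv::Mat& edges,
                        std::vector<cv::Vec2f>& horizontal,
                        std::vector<cv::Vec2f>& vertical)
    {
        std::vector<cv::Vec2f> lines;
        cv::HoughLines(edges, lines, 1, CV_PI / 180, /*threshold=*/150);
        for (const cv::Vec2f& l : lines) {
            float theta = l[1];
            if (std::abs(theta - CV_PI / 2) < 0.1)          // theta ~ 90 degrees
                horizontal.push_back(l);
            else if (theta < 0.1 || theta > CV_PI - 0.1)    // theta ~ 0/180 degrees
                vertical.push_back(l);
        }
        // For an axis-aligned grid, a vertical line (x = rho_v) and a
        // horizontal line (y = rho_h) intersect at (rho_v, rho_h), which
        // marks a grid-cell corner.
    }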