OpenCV C++ extract features from binary image

I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest point (greatest y coordinate) is (this will be the intersection).
I do not want to use a findContours()-based method, as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner which looks like pixelated lines, but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect and otherwise there is no enclosed black area. Can I fill the image starting from the corner with say, red, and then calculate how much of the image is still black?
Does anyone have any suggestions?
Thanks

Your suggestion is far slower than finding contours. For binary images, finding contours is very easy and quick because you just need to find a black pixel followed by a white pixel, or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile); from it you will see whether the objects intersect or not.
For example, in the following image, check the letter "n", which is somewhat similar to the non-intersecting case, and the letter "o", which is similar to the intersecting case:
By analyzing the histograms you can tell which one is intersecting and which is not.
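A minimal OpenCV sketch of this idea, assuming a CV_8U binary image with white (255) foreground on black and the two features lying roughly side by side; the function name and the gap-check logic are illustrative only:

```cpp
#include <opencv2/opencv.hpp>

// Returns true if the two features' vertical profiles overlap, i.e. there
// is no empty column between the leftmost and rightmost foreground columns.
bool featuresTouch(const cv::Mat& binary)      // CV_8U, values 0 or 255
{
    cv::Mat colSums;                           // 1 x cols vertical projection
    cv::reduce(binary / 255, colSums, 0, cv::REDUCE_SUM, CV_32S);

    int first = -1, last = -1;                 // span of non-empty columns
    for (int c = 0; c < colSums.cols; ++c)
        if (colSums.at<int>(0, c) > 0) { if (first < 0) first = c; last = c; }

    if (first < 0) return false;               // no foreground at all
    for (int c = first; c <= last; ++c)
        if (colSums.at<int>(0, c) == 0)        // a gap means no intersection
            return false;
    return true;
}
```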

Related

OpenCV edge based object detection C++

I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is not scale- or rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this? Or a code snippet, if possible?
This is my sample input image ( the parts to detect are marked in red )
Here are some software packages that do this, and how I would like mine to work:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and that you have cleaned the noise from the image.
Note: In my approach, I collect data from contours on a reference image of my desired object. Then I compare these data with the other contours on the big image.
Use Canny edge detection and find the contours on the reference image. You need to be sure that it doesn't miss any parts of the contours; if it does, the preprocessing probably has some problems. The other important point is that you need to find an appropriate mode of findContours, because each mode has different properties, so you need to pick the right one for your case. At the end, filter the contours down to the ones that are relevant for you, as in the sketch below.
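A minimal sketch of this step, assuming a preprocessed grayscale input; the Canny thresholds, retrieval mode and the 100-pixel-area cutoff are placeholders to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<std::vector<cv::Point>> referenceContours(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);   // placeholder thresholds

    std::vector<std::vector<cv::Point>> contours;
    // RETR_EXTERNAL keeps only outer contours; try RETR_LIST or RETR_TREE
    // if the inner structure of the part matters.
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Drop contours too small to be the object of interest.
    contours.erase(std::remove_if(contours.begin(), contours.end(),
                       [](const std::vector<cv::Point>& c) {
                           return cv::contourArea(c) < 100.0;   // placeholder
                       }),
                   contours.end());
    return contours;
}
```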
After getting the contours from the reference, you can find the length of every contour using the output of findContours(). You can compare these values with the contours on your big image and eliminate the ones that differ too much; a sketch follows.
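A hedged sketch of that comparison using cv::arcLength; the 20% relative tolerance is an arbitrary placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only the scene contours whose perimeter is within `tol` (relative)
// of the reference contour's perimeter.
std::vector<std::vector<cv::Point>> filterByLength(
    const std::vector<cv::Point>& refContour,
    const std::vector<std::vector<cv::Point>>& sceneContours,
    double tol = 0.2)                              // placeholder tolerance
{
    const double refLen = cv::arcLength(refContour, /*closed=*/true);
    std::vector<std::vector<cv::Point>> kept;
    for (const auto& c : sceneContours)
        if (std::abs(cv::arcLength(c, true) - refLen) / refLen < tol)
            kept.push_back(c);
    return kept;
}
```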
minAreaRect precisely draws a fitted, enclosing rectangle for each contour. In my case, this function is very good to use. I get two parameters using this function (see the sketch after this list):
a) Calculate the short and long edge of the fitted rectangle and compare the values with the other contours on the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get a percentage of how many pixels are close to white or black) and compare at the end.
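A sketch of both measurements, assuming a grayscale image; the struct name and the fixed 128 brightness threshold are illustrative placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

struct RectFeatures { double shortEdge, longEdge, whiteRatio; };

RectFeatures rectFeatures(const cv::Mat& gray, const std::vector<cv::Point>& contour)
{
    // a) short and long edge of the fitted rectangle
    cv::RotatedRect r = cv::minAreaRect(contour);
    double a = r.size.width, b = r.size.height;

    // b) percentage of bright pixels inside the contour region
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8U);
    cv::drawContours(mask, std::vector<std::vector<cv::Point>>{contour},
                     0, cv::Scalar(255), cv::FILLED);
    cv::Mat bright;
    cv::threshold(gray, bright, 128, 255, cv::THRESH_BINARY);  // placeholder
    double ratio = (double)cv::countNonZero(bright & mask)
                 / std::max(1, cv::countNonZero(mask));

    return { std::min(a, b), std::max(a, b), ratio };
}
```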
matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side; a sketch follows.
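A minimal sketch of that final pass; the 0.1 cutoff is an arbitrary placeholder (lower matchShapes scores mean better matches):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Returns the index of the candidate contour that best matches the
// reference, or -1 if no score beats the placeholder cutoff.
int bestShapeMatch(const std::vector<cv::Point>& refContour,
                   const std::vector<std::vector<cv::Point>>& candidates)
{
    double best = 1e9;
    int bestIdx = -1;
    for (size_t i = 0; i < candidates.size(); ++i) {
        double score = cv::matchShapes(refContour, candidates[i],
                                       cv::CONTOURS_MATCH_I1, 0.0);
        if (score < best) { best = score; bestIdx = (int)i; }
    }
    return best < 0.1 ? bestIdx : -1;              // placeholder cutoff
}
```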
I think matchTemplate is not good to use directly. I draw every contour onto a separate zeroed Mat (a blank black image) as a template image and then compare it with the others; using a reference template image directly doesn't give good results.
OpenCV has some good algorithms for finding circles, convexity, etc. If your situation involves these, you can also use them as a step.
At the end, you just gather all the data and values, and you can make a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to build an industrial application, this is totally the wrong way. I tried the YOLO and Haar cascade training algorithms several times and also trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On the other hand, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours and it is hard to filter them.
2. Haar cascades -> cut out the bounding box -> check the shape inside.
All "special points"/edge-matching algorithms will not work in such bad conditions.

Finding order of objects with OpenCV 3

With a group of friends, we are trying to accomplish a computer vision task on a Raspberry Pi, coding in C++ using the OpenCV library.
Let me explain the task first.
There is a pattern consisting of 16 separate squares, each square being red, yellow or blue. We are mounting the Raspberry Pi on a quadcopter with its camera module and gathering a video feed of the pattern.
We have to detect the colors of the squares, which was easy to accomplish with a little research on the web. The tricky part is that we also have to detect the order of the squares, so that we can save the colors in an array in order.
So far we have accomplished filtering the desired colors (red, yellow, blue) to determine the squares.
example pattern to recognize and our process so far
In the second image, we know the colors and center points of each square. What we need is a way to write them out in order, to a file or on screen.
To find the order, we tried several OpenCV methods that find corners. With the corner points at hand, we compared each point and determined the end points so we could draw a bounding rectangle and overcome small distortions.
But since the quadcopter provides the video stream, there is always a chance of high distortion. That messes up our corner theory, resulting in the wrong order of colors. For example, it can capture an image like this:
highly distorted image
It is not right to find the order of these squares by comparing their center points. It also won't work to find end points, draw a larger rectangle around them to flatten the pattern, and then order them...
What I am asking for is algorithm suggestions. Are we going in totally the wrong direction by trying to find corners? Is it possible to determine the order without taking the distortion into consideration?
Thanks in advance.
Take the two centers that are the furthest apart and number them 1 and 16. Then find the two centers that are the furthest from the line 1-16, to the left (number 4) and to the right (number 13). Now you have the four corners.
Compute the affine transform that maps the coordinates of the corners 1, 4 and 13 to (0,0), (3,0) and (0,3). Apply this transform to the 16 centers and round to the nearest integers. If all goes well, you will obtain the "logical" coordinates of the squares, in range [0, 3] x [0, 3]. The mapping to the cell indexes is immediate.
Note that because of symmetry, a fourfold indeterminacy will remain, which you can probably resolve by checking the color patterns.
This procedure will be very robust to deformations. If there is extreme perspective, you can even exploit the four corners to determine a homographic transform instead of an affine one. In your case, I doubt this will be necessary. You can check proper operation by verifying that all expected indexes have been assigned.
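A hedged sketch of this mapping with OpenCV; corner1, corner4 and corner13 follow the numbering above, and the function name is illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Map the 16 detected centers to logical grid coordinates in [0,3] x [0,3]
// using the affine transform defined by three of the corner squares.
std::vector<cv::Point> logicalCoords(const std::vector<cv::Point2f>& centers,
                                     cv::Point2f corner1,
                                     cv::Point2f corner4,
                                     cv::Point2f corner13)
{
    cv::Point2f src[3] = { corner1, corner4, corner13 };
    cv::Point2f dst[3] = { {0.f, 0.f}, {3.f, 0.f}, {0.f, 3.f} };
    cv::Mat M = cv::getAffineTransform(src, dst);   // 2x3 transform

    std::vector<cv::Point2f> mapped;
    cv::transform(centers, mapped, M);              // apply to all centers

    std::vector<cv::Point> logical;                 // rounded grid coordinates
    for (const auto& p : mapped)
        logical.emplace_back(cvRound(p.x), cvRound(p.y));
    return logical;
}
```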

Detect ball/circle in OpenCV (C++)

I am trying to detect a ball in a filtered image.
In this image I've already removed the stuff that can't be part of the object.
Of course I tried the HoughCircles function, but I did not get the expected output.
Either it didn't find the ball or there were too many circles detected.
The problem is that the ball isn't completely round.
Screenshots:
I had the idea that it could work if I identify single objects, calculate their center, and check whether the radius is about the same in different directions.
But it would be nice if it also detected the ball when it isn't completely visible.
And with that method I can't detect semicircles or anything like that.
EDIT: These images are from a video stream (real time).
What other method could I try?
It looks like you've used difference imaging or something similar to obtain the images you have...? Instead of looking for circles, look for a more generic loop. Suggestions:
Separate all connected components.
For every connected component:
Walk around the contour and collect all contour pixels in a list
Suggestion 1: Use least squares to fit an ellipse to the contour points
Suggestion 2: Study the curvature of every contour pixel and check if it fits a circle or ellipse. This check may be done by computing a histogram of edge orientations for the contour pixels, or by checking the gradients of orientation from contour pixel to contour pixel. In the second case, for a circle or ellipse, the gradients should be almost uniform (ask me if this isn't very clear).
Apply constraints on perimeter, area, lengths of major and minor axes, etc. of the ellipse or loop. Collect these properties as features.
You can either use hard-coded heuristics/thresholds to classify a set of features as ball/non-ball, or use a machine learning algorithm. I would first keep it simple and just use thresholds obtained after studying some images.
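As a sketch of suggestion 1 combined with the threshold route, assuming a CV_8U binary input; the axis-ratio and size constraints are placeholders chosen for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::RotatedRect> ballCandidates(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    std::vector<cv::RotatedRect> candidates;
    for (const auto& c : contours) {
        if (c.size() < 5) continue;                  // fitEllipse needs >= 5 points
        cv::RotatedRect e = cv::fitEllipse(c);
        float major = std::max(e.size.width, e.size.height);
        float minor = std::min(e.size.width, e.size.height);
        // A ball should fit a near-circular ellipse of plausible size.
        if (minor / major > 0.7f && major > 10.f && major < 200.f)
            candidates.push_back(e);
    }
    return candidates;
}
```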
Hope this helps.

Finding Circle Edges

Here are the two sample images that I have posted.
I need to find the edges of the circle:
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? For example:
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circles may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine your second example picture will be very hard to detect, though.
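For reference, a minimal cv::HoughCircles sketch; every numeric parameter is a placeholder to tune against the actual images:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec3f> detectCircles(const cv::Mat& gray)
{
    cv::Mat blurred;
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2);  // suppress noise first

    std::vector<cv::Vec3f> circles;                      // (x, y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1,               // dp: accumulator resolution
                     gray.rows / 8,   // minimum distance between centers
                     100, 30,         // Canny high threshold, accumulator threshold
                     10, 100);        // minimum and maximum radius
    return circles;
}
```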
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could test edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect. In the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for "image processing circle recognition" gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If there are other features that define the circle in other scenarios (different colors for background and foreground, etc.) you might need other explicit filters that give a scalar difference for those cases.
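A minimal sketch of such a local-variance feature image, built from box filters via var = E[x²] − (E[x])²; the window size is a placeholder:

```cpp
#include <opencv2/opencv.hpp>

cv::Mat localVariance(const cv::Mat& gray, int k = 7)   // k x k window
{
    cv::Mat f, mean, meanSq;
    gray.convertTo(f, CV_32F);

    cv::boxFilter(f, mean, CV_32F, cv::Size(k, k));          // E[x]
    cv::boxFilter(f.mul(f), meanSq, CV_32F, cv::Size(k, k)); // E[x^2]
    cv::Mat var = meanSq - mean.mul(mean);

    cv::Mat out;                                 // scalar image for segmentation
    cv::normalize(var, out, 0, 255, cv::NORM_MINMAX, CV_8U);
    return out;
}
```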
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines etc) an alternative approach is to design a more specific algorithm e.g. filter out the noise and find center of gravity etc.
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how your specific algorithm works, but presumably it could take a binary or grayscale image where high values mean "pixel is part of the circle" and low values mean "pixel is not part of the circle"; the present algorithm also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent some stage(s) at the end of the complete algorithm.
You will then have to add the first stage, which is to generate feature images for every kind of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size etc.), just iterate over the possible values and make sure your later stages return confidence values.

How to detect points which are drastically different than their neighbours

I'm doing some image processing, and am trying to keep track of points similar to those circled below: a very dark spot a couple of pixels in diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood fill on it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood fill, make sure that any pixel you reach that isn't dark is sufficiently bright.
After the flood-fill, you'll know the size of the dark area you filled in, and whether the border of the fill was exclusively bright pixels. Now, continue the original loop, skipping "used" pixels.
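A hedged sketch of this loop with cv::floodFill; the darkness threshold, fill tolerance and maximum spot area are all placeholders, and the check that the border is exclusively bright is omitted for brevity:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findDarkSpots(const cv::Mat& gray)
{
    cv::Mat work = gray.clone();               // working copy we can overwrite
    std::vector<cv::Rect> spots;

    for (int y = 0; y < work.rows; ++y) {
        for (int x = 0; x < work.cols; ++x) {
            if (work.at<uchar>(y, x) < 50) {   // unvisited dark pixel
                cv::Rect box;
                // Fill with 255 so the region counts as "used"; the loDiff/
                // upDiff tolerances keep the fill inside the dark blob.
                int area = cv::floodFill(work, cv::Point(x, y), cv::Scalar(255),
                                         &box, cv::Scalar(40), cv::Scalar(40));
                if (area <= 30)                // small dark area -> candidate spot
                    spots.push_back(box);
            }
        }
    }
    return spots;
}
```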
How about some kind of median filtering? Sample values from a 3×3 grid (or some other suitable size) around the pixel and set the value of the pixel to the median of those 9 pixels.
Then if most of the neighbours are bright, the pixel becomes bright, etc.
Edit: After some thinking, I realized that this will not detect the outliers; it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want to do an edge-detection-like approach? It seems like comparing the current pixel to the average value of the neighborhood pixels would do the trick. (I would evaluate various neighborhood sizes to be sure.)
Personally I like this manual of corner detection algorithms.
You can also work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is one through which intensity changes drastically in every direction. It is just a starting idea to begin from before moving on to better algorithms.
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds
Morphological operations
Corner detection
I'm actually going to suggest simple template matching for this, if all your features are roughly the same size.
Just copy and paste the pixels of one feature (or a few features) to create a few templates, and then use normalized cross-correlation or any other score that OpenCV provides in its template matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
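A minimal sketch of this approach; thresholding the response map stands in for true peak detection, and the 0.8 cutoff is a placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// templ is assumed to be a small crop around one example feature.
std::vector<cv::Point> findFeatures(const cv::Mat& image, const cv::Mat& templ)
{
    cv::Mat response;
    cv::matchTemplate(image, templ, response, cv::TM_CCOEFF_NORMED);

    // Keep strong responses (simplification: no non-maximum suppression).
    cv::Mat strong = response > 0.8;
    std::vector<cv::Point> peaks;
    cv::findNonZero(strong, peaks);

    std::vector<cv::Point> centers;            // shift to feature centers
    for (const auto& p : peaks)
        centers.push_back(p + cv::Point(templ.cols / 2, templ.rows / 2));
    return centers;
}
```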
Blur (3×3) a copy of your image, then diff it against your original image. The pixels with the highest values are the ones that are most different from their neighbors. This could be used as an edge detection algorithm, but points are like super-edges, so set your threshold higher. A code sketch follows the worked examples below.
What a single off pixel looks like (assume surrounding pixels are all 1):

    original   blurred          diff
    1, 1, 1    8/9, 8/9, 8/9    1/9, 1/9, 1/9
    1, 0, 1    8/9, 8/9, 8/9    1/9, 8/9, 1/9
    1, 1, 1    8/9, 8/9, 8/9    1/9, 1/9, 1/9

What an edge looks like (assume surrounding pixels are the same as their closest neighbor):

    original   blurred          diff
    1, 0, 0    6/9, 3/9, 0/9    3/9, 3/9, 0/9
    1, 0, 0    6/9, 3/9, 0/9    3/9, 3/9, 0/9
    1, 0, 0    6/9, 3/9, 0/9    3/9, 3/9, 0/9
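A minimal sketch of this blur-and-diff detector; the threshold of 60 is a placeholder:

```cpp
#include <opencv2/opencv.hpp>

cv::Mat pointResponse(const cv::Mat& gray)
{
    cv::Mat blurred, diff, points;
    cv::blur(gray, blurred, cv::Size(3, 3));      // 3x3 box blur
    cv::absdiff(gray, blurred, diff);             // large where a pixel differs
                                                  // from its neighbours
    cv::threshold(diff, points, 60, 255, cv::THRESH_BINARY);  // keep "super-edges"
    return points;
}
```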
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark/very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
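A hedged sketch of the binarize-then-open/close idea; the threshold and kernel size are placeholders, and the kernel must stay smaller than the spots of interest or the opening will remove them too:

```cpp
#include <opencv2/opencv.hpp>

cv::Mat isolateDarkPoints(const cv::Mat& gray)
{
    cv::Mat binary;
    // Inverted threshold: dark spots become white foreground.
    cv::threshold(gray, binary, 60, 255, cv::THRESH_BINARY_INV);

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat result;
    cv::morphologyEx(binary, result, cv::MORPH_OPEN, kernel);   // drop lone noise pixels
    cv::morphologyEx(result, result, cv::MORPH_CLOSE, kernel);  // solidify the spots
    return result;
}
```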
Have you tried extracting connected components using cvContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from their neighborhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only the small dots separated from dark neighbors.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
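A hedged sketch of this pipeline using the modern cv::findContours API in place of cvContours; the maximum-area cutoff is a placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> smallDarkComponents(const cv::Mat& gray, int maxArea = 40)
{
    cv::Mat binary;   // Otsu threshold, inverted so dark spots are foreground
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> spots;
    for (const auto& c : contours) {
        cv::Rect box = cv::boundingRect(c);
        if (box.area() <= maxArea)             // filter out the larger components
            spots.push_back(box);
    }
    return spots;
}
```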