Find point of interest on image - C++

I'm trying to track little white dots on the edges of a table. In most cases it works. I'm using the cornerHarris function as it's used in this tutorial: http://docs.opencv.org/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html
Sometimes I run into a problem: reflections of light on the edges create points of interest that I have to ignore.
For example:
I'm searching for the two points nearest the top corners. As you can see, on the right edge I have found the dots (red and green), while on the left edge light noise is a problem (cyan and blue dots).
Does anyone know a method to keep only the white dots in my picture? Thank you, and sorry for my English.

On the purely image-processing side, I would recommend some kind of shape/feature analysis, like comparing the histogram of, say, an 8x8 window around your currently examined point of interest to precomputed ones of the features you want.
This would mean that you first look for points with Harris corners, then compare the features to dismiss unwanted ones (Euclidean distance in the 8x8 = 64-D space?). This of course assumes the existence of a strong feature (read: "taking time to find a good one").
It also assumes you already know what your feature points look like beforehand.
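As a rough sketch of that comparison (Python for brevity, though the question is C++; the OpenCV calls map directly to C++, I compare the raw 8x8 patch rather than its histogram to match the 64-D distance idea, and the file names, window size, and threshold are all placeholders to tune):

    import cv2
    import numpy as np

    gray = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input
    harris = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    ys, xs = np.where(harris > 0.01 * harris.max())

    # precomputed 8x8 patch of a known-good white dot (the "strong feature")
    reference = np.load("dot_patch.npy").astype(np.float32)

    kept = []
    for x, y in zip(xs, ys):
        patch = gray[y-4:y+4, x-4:x+4].astype(np.float32)
        if patch.shape != (8, 8):        # corner too close to the image border
            continue
        # Euclidean distance in the flattened 8x8 = 64-D space
        d = np.linalg.norm(patch.ravel() - reference.ravel())
        if d < 400.0:                    # threshold to tune on real data
            kept.append((x, y))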
An alternative, more on the computer-vision side: use the geometry of your corner points' layout to your advantage: you probably want a distorted rectangle, so make sure you find one! Surely you can compute a function that gives the validity of the last feature point given the 3 others? (Distance to the intersection of the 2 lines generated by the other 3 points...)
The typical and coolest approach would then be to apply RANSAC: try random (but not all!) combinations of your points, check which one fits best using that function, and consider those points good.
If you intend to track over time or over several images, you will have to tune it a bit, as RANSAC can occasionally fail (statistics of random combinations...), and you would then use points from your previous successful run to guesstimate the position.
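To make the validity-function idea concrete, one possible sketch (plain numpy; the parallelogram completion is just a stand-in for whatever geometry your table actually has, and exhaustive enumeration stands in for RANSAC's random sampling):

    import itertools
    import numpy as np

    def fourth_corner_error(others, candidate):
        # Try each of the 3 points as the corner "opposite" the candidate,
        # complete the parallelogram, and keep the best agreement.
        errs = []
        for k in range(3):
            p0 = others[k]
            p1, p2 = [others[j] for j in range(3) if j != k]
            errs.append(np.linalg.norm(candidate - (p1 + p2 - p0)))
        return min(errs)

    def best_quad(points, threshold=10.0):
        pts = [np.asarray(p, dtype=float) for p in points]
        best, best_err = None, np.inf
        # A real RANSAC would draw random 4-subsets instead of enumerating.
        for quad in itertools.combinations(pts, 4):
            for i in range(4):
                others = [quad[j] for j in range(4) if j != i]
                err = fourth_corner_error(others, quad[i])
                if err < best_err:
                    best, best_err = quad, err
        return best if best_err < threshold else None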
Last idea for the moment: use some color-aware derivative technique. Do you compute Harris corners on the RGB image or on a version flattened to gray? Some gradient operators use color information as an extra cue to discern edges, and I'm not sure the corners you're finding use any of those. Then again, it might mean reimplementing the Harris corner algorithm (try it, it's fun, and not that hard if you have a good algebra library to do the heavy lifting).
I recommend the geometric fitting test, as it wisely uses model information about your system rather than assumptions about what reflections look like.
A really fun introduction to RANSAC: danielwedge.com/ransac/
Edit: trusty Photoshop knows what I mean: I highlighted the invalid shapes.
Valid grid, Photoshop says so.
Invalid grid, logical, right?


Find the Peaks of contour in Python-OpenCV

I have a binary image/contour containing four human beings, and I want to detect/count all of them. Since there are occlusions, I think it is best to find the head/maximum in the contour of each human. In that case the humans can be counted.
I am able to get the global maximum / topmost point (in calculus language), but I want to get all the local maxima.
The code for finding the topmost point is as suggested by Adrian in his blog post, i.e.:

    topmost = tuple(biggest_contour[biggest_contour[:,:,1].argmin()][0])

Can anyone please suggest how to get all the local maxima, instead of just the topmost location?
Here is the sample of my Image:
The definition of "local maximum" can be tricky to pin down, but if you start with a simple method you'll develop an intuition to look further. Even if there are methods available on the web to do this work for you, it's worth implementing a few basic techniques yourself before you go googling.
One simple method I've used in the past goes something like this:
Find the contours as arrays/lists/containers of (x,y) coordinates.
At each element N (a pixel) in the list, get the pixels at N - D and N + D; that is, the pixel D ahead of the current pixel and the pixel D behind it
Calculate the point-to-point distance
Calculate the distance along the contour from N-D to N+D
Calculate (distance along contour) / (point-to-point distance)
...
There are numerous other ways to do this, but this is quick to implement from scratch, and I think a reasonable starting point: Compare the "geodesic" distance and the Euclidean distance.
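Here's a minimal sketch of those steps (Python/OpenCV; D, the 1.3 ratio, and the "higher than both neighbors" check are starting values to tune, not recommendations):

    import cv2
    import numpy as np

    binary = cv2.imread("people.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
    # OpenCV 4.x return signature; 3.x returns (image, contours, hierarchy)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)

    D = 15                  # how far ahead/behind along the contour to look
    n = len(contour)
    peaks = []
    for i in range(n):
        a = contour[(i - D) % n].astype(float)
        b = contour[(i + D) % n].astype(float)
        chord = np.linalg.norm(a - b)              # point-to-point distance
        along = 2 * D       # approx. arc length; adjacent pixels are 1 to sqrt(2) apart
        # high ratio = contour bends sharply; smaller y = higher in the image
        if chord > 0 and along / chord > 1.3 \
                and contour[i][1] < a[1] and contour[i][1] < b[1]:
            peaks.append(tuple(contour[i]))

Neighboring contour pixels will all pass the test around a real head, so you'd still want to cluster the surviving points and keep one representative per cluster.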
A few other possibilities:
Do a bunch of curve fits to chunks of pixels from the contour. (Lots of details to investigate here.)
Use Ramer-Douglas-Peucker to render the outlines as polygons, then choose parameters to ensure those polygons are appropriately simplified. (Second time I've mentioned R-D-P today; it's handy.) Check for vertices with angles that deviate much from 180 degrees.
Try a corner detector. Crude, but easy to implement.
Implement an edge follower that moves from one pixel to the next in the contour list, and calculate some kind of "inertia" as the pixel shifts direction. This wouldn't be useful on a pixel-by-pixel basis, but you could compare, say, pixels N-1,N,N+1 to pixels N+1,N+2,N+3. Or just calculate the angle between them.

how to detect the coordinates of certain points on image

I'm using the ORB algorithm to detect the crossing of the rope shown in the image and get its coordinates, represented by the red dot. I want to detect the coordinates of the four points surrounding the crossing, represented by the blue dots. All four points are the same distance from the red spot.
Any idea how to get their coordinates by making use of the red spot's coordinates?
Thank you in advance.
Although you're using ORB, you're still going to need an algorithm to segment the rope from the background, or at least some technique to identify image chunks that belong to the rope and that are equidistant from the red dot. There are a number of options to explore.
It's important to consider your lighting & imaging as separate problems to be solved if this is meant to be a real-world application. This looks a bit like a problem for a class rather than for an application you'll sell and support, but you should still consider lighting:
Will your algorithm(s) still work when light level is reduced?
How will detection be affected by changes in camera pose relative to the surface where the rope will be located?
If you'll be detecting "black" rope, will the algorithm also be required to detect rope of different colors? dirty rope? rope on different backgrounds?
Since your object of interest is rope, you have to consider a class of algorithms suitable for detection of non-rigid objects. Always consider the simplest solution first!
Connected Components
Connected components labeling is a traditional image processing algorithm and still suitable as the starting point for many applications. The last I knew, this was implemented in OpenCV as findContours(). This can also be called "blob finding" or some variant thereof.
https://en.wikipedia.org/wiki/Connected-component_labeling
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours
Depending on lighting, you may have to take different steps to binarize the image before running connected components. As a start, convert the color image to grayscale, which will simplify the task significantly.
Try a manual threshold since you can quickly test a number of values to see the effect. Don't be too discouraged if the binarization isn't quite right--this can often be fixed with preprocessing.
If a range of manual thresholds works (e.g. 52 - 76 in an 8-bit grayscale range), then use an algorithm that will automatically calculate the threshold for you: Otsu, entropy-based methods, etc., will all offer comparable performance. Whichever technique works best, the code/algorithm can be tweaked further to optimize for your rope application.
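A minimal sketch of both the manual and the automatic variant (Python/OpenCV; the file name and the value 60 are placeholders):

    import cv2

    gray = cv2.imread("rope.png", cv2.IMREAD_GRAYSCALE)

    # manual threshold; dark rope on a light floor, hence THRESH_BINARY_INV
    _, manual = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

    # automatic threshold (Otsu); the returned value tells you where Otsu
    # landed relative to your working manual range
    otsu_value, auto = cv2.threshold(gray, 0, 255,
                                     cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)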
If thresholding and binarization don't work--which for your rope application seems unlikely, at least how you've presented it--then switch to thinking in terms of gradient-based (edge-based, energy-based) techniques.
But assuming you can separate the rope from the background, you're still going to need a method to start at the red dot [within the rope] and move equal distances out to the blue points. More about that later after a discussion of other rope segmentation methods.
Note: connected components labeling can work in scenarios beyond just binarizing black & white images. If you can create a texture field or some other 2D representation of the image that makes it possible to distinguish the black rope from the relatively light background, you may be able to use a connected components algorithm. (Finding a "more complicated" or "more modern" algorithm isn't necessarily going to be the right approach.)
In a binarized image, blobs can be nested: on a white background you can have several black blobs, inside of one or more of which are white blobs, inside of which are black blobs, etc. An earlier version of OpenCV handled this reasonably well. (OpenCV is a nice starting point, and a touchpoint for many, but for a number of reasons it doesn't always compare favorably to other open source and commercial packages; popularity notwithstanding, OpenCV has some issues.)
Once you have a "blob" (a 4-connected region of pixels) in a 2D digital image, you can treat the blob as an object, at which point you have a number of options:
Edge tracing: trace around the inside and outside edges of the blob. From what I recall, OpenCV does (or at least should) have some relatively straightforward method to get the edges.
Split the blob into component blobs, each of which can be treated separately
Convert the blob to a polygon
...
A connected components algorithm should be high on the list of techniques to try if you have a non-rigid object.
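A short sketch of both routes, continuing from the binarized image `auto` above:

    import cv2

    # route 1: connected components labeling
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(auto)
    # label 0 is the background; the rope should be the largest remaining blob
    rope_label = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
    rope_mask = (labels == rope_label).astype("uint8") * 255

    # route 2: findContours, which also reports nesting via the hierarchy
    contours, hierarchy = cv2.findContours(auto, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)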
Boolean Operations
Once you have the rope as a connected component (and possibly even without this), you can use boolean image operations to find the spots at the blue dots in your image:
Create a circular region in data, or even in the image
Find the intersection of the circle (an annulus) and the black region representing the rope. Using your original image, you should have four regions.
Find the center point of the intersection regions.
You could even try this without using connected components at all, but using connected components as part of the solution could make it more robust.
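A sketch of the boolean route, assuming `rope_mask` from above and that the red-dot coordinate (cx, cy) and a radius R are already known:

    import cv2
    import numpy as np

    ring = np.zeros_like(rope_mask)
    cv2.circle(ring, (cx, cy), R, 255, thickness=3)    # annulus ~3 px wide

    hits = cv2.bitwise_and(ring, rope_mask)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(hits)
    blue_points = centroids[1:]    # skip background; expect four centroids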
Polygon Simplification
If you have a blob, which in your application would be a connected set of black pixels representing the rope on the floor, then you can consider converting this blob to one or more polygons for further processing. There are advantages to working with polygons.
If you consider only the outside boundary of the rope, then you can see that the set of pixels defining the boundary represents a polygon. It's a polygon with a lot of points, and not a convex polygon, but a polygon nonetheless.
To simplify the polygon, you can use an algorithm such as Ramer-Douglas-Peucker:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
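OpenCV ships an implementation of this as approxPolyDP(). A sketch, reusing `rope_mask` from above (epsilon is a tolerance to tune):

    import cv2

    contours, _ = cv2.findContours(rope_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea)
    epsilon = 0.005 * cv2.arcLength(boundary, True)    # fraction of the perimeter
    polygon = cv2.approxPolyDP(boundary, epsilon, True)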
Once you have a simplified polygon, you can try a few techniques to render useful data from it:
Angle Bisector Network
Triangulation (e.g. using ear clipping)
Triangulation is typically sensitive to initial conditions, so the resulting triangulations for slightly different polygons (that is, rope -> blob -> polygon -> simplified polygon) can differ from run to run. So in your application it might be useful to triangulate the dark rope region, and then to connect the center of each triangle to the center of the next nearest triangle. You'll also have to deal with crossings, such as the rope overlap. Ultimately this can yield a "skeletonization" of the rope. Speaking of which...
Skeletonization
If the rope problem was posed to you as a class exercise, then it may have been a prompt to try skeletonization. You can read about it here:
https://en.wikipedia.org/wiki/Topological_skeleton
Skeletonization and thinning have their own problems to solve, but you should dig into them a bit and see those problems for yourself.
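A one-liner to get started (I use scikit-image here, since plain OpenCV only ships thinning in the contrib module cv2.ximgproc; `rope_mask` as above):

    import numpy as np
    from skimage.morphology import skeletonize

    skeleton = skeletonize(rope_mask > 0)              # boolean in, boolean out
    skeleton_img = (skeleton * 255).astype(np.uint8)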
The Medial Axis Transform (MAT) is a related concept. Long story there.
Edge-based techniques
There are a number of techniques to generate "edge images" based on edge strength, energy, entropy, etc. Making them robust takes a little effort. If you've had academic training in image processing you've likely heard of Harris, Sobel, Canny, and similar processing methods--none are magic bullets, but they're simple and dependable and will yield the data you need.
An "edge image" consists of pixels representing the image gradient strength [and sometimes the gradient direction]. People may call this edge image something else, but it's the concept that matters.
What you then do with the edge data is another subject altogether. But one reason to think of edge images (or at least object borders) is that it reduces the amount of information your algorithm(s) will need to process.
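A minimal edge-image sketch with Sobel, giving gradient strength plus direction (`gray` as before):

    import cv2

    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)    # gradient strength per pixel
    direction = cv2.phase(gx, gy)        # gradient direction, in radians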
Mean Shift (and related)
To get back to segmentation mentioned in the section on connected components, there are other methods for segmenting figures from a background: K-means, mean shift, and so on. You probably won't need any of those, but they're neat and worth studying.
Stroke Width Transform
This is an intriguing technique used to extract text from noisy backgrounds. Although it's intended for OCR, it could work for rope since the rope width is relatively constant, the rope shape varies, there are crossings, etc.
In short, and simplifying quite a bit, you can think of SWT as a means to find "strokes" (thick lines) by finding gradients antiparallel to each other. On either side of a stroke (or line), the edge gradient points normal to the object edge. The normal on one side of the stroke points opposite the direction of the normal on the other side of the stroke. By filtering for pixel-gradient pairs within a certain distance of each other, you can isolate certain strokes--even automatically. For your example the collection of points representing edge pairs for the rope would be much more common than other point pairs.
Non-Rigid Matching
There are techniques for matching non-rigid shapes, but they're not worth exploring yet. If any of the techniques I mentioned above is unfamiliar to you, explore some of those first before you try any fancier algorithms.
CNNs, machine learning, etc.
Just don't even think of these methods as a starting point.
Other Considerations
If this were an application for industry, security, or whatnot, you'd have to determine how well your image processing worked under all environmental considerations. That's not an easy task, and can make all the difference between a setup that "works" in the lab and a setup that actually works in practice.
I hope that's of some help. Feel free to post a reply if I've confused more than helped, or if you want to explore some idea in more detail. Though I tried to touch on some common(ish) techniques, I didn't mention all the different ways of addressing this problem.
And briefly: once you have a skeleton, point network, or whatever representing a reduced data set for the rope and the red dot (the identified feature), here are a few techniques to find the items at the blue dots:
For a skeleton, trace along each "branch" of the rope outward from the knot until the geodesic distance or straight-line 2D distance is the distance D that you want.
To use geometry, create a circle of width 1 - 2 pixels. Find the intersection of that circle and the rope. Find the center point of the intersections of circle and rope. (Also described above.)
Good luck!

OpenCV C++ extract features from binary image

I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect, and where the lowest point is (greatest y coordinate; this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner which looks like pixelated lines but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect and otherwise there is no enclosed black area. Can I fill the image starting from the corner with say, red, and then calculate how much of the image is still black?
Does anyone have any suggestions?
Thanks
Your suggestion is much slower than finding contours. For binary images, finding contours is very easy and quick because you just need to find a black pixel followed by a white pixel or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection or vertical profile; from it you will see whether the objects intersect or not.
For example, in the following image check the letter "n", which is somewhat like the non-intersecting objects, and the letter "o", which is like the intersecting objects:
By analyzing the histograms you can recognize which one is intersecting or not.
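A sketch of the projection itself (Python/numpy; this assumes white features on a black background, so invert first if your binarization is the other way around):

    import cv2
    import numpy as np

    binary = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    profile = (binary > 0).sum(axis=0)       # object pixels per column

    # columns with no object pixels separate the two features; if no such
    # gap exists between them, the features touch
    gaps = np.where(profile == 0)[0]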

Finding Circle Edges

Here are the two sample images that I have posted.
Need to find the edges of the circle:
Is it possible to develop one generic circle algorithm that could find all possible circles in all the scenarios below?
1. The circle may be a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for writing an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine your second example picture will be very hard to detect, though.
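For reference, a minimal HoughCircles sketch (Python/OpenCV; every parameter value below is just a starting point to tune against your images):

    import cv2
    import numpy as np

    gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)           # HoughCircles is noise-sensitive
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30,
                               minRadius=20, maxRadius=200)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(gray, (int(x), int(y)), int(r), 255, 2)   # draw detections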
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could try edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect: in the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for "image processing circle recognition" gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If there are other features that define the circle in other scenarios (different colors for background and foreground, etc.) you might need other explicit filters that give a scalar difference in those cases.
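The local-variance image is cheap to compute with two box filters, since var = E[x^2] - E[x]^2 over the window (a sketch; window size 7 and the file name are arbitrary placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("circle2.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    mean = cv2.blur(img, (7, 7))
    mean_sq = cv2.blur(img * img, (7, 7))
    variance = mean_sq - mean * mean   # scalar image; the textured edge stands out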
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines etc) an alternative approach is to design a more specific algorithm e.g. filter out the noise and find center of gravity etc.
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how your specific algorithm works, but presumably it could take a binary or grayscale image where high values mean "pixel is part of the circle" and low values mean "pixel is not part of the circle"; the algorithm also needs to give some kind of confidence value for the circle it finds. This would then represent the stage(s) at the end of the complete algorithm. You will then have to add a first stage that generates feature images for every kind of input you want to handle. For the two examples it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size etc.), just iterate over the possible values and make sure your later stages return confidence values.

Detecting curves in OpenCV

I am just starting to use OpenCV to detect specific curves in an image. First, I want to verify whether there is a curve, and next, I would like to identify its type: vertically or horizontally convex or concave. Is there an available function in OpenCV? If not, can you give me some ideas about how I could write such a function? Thanks! By the way, I'm using C++.
Template matching is not a robust way to solve this problem (it's like looking at an object through a small pinhole), and edge detectors don't necessarily return the true edges in the image; false edges, such as those due to shadows, are returned too. Further, you have to deal with the problem of incomplete edges and other problems that scale up with the complexity of the scene in your image.
The problem you posed, in general, is a very challenging one and, except for toy examples, there are no good solutions.
A rough attempt could be to first detect plausible edges using an edge detector (e.g. the suggested Canny edge detector). Next, use RANSAC to try to fit a subset of the points in the detected edges to your curve model.
For example, let's say you are trying to detect a curve of the form f(x) = ax^2 + bx + c. RANSAC will basically try to find, from among the points in the detected edges, a subset that best fits this curve model. To detect different curves, change f(x) accordingly and run RANSAC for each of them. You can then try to determine whether the curve represented by f(x) really exists in your image using some heuristic applied to the points that RANSAC assigned to it (e.g. if too few points were fitted to the model, it is likely that the curve is not there; but how do you determine a good threshold for the number of points?). Your model will get more complex when you have to account for allowable transformations such as rotation.
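A bare-bones sketch of that loop (plain numpy; the iteration count and the 2-pixel tolerance are illustrative only):

    import numpy as np

    def ransac_parabola(points, iters=500, tol=2.0):
        pts = np.asarray(points, dtype=float)        # (N, 2) edge coordinates
        best_model, best_inliers = None, None
        rng = np.random.default_rng()
        for _ in range(iters):
            sample = pts[rng.choice(len(pts), 3, replace=False)]
            if np.unique(sample[:, 0]).size < 3:
                continue                 # need 3 distinct x values for a parabola
            model = np.polyfit(sample[:, 0], sample[:, 1], 2)   # a, b, c
            residual = np.abs(np.polyval(model, pts[:, 0]) - pts[:, 1])
            inliers = residual < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_model, best_inliers = model, inliers
        return best_model, best_inliers

The final inlier count is exactly the heuristic mentioned above: too few inliers and the curve probably isn't there.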
The problem with this approach is that you are basically trying to fit what you think should be in the image to the points, and sometimes, even if what you are looking for is not there, it will return the "best possible" fit. For example, say you have a whole bunch of points detected from a concentric circle. If you try to detect straight lines from these points, RANSAC will return the best-fit line! In fact, it could give you many different lines from different runs depending on which points it selected during its random initialization stage.
For more details on how to use RANSAC on this sort of problem, have a look at RANSAC for Dummies by Marco Zuliani. He also has a nice MATLAB toolbox to accompany this tech report, which you can probably port to the language of your choice.
Unless you know what your background looks like, or you are in control of it, e.g. by forcing a clean background, this is a very difficult problem to solve.