I am trying to use OpenCV's SimpleBlobDetector to determine the colors on the face of a Rubik's cube. I have been using this fantastic resource so far and it has proved to be very helpful and descriptive. After a fair bit of tweaking, I have been successful in making good filters for each color. I look at the x and y position of each color blob, and since the cubies are evenly spaced, do a quick rounding division to determine which row and column each belongs to, with groups of two being split and assigned to two different rows/columns respectively.
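For reference, this is roughly what my classification step looks like in Python (the file name, face origin, and cubie spacing below are placeholder values):

    import cv2

    # One colour filter's binary mask (white blobs on black); file name is a placeholder.
    mask = cv2.imread("green_filter.png", cv2.IMREAD_GRAYSCALE)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255                        # detect the white blobs
    detector = cv2.SimpleBlobDetector_create(params)

    # Assumed face geometry: top-left cubie centre (x0, y0) and spacing `cell`.
    x0, y0, cell = 50.0, 50.0, 100.0              # placeholder numbers
    for kp in detector.detect(mask):
        col = int(round((kp.pt[0] - x0) / cell))  # kp.pt is the reported centroid
        row = int(round((kp.pt[1] - y0) / cell))
        print(f"blob at {kp.pt} -> row {row}, col {col}")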
This is more of a curiosity question than anything else. To my eye, it looks like the centroids are being calculated incorrectly... shouldn't the drawn circle be more centered in each blob? Yet both seem to be sticking out arbitrarily to one side.
Below, I have the original image of the cube, and two of the color filters, the top for green, and the bottom for blue.
As you can see, the green and blue blobs are positioned correctly, and should be far enough away from each other to be classified into separate rows, but the centroids visually seem to be skewed from the centers of the blobs (green centroid should be more to the right, and blue more to the left). Is there something that I am not picking up on here? Is this just a quirk of the system?
I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest (y coordinate is greatest) point is (this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top-right corner which looks like pixelated lines; methods such as dilation and erosion would remove it, but they lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect, and otherwise there is no enclosed black area. Could I fill the image starting from a corner with, say, red, and then calculate how much of the image is still black?
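To make the fill idea concrete, here is an untested sketch of what I mean, using OpenCV's floodFill (the file name, the threshold, and the assumption that the top-left corner is background are all mine):

    import cv2
    import numpy as np

    img = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name
    # Features assumed white on a black background; flip the threshold if not.
    _, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # Paint the outside background from the top-left corner (assumed background).
    mask = np.zeros((bw.shape[0] + 2, bw.shape[1] + 2), np.uint8)
    cv2.floodFill(bw, mask, (0, 0), 255)

    # Any black left over is background the fill couldn't reach: an enclosed area.
    if np.any(bw == 0):
        ys, xs = np.where(bw == 0)
        i = ys.argmax()                  # greatest y = lowest point
        print("touching; intersection near", (int(xs[i]), int(ys[i])))
    else:
        print("not touching")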
Does anyone have any suggestions?
Thanks
Your suggestion is way slower than finding contours. For binary images, finding contours is very easy and quick, because you just need to find a black pixel followed by a white pixel or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile); from it you will see whether the objects intersect or not.
For example, in the following image, check the letter "n", which looks somewhat like the non-intersecting objects, and the letter "o", which is similar to the intersecting objects:
By analyzing the histograms you can recognize which one is intersecting and which is not.
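A rough sketch of the projection idea in Python (the binarization threshold is just a guess):

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    fg = img > 128             # assuming features are bright; invert if they're dark

    profile = fg.sum(axis=0)   # vertical projection: foreground count per column
    occupied = profile > 0

    # Count runs of occupied columns: two runs = separate objects, one = intersecting.
    rising = np.diff(occupied.astype(int)) == 1
    n_runs = int(rising.sum()) + int(occupied[0])
    print("intersecting" if n_runs == 1 else "separate")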
I'm trying to extract the cube from the image (looks like a square...). I've used Canny and dilate to get the edges and remove the noise.
I'm not even sure if it is possible to get the square out in a robust way.
Advice appreciated!
Thanks.
It's not excessively hard.
Sort all edges by direction. Look for a pair of edges in one direction with another pair 90 degrees rotated. Check for rough proximity. If so, they probably form a rectangle. Check the edge distances to pick the squares from the rectangles, and to discard small squares. Check if you have sufficiently large parts of the edge to be convinced the entire edge must exist. An edge might even be broken in 2. Check if the 4 edges now found delimit an area that is sufficiently uniform.
The last bit is a bit tricky. That's domain knowledge. Could there be other objects inside the square, and could they touch or overlap the edges of the square?
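A rough sketch of the edge-pairing steps above, using OpenCV's probabilistic Hough transform to get the edge segments (all thresholds here are illustrative guesses, and the uniformity check is left out):

    import cv2
    import numpy as np

    def seg_angle(s):
        x1, y1, x2, y2 = s
        return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

    def midpoint(s):
        return np.array([(s[0] + s[2]) / 2.0, (s[1] + s[3]) / 2.0])

    def square_candidates(gray, angle_tol=5.0, spacing_tol=0.15):
        edges = cv2.Canny(gray, 50, 150)
        segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
        if segs is None:
            return []
        segs = segs.reshape(-1, 4)

        # Pairs of segments pointing in (roughly) the same direction.
        def parallel_pairs():
            for i in range(len(segs)):
                for j in range(i + 1, len(segs)):
                    diff = abs(seg_angle(segs[i]) - seg_angle(segs[j]))
                    if min(diff, 180.0 - diff) < angle_tol:
                        yield i, j

        pairs = list(parallel_pairs())
        squares = []
        for a, b in pairs:
            for c, d in pairs:
                # The two pairs must be roughly 90 degrees apart...
                ang = abs(seg_angle(segs[a]) - seg_angle(segs[c]))
                if abs(min(ang, 180.0 - ang) - 90.0) > angle_tol:
                    continue
                # ...and have matching spacing: a square, not just a rectangle.
                s1 = np.linalg.norm(midpoint(segs[a]) - midpoint(segs[b]))
                s2 = np.linalg.norm(midpoint(segs[c]) - midpoint(segs[d]))
                if abs(s1 - s2) < spacing_tol * max(s1, s2):
                    squares.append((segs[a], segs[b], segs[c], segs[d]))
        return squares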
You can utilize color information and kmeans clustering as explained in the link.
As long as the target object's color differs from the background, the pixels of the square object can be detected accurately.
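For instance, a minimal k-means sketch in Python (k=2, i.e. object vs. background, is an assumption, and the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("cube.png")                     # placeholder file name
    pixels = img.reshape(-1, 3).astype(np.float32)

    # Cluster pixel colours into k=2 groups.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 2, None, criteria,
                                    3, cv2.KMEANS_RANDOM_CENTERS)

    # Pixels in one colour cluster form a mask; which cluster is the object
    # depends on the image, so inspect both.
    mask = (labels.reshape(img.shape[:2]) == 0).astype(np.uint8) * 255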
Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: on the left side you can see a sample bitmap which my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you can click the 'background' (what's background and what's not is determined by some threshold) and then the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine if I just have the rectangular bounding areas for objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of - and then some) but in the vast amount of information I couldn't find the way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that a PhD is not required here :)
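For the rectangular boxes specifically, and assuming a plain white background, OpenCV's connected-components API gets you there in a few lines (a sketch; the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("sprites.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    fg = (img != 255).astype(np.uint8)     # background assumed pure white

    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):                  # label 0 is the background
        x, y, w, h, area = stats[i]
        print(f"sprite {i}: x={x} y={y} w={w} h={h} (area {area})")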
These are my first thoughts, none complicated, except for the edge detection:

    for each square:
        if it's not white:
            mark it as "found"
            if you haven't found one next to it already:
                add it to the points list
    for each point in the points list:
        use basic edge detection to find the outline
        keep track of the bounds while doing so
        add the bounds to the shapes list
    remove duplicates from the shapes list (this can happen for concave shapes)
I just realized this will consider white "holes" (like in your leftmost circle in your sample) to be their own shapes. If the first "loop" is a flood fill instead, it doesn't have this problem, but it will be much slower/take much more memory.
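A sketch of the flood-fill variant in Python, leaning on OpenCV's floodFill, which also reports the bounding rect of each region it fills (the background value is an assumption):

    import cv2
    import numpy as np

    def sprite_bounds(img, bg=255):
        # Binarize: anything that isn't the background colour is "sprite".
        fg = (img != bg).astype(np.uint8)
        mask = np.zeros((fg.shape[0] + 2, fg.shape[1] + 2), np.uint8)
        rects = []
        for y, x in zip(*np.nonzero(fg)):
            if mask[y + 1, x + 1]:
                continue                 # already swallowed by an earlier fill
            # floodFill reports the bounding rect of the region it filled.
            _, fg, mask, rect = cv2.floodFill(fg, mask, (int(x), int(y)), 0)
            rects.append(rect)           # (x, y, width, height)
        return rects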
The basic edge detection I was thinking of was simple:
    given eight cardinal directions: left, downleft, etc...
    given two relative directions: cw(direction - 1) and ccw(direction + 1)
    starting with a point "begin":
        set bounds to the point
        find a direction d where begin+d is not white and begin+cw(d) is white
        set current to begin+d
        do
            if current is outside of bounds, increase bounds
            set d = cw(d)
            while (cur+d is white or cur+ccw(d) is not white)
                d = ccw(d)
            cur = cur + d
        while (cur != begin)
http://ideone.com/
There are quite a few edge cases not considered here: what if begin is a single point, what if the outline runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.
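For what it's worth, a direct (mostly untested) Python translation of the outline above; the concrete direction table and the turn guard are my additions, and the edge cases just listed are still unhandled:

    import numpy as np

    def trace_bounds(img, begin):
        # img[y, x] is True for white pixels; begin is a known dark (y, x) point.
        h, w = img.shape
        # Eight directions in counter-clockwise order (y grows downward),
        # so cw(d) = d - 1 and ccw(d) = d + 1 as in the pseudocode.
        dirs = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                (0, -1), (1, -1), (1, 0), (1, 1)]

        def white(p):
            y, x = p
            return y < 0 or y >= h or x < 0 or x >= w or img[y, x]

        def step(p, d):
            return (p[0] + dirs[d][0], p[1] + dirs[d][1])

        cw = lambda d: (d - 1) % 8
        ccw = lambda d: (d + 1) % 8

        # Find a direction d where begin+d is not white and begin+cw(d) is white.
        d = next(d for d in range(8)
                 if not white(step(begin, d)) and white(step(begin, cw(d))))
        cur = step(begin, d)
        ymin = ymax = begin[0]
        xmin = xmax = begin[1]
        while cur != begin:
            ymin, ymax = min(ymin, cur[0]), max(ymax, cur[0])
            xmin, xmax = min(xmin, cur[1]), max(xmax, cur[1])
            d = cw(d)
            for _ in range(8):   # guard: at most 8 turns before we must move
                if not white(step(cur, d)) and white(step(cur, ccw(d))):
                    break
                d = ccw(d)
            cur = step(cur, d)
        return xmin, ymin, xmax, ymax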
I'm doing some image processing, and am trying to keep track of points similar to those circled below: a very dark spot of a couple of pixels' diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood-fill on it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood-fill, make sure that any pixel you reach that isn't dark is sufficiently bright.
After the flood-fill, you'll know the size of the dark area you filled in, and whether the border of the fill was exclusively bright pixels. Now, continue the original loop, skipping "used" pixels.
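A sketch of that loop in Python (the darkness/brightness thresholds and the maximum spot size are guesses to be tuned):

    from collections import deque
    import numpy as np

    def find_dark_spots(img, dark=60, bright=180, max_size=9):
        # img: 2D uint8 array. Flood-fill each dark region, track its size,
        # and require every non-dark neighbour of the region to be bright.
        h, w = img.shape
        used = np.zeros((h, w), bool)
        spots = []
        for sy, sx in zip(*np.where(img < dark)):
            if used[sy, sx]:
                continue
            queue, region, border_ok = deque([(sy, sx)]), [], True
            used[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if img[ny, nx] < dark:
                        if not used[ny, nx]:
                            used[ny, nx] = True
                            queue.append((ny, nx))
                    elif img[ny, nx] < bright:
                        border_ok = False   # a grey neighbour: not a crisp spot
            if border_ok and len(region) <= max_size:
                spots.append(region)
        return spots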
How about some kind of median filtering? Sample values from a 3×3 grid (or some other suitable size) around the pixel and set the pixel's value to the median of those 9 samples.
Then if most of the neighbours are bright the pixel becomes bright etc.
Edit: After some thinking, I realized that this will not detect the outliers, it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want to do an edge detection-like approach? It seems like comparing the current pixel to the average value of the neighborhood pixels would do the trick. (I would evaluate various neighborhood sizes to be sure.)
Personally, I like this manual of corner detection algorithms.
You can also work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is one through which intensity changes drastically in every direction. It is just a starting idea to begin from and move on to better algorithms.
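As a starting-point sketch of that idea in Python (the jump threshold is a guess; a pixel is flagged only when even its most similar neighbour differs by at least that much):

    import numpy as np

    def isolated_pixels(img, min_jump=60):
        img = img.astype(np.int32)
        # Smallest absolute difference to any of the 8 neighbours.
        worst = np.full(img.shape, np.iinfo(np.int32).max, dtype=np.int32)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == dx == 0:
                    continue
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                worst = np.minimum(worst, np.abs(img - shifted))
        # True where intensity changes drastically in *every* direction.
        # (np.roll wraps at the borders, so ignore a 1-pixel frame.)
        return worst >= min_jump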
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds
Morphological operations
Corner detection
I'm actually going to suggest simple template matching for this, if all your features are of roughly the same size.
Just copy-paste the pixels of one (or a few) features to create a few templates, and then use Normalized Cross Correlation or any other score that OpenCV provides in its template matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
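A minimal sketch with OpenCV's matchTemplate (the file names and the 0.8 score cut-off are placeholders, and overlapping hits would still need non-maximum suppression):

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # placeholder names
    tmpl = cv2.imread("feature_template.png", cv2.IMREAD_GRAYSCALE)

    res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)  # normalized x-corr
    th, tw = tmpl.shape
    ys, xs = np.where(res > 0.8)
    for x, y in zip(xs, ys):
        # (x, y) is the match's top-left corner; offset to the feature centre.
        print("feature near", (x + tw // 2, y + th // 2))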
Blur (3x3) a copy of your image, then diff it against your original image. The pixels with the highest values are the ones that are most different from their neighbors. This could be used as an edge detection algorithm, but points are like super-edges, so set your threshold higher.
What a single off pixel looks like (assume the surrounding pixels are all 1):

    original    blurred            diff
    1, 1, 1     8/9, 8/9, 8/9      1/9, 1/9, 1/9
    1, 0, 1     8/9, 8/9, 8/9      1/9, 8/9, 1/9
    1, 1, 1     8/9, 8/9, 8/9      1/9, 1/9, 1/9

What an edge looks like (assume the surrounding pixels are the same as their closest neighbor):

    original    blurred            diff
    1, 0, 0     6/9, 3/9, 0/9      3/9, 3/9, 0/9
    1, 0, 0     6/9, 3/9, 0/9      3/9, 3/9, 0/9
    1, 0, 0     6/9, 3/9, 0/9      3/9, 3/9, 0/9
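In OpenCV terms the whole thing is a few lines (the threshold value is a guess; as noted, set it high to keep points and drop ordinary edges):

    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)         # placeholder name
    blurred = cv2.blur(img, (3, 3))                              # 3x3 box blur
    diff = cv2.absdiff(img, blurred)   # large where a pixel differs from neighbours
    _, spots = cv2.threshold(diff, 100, 255, cv2.THRESH_BINARY)  # high threshold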
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark/very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
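A sketch of that pipeline with OpenCV, using closing here (the kernel size is a guess; Otsu picks the binarization cut):

    import cv2
    import numpy as np

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder name
    # Binarize: get rid of the grey middle values.
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    # Closing removes the small dark spots from the bright field...
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
    # ...so the difference is a field of black except the spots themselves.
    spots = cv2.absdiff(closed, bw)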
Have you tried extracting connected components using cvContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from the neighborhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only small dots separate from dark neighbors.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
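A sketch of that in modern OpenCV (the size cut-off is a guess for spots "a couple of pixels" across):

    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder name
    # Invert so the dark spots become white foreground for findContours.
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h <= 25:                  # keep only small bounding rectangles
            spots.append((x + w // 2, y + h // 2))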