I am trying to use opencv's simple blob detector to determine the colors on the face of a Rubik's cube. I have been using this fantastic resource so far and it has proved to be very helpful and descriptive. After a fair bit of tweaking, I have been successful in making good filters for each color. I look at the x and y position of each color blob, and since the cubies have an even spacing, do a quick rounding division to determine which row and column they belong in, with groups of two being split and belonging in two different rows/columns respectively.
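For reference, the rounding step looks roughly like this (a sketch, not my exact code; origin and spacing are values I measured from my image):

    #include <opencv2/features2d.hpp>
    #include <vector>

    // Map each blob center to a grid cell by rounding division.
    // 'origin' is the center of the top-left cubie, 'spacing' is the
    // distance between adjacent cubie centers.
    void classify(const std::vector<cv::KeyPoint>& blobs,
                  cv::Point2f origin, float spacing,
                  char face[3][3], char color)
    {
        for (const cv::KeyPoint& kp : blobs) {
            int col = cvRound((kp.pt.x - origin.x) / spacing);
            int row = cvRound((kp.pt.y - origin.y) / spacing);
            if (row >= 0 && row < 3 && col >= 0 && col < 3)
                face[row][col] = color;  // e.g. 'g' for the green filter's blobs
        }
    }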
This is more of a curiosity question than anything else. To my eye, it looks like the centroids are being calculated incorrectly... shouldn't the drawn circle be more centered in each blob? Yet both seem to be sticking out arbitrarily to one side.
Below, I have the original image of the cube, and two of the color filters, the top for green, and the bottom for blue.
As you can see, the green and blue blobs are positioned correctly, and should be far enough away from each other to be classified into separate rows, but the centroids visually seem to be skewed from the centers of the blobs (green centroid should be more to the right, and blue more to the left). Is there something that I am not picking up on here? Is this just a quirk of the system?
I am trying to use color information for the detection of rectangles. Some of my rectangles overlap and are multicolored. I found a solution to detect these rectangles using hue values; I am checking inRange with these hue ranges:
Orange 0-22
Yellow 22-38
Green 38-75
Blue 75-130
Violet 130-160
Red 160-179
But I do not know in advance which colors will appear. For example, in one image the rectangles can be orange, red, and blue, and in another image they can be other colors.
I tried looking at the histogram, but I have a background which is not only white or black, so the histogram is confusing.
I would appreciate any ideas on how to handle this problem.
You can try a brute-force approach, where you try all the color ranges, then use findContours (example) to see if you can find a contour that is possibly a rectangle. If the background is very noisy you can require a minimum size for the contour (contourArea). You could also check the solidity by dividing the contour area by the area of the minAreaRect; for a rectangle (that has good detection) the result should approach 1.
Whether this could possibly work depends on several factors, and overlapping rectangles will quickly break it.
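If you do want to try it, here is a minimal sketch of the idea (the hue ranges are the ones from the question; the saturation/value bounds and the area and solidity cutoffs are placeholders you would have to tune):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Try every hue range; keep contours that are big enough and whose
    // solidity (contourArea / minAreaRect area) is close to 1.
    std::vector<std::vector<cv::Point>> findRects(const cv::Mat& bgr)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

        const int hue[][2] = {{0,22},{22,38},{38,75},{75,130},{130,160},{160,179}};
        std::vector<std::vector<cv::Point>> rects;

        for (const auto& h : hue) {
            cv::Mat mask;
            cv::inRange(hsv, cv::Scalar(h[0], 50, 50),
                             cv::Scalar(h[1], 255, 255), mask);

            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);

            for (const auto& c : contours) {
                double area = cv::contourArea(c);
                if (area < 500) continue;                    // minimum size
                cv::RotatedRect box = cv::minAreaRect(c);
                double solidity = area / (box.size.width * box.size.height);
                if (solidity > 0.85) rects.push_back(c);     // ~1 for clean rects
            }
        }
        return rects;
    }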
So if I understand correctly, you have a variety of images, each of which contains multiple rectangles that can be a variety of different colors, the background of the image is non-uniform, and you're trying to segment out the rectangles using a histogram?
Using histograms for image segmentation works best on grayscale images with a uniform background, so that upon seeing the peaks in your histogram you know the primary intensities of the objects you are trying to segment out. This method is not going to translate well to your application because the shapes you are attempting to segment are non-uniform in shade. Without seeing example images I would say this probably isn't going to work; however, you might be able to get away with it if the shade variation of the rectangles is relatively small. Basically, if your rectangles sit in the 15-30 range you might be alright, but if they vary from 20-100 you're going to be out of luck, and the same goes for variation in the background.
If the background and the rectangles have very clearly defined borders, and the background colors transition very smoothly, you may be able to get away with some sort of region growing on the background to obtain a list of all the background pixels, and then set those to black or something to allow better analysis of the rectangles in the foreground. But I can only speculate so much with the information you've given in your post.
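If you want to experiment with that, cv::floodFill seeded from a known background pixel is a quick way to try it (a sketch; the seed point and the fill tolerances are guesses you would have to tune):

    #include <opencv2/imgproc.hpp>

    // Grow a region from a known background pixel and blank it out,
    // leaving only the foreground rectangles for further analysis.
    void removeBackground(cv::Mat& bgr, cv::Point seed)
    {
        cv::floodFill(bgr, seed,
                      cv::Scalar(0, 0, 0),   // paint the region black
                      nullptr,               // bounding rect not needed
                      cv::Scalar(5, 5, 5),   // loDiff: tolerance downwards
                      cv::Scalar(5, 5, 5));  // upDiff: tolerance upwards
    }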
With a group of friends, we are trying to accomplish a computer vision task on a Raspberry Pi, coding in C++ with the OpenCV library.
Let me explain the task first.
There is a pattern consisting of 16 separate squares, each square colored red, yellow or blue. We are mounting the Raspberry Pi and its camera module on a quadcopter and gathering a video feed of the pattern.
We have to detect the colors of the squares, which was easy to accomplish with a little research on the web. The tricky part is that we also have to detect the order of the squares, so that we can save the colors in an array in that order.
So far we have accomplished filtering the desired colors (red, yellow, blue) to determine the squares.
example pattern to recognize and our process so far
In the second image, we know the colors and center points of each square. What we need is a way to write them out in order, to a file or on screen.
To find the order, we tried several OpenCV methods that find corners. With the corner points at hand, we compared each point and determined the end points, so we could draw a bounding rectangle and overcome small distortions.
But since the quadcopter captures the video stream, there is always a chance of high distortion. That messes up our corner theory, resulting in the wrong order of colors. For example, it can capture an image like this:
highly distorted image
It is not right to find the order of these squares by comparing their center points. Finding end points to draw a larger rectangle around them and flatten the pattern won't work either. And then the order...
What I am asking for is algorithm suggestions. Are we going in totally the wrong direction by trying to find corners? Is it possible to determine the order without taking distortion into consideration?
Thanks in advance.
Take the two centers that are the furthest apart and number them 1 and 16. Then find the two centers that are the furthest from the line 1-16, to the left (number 4) and to the right (number 13). Now you have the four corners.
Compute the affine transform that maps the coordinates of the corners 1, 4 and 13 to (0,0), (3,0) and (0,3). Apply this transform to the 16 centers and round to the nearest integers. If all goes well, you will obtain the "logical" coordinates of the squares, in range [0, 3] x [0, 3]. The mapping to the cell indexes is immediate.
Note that because of symmetry, a fourfold indeterminacy will remain, which you can probably resolve by checking the color patterns.
This procedure will be very robust to deformation. If there is extreme perspective, you can even use the four corners to estimate a homographic transform instead of an affine one; in your case, I doubt this will be necessary. You can verify proper operation by checking that every expected index has been assigned.
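A sketch of the transform step (assuming c1, c4 and c13 are the three corner centers found as described above, and centers holds all 16 detected centers):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Map the 16 detected centers to logical grid coordinates in [0,3]x[0,3].
    std::vector<cv::Point> toGrid(cv::Point2f c1, cv::Point2f c4, cv::Point2f c13,
                                  const std::vector<cv::Point2f>& centers)
    {
        cv::Point2f src[3] = { c1, c4, c13 };
        cv::Point2f dst[3] = { {0, 0}, {3, 0}, {0, 3} };
        cv::Mat M = cv::getAffineTransform(src, dst);   // 2x3 affine matrix

        std::vector<cv::Point2f> mapped;
        cv::transform(centers, mapped, M);              // apply to every center

        std::vector<cv::Point> grid;
        for (const auto& p : mapped)
            grid.emplace_back(cvRound(p.x), cvRound(p.y)); // round to cell index
        return grid;
    }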
Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: on the left side you can see a sample bitmap my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you click the 'background' (what's background and what's not is determined by some threshold) and the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine with just the rectangular bounding areas of the objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of, and then some), but in the vast amount of information I couldn't find my way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that PhD is not required here :)
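For what it's worth, the whole thing comes down to a few OpenCV calls, roughly like this sketch (assuming a white background and darker sprites; the 250 cutoff is a placeholder):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Return the bounding rectangle of every non-background sprite.
    std::vector<cv::Rect> spriteBounds(const cv::Mat& bitmap)
    {
        cv::Mat gray, mask;
        cv::cvtColor(bitmap, gray, cv::COLOR_BGR2GRAY);
        // Everything that is not (near-)white becomes foreground.
        cv::threshold(gray, mask, 250, 255, cv::THRESH_BINARY_INV);

        // RETR_EXTERNAL ignores white holes inside a shape.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> bounds;
        for (const auto& c : contours)
            bounds.push_back(cv::boundingRect(c));
        return bounds;
    }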
These are my first thoughts, none of them complicated, except for the edge detection:
    for each square:
        if it's not white:
            mark it as "found"
            if you haven't already found one next to it:
                add it to the points list
    for each point in the points list:
        use basic edge detection to find the outline
        keep track of the bounds while doing so
        add the bounds to the shapes list
    remove duplicates from the shapes list (this can happen for concave shapes)
I just realized this will consider white "holes" (like in the leftmost circle in your sample) to be their own shapes. If the first loop is a flood fill, it doesn't have this problem, but it will be much slower and take much more memory.
The basic edge detection I was thinking of is simple:
    given eight cardinal directions: left, downleft, etc.
    given two relative directions: cw(d) = d-1 and ccw(d) = d+1

    starting with a point "begin":
        set bounds to the point
        find a direction d where begin+d is not white and begin+cw(d) is white
        set cur to begin+d
        do
            if cur is outside of bounds, increase bounds
            d = cw(d)
            while (cur+d is white or cur+ccw(d) is not white)
                d = ccw(d)
            cur = cur + d
        while (cur != begin)
http://ideone.com/
There are quite a few edge cases not considered here: what if begin is a single point, what if the tracing runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.
I'm doing some image processing, and am trying to keep track of points similar to those circled below: a very dark spot a couple of pixels in diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood fill on it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood fill, make sure that any pixel you reach that isn't dark is sufficiently bright.
After the flood-fill, you'll know the size of the dark area you filled in, and whether the border of the fill was exclusively bright pixels. Now, continue the original loop, skipping "used" pixels.
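A sketch of this loop, using OpenCV's floodFill for the filling and bookkeeping (the darkness cutoff, fill tolerance, and maximum spot size are placeholders; the check that the border is exclusively bright is omitted for brevity):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Flood-fill every dark pixel, marking filled pixels as "used" by
    // painting them white, and keep the fills that stayed small.
    std::vector<cv::Rect> darkSpots(const cv::Mat& gray, int maxArea = 25)
    {
        cv::Mat img = gray.clone();          // floodFill writes into the image
        std::vector<cv::Rect> spots;

        for (int y = 0; y < img.rows; ++y)
            for (int x = 0; x < img.cols; ++x)
                if (img.at<uchar>(y, x) < 50) {          // "dark" cutoff
                    cv::Rect box;
                    int area = cv::floodFill(img, cv::Point(x, y),
                                             cv::Scalar(255),   // mark as used
                                             &box,
                                             cv::Scalar(20),    // loDiff
                                             cv::Scalar(20));   // upDiff
                    if (area <= maxArea) spots.push_back(box);
                }
        return spots;
    }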
How about some kind of median filtering? Sample values from a 3x3 grid (or some other suitable size) around the pixel and set the value of the pixel to the median of those 9 pixels.
Then if most of the neighbours are bright the pixel becomes bright etc.
Edit: After some thinking, I realized that this will not detect the outliers, it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want an edge detection-like approach? It seems like comparing the current pixel to the average value of the neighborhood pixels would do the trick. (I would evaluate various neighborhood sizes to be sure.)
Personally, I like this manual of corner detection algorithms.
You can also work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is one through which the intensity changes drastically in every direction. It is just a starting idea to begin from before moving on to better algorithms.
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds (see the sketch after this list)
Morphological operations
Corner detection
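A sketch of the first one (the block size and offset are placeholders to tune):

    #include <opencv2/imgproc.hpp>

    // Each pixel is compared against the mean of its local neighborhood,
    // so a small dark spot on a bright surround pops out as foreground.
    cv::Mat adaptiveSpots(const cv::Mat& gray)
    {
        cv::Mat mask;
        cv::adaptiveThreshold(gray, mask, 255,
                              cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY_INV,
                              11,    // neighborhood (block) size
                              10);   // how far below the local mean counts
        return mask;
    }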
I'm actually going to suggest simple template matching for this, if all your features are of roughly the same size.
Just copy-paste the pixels of one feature (or a few features) to create a few templates, then use normalized cross-correlation or any other score that OpenCV provides in its template matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
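A sketch (templ would be a patch you copy-pasted from one of the features; the 0.8 acceptance threshold is a placeholder):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Find all regions similar to 'templ' via normalized cross-correlation.
    std::vector<cv::Point> matchFeatures(const cv::Mat& img, const cv::Mat& templ)
    {
        cv::Mat response;
        cv::matchTemplate(img, templ, response, cv::TM_CCORR_NORMED);

        std::vector<cv::Point> hits;
        for (;;) {
            double maxVal;
            cv::Point maxLoc;
            cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);
            if (maxVal < 0.8) break;              // no good peaks left
            hits.push_back(maxLoc);               // top-left corner of the match
            // Zero out this peak so the next-best one can be found.
            cv::rectangle(response,
                          cv::Rect(maxLoc.x - templ.cols / 2,
                                   maxLoc.y - templ.rows / 2,
                                   templ.cols, templ.rows),
                          cv::Scalar(0), cv::FILLED);
        }
        return hits;
    }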
Blur (3x3) a copy of your image, then diff it against your original image. The pixels with the highest values are the ones that are most different from their neighbors. This could be used as an edge detection algorithm, but points are like super-edges, so set your threshold higher.
What a single off pixel looks like (assume the surrounding pixels are all 1):

    original   blurred          diff
    1,1,1      8/9,8/9,8/9      1/9,1/9,1/9
    1,0,1      8/9,8/9,8/9      1/9,8/9,1/9
    1,1,1      8/9,8/9,8/9      1/9,1/9,1/9
What an edge looks like (assume pixels beyond the window are the same as their closest neighbor):

    original   blurred          diff
    1,0,0      6/9,3/9,0/9      3/9,3/9,0/9
    1,0,0      6/9,3/9,0/9      3/9,3/9,0/9
    1,0,0      6/9,3/9,0/9      3/9,3/9,0/9
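In OpenCV terms the whole thing is about three calls (a sketch; the threshold value is a placeholder):

    #include <opencv2/imgproc.hpp>

    // Pixels that differ most from their 3x3 neighborhood average
    // stand out as isolated points.
    cv::Mat pointMask(const cv::Mat& gray)
    {
        cv::Mat blurred, diff, mask;
        cv::blur(gray, blurred, cv::Size(3, 3));   // 3x3 box blur
        cv::absdiff(gray, blurred, diff);          // distance from local mean
        cv::threshold(diff, mask, 60, 255, cv::THRESH_BINARY);
        return mask;
    }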
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark and very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
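For example, with OpenCV's morphology routines (a sketch; the binarization threshold and the kernel size are placeholders):

    #include <opencv2/imgproc.hpp>

    // Binarize, then subtract the morphological opening: whatever the
    // opening removed is exactly the small spots we are after.
    cv::Mat isolateSpots(const cv::Mat& gray)
    {
        cv::Mat binary, opened;
        // Dark pixels become white foreground.
        cv::threshold(gray, binary, 80, 255, cv::THRESH_BINARY_INV);

        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                                   cv::Size(5, 5));
        // Opening erases features smaller than the kernel.
        cv::morphologyEx(binary, opened, cv::MORPH_OPEN, kernel);

        return binary - opened;   // black everywhere except the small spots
    }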
Have you tried extracting connected components using findContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from their neighborhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only the small dots separated from dark neighbors.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
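A sketch of that pipeline (the maximum bounding-box area is a placeholder):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Otsu-threshold, extract the outer contours, and keep only the
    // small ones, which correspond to the isolated dark dots.
    std::vector<cv::Rect> smallDots(const cv::Mat& gray, int maxBoxArea = 50)
    {
        cv::Mat binary;
        // Dark spots become foreground; Otsu picks the threshold itself.
        cv::threshold(gray, binary, 0, 255,
                      cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> dots;
        for (const auto& c : contours) {
            cv::Rect box = cv::boundingRect(c);
            if (box.area() <= maxBoxArea) dots.push_back(box);
        }
        return dots;
    }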
Avast there fellow programmers!
I have the following problem:
I have two rectangles overlapping, as shown in the picture below.
I want to figure out the polygon consisting of the points ABCDEF.
Alternate Christmas description: the red cookie cutter is cutting away a bit of the black cookie. I want to calculate the black cookie.
Each rectangle is a data structure with four 2D vertices.
What is the best algorithm to achieve this?
This is a special case of general 2D polygon clipping. A good place to start is the Weiler-Atherton algorithm. Wikipedia has a summary and links to the original paper. The algorithm seems to match the data structure you've described pretty well.
Note that it's quite possible you'll end up with a rectangle with a hole in it (if the red one is entirely inside the black one) or even two rectangles (e.g. if the red one is taller and skinnier than the black one). If you're certain there is only one corner of the red rectangle inside the black one, then the solution should be much simpler.
constructive solid geometry
How precise are the coordinates? If the rectangles are fairly small, the easiest approach might be to just paint them on a canvas, black rectangle first, followed by red. The remaining black pixels on the canvas are the polygon that's left.
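A sketch of that painting approach (here with OpenCV's rasterizer, but any polygon fill works; the coordinates are assumed to fit on the canvas):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Rasterize the black rectangle, then "cut" the red one out of it.
    // Nonzero pixels in the result belong to the remaining polygon.
    cv::Mat remainingCookie(const std::vector<cv::Point>& black,
                            const std::vector<cv::Point>& red,
                            cv::Size canvas)
    {
        cv::Mat mask = cv::Mat::zeros(canvas, CV_8U);
        cv::fillConvexPoly(mask, black, cv::Scalar(255)); // paint black rect
        cv::fillConvexPoly(mask, red, cv::Scalar(0));     // cut away red rect
        return mask;  // run findContours on it if you need the vertices ABCDEF
    }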
Another approach is to split the coordinate grid into a bunch of rectangles based on all of the sides of the original rectangles (not counting unbounded regions, up to 9 rectangles are generated from two original rectangles). Then just test a representative point from each of these rectangles for membership in the particular polygons to determine which are in and which are out.
I found some stuff here I might use:
http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Boolean_set_operations_2/Chapter_main.html
I had actually downloaded the CGAL source before I even posted this question, but I think I'll look into it more closely.