Say I have a closed shape as seen in image below and I have the edge pixels. What is the most efficient way to fill the shape, i.e. turn pixels 'on' inside the shape if:
1) I have all the edge pixels
2) I have most of the edge pixels and not all of them (as seen in the figure).
Construct the convex hull and add the missing pixels. Then use a scanline algorithm to fill the polygon.
It all depends on the situation.
If you manually created the framebuffer (basically using a byte array or something similar), you have to iterate over all the pixels you want to change. So, for example, starting at the leftmost edge of a row:
Find the start of the shape on the row
Step one pixel right and turn pixels on until you find the second edge of the shape on the row (or reach the end of the row)
Continue on the next row
This will of course only work if you have all edge pixels. Take a look at Marching Squares; it can be of some assistance.
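A minimal sketch of that row-by-row fill in C++ (my own illustration, assuming a byte framebuffer where 1 marks an edge pixel; it is only correct for shapes that each scanline crosses exactly twice, e.g. convex ones):

#include <vector>

// Fill each row from the first edge pixel to the next edge pixel.
void fillRows(std::vector<unsigned char>& fb, int w, int h) {
    for (int y = 0; y < h; ++y) {
        int x = 0;
        while (x < w && !fb[y * w + x]) ++x;      // find start of shape on row
        if (x == w) continue;                     // row does not touch the shape
        for (++x; x < w && !fb[y * w + x]; ++x)   // fill until the second edge
            fb[y * w + x] = 1;                    // (or the end of the row)
    }
}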
And please, be more specific. "The most efficient way to fill the shape" depends a lot on your underlying rendering library, whether it's raster graphics, and so on...
EDIT
Note that the algorithm is much faster if you generate the edge pixels yourself; then there's no need to search for the start of the shape on each row.
A standard flood fill algorithm will be pretty efficient on a convex shape, and will handle the cases where the shape is less convex than you anticipated. Unfortunately it requires an unbroken outline.
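A minimal sketch of such a flood fill (my own, assuming a byte framebuffer and a seed point known to lie inside the shape):

#include <stack>
#include <utility>
#include <vector>

// Stack-based flood fill: turns on every 'off' pixel reachable from the
// seed without crossing an 'on' (edge) pixel. As noted above, it requires
// an unbroken outline, or it will leak through the gaps.
void floodFill(std::vector<unsigned char>& fb, int w, int h, int seedX, int seedY) {
    std::stack<std::pair<int, int>> s;
    s.push({seedX, seedY});
    while (!s.empty()) {
        auto [x, y] = s.top(); s.pop();
        if (x < 0 || x >= w || y < 0 || y >= h || fb[y * w + x]) continue;
        fb[y * w + x] = 1;                        // turn the pixel on
        s.push({x + 1, y}); s.push({x - 1, y});   // visit 4-connected neighbours
        s.push({x, y + 1}); s.push({x, y - 1});
    }
}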
The breaks in the boundary destroy the meaning of the word "inside".
A neural network like a human retina is very efficient at doing this processing.
On a computer you need to take time to define what you mean by "inside". How big a gap? How wiggly a boundary?
Simulate a largish circular bug bouncing around the "inside" - too big to go through the gaps, but smaller than the minimum radius of curvature of the boundary?
Before you can fill the inside of something you need to determine the exact boundary, which in this case means recognising the circle.
After that you can just check, for every pixel in a box around the circle, whether it is actually inside. Since you have to do something with every pixel inside the circle anyway, and the number of pixels in the circle is linear in the number of pixels of the bounding square (whose sides have length radius times a constant), this is close to optimal.
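A tiny sketch of that bounding-box scan (my own, assuming the centre (cx, cy) and radius r have already been recognised and the circle lies fully inside the image):

#include <vector>

// Visit only the bounding box of the circle and keep the pixels whose
// squared distance from the centre is at most r squared.
void fillCircle(std::vector<unsigned char>& fb, int w, int cx, int cy, int r) {
    for (int y = cy - r; y <= cy + r; ++y)
        for (int x = cx - r; x <= cx + r; ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                fb[y * w + x] = 1;                // pixel is inside the circle
}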
I hope you are doing well. I am stuck at one part of a visual effect program in C++, and wanted to ask for help.
I have an array of colors at random positions on an image. There can be any number of these "subpixels" that fall over top of any given pixel. The subpixels that overlap a pixel can be at any position within the pixel, since they're distributed randomly throughout the image. All I have access to is their position on the image and their color, which represents what the color should be at that precise subpixel point on the image.
I need to determine what color to make each pixel of the image. In other words, I need to interpolate what the color should be at the centre of each pixel.
Here is a diagram with an example of this on a 5x5 image:
I need to go from this:
To this:
If it aids your understanding, you can think of the first image as a series of random points whose color values were calculated using bilinear interpolation on the second image.
I am writing this in C++, and ideally it will be as fast as possible, but I welcome contributions in any language or just explained with symbols or words. It should be as accurate as possible, but I also welcome solutions that are slightly inaccurate in favour of performance or simplicity.
Please let me know if you need clarification on the problem.
Thank you.
I ended up finding quite a decent solution which, while it doesn't find the absolutely 100% technically correct color for each pixel, was more than good enough and acceptably fast, especially when I added multithreading.
I first create a vector for each pixel/cell that contains pointers to subpixels (points with known colors). When I create a subpixel, I add a pointer to it to the vector representing the pixel/cell that it overlaps, and to each of the vectors representing the pixels/cells directly adjacent to that one.
Then, I split each pixel/cell into n sub-cells (I found 8 works well). This is not as expensive as you might imagine, because I only have to calculate & compare the distance for those subpixels that are in that pixel/cell's subpixel pointer vector. For each sub-cell, I calculate which subpixel is the closest to its centre. That subpixel's color then contributes 1/nth of the color for that pixel/cell.
I found it was important to add the subpixel pointers to adjacent cell/pixel vectors, so that each sub-cell can take into account subpixels from adjacent pixels/cells. This even makes it produce a reasonable color when there are pixels/cells that have no subpixels overlapping them (as long as the neighboring pixels/cells do).
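A rough sketch of this scheme (my own reconstruction, not the poster's actual code; the struct names and the k-by-k sub-cell layout are assumptions):

#include <vector>

struct Subpixel { float x, y; float r, g, b; };    // image-space position + color
struct Cell { std::vector<const Subpixel*> candidates; };

// Bucket every subpixel into its own cell and the 8 neighbouring cells,
// so each cell also sees candidates from adjacent pixels.
void bucket(const std::vector<Subpixel>& subs, std::vector<Cell>& grid, int w, int h) {
    for (const Subpixel& s : subs)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = (int)s.x + dx, ny = (int)s.y + dy;
                if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                    grid[ny * w + nx].candidates.push_back(&s);
            }
}

// For pixel (px, py): over k*k sub-cell centres, find the nearest candidate
// subpixel; each sub-cell contributes 1/(k*k) of the final color.
void shade(const Cell& cell, int px, int py, int k, float out[3]) {
    out[0] = out[1] = out[2] = 0.f;
    for (int sy = 0; sy < k; ++sy)
        for (int sx = 0; sx < k; ++sx) {
            float cx = px + (sx + 0.5f) / k, cy = py + (sy + 0.5f) / k;
            const Subpixel* best = nullptr;
            float bestD = 1e30f;
            for (const Subpixel* s : cell.candidates) {
                float d = (s->x - cx) * (s->x - cx) + (s->y - cy) * (s->y - cy);
                if (d < bestD) { bestD = d; best = s; }
            }
            if (!best) continue;                   // empty neighbourhood
            out[0] += best->r / (k * k);
            out[1] += best->g / (k * k);
            out[2] += best->b / (k * k);
        }
}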
Thanks for all the comments so far; any ideas about how to speed this up would be appreciated as well.
I have an algorithmic problem on a Cartesian plane. I need to efficiently search for geometric shapes that intersect with a given point. There are several shapes (rectangle, circle, triangle and polygon), but those are not important, because determining the actual point inclusion is not a problem here; I will implement those on my own. The problem lies in determining which shapes need to be verified for inclusion with the given point. Iterating through all of my shapes on the plane and running the point-inclusion method on each one of them is inefficient, as the number of shape instances will be quite large. My first idea was to divide the plane into segments (the plane is finite, but too large for any kind of 3D array), and when adding a shape to the database, I would determine which segments it intersects with and save them within the shape object. Then, when a point is given for inclusion verification, I would only need to determine the segment in which the point is located and then verify inclusion only against the objects which intersect that segment.
Is that the way to go? I don't know if the method I described is optimal or if I am missing something. Any help would be appreciated.
Thanks in advance
P.S.: I will be writing this in C++. That is not really relevant as it is more of an algorithmic problem but I wanted to put that out if someone was curious...
The gridding approach can be used here.
See the plane as a raster image where you draw all your shapes using a scan conversion algorithm, making sure that all pixels even partially covered are filled. For every image pixel, keep a list of the shapes that filled it.
A query is then straightforward: find the pixel where the query point falls in time O(1) and check every shape in the list, in time O(K), where K is the list length, approximately equal to the number of intersecting shapes.
If your image is made of N² pixels and you have M objects with an average area of A pixels, you will need to store N² + M·A list elements (a shape identifier plus a link to the next). Choose the pixel size to achieve a good compromise between accuracy and storage cost. In any case, you must keep N² < Q·M, where Q is the total number of queries; otherwise the cost of just initializing the image could exceed the total query time.
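A minimal sketch of such a grid under assumed names (for brevity it inserts shapes by bounding box rather than exact scan conversion, which is conservative but still correct, and assumes world coordinates start at the origin):

#include <vector>

struct Grid {
    int n;                                   // the grid is n x n cells
    double cell;                             // cell side length in world units
    std::vector<std::vector<int>> buckets;   // shape IDs stored per cell

    Grid(int n_, double worldSize) : n(n_), cell(worldSize / n_), buckets(n_ * n_) {}

    // Insert a shape into every cell its bounding box touches.
    void insert(int shapeId, double minX, double minY, double maxX, double maxY) {
        for (int y = (int)(minY / cell); y <= (int)(maxY / cell); ++y)
            for (int x = (int)(minX / cell); x <= (int)(maxX / cell); ++x)
                if (x >= 0 && x < n && y >= 0 && y < n)
                    buckets[y * n + x].push_back(shapeId);
    }

    // O(1) cell lookup; the caller runs its exact inclusion test only
    // on the K shapes listed in that cell.
    const std::vector<int>& query(double px, double py) const {
        return buckets[(int)(py / cell) * n + (int)(px / cell)];
    }
};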
In case your scene is very sparse (more voids than shapes), you can use a compressed representation of the image, using a quadtree.
Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: on the left side you can see a sample bitmap which my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you can click the 'background' (what's background and what's not is determined by some threshold) and then the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine if I just have the rectangular bounding areas for objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of - and then some), but in the vast amount of information I couldn't find my way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that PhD is not required here :)
These are my first thoughts, none complicated except for the edge detection:
For each square,
    if it's not white,
        mark as "found"
        if you haven't found one next to it already,
            add it to the points list
for each point in the points list,
    use basic edge detection to find the outline
    keep track of the bounds while doing so
    add the bounds to the shapes list
remove duplicates from the shapes list (this can happen for concave shapes)
I just realized this will consider white "holes" (like in the leftmost circle in your sample) to be their own shapes. If the first loop is a flood fill, it doesn't have this problem, but it will be much slower and take much more memory.
The basic edge detection I was thinking of was simple:
given eight cardinal directions left, downleft, etc...
given two relative directions cw(direction-1) and ccw(direction+1)
starting with a point "begin"
set bounds to point
find direction d where begin+d is not white and begin+cw(d) is white
set current to begin+d
do
    if current is outside of bounds, increase bounds
    set d = cw(d)
    while (cur+d is white or cur+ccw(d) is not white)
        d = ccw(d)
    cur = cur + d
while (cur != begin)
http://ideone.com/
There are quite a few edge cases not considered here: what if begin is a single point, what if the outline runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.
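For what it's worth, here is a minimal flood-fill version of the first loop in C++ (my own sketch under assumed names; 'bg' is the background value), which also sidesteps the white-holes issue mentioned above:

#include <algorithm>
#include <stack>
#include <utility>
#include <vector>

struct Rect { int minX, minY, maxX, maxY; };

// Scan the bitmap; each not-yet-used, non-background pixel seeds a
// 4-connected flood fill whose extent gives one bounding rectangle.
std::vector<Rect> boundingRects(const std::vector<int>& px, int w, int h, int bg) {
    std::vector<Rect> rects;
    std::vector<char> used(w * h, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (used[y * w + x] || px[y * w + x] == bg) continue;
            Rect r{x, y, x, y};
            std::stack<std::pair<int, int>> s;
            s.push({x, y});
            used[y * w + x] = 1;
            while (!s.empty()) {
                auto [cx, cy] = s.top(); s.pop();
                r.minX = std::min(r.minX, cx); r.maxX = std::max(r.maxX, cx);
                r.minY = std::min(r.minY, cy); r.maxY = std::max(r.maxY, cy);
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int i = 0; i < 4; ++i) {
                    int nx = cx + dx[i], ny = cy + dy[i];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        !used[ny * w + nx] && px[ny * w + nx] != bg) {
                        used[ny * w + nx] = 1;
                        s.push({nx, ny});
                    }
                }
            }
            rects.push_back(r);
        }
    return rects;
}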
As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete it, move it, etc. in 3D.
I am wondering which method to use:
1. use OpenGL picking and selection (glRenderMode(GL_SELECT)), or
2. use manual collision detection, by using a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
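A minimal sketch of the basic colour-ID pass (fixed-function GL for brevity; drawEntity and entityCount are assumed placeholders, and the ID is packed into the RGB bytes only):

#include <GL/gl.h>

extern unsigned entityCount;      // assumed: number of pickable entities
void drawEntity(unsigned id);     // assumed: issues the draw calls for one entity

unsigned pick(int mouseX, int mouseY, int viewportH) {
    glEnable(GL_SCISSOR_TEST);                      // restrict work to one pixel
    glScissor(mouseX, viewportH - mouseY, 1, 1);    // GL origin is bottom-left
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    for (unsigned id = 0; id < entityCount; ++id) {
        // Encode the entity index in the low 24 bits of the colour.
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawEntity(id);
    }
    unsigned char rgb[3] = {};
    glReadPixels(mouseX, viewportH - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    glDisable(GL_SCISSOR_TEST);                     // then redraw the scene normally
    return rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
}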
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnproject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding-box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnproject and bounding-box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ... n-0) in it and make a vec out of them as before. I.e. walk the contour.
Now do the cross product of those two (contour edge and mouse vector), instead of between pairs like I said before; do that for all the pairs and vector-add them all up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account) then you're outside the shape, regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algo finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed; if you're inside then they all add up. It's actually Gauss's Law from electromagnetics in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note that "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant" - noting the word "enclosed", i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels rather than the space inside the lines bounding them, which I read as what you wish to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex pieces, which is painful. So I think it would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coord, generate the view to mouse pointer vector. You also have all the coords of the contours.
Take the first coord of the contour and make a vector to the second coord. Then take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair of vectors in sequence, find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take its dot product with the mouse-pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones off of the mouse vector by doing the same math as for the full contour but only on the 4 sides, and if the mouse isn't inside the box then the contour cannot be either.
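As a plainly simpler alternative to the summation test above: once a contour has been projected into screen space, the standard crossing-number (ray casting) point-in-polygon test answers the same "is the mouse inside this contour" question. A minimal sketch (my own, not from the answer above):

#include <vector>

struct Vec2 { double x, y; };

// Standard crossing-number test: count how many polygon edges a
// horizontal ray from p crosses; an odd count means p is inside.
bool inside(const std::vector<Vec2>& poly, Vec2 p) {
    bool in = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < poly[j].x + (poly[i].x - poly[j].x) *
                  (p.y - poly[j].y) / (poly[i].y - poly[j].y))
            in = !in;
    }
    return in;
}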
The first approach is easy to implement and widely used.
I'm doing some image processing, and am trying to keep track of points similar to those circled below: a very dark spot a couple of pixels in diameter, with all neighbouring pixels being bright. I'm sure there are algorithms and methods designed for this, but I just don't know what they are. I don't think edge detection would work, as I only want the small spots. I've read a little about morphological operators; could these be a suitable approach?
Thanks
Loop over each pixel in your image. When you are done considering a pixel, mark it as "used" (change it to some sentinel value, or keep this data in a separate array parallel to the image).
When you come across a dark pixel, perform a flood fill on it, marking all those pixels as "used", and keep track of how many pixels were filled in. During the flood fill, make sure that any pixel you consider that isn't dark is sufficiently bright.
After the flood-fill, you'll know the size of the dark area you filled in, and whether the border of the fill was exclusively bright pixels. Now, continue the original loop, skipping "used" pixels.
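A hedged sketch of that loop in C++ (the darkness/brightness thresholds and the maximum spot size are assumptions to tune):

#include <queue>
#include <utility>
#include <vector>

// 'img' is a row-major grayscale image, 0 = black, 255 = white.
std::vector<std::pair<int, int>> findDarkSpots(
        const std::vector<unsigned char>& img, int w, int h,
        int darkMax = 50, int brightMin = 200, int maxSpotSize = 20) {
    std::vector<std::pair<int, int>> spots;
    std::vector<char> used(w * h, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (used[y * w + x] || img[y * w + x] > darkMax) continue;
            // Flood-fill this dark region, counting its size and checking
            // that every bordering pixel is sufficiently bright.
            int size = 0;
            bool brightBorder = true;
            std::queue<int> q;
            q.push(y * w + x);
            used[y * w + x] = 1;
            while (!q.empty()) {
                int i = q.front(); q.pop(); ++size;
                int cx = i % w, cy = i / w;
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = cx + dx[k], ny = cy + dy[k];
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    if (img[ny * w + nx] <= darkMax) {
                        if (!used[ny * w + nx]) { used[ny * w + nx] = 1; q.push(ny * w + nx); }
                    } else if (img[ny * w + nx] < brightMin) {
                        brightBorder = false;      // grey border: not an isolated spot
                    }
                }
            }
            if (size <= maxSpotSize && brightBorder)
                spots.push_back({x, y});           // seed pixel of a qualifying spot
        }
    return spots;
}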
How about some kind of median filtering? Sample values from a 3x3 grid (or some other suitable size) around the pixel and set the pixel's value to the median of those 9 pixels.
Then if most of the neighbours are bright the pixel becomes bright etc.
Edit: After some thinking, I realized that this will not detect the outliers, it will remove them. So this is not the solution the original poster was asking for.
Are you sure that you don't want to do an edge-detection-like approach? It seems like comparing the current pixel to the average value of the neighborhood pixels would do the trick. (I would evaluate various neighborhood sizes to be sure.)
Personally, I like this corner detection algorithms manual.
Also, you can work out a naive corner detection algorithm by exploiting the idea that an isolated pixel is a pixel through which the intensity changes drastically in every direction. It is just a starting idea to begin from and move on to better algorithms.
I can think of these methods that might work with some tweaking of parameters:
Adaptive thresholds
Morphological operations
Corner detection
I'm actually going to suggest simple template matching for this, if all your features are of roughly the same size.
Just copy-paste the pixels of one (or a few) features to create a few templates, then use Normalized Cross-Correlation or any other score that OpenCV provides in its template-matching routines to find similar regions. In the result, detect all the maximal peaks of the response (OpenCV has a function for this too), and those are your feature coordinates.
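A hedged sketch with OpenCV's template-matching routines (the 0.8 peak threshold is an assumption to tune per image):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Score the whole image against one template, then repeatedly take the
// strongest remaining peak and suppress its neighbourhood.
std::vector<cv::Point> matchFeatures(const cv::Mat& image, const cv::Mat& templ) {
    cv::Mat response;
    cv::matchTemplate(image, templ, response, cv::TM_CCOEFF_NORMED);
    std::vector<cv::Point> hits;
    for (;;) {
        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal < 0.8) break;                   // no more strong peaks
        hits.push_back(maxLoc);                    // top-left corner of a match
        // Zero out this peak so the next-best one can be found.
        cv::rectangle(response, cv::Rect(maxLoc.x - templ.cols / 2,
                                         maxLoc.y - templ.rows / 2,
                                         templ.cols, templ.rows),
                      cv::Scalar(0), cv::FILLED);
    }
    return hits;
}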
Blur (3x3) a copy of your image, then diff it against the original. The pixels with the highest values are the ones most different from their neighbors. This could be used as an edge-detection algorithm, but points are like super-edges, so set your threshold higher.
what a single off pixel looks like:
(assume surrounding pixels are all 1)
original    blurred          diff
1,1,1       8/9,8/9,8/9      1/9,1/9,1/9
1,0,1       8/9,8/9,8/9      1/9,8/9,1/9
1,1,1       8/9,8/9,8/9      1/9,1/9,1/9
what an edge looks like:
(assume surrounding pixels are the same as their closest neighbor)
original    blurred          diff
1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
1,0,0       6/9,3/9,0/9      3/9,3/9,0/9
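A short sketch of the blur-and-diff pass with OpenCV (the threshold value of 128 is an assumption; as noted above, set it higher than you would for edges):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Box-blur a copy, diff against the original, and keep only strong outliers.
cv::Mat pointMask(const cv::Mat& gray) {
    cv::Mat blurred, diff, mask;
    cv::blur(gray, blurred, cv::Size(3, 3));   // 3x3 box blur
    cv::absdiff(gray, blurred, diff);          // distance from the neighbourhood mean
    cv::threshold(diff, mask, 128, 255, cv::THRESH_BINARY);
    return mask;                               // nonzero = point-like "super-edges"
}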
It's been a few years since I did any image processing, but I would probably start by converting to a binary representation. It doesn't seem like you're overly interested in the grey middle values, just the very dark/very light regions, so get rid of all the grey. At that point, various morphological operations can accentuate the points you're interested in. Opening and closing are pretty easy to implement and can yield pretty nice results, leaving you with a field of black everywhere except the points you're interested in.
Have you tried extracting connected components using cvContours? First threshold the image (using Otsu's method, say) and then extract each contour. Since the spots you wish to track are (from what I see in your image) somewhat isolated from their neighborhood, they will show up as separate contours. Now if we compute the area of the bounding rectangle of each contour and filter out the larger ones, we'd be left with only the small dots separated from their dark neighbors.
As suggested earlier, a bit of morphological tinkering before the contour separation should yield good results.
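A hedged sketch of that pipeline with OpenCV (the maxArea cutoff is an assumed parameter):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Otsu threshold (inverted so dark spots become foreground), extract the
// external contours, and keep only those with small bounding rectangles.
std::vector<cv::Rect> smallDarkBlobs(const cv::Mat& gray, int maxArea = 25) {
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Rect> spots;
    for (const auto& c : contours) {
        cv::Rect r = cv::boundingRect(c);
        if (r.area() <= maxArea)               // filter out the larger components
            spots.push_back(r);
    }
    return spots;
}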