How to fill polygons with even-odd rule? - c++

I have n polygons. Each polygon has n points, and some other properties like a bounding box, etc. I can fill them separately in some color, but I want to fill the polygons with the even-odd rule. Which algorithms are there for this? (They may be vector-based or raster-based.)

If raster is OK for you, calculate all the crossing points of the edges against each scanline (one list of intersections per line). Place the horizontal scanlines between raster rows, so that [for a square] the line above the shape has no edge intersections and the line just below it has two.
Now walk every line from left to right, intersection by intersection: you start "outside", and at every intersection you invert the current state.
(BTW, you may assert that you end "outside" as well. Of course I'm talking about infinite x-resolution; you may further constrain it to some fixed resolution, but the calculation starts wherever the first intersection is, and I would run until the last one, just for that assert check.)
It may be tricky to get almost-horizontal lines right, and also the connection between two edges (imagine the sharp tip of a star, going down and up at the same raster pixel with only a 0.1px difference in x-coordinate, etc.).
I would probably draw some pictures on paper to see whether clean math will do or some corner-case logic is needed, and then unit test heavily.
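Here's a minimal C++ sketch of that raster scanline approach, assuming a simple Point struct and a setPixel(x, y) routine for your own raster target (both hypothetical names); for several polygons you would just gather the crossings from all of their edges into the same list:
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Fill with the even-odd rule, one scanline at a time.
// setPixel is whatever routine writes into your raster target.
void fillEvenOdd(const std::vector<Point>& poly, int width, int height,
                 void (*setPixel)(int x, int y))
{
    for (int row = 0; row < height; ++row) {
        const double scanY = row + 0.5;      // sample between raster rows
        std::vector<double> xs;              // x of every edge crossing
        for (size_t i = 0; i < poly.size(); ++i) {
            const Point& a = poly[i];
            const Point& b = poly[(i + 1) % poly.size()];
            // Half-open test so a shared vertex is not counted twice.
            if ((a.y <= scanY && b.y > scanY) || (b.y <= scanY && a.y > scanY)) {
                const double t = (scanY - a.y) / (b.y - a.y);
                xs.push_back(a.x + t * (b.x - a.x));
            }
        }
        std::sort(xs.begin(), xs.end());
        // Every consecutive pair of crossings bounds an "inside" span.
        for (size_t i = 0; i + 1 < xs.size(); i += 2) {
            const int x0 = std::max(0, (int)std::ceil(xs[i]));
            const int x1 = std::min(width - 1, (int)std::floor(xs[i + 1]));
            for (int x = x0; x <= x1; ++x)
                setPixel(x, row);
        }
    }
}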
If you want a fully vector result, it's basically the same, except you don't work per raster line but per y-coordinate of each edge endpoint (creating a virtual line there), calculating all intersections with the other edges.
This may actually be a tad easier to write, but then it may be a tad more difficult to apply the result to a raster image. IMHO (far from feeling "sure"; it's been a long time since I did this).

Using stencil flipping should be the most efficient raster-based method to do this. Do one pass to populate the stencil buffer (e.g. glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT)) and a second pass with a full-screen quad that sets the color depending on the stencil buffer's values (e.g. glStencilFunc(GL_EQUAL, 1, 1)).
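A rough legacy-OpenGL sketch of those two passes, assuming hypothetical drawAllPolygons() and drawFullScreenQuad() helpers for your own geometry:
// Pass 1: flip the stencil bit for every fragment a polygon covers.
// Overlapping areas flip it back, which is exactly the even-odd rule.
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no colour writes yet
glStencilFunc(GL_ALWAYS, 0, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);
drawAllPolygons();                                    // e.g. one triangle fan per polygon

// Pass 2: draw a full-screen quad in the fill colour wherever the bit is set.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawFullScreenQuad();

glDisable(GL_STENCIL_TEST);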

Related

Edge following with camera

I want to follow the rightmost edge in the following picture with a line following robot.
I tried simple "thresholding", but unfortunately, it includes the blurry white halo:
The reason I threshold is to obtain a clean line from the Sobel edge detector:
Is there a good algorithm which I can use to isolate this edge/move along this edge? The one I am using currently seems error prone, but it's the best one I've been able to figure out so far.
Note: The edge may curve or be aligned in any direction, but a point on the edge will always lie very close to the center of the image. Here's a video of what I'm trying to do. It doesn't follow the edge after (1:35) properly due to the halo screwing up the thresholding.
Here's another sample:
Here, I floodfill the centermost edge to separate it from the little bump in the bottom right corner:
Simplest method (vertical line)
If you know that your image will have black on the right side of the line, here's a simple method:
1) apply the Sobel operator to find the first derivative in the x direction. The result will be an image that is most negative where your gradient is strongest. (Use a large kernel size to average out the halo effect. You can even apply a Gaussian blur to the image first, to get even more averaging if the 7x7 kernel isn't enough.)
2) For each row of the image, find the index of the minimum (i.e. most negative) value. That's your estimate of line position in that row.
3) Do whatever you want with that. (Maybe take the median of those line positions, on the top half and the bottom half of the image, to get an estimate of 2 points that describe the line.)
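A small OpenCV C++ sketch of steps 1 and 2 above, assuming a single-channel input image gray; the 7x7 kernel and the extra Gaussian blur are just the values suggested above:
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the column of the falling (white -> black) edge on each row.
std::vector<int> edgeColumnPerRow(const cv::Mat& gray)
{
    cv::Mat blurred, dx;
    cv::GaussianBlur(gray, blurred, cv::Size(7, 7), 0);  // extra averaging against the halo
    cv::Sobel(blurred, dx, CV_32F, 1, 0, 7);             // d/dx with a large kernel

    std::vector<int> cols(dx.rows);
    for (int r = 0; r < dx.rows; ++r) {
        double minVal;
        cv::Point minLoc;
        cv::minMaxLoc(dx.row(r), &minVal, nullptr, &minLoc, nullptr);
        cols[r] = minLoc.x;                              // most negative value = strongest edge
    }
    return cols;
}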
Slightly more advanced (arbitrary line)
Use this if you don't know the direction of the line, but you do know that it's straight enough that you can approximate it with a straight line.
1)
import cv2
import numpy as np

# First derivatives in x and y with a large kernel to average out the halo.
dx = cv2.Sobel(grayscaleImg, cv2.CV_32F, 1, 0, ksize=7)
dy = cv2.Sobel(grayscaleImg, cv2.CV_32F, 0, 1, ksize=7)
angle = np.arctan2(dy, dx)
magnitudeSquared = np.square(dx) + np.square(dy)
You now have the angle (in radians) and the squared magnitude of the gradient at each point in your image.
2) From here you can use basic numpy operations to find the line: Filter the points to only keep points where magnitudeSquared > some threshold. Then grab the most common angle (np.bincount() is useful for that). Now you know your line's angle.
3) Further filter the points to only keep points that are close to that angle. You now have all the points on your line. Fit a line through the coordinates of those points.
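A sketch of step 2 in OpenCV C++ (finding the dominant gradient angle), continuing from the Sobel derivatives above; the bin count and the hypothetical magThreshold parameter are values you would tune:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Find the dominant gradient angle among strong-gradient pixels.
double dominantAngle(const cv::Mat& dx, const cv::Mat& dy, float magThreshold)
{
    cv::Mat mag, angle;
    cv::cartToPolar(dx, dy, mag, angle);          // angle in radians, [0, 2*pi)

    const int bins = 180;                         // ~2 degree bins
    std::vector<int> hist(bins, 0);
    for (int r = 0; r < angle.rows; ++r)
        for (int c = 0; c < angle.cols; ++c)
            if (mag.at<float>(r, c) > magThreshold) {
                int b = (int)(angle.at<float>(r, c) / (2.0 * CV_PI) * bins) % bins;
                ++hist[b];
            }

    const int best = (int)(std::max_element(hist.begin(), hist.end()) - hist.begin());
    return (best + 0.5) * 2.0 * CV_PI / bins;     // centre of the most common bin
}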
Most advanced and brittle (arbitrary curve)
If you really need to handle a curve, here's one way:
1) Use your method above to threshold the image. Manually tune the threshold until the white/black division happens roughly where you want it. (Probably 127 is not the right threshold. But if your lighting conditions are consistent, you might be able to find a threshold that works. Confirm it works across multiple images.)
2) Use OpenCV's findContours() to fit a curve to the white/black boundary. If it's too choppy, use approxPolyDP() to simplify it.
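Roughly, in OpenCV C++ (the 127 threshold and the 3-pixel tolerance are just placeholders to tune, as discussed above):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Threshold, trace the white/black boundary, then simplify it.
std::vector<cv::Point> boundaryCurve(const cv::Mat& gray)
{
    cv::Mat bw;
    cv::threshold(gray, bw, 127, 255, cv::THRESH_BINARY);   // tune 127 to your lighting

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return {};

    // Keep the longest contour and smooth out the chop.
    auto longest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
        { return a.size() < b.size(); });

    std::vector<cv::Point> simplified;
    cv::approxPolyDP(*longest, simplified, 3.0, true);       // 3-pixel tolerance, closed contour
    return simplified;
}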

3D Math - Only keeping positions within a certain amount of yards

I'm trying to determine from a large set of positions how to narrow my list down significantly.
Right now I have around 3000 positions (x, y, z) and I want to basically keep the positions that are furthest apart from each other (I don't need to keep 100 positions that are all within a 2 yard radius from each other).
Besides doing a brute force method and literally doing 3000^2 comparisons, does anyone have any ideas how I can narrow this list down further?
I'm a bit confused on how I should approach this from a math perspective.
Well, I can't remember the name for this algorithm, but I'll tell you a fun technique for handling this. I'll assume that there is a semi-random scattering of points in a 3D environment.
Simple Version: Divide and Conquer
Divide your space into a 3D grid of cubes. Each cube will be X yards on each side.
Declare a multi-dimensional array [x,y,z] such that you have an element for each cube in your grid.
Every element of the array should either be a vertex or reference to a vertex (x,y,z) structure, and each should default to NULL
Iterate through each vertex in your dataset, determine which cube the vertex falls in.
How? Well, you might assume that the (5.5, 8.2, 9.1) vertex belongs in MyCubes[5,8,9], assuming X (cube-side-length) is of size 1. Note: I just truncated the decimals/floats to determine which cube.
Check to see if that relevant cube is already taken by a vertex. Check: If MyCubes[5,8,9] == NULL then (inject my vertex) else (do nothing, toss it out! spot taken, buddy)
Let's save some memory
This will give you a nicely simplified dataset in one pass, but at the cost of a potentially large amount of memory.
So, how do you do it without using too much memory?
I'd use a hashtable such that my key is the Grid-Cube coordinate (5,8,9) in my sample above.
If MyHashTable.contains({5,8,9}) then DoNothing else InsertCurrentVertex(...)
Now you will have a one-pass solution with minimal memory usage (no gigantic array with a potentially large number of empty cubes). What is the cost? Well, the programming time to set up your structure/class so that you can perform the .contains action on a hashtable (or your language's equivalent).
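A small C++ sketch of that hashtable variant, assuming a plain Vec3 struct; the cube side length (the X above) is a parameter, and all the names are made up:
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Pack the integer cube coordinates into one 64-bit hash key
// (assumes each grid index fits in 21 bits, i.e. a reasonably sized world).
static std::uint64_t cubeKey(const Vec3& v, float cubeSize)
{
    auto idx = [cubeSize](float c) {
        return (std::uint64_t)((std::int64_t)std::floor(c / cubeSize) & 0x1FFFFF);
    };
    return idx(v.x) | (idx(v.y) << 21) | (idx(v.z) << 42);
}

// One pass: keep only the first vertex that lands in each cube.
std::vector<Vec3> thinPoints(const std::vector<Vec3>& points, float cubeSize)
{
    std::unordered_map<std::uint64_t, Vec3> cubes;
    for (const Vec3& p : points)
        cubes.emplace(cubeKey(p, cubeSize), p);   // does nothing if the cube is already taken

    std::vector<Vec3> kept;
    kept.reserve(cubes.size());
    for (const auto& kv : cubes)
        kept.push_back(kv.second);
    return kept;
}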
Hey, my results are chunky!
That's right, because we took the first result that fit in any cube. On average, we will have achieved X-separation between vertices, but as you can figure out by now, some vertices will still be close to one another (at the edges of the cubes).
So, how do we handle it? Well, let's go back to the array method at the top (memory-intensive!).
Instead of ONLY checking to see if a vertex is already in the cube-in-question, also perform this other check:
If Not ThisCubeIsTaken()
For each SurroundingCube
If not Is_Your_Vertex_Sufficiently_Far_Away_From_Me()
exit_loop_and_outer_if_statement()
end if
Next
//Ok, we got here, we can add the vertex to the current cube because the cube is not only available, but the neighbors are far enough away from me
End If
I think you can probably see the beauty of this, as it is really easy to get neighboring cubes if you have a 3D array.
If you do some smoothing like this, you can probably enforce a 'don't add if it's within 0.25X' policy or something. You won't have to be too strict to achieve a noticeable smoothing effect.
Still too chunky, I want it smooth
In this variation, we will change the qualifying action for whether a vertex is permitted to take residence in a cube.
If TheCube is empty OR if ThisVertex is closer to the center of TheCube than the Cube's current vertex
InsertVertex (overwrite any existing vertex in the cube)
End If
Note, we don't have to perform neighbor detection for this one. We just optimize towards the center of each cube.
If you like, you can merge this variation with the previous variation.
Cheat Mode
For some people in this situation, you can simply take a 10% random selection of your dataset and that will be a good-enough simplification. However, it will be very chunky with some points very close together. On the bright side, it takes a few minutes max. I don't recommend it unless you are prototyping.

Efficient way to fill a convex shape

Say I have a closed shape as seen in the image below, and I have the edge pixels. What is the most efficient way to fill the shape, i.e. turn pixels 'on' inside the shape, if:
1) I have all the edge pixels
2) I have most of the edge pixels and not all of them (as seen in the figure).
Construct the convex hull and add the missing pixels. Then use a scanline algorithm to fill the polygon.
It all depends on the situation.
If you manually created the framebuffer (basically using a byte array or something similar), you have to iterate over all pixels you want to change. So, for example, starting at the leftmost edge of a row:
Find start of shape on row
Jump one pixel right and turn pixels on until you find the second edge of the shape on the row (or the end of the row)
Continue on next row
This will of course only work if you have all edge pixels. Take a look at Marching Squares; it can be of some assistance.
And please, be more specific. "The most efficient way to fill the shape" depends a lot on your underlying rendering library, whether it's raster graphics, and so on...
EDIT
Note that the algorithm is much faster if you can generate the edge pixels yourself; then there's no need to search for the start of the shape on each row.
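A tiny C++ sketch of that per-row fill, assuming a hypothetical width x height byte framebuffer in which the edge pixels are already non-zero:
#include <cstdint>
#include <vector>

// Fill between the first and last edge pixel of each row.
// Only valid for convex shapes with a complete outline.
void fillRows(std::vector<std::uint8_t>& pixels, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        int first = -1, last = -1;
        for (int x = 0; x < width; ++x)
            if (pixels[y * width + x]) {
                if (first < 0) first = x;
                last = x;
            }
        if (first >= 0)
            for (int x = first; x <= last; ++x)
                pixels[y * width + x] = 1;   // turn the whole span "on"
    }
}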
A standard flood fill algorithm will be pretty efficient on a convex shape, and will handle the cases where the shape is less convex than you anticipated. Unfortunately it requires an unbroken outline.
The breaks in the boundary destroy the meaning of the word "inside".
A neural network like a human retina is very efficient at doing this processing.
On a computer you need to take time to define what you mean by "inside". How big a gap? How wiggly a boundary?
Simulate a largish circular bug bouncing around the "inside" - too big to go through the gaps but smaller than the minimum radius of curvature of the boundary?
Before you can fill the inside of something you need to determine its exact boundary; in this case that would mean recognising the circle.
After that you can just check, for every pixel in a box around the circle, whether it is actually inside it. Since you have to do something with every pixel inside the circle, and the number of pixels in the circle is linear in the number of pixels of a bounding square (assuming the bounding square's sides have length 'radius * constant' for some constant), this should be close to optimal.

OpenGL GL_SELECT or manual collision detection?

As seen in the image
I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete, move, etc. in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by casting a pick-ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing and multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
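A rough legacy-OpenGL sketch of that approach, where drawObjectForPicking(i) is a hypothetical routine that draws object i's geometry without touching the current colour:
// Returns the index of the object under the cursor, or -1 for "nothing".
// mouseX/mouseY are window coordinates with the origin at the bottom left.
int pickObject(int mouseX, int mouseY, int objectCount)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(mouseX, mouseY, 1, 1);          // only that single pixel matters
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_MULTISAMPLE);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);     // white = "no object"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < objectCount; ++i) {
        // Encode the object index in the colour; 24 bits is plenty here.
        glColor3ub((GLubyte)(i & 0xFF), (GLubyte)((i >> 8) & 0xFF), (GLubyte)((i >> 16) & 0xFF));
        drawObjectForPicking(i);
    }

    unsigned char rgb[3];
    glReadPixels(mouseX, mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);

    glDisable(GL_SCISSOR_TEST);
    // Clear and redraw the scene normally after this.

    if (rgb[0] == 255 && rgb[1] == 255 && rgb[2] == 255)
        return -1;
    return rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
}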
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnproject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests down and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnproject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, Can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all of its coords in pairs (0-1, 1-2, 2-3, ... n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (the contour edge and the mouse vector), instead of between edge pairs like I said before; do that for all the pairs and vector-add them all up.
At the end find the magnitude of the resulting vector. If the result is zero (taking into account rounding errors) then you're outside the shape, regardless of facing. If you're interested in facing then, instead of the magnitude, you can do a dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algo finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all add up. It's actually Gauss's Law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels rather than the space the lines bound, which I read as what you wish to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex shapes, and that is painful. So I think it would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coord, generate the view to mouse pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and the second coord and make a vector out of them. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and do a dot product with the mouse pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones far off the mouse vector by doing the same math as for the full contour but only on the 4 sides, and if it's not inside the box then it cannot be inside the contour either.
The first is easy to implement and widely used.

Collision detection between two general hexahedrons

I have 2 six-faced solids. The only guarantee is that they each have 8 vertex3f's (vertices with x, y and z components). Given this, how can I find out if these are colliding?
I'm hesitant to answer after you deleted your last question while I was trying to answer it and made me lose my post. Please don't do that again. Anyway:
Not necessarily optimal, but obviously correct, based on constructive solid geometry:
Represent the two solids each as an intersection of 6 half-spaces. Note that this depends on convexity but nothing else, and extends to solids with more sides. My preferred representation for halfspaces is to choose a point on each surface (for example, a vertex) and the outward-pointing unit normal vector to that surface.
Intersect the two solids by treating all 12 half-spaces as the defining half-spaces for a new solid. (This step is purely conceptual and might not involve any actual code.)
Compute the surface/edge representation of the new solid and check that it's non-empty. One approach to doing this is to initially populate your surface/edge representation with one surface for each of the 12 half-spaces with edges outside the bounds of the 2 solids, then intersect its edges with each of the remaining 11 half-spaces.
It sounds like a bit of work, but there's nothing complicated. Only dot products, cross products (to get the initial representation), and projections.
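For illustration, a minimal C++ sketch of that half-space representation (a point on each face plus its outward unit normal) and the basic point-containment query it gives you; this is only the building block, not the full solid-vs-solid intersection test:
#include <array>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One face of the solid: a point on that face and the outward unit normal.
struct HalfSpace { Vec3 pointOnFace, outwardNormal; };

// A convex hexahedron is the intersection of its six half-spaces.
using ConvexSolid = std::array<HalfSpace, 6>;

// p is inside (or on the boundary) iff it is on the inner side of every face.
bool contains(const ConvexSolid& solid, const Vec3& p)
{
    for (const HalfSpace& h : solid)
        if (dot(p - h.pointOnFace, h.outwardNormal) > 0.0)
            return false;   // outside this face's half-space
    return true;
}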
It seems I'm too dumb to quit.
Consider this. If any edge of solid 1 intersects any face of solid 2, you have a collision. That's not quite comprehensive, because there are cases where one solid is fully contained in the other, which you can test for by determining whether the center of either is contained in the other.
Checking edge-face intersection works like this.
Define the edge as a vector starting from one vertex and running to the other. Take note of the length, L, of the edge.
Define the plane segments by a vertex, a normal, an in-plane basis, and the positions of the remaining vertices in that basis.
Find the intersection of the line and the plane. In the usual formulation you will be able to get both the length along the line, and the in-plane coordinates of the intersection in the basis that you have chosen.
The intersection must lie at a length in [0, L] along the edge, and must lie inside the figure in the plane. That last part is a little harder, but has a well-known general solution.
This will work. For eloquence, I rather prefer R..'s solution. If you need speed...well, you'll just have to try them and see.
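A C++ sketch of the line/plane part of that edge-face test; the "inside the figure in the plane" check described above still has to follow:
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the edge a->b with the plane through planePoint with normal n.
// Returns true and writes the hit point if the intersection lies on the edge.
bool edgeHitsPlane(const Vec3& a, const Vec3& b,
                   const Vec3& planePoint, const Vec3& n, Vec3& hit)
{
    const Vec3 dir = b - a;
    const double denom = dot(dir, n);
    if (std::fabs(denom) < 1e-12)
        return false;                 // edge is parallel to the plane
    const double t = dot(planePoint - a, n) / denom;
    if (t < 0.0 || t > 1.0)
        return false;                 // intersection is beyond the edge's endpoints
    hit = a + dir * t;
    return true;                      // the "inside the face" check still has to pass
}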
Suppose one of your hexahedrons H1 has vertices (x_1, y_1, z_1), (x_2, y_2, z_2), .... Find the maximum and minimum in each coordinate: x_min = min(x_1, x_2, ...), x_max = max(x_1, x_2,...), and so on. Do the same for the other hexahedron H2.
If the interval [x_min(H1), x_max(H1)] and the interval [x_min(H2), x_max(H2)] do not intersect (that is, either x_max(H1) < x_min(H2) or x_max(H2) < x_min(H1)), then the hexahedrons cannot possibly collide. Repeat this for the y and z coordinates. Qualitatively, this is like looking at the shadow of each hexahedron on the x-axis. If they don't overlap, the polyhedrons can't collide.
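A quick C++ sketch of that bounding-interval rejection test over the 8 vertices of each hexahedron:
#include <algorithm>
#include <array>

struct Vec3 { double x, y, z; };
using Hexahedron = std::array<Vec3, 8>;

// Per-axis min/max of the 8 vertices (the "shadow" on each axis).
static void bounds(const Hexahedron& h, double lo[3], double hi[3])
{
    for (int axis = 0; axis < 3; ++axis) { lo[axis] = 1e300; hi[axis] = -1e300; }
    for (const Vec3& v : h) {
        const double c[3] = { v.x, v.y, v.z };
        for (int axis = 0; axis < 3; ++axis) {
            lo[axis] = std::min(lo[axis], c[axis]);
            hi[axis] = std::max(hi[axis], c[axis]);
        }
    }
}

// If the intervals fail to overlap on any axis, the solids cannot collide.
bool boundingBoxesOverlap(const Hexahedron& h1, const Hexahedron& h2)
{
    double lo1[3], hi1[3], lo2[3], hi2[3];
    bounds(h1, lo1, hi1);
    bounds(h2, lo2, hi2);
    for (int axis = 0; axis < 3; ++axis)
        if (hi1[axis] < lo2[axis] || hi2[axis] < lo1[axis])
            return false;
    return true;
}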
If any of the intervals do overlap, you'll have to move on to more precise collision detection. This will be a lot trickier. The obvious brute force method is to check if any of the edges of one intersects any of the faces of the other, but I imagine you can do a lot better than that.
The brute force way to check if an edge intersects a face... First you'd find the intersection of the line defined by the edge with the plane defined by the face (see wikipedia, for example). Then you have to check if that point is actually on the edge and the face. The edge is easy - just see if the coordinates are between the coordinates of the two vertices defining the edge. The face is trickier, especially with no guarantees about it being convex. In the general case, you'll have to just see which side of the half-planes defined by each edge it's on. If it's on the inside half-plane for all of them, it's inside the face. I unfortunately don't have time to type that all up now, but I bet googling could aid you some there. But of course, this is all brute force, and there may be a better way. (And dmckee points out a special case that this doesn't handle)