For the past few weeks, I have been working on an algorithm that finds hidden surfaces of complex meshes and removes them. These hidden surfaces are completely occluded, and will never be seen. Due to the nature of the meshes I'm working with, there are a ton of these hidden triangles. In some cases, there are more hidden surfaces than visible surfaces. As removing them manually is prohibitive for larger meshes, I am looking to automate this with software.
My current algorithm consists of:
Generate several points on the surface of a triangle.
For each point, build a hemisphere sampler aligned to the triangle's normal.
Cast rays out through the hemisphere.
If fewer than a certain number of rays are unoccluded, flag the triangle for deletion.
However, this algorithm is causing a lot of grief; it's very inconsistent. While some truly occluded faces are not detected as occluded by the algorithm, I'm more worried about clearly visible faces that get removed due to issues with the current implementation. Therefore, I'm mainly wondering about two things:
Is there a better way to find and remove these hidden surfaces than raytracing?
Should I investigate non-random ray generation? I'm currently generating random directions in a cosine-weighted hemisphere, which could be causing issues. The only reason I haven't investigated this is because I have yet to find an algorithm to generate evenly-spaced rays in a hemisphere.
Note: This is intended to be an object space algorithm. That is, visibility from any angle--not a fixed camera.
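For concreteness, the cosine-weighted sampler I'm using is essentially the disk-projection (Malley's) method; a minimal sketch with my own Vec3 type, generating directions about +Z that then get rotated onto each triangle's normal:

    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };

    // Cosine-weighted direction on the +Z hemisphere: sample the unit
    // disk uniformly, then project up onto the hemisphere.
    Vec3 sampleCosineHemisphere(std::mt19937& rng) {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        float u = uni(rng);
        float phi = 2.0f * 3.14159265f * uni(rng);
        float r = std::sqrt(u);                  // disk radius
        return { r * std::cos(phi),
                 r * std::sin(phi),
                 std::sqrt(1.0f - u) };          // lift onto the hemisphere
    }

(A deterministic, roughly even alternative would be something like a spherical Fibonacci lattice, but I haven't tried that yet.)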
I've actually never implemented ray tracing, but I have a few suggestions anyhow. As your goal is to detect every hidden triangle, you could turn the problem around and instead find every visible triangle.
I'm thinking of something along the lines of either:
Ray trace from the outside and towards the centre/perpendicular to the surface, mark any triangle hit as visible.
Cull all others.
or
Choose a view of your model.
Rasterize the model (for example, using a different colour for each triangle).
Mark any triangle visible as visible.
Change the orientation and repeat.
Cull all non-visible triangles.
The advantage of the last one is that it should be relatively cheap to implement using a graphics API, if you can read/write the pixels reliably.
A disadvantage of both would be the resolution needed. Triangles inside small openings that should not be culled may still be culled; thus the number of rays may be prohibitive (in the first algorithm), or you will require very large off-screen frame buffers (in the second).
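For what it's worth, the per-triangle colour trick in the second approach boils down to a reversible mapping between triangle index and RGB; a sketch, assuming fewer than 2^24 triangles:

    // Pack a triangle index into an RGB8 colour and back again, so a
    // readback of the framebuffer identifies the visible triangles.
    void indexToColor(unsigned idx, unsigned char rgb[3]) {
        rgb[0] =  idx        & 0xFF;
        rgb[1] = (idx >> 8)  & 0xFF;
        rgb[2] = (idx >> 16) & 0xFF;
    }

    unsigned colorToIndex(const unsigned char rgb[3]) {
        return rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
    }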
A couple of ideas that may help.
Use a connectivity test to determine what is connected to your main model (if there is one).
Use a variant of depth peeling. (I've used it to convert shells into voxels; once you know what is inside the models that you want to keep, i.e. the voxels, you can intersect away the junk that you want to remove.)
Create a connectivity graph and prune the graph based on the complexity of connected groups.
I would like to optimize my OpenGL program.
It consists of loading a vector of 3D points and then applying shaders to them.
But I have billions of points, and my FPS drops to 2 when I try to view the points.
Currently I'm sending every point, and I believe this is what is too much for my computer.
Is making a KD-tree (for example) to store my points, and then sending to my shaders only the points contained in the viewing frustum, an efficient way to optimize my program?
And since my goal isn't to search for points, but only to use the points in the viewing frustum, which tree would be better? Octree? KD-tree?
Using trees is definitely a good way to deal with large point clouds. I worked on point cloud rendering software for a while, and we used kd-trees for rendering and regular voxel grids for analysis.
I don't exactly remember the reasons for/against using an octree, but I guess it depends on the density distribution of your clouds: if you have large point clouds with some small high-density areas, you would have lots of empty cells in the octree, whereas for evenly-distributed point clouds, octrees might be simpler. We also had 2.5D maps (from aerial scans: several square kilometers of terrain but only little deviation in height) where we used quadtrees for some tasks.
Also, we did not render all the points that were in the frustum, because that degenerates, e.g. when you zoom all the way out so the whole point cloud is in the frustum again.
Instead, all the inner (non-leaf) nodes in the kd-tree contained a "representative" selection of the points in their children, and we rendered the tree only up to a depth that seemed appropriate depending on the distance from the camera to the bounding volume of each node. This way, for areas that are far away from the camera, you render a thinned-out version of the point cloud, an LOD of sorts.
If you want to go fancy: we actually maintained a "front line" of nodes, that is, a cut from left to right through the tree up to which all nodes should be rendered. This way we did not need to check each node, only the ones in the cut, to see whether their status ("rendered" or "not rendered") should change. Additionally, we had out-of-core point clouds which were larger than the (V)RAM, where we allowed the front to only move farther down the tree if the parent node had been loaded from disk.
kd-trees are a bit harder to build because you need to determine where the split plane is located. For this we used a first pass where we read the locations of all the points in the node, determining the split plane, and then a second pass doing the actual split.
I think we had 4096 points per node (we experimented with more; 8k or 16k were fine as well), and did one draw call per node. As long as your point cloud fits in VRAM, you can simply put all of it in one large buffer and do draw calls with offsets into that buffer.
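To make the representative-points idea concrete, here is a rough sketch of the distance-based traversal (the node layout and names are mine, not our actual code):

    #include <cmath>
    #include <GL/gl.h>

    struct Vec3 { float x, y, z; };

    struct Node {
        Vec3    center;       // bounding-sphere centre
        float   radius;       // bounding-sphere radius
        Node*   child[2];     // nullptr for leaf nodes
        GLint   first;        // offset into the one shared vertex buffer
        GLsizei count;        // representative points stored in this node
    };

    static float distTo(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Draw far-away subtrees via their representative points only;
    // recurse into nearby subtrees for full detail.
    void renderLOD(const Node* n, const Vec3& cam, float lodFactor) {
        if (!n) return;
        bool leaf = !n->child[0] && !n->child[1];
        if (leaf || distTo(n->center, cam) > lodFactor * n->radius) {
            glDrawArrays(GL_POINTS, n->first, n->count);  // one draw call per node
            return;
        }
        renderLOD(n->child[0], cam, lodFactor);
        renderLOD(n->child[1], cam, lodFactor);
    }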
I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of Google Images:
Ignore the image on the right; the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting, though, I'd like to simplify: in the above cube, I'd like the up-down-left-right-back-front faces to be composed of a single quad each (two triangles), and the edges to be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, and use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitation is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to just leave the quads whole, rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse through normal deviation (in my case, if the resulting faces from the collapse had normals not equal to their original values, then the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
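For anyone following the same route, the OpenMesh setup is roughly the following; this is a sketch from memory of the decimation framework and its normal-deviation module, so double-check the names against the current OpenMesh docs:

    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Decimater/DecimaterT.hh>
    #include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    // Collapse edges only where face normals stay (nearly) unchanged,
    // so flat regions merge while sharp edges survive.
    void decimateKeepingSharpEdges(Mesh& mesh) {
        mesh.request_face_normals();
        mesh.update_face_normals();

        OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
        OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle modNormal;
        decimater.add(modNormal);
        decimater.module(modNormal).set_normal_deviation(1.0f); // degrees
        decimater.initialize();
        decimater.decimate();      // collapse as much as the module allows
        mesh.garbage_collection(); // actually remove the deleted elements
    }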
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left: find all edge-sharing triangles with the same normal (the same x, y, z components; make the normals unit-length so that positive scaling of the vectors doesn't affect the comparison). Merge them. Then triangulating the result for maximum aspect ratio will give the solution you want.
Another easy and practical approach to mesh simplification is the following.
Take the normals and divide each by its magnitude (the square root of the sum of the squared components), giving unit normal vectors. Then take adjacent triangles and compute the dot product between their normals (multiply the x, y, z components pairwise and add). This gives the cosine of the angle between the normals, i.e. between the triangles. Pick a range (such as 0.99-1), merge all adjacent triangles whose cosine with respect to the reference triangle falls in this range, and retriangulate. Small-area triangles pointing in odd directions can safely be ignored.
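As a sketch of that test (my own minimal Vec3; the 0.99 threshold is just the example range above):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 unit(Vec3 v) {
        float m = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / m, v.y / m, v.z / m };
    }

    // True if two adjacent faces are close enough to coplanar to merge:
    // the dot product of unit normals is the cosine of the angle between them.
    bool coplanarEnough(Vec3 n1, Vec3 n2, float cosThreshold = 0.99f) {
        n1 = unit(n1);
        n2 = unit(n2);
        float cosAngle = n1.x * n2.x + n1.y * n2.y + n1.z * n2.z;
        return cosAngle >= cosThreshold;
    }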
There is also a proposal for an even simpler mesh reduction, as in your left figure or building-style meshes: define a fixed set of face normals (here 6 + 8 = 14), classify every face by the direction it is closest to (by dot product), then merge and retriangulate.
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g. CGAL? GTS doesn't appear active any longer, but there are others.)
I read about octrees, but I didn't fully understand how they would work or be implemented in a voxel world, where the octree's purpose is to lower the number of voxels you render by merging repeating voxels into one big "voxel".
Here are the questions I want clarification about:
What type of data structure would you use? How could you turn a 3D array of voxels into an array with different-sized voxels that occupy multiple locations in the array?
What are the nodes and what are they used for?
Does the octree connect the voxels so there are ONLY cube shapes, or could it be a rectangular box, an L-shape, an entire Y column of voxels, or what?
Do octrees really improve the performance of a voxel game? If so, usually by how much?
Quick answers:
A tree: each node has 8 children (top-back-left, top-back-right, etc.), down to a certain level. The code for this can get quite complex, especially if the voxels can change at runtime. (See the sketch after this list.)
The type of voxel (colour, material, a list of items)
Yep, cubes only; more specifically 1x1, 2x2, 4x4, 8x8, etc. It must be an entire node. If you really want to, you could define some sort of patterns, but then it's no longer an octree.
Yeah, but it depends on your data. Imagine describing 256 identical blocks individually, or describing them once (like air in Minecraft).
I'd start by trying to understand quadtrees first. You can do that on paper, or make a test program with it. You'll answer these questions yourself if you experiment.
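To make the data-structure and merging answers concrete, a minimal sketch (all names are mine):

    #include <cstdint>

    // A node either covers its whole cube with one voxel type ("uniform")
    // or is split into 8 children, one per octant.
    struct OctreeNode {
        bool        uniform;
        uint8_t     voxelType;    // colour/material id when uniform
        OctreeNode* children[8];  // null when uniform
    };

    // Recursively collapse nodes whose 8 children are uniform and identical,
    // turning 8 small voxels into one big one.
    void collapse(OctreeNode* n) {
        if (n->uniform) return;
        bool mergeable = true;
        for (int i = 0; i < 8; ++i) {
            collapse(n->children[i]);
            if (!n->children[i]->uniform ||
                n->children[i]->voxelType != n->children[0]->voxelType)
                mergeable = false;
        }
        if (!mergeable) return;
        uint8_t t = n->children[0]->voxelType;
        for (int i = 0; i < 8; ++i) { delete n->children[i]; n->children[i] = nullptr; }
        n->uniform   = true;
        n->voxelType = t;
    }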
An octree done correctly can also help you with neighbour searches, which let you determine whether a face is considered "visible" (i.e. so you end up with a hull of visible voxels). Once you've established your octree, you use it to store your XYZ coords, which you then extract into a single array. You then feed this array into your vertex buffer (GL solutions require this), which you can render in chunk form as needed (as the camera moves forward, etc.).
Octrees also, by their very nature, collapse cubes into bigger ones if they are of the same type... much like Tetris does when you have colors/shapes that "fit" one another. This in turn can reduce your vertex count, and at render time you're really drawing a combination of squares and rectangles.
If done correctly you will end up with a lot of chunks that only have the out-facing "faces" visible in the vertex buffers. You then also have to build your own occlusion culling algorithm, which reduces visibility on top of this, resulting in less rendering required.
I did an example here:
https://vimeo.com/71330826
Notice how only the outside is being rendered, but the chunks themselves go all the way down to the bottom, even though the chunks' depth faces should cancel each other out (needs more optimisation). Also note how, as the camera turns around, the faces are removed from the rendering buffers.
As seen in the image
I draw set of contours (polygons) as GL_LINE_STRIP.
Now I want to select a curve (polygon) under the mouse to delete, move, etc. in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, casting a pick ray and checking whether the ray falls inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have a really large number of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and implementation time) to the strict geometric approach.
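Sketched out, the whole pass is only a handful of calls; drawSceneWithIdColours here is a placeholder for your own flat-colour render, and note that glReadPixels measures y from the bottom of the window:

    #include <GL/gl.h>

    void drawSceneWithIdColours(); // your scene, one unique flat colour per object

    unsigned pickObjectAt(int x, int y) { // y counted from the window's bottom
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, 1, 1);            // restrict everything to one pixel
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_MULTISAMPLE);

        drawSceneWithIdColours();

        unsigned char rgba[4] = {0, 0, 0, 0};
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);

        glDisable(GL_SCISSOR_TEST);
        // ...then clear and draw the scene normally for display.
        return rgba[0] | (rgba[1] << 8) | (rgba[2] << 16)
                       | ((unsigned)rgba[3] << 24);
    }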
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test - however, this resulted in a method that was not pixel-perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer in a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera-position-to-mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ..., n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (contour edge with mouse vector) instead of between pairs like I said before; do that for all the pairs and vector-add them all up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking into account rounding errors), then you're outside the shape - regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all sum up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels themselves, not the space the lines bound, which is what I read you wish to do.
You can use Kos's answer, but in order to render the space you'd need to solid-fill it, which would involve converting all of your contours to convex shapes, which is painful. So I think it would work sometimes and give the wrong answer in other cases, unless you did that conversion.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence, find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take the dot product with the mouse-pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full contour but only on the 4 sides, and if the box isn't hit then the contour cannot be either.
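If you'd rather lean on a stock test than the summation above: intersect the pick ray with the contour's plane, project that hit point and the contour to 2D, and run the classic crossing-number (even-odd) point-in-polygon test. A sketch:

    #include <vector>

    struct Vec2 { float x, y; };

    // Even-odd rule: count how many polygon edges a horizontal ray from
    // p crosses; an odd count means p is inside.
    bool pointInPolygon(const Vec2& p, const std::vector<Vec2>& poly) {
        bool inside = false;
        for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            const Vec2& a = poly[i];
            const Vec2& b = poly[j];
            if ((a.y > p.y) != (b.y > p.y) &&
                p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x)
                inside = !inside;
        }
        return inside;
    }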
The first is easy to implement and widely used.
I have a path made up of a list of 2D points. I want to turn these into a strip of triangles in order to render a textured line with a specified thickness (and other such things). So essentially the list of 2D points need to become a list of vertices specifying the outline of a polygon that if rendered would render the line. The problem is handling the corner joins, miters, caps etc. The resulting polygon needs to be "perfect" in the sense of no overdraw, clean joins, etc. so that it could feasibly be extruded or otherwise toyed with.
Are there any simple resources around that can provide algorithm insight, code or any more information on doing this efficiently?
I absolutely DO NOT want a full fledged 2D vector library (cairo, antigrain, OpenVG, etc.) with curves, arcs, dashes and all the bells and whistles. I've been digging in multiple source trees for OpenVG implementations and other things to find some insight, but it's all terribly convoluted.
I'm definitely willing to code it myself, but there are many degenerate cases (small segments + thick widths + sharp corners) that create all kinds of join issues. Even a little help would save me hours of trying to deal with them all.
EDIT: Here's an example of one of those degenerate cases that causes ugliness if you were simply to go from vertex to vertex. Red is the original path. The orange blocks are rectangles drawn at a specified width aligned and centered on each segment.
Oh well - I've tried to solve that problem myself. I wasted two months on a solution that tried to solve the zero-overdraw problem. As you've already found out, you can't deal with all the degenerate cases and have zero overdraw at the same time.
You can however use a hybrid approach:
Write yourself a routine that checks if the joins can be constructed from simple geometry without problems. To do so you have to check the join-angle, the width of the line and the length of the joined line-segments (line-segments that are shorter than their width are a PITA). With some heuristics you should be able to sort out all the trivial cases.
I don't know what your average line-data looks like, but in my case more than 90% of the wide lines had no degenerate cases.
For all other lines:
You've most probably already found out that if you tolerate overdraw, generating the geometry is a lot easier. Do so, and let a polygon CSG algorithm and a tesselation algorithm do the hard job.
I've evaluated most of the available tesselation packages, and I ended up with the GLU tesselator. It was fast, robust, and never crashed (unlike most other implementations). It was free, and the license allowed me to include it in a commercial program. The quality and speed of the tesselation are okay: you will not get Delaunay-triangulation quality, but since you just need the triangles for rendering, that's not a problem.
Since I disliked the tesselator API I lifted the tesselation code from the free SGI OpenGL reference implementation, rewrote the entire front-end and added memory pools to get the number of allocations down. It took two days to do this, but it was well worth it (like factor five performance improvement). The solution ended up in a commercial OpenVG implementation btw :-)
If you're rendering with OpenGL on a PC, you may want to move the tesselation/CSG-job from the CPU to the GPU and use stencil-buffer or z-buffer tricks to remove the overdraw. That's a lot easier and may be even faster than CPU tesselation.
I just found this amazing work:
http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation
It seems to do exactly what you want, and its licence allows use even in commercial applications. Plus, the author did a truly great job detailing his method. I'll probably give it a shot at some point to replace my own not-nearly-as-perfect implementation.
A simple method off the top of my head.
Bisect the angle of each 2D vertex; this will create a nice miter line. Then move along that line, both inward and outward, by the amount of your "thickness" (or thickness divided by two?); you now have your inner and outer polygon points. Move to the next point and repeat the same process, building your new polygon points along the way. Then apply a triangulation to get your render-ready vertices.
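A sketch of that construction, using my own Vec2 helpers; the clamp on the dot product is only a crude guard against the degenerate sharp-angle cases discussed elsewhere on this page:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    static Vec2 sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
    static Vec2 unit(Vec2 v) {
        float m = std::sqrt(v.x * v.x + v.y * v.y);
        return { v.x / m, v.y / m };
    }

    // Offset every vertex along its angle bisector (the miter line) to get
    // the inner and outer outlines; pair them up afterwards as a triangle
    // strip. Assumes the path has at least two points.
    void buildRibbon(const std::vector<Vec2>& path, float halfWidth,
                     std::vector<Vec2>& left, std::vector<Vec2>& right) {
        for (size_t i = 0; i < path.size(); ++i) {
            Vec2 din  = unit(i == 0 ? sub(path[1], path[0])
                                    : sub(path[i], path[i - 1]));
            Vec2 dout = unit(i + 1 == path.size() ? sub(path[i], path[i - 1])
                                                  : sub(path[i + 1], path[i]));
            Vec2 n1 = { -din.y,  din.x };            // segment normals
            Vec2 n2 = { -dout.y, dout.x };
            Vec2 miter = unit({ n1.x + n2.x, n1.y + n2.y });
            // Lengthen the miter so the ribbon keeps its width at the joint;
            // the clamp caps the spike produced by very sharp angles.
            float d   = std::max(0.1f, miter.x * n1.x + miter.y * n1.y);
            float len = halfWidth / d;
            left.push_back ({ path[i].x + miter.x * len, path[i].y + miter.y * len });
            right.push_back({ path[i].x - miter.x * len, path[i].y - miter.y * len });
        }
    }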
I ended up having to get my hands dirty and write a small ribbonizer to solve a similar problem.
For me the issue was that I wanted fat lines in OpenGL that did not have the kinds of artifacts I was seeing with OpenGL on the iPhone. After looking at various solutions (bezier curves and the like), I decided it was probably easiest to just make my own. There are a couple of different approaches.
One approach is to find the angle of intersection between two segments and then move along that intersection line a certain distance away from the surface and treat that as a ribbon vertex. I tried that and it did not look intuitive; the ribbon width would vary.
Another approach is to actually compute a normal to the surface of the line segments, use that to compute the ideal ribbon edge for each segment, and do actual intersection tests between ribbon segments. This worked well, except that for sharp corners the ribbon segment intersections were too far away (when the inter-segment angle approached 180°).
I worked around the sharp-angle issue with two approaches. The Paul Bourke line intersection algorithm (which I used in an unoptimized way) suggested detecting whether the intersection was inside the segments. Since both segments are identical, I only needed to test one of them for intersection. I could then arbitrate how to resolve this, either by fudging a best point between the two ends or by putting on an end cap - both approaches look good - though the end-cap approach may throw off the polygon front/back facing ordering for OpenGL.
See http://paulbourke.net/geometry/lineline2d/
See my source code here : https://gist.github.com/1474156
I'm interested in this too, since I want to perfect my mapping application's (Kosmos) drawing of roads. One workaround I used is to draw the polyline twice, once with a thicker line and once with a thinner, with a different color. But this is not really a polygon, it's just a quick way of simulating one. See some samples here: http://wiki.openstreetmap.org/wiki/Kosmos_Rendering_Help#Rendering_Options
I'm not sure if this is what you need.
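For reference, the draw-twice trick is just two passes over the same vertex array; a sketch, noting that wide glLineWidth values aren't guaranteed on every implementation:

    #include <GL/gl.h>

    // Two passes over the same line strip: a wide dark "casing" and a
    // narrower light "fill" - simulates an outlined road without polygons.
    void drawRoad(GLsizei n) {        // vertex array already set up and bound
        glColor3f(0.25f, 0.25f, 0.25f);
        glLineWidth(7.0f);            // wide-line support is implementation-dependent
        glDrawArrays(GL_LINE_STRIP, 0, n);

        glColor3f(0.95f, 0.75f, 0.30f);
        glLineWidth(5.0f);
        glDrawArrays(GL_LINE_STRIP, 0, n);
    }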
I think I'd reach for a tessellation algorithm. It's true that in most cases where these are used, the aim is to reduce the number of vertexes to optimise rendering, but in your case you could parameterise it to retain all the detail - and the possibility of optimising may come in useful.
There are numerous tessellation algorithms and code around on the web - I wrapped a pure C one in a DLL a few years back for use with a Delphi landscape renderer, and they are not an uncommon subject for advanced graphics coding tutorials and the like.
See if Delaunay triangulation can help.
In my case I could afford to overdraw. I just drew circles with radius = width/2 centered on each of the polyline's vertices.
Artifacts are masked this way, and it is very easy to implement, if you can live with "rounded" corners and some overdrawing.
From your image it looks like you are drawing boxes around the line segments with FILL on, using an orange color. Doing so is going to create bad overdraws for sure. So the first thing to do would be to not render the black border, and to make the fill color opaque.
Why can't you use the GL_LINES primitive to do what you intend? You can specify width, filtering, smoothness, texture - anything. You can render all vertices using glDrawArrays(). I know this is not something you have in mind, but as you are focusing on 2D drawing, this might be an easier approach. (Search for "textured lines", etc.)