I'm looking for a concept to store faces, edges and vertices (in 3-dimensional Euclidean space) such that
information (about relations) isn't duplicated
queries for adjacent and neighboring faces/edges/vertices are fast
the mesh is not limited to connected faces of the same winding
Definitions:
neighbor of a face: a face that shares an edge with this face
neighbor of a vertex: a vertex on the other end of an edge sharing this vertex
adjacent edge: an edge that shares an endpoint vertex with this edge
I have considered the Half-Edge data structure, but queries on it only really work when all connected faces have the same winding.
For instance, consider this pseudo code to access such related entities:
face.neighbors #the neighboring faces
face.edges #the edges shared by this face (in the right winding order)
face.verts #the vertices of that face (in the right winding order)
edge.v1, edge.v2 #the two vertices making up an edge
vertex.edges #the edges this vertex shares
vertex.neighbors # the neighbors of this vertex along each shared edge
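For concreteness, a minimal index-based sketch (all names are hypothetical, not from any particular library) of how the vertex/edge part of these queries could be stored without duplicating relations:

```cpp
#include <cstdint>
#include <vector>

struct Edge { uint32_t v1, v2; };               // the two vertices making up an edge
struct Vertex { std::vector<uint32_t> edges; }; // indices of the edges this vertex shares

struct Mesh {
    std::vector<Vertex> verts;
    std::vector<Edge> edges;

    uint32_t addVertex() {
        verts.push_back({});
        return uint32_t(verts.size() - 1);
    }
    uint32_t addEdge(uint32_t a, uint32_t b) {
        edges.push_back({a, b});
        uint32_t e = uint32_t(edges.size() - 1);
        verts[a].edges.push_back(e); // each incidence is recorded once per endpoint
        verts[b].edges.push_back(e);
        return e;
    }
    // vertex.neighbors: the vertex on the other end of each shared edge
    std::vector<uint32_t> vertexNeighbors(uint32_t v) const {
        std::vector<uint32_t> out;
        for (uint32_t e : verts[v].edges)
            out.push_back(edges[e].v1 == v ? edges[e].v2 : edges[e].v1);
        return out;
    }
};
```

Faces would similarly store edge indices; because the edges here are undirected, none of these queries depend on winding.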
I would take a look at CGAL, the Computational Geometry Algorithms Library. You might be able to use something directly, or at least get some good ideas. Half-edge sounds like a good idea; in my opinion, you should enforce uniform winding as much as possible. It sounds like you're not interested in that, however.
One idea could be to retain a pair (one element for each winding) in the data structure for a face; this might give you enough flexibility to implement some kind of half-edge-like data structure on top of it, with good efficiency.
I suggest you also take a look at OpenMesh. It is also heavy on the C++ template side like CGAL.
If you don't want to be limited to manifold surfaces, winged-edge is the standard choice these days. If never duplicating relation information is really your top priority, then I guess you're out of luck.
I found a paper about a data structure for 3D meshes: http://wscg.zcu.cz/wscg2006/Papers_2006/Short/E17-full.pdf
Not so much a recommended structure, but rather a 3D library used by a pro for computational tasks, which can be built upon.
Michael Garland, a research scientist with NVIDIA Research, has released the graphics library he uses for computational tasks on 3D models and meshes.
libgfx
qslim, a mesh simplification app based on libgfx (I have personally used it and ported qvis to Mac OS X)
Related
I implemented a data structure for material removal simulation based on a sparse voxel octree (SVO). Now I want to visualize the result, so I need to triangulate my sparse voxel octree.
How can I do that? Can you recommend any fast algorithms for that?
A standard voxel model can be triangulated using marching cubes (MC). But as far as I can see, I cannot adapt this algorithm for an SVO. The MC algorithm is based on the 15 base patterns which are used to generate the triangles (with the help of a LUT for better performance). But these patterns no longer work for SVO voxels, because those voxels can have different sizes depending on the local resolution of the tree branch.
So, how do other people triangulate their SVO?
There's an algorithm called the "Transvoxel Algorithm" you can use with marching cubes. I won't post the details here, but you can google it. It does some internal voxel tessellation. I have my own tessellation algorithm which is somewhat simplified in that it has far fewer cases; however, both of these only allow for a single level of resolution change at a time.
Your best bet may be to not use MC at all and go to surface nets. The main downside is that it can generate non-manifold geometry (if that's something you care about). There are several other variations of that, such as "dual contouring", you might want to look into as well. Dual contouring allows for sharp corners but requires Hermite data. I believe there is also a manifold version of dual contouring and/or surface nets, at the cost of some added complexity.
In any case all this stuff will work with a voxel octree, but it does require some work.
I'm trying to implement my own little CAD and wondering how to organize data for cubic Bézier surface primitives. My primitives, for example a box, will contain six cubic Bézier patches, each constructed separately with its own data for convenience. Each patch has 16 control points. My primitives will be stitched together for any interaction (selecting points): for example, any point on the edge of a patch will share its position with the corresponding point of the neighboring patch. I could delete the duplicated points, but for rendering and updating the primitives I need to keep the data untouched, and at the same time I need a robust mouse picking algorithm which picks these points on edges and moves the picked point together with the corresponding points of the neighboring patches.
And I think I have two options:
Organizing the data as a std::multimap or something else where several points are linked through keys, but then I'll have problems with searching for the points.
Improving the picking algorithm so that it picks the 2-3 coincident points as one point, but I think that's a bad solution.
What is a common way to solve this problem? Thanks for any advice.
One common and relatively simple way is a pointer or index-based data structure. Example for the latter one:
std::vector<Vector3> vertices;
struct Patch
{
// Zero-based indices of the 16 control points in the vertices vector
uint32_t indices[4][4];
};
std::vector<Patch> patches;
One downside is that it's expensive to erase vertices, because the patches need to be fixed by adjusting these indices. Another downside is that it's expensive to enumerate the patches that link to a specific vertex, but if you need to do that often, you can build & maintain a separate index for that, e.g. std::unordered_multimap<uint32_t, uint32_t> lookupPatches;
This has the upside that if you're tessellating these Bézier patches on the GPU, it's very efficient to upload both vertices (vertex buffer) and patches (index buffer). E.g. for D3D11, it's Map, memcpy, Unmap.
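As a sketch of the suggested reverse index (a hypothetical helper built on the Patch struct above), building it once makes "which patches use control point v" an average O(1) query:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Patch {
    uint32_t indices[4][4]; // zero-based indices into the vertices vector
};

// Build the vertex -> patch reverse index: one entry per control point use.
std::unordered_multimap<uint32_t, uint32_t>
buildLookup(const std::vector<Patch>& patches) {
    std::unordered_multimap<uint32_t, uint32_t> lookup;
    for (uint32_t p = 0; p < patches.size(); ++p)
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                lookup.emplace(patches[p].indices[i][j], p);
    return lookup;
}
```

The map has to be rebuilt (or patched up) whenever patches are added or erased.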
I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45 degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of google images:
Ignore the image to the right, the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting it, though, I'd like to simplify it. So in the above cube, I'd like the up-down-left-right-back-front faces to be composed of a single quad each (two triangles), and the edges would be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create single polygon, then split polygon to avoid holes, use ear clipping to triangulate).
I'm clearly over complicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitations are that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution, I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to just leave the quads whole, rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse through normal deviation (in my case, if the resulting faces from the collapse had normals not equal to their original values, then the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
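Detecting those complex edges is indeed straightforward: count how many faces reference each undirected edge and flag any edge referenced more than twice. A small sketch (helper names are my own):

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using Tri = std::array<uint32_t, 3>;
using EdgeKey = std::pair<uint32_t, uint32_t>;

// Return every undirected edge that is referenced by more than two faces.
std::vector<EdgeKey> complexEdges(const std::vector<Tri>& tris) {
    std::map<EdgeKey, int> count;
    for (const Tri& t : tris)
        for (int i = 0; i < 3; ++i) {
            uint32_t a = t[i], b = t[(i + 1) % 3];
            if (a > b) std::swap(a, b); // canonical undirected key
            ++count[{a, b}];
        }
    std::vector<EdgeKey> out;
    for (const auto& kv : count)
        if (kv.second > 2) out.push_back(kv.first);
    return out;
}
```

Each flagged edge could then be resolved by duplicating the appropriate vertices.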
I am giving a theoretical answer.
For the figure on the left, find all edge-sharing triangles with the same normal (same x, y, z components; make the normals unit length, since scaling a vector by a positive factor doesn't change its direction). Merge them. Then triangulating the merged polygon with maximum aspect ratio will give the solution you want.
Another easy and possible approach to mesh simplification is the following.
Take the normals and divide each by its magnitude (the square root of the sum of the squares of its components); this gives the unit normal vector. Then take adjacent triangles and compute the dot product of their normals (multiply the x, y, z components pairwise and add). This gives the cosine of the angle between the normals, i.e. between the triangles. Pick a range (like 0.99-1), take all adjacent triangles within this range with respect to the reference triangle, merge them and re-triangulate. We can definitely ignore small-area triangles pointing in odd directions.
There is also another proposal for an even simpler mesh reduction, as in your left figure or building figures. Define a pre-defined set of face normals (here 6 + 8 = 14), classify each face by the closest of these directions (by dot product), then merge and re-triangulate.
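The normalize-and-dot-product test described above is only a few lines; a sketch (names are mine, and the 0.99 threshold is just the example value from the range above):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Divide a normal by its magnitude to get a unit vector.
Vec3 normalize(Vec3 v) {
    double m = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / m, v.y / m, v.z / m};
}

// True when the cosine of the angle between two face normals falls in the
// merge range, i.e. the faces are near-coplanar and can be merged.
bool nearCoplanar(Vec3 n1, Vec3 n2, double cosThreshold = 0.99) {
    Vec3 a = normalize(n1), b = normalize(n2);
    double cosAngle = a.x * b.x + a.y * b.y + a.z * b.z;
    return cosAngle >= cosThreshold;
}
```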
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make about applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd party library? (CGAL? GTS doesn't appear active any longer, but there are others.)
I read about octrees and I didn't fully understand how they would work / be implemented in a voxel world, where the octree's purpose is to lower the number of voxels you would render by merging repeated voxels into one big "voxel".
Here are the questions I want clarification about:
What type of data structure would you use? How could you turn a 3D array of voxels into an array that has different-sized voxels taking up multiple locations in the array?
What are the nodes and what are they used for?
Does the octree merge the voxels so there are ONLY cube shapes, or could it be a rectangle, or an L shape, or an entire Y column of voxels, or what?
Do octrees really improve the performance of a voxel game? If so, usually by how much?
Quick answers:
A tree: each node has 8 children (top-back-left, top-back-right, etc.) down to a certain level. The code for this can get quite complex, especially if the voxels can change at runtime.
The nodes store the type of voxel (colour, material, a list of items).
Yep, cubes only; more specifically 1x1, 2x2, 4x4, 8x8, etc. It must be an entire node. If you really want to, you could define some sort of patterns, but then it's no longer an octree.
Yeah, but it depends on your data. Imagine describing 256 identical blocks individually, versus describing them once (like air in Minecraft).
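The points above can be sketched as a tiny octree node with the uniform-children collapse (my own naming, and a sketch rather than a full runtime-editable implementation):

```cpp
#include <array>
#include <memory>

struct Node {
    int type = 0;                               // voxel type, meaningful for leaves
    std::array<std::unique_ptr<Node>, 8> kids;  // all null for a leaf
    bool isLeaf() const { return !kids[0]; }
};

// Collapse bottom-up: if all 8 children are leaves of the same type,
// replace them with one bigger leaf (so e.g. a uniform region of
// identical blocks becomes a single node instead of many).
void tryCollapse(Node& n) {
    if (n.isLeaf()) return;
    for (auto& k : n.kids) tryCollapse(*k);
    int t = n.kids[0]->type;
    for (auto& k : n.kids)
        if (!k->isLeaf() || k->type != t) return; // not uniform: keep children
    n.type = t;
    for (auto& k : n.kids) k.reset(); // now a leaf covering the whole cube
}
```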
I'd start by trying to understand quadtrees first. You can do that on paper, or make a test program with it. You'll answer these questions yourself if you experiment.
An octree done correctly can also help you with neighbour searches, which enable you to determine if a face is considered to be "visible" (i.e. so you end up with only the hull of voxels visible). Once you've established your octree, you use it to store your XYZ coords, which you then extract into a single array. You then feed this array into your vertex buffer (GL solutions require this), which you can then render in chunks as needed (as the camera moves forward etc.).
Octrees also, by their very nature, collapse cubes into bigger ones if they are of the same type... much like Tetris does when you have colors/shapes that "fit" one another. This in turn can reduce your vertex count, and at render time you're really drawing a combination of squares and rectangles.
If done correctly, you will end up with a lot of chunks that only have the out-facing "faces" visible in the vertex buffers. You then also have to build your own occlusion culling algorithm, which reduces the visibility on top of this, resulting in less rendering required.
I did an example here:
https://vimeo.com/71330826
Notice how only the outside is being rendered, but the chunks themselves go all the way down to the bottom even though the chunks' depth faces should cancel each other out? (It needs more optimisation.) Also note how, as the camera turns around, the faces are removed from the rendering buffers.
I have a QuadTree which can be subdivided by placing objects in the nodes. I also have a planet made in OpenGL in the form of a quad sphere. The problem is I don't know how to put them together. How does a QuadTree store information about the planet? Do I store vertices in the leaf QuadTree nodes? And if so, how do I split the vertex data into 4 sets without ruining the texturing and normals? If that is the case, do I use indices instead?
So my question in short really is:
How do I store my vertex data in a quadtree so that I can split up the terrain on the planet, making the planet higher detail at closer range? I assume this is done by using the camera as the object that splits the nodes.
I've read many articles, and most of them fail to cover this. The quadtree is one of the most important things for my application, as it will allow me to render many planets at the same time while still getting good definition at ground level. A pretty picture of my planet and its HD sun:
A video of the planet can also be found Here.
I've managed to implement a simple quadtree on a flat plane, but I keep getting massive holes, as I think I'm getting the positions wrong. It's the last post here - http://www.gamedev.net/topic/637956-opengl-procedural-planet-generation-quadtrees-and-geomipmapping/ - and you can get the src there too. Any ideas how to fix it?
What you're looking for is an algorithm like ROAM (Real-time Optimally Adapting Mesh) to be able to increase or decrease the accuracy of your model based on the distance of the camera. The algorithm will make use of your quadtree then.
Check out this series on gamasutra on how to render a Real-time Procedural Universe.
Edit: the reason why you would use a quadtree with these methods is to minimize the number of vertices in areas where detail is not needed (flat terrain, for example). The quadtree definition on wikipedia is pretty good; you should use that as a starting point. The goal is to create child nodes in your quadtree where you have changes in your "height" (you could generate the sides of your cube using a heightmap) until you reach a predefined depth. Maybe, as a first pass, you should try avoiding the quadtree and use a simple grid. When you get that working, you "optimize" your process by adding the quadtree.
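For the camera-driven part, a common split criterion (a hypothetical sketch; the name and factor are my own, and the factor is tunable) is to subdivide a node whenever the camera is closer than some multiple of the node's size:

```cpp
// Subdivide a quadtree node when the camera is within `factor` times the
// node's edge length, so nearby terrain gets more detail than distant terrain.
bool shouldSplit(double nodeSize, double distToCamera, double factor = 2.0) {
    return distToCamera < nodeSize * factor;
}
```

Applied recursively from the root each frame (splitting while true, merging when false), this gives the distance-based LOD the question asks about.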
To understand how quadtrees and terrain data work together to achieve LOD-based rendering, read this paper. It's easy to understand, with illustrative examples.
I did once implement a LOD on a sphere. The idea is to start with a simple dipyramid, the upper pyramid representing the northern hemisphere and the lower one the southern hemisphere. The bases of the pyramids align with the equator; the tips are at the poles.
Then you subdivide each triangle into 4 smaller ones, as much as you want, by connecting the midpoints of the edges of the triangle.
The "as much as you want" is based on your needs; distance to the camera and object placement could be your triggers for subdivision.
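The 1-to-4 midpoint split described above looks like this (a sketch with my own types):

```cpp
#include <array>
#include <vector>

struct Vec3 { double x, y, z; };
using Tri = std::array<Vec3, 3>;

Vec3 mid(const Vec3& a, const Vec3& b) {
    return {(a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2};
}

// Connect the midpoints of the three edges: three corner triangles
// plus the center one.
std::vector<Tri> subdivide(const Tri& t) {
    Vec3 m01 = mid(t[0], t[1]), m12 = mid(t[1], t[2]), m20 = mid(t[2], t[0]);
    return { Tri{t[0], m01, m20},
             Tri{m01, t[1], m12},
             Tri{m20, m12, t[2]},
             Tri{m01, m12, m20} };
}
```

For a sphere, each new midpoint vertex would additionally be re-projected onto the sphere's surface.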