I'm trying to build a 2.5D mesh from a LIDAR dataset A consisting of ~50,000,000 points (x,y,z) to describe the terrain in an urban area. The final aim is to use this mesh for numerical simulation of flooding, and I'm having trouble with elements that are too small. I took the following steps:
use the Point_set_processing_3 package to create a reduced set B of points (x,y,z) retaining about 1% of the original data (CGAL::wlop_simplify_and_regularize_point_set)
create a constrained Delaunay triangulation cdt using simplified building footprints as constraints (CGAL::Constrained_Delaunay_triangulation_2)
insert the points from B into cdt if they are located at least 1 m from any existing vertex in the triangulation (a sketch of this step follows the list)
refine the mesh using Delaunay_mesher_2
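In code, the spacing test of step 4 looks roughly like this (a sketch rather than the exact implementation: locate() only finds the containing face, so testing against its three vertices approximates the true nearest-vertex distance; B_projected stands for the points of B projected to the xy-plane):
// approximate minimum-spacing test before inserting a point from B
for (const K::Point_2& p : B_projected) {
  CDT::Face_handle f = cdt.locate(p);
  bool too_close = false;
  if (f != CDT::Face_handle() && !cdt.is_infinite(f))
    for (int i = 0; i < 3; ++i)
      if (CGAL::squared_distance(f->vertex(i)->point(), p) < 1.0) // 1 m -> 1 m^2
        too_close = true;
  if (!too_close)
    cdt.insert(p, f); // reuse the located face as an insertion hint
}
Step 5 then runs the mesher: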
#include <CGAL/Delaunay_mesher_2.h>
#include <CGAL/Delaunay_mesh_size_criteria_2.h>
// criteria: 0.125 bounds the shape (smallest angle ~20.7 degrees),
// size bound 0 means no upper bound on edge length
typedef CGAL::Delaunay_mesh_size_criteria_2<CDT> Criteria;
typedef CGAL::Delaunay_mesher_2<CDT, Criteria> Meshing_engine;
Meshing_engine engine(cdt, Criteria(0.125, 0));
engine.refine_mesh();
So far so good. The resulting mesh looks very nice. However, I end up with quite a lot of very small triangles, which will make any hydraulic simulation very difficult, and I have no idea how to get rid of them.
The criteria used in combination with refine_mesh() always seem to aim at penalizing large elements, while my problem is the opposite: in many areas large triangles are perfectly fine, but triangles with an area of 0.5 m² should never occur.
Another option would be to manually remove vertices from the triangulation using cdt.remove(vertex_handle). But, given a face that is too small, how do I pick the vertex to remove?
Can anyone point me in the right direction? I attached an image where one of the problematic areas is marked in red.
2.5D mesh with problem areas marked red. Empty areas are buildings.
I have a set of 2D points of a known density that I want to mesh while taking the holes into account. Basically, given the following input:
I want something like this:
I tried PCL's ConcaveHull, but it doesn't handle the holes and split meshes very well.
I looked at CGAL alpha shapes, which seem to go in the right direction (creating a polygon from a point cloud), but I don't know how to get triangles after that.
I thought of passing the resulting polygons to a constrained triangulation algorithm and marking domains, but I didn't find how to get a list of polygons.
Producing the triangulated polygon is at least a two-step process. First, triangulate your 2D points (using something like a Delaunay2D algorithm); there you can set the maximum edge length for the triangles and get the desired shape. Then you can decimate the point cloud and re-triangulate. Another option is to use the convex hull to get the outside polygon, then extract the inside polygon through a TriangulationCDT algorithm, apply some PolygonBooleanOperations to obtain the desired polygon, and finally re-triangulate.
I suggest you look into the Geometric Tools library and specifically the Geometric Samples. I think everything you need is in there, and it is much less library- and path-heavy than CGAL (whose algorithms are not free for this type of work unless it is a school project) or the PCL (I really like that library for segmentation, but its triangulation breaks often and is slow).
If this solves your problem, please mark it as your answer. Thank you!
I'm still trying to density-control (grade) meshes in CGAL, specifically when tet-meshing a polygon surface (or multiple surface manifolds) that I simply load as OFF files. I can also load lists of selected faces or face nodes.
But I can't seem to get to first base on this with the polygon tet-mesher. All I want to do is assign and enforce a mesh density/size at selected faces of the OFF file.
I CAN get some kinds of mesh density control working by inserting 1D features with volumetric data meshing, but for CAD and 3D printing purposes it has to be computed from an STL-like triangular surface manifold, so volume-based meshing is not doable.
Is what I'm trying to do even possible in CGAL? It feels to me like it must be, and I'm just missing something obvious.
I really hope someone can help here. FYI, I'm mostly working from the Mesh_3 examples using v4.14.
Thanks very much.
Look at Mesh_facet_criteria, and in particular the constructor where a SizingField lets you control the size. For locating a point with respect to a face, you can use the AABB tree function closest_point_and_primitive().
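A minimal sketch of how those pieces might fit together (my assumptions: Tree is a CGAL::AABB_tree built over your input facets, and size_for_facet() is a hypothetical per-facet size lookup you would supply):
// models the MeshDomainField_3 concept accepted by the facet criteria
struct Adaptive_sizing_field {
  typedef K::FT FT;
  const Tree& tree; // AABB tree over the input surface (assumption)
  FT operator()(const K::Point_3& p, int, const Mesh_domain::Index&) const {
    // find the input facet nearest to the query point
    Tree::Point_and_primitive_id pp = tree.closest_point_and_primitive(p);
    return size_for_facet(pp.second); // hypothetical per-facet target size
  }
};
// then pass an instance as the facet size bound, e.g.
// Mesh_criteria criteria(facet_angle = 25, facet_size = Adaptive_sizing_field{tree});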
I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of Google Images:
Ignore the image on the right; the focus is the one on the left. Currently, in the building phase, I draw each face of each cell separately. When it comes to exporting, though, I'd like to simplify: in the above cube, the up/down/left/right/back/front faces should each be composed of a single quad (two triangles), and each chamfered edge should be reduced from two quads to a single quad.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, and use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The one limitation is that indices can't be shared between faces (because I need the normals pointing in different directions); otherwise I don't mind if it's not the fastest or most optimal solution, as I'd rather it be easy to implement and maintain.
EDIT: Just to clarify further, I've already performed hidden-face removal; that's not an issue. And it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks go to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to leave the quads whole rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, deciding which edges to collapse through normal deviation (in my case, if the faces resulting from a collapse had normals not equal to their original values, the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with); see the sketch below.
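For reference, a rough sketch of that kind of OpenMesh setup (module names are OpenMesh's; the 1-degree threshold is an illustrative assumption, not the exact value I used):
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>
#include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

void simplify(Mesh& mesh) {
  mesh.request_face_normals();
  mesh.update_face_normals();
  OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
  OpenMesh::Decimater::ModQuadricT<Mesh>::Handle hQuadric;         // priority module
  OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle hNormDev; // binary constraint
  decimater.add(hQuadric);
  decimater.add(hNormDev);
  // reject any collapse that tilts a face normal by more than ~1 degree,
  // which preserves the sharp cube edges
  decimater.module(hNormDev).set_normal_deviation(1.0f);
  decimater.initialize();
  decimater.decimate();       // collapse as many edges as the modules allow
  mesh.garbage_collection();
}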
There's still one issue I have yet to resolve: if two cubes touch diagonally, the result is an edge with four faces connected to it, a complex (non-manifold) edge! I suspect it'd be trivial to iterate through the edges checking the number of connected faces, and then resolve it by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest time in fixing unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left, find all edge-sharing triangles with the same normal direction (use unit normals, since scaling a vector by a positive factor does not change its direction). Merge them. Then re-triangulating the merged region while maximizing the aspect ratio will give the solution you want.
Another easy and feasible way of simplifying the mesh is the following.
Take the NORMALS and divide each by its magnitude (the square root of the sum of the squares of its coordinates); this gives a unit normal vector. Then take adjacent triangles and compute the DOT PRODUCT of their unit normals (multiply the x, y, z coordinates pairwise and add them up). This gives the COSINE of the angle between the normals, i.e. between the triangles. Pick a range (like 0.99-1), merge all adjacent triangles whose cosine with respect to the reference triangle falls in that range, and re-triangulate. We can safely ignore small-area triangles pointing in weird directions.
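A small self-contained sketch of that test (generic C++, not tied to any library):
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 normalized(const Vec3& v) {
  double m = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { v.x / m, v.y / m, v.z / m };
}

// true if two face normals differ by less than ~8 degrees (cos 8 deg ~ 0.99)
bool nearly_coplanar(const Vec3& n1, const Vec3& n2, double cosTol = 0.99) {
  Vec3 a = normalized(n1), b = normalized(n2);
  return a.x * b.x + a.y * b.y + a.z * b.z >= cosTol;
}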
There is also another proposal for an even simpler mesh reduction, as in your left figure or in building models: define a fixed set of face directions (here 6 + 8 = 14 normal values), classify every face by the direction it is closest to (by dot product), then merge and re-triangulate.
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions for applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd party library? (i.e. CGAL? GTS doesn't appear active any longer, but there are others) .
I have a QuadTree which can be subdivided by placing objects in the nodes. I also have a planet made in OpenGL in the form of a quad sphere. The problem is I don't know how to put them together. How does a QuadTree store information about the planet? Do I store vertices in the leaf QuadTree nodes? And if so, how do I split the vertex data into 4 sets without ruining the texturing and normals? If this is the case, should I use indices instead?
So my question in short really is:
How do I store my vertex data in a quadtree so that I can split up the terrain on the planet, making the planet higher-detail at closer range? I assume this is done by using the camera as the object that splits the nodes.
I've read many articles, and most of them fail to cover this. The quadtree is one of the most important things for my application, as it will allow me to render many planets at the same time while still getting good definition at ground level. A pretty picture of my planet and its HD sun:
A video of the planet can also be found Here.
I've managed to implement a simple quadtree on a flat plane, but I keep getting massive holes, as I think I'm getting the positions wrong. It's the last post here - http://www.gamedev.net/topic/637956-opengl-procedural-planet-generation-quadtrees-and-geomipmapping/ - and you can get the source there too. Any ideas how to fix it?
What you're looking for is an algorithm like ROAM (Real-time Optimally Adapting Mesh), which increases or decreases the accuracy of your model based on the distance to the camera. The algorithm will then make use of your quadtree.
Check out this series on Gamasutra on how to render a real-time procedural universe.
Edit: the reason you would use a quadtree with these methods is to minimize the number of vertices in areas where detail is not needed (flat terrain, for example). The quadtree definition on Wikipedia is pretty good; you should use that as a starting point. The goal is to create child nodes wherever you have changes in "height" (you could generate the sides of your cube from a heightmap), until you reach a predefined depth. Maybe, as a first pass, you should try avoiding the quadtree and use a simple grid. When you get that working, you "optimize" the process by adding the quadtree, as sketched below.
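Something along these lines (a minimal sketch under my own assumptions; the split distance and the depth cap are arbitrary):
#include <cmath>
#include <memory>

// one square terrain patch; the four children refine it 2x in each direction
struct QuadNode {
  float cx, cz, halfSize; // center and half-extent of the patch
  int depth;
  std::unique_ptr<QuadNode> child[4];

  void update(float camX, float camZ, int maxDepth) {
    float dx = camX - cx, dz = camZ - cz;
    bool close = std::sqrt(dx * dx + dz * dz) < 4.0f * halfSize;
    if (depth < maxDepth && close) {
      if (!child[0]) split();
      for (auto& c : child) c->update(camX, camZ, maxDepth);
    } else {
      for (auto& c : child) c.reset(); // merge back into a single patch
      // draw this node's vertex patch here, at its own resolution
    }
  }
  void split() {
    float h = halfSize * 0.5f;
    child[0].reset(new QuadNode{cx - h, cz - h, h, depth + 1});
    child[1].reset(new QuadNode{cx + h, cz - h, h, depth + 1});
    child[2].reset(new QuadNode{cx - h, cz + h, h, depth + 1});
    child[3].reset(new QuadNode{cx + h, cz + h, h, depth + 1});
  }
};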
To understand how a quadtree and terrain data work together to achieve LOD-based rendering, read this paper. It is easy to understand, with illustrative examples.
I once implemented a LOD scheme on a sphere. The idea is to start with a simple dipyramid, the upper pyramid representing the northern hemisphere and the lower one the southern hemisphere. The bases of the pyramids align with the equator; the tips sit at the poles.
Then you subdivide each triangle into 4 smaller ones, as many times as you want, by connecting the midpoints of the edges of the triangle.
"As much as you want" depends on your needs; distance to the camera and object placement could be your triggers for subdivision.
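For illustration, a minimal sketch of that midpoint split (my own version, with the midpoints projected back onto the unit sphere):
#include <array>
#include <cmath>

struct V3 { float x, y, z; };

V3 normalize(const V3& v) { // project a point onto the unit sphere
  float m = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { v.x / m, v.y / m, v.z / m };
}
V3 midpoint(const V3& a, const V3& b) {
  return normalize({ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f });
}

struct Tri { V3 a, b, c; };

// split one triangle into four by connecting the midpoints of its edges
std::array<Tri, 4> subdivide(const Tri& t) {
  V3 ab = midpoint(t.a, t.b), bc = midpoint(t.b, t.c), ca = midpoint(t.c, t.a);
  return {{ {t.a, ab, ca}, {ab, t.b, bc}, {ca, bc, t.c}, {ab, bc, ca} }};
}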
I have a set of non-overlapping polygons. These polygons can share nodes and edges, but they strictly do not overlap.
Now I am going to mesh them using the Constrained Delaunay Triangulation (CDT) technique. I can get the mesh without problems.
My problem is that, after meshing, I want to know which mesh element belongs to which original polygon. My current approach is to compute the centroid of each mesh element and check which of the original polygons this centroid falls into. But I don't like this approach, as it is very computationally intensive.
Is there a more efficient way to do this (in terms of big-O runtime)? My projects involve tens of thousands of polygons, and I don't want the speed to suffer.
Edit: checking that all the vertices of a mesh element share a common polygon is not going to work, because there are cases where the vertices have more than one common polygon, as below (the dotted line forms a mesh element whose vertices share 2 common polygons):
I can think of two options, both of which have already been mentioned in some form:
Maintain the information in your points/vertices. See this other related question.
Recompute the information the way you did, locating each mesh element's centroid in the original polygons, but optimize this with a spatial_sort, locating the centroids sequentially in your input polygons (using the previous result as a hint for starting the next point location).
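Roughly like this (a sketch assuming the centroids are located in a triangulation built over the input polygons):
#include <CGAL/spatial_sort.h>
#include <vector>

void locate_all(const CDT& cdt, std::vector<K::Point_2>& centroids) {
  // order the queries along a space-filling curve so that consecutive
  // centroids are spatially close to each other
  CGAL::spatial_sort(centroids.begin(), centroids.end());
  CDT::Face_handle hint;
  for (const K::Point_2& c : centroids) {
    hint = cdt.locate(c, hint); // previous result used as a walking hint
    // ... map 'hint' back to a polygon id here (application-specific)
  }
}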
What about labeling each of your original vertices with a polygon id (or several, I guess, since polygons can share vertices)? Then, if I understand DT correctly, you can look at the three vertices of a given triangle in the mesh and see if they share a common label; if so, that triangle came from the labeled polygon.
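A sketch of that common-label test (generic C++, assuming each vertex carries a sorted set of polygon ids):
#include <algorithm>
#include <iterator>
#include <set>

typedef std::set<int> Labels; // polygon ids attached to one vertex

// the ids shared by all three vertices of a triangle (usually 0 or 1 entries)
Labels common_polygon(const Labels& a, const Labels& b, const Labels& c) {
  Labels ab, abc;
  std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                        std::inserter(ab, ab.begin()));
  std::set_intersection(ab.begin(), ab.end(), c.begin(), c.end(),
                        std::inserter(abc, abc.begin()));
  return abc;
}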
As Mikeb says, label all your original vertices with a polygon id.
Since you want the polygon the element is inside of, just make sure you always go clockwise around the polygons; this ensures that if points are shared between two polygons, you get the one facing the correct direction.
I would expect this approach to remain close to O(n), where n is the number of points, as each triangle can have only one or two polygons that overlap all three of its points.
Create a new graph G(V,E) in the following way: for every mesh element create a node in V, and for every dashed edge create an edge in E connecting the two corresponding mesh elements. Don't map solid edges into edges in E.
Run ConnectedComponents(G).
Every mesh element will be labeled with a component label (in 1-to-1 correspondence with the polygons), as sketched below.
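The same idea as a breadth-first flood fill over the triangles (a self-contained sketch; the adjacency and constrained-edge flags are assumed inputs):
#include <queue>
#include <vector>

struct TriFace {
  int neighbor[3];     // adjacent triangle across edge i, -1 if none
  bool constrained[3]; // true if edge i is a solid (polygon-boundary) edge
};

// label each triangle with a component id; components correspond
// one-to-one to the input polygons
std::vector<int> label_components(const std::vector<TriFace>& tris) {
  std::vector<int> label(tris.size(), -1);
  int next = 0;
  for (std::size_t s = 0; s < tris.size(); ++s) {
    if (label[s] != -1) continue;
    std::queue<std::size_t> q;
    q.push(s);
    label[s] = next;
    while (!q.empty()) {
      std::size_t t = q.front(); q.pop();
      for (int e = 0; e < 3; ++e) {
        int n = tris[t].neighbor[e];
        if (n >= 0 && !tris[t].constrained[e] && label[n] == -1) {
          label[n] = next;
          q.push(n);
        }
      }
    }
    ++next;
  }
  return label;
}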
Maybe you can call CDT separately for each polygon, and label the triangles with their polygon after each call.