Normal of a point via its location on an STL mesh model - C++

Can someone tell me the best way to estimate the normal at a point on CAD STL geometry?
This is not exactly a question on code, but rather about efficiency and approach.
I have used an approach in which I compare the point whose normal needs to be estimated against every triangle in the mesh, checking whether it lies inside the triangle using the barycentric-coordinate test. (If each barycentric coordinate lies between 0 and 1, the point lies inside.) This post explains the test:
https://math.stackexchange.com/questions/4322/check-whether-a-point-is-within-a-3d-triangle
Then I compute the normal of that triangle to get the point normal.
The problem with my approach is that if I have some 1000 points and the mesh has, say, 500 triangles, that means some 500 × 1000 checks, which takes a lot of time.
Is there an efficient data structure or approach I could use, to pinpoint the right triangle? Or a library that could get the work done?

A relatively easy solution is to use a grid: decompose the space into a 3D array of voxels, and for every voxel keep a list of the triangles that interfere with it.
By interfere, I mean that there is a nonempty intersection between the voxel and the bounding box of the triangle. (When you know the bounding box, it is straightforward to tell which voxels it covers.)
When you want to test a point, find the voxel it belongs to and compare against that voxel's list of triangles only. You will achieve a speedup of roughly N/M, where N is the total number of triangles and M is the average number of triangles per voxel.
The voxel size should be chosen carefully. Too small results in an overly large data structure; too large makes the method ineffective. If possible, aim for "a few" triangles per voxel. (Use the average triangle size, i.e. the square root of twice the triangle area, as a starting value.)
For better efficiency, you can compute the exact intersections between the triangles and the voxels, using a 3D polygon clipping algorithm (rather than a mere bounding box test), but this is more complex to implement.

Related

CGAL - Surface mesh parameterisation

I have been using LSCM parameterizer to unwrap a mesh. I would like to obtain a 2d planar model with accurate measurements such that if you make a paper cutout you could wrap it up back to the original model physically.
It seems that SMP::parameterize() is scaling the resulting OFF down to 1mm by 1mm. How do I get an OFF file with accurate measurements?
A parameterization is a UV map, associating 2D coordinates with 3D points, and such coordinates are always between (0,0) and (1,1). That's why you get a 1mm by 1mm result. You could compare a 3D edge length with its 2D version in the map and scale your 2D model by that factor. Averaging the factor over several edges would be a bit more precise.
CGAL's Least Squares Conformal Maps algorithm outputs a map in which the 2D distance between the two constrained vertices is 1mm. This means that unless the two vertices you chose to constrain were exactly 1mm apart, the output surface will be scaled.
CGAL's 'As Rigid As Possible' parameterization, on the other hand, can output a result that maintains area. Increasing the λ parameter improves the preservation of area between input and output at the expense of preserving angles, whereas reducing λ does the opposite.
Also note that increasing the number of iterations from the default will improve the output - especially if the unwrapped surface self-intersects.

Circular grid of uniform density

How can I generate a circular grid, made of tiles with uniform area/whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in cartesian coordinates, since for a tile at (i,j), I knew the positions of its neighboring tiles to be (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL, and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices), but also more amenable to cartesian coordinates. Perhaps I could render points, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows easy querying of a tile's neighborhood, and in a way that preserves a uniform density distribution of tiles.
I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circular shape with relatively little warping. It might suffice for your problem.
You might want to have a look at this paper; it was written for sampling spheres, but you might be able to adapt it to a circle.
One option is to use a polar grid with a constant angular step but varying radial steps, chosen so that all cells have the same area, i.e. (R+dR)² - R² = const, which gives dR as a function of R.
You may want to reduce the anisotropy (some cells become very elongated) by changing the number of cells every now and then (e.g. by doubling it). This will introduce singularities in the mesh, i.e. cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash

Create Topographic 2D Curves from Polygonal Mesh

I'm trying to convert a polygonal 3D mesh into a series of topographic curves that represent the part of the mesh at a specific height for every interval. So far, I've come up with the idea to intersect a horizontal plane with the mesh and get the intersection curve(s). So for this mesh:
I'd intersect a plane repeatedly at a set interval of precision:
and so on.
While this is straightforward to do visually and in a CAD application, I'm completely lost doing it programmatically. How could I calculate this in a programming environment, and what algorithms can I look into to achieve it?
I'm programming in C++ with the standard library (and Boost), loading .obj meshes with this simple loader, and I need simple cartesian 2D points to define the output curve.
One option is to process all the faces in turn, and for every face determine the horizontal planes that traverse it. For a given plane and face, check all four vertices in turn and find the changes of sign (of Zvertex - Zplane). There will normally be exactly two such changes, defining an edge that belongs to a level curve. (Exceptionally, you can find four changes of sign, which occurs when the facet isn't planar; join the points in pairs.)
Every time you find an intersection point, you tag it with the (unique) index of the plane and the (unique) index of the edge that was intersected; you also tag it with the index of the other edge that was intersected in that face.
By sorting on the plane index, you can group the intersections per plane.
For a given plane, using a hash table, you can follow the chain of intersections, from edge to edge.
This gives you the desired set of curves.

Interpolate color between voxels

I have a 3D texture containing voxels, and I am ray tracing; every time I hit a voxel I display its color. The result is nice, but you can clearly see the individual blocks separated from one another. I would like the color to blend smoothly from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel, I am not sure which neighbouring voxels to take the colors from, because I don't know whether the voxel is part of a wall parallel to some axis, a floor, or an isolated part of the scene. Ideally I would have to fetch, for every voxel, its 26 neighbouring voxels, but that can be quite expensive. Is there any fast, approximate solution for this?
PS: I notice that Minecraft has smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?

Point in vertex defined box algorithm?

How would I test whether a point is within a 3D box that is defined only by its 8 points, or by its 6 quads? (I don't have access to normal vectors.)
The box is made up of triangles, but the two polygons on each side are aligned so could be considered a quad.
You can test that by forming 6 square pyramids with your point as the apex and the 4 vertices of each quad as the base, then summing the volumes of the pyramids. If the sum of the volumes equals the volume of the box, the point is inside. If the sum is greater than the box's volume, the point is outside. (The sum can never be less than the box's volume.)
To calculate the volume of each square pyramid, you can split it into two tetrahedra, whose volumes are easily computed with a scalar triple (mixed) product. You can calculate the volume of the box with the triple product as well.
Assuming the points have a known order, you could work out the normal vectors. There's no need to normalise them for this sort of test so the cost isn't prohibitive. If you already know it's a cuboid then you need work out only two normals as you can get the third with the cross product, then use the other points to get distances. Obviously you're cross-producting to get normals anyway, so that's more a question about what information you want to expose to whom.
If the points don't have a known order, then you can probably apply a miniature version of QuickHull: starting from the initial triangle, you should find either that you already have one of the real edge faces (in which case you can use that normal, and find the relevant points at the other extreme of that normal, plus the requirement of mutual orthogonality, to get all three normals), or that one step gives you at least two real edges, which you'll spot when their local sets of points in front become empty.
A crazy idea, perhaps:
set up a 3d orthographic projection on a 1x1 pixel viewport
set the camera and near clip plane such that the point of interest is on the near clip plane
render the box without any culling
if exactly one pixel is rendered, the point is inside the box; if zero or two or more pixels are rendered, the point is outside the box