CGAL - Surface mesh parameterisation - C++

I have been using the LSCM parameterizer to unwrap a mesh. I would like to obtain a 2D planar model with accurate measurements, such that if you made a paper cutout you could physically wrap it back onto the original model.
It seems that SMP::parameterize() is scaling the resulting OFF down to 1mm by 1mm. How do I get an OFF file with accurate measurements?

A parameterization is a UV map, associating 2D coordinates to 3D points, and such coordinates always lie between (0,0) and (1,1). That's why you get a 1mm-by-1mm result. You could compare a 3D edge length with its 2D counterpart in the map and scale your 2D model by that factor; averaging the ratio over several edges would be a bit more precise.
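That rescaling idea can be sketched as follows; this is a minimal sketch, and `meanScaleFactor` and the paired edge lists are illustrative names, not CGAL API:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };
struct Vec2 { double u, v; };

double length3(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}
double length2(const Vec2& a, const Vec2& b) {
    return std::sqrt((a.u-b.u)*(a.u-b.u) + (a.v-b.v)*(a.v-b.v));
}

// Average the 3D/2D length ratio over a set of corresponding edges;
// multiplying every UV coordinate by this factor restores real-world scale.
double meanScaleFactor(const std::vector<std::pair<Vec3,Vec3>>& edges3,
                       const std::vector<std::pair<Vec2,Vec2>>& edges2) {
    double sum = 0.0;
    for (std::size_t i = 0; i < edges3.size(); ++i)
        sum += length3(edges3[i].first, edges3[i].second) /
               length2(edges2[i].first, edges2[i].second);
    return sum / edges3.size();
}
```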

CGAL's Least Squares Conformal Maps algorithm normalizes its output so that the 2D distance between the two constrained vertices is 1mm. This means that unless the two vertices you chose to constrain were exactly 1mm apart, the output surface will be scaled.
The CGAL 'As Rigid As Possible' Parameterization, on the other hand, can output a result that maintains the area. Increasing the λ parameter will improve the preservation of area between the input and output at the expense of maintaining the angles, whereas reducing the λ parameter will do the opposite.
Also note that increasing the number of iterations from the default will improve the output - especially if the unwrapped surface self-intersects.

Related

Circular grid of uniform density

How can I generate a circular grid, made of tiles with uniform area/whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in Cartesian coordinates, since for a tile at (i,j), I knew its neighboring tiles to be at (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL, and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices), but also seem more amenable to Cartesian coordinates. Perhaps I could render points, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows easy querying of a tile's neighborhood, and in a way that preserves a uniform density of tiles.
I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circle shape with a relatively low amount of warping. It might suffice for your problem.
You might want to have a look at this paper, it has been written to sample spheres but you might be able to adapt it for a circle.
An option is to use a polar grid with a constant angular step but varying radial steps, so that all cells have the same area, i.e. (R+dR)² - R² = const, which gives dR as a function of R.
You may want to reduce the anisotropy (some cells becoming very elongated) by changing the number of cells every now and then (e.g. by doubling it). This will introduce singularities in the mesh, i.e. cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash
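The constant-area relation (R+dR)² - R² = const means the k-th ring radius is r_k = sqrt(k·C). A minimal sketch of generating such radii (`equalAreaRadii` is an illustrative name):

```cpp
#include <cmath>
#include <vector>

// Ring radii for a polar grid in which every annulus (and hence, with a
// constant angular step, every cell) has the same area:
// r_k^2 - r_{k-1}^2 = C  =>  r_k = sqrt(k * C).
std::vector<double> equalAreaRadii(double outerRadius, int rings) {
    const double C = outerRadius * outerRadius / rings;
    std::vector<double> r(rings + 1);
    for (int k = 0; k <= rings; ++k)
        r[k] = std::sqrt(k * C);
    return r;
}
```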

Normal of point via its location on STL mesh model

Can someone tell me the best way to estimate the normal at a point on CAD STL geometry?
This is not exactly a question on code, but rather about efficiency and approach.
I have used an approach in which I compare the point whose normal needs to be estimated with all the triangles in the mesh, checking whether it lies inside each triangle using the barycentric coordinate test. (If each barycentric coordinate lies between 0 and 1, the point lies inside.) This post explains it:
https://math.stackexchange.com/questions/4322/check-whether-a-point-is-within-a-3d-triangle
Then I compute the normal of that triangle to get the point normal.
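The barycentric inside-test described above can be sketched as follows (a sketch assuming the query point already lies in the triangle's plane; names are illustrative):

```cpp
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Express p in the edge basis (b-a, c-a); p is inside the triangle when
// both barycentric weights are >= 0 and their sum is <= 1.
bool insideTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c) {
    Vec3 v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
    double d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    double d20 = dot(v2, v0), d21 = dot(v2, v1);
    double denom = d00 * d11 - d01 * d01;
    double v = (d11 * d20 - d01 * d21) / denom;
    double w = (d00 * d21 - d01 * d20) / denom;
    return v >= 0.0 && w >= 0.0 && v + w <= 1.0;
}
```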
The problem with my approach is that if I have some 1000 points, and the mesh has, say, 500 triangles, that means doing some 500 × 1000 checks. This takes a lot of time.
Is there an efficient data structure or approach I could use, to pinpoint the right triangle? Or a library that could get the work done?
A relatively easy solution is to use a grid: decompose the space into a 3D array of voxels, and for every voxel keep a list of the triangles that interfere with it.
By interfere, I mean that there is a nonempty intersection between the voxel and the bounding box of the triangle. (When you know the bounding box, it is straightforward to tell which voxels it covers.)
When you want to test a point, find the voxel it belongs to and compare against that voxel's list of triangles. You will achieve a speedup of roughly N/M, where N is the total number of triangles and M is the average number of triangles per voxel.
The voxel size should be chosen carefully: too small results in too big a data structure; too large makes the method ineffective. If possible, adjust so that there are "a few" triangles per voxel. (Use the average triangle size, e.g. the square root of twice the triangle area, as a starting value.)
For better efficiency, you can compute the exact intersections between the triangles and the voxels, using a 3D polygon clipping algorithm (rather than a mere bounding box test), but this is more complex to implement.
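The grid described above can be sketched as follows; this is an illustrative sketch (cubic grid, bounding-box interference only), not a tuned implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Uniform voxel grid: each cell keeps the indices of the triangles whose
// axis-aligned bounding box overlaps it.
struct VoxelGrid {
    Vec3 origin;
    double cell;   // voxel edge length
    int n;         // voxels per axis
    std::vector<std::vector<int>> buckets;

    VoxelGrid(Vec3 o, double cellSize, int voxelsPerAxis)
        : origin(o), cell(cellSize), n(voxelsPerAxis), buckets(n * n * n) {}

    int coord(double v, double o) const {
        int i = static_cast<int>(std::floor((v - o) / cell));
        return std::max(0, std::min(i, n - 1));   // clamp to the grid
    }
    std::vector<int>& bucket(int i, int j, int k) { return buckets[(k * n + j) * n + i]; }

    // Register a triangle in every voxel its bounding box covers.
    void insert(int idx, const Triangle& t) {
        int i0 = coord(std::min({t.a.x, t.b.x, t.c.x}), origin.x);
        int i1 = coord(std::max({t.a.x, t.b.x, t.c.x}), origin.x);
        int j0 = coord(std::min({t.a.y, t.b.y, t.c.y}), origin.y);
        int j1 = coord(std::max({t.a.y, t.b.y, t.c.y}), origin.y);
        int k0 = coord(std::min({t.a.z, t.b.z, t.c.z}), origin.z);
        int k1 = coord(std::max({t.a.z, t.b.z, t.c.z}), origin.z);
        for (int k = k0; k <= k1; ++k)
            for (int j = j0; j <= j1; ++j)
                for (int i = i0; i <= i1; ++i)
                    bucket(i, j, k).push_back(idx);
    }

    // Candidate triangles for a query point: only its own voxel's list.
    const std::vector<int>& candidates(const Vec3& p) {
        return bucket(coord(p.x, origin.x), coord(p.y, origin.y), coord(p.z, origin.z));
    }
};
```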

Interpolate color between voxels

I have a 3D texture containing voxels and I am ray tracing; every time I hit a voxel I display its color. The result is nice, but you can clearly see the different blocks being separated from one another. I would like a smooth color transition from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel, I am not sure which neighbouring voxels to extract the colors from, because I don't know whether the voxel is part of a wall parallel to some axis, a floor, or an isolated part of the scene. Ideally I would have to fetch, for every voxel, its 26 neighbouring voxels, but that can be quite expensive. Is there any fast and approximate solution for this?
PS: I noticed that in Minecraft there are smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?
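One common, cheap approximation (a sketch, not something from this thread) is trilinear interpolation: only the 8 voxels whose centres surround the sample point are needed, not all 26 neighbours. `voxelAt` is a placeholder for the 3D-texture lookup, applied here to a single scalar channel:

```cpp
#include <cmath>

// Trilinear interpolation of a scalar voxel channel at a fractional
// position: blend the 8 surrounding samples along x, then y, then z.
double trilinear(double x, double y, double z,
                 double (*voxelAt)(int, int, int)) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    double fx = x - x0, fy = y - y0, fz = z - z0;
    double c00 = voxelAt(x0, y0,   z0  ) * (1 - fx) + voxelAt(x0+1, y0,   z0  ) * fx;
    double c10 = voxelAt(x0, y0+1, z0  ) * (1 - fx) + voxelAt(x0+1, y0+1, z0  ) * fx;
    double c01 = voxelAt(x0, y0,   z0+1) * (1 - fx) + voxelAt(x0+1, y0,   z0+1) * fx;
    double c11 = voxelAt(x0, y0+1, z0+1) * (1 - fx) + voxelAt(x0+1, y0+1, z0+1) * fx;
    double c0 = c00 * (1 - fy) + c10 * fy;
    double c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```

Hardware can also do this for free: sampling the 3D texture with linear filtering gives the same result per color channel.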

Math Behind Flash Vector Graphics?

I've been searching about vector graphics and Flash for quite some time, but I haven't really found what I was looking for. Can anyone tell me exactly what area of mathematics is required for building vector images in 3D space? Is this just vector math? I saw some C++ libraries for it, but I wasn't sure if they dealt with the sort of vectors meant for smaller file sizes, as in Flash images. Thanks in advance.
If you want to do something from scratch (there are plenty of open-source libraries out there if you don't), keep in mind that "vector graphics" (different from the idea of a vector in 3D space) are typically based on parametric curves such as Bezier curves, which are essentially third-degree polynomials in x, y, and/or z, parameterized by a value t that goes from 0 to 1. Projecting the texture-map image you create with those curves (i.e., the so-called "vector graphics" image) onto a triangle polygon via UV coordinates then involves some interpolation, which is fairly straightforward linear algebra: you use the barycentric coordinates of the 3D point on the surface of the triangle to calculate the UV point to look up in the texture.
So essentially the steps are:
1) Create the parametric-curve-based image (i.e., the "vector graphic") and make a texture map out of it.
2) That texture map will have UV coordinates.
3) When you rasterize the 3D triangle polygon, you will get a barycentric coordinate on the surface of the triangle from the actual 3D points of the triangle polygon. Those points should also have UV coordinates assigned to them.
4) Use the barycentric coordinates to calculate the UV coordinate on the texture map.
5) Once you get that color from the texture map, shade the triangle (i.e., calculate lighting, etc., if that's what you're doing, or just store that pixel color if there is no lighting).
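Step 4 above can be sketched as follows (a minimal sketch; `interpolateUV` is an illustrative name):

```cpp
struct Vec2 { double u, v; };

// Given the barycentric weights (w0, w1, w2) produced while rasterizing a
// triangle, blend the UV coordinates assigned to its three corners to get
// the UV point to look up in the texture map.
Vec2 interpolateUV(double w0, double w1, double w2,
                   Vec2 uv0, Vec2 uv1, Vec2 uv2) {
    return { w0 * uv0.u + w1 * uv1.u + w2 * uv2.u,
             w0 * uv0.v + w1 * uv1.v + w2 * uv2.v };
}
```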
Please note I haven't gotten into antialiasing; that's a completely different beast. The best thing, if you don't know what you're doing there, is to simply brute-force antialias through supersampling (i.e., render a really big image and then average pixels to shrink it back to the desired size).
If you've taken multivariable calculus, the concepts behind parametric curves and surfaces should be familiar, and a basic understanding of linear algebra would be necessary in order to work with barycentric coordinates and linear interpolation from 3D vectors.

Marching Cubes, voxels, need a bit of suggestions

I'm trying to construct a proper destructible terrain, just for research purposes.
Everything went fine, but the resolution does not satisfy me.
I have seen a lot of examples of how people implement the MC algorithm, but most of them, as far as I understand, use functions to triangulate the final mesh, which is not appropriate for me.
I will briefly explain how I construct my terrain, and maybe one of you can suggest how to improve it, or how to increase the resolution of the final terrain.
1) Precalculating MC triangles.
I run a simple loop through the MC lookup tables for each case (0-255) and calculate the triangles in the range [0,0,0] - [1,1,1].
No problems here.
2) Terrain
I have terrain class, which stores my voxels.
In general, it looks like this:
int size = 32;//Size of each axis.
unsigned char *voxels = new unsigned char[(size * size * size)/8];
So each axis is 32 units long, but I store voxel information per bit:
if a bit is turned on (1), there is something there, and something should be drawn.
I have couple of functions:
TurnOn(x,y,z);
TurnOff(x,y,z);
to turn location of voxel on or off. (Helps to work with bits).
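The bit-per-voxel storage and the TurnOn/TurnOff helpers described above could look roughly like this (a sketch, using std::vector instead of a raw new[]):

```cpp
#include <cstddef>
#include <vector>

// One bit per voxel, eight voxels per byte: (size*size*size)/8 bytes total.
class VoxelBits {
    int size;
    std::vector<unsigned char> bits;
    std::size_t index(int x, int y, int z) const {
        return x + static_cast<std::size_t>(size) * (y + static_cast<std::size_t>(size) * z);
    }
public:
    explicit VoxelBits(int s) : size(s), bits((s * s * s + 7) / 8, 0) {}
    void TurnOn (int x, int y, int z) { bits[index(x,y,z) / 8] |=  (1u << (index(x,y,z) % 8)); }
    void TurnOff(int x, int y, int z) { bits[index(x,y,z) / 8] &= ~(1u << (index(x,y,z) % 8)); }
    bool IsOn   (int x, int y, int z) const { return (bits[index(x,y,z) / 8] >> (index(x,y,z) % 8)) & 1; }
};
```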
Once the terrain is allocated, I run Perlin noise and turn bits on or off.
My terrain class has one more function, to extract Marching Cubes case number (0-255) from x,y,z location:
unsigned char GetCaseNumber(int x, int y, int z);
by determining whether the neighbours of that voxel are turned on or off.
No problems here.
3) Rendering part
I loop over each axis, extract the case number, fetch the precalculated triangles for that case, translate them to the (x,y,z) coordinates, and draw those triangles.
No problems here.
So result looks like this:
But as you can see, at any single location the resolution is not comparable to, for example, this:
(source: angelfire.com)
I have seen in MC examples that people use something called "iso values", which I don't understand.
Any suggestions on how to improve my work, or on what iso values are and how to implement them on a uniform grid, would be truly lovely.
The problem is that your voxels are a binary mask (just on or off).
This is great for the "default" marching cubes algorithm, but it does mean you get sharp edges in your mesh.
The smooth example is probably generated from smooth scalar data.
Imagine that your data varies smoothly between 0 and 1.0, and you set your threshold to 0.5. Now, after you detect which configuration a given cube has, you look at all the vertices generated.
Say you have a vertex on an edge between two voxels, one with value 0.4 and the other 0.7. Then you move the vertex to the position where interpolating between 0.4 and 0.7 would give exactly 0.5 (the threshold). So it will be closer to the 0.4 voxel.
This way, each vertex is exactly on the interpolated iso surface and you will generate much smoother triangles.
But it does require that your input voxels are scalar (and vary smoothly). If your voxels are bi-level (all either 0 or 1), this will produce the same triangles as before.
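The edge interpolation described above can be sketched as follows (`isoVertex` is an illustrative name):

```cpp
struct Vec3 { double x, y, z; };

// Slide a marching-cubes vertex along the edge between two cube corners so
// that it sits exactly where the interpolated scalar field equals `iso`.
// With corner values 0.4 and 0.7 and iso = 0.5, t = 1/3: the vertex lands
// closer to the 0.4 corner.
Vec3 isoVertex(Vec3 p0, double v0, Vec3 p1, double v1, double iso) {
    double t = (iso - v0) / (v1 - v0);
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}
```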
Another idea (not the answer to your question but perhaps useful):
To get smoother rendering without mathematical correctness, it could be worthwhile to compute an averaged normal vector for each vertex and use that normal for every triangle connected to it. This will hide the sharp edges.
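That averaged-normal idea can be sketched as follows (a sketch; the averaging is area-weighted because the cross product is left unnormalized before accumulation):

```cpp
#include <cmath>
#include <cstddef>
#include <initializer_list>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Accumulate each triangle's face normal into its three vertices, then
// normalize, yielding one smoothed normal per vertex.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<int>& tris) { // 3 indices per triangle
    std::vector<Vec3> n(verts.size(), {0, 0, 0});
    for (std::size_t t = 0; t + 2 < tris.size(); t += 3) {
        int i = tris[t], j = tris[t+1], k = tris[t+2];
        Vec3 fn = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]));
        for (int idx : {i, j, k}) {
            n[idx].x += fn.x; n[idx].y += fn.y; n[idx].z += fn.z;
        }
    }
    for (std::size_t v = 0; v < n.size(); ++v) {
        double len = std::sqrt(n[v].x*n[v].x + n[v].y*n[v].y + n[v].z*n[v].z);
        if (len > 0) { n[v].x /= len; n[v].y /= len; n[v].z /= len; }
    }
    return n;
}
```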