Circular grid of uniform density - C++

How can I generate a circular grid made of tiles with uniform area, i.e. whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in Cartesian coordinates, since for a tile at (i,j) I knew the positions of its neighboring tiles to be (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
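For reference, a minimal sketch of that interior-cell stencil (boundary handling omitted):

    #include <vector>

    // 5-point Laplacian stencil (up to the 1/h^2 factor) for an interior
    // cell (i, j) of a W x H grid stored row-major; boundary cells need
    // separate handling.
    float laplacian(const std::vector<float>& f, int W, int i, int j) {
        return f[j * W + (i - 1)] + f[j * W + (i + 1)]
             + f[(j - 1) * W + i] + f[(j + 1) * W + i]
             - 4.0f * f[j * W + i];
    }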
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices), but they seem more amenable to Cartesian coordinates. Perhaps I could render points, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows for easy querying of a tile's neighborhood, and doing so in a way that preserves a uniform density of tiles.

I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circle shape with a relatively low amount of warping. It might suffice for your problem.
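One well-known mapping of this kind (possibly not the exact technique meant above) is the elliptical grid mapping; a minimal sketch:

    #include <cmath>

    // Elliptical grid mapping: sends the square [-1,1]^2 onto the unit disc
    // with relatively low distortion (one of several known square-to-disc maps).
    void squareToDisc(double x, double y, double& u, double& v) {
        u = x * std::sqrt(1.0 - y * y / 2.0);
        v = y * std::sqrt(1.0 - x * x / 2.0);
    }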

You might want to have a look at this paper; it was written for sampling spheres, but you might be able to adapt it to a circle.

One option is to use a polar grid with a constant angular step but varying radial steps, so that all cells have the same area, i.e. (R+dR)² - R² = const, which gives dR as a function of R.
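For instance, a minimal sketch deriving the ring radii from that equal-area condition (equalAreaRadii is an illustrative helper):

    #include <cmath>
    #include <vector>

    // Radii bounding N equal-area rings: ring k spans [r[k], r[k+1]].
    // Equal area means r[k+1]^2 - r[k]^2 = const, hence r[k] = rMax * sqrt(k/N).
    std::vector<double> equalAreaRadii(double rMax, int nRings) {
        std::vector<double> r(nRings + 1);
        for (int k = 0; k <= nRings; ++k)
            r[k] = rMax * std::sqrt(static_cast<double>(k) / nRings);
        return r;
    }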
You may want to reduce the anisotropy (some cells becoming very elongated) by changing the number of cells every now and then (e.g. by doubling it). This will introduce singularities in the mesh, i.e. cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash

Related

CGAL - Surface mesh parameterisation

I have been using the LSCM parameterizer to unwrap a mesh. I would like to obtain a 2D planar model with accurate measurements, such that if you made a paper cutout you could wrap it back onto the original model physically.
It seems that SMP::parameterize() scales the resulting OFF down to 1mm by 1mm. How do I get an OFF file with accurate measurements?
A parameterization is a UV map, associating 2D coordinates to 3D points, and such coordinates are always between (0,0) and (1,1). That's why you get a 1mm by 1mm result. I guess you could compare a 3D edge length with its 2D version in the map and scale your 2D model by this factor. Maybe average over several edges to be a bit more precise.
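As a sketch of that idea (independent of the CGAL API; the types and names here are illustrative):

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double u, v; };

    double length3(Vec3 a, Vec3 b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x)
                       + (a.y - b.y) * (a.y - b.y)
                       + (a.z - b.z) * (a.z - b.z));
    }
    double length2(Vec2 a, Vec2 b) {
        return std::sqrt((a.u - b.u) * (a.u - b.u) + (a.v - b.v) * (a.v - b.v));
    }

    // Scale factor restoring real-world units: the ratio of a 3D edge length
    // to the length of the same edge in the UV map. Multiply every UV
    // coordinate by it (and average over several edges for robustness).
    double uvScaleFactor(Vec3 a3, Vec3 b3, Vec2 a2, Vec2 b2) {
        return length3(a3, b3) / length2(a2, b2);
    }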
CGAL's Least Squares Conformal Maps algorithm outputs such that the 2D distance between the two constrained vertices is 1mm. This means that unless the two vertices you chose to constrain were exactly 1mm apart, the output surface will be scaled.
The CGAL 'As Rigid As Possible' Parameterization, on the other hand, can output a result that maintains the area. Increasing the λ parameter will improve the preservation of area between the input and output at the expense of maintaining the angles, whereas reducing the λ parameter will do the opposite.
Also note that increasing the number of iterations from the default will improve the output - especially if the unwrapped surface self-intersects.

Normal of point via its location on STL mesh model

Can someone tell me the best way to estimate the normal at a point on CAD STL geometry?
This is not exactly a question on code, but rather about efficiency and approach.
I have used an approach in which I compare the point whose normal needs to be estimated with all the triangles in the mesh and check to see if it lies inside the triangle using the barycentric coordinates test. (If the value of each barycentric coordinate lies between 0 and 1, the point lies inside.) This post explains it
https://math.stackexchange.com/questions/4322/check-whether-a-point-is-within-a-3d-triangle
Then I compute the normal of that triangle to get the point normal.
The problem with my approach is that if I have some 1000 points, and the mesh has, say, 500 triangles, that would mean doing some 500 × 1000 checks. This takes a lot of time.
Is there an efficient data structure or approach I could use, to pinpoint the right triangle? Or a library that could get the work done?
A relatively easy solution is by using a grid: decompose the space in a 3D array of voxels, and for every voxel keep a list of the triangles that interfere with it.
By "interfere" I mean that there is a nonempty intersection between the voxel and the bounding box of the triangle. (When you know the bounding box, it is straightforward to tell which voxels it covers.)
When you want to test a point, find the voxel it belongs to and compare against that voxel's list of triangles. You will achieve a speedup of roughly N/M, where N is the total number of triangles and M is the average number of triangles per voxel.
The voxel size should be chosen carefully: too small results in an overly large data structure; too large makes the method ineffective. If possible, aim for "a few" triangles per voxel. (The average triangle size - the square root of twice the area - is a reasonable starting value.)
For better efficiency, you can compute the exact intersections between the triangles and the voxels, using a 3D polygon clipping algorithm (rather than a mere bounding box test), but this is more complex to implement.
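A minimal sketch of the bounding-box variant described above (types and names are illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    // Each voxel keeps the indices of the triangles whose bounding box
    // overlaps it.
    class VoxelGrid {
    public:
        VoxelGrid(Vec3 minCorner, double cellSize, int nx, int ny, int nz)
            : origin(minCorner), cell(cellSize), nx(nx), ny(ny), nz(nz),
              cells(static_cast<std::size_t>(nx) * ny * nz) {}

        void insert(const std::vector<Triangle>& tris) {
            for (int t = 0; t < static_cast<int>(tris.size()); ++t) {
                int i0, j0, k0, i1, j1, k1;
                cellOf(lower(tris[t]), i0, j0, k0);   // voxel of the bbox min corner
                cellOf(upper(tris[t]), i1, j1, k1);   // voxel of the bbox max corner
                for (int k = k0; k <= k1; ++k)
                    for (int j = j0; j <= j1; ++j)
                        for (int i = i0; i <= i1; ++i)
                            cells[index(i, j, k)].push_back(t);
            }
        }

        // Candidate triangles for a query point: only those listed in its voxel.
        const std::vector<int>& candidates(Vec3 p) const {
            int i, j, k;
            cellOf(p, i, j, k);
            return cells[index(i, j, k)];
        }

    private:
        Vec3 origin; double cell; int nx, ny, nz;
        std::vector<std::vector<int>> cells;

        std::size_t index(int i, int j, int k) const {
            return (static_cast<std::size_t>(k) * ny + j) * nx + i;
        }
        void cellOf(Vec3 p, int& i, int& j, int& k) const {
            i = clampTo(static_cast<int>((p.x - origin.x) / cell), nx);
            j = clampTo(static_cast<int>((p.y - origin.y) / cell), ny);
            k = clampTo(static_cast<int>((p.z - origin.z) / cell), nz);
        }
        static int clampTo(int v, int n) { return std::max(0, std::min(v, n - 1)); }
        static Vec3 lower(const Triangle& t) {
            return { std::min({t.a.x, t.b.x, t.c.x}),
                     std::min({t.a.y, t.b.y, t.c.y}),
                     std::min({t.a.z, t.b.z, t.c.z}) };
        }
        static Vec3 upper(const Triangle& t) {
            return { std::max({t.a.x, t.b.x, t.c.x}),
                     std::max({t.a.y, t.b.y, t.c.y}),
                     std::max({t.a.z, t.b.z, t.c.z}) };
        }
    };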

Interpolate color between voxels

I have a 3D texture containing voxels and I am ray tracing; every time I hit a voxel I display its color. The result is nice, but you can clearly see the different blocks separated from one another. I would like the color to blend smoothly from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel, I am not sure which neighbouring voxels to extract colors from, because I don't know whether the voxel is part of a wall parallel to some axis, a floor, or an isolated part of the scene. Ideally I would have to fetch, for every voxel, its 26 neighbours, but that can be quite expensive. Is there any fast, approximate solution for this?
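For reference, the plain trilinear blend I had in mind only needs the 8 surrounding voxels (voxelColor is a hypothetical accessor for my data; hardware filtering on a 3D texture, e.g. GL_LINEAR, does this automatically):

    #include <cmath>

    struct Color { float r, g, b; };

    // Hypothetical accessor for the color stored at integer voxel (x, y, z).
    Color voxelColor(int x, int y, int z);

    Color lerp(Color a, Color b, float t) {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    // Trilinear blend of the 8 voxels surrounding a continuous sample point.
    Color sampleTrilinear(float x, float y, float z) {
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
        float fx = x - x0, fy = y - y0, fz = z - z0;
        Color c00 = lerp(voxelColor(x0, y0,     z0    ), voxelColor(x0 + 1, y0,     z0    ), fx);
        Color c10 = lerp(voxelColor(x0, y0 + 1, z0    ), voxelColor(x0 + 1, y0 + 1, z0    ), fx);
        Color c01 = lerp(voxelColor(x0, y0,     z0 + 1), voxelColor(x0 + 1, y0,     z0 + 1), fx);
        Color c11 = lerp(voxelColor(x0, y0 + 1, z0 + 1), voxelColor(x0 + 1, y0 + 1, z0 + 1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }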
PS: I notice that in Minecraft there are smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?

Math Behind Flash Vector Graphics?

I've been searching for vector graphics and Flash for quite some time, but I haven't really found what I was looking for. Can anyone tell me exactly what area of mathematics is required for building vector images in 3D space? Is this just vector math? I saw some C++ libraries for it, but I wasn't sure if it was the sort of vectors meant for smaller file sizes, like Flash images. Thanks in advance.
If you want to do something from scratch (there are plenty of open-source libraries out there if you don't), keep in mind that "vector graphics" (a different notion from a vector in 3D space) are typically based on parametric curves such as Bézier curves, which are essentially third-degree polynomials in x, y, and/or z, parameterized by a value t that goes from 0 to 1. Projecting the texture-map image you create with those curves (i.e., the so-called "vector graphics" image) onto a triangle polygon via UV coordinates involves some interpolation, which is fairly straightforward linear algebra: you use the barycentric coordinates of the 3D point on the triangle's surface to calculate the UV point to look up in the texture.
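For the curve part, a minimal sketch of evaluating a cubic Bézier at parameter t (not tied to any particular library):

    struct Point2 { double x, y; };

    // Cubic Bezier: B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3.
    Point2 cubicBezier(Point2 p0, Point2 p1, Point2 p2, Point2 p3, double t) {
        double u = 1.0 - t;
        double b0 = u * u * u;
        double b1 = 3.0 * u * u * t;
        double b2 = 3.0 * u * t * t;
        double b3 = t * t * t;
        return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                 b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
    }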
So essentially the steps are:
Create the parametric-curve-based image (i.e., the "vector graphic") and make a texture map out of it.
That texture map will have uv coordinates
When you rasterize the 3D triangle polygon, you will get a barycentric coordinate on the surface of the triangle from the actual 3D points of the triangle polygon. Those points of the polygon should also have UV coordinates assigned to them.
Use the barycentric coordinates to calculate the uv coordinate on the texture map.
When you get that color from the texture map, shade the triangle (i.e., calculate lighting, etc., if that's what you're doing, or just save that color for the pixel if there is no lighting).
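Putting steps 3-5 together, a minimal sketch of the UV interpolation (assuming you already have the barycentric weights):

    struct Vec2 { float u, v; };

    // Interpolate the UV coordinate at a surface point from the triangle's
    // vertex UVs, given barycentric weights (w0 + w1 + w2 == 1).
    Vec2 interpolateUV(Vec2 uv0, Vec2 uv1, Vec2 uv2, float w0, float w1, float w2) {
        return { w0 * uv0.u + w1 * uv1.u + w2 * uv2.u,
                 w0 * uv0.v + w1 * uv1.v + w2 * uv2.v };
    }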
Please note I haven't gotten into antialiasing; that's a completely different beast. If you don't know what you're doing there, the best approach is to brute-force it through supersampling (i.e., render a much larger image and then average pixels to shrink it back to the desired size).
If you've taken multivariable calculus, the concepts behind parametric curves and surfaces should be familiar, and a basic understanding of linear algebra would be necessary in order to work with barycentric coordinates and linear interpolation from 3D vectors.

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something similar to gluSphere. However, I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which makes the triangles small at the poles compared to their size at the equator. If such a formula generated triangles whose variation in size were minimized, that would be great.
Here is how to calculate the normals and the UV map.
Fortunately, there is an amazing trick for calculating the normals on a sphere. If you think about it, the normal on a sphere is nothing more than the direction from the centre of the sphere to that point! Furthermore, that means the normal literally equals the point, i.e. it's the same vector - just don't forget to normalise it.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly equal to the vertices?" At first glance you'd think that's impossible, that no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, i.e. a rectangle-style map projection. In that case, u and v are basically just the longitude and latitude of the point, normalised to [0, 1].
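Putting both together, a minimal sketch (assuming the sphere is centred at the origin; names are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // For a point p on a sphere centred at the origin: the normal is the
    // normalised position, and (u, v) come from longitude / latitude.
    void sphereNormalAndUV(Vec3 p, Vec3& normal, float& u, float& v) {
        const float PI = 3.14159265358979f;
        float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        normal = { p.x / len, p.y / len, p.z / len };  // on a unit sphere, normal == p
        u = 0.5f + std::atan2(normal.z, normal.x) / (2.0f * PI); // longitude -> [0, 1]
        v = 0.5f - std::asin(normal.y) / PI;                     // latitude  -> [0, 1]
    }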
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron, then apply multiple uniform subdivisions of its triangles, normalizing the resulting vertices' distances to the origin.
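A minimal sketch of the subdivision step (the icosahedron's base vertices are listed in the article above; types here are illustrative):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    Vec3 normalized(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    Vec3 midpoint(Vec3 a, Vec3 b) {
        return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
    }

    // Recursively split triangle (a, b, c) into four, pushing the resulting
    // unit-sphere triangles (three vertices each) into out.
    void subdivide(Vec3 a, Vec3 b, Vec3 c, int depth, std::vector<Vec3>& out) {
        if (depth == 0) {
            out.push_back(a); out.push_back(b); out.push_back(c);
            return;
        }
        // Edge midpoints, re-projected onto the unit sphere.
        Vec3 ab = normalized(midpoint(a, b));
        Vec3 bc = normalized(midpoint(b, c));
        Vec3 ca = normalized(midpoint(c, a));
        subdivide(a,  ab, ca, depth - 1, out);
        subdivide(ab, b,  bc, depth - 1, out);
        subdivide(ca, bc, c,  depth - 1, out);
        subdivide(ab, bc, ca, depth - 1, out);
    }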