Calculate vertex normals in triangulated geometry with edge detection - c++

Not a duplicate of "Most efficient algorithm to calculate vertex normals from set of triangles for Gouraud shading", as the edge-detection issue is not discussed there.
How can I computationally calculate a normal for every vertex in a triangulated geometry, to be used in a Gouraud shader for a nice display, while keeping track of edges? Is there a free, fast and performant standard solution for this?
I have been assigned the above-mentioned task to fix a routine that produces visible artefacts. The normals are input data for a simple Gouraud shader that should "smooth" the displayed geometry across coherent faces. The routine should also be able to find edges, so that they can be used by some other part of the software later on and are not smoothed over.
The data is read from .stl files that do not contain any normal information, so all face normals must be calculated using the triangles' coordinates.
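For reference, since this comes up below: a face normal from raw triangle coordinates is just the normalized cross product of two edge vectors (a minimal sketch, counter-clockwise winding assumed):

#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalized(Vec3 a) {
    double len = std::sqrt(dot(a, a));
    return {a.x / len, a.y / len, a.z / len};
}

// Face normal of triangle (a, b, c); orientation follows the winding order.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c) {
    return normalized(cross(b - a, c - a));
}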
This is how the geometry looks without interpolation:
This is what the interpolation algorithm does so far:
The rounded surfaces look quite good, but the interpolation fails badly at places where a detected edge next to a flat surface is not strong enough to trigger the edge-detection algorithm, yet not weak enough to be invisible. The consequence is a misplaced normal that is propagated through the whole triangle (or triangles).
I wonder if there is a standard solution for this, as the problem should occur quite often when working with this type of geometry. And even if there isn't, do any of you know the common pitfalls of this task and how to avoid them to get decent results?
Any help would be much appreciated!
Edit: On the algorithm:
The edge detection is not just about the angle between the triangle normals. Please consider the following example (the problem is essentially the same in 3D):
All vertices share the same angle, i.e. 30°. (The angles are not exactly correct, but you get the idea...) However, only the outer two of them should be recognised as edges, so there has to be another measure relevant to this question. So far, I have tried a triangle's
circumcircle radius
longest edge
GTS triangle quality measure
as measures that can modify the "minimum angle" at which two triangles are considered to share an edge. The longest-edge method looks most promising (although far from perfect), but I think there is still something I have overlooked...
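To make the idea concrete, here is a hypothetical combined test, reusing the Vec3 helpers from the snippet above. This is an illustration of mixing the angle criterion with the longest-edge measure, not the routine's actual code:

// n0, n1: unit normals of the two triangles sharing the edge.
// The base angle is scaled down for triangles with long edges, so a
// weak crease next to a large flat triangle is still caught.
bool isFeatureEdge(Vec3 n0, Vec3 n1, double longestEdge,
                   double referenceLength, double baseAngleRad) {
    double d = dot(n0, n1);
    if (d > 1.0) d = 1.0;      // clamp against rounding errors
    if (d < -1.0) d = -1.0;
    double dihedral = std::acos(d);   // 0 for coplanar triangles
    double scale = referenceLength / longestEdge;
    if (scale > 1.0) scale = 1.0;
    return dihedral > baseAngleRad * scale;
}

Vertices on a feature edge then get one normal per smooth side: accumulate face normals only within each smooth group instead of over the whole one-ring, so the crease is preserved while coherent faces still shade smoothly.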

There is a publicly available publication about this from Caltech, in the form of a technical report called "Discrete Differential-Geometry Operators for Triangulated 2-Manifolds".
The algorithms presented there offer
a unified derivation that ensures accuracy and tight error bounds, leading to simple formulae that are straightforward to implement.
In the report, the algorithms are used for curvature calculations, which involve an accurate computation of the mean curvature normal. This enables feature-edge detection: the authors use it for exactly that purpose on noisy meshes. The mean curvature normal can then also be used for shading.
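For orientation, the central formula of the report is the discrete mean curvature normal K(x_i) = 1/(2 A_i) * sum over neighbours j of (cot(alpha_ij) + cot(beta_ij)) (x_i - x_j), where alpha_ij and beta_ij are the angles opposite the edge (i, j) in its two incident triangles. Below is a self-contained sketch; note that I simplify A_i to one third of the incident triangle areas, while the report uses a more careful mixed Voronoi area, and that boundary handling is omitted:

#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Cotangent accumulation over all triangles: the angle at each corner
// weights the opposite edge, which over the whole mesh yields the
// (cot(alpha) + cot(beta)) factor per interior edge.
std::vector<Vec3> meanCurvatureNormals(
    const std::vector<Vec3>& pos,
    const std::vector<std::array<int, 3>>& tris)
{
    std::vector<Vec3> K(pos.size(), Vec3{0, 0, 0});
    std::vector<double> area(pos.size(), 0.0);
    for (const auto& t : tris) {
        for (int c = 0; c < 3; ++c) {
            int i = t[c], j = t[(c + 1) % 3], k = t[(c + 2) % 3];
            Vec3 u = pos[j] - pos[i], v = pos[k] - pos[i];
            double denom = length(cross(u, v));
            if (denom < 1e-12) continue;       // skip degenerate triangles
            double cot = dot(u, v) / denom;    // cot of the angle at i
            K[j] = K[j] + cot * (pos[j] - pos[k]);
            K[k] = K[k] + cot * (pos[k] - pos[j]);
        }
        Vec3 e1 = pos[t[1]] - pos[t[0]], e2 = pos[t[2]] - pos[t[0]];
        double third = length(cross(e1, e2)) / 6.0;   // triangle area / 3
        area[t[0]] += third; area[t[1]] += third; area[t[2]] += third;
    }
    for (std::size_t i = 0; i < K.size(); ++i)
        if (area[i] > 0.0) K[i] = (1.0 / (2.0 * area[i])) * K[i];
    return K;
}

The magnitude of K approximates twice the mean curvature, so thresholding it flags feature edges, and the normalized direction can serve as a shading normal on smooth regions.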

Related

Efficiently providing geometry for terrain physics

I have been researching different approaches to terrain systems in game engines for a bit now, trying to familiarize myself with the work. Much of it seems straightforward, but I am getting hung up on a single detail.
For performance reasons, many terrain solutions use shaders to generate part or all of the geometry, such as vertex shaders to generate positions or tessellation shaders for LoD. At first I figured those approaches were exclusively for renderers that weren't concerned with physics simulations.
The reason I say that is that, as I understand shaders at the moment, the results of a shader computation are generally discarded at the end of the frame. So if you rely heavily on shaders, the geometry information will be gone before you can access it and send it off to another system (such as physics running on the CPU).
So, am I wrong about shaders? Can you store the results of their geometry generation to be accessed by other systems? Or am I forced to keep the terrain geometry on the CPU and leave the shaders to the other details?
Shaders
You understand that part of shaders correctly: after a frame, the data is stored as the final composed image in the backbuffer.
BUT: using transform feedback, it is possible to capture transformed geometry into a vertex buffer and reuse it. Transform feedback happens AFTER the vertex/geometry/tessellation shaders, so you could use the geometry shader to generate the terrain (or the visible parts of it) once, push it through transform feedback, and store it.
This way you could potentially use CPU collision detection with your terrain! You can even combine this with tessellation.
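As a minimal sketch of that capture path (GL 3.0+ entry points; program, maxBytes and vertexCount are assumed to exist, "outPosition" is a placeholder varying name, and error handling is omitted):

// One-time setup: tell the linker which varyings to capture.
const GLchar* varyings[] = { "outPosition" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);  // takes effect on the next link

// Buffer that receives the captured vertices.
GLuint tfBuffer = 0;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, maxBytes, nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

// Capture pass: generate the terrain once, without rasterizing it.
glEnable(GL_RASTERIZER_DISCARD);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // feeds the geometry shader
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the captured triangles back for CPU-side collision detection
// (alternatively, keep them on the GPU and draw from tfBuffer).
std::vector<float> captured(maxBytes / sizeof(float));
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, maxBytes, captured.data());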
You will love this: A Framework for Real-Time, Deformable Terrain.
On LOD and tessellation: LOD is not a prerequisite of tessellation. You can use tessellation for more sophisticated effects, such as adding detail by recursively subdividing rough geometry. Linking it with LOD is simply a very good optimization that avoids keeping LOD mesh levels in RAM, since you just have your "base mesh" and subdivide it (although this will be an unsatisfying optimization imho).
Now some deeper info on GPU and CPU exclusive terrain.
GPU Generated Terrain (Procedural)
As written in the NVidia article Generating Complex Procedural Terrains Using the GPU:
1.2 Marching Cubes and the Density Function

Conceptually, the terrain surface can be completely described by a single function, called the density function. For any point in 3D space (x, y, z), the function produces a single floating-point value. These values vary over space—sometimes positive, sometimes negative. If the value is positive, then that point in space is inside the solid terrain. If the value is negative, then that point is located in empty space (such as air or water). The boundary between positive and negative values—where the density value is zero—is the surface of the terrain. It is along this surface that we wish to construct a polygonal mesh.
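In code, such a density function can be as small as this (a toy sketch; noise3() is a placeholder for whatever noise you use, and the constants are arbitrary):

float noise3(float x, float y, float z);  // assumed to exist (Perlin, simplex, ...)

// Positive below ground, negative in air, zero exactly at the surface.
float density(float x, float y, float z) {
    float ground = 0.0f;                                   // base height
    ground += 8.0f * noise3(0.05f * x, 0.0f, 0.05f * z);   // broad hills
    ground += 1.5f * noise3(0.3f  * x, 0.0f, 0.3f  * z);   // fine detail
    return ground - y;   // > 0 inside terrain, < 0 in empty space
}

// This is also the whole trick behind density-based collision:
bool isInsideTerrain(float x, float y, float z) {
    return density(x, y, z) > 0.0f;
}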
Using Shaders
The density function used for generating the terrain must also be available to the collision-detection shader, and you have to fill an output buffer containing the collision locations, if any...
CUDA
See: https://www.youtube.com/watch?v=kYzxf3ugcg0
Here someone used CUDA, based on the NVidia article, and the implication is the same: to perform collision detection in CUDA, the density function must be shared as well.
This will, however, make the transform feedback technique a little harder to implement.
Both shaders and CUDA imply resampling/recalculating the density at at least one location, just for the collision detection of a single object.
CPU Terrain
Usually this implies a set of geometry stored in RAM as vertex/index buffer pairs, which is regularly processed by the shader pipeline. Since the data is available here, you will most likely also keep a collision mesh, a simplified representation of your terrain against which you perform collision tests.
Alternatively, you could give your terrain a set of colliders marking the allowed paths, which imho is what the early PS1 Final Fantasy games did (they don't really have terrain in the sense we understand it today).
This short answer is neither exhaustively deep nor complete; I just tried to give you some insight into concepts used in dozens of solutions.
Some more reading: http://prideout.net/blog/?tag=opengl-transform-feedback.

GLSL Tessellated Environment - Gaps Between Patches

So I have been writing a program that uses a tessellation shader and a height map to draw an environment. It starts out as a 32x32 plane, and when it gets more tessellated the heights of each square vertex are determined by the height map.
I want it so that the closer a patch is to the camera, the more tessellated it gets. However, I have discovered that this causes gaps between patches: if a patch is more tessellated than the one next to it, the different resolutions cause gaps.
Here, a picture is worth a thousand words:
If two patches have the same resolution then there are no gaps. How can I get around this problem? I'm completely stuck.
The UV coordinates along the edges need to vary uniformly for this to be seamless. At the same level of subdivision there are some pretty reliable invariance guarantees, but this rarely holds when the two edges are subdivided at different rates.
Technically, what you have is known as a T-junction. It occurs because two surfaces that are supposed to share an edge actually diverge slightly: the insertion of a new displaced vertex creates two primitives, neither of which shares an edge with the one primitive belonging to the adjacent patch.
You need to make the outer tessellation level identical for patches that share edges*:
   
   *As shown in this diagram from GPU Gems 2
I am sure you are already familiar with the continuous level-of-detail problem; searching the web for it turns up several methods that solve the gap problem. One such site is here, from which I copied the picture below.
One interesting thing in your case is that the tessellation does not seem to increase/decrease in a 2^n fashion. So maybe adding faces to the four boundaries of each terrain block, acting as curtains, is the only feasible solution for your case.
If you look at the picture below, you'll see the boundaries have vertical faces.
The side effect is that if the gap is big enough, it might be seen as a cliff. You'll need to adjust the tessellation between detail levels to minimize this.
Here is what I ended up doing:
I realized that only the outer tessellation levels between two patches have to match; the inner tessellation levels can be whatever they want. A TCS in GLSL has you fill out the following to determine how much tessellation is done:
gl_TessLevelInner[0]
gl_TessLevelInner[1]
gl_TessLevelOuter[0]
gl_TessLevelOuter[1]
gl_TessLevelOuter[2]
gl_TessLevelOuter[3]
The four gl_TessLevelOuter values represent the tessellation levels of the four sides of the patch (the square). You are passed the locations of the square's four corners. To determine the inner tessellation levels, I average these four locations and take the distance of the result from the camera. For each edge, which is controlled by one outer tessellation level, I average the two corner locations it connects and take that point's distance from the camera. Since the adjacent patch shares those corners, it computes the same value for the shared edge, so the levels match.
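A sketch of that computation in C++ (glm assumed for the vector math; the distance-to-level mapping and the edge-to-index assignment are illustrative, since the actual computation lives in the TCS):

#include <glm/glm.hpp>
using glm::vec3;

// Hypothetical distance-to-level mapping; tune the constants to taste.
float tessFromDistance(const vec3& eye, const vec3& p) {
    return glm::clamp(64.0f / glm::distance(eye, p), 1.0f, 64.0f);
}

struct Levels { float inner; float outer[4]; };

Levels patchLevels(const vec3& eye, const vec3& c0, const vec3& c1,
                   const vec3& c2, const vec3& c3) {
    Levels l;
    // Inner level: patch centre; free to differ between patches.
    l.inner = tessFromDistance(eye, (c0 + c1 + c2 + c3) * 0.25f);
    // Outer levels: one per edge, from the edge midpoint. The patch on
    // the other side of an edge sees the same two corners, computes the
    // identical value, and so the resolutions match along the seam.
    l.outer[0] = tessFromDistance(eye, (c3 + c0) * 0.5f);
    l.outer[1] = tessFromDistance(eye, (c0 + c1) * 0.5f);
    l.outer[2] = tessFromDistance(eye, (c1 + c2) * 0.5f);
    l.outer[3] = tessFromDistance(eye, (c2 + c3) * 0.5f);
    return l;  // in the TCS: gl_TessLevelInner[0..1], gl_TessLevelOuter[0..3]
}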

Programming my own triangle rasterization for OpenGL?

I am trying to render rounded triangles to increase performance. To illustrate what I mean, see the picture below:
I tried this on the CPU; now, is there a way to move this algorithm to the GPU? Can I change the code of the method that calls the fragment shader?
By the way, if I can do it, what programming language do I need to rewrite it in?
I am using an OpenGL 2.1 GPU with just 20-30 GB/s of memory bandwidth.
Read the paper Resolution Independent Curve Rendering using Programmable Graphics Hardware by Charles Loop and Jim Blinn.
Short version: assuming you have an efficient inside/outside test for your curve, render the enclosing hull shape as triangle(s) and use a fragment shader to discard the pixels outside the curve.
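The core of Loop and Blinn's method is a per-pixel implicit test. For a quadratic Bezier segment, assign the three control points the coordinates (0,0), (0.5,0) and (1,1); after interpolation across the hull triangle, a fragment at (u, v) lies on the filled side exactly when u^2 - v <= 0. Sketched in C++ for clarity; in the fragment shader you discard when the test fails:

bool insideQuadratic(float u, float v) {
    // Implicit form of the curve: f(u, v) = u*u - v; f == 0 on the curve.
    return u * u - v <= 0.0f;
}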
I second Aeluned's concern that transferring the algorithm to the GPU won't automatically make it faster.
I'm not sure exactly what you're up to, but it seems a bit dubious. You can actually end up hurting performance trying to do some of these custom calculations in a shader to render a circle or ellipse.
Modern GPU hardware can push billions of triangles a second. You're probably splitting hairs here.
In any case, if you want to 'fake' the geometry, this may be of interest to you: https://alfonse.bitbucket.io/oldtut/Illumination/Tutorial%2013.html
Well, on OpenGL 2.1 you do not have geometry shaders (they are 3.2+), so you can forget about generating the shapes on the GPU.
You cannot improve the rasterization performance of convex shapes with your curved triangles:
the complexity of rasterizing any convex polygon is the same as for any triangle of the same area;
the difference is only in:
the number of vertices passed and the memory transfer (with a geometry shader this would be better with your rounded triangles), and
the number of boundary lines to be rasterized for filling (this will be worse with your rounded triangles, since more triangles need to be joined instead of a single polygonal shape).
So it is not a good idea to implement this for better performance in your case.
The only thing I can think of to use this for is to ease the manual generation of shapes. In that case, just write a function glRoundedTriangle(...) which generates the correct vertices, colors, normals and texture coordinates from the given input parameters. What it would look like is unknown, because you did not specify the rounded triangle's geometry/shape or its input parameters (for example 3 points + 3 signed curve radii?).
To improve performance in OpenGL, use VBOs/VAOs.
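For example, a minimal VBO path that already works on OpenGL 2.1 (VAOs require 3.0+ or an extension; verts and vertCount are assumed to hold your triangle data):

// Upload once...
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertCount * 3 * sizeof(float),
             verts, GL_STATIC_DRAW);

// ...then draw many times via the fixed-function path.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, nullptr);  // sourced from the bound VBO
glDrawArrays(GL_TRIANGLES, 0, vertCount);
glDisableClientState(GL_VERTEX_ARRAY);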

Marching Cubes, voxels, need a bit of suggestions

I'm trying to construct proper destructible terrain, just for research purposes.
Everything went fine, but the resolution does not satisfy me.
I have seen a lot of examples of how people implement the MC algorithm, but most of them, as far as I understand, use functions to triangulate the final mesh, which is not appropriate for me.
I will briefly explain how I construct my terrain, and maybe some of you can suggest how to improve it or increase the resolution of the final terrain.
1) Precalculating MC triangles.
I run a simple loop through the MC lookup tables for each case (0-255) and calculate the triangles in the range [0,0,0] - [1,1,1].
No problems here.
2) Terrain
I have terrain class, which stores my voxels.
In general, it looks like this:
int size = 32;//Size of each axis.
unsigned char *voxels = new unsigned char[(size * size * size)/8];
So each axis is 32 units long, but I store the voxel information per bit: if a bit is turned on (1), there is something there and it should be drawn.
I have a couple of functions:
TurnOn(x,y,z);
TurnOff(x,y,z);
to turn a voxel location on or off (this helps when working with bits).
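For illustration, one possible implementation of those bit accessors (the index layout, x fastest, is an assumption; the real code may differ):

#include <cstddef>

inline std::size_t bitIndex(int x, int y, int z, int size) {
    return (std::size_t)x + (std::size_t)size * ((std::size_t)y + (std::size_t)size * z);
}

void TurnOn(unsigned char* voxels, int x, int y, int z, int size) {
    std::size_t i = bitIndex(x, y, z, size);
    voxels[i >> 3] |= (unsigned char)(1u << (i & 7));
}

void TurnOff(unsigned char* voxels, int x, int y, int z, int size) {
    std::size_t i = bitIndex(x, y, z, size);
    voxels[i >> 3] &= (unsigned char)~(1u << (i & 7));
}

bool IsOn(const unsigned char* voxels, int x, int y, int z, int size) {
    std::size_t i = bitIndex(x, y, z, size);
    return (voxels[i >> 3] >> (i & 7)) & 1u;
}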
Once the terrain is allocated, I run Perlin noise and turn bits on or off.
My terrain class has one more function, which extracts the Marching Cubes case number (0-255) for an x,y,z location:
unsigned char GetCaseNumber(x,y,z);
by determining whether the neighbours of that voxel are turned on or off.
No problems here.
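A sketch of the case extraction, built on top of the IsOn accessor above; the corner-to-bit ordering must match whatever MC tables you precomputed, and bounds checks are omitted (x, y, z must stay below size - 1):

unsigned char GetCaseNumber(const unsigned char* voxels,
                            int x, int y, int z, int size) {
    unsigned char c = 0;                     // one bit per cube corner
    if (IsOn(voxels, x,     y,     z,     size)) c |= 1;
    if (IsOn(voxels, x + 1, y,     z,     size)) c |= 2;
    if (IsOn(voxels, x + 1, y,     z + 1, size)) c |= 4;
    if (IsOn(voxels, x,     y,     z + 1, size)) c |= 8;
    if (IsOn(voxels, x,     y + 1, z,     size)) c |= 16;
    if (IsOn(voxels, x + 1, y + 1, z,     size)) c |= 32;
    if (IsOn(voxels, x + 1, y + 1, z + 1, size)) c |= 64;
    if (IsOn(voxels, x,     y + 1, z + 1, size)) c |= 128;
    return c;
}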
3) Rendering part
I loop over each axis, extract the case number, fetch the precalculated triangles for that case, translate them to the x,y,z coordinates, and draw those triangles.
No problems here.
So result looks like this:
But as you can see, the resolution at any given location is not comparable to, for example, this:
(source: angelfire.com)
I have seen MC examples where people use something called "iso values", which I don't understand.
Any suggestions on how to improve my work, or an explanation of what iso values are and how to implement them on a uniform grid, would be truly lovely.
The problem is that your voxels are a binary mask (just on or off).
This is great for the "default" marching cubes algorithm, but it does mean you get sharp edges in your mesh.
The smooth example is probably generated from smooth scalar data.
Imagine that your data varies smoothly between 0.0 and 1.0 and you set your threshold to 0.5. Now, after you detect which configuration a given cube is in, you look at all the vertices generated.
Say you have a vertex on an edge between two voxels, one with value 0.4 and the other 0.7. Then you move the vertex to the position where interpolating between 0.4 and 0.7 yields exactly 0.5 (the threshold), so it will be closer to the 0.4 voxel.
This way, each vertex lies exactly on the interpolated iso surface and you will generate much smoother triangles.
But it does require that your input voxels are scalar (and vary smoothly). If your voxels are bi-level (all either 0 or 1), this will produce the same triangles as before.
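In code, the interpolation described above is just this (a minimal sketch; with iso = 0.5, v0 = 0.4 and v1 = 0.7 it yields t = 1/3, i.e. a point closer to the 0.4 corner):

struct Vec3 { float x, y, z; };

// Place the vertex where the scalar field crosses the iso value between
// two cube corners. p0/p1 are the corner positions, v0/v1 their values;
// the edge is only processed when the surface crosses it, so v0 != v1.
Vec3 interpolateVertex(Vec3 p0, Vec3 p1, float v0, float v1, float iso) {
    float t = (iso - v0) / (v1 - v0);
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}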
Another idea (not the answer to your question but perhaps useful):
To get smoother rendering without mathematical correctness, it could be worthwhile to compute an averaged normal vector for each vertex and use that normal for every triangle connected to it. This will hide the sharp edges.

Which geometrical calculations can be accelerated using OpenGL

I need to accelerate some programs that use intensive calculations, where surfaces resulting from the intersection of cubes, spheres and similar objects are needed. Using CUDA I would, of course, need to specify all the formulae required to analytically calculate the intersection information. But since I only need a good approximation of the resulting surface, I have read that OpenGL can calculate or estimate such surfaces. I wonder if you could give me your opinion or point me to relevant references.
If you just need to render those objects, you could use the stencil buffer to evaluate whatever boolean operations you need: http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node11.html
Any quantity that can be computed from either a perspective or an orthographic projection of the intersection surface can be deduced from such a rendering together with its depth buffer. If you need to extract the whole intersection, you can try depth peeling together with stencilled CSG to extract a layered representation of the complete intersection, though it can be very inaccurate on the parts of the surface that are parallel to the viewing direction, and you will need to do some extra work to stitch the layers back together:
http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf
EDIT: This will work for arbitrary, free form surfaces and is a fairly standard technique. But it does have its limitations, in that the accuracy you get will be fairly poor and you may have to project onto multiple views in order to get some adequate covering of your object. As an example, here is an application to collision detection: http://www.cs.ucl.ac.uk/staff/b.spanlang/ISBCICSOWH.pdf
OpenGL is of even less use here than CUDA or OpenCL, since it is primarily targeted at drawing triangulated meshes. Of course you can do sophisticated geometrical computations in the various shader stages of modern OpenGL. The problem is that the result of all those computations is a pixel-based picture. There is a feedback mechanism to retrieve the processed vertex data, but that only gives you a mesh.
Intersections of anything planar and/or with spheres are actually quite easy and can be done analytically. The really hard stuff is intersecting freeform curved surfaces (Bézier or NURBS). Those usually don't have a closed-form solution, so what you need to do is numerically approximate a trim curve that best fits the intersection.