How to calculate consistent tangent vectors over a mesh surface? - opengl

Bump mapping in OpenGL shaders is usually done in tangent space, which has the normal, tangent and binormal as base vectors.
According to my book, OpenGL Shading Language, the base vectors must be consistently oriented across the surface of the object for the lighting equations to interpolate correctly. It also specifies that "consistent" means consistent with respect to the normal map texture coordinates.
So given the vertex positions, normals and normal map texture coordinates for an arbitrary mesh, how can I calculate consistent tangent vectors?

Calculating tangent and bitangent vectors so that they are oriented consistently with the texture coordinates and correctly match the normals is actually fairly complicated.
A good code sample I have used in the past is this one:
http://www.terathon.com/code/tangent.html
Crytek also has a presentation on this topic; their implementation solves many common problems with tangent space calculation:
http://crytek.com/cryengine/presentations/triangle-mesh-tangent-space-calculation
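For reference, here is a rough C++ sketch (using GLM for the vector math; the indexed-mesh layout and function names are just assumptions for the example) of the per-triangle accumulation those articles describe. The linked code additionally handles orthogonalization, handedness and UV seams, so treat this as the bare idea only:

```cpp
// Sketch: averaged per-vertex tangents and bitangents from positions and UVs.
#include <vector>
#include <glm/glm.hpp>

struct TangentFrame { glm::vec3 tangent{0.0f}; glm::vec3 bitangent{0.0f}; };

std::vector<TangentFrame> computeTangents(const std::vector<glm::vec3>& positions,
                                          const std::vector<glm::vec2>& uvs,
                                          const std::vector<unsigned>& indices)
{
    std::vector<TangentFrame> frames(positions.size());

    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];

        const glm::vec3 e1 = positions[i1] - positions[i0];   // triangle edges
        const glm::vec3 e2 = positions[i2] - positions[i0];
        const glm::vec2 d1 = uvs[i1] - uvs[i0];                // UV deltas
        const glm::vec2 d2 = uvs[i2] - uvs[i0];

        // Solve e1 = d1.x*T + d1.y*B and e2 = d2.x*T + d2.y*B for T and B.
        const float det = d1.x * d2.y - d2.x * d1.y;
        if (det == 0.0f) continue;                             // degenerate UV mapping
        const float r = 1.0f / det;
        const glm::vec3 t = (e1 * d2.y - e2 * d1.y) * r;       // dPosition/dU
        const glm::vec3 b = (e2 * d1.x - e1 * d2.x) * r;       // dPosition/dV

        // Accumulate, so vertices shared by several triangles get an average.
        frames[i0].tangent += t;  frames[i0].bitangent += b;
        frames[i1].tangent += t;  frames[i1].bitangent += b;
        frames[i2].tangent += t;  frames[i2].bitangent += b;
    }

    for (TangentFrame& f : frames) {                           // average -> unit length
        f.tangent   = glm::normalize(f.tangent);
        f.bitangent = glm::normalize(f.bitangent);
    }
    return frames;
}
```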

Related

What property should the "vertex texcoord" have when calculating tangent space?

I'm using OpenMesh to handle triangle meshes.
I have done mesh parametrization to set the vertex texcoords, and my whole understanding of vertex texcoords comes from there. If I'm not mistaken, a texcoord should be a value attached to a vertex that can be changed.
But now I want to calculate the tangent space for every vertex, and all tutorials talk about the "vertex texcoord" like it's a fixed property of the vertex.
Here's one tutorial I read, and it says:
If the mesh we are working on doesn't have texcoords, we'll skip the tangent space phase, because it is not possible to create an arbitrary UV map in code; UV maps are design dependent and change depending on how the texture is made.
So, what property should the "texcoord" have when calculating tangent space?
Thank you!
It's unclear what you're asking exactly, so hopefully this will help your understanding.
The texture coordinates (texcoords) of each vertex are set during the model design phase and are loaded with the mesh. They contain the UV coordinates that the vertex is mapped to within the texture.
The tangent space is formed by the tangent, bitangent and normal (TBN) vectors at each point. The normal is either loaded with the mesh or can be calculated by averaging the normals of the triangles meeting at the vertex. The tangent is the direction in which the U coordinate of the texcoord changes the most, i.e. the partial derivative of the model-space position with respect to U. Similarly, the bitangent is the partial derivative of the position with respect to V. Tangents and bitangents can be calculated per face together with the face normals and then averaged at the vertices, just as normals are.
For flat faces, the tangent and bitangent are perpendicular to the normal by construction. However, due to the averaging at the vertices they may no longer be perpendicular. Also, even for flat faces, the tangent may not be perpendicular to the bitangent (e.g. imagine a skewed checkerboard texture mapping). However, to simplify the inversion of the TBN matrix, it is sometimes approximated with an orthogonal matrix, or even a quaternion. Even though this approximation isn't valid for skewed mappings, it may still give plausible results. When orthogonality is assumed, the bitangent can be calculated as a cross product between the tangent and the normal.
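A small C++/GLM sketch of that orthogonal approximation (names are illustrative): the averaged tangent is made perpendicular to the normal with one Gram-Schmidt step, and the bitangent is rebuilt as a cross product, with a sign to preserve the handedness of mirrored UVs:

```cpp
#include <glm/glm.hpp>

// Orthogonal approximation of the per-vertex tangent frame.
// n: averaged vertex normal, t/b: averaged tangent and bitangent.
glm::mat3 orthogonalTBN(glm::vec3 n, glm::vec3 t, glm::vec3 b)
{
    n = glm::normalize(n);
    // Gram-Schmidt: remove the component of t that lies along n.
    t = glm::normalize(t - n * glm::dot(n, t));
    // Rebuild the bitangent; the sign keeps the handedness of mirrored UV islands.
    const float sign = (glm::dot(glm::cross(n, t), b) < 0.0f) ? -1.0f : 1.0f;
    b = glm::cross(n, t) * sign;
    return glm::mat3(t, b, n);  // columns: tangent, bitangent, normal
}
```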

Calculating Per-Vertex Tangents for GLSL

Many answers I've seen online to similar questions provide calculations for a tangent-space matrix, but I would like to know how to calculate per-vertex tangents to send to shaders as a vertex attribute. I understand that the tangent for each vertex must be similar to neighboring vertices to avoid visual lighting artefacts, so arbitrary perpendicular vectors cannot be chosen against a normal.
Extra info- I have data such as normals, vertex positions and texture coordinates that I have obtained through a parser I wrote for .obj (wavefront) model files.
I can imagine this would be a relatively simple problem, but not being a mathematician or even particularly competent in the area, the answer does not jump out at me.
Just take the derivatives of the UV (texture coordinates) to align your tangent spaces. That is, one vector of your tangent space is the normal, and you can pick the other two freely; by rotating them so that they coincide with the UV coordinate derivatives, you get a smooth tangent field. However, that's just half of the story, as this is indeed a pretty tricky business. Here's some additional reading:
http://www.terathon.com/code/tangent.html
http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf
http://www.crytek.com/download/Triangle_mesh_tangent_space_calculation.pdf
There are also multiple tangent-space conventions used in different applications, so you have to think about which one you want as well. For comprehensive coverage, search for the mikktspace library (mikktspace.h and mikktspace.c).
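For illustration, this is roughly how the mikktspace interface is wired up for an indexed triangle mesh (C++; the Mesh struct is made up for the example, and the member and function names are written from memory, so check them against the actual mikktspace.h). Note that mikktspace outputs tangents per face corner, so writing them into a shared per-vertex array, as done here, only works if the mesh is already split along UV seams:

```cpp
#include <vector>
#include <glm/glm.hpp>
#include "mikktspace.h"   // from the mikktspace library (mikktspace.h / mikktspace.c)

// Hypothetical mesh layout used only for this example.
struct Mesh {
    std::vector<glm::vec3> positions, normals;
    std::vector<glm::vec2> uvs;
    std::vector<unsigned>  indices;      // 3 per triangle
    std::vector<glm::vec4> tangents;     // xyz = tangent, w = handedness sign
};

static Mesh& mesh(const SMikkTSpaceContext* c) { return *static_cast<Mesh*>(c->m_pUserData); }
static unsigned vert(const SMikkTSpaceContext* c, int face, int v) { return mesh(c).indices[3 * face + v]; }

static int  getNumFaces(const SMikkTSpaceContext* c) { return int(mesh(c).indices.size() / 3); }
static int  getNumVerts(const SMikkTSpaceContext*, int) { return 3; }
static void getPosition(const SMikkTSpaceContext* c, float out[], int f, int v)
{ const glm::vec3& p = mesh(c).positions[vert(c, f, v)]; out[0] = p.x; out[1] = p.y; out[2] = p.z; }
static void getNormal(const SMikkTSpaceContext* c, float out[], int f, int v)
{ const glm::vec3& n = mesh(c).normals[vert(c, f, v)]; out[0] = n.x; out[1] = n.y; out[2] = n.z; }
static void getTexCoord(const SMikkTSpaceContext* c, float out[], int f, int v)
{ const glm::vec2& t = mesh(c).uvs[vert(c, f, v)]; out[0] = t.x; out[1] = t.y; }
static void setTangent(const SMikkTSpaceContext* c, const float t[], float sign, int f, int v)
{ mesh(c).tangents[vert(c, f, v)] = glm::vec4(t[0], t[1], t[2], sign); }

void generateTangents(Mesh& m)
{
    m.tangents.assign(m.positions.size(), glm::vec4(0.0f));

    SMikkTSpaceInterface iface = {};
    iface.m_getNumFaces          = getNumFaces;
    iface.m_getNumVerticesOfFace = getNumVerts;
    iface.m_getPosition          = getPosition;
    iface.m_getNormal            = getNormal;
    iface.m_getTexCoord          = getTexCoord;
    iface.m_setTSpaceBasic       = setTangent;

    SMikkTSpaceContext ctx = {};
    ctx.m_pInterface = &iface;
    ctx.m_pUserData  = &m;
    genTangSpaceDefault(&ctx);   // fills m.tangents
}
```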

Normal Mapping Questions

I'm implementing tangent space normal mapping in my OpenGL app, and I have a few questions.
1) I know that, naturally, the TBN matrix is not always orthogonal because the texture coordinates might have skewing. I know that you can re-orthogonalize it using the Gram-Schmidt process. My question is this: does the Gram-Schmidt process introduce visible artifacts? Will I get the best visual quality by using the pure, unmodified normal/tangent/bitangent vectors?
2) I notice that in a lot of tutorials, they do the lighting calculations in tangent space instead of view space. Why is this? I plan to use a deferred renderer, so my normals have to be saved into a buffer, am I correct in that they should be in view space when saved? What would I be missing out on by using view space instead of tangent space in the calculations? If I'm using view space, do I need the inverse of the TBN matrix?
3) In the fragment shader, do the tangent and bitangent have to be re-normalized? After multiplying the incoming (normalized) bump map normal, does that then have to be renormalized? Would the orthogonality (see question 1) affect this? What vectors do or do not need to be renormalized within the fragment shader?
Does the Gram-Schmidt process introduce visible artifacts?
It depends on the desired outcome. But the usual answer would be yes, since normal map vectors are in tangent space. Orthogonalizing the tangent space will skew the normal mapping calculations if the texture coordinates define a non-orthogonal basis.
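To make the trade-off concrete, here is a C++/GLM sketch (standing in for the shader math; names are illustrative). With an orthonormalized frame the inverse of the TBN matrix is just its transpose, while keeping the raw, possibly skewed frame requires a full matrix inverse but preserves the original mapping:

```cpp
#include <glm/glm.hpp>

// Transform a world/model-space vector into tangent space.

// Orthonormalized TBN: the inverse is simply the transpose (cheap, but it
// distorts the frame if the UV mapping itself is skewed).
glm::vec3 toTangentSpaceOrtho(const glm::mat3& tbn, const glm::vec3& v)
{
    return glm::transpose(tbn) * v;
}

// Raw (possibly non-orthogonal) TBN: a full inverse preserves the skewed
// mapping exactly, at slightly higher cost.
glm::vec3 toTangentSpaceExact(const glm::mat3& tbn, const glm::vec3& v)
{
    return glm::inverse(tbn) * v;
}
```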
I notice that in a lot of tutorials, they do the lighting calculations in tangent space instead of view space. Why is this?
Doing normal mapping in view space would require each normal map texel to be transformed into view space. This is more expensive than transforming the lighting vectors into tangent space in the vertex shader and letting the barycentric interpolation stage (which is far more efficient) do its work.
In the fragment shader, do the tangent and bitangent have to be re-normalized?
No. You should normalize the individual vectors after transforming them (the normal fetched from the normal map, the direction to the light source and the direction to the viewpoint), but before doing the illumination calculations.
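Putting that together, here is a C++/GLM sketch of the per-vertex part (the same math a vertex shader would do; names and parameters are illustrative). The resulting vectors are the ones that get interpolated across the triangle and then normalized per fragment:

```cpp
#include <glm/glm.hpp>

// Per-vertex work: build the TBN from the vertex attributes and move the light
// and view directions into tangent space, so the fragment stage only has to
// normalize the interpolated results before lighting.
struct TangentSpaceInputs { glm::vec3 lightDir, viewDir; };

TangentSpaceInputs toTangentSpace(glm::vec3 n, glm::vec3 t, glm::vec3 b,
                                  glm::vec3 position, glm::vec3 lightPos, glm::vec3 eyePos)
{
    glm::mat3 tbn(t, b, n);                          // columns: tangent, bitangent, normal
    glm::mat3 worldToTangent = glm::transpose(tbn);  // assumes an orthonormal frame
    TangentSpaceInputs out;
    out.lightDir = worldToTangent * (lightPos - position);  // normalized later, per fragment
    out.viewDir  = worldToTangent * (eyePos   - position);
    return out;
}
```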

Light computation (shading) in model, tangent or camera space?

I'm currently trying to implement bump mapping, which requires a "tangent space". I read through some tutorials, specifically the following two:
http://www.terathon.com/code/tangent.html
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
Both tutorials avoid the expensive matrix computation in the fragment shader that would be required if the shading computation happened in camera space as usual (as I'm used to, at least).
They introduce the tangent space which might be different per vertex (or even per fragment if the surface is smoothed). If I understand it correctly, for efficient bump mapping (i.e. to minimize computations in the fragment shader), they convert everything needed for light computation into this tangent space using the vertex shader. But I wonder if model space is a good alternative to compute light shading in.
My questions regarding this topic are:
For shading computation in tangent space, what exactly do I pass between the vertex and fragment shaders? Do I really need to convert light positions into tangent space, requiring O(number of lights) varying variables? This will, for example, not work for deferred shading, or if the light positions aren't known in the vertex shader for some other reason. There has to be a (still efficient) alternative, which I guess is shading computation in model space.
If I pass model-space varyings, is it a good idea to still perform the shading computations in tangent space, i.e. convert the light positions in the fragment shader? Or is it better to perform the shading computations in model space? Which will be faster? (In both cases I need a TBN matrix, but one case requires a model-to-tangent transform, the other a tangent-to-model transform.)
I currently pass per-vertex normal, tangent and bitangent (orthonormal) to the vertex shader. If I understand it correctly, the orthonormalization is only required if I want to quickly build a model-to-tangent-space matrix, which requires inverting a matrix containing the TBN vectors. If they are orthogonal, this is simply a transposition. But if I don't need vectors in tangent space, I don't need an inversion, just the original TBN vectors in a matrix, which is then the tangent-to-model matrix. Wouldn't this simplify everything?
Normal mapping is usually done in tangent space because the normal maps are given in this space. So if you pre-transform the (relatively little) input data to tangent space in the vertex shader, you don't need extra computation in the fragment shader. That requires that all input data is available, of course. I haven't done bump mapping with deferred shading, but using model space seems to be a good idea. World space would probably be even better, because you'll need world-space vectors in the end to render to the G-buffers.
If you pass model-space vectors, I would recommend performing the calculations in this space. Then the fragment shader only has to transform one normal from tangent space to model space. In the other case it would have to transform n light attributes from model space to tangent space, which should take n times longer.
If you don't need the inverse TBN matrix, a non-orthonormal coordinate system should be fine. At least I don't see any reason why it shouldn't be.
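As a sketch of the model-space variant (C++ with GLM standing in for the shader math; names are illustrative), the fragment stage only has to move the one sampled normal from tangent space to model space with the plain, uninverted TBN:

```cpp
#include <glm/glm.hpp>

// Shading in model space: the sampled normal is moved from tangent space to
// model space with the plain TBN matrix (tangent-to-model), so no inverse is
// needed and the frame does not have to be orthonormal.
glm::vec3 modelSpaceNormal(const glm::vec3& t, const glm::vec3& b, const glm::vec3& n,
                           const glm::vec3& normalMapTexel /* in [0,1]^3 */)
{
    glm::vec3 tsNormal = normalMapTexel * 2.0f - 1.0f;   // decode to [-1,1]
    glm::mat3 tangentToModel(t, b, n);                   // columns: T, B, N
    return glm::normalize(tangentToModel * tsNormal);    // one transform per fragment
}
```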

OpenGL Shaders - Normals in Gouraud and Phong shading?

I can't seem to understand the OpenGL pipeline process from a vertex to a pixel.
Can anyone tell me how important vertex normals are in these two shading techniques? As far as I know, in Gouraud shading, lighting is calculated at each vertex and the resulting color is then interpolated across the polygon between the vertices (is this done in the fragment operations, before rasterizing?), while Phong shading consists of first interpolating the vertex normals and then calculating the illumination with each of these normals.
Another thing is when bump mapping is applied to, let's say, a plane (2 triangles) with a brick texture as the diffuse map and its respective bump map, all of this with Gouraud shading.
Bump mapping consists of altering the normals by a gradient depending on a bump map. But which normals does it alter, and when (in the fragment shader?), if there are only 4 normals (4 vertices = plane) and all 4 are the same? In Gouraud shading you interpolate the color of each vertex after the illumination calculation, but this calculation is done after altering the normals.
How does the lighting work?
Vertex normals are absolutely essential for both Gouraud and Phong shading.
In Gouraud shading the lighting is calculated per vertex and then interpolated across the triangle.
In Phong shading the normal is interpolated across the triangle and then the calculation is done per-pixel/fragment.
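To make the difference concrete, here is a small C++/GLM sketch of the two orders of operation for a single Lambert term; the barycentric weights stand in for what the rasterizer's interpolation does (names are illustrative):

```cpp
#include <glm/glm.hpp>

// Lambert term for one light direction.
static float lambert(const glm::vec3& n, const glm::vec3& l)
{
    return glm::max(glm::dot(glm::normalize(n), glm::normalize(l)), 0.0f);
}

// Gouraud: light each vertex, then interpolate the resulting colors.
float gouraud(const glm::vec3 n[3], const glm::vec3& lightDir, const glm::vec3& bary)
{
    return bary.x * lambert(n[0], lightDir)
         + bary.y * lambert(n[1], lightDir)
         + bary.z * lambert(n[2], lightDir);
}

// Phong shading: interpolate the normal, then light the fragment.
float phong(const glm::vec3 n[3], const glm::vec3& lightDir, const glm::vec3& bary)
{
    const glm::vec3 interpolated = bary.x * n[0] + bary.y * n[1] + bary.z * n[2];
    return lambert(interpolated, lightDir);
}
```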
Bump-mapping refers to a range of different technologies. When doing normal mapping (probably the most common variety these days) the normals, bi-tangent (often erroneously called bi-normal) and tangent are calculated per-vertex to build a basis matrix. This basis matrix is then interpolated across the triangle. The normal retrieved from the normal map is then transformed by this basis matrix and then the lighting is performed per pixel.
There are extensions to the normal mapping technique above that allow bumps to hide other bumps behind them. This is, usually, performed by storing a height map along with the normal map and then ray marching through the height map to find parts that are being obscured. This technique is called Relief Mapping.
There are other, older forms, such as DUDV bump mapping (which was implemented in DirectX 6 as environment-mapped bump mapping, or EMBM).
You also have emboss bump mapping, which was a really early way of doing bump mapping.
Edit: In answer to your comment, emboss bump mapping CAN be performed on Gouraud-shaded triangles. Other forms of bump mapping are, necessarily, per-pixel (due to the fact that they work by modifying the surface normals on a per-pixel (or, at least, per-texel) basis). I wouldn't be surprised if there were other methods that can be performed with per-vertex lighting, but I can't think of any off the top of my head. The results will look pretty rubbish compared to doing it on a per-pixel basis, though.
Re: Tangents and Bi-Tangents are actually quite simple once you get your head round them (took me years though, tbh ;)). Any 3D coordinate frame can be defined by a set of vectors that form an orthogonal basis matrix. By setting up the normal, tangent and bi-tangent per vertex you are merely setting up the coordinate frame at each vertex. From this you have the ability to transform a world or object space vector into the triangle's own coordinate frame. From here you can simply translate a light vector (or position) into the coordinate frame of a given pixel on the surface of the triangle. This then means that the normals in the normal map don't need to be stored in the object's space and hence as those triangles move around (when being animated, for example) the normals are already being handled in their own local space.
Normal mapping, one of the techniques to simulate bumped surfaces, basically perturbs the per-pixel normals before you compute the light equation for that pixel.
For example, one way to implement it requires you to interpolate the surface normal and binormal (2 of the tangent space basis vectors) and compute the third per pixel (2+1 vectors, which form the tangent basis). This technique also requires interpolating the light vector. With those 3 (2+1 computed) vectors (the tangent space basis) you have a way to change the light vector from object space into tangent space, because these 3 vectors can be arranged as a 3x3 matrix which can be used to change the basis of your light direction vector.
Then it is simply a matter of using that tangent-space light vector and computing the light equation per pixel, where in its most basic form it would be a dot product between the tangent-space light vector and the normal map (your bump texture).
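A minimal C++/GLM sketch of that per-fragment step (names are illustrative); the texel is decoded from the [0,1] texture range back to a [-1,1] vector before the dot product:

```cpp
#include <glm/glm.hpp>

// Per-fragment: decode the normal-map texel and evaluate the most basic light
// equation (a Lambert dot product) entirely in tangent space.
float tangentSpaceDiffuse(const glm::vec3& normalMapTexel /* in [0,1]^3 */,
                          const glm::vec3& tangentSpaceLightDir)
{
    glm::vec3 n = glm::normalize(normalMapTexel * 2.0f - 1.0f);   // back to [-1,1]
    glm::vec3 l = glm::normalize(tangentSpaceLightDir);
    return glm::max(glm::dot(n, l), 0.0f);
}
```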
A normal map stores the components of the perturbed normal in the channels of the texture, already in tangent space.
This is one way to do it; you can also compute things in view space, but the above is easier to understand.
Old bump mapping was way simpler and was also kind of a fake effect.
All bump mapping techniques operate at the pixel level, as they perturb, in one way or another, how the surface is rendered. Even the old emboss bump mapping did some computation per pixel.
EDIT: I added a few more clarifications, when I have some spare minutes I will try to add some math and examples. Although there are great resources out there that explain this in great detail.
First of all, you don't need to understand the whole graphics pipeline to write a simple shader :). But, of course, you should know what's going on. You could read the graphics pipeline chapter in Real-Time Rendering, 3rd edition (Akenine-Möller, Haines, Hoffman). What you describe is per-vertex and per-fragment lighting. For both calculations the vertex normals are part of the equation. For the bump mapping shader you alter the interpolated normals. So after rasterization you have fragments where the missing data has to be calculated to determine the final pixel color.