What properties should the "vertex texcoord" have when calculating tangent space? - opengl

I'm using OpenMesh to handle triangle meshes.
I have done mesh parametrization to set the vertex texcoords, and my whole understanding of vertex texcoords comes from there: if I'm not getting it wrong, a texcoord is a changeable value attached to a vertex.
But now I want to calculate the tangent space for every vertex, and all the tutorials talk about the "vertex texcoord" like it's a fixed property of the vertex.
Here's one tutorial I read, and it says:
If the mesh we are working on doesn't have texcoords, we'll skip the tangent-space phase, because it is not possible to create an arbitrary UV map in code; UV maps are design-dependent and depend on how the texture is made.
So, what properties should the "texcoord" have when calculating tangent space?
Thank you!

It's unclear what you're asking exactly, so hopefully this will help your understanding.
The texture coordinates (texcoords) of each vertex are set during the model design phase and are loaded with the mesh. They contain the UV coordinates that the vertex is mapped to within the texture.
The tangent space is formed by the tangent, bitangent and normal (TBN) vectors at each point. The normal is either loaded with the mesh or can be calculated by averaging the normals of the triangles meeting at the vertex. The tangent is the direction in which the U coordinate of the texcoord changes the most, i.e. the partial derivative of the model-space position with respect to U. Similarly, the bitangent is the partial derivative of the position with respect to V. Tangent and bitangent can be calculated for each face and then averaged at the vertices, just as normals are.
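To make the partial-derivative construction concrete, here is a minimal sketch of the per-face computation, written in GLSL-style syntax for consistency with the rest of this page (in practice this usually runs on the CPU over the mesh data, and the function name and parameters are illustrative):
// Per-face tangent/bitangent from a triangle's position and UV edges.
// Solves the 2x2 system: e1 = d1.x*T + d1.y*B, e2 = d2.x*T + d2.y*B
void computeFaceTB(vec3 p0, vec3 p1, vec3 p2,
                   vec2 uv0, vec2 uv1, vec2 uv2,
                   out vec3 T, out vec3 B)
{
    vec3 e1 = p1 - p0;   // position edges
    vec3 e2 = p2 - p0;
    vec2 d1 = uv1 - uv0; // UV edges
    vec2 d2 = uv2 - uv0;
    float r = 1.0 / (d1.x * d2.y - d1.y * d2.x); // guard against degenerate UVs in real code
    T = (e1 * d2.y - e2 * d1.y) * r; // dPosition/dU
    B = (e2 * d1.x - e1 * d2.x) * r; // dPosition/dV
}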
For flat faces, the tangent and bitangent are perpendicular to the normal by construction. However, due to the averaging at the vertices they may no longer be perpendicular. Also, even for flat faces, the tangent may not be perpendicular to the bitangent (e.g. imagine a skewed checkerboard texture mapping). However, to simplify the inversion of the TBN matrix, it is sometimes approximated by an orthogonal matrix, or even a quaternion. Even though this approximation isn't valid for skew-mapped textures, it may still give plausible results. When orthogonality is assumed, the bitangent can be calculated as the cross product of the tangent and the normal.

Related

Using a tangent-space normal map with world-space normals

I'm working on deferred shading in GLSL. I have all of my model normals in world space, and the positions as well. The whole thing is working exactly as it should with just the world-space normals of the polygons, but I'm unsure of how to add normal mapping. I know it's recommended to do lighting in view-space, and I've tried that but I couldn't get even a quarter as far with that as I have with world-space. All I have left to fix now is adding normals. I have a regular normal map, which is in tangent-space. I've tried the following:
Multiplying the world-space normal by the normal-map didn't seem to work.
I tried adding the normal-map's normals, even though I knew that wouldn't work.
At this point I'm not really sure what else to try. Is there any easy way to do this?
Normal-rendering frag shader:
#ifdef NORMALMAP
uniform sampler2D m_Normals;
#endif
varying vec3 normal;
varying vec2 texCoord;
void main() {
    vec3 normals;
#ifdef NORMALMAP
    normals = normalize(normal * (texture(m_Normals, texCoord).xyz));
#else
    normals = normal;
#endif
    gl_FragColor = vec4(normals, 1.0);
}
The variable "normal" is the world-space normal passed in from the vertex shader.
My light positions are all in normal space.
I do not know what "normal space" means here, to be honest. There are "normal spaces" in mathematics, but those are quite unrelated, and there is no standard coordinate space referred to as "normal space."
My question is how to convert tangent to world.
Your polygon's world-space normal is inadequate to solve this problem (in fact, you need a per-vertex normal). To transform from tangent-space to object-space you need three things:
your object-space normal vector
a vector representing the change in texture s coordinates
a vector representing the change in texture t coordinates
Those three things together form the tangent-space -> object-space change of basis matrix (usually referred to as TBN because the vectors are called Tangent, Bitangent and Normal).
However, you do not want object-space normals in this question, so you need to multiply that matrix by your model matrix so that it skips over object-space and goes straight to world-space. Likewise, if your normal vector is already in world-space you need to transform it back to object-space. It is possible that object-space and world-space are the same in this case particularly since we are only dealing with a 3x3 matrix here (no translation). But I cannot be sure from the description you have given.
One last thing to note: in this case you want the inverse of the traditional TBN matrix. Traditionally it is used to transform vectors into tangent space, but you want to transform them out. The inverse of a rotation matrix is the same as its transpose, so all you really have to do is put those 3 vectors across your rows instead of columns.¹
¹ That assumes that your T, B and N vectors are at right angles. That is not necessarily the case, since they are derived from your texture coordinates. It all comes down to what you use to compute your tangent and bitangent vectors; a lot of software will re-orthogonalize them for you.
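Putting this together, a minimal fragment-shader sketch, assuming your mesh also supplies a per-vertex world-space tangent (wsNormal and wsTangent are hypothetical varyings; m_Normals and texCoord are from the question's shader). Building the matrix with T, B and N as its columns gives the tangent-to-world transform directly:
uniform sampler2D m_Normals;
varying vec3 wsNormal;  // per-vertex world-space normal (assumed supplied)
varying vec3 wsTangent; // per-vertex world-space tangent (assumed supplied)
varying vec2 texCoord;
void main() {
    vec3 N = normalize(wsNormal);
    vec3 T = normalize(wsTangent - N * dot(N, wsTangent)); // re-orthogonalize
    vec3 B = cross(N, T);
    mat3 TBN = mat3(T, B, N); // columns = basis vectors: tangent -> world
    // Unpack the normal map from [0,1] to [-1,1] before transforming.
    vec3 n = TBN * (texture2D(m_Normals, texCoord).xyz * 2.0 - 1.0);
    gl_FragColor = vec4(normalize(n), 1.0);
}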

Normal Mapping Questions

I'm implementing tangent space normal mapping in my OpenGL app, and I have a few questions.
1) I know that, naturally, the TBN matrix is not always orthogonal because the texture coordinates might have skewing. I know that you can re-orthogonalize it using the Gram-Schmidt process. My question is this: does the Gram-Schmidt process introduce visible artifacts? Will I get the best visual quality by using the pure, unmodified normal/tangent/bitangent?
2) I notice that in a lot of tutorials, they do the lighting calculations in tangent space instead of view space. Why is this? I plan to use a deferred renderer, so my normals have to be saved into a buffer, am I correct in that they should be in view space when saved? What would I be missing out on by using view space instead of tangent space in the calculations? If I'm using view space, do I need the inverse of the TBN matrix?
3) In the fragment shader, do the tangent and bitangent have to be re-normalized? After multiplying the incoming (normalized) bump map normal, does that then have to be renormalized? Would the orthogonality (see question 1) affect this? What vectors do or do not need to be renormalized within the fragment shader?
Does the Gram-Schmidt process introduce visible artifacts?
It depends on the desired outcome, but the usual answer would be yes, since normal-map vectors are defined in tangent space. Orthogonalizing the tangent space will skew the normal-mapping calculations if the texture coordinates define a non-orthogonal basis.
I notice that in a lot of tutorials, they do the lighting calculations in tangent space instead of view space. Why is this?
Doing normal mapping in view space would require each normal-map texel to be transformed into view space. This is more expensive than transforming the lighting vectors into tangent space in the vertex shader and letting the barycentric interpolation stage (which is far more efficient) do its work.
In the fragment shader, do the tangent and bitangent have to be re-normalized?
No. You should normalize the individual vectors after transforming them (the normal-map sample, the direction to the light source and the direction to the viewpoint), but before doing the illumination calculations.
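For illustration, a sketch of the tangent-space approach described above: the TBN basis is built in the vertex shader and the eye-space light vector is projected onto it, so the fragment shader only has to sample the normal map (the attribute and uniform names are placeholders):
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec2 a_texCoord;
uniform mat4 u_modelView;
uniform mat4 u_projection;
uniform mat3 u_normalMatrix; // transposed inverse of the model-view 3x3
uniform vec3 u_lightPosES;   // light position in eye space (placeholder)
varying vec3 v_lightTS;      // light direction in tangent space
varying vec2 v_texCoord;
void main() {
    vec3 n = normalize(u_normalMatrix * a_normal);
    vec3 t = normalize(u_normalMatrix * a_tangent);
    vec3 b = cross(n, t); // assumes orthogonal, unit-length T and N
    vec3 posES = (u_modelView * vec4(a_position, 1.0)).xyz;
    vec3 lightES = u_lightPosES - posES;
    // Dotting with t, b, n multiplies by the transpose (= inverse) of the TBN:
    v_lightTS = vec3(dot(lightES, t), dot(lightES, b), dot(lightES, n));
    v_texCoord = a_texCoord;
    gl_Position = u_projection * vec4(posES, 1.0);
}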

Normal mapping, how to transform vectors

In the vertex shader we usually create TBN matrix:
vec3 n = normalize(gl_NormalMatrix * gl_Normal);     // normal in eye space
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);   // tangent in eye space
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz); // bitangent in eye space
mat3 tbn = mat3(t, b, n);
This matrix transforms vectors from tangent space into eye/camera space.
Now for normal mapping (done in forward rendering) we have two options:
Invert the TBN matrix, transform light_vector and view_direction by it, and send those vectors to the fragment shader. After that, those vectors are in tangent space.
That way, in the fragment shader, we only need to read the normal from the normal map. Since such normals are in tangent space (by "definition"), they match our transformed light_vector and view_direction.
This way we do the lighting calculations in tangent space.
Pass the TBN matrix to the fragment shader and transform each normal read from the normal map by it. That way we transform the normal into view space.
This way we do the lighting calculations in eye/camera space.
Option 1 seems to be faster: we do most of the transformations in the vertex shader and only one read from the normal map.
Option 2 requires transforming each normal read from the normal map by the TBN matrix, but it seems a bit simpler.
Questions:
Which option is better?
Is there any performance loss? (maybe texture read will "cover" costs of doing the matrix transformation)
Which option is more often used?
I'll tell you this much right now: depending on your application, option 1 may not even be possible.
In a deferred-shading graphics engine you have to compute the light vector in your fragment shader, which rules out option 1. You cannot keep the TBN matrix around when it comes time to do lighting in deferred shading either, so you would transform your normals into world space or view space (view space is not favored very often anymore) ahead of time, when you build your normal G-Buffer (the TBN matrix can be computed in the vertex shader and passed to the fragment shader as a flat mat3). Then you sample the normal map using that basis and write the result in world space.
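As a sketch of what that G-Buffer pass might look like (assuming a world-space TBN built in the vertex shader; the names u_normalMap, v_TBN and gNormal are placeholders):
uniform sampler2D u_normalMap;         // tangent-space normal map
in mat3 v_TBN;                         // world-space TBN from the vertex shader
in vec2 v_texCoord;
layout(location = 1) out vec4 gNormal; // normal attachment of the G-Buffer
void main() {
    vec3 n = texture(u_normalMap, v_texCoord).xyz * 2.0 - 1.0; // unpack from [0,1]
    n = normalize(v_TBN * n);           // tangent -> world
    gNormal = vec4(n * 0.5 + 0.5, 0.0); // re-pack for storage (one common option)
}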
I can tell you from experience that the big name graphics engines (e.g. Unreal Engine 4, CryEngine 3, etc.) actually do lighting in world-space now. They also employ deferred shading, so for these engines neither option you proposed above is used at all :)
By the way, you are wasting space in your vertex buffer if you are actually storing the normal, binormal and tangent vectors. They are basis vectors for an orthonormal vector space, so they are all at right angles. Because of this, you can compute the third vector given any two by taking the cross product. Furthermore, since they are at right angles and should already be normalized, you do not need to normalize the result of the cross product (recall that |a × b| = |a|·|b|·sin θ). Thus, this should suffice in your vertex shader:
// Normal and tangent should be orthogonal, so the cross-product
// is also normalized - no need to re-normalize.
vec3 binormal = cross (normal, tangent);
TBN = mat3 (tangent, binormal, normal);
This will get you on your way to world-space normals (which tend to be more efficient for a lot of popular post-processing effects these days). You probably will have to re-normalize the matrix if you intend to alter it to produce view-space normals.

Low polygon cone - smooth shading at the tip

If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. [The original post includes a close-up image of the averaged tip normals.]
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling a bit with cones made up of triangles in modern OpenGL (i.e. shaders), but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders as it was used for calculation dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I only transformed the vertex positions (multiplying by the projection and model-view matrices) and the normal vectors (multiplying by the transposed inverse of the model-view matrix). The transformed positions, transformed normal vectors and untransformed colors were then passed to the fragment shader, where I calculated the dot product of the light direction and the normal vector and multiplied this number by the color.
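As a sketch of that vertex shader (the uniform and attribute names are placeholders): the normal transform is the transposed inverse of the model-view 3x3, which in real code you would precompute on the CPU rather than invert per vertex:
uniform mat4 u_modelView;
uniform mat4 u_projection;
in vec3 a_position;
in vec3 a_normal;
in vec3 a_color;
out vec3 v_normal;
out vec3 v_color;
void main() {
    // Transposed inverse keeps normals correct under non-uniform scaling.
    mat3 normalMatrix = transpose(inverse(mat3(u_modelView)));
    v_normal = normalMatrix * a_normal;
    v_color = a_color;
    gl_Position = u_projection * u_modelView * vec4(a_position, 1.0);
}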
Let me start with what I did and found unsatisfactory:
Attempt #1: Each cone face (triangle) used a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting; each face had a constant color because all fragments of the triangle had the same normal vector. Wrong.
Attempt #2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt #1). Well, this was better because the lighting was smooth in the part closer to the base of the cone, but not smooth near the tip. Wrong.
But then I found the solution:
Attempt #3: I did everything as in attempt #2, except that I assigned the zero vector vec3(0.0f, 0.0f, 0.0f) as the normal of the cone-tip vertices. This is the key to the trick! The zero normal vector is passed to the fragment shader, i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices. Of course, you then need to normalize the vector in the fragment (!) shader, because it does not have a constant length of 1 (which I need for the dot product). So I normalize it; of course this is not possible at the very tip of the cone, where the interpolated normal vector has zero length, but it works for all other points. And that's it.
There is one important thing to remember: you can only normalize the normal vector in the fragment shader. You will certainly get an error if you try to normalize a zero-length vector in C++. So if for some reason you need normalization before the fragment shader, make sure you exclude the zero-length normal vectors (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at every point except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader. The only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader, or even earlier.
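A minimal fragment-shader sketch of the trick (variable names are illustrative); the only unusual part is normalizing the interpolated normal here rather than in the vertex shader:
in vec3 v_normal;        // zero vector at the tip vertex, interpolated elsewhere
in vec3 v_color;
uniform vec3 u_lightDir; // normalized direction towards the light (placeholder)
out vec4 fragColor;
void main() {
    // Normalize here, NOT earlier: the interpolated vector only degenerates
    // to zero length at the exact tip of the cone.
    vec3 n = normalize(v_normal);
    float diff = max(dot(n, u_lightDir), 0.0);
    fragColor = vec4(v_color * diff, 1.0);
}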
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear. [The original answer illustrates this with images.]
Here are some things you could try...
Use quads (as @WhitAngl says). To hell with new OpenGL; there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though it looks a bit strange against the unlit side. Unfortunately this goes against your question title, low-polygon cone.
Making sure your cone is centred around the object space origin (or procedurally generating it in the vertex shader), use the fragment position to generate the normal...
in vec2 coneSlope; // x: magnitude of the normal in the x/z plane, y: the normal's y component
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;
void main()
{
    // Radial direction from the cone's axis (assumed to be the y axis).
    vec2 radial = normalize(objectSpaceFragPos.xz);
    vec3 osNormal = vec3(radial.x * coneSlope.x, coneSlope.y, radial.y * coneSlope.x);
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there are some fancy tricks you can do to reduce fragment-shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones. In which case you might get into geometry shaders or instancing. Better yet you could draw the cones using quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.

How to calculate consistent tangent vectors over a mesh surface?

Bump mapping in OpenGL shaders is usually done in tangent space, which has the normal, tangent and binormal as basis vectors.
According to my book, OpenGL Shading Language, it is required that the basis vectors are consistently oriented across the surface of the object for the lighting equations to interpolate correctly. It also specifies that by consistent, it means consistent with respect to the normal-map texture coordinates.
So given the vertex positions, normals and normal map texture coordinates for an arbitrary mesh, how can I calculate consistent tangent vectors?
Calculating tangent and bitangent vectors so that they orient correctly with the texture coordinates and correctly match the normals is actually fairly complicated.
A good code sample I have used in the past is this one:
http://www.terathon.com/code/tangent.html
Crytek also has a presentation on this topic. Their implementation also solves many common problems with tangent space calculation:
http://crytek.com/cryengine/presentations/triangle-mesh-tangent-space-calculation
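For reference, the core of the approach in the Lengyel article linked above is to sum the per-face tangents and bitangents at each vertex, orthogonalize the tangent against the vertex normal, and store the handedness in a fourth component. A sketch in GLSL-style syntax (in practice this runs on the CPU over the mesh; the function name is illustrative):
// sumT/sumB: per-vertex sums of per-face tangents and bitangents; n: vertex normal.
vec4 finalizeTangent(vec3 sumT, vec3 sumB, vec3 n)
{
    // Gram-Schmidt: remove the tangent's component along the normal.
    vec3 t = normalize(sumT - n * dot(n, sumT));
    // Handedness (+1/-1, flips with mirrored UVs), stored in w so the
    // bitangent can be rebuilt in the shader as cross(n, t) * w.
    float w = (dot(cross(n, t), sumB) < 0.0) ? -1.0 : 1.0;
    return vec4(t, w);
}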