As I understand it, a TBN matrix is computed for every triangle of a 3D shape. The calculation takes three vertex coordinates and three pairs of UV coordinates as input and finds the transformation of the UV plane that maps the three UV points onto the three vertices in the object's local space.
Since all of this input information is available in the 3D software after UV unwrapping a model, why aren't these calculations done at the moment the normal map image is created, with the result encoded into it right away?
Why does it need to be done at the shader level?
I am trying to calculate a normal map from the subdivisions of a mesh. There are two meshes: a UV-unwrapped base mesh that contains quads and triangles, and a subdivision mesh that contains only quads.
Suppose I have a quad with all of its vertex coordinates in both object space and UV space (the quad is not flat), the quad's face normal, and a pixel with its position in UV space.
Can I calculate the TBN matrix for the given quad and write colors to the pixel? If so, is the calculation different for quads?
I ask because I couldn't find any examples of calculating a TBN matrix for quads, only for triangles.
Before answering your question, let me start by explaining what the tangents and bitangents that you need actually are.
Let's forget about triangles, quads, or polygons for a minute. We just have a surface (given in whatever representation) and a parameterization in the form of texture coordinates defined at every point on the surface. We can then describe the surface as xyz = s(uv), where uv are 2D texture coordinates and the function s turns these texture coordinates into 3D world positions xyz. Now, the tangent is the direction in which the u-coordinate increases, i.e., it is the derivative of the 3D position with respect to the u-coordinate: T = d s(uv) / du. Similarly, the bitangent is the derivative with respect to the v-coordinate: B = d s(uv) / dv. The normal is a vector that is perpendicular to both of them and usually points outward. Remember that these three vectors are usually different at every point on the surface.
Now let's move to discrete computer graphics, where we approximate our continuous surface s with a polygon mesh. The problem is that there is no way to get the exact tangents and bitangents anymore; we simply lost too much information in the discrete approximation. So there are three common ways to approximate the tangents anyway:
Store the vectors with the model (this is usually not done).
Estimate the vectors at the vertices and interpolate them in the faces.
Calculate the vectors for each face separately. This will give you a discontinuous tangent space, which produces artifacts when the dihedral angle between two neighboring faces is too big. Still, this is apparently what most people are doing. And it is apparently also what you want to do.
Let's focus on the third method. For triangles, this is especially simple because the texture coordinates are interpolated linearly (barycentric interpolation) across the triangle. Hence, the derivatives are all constant (it's just a linear function). This is why you can calculate tangents/bitangents per triangle.
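In code, the per-triangle version looks roughly like the following GLSL-style sketch (the function name and the variable names p0..p2, uv0..uv2 are illustrative, not from the question):

// Per-triangle tangent/bitangent from object-space positions p0..p2 and
// texture coordinates uv0..uv2. Solves
//   e1 = dUV1.x * T + dUV1.y * B
//   e2 = dUV2.x * T + dUV2.y * B
vec3 computeTangent(vec3 p0, vec3 p1, vec3 p2,
                    vec2 uv0, vec2 uv1, vec2 uv2,
                    out vec3 bitangent)
{
    vec3 e1 = p1 - p0;          // triangle edges in object space
    vec3 e2 = p2 - p0;
    vec2 dUV1 = uv1 - uv0;      // triangle edges in UV space
    vec2 dUV2 = uv2 - uv0;

    float invDet = 1.0 / (dUV1.x * dUV2.y - dUV2.x * dUV1.y);
    vec3 tangent = (e1 * dUV2.y - e2 * dUV1.y) * invDet;
    bitangent    = (e2 * dUV1.x - e1 * dUV2.x) * invDet;
    return tangent;
}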
For quads, this is not so simple. First, you must agree on a way to interpolate positions and texture coordinates from the vertices of the quad to its interior. Oftentimes, bilinear interpolation is used. However, this is not a linear interpolation, i.e., the tangents and bitangents will no longer be constant. They are constant only in special cases (if the quad is planar and the quad in UV space is a parallelogram). In general, these assumptions do not hold and you end up with different tangents/bitangents/normals at every point on the quad.
One way to calculate the required derivatives is by introducing an auxiliary coordinate system. Let's define a coordinate system st, where the first corner of the quad has coordinates (0, 0) and the diagonally opposite corner has (1, 1) (the other corners have (0, 1) and (1, 0)). These are actually our interpolation coordinates. Therefore, given an arbitrary interpolation scheme, it is relatively simple to calculate the derivatives d xyz / d st and d uv / d st. The first one will be a 3x2 matrix and the second one will be a 2x2 matrix (these matrices are called Jacobians of the interpolation). Then, given these matrices, you can calculate:
d xyz / d uv = (d xyz / d st) * (d st / d uv) = (d xyz / d st) * (d uv / d st)^-1
This will give you a 3x2 matrix where the first column is the tangent and the second column is the bitangent.
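For illustration, here is a GLSL-style sketch of that calculation under the assumption of bilinear interpolation, with corners ordered so that p00 sits at st = (0, 0), p10 at (1, 0), p11 at (1, 1) and p01 at (0, 1), and the UVs ordered the same way (all names are illustrative; inverse() on a mat2 requires GLSL 1.40 or later):

// Tangent and bitangent at interpolation coordinates st inside a
// bilinearly interpolated quad.
void quadTangents(vec3 p00, vec3 p10, vec3 p11, vec3 p01,
                  vec2 uv00, vec2 uv10, vec2 uv11, vec2 uv01,
                  vec2 st, out vec3 tangent, out vec3 bitangent)
{
    // Jacobian d(xyz)/d(st): derivatives of the bilinear interpolation.
    vec3 dp_ds = mix(p10 - p00, p11 - p01, st.t);
    vec3 dp_dt = mix(p01 - p00, p11 - p10, st.s);
    mat2x3 J_xyz_st = mat2x3(dp_ds, dp_dt);     // 3x2 (two vec3 columns)

    // Jacobian d(uv)/d(st).
    vec2 duv_ds = mix(uv10 - uv00, uv11 - uv01, st.t);
    vec2 duv_dt = mix(uv01 - uv00, uv11 - uv10, st.s);
    mat2 J_uv_st = mat2(duv_ds, duv_dt);        // 2x2

    // d(xyz)/d(uv) = d(xyz)/d(st) * (d(uv)/d(st))^-1
    mat2x3 J_xyz_uv = J_xyz_st * inverse(J_uv_st);
    tangent   = J_xyz_uv[0];                    // d(xyz)/du
    bitangent = J_xyz_uv[1];                    // d(xyz)/dv
}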
I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/length(modelPositionVarying.xyz))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated between them and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either (almost) 1.0 or 0.0, not something in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam where you want the texture coordinate to wrap from 1.0 back to 0.0 requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates gets a texture coordinate of 0.0 and is connected to the vertices coming from the right (in your example). The other gets a texture coordinate of 1.0 and is connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
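A minimal fragment-shader sketch of that alternative might look like this (the varying name and the M_PI constant are assumptions, not from the question):

// Fragment shader: derive spherical texture coordinates from the
// interpolated normal instead of from per-vertex texture coordinates.
const float M_PI = 3.14159265358979;
in vec3 normalVarying;          // assumed varying: interpolated vertex normal

vec2 sphericalUV(vec3 n)
{
    n = normalize(n);           // re-normalize after interpolation
    float u = atan(n.y, n.x) / (2.0 * M_PI) + 0.5;
    float v = acos(n.z) / M_PI;
    return vec2(u, v);
}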
I'm using OpenMesh to handle triangle meshes.
I have done mesh parametrization to set the vertex texcoords, and my whole understanding of vertex texcoords comes from there. If I'm not mistaken, a texcoord should be a value on the vertex that can be changed.
But now I want to calculate the tangent space for every vertex, and all the tutorials talk about the "vertex texcoord" as if it were a fixed property of the vertex.
Here's one tutorial I read, and it says:
If the mesh we are working on doesn't have texcoords, we'll skip the tangent space phase, because it is not possible to create an arbitrary UV map in code; UV maps are design-dependent and change the way the texture is made.
So, what properties should the "texcoord" have when calculating the tangent space?
Thank you!
It's unclear what you're asking exactly, so hopefully this will help your understanding.
The texture coordinates (texcoords) of each vertex are set during the model design phase and are loaded with the mesh. They contain the UV coordinates that the vertex is mapped to within the texture.
The tangent space is formed by the tangent, bitangent and normal (TBN) vectors at each point. The normal is either loaded with the mesh or can be calculated by averaging the normals of the triangles meeting at the vertex. The tangent is the direction in which the U coordinate of the texcoord changes the most, i.e. the partial derivative of the model-space position with respect to U. Similarly, the bitangent is the partial derivative of the position with respect to V. The tangent and bitangent can be calculated per face together with the face normals and then averaged at the vertices, just as the normals are.
For flat faces, the tangent and bitangent are perpendicular to the normal by construction. However, due to the averaging at the vertices, they may not be perpendicular anymore. Also, even for flat faces, the tangent may not be perpendicular to the bitangent (e.g., imagine a skewed checkerboard texture mapping). However, to simplify the inversion of the TBN matrix, it is sometimes approximated with an orthogonal matrix, or even a quaternion. Even though this approximation isn't valid for skew-mapped textures, it may still give plausible results. When orthogonality is assumed, the bitangent can be calculated as a cross product of the tangent and the normal.
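Under that orthogonality assumption, a typical sketch looks like this (the handedness sign w is an assumption about how the data is stored, not something stated above):

// One Gram-Schmidt step: make the averaged tangent perpendicular to the
// normal again, then rebuild the bitangent with a cross product.
// w is a per-vertex handedness sign (+1 or -1).
mat3 buildTBN(vec3 normal, vec3 tangent, float w)
{
    vec3 N = normalize(normal);
    vec3 T = normalize(tangent - dot(tangent, N) * N);
    vec3 B = cross(N, T) * w;
    return mat3(T, B, N);
}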
I'm able to convert my existing height map into a normal map by sampling the surrounding pixels, as in this question: Generating a normal map from a height map? The difference is that I'm doing it on the CPU.
I have a sphere that I want to normal map in object space. How do I apply the above normal map to the normals at my sphere's vertices?
Normal maps do not modify vertex normals. They are used for detail at a finer scale than the vertices can represent.
In your fragment shader, look up the normal at the fragment's texture coordinates and modify the fragment's normal with it.
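For an object-space normal map, that lookup can be as simple as the following sketch (the sampler and varying names are assumptions, and the usual [0, 1] to [-1, 1] decoding is included):

// Fragment shader sketch: object-space normal mapping. The texture is
// assumed to store object-space normals remapped from [-1, 1] to [0, 1].
uniform sampler2D normalMap;    // assumed sampler name
in vec2 texCoord;               // assumed varying name

vec3 fetchObjectSpaceNormal()
{
    vec3 n = texture(normalMap, texCoord).rgb * 2.0 - 1.0;  // decode
    return normalize(n);        // replaces the interpolated vertex normal
}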
As stated before, normal mapping is done per-pixel, so applying it on your sphere's vertices would not work.
In your fragment shader you have to supply/calculate a tangent and bitangent along with the mesh normal vectors.
Then you can use a 3x3 matrix built from the normal, tangent and bitangent vectors, together with the normal vector you read from the normal map, to calculate the new normal vector.
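A rough fragment-shader sketch of that step, assuming a tangent-space normal map and illustrative variable names:

// Fragment shader sketch: tangent-space normal mapping with a TBN matrix.
uniform sampler2D normalMap;    // assumed sampler name
in vec2 texCoord;               // assumed varying names below as well
in vec3 normalVarying;
in vec3 tangentVarying;
in vec3 bitangentVarying;

vec3 perturbedNormal()
{
    mat3 TBN = mat3(normalize(tangentVarying),
                    normalize(bitangentVarying),
                    normalize(normalVarying));
    vec3 n = texture(normalMap, texCoord).rgb * 2.0 - 1.0;  // decode [0,1] -> [-1,1]
    return normalize(TBN * n);  // tangent space -> the space of the TBN vectors
}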
There's a great tutorial on this topic here:
http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html
I was wondering how I could take each sphere vertex stored in my buffer and calculate the normal for each of them separately. Is it possible to do this in OpenGL?
OpenGL is typically used to render meshes, not compute them. And normals are part of a mesh's data. It is usually the responsibility of the builder of the mesh to supply normals.
In the case of a sphere, normals are dead simple to compute exactly. For a given position P on the sphere whose center is C, the normal is norm(P - C), where norm normalizes the vector.
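As a GLSL sketch (function and parameter names are illustrative):

// Normal of a point P on a sphere with center C: norm(P - C).
vec3 sphereNormal(vec3 P, vec3 C)
{
    return normalize(P - C);
}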