Normal mapping, how to transform vectors - OpenGL

In the vertex shader we usually create TBN matrix:
vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);
mat3 tbn = mat3(t, b, n);
This matrix transforms vectors from Tangent space into Eye/Camera space.
Now for normal mapping (done in forward rendering) we have two options:
Option 1: invert the TBN matrix, transform light_vector and view_direction with it, and send those vectors to the fragment shader. After that, those vectors are in Tangent space.
That way, in the fragment shader we only need to read the normal from the normal map. Since such normals are in Tangent space (by "definition"), they match our transformed light_vector and view_direction.
This way we do the lighting calculations in Tangent space.
Option 2: pass the TBN matrix to the fragment shader and transform each normal read from the normal map with it. That way we transform such a normal into View space.
This way we do the lighting calculations in Eye/Camera space.
Option 1 seems to be faster: we do most of the transformations in the vertex shader and only need one read from the normal map in the fragment shader.
Option 2 requires transforming each normal read from the normal map by the TBN matrix, but it seems a bit simpler.
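To make option 1 concrete, here is a minimal sketch of what I mean, using the same legacy built-ins as above (the light position uniform and the attribute names are just placeholders):
uniform vec3 lightPosEye;   // placeholder: light position already in eye space
attribute vec3 Tangent;     // placeholder attribute names
attribute vec3 Bitangent;
varying vec3 lightVecTS;    // tangent-space vectors for the fragment shader
varying vec3 viewDirTS;
void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * Tangent);
    vec3 b = normalize(gl_NormalMatrix * Bitangent);
    vec3 posEye = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightVecEye = lightPosEye - posEye;
    vec3 viewDirEye = -posEye;
    // For an orthonormal TBN the inverse is the transpose, so dotting with
    // t, b, n takes a vector from eye space into tangent space.
    lightVecTS = vec3(dot(lightVecEye, t), dot(lightVecEye, b), dot(lightVecEye, n));
    viewDirTS  = vec3(dot(viewDirEye, t),  dot(viewDirEye, b),  dot(viewDirEye, n));
    gl_Position = ftransform();
}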
Questions:
Which option is better?
Is there any performance loss? (Maybe the texture read will "cover" the cost of the matrix transformation.)
Which option is more often used?

I'll tell you this much right now - depending on your application, option 1 may not even be possible.
In a deferred shading graphics engine, you have to compute the light vector in your fragment shader, which rules out option 1. You cannot keep the TBN matrix around when it comes time to do lighting in deferred shading either, so you transform your normals into world-space (or view-space, which is not favored very often anymore) ahead of time, when you build your normal G-Buffer. The TBN matrix can be computed in the vertex shader and passed to the fragment shader as a flat mat3; you then sample the normal map, rotate the result by that basis, and write it out in world space.
I can tell you from experience that the big name graphics engines (e.g. Unreal Engine 4, CryEngine 3, etc.) actually do lighting in world-space now. They also employ deferred shading, so for these engines neither option you proposed above is used at all :)
By the way, you are wasting space in your vertex buffer if you are actually storing the normal, binormal and tangent vectors. They are basis vectors for an orthonormal vector space, so they are all at right angles. Because of this, you can compute the third vector from any two by taking their cross product. Furthermore, since they are at right angles and should already be normalized, you do not need to normalize the result of the cross product (recall that |a x b| = |a| * |b| * sin(a,b)). Thus, this should suffice in your vertex shader:
// Normal and tangent should be orthogonal, so the cross-product
// is also normalized - no need to re-normalize.
vec3 binormal = cross (normal, tangent);
TBN = mat3 (tangent, binormal, normal);
This will get you on your way to world-space normals (which tend to be more efficient for a lot of popular post-processing effects these days). You probably will have to re-normalize the matrix if you intend to alter it to produce view-space normals.
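For illustration, here is a rough sketch of the G-Buffer normal write described above; the uniform and input names are illustrative only:
#version 330
uniform sampler2D normalMap;
in vec2 uv;
in mat3 TBN;        // world-space tangent, binormal, normal built in the vertex shader
out vec4 gbufNormal;
void main()
{
    // Unpack from [0,1] to [-1,1], then rotate into world space with the basis.
    vec3 n = texture(normalMap, uv).xyz * 2.0 - 1.0;
    vec3 worldNormal = normalize(TBN * n);
    // Re-pack so the normal fits an unsigned color target.
    gbufNormal = vec4(worldNormal * 0.5 + 0.5, 1.0);
}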

Set projection mode in OpenGL 3?

In old OpenGL versions spatial projection (where objects become smaller with growing distance) could be enabled via a call to
glMatrixMode(GL_PROJECTION);
(at least as far as I remember this was the call to enable this mode).
In OpenGL 3 - when this slow stack-mode is no longer used - this function does not work any more.
So how can I have the same spatial effect here? What is the intended way for this?
You've completely misunderstood what glMatrixMode actually did. The old, legacy OpenGL fixed function pipeline kept a set of matrices around, which were all used indiscriminately when drawing stuff. The two most important matrices were:
the modelview matrix, which is used to describe the transformation from model local space into view space. View space is still kind of abstract, but it can be understood as the world transformed into the coordinate space of the "camera". Illumination calculations happened in that space.
the projection matrix, which is used to describe the transformation from view space into clip space. Clip space is an intermediary stage right before reaching device coordinates (there are a few important details involved in this, but those are not important right now), which mostly involves applying the homogeneous divide, i.e. scaling the clip coordinate vector by the reciprocal of its w component.
The fixed transformation pipeline always was
position_view := Modelview · position
do illumination calculations with position_view
position_clip := Projection · position_view
position_pre_ndc := position_clip · 1/position_clip.w
In legacy OpenGL the modelview and projection matrices are always there. glMatrixMode is a selector that determines which of the existing matrices is subject to the operations done by the matrix manipulation functions. One of these functions is glFrustum, which generates a perspective matrix, i.e. a matrix that creates a perspective effect through the homogeneous divide, and multiplies it onto the selected matrix.
So how can I have the same spatial effect here? What is the intended way for this?
You generate a perspective matrix with the desired properties and use it to transform the vertex attribute you designate as the model local position into clip space, writing the result to the gl_Position output of the vertex shader. The usual way to do this is by passing in a modelview and a projection matrix as uniforms.
The most bare bones GLSL vertex shader doing that would be
#version 330
uniform mat4 modelview;
uniform mat4 projection;
in vec4 pos;
void main(){
    gl_Position = projection * modelview * pos;
}
As for generating the projection matrix: All the popular computer graphics math libraries got you covered and have functions for that.
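Just for illustration, this is essentially the matrix a gluPerspective-style helper produces, written here as a GLSL function (normally you would build it on the CPU with your math library and upload it as the projection uniform):
mat4 perspective(float fovyRadians, float aspect, float zNear, float zFar)
{
    float f = 1.0 / tan(0.5 * fovyRadians);
    // GLSL mat4 constructors are column-major, so each group of four is a column.
    return mat4(f / aspect, 0.0, 0.0,                                    0.0,
                0.0,        f,   0.0,                                    0.0,
                0.0,        0.0, (zFar + zNear) / (zNear - zFar),       -1.0,
                0.0,        0.0, (2.0 * zFar * zNear) / (zNear - zFar),  0.0);
}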

What property the "vertex texcoord" should have when calculating tangent Space

I'm using OpenMesh to handle triangle meshes.
I have done mesh parametrization to set the vertex texcoord, and my whole understanding of vertex texcoords comes from there. It should be a changeable value per vertex, if I'm not getting it wrong.
But now I want to calculate the tangent space for every vertex, and all tutorials talk about the "vertex texcoord" like it's a fixed property of the vertex.
Here's one tutorial I read, and it says:
If the mesh we are working on doesn't have texcoords we'll skip the Tangent Space phase, because it is not possible to create an arbitrary UV map in code; UV maps are design dependent and change the way the texture is made.
So, what property should the "texcoord" have when calculating tangent space?
Thank you!
It's unclear what you're asking exactly, so hopefully this will help your understanding.
The texture coordinates (texcoord) of each vertex are set during the model design phase and are loaded with the mesh. They contain the UV coordinates that the vertex is mapped to within the texture.
The tangent space is formed by the tangent, bitangent and normal (TBN) vectors at each point. The normal is either loaded with the mesh or can be calculated by averaging the normals of the triangles meeting at the vertex. The tangent is the direction in which the U coordinate of the texcoord changes the most, i.e. the partial derivative of the model-space position with respect to U. Similarly, the bitangent is the partial derivative of the position with respect to V. Tangent and bitangent can be calculated per face and then averaged at the vertices, just like the normals.
For flat faces, tangent and bitangent are perpendicular to the normal by construction. However, due to the averaging at the vertices they may not be perpendicular anymore. Also even for flat faces, the tangent may not be perpendicular to the bitangent (e.g. imagine a skewed checker board texture mapping). However, to simplify the inversion of the TBN matrix, it is sometimes approximated with an orthogonal matrix, or even a quaternion. Even though this approximation isn't valid for skewed-mapped textures, it may still give plausible results. When orthogonality is assumed, the bitangent can be calculated as a cross product between the tangent and the normal.
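To make the partial derivatives concrete, here is the standard per-triangle derivation, written with GLSL types purely for illustration (it is normally done on the CPU while building the mesh, and the function name is made up):
// p0..p2 are model-space positions, uv0..uv2 the matching texcoords.
void faceTangentBasis(vec3 p0, vec3 p1, vec3 p2,
                      vec2 uv0, vec2 uv1, vec2 uv2,
                      out vec3 tangent, out vec3 bitangent)
{
    vec3 e1 = p1 - p0;
    vec3 e2 = p2 - p0;
    vec2 dUV1 = uv1 - uv0;
    vec2 dUV2 = uv2 - uv0;
    float r = 1.0 / (dUV1.x * dUV2.y - dUV1.y * dUV2.x);
    // dPosition/dU and dPosition/dV of the face, i.e. its tangent and bitangent.
    tangent   = (e1 * dUV2.y - e2 * dUV1.y) * r;
    bitangent = (e2 * dUV1.x - e1 * dUV2.x) * r;
}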

Using a tangent-space normal map with world-space normals

I'm working on deferred shading in GLSL. I have all of my model normals in world space, and the positions as well. The whole thing is working exactly as it should with just the world-space normals of the polygons, but I'm unsure of how to add normal mapping. I know it's recommended to do lighting in view-space, and I've tried that but I couldn't get even a quarter as far with that as I have with world-space. All I have left to fix now is adding normals. I have a regular normal map, which is in tangent-space. I've tried the following:
Multiplying the world-space normal by the normal-map didn't seem to work.
I tried adding the normal-map's normals, even though I knew that wouldn't work.
At this point I'm not really sure what else to try. Is there any easy way to do this?
Normal-rendering frag shader:
#ifdef NORMALMAP
uniform sampler2D m_Normals;
#endif
varying vec3 normal;
varying vec2 texCoord;
void main() {
    vec3 normals;
#ifdef NORMALMAP
    normals = normalize(normal * (texture(m_Normals, texCoord).xyz));
#else
    normals = normal;
#endif
    gl_FragColor = vec4(normals, 1.0);
}
The variable "normal" is the world-space normal passed in from the vertex shader.
My light positions are all in normal space.
I do not know what "normal space" means here, to be honest. There are "normal spaces" in mathematics, but that is quite unrelated, and there is no standard coordinate space referred to as "normal space."
My question is how to convert tangent to world.
Your polygon's world-space normal is inadequate to solve this problem (in fact, you need a per-vertex normal). To transform from tangent-space to object-space you need three things:
your object-space normal vector
a vector representing the change in texture s coordinates
a vector representing the change in texture t coordinates
Those three things together form the tangent-space -> object-space change of basis matrix (usually referred to as TBN because the vectors are called Tangent, Bitangent and Normal).
However, you do not want object-space normals in this question, so you need to multiply that matrix by your model matrix so that it skips over object space and goes straight to world space. Likewise, if your normal vector is already in world space, you need to transform it back to object space. It is possible that object space and world space are the same in this case, particularly since we are only dealing with a 3x3 matrix here (no translation), but I cannot be sure from the description you have given.
One last thing to note: in this case you want the inverse of the traditional TBN matrix. Traditionally it is used to transform vectors into tangent space, but you want to transform them out. The inverse of a rotation matrix is the same as its transpose, so all you really have to do is put those 3 vectors across your rows instead of columns [1].
[1] That assumes that your T, B and N vectors are at right angles. That is not necessarily the case, since they are derived from your texture coordinates. It all comes down to what you use to compute your Tangent and Bitangent vectors; a lot of software will re-orthogonalize them for you.
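As a minimal sketch of what that looks like in the fragment shader, assuming the application also supplies world-space tangent and bitangent varyings (those names are not from your code):
#ifdef NORMALMAP
uniform sampler2D m_Normals;
#endif
varying vec3 normal;     // world-space normal
varying vec3 tangent;    // assumed: world-space tangent
varying vec3 bitangent;  // assumed: world-space bitangent
varying vec2 texCoord;
void main() {
    vec3 n = normalize(normal);
#ifdef NORMALMAP
    // Columns of this matrix take tangent-space vectors into world space.
    mat3 TBN = mat3(normalize(tangent), normalize(bitangent), n);
    // Unpack the stored normal from [0,1] to [-1,1] before rotating it.
    n = normalize(TBN * (texture2D(m_Normals, texCoord).xyz * 2.0 - 1.0));
#endif
    gl_FragColor = vec4(n, 1.0);
}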

Light computation (shading) in model, tangent or camera space?

I'm currently trying to implement bump mapping, which requires having a "tangent space". I read through some tutorials, specifically the following two:
http://www.terathon.com/code/tangent.html
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
Both tutorials avoid expensive matrix computations in the fragment shader, which would be required if the shading computation happened in camera space as usual (as I'm used to, at least).
They introduce the tangent space, which might be different per vertex (or even per fragment if the surface is smoothed). If I understand it correctly, for efficient bump mapping (i.e. to minimize computations in the fragment shader), they convert everything needed for the light computation into this tangent space in the vertex shader. But I wonder if model space is a good alternative space to compute the light shading in.
My questions regarding this topic are:
For shading computation in tangent space, what exactly do I pass between the vertex and fragment shaders? Do I really need to convert light positions into tangent space, requiring O(number of lights) varying variables? This will, for example, not work for deferred shading, or if the light positions aren't known in the vertex shader for some other reason. There has to be a (still efficient) alternative, which I guess is shading computation in model space.
If I pass model space varyings, is it a good idea to still perform shading computations in tangent space, i.e. convert light positions in fragment shader? Or is it better to perform shading computations in model space? Which will be faster? (In both cases I need a TBN matrix, but one case requires a model-to-tangent transform, the other a tangent-to-model transform.)
I currently pass per-vertex normal, tangent and bitangent (orthonormal) to the vertex shader. If I understand it correctly, the orthonormalization is only required if I want to quickly build a model-to-tangent space matrix, which requires inverting a matrix containing the TBN vectors. If they are orthogonal, this is simply a transposition. But if I don't need vectors in tangent space, I don't need an inversion, just the original TBN vectors in a matrix, which is then the tangent-to-model matrix. Wouldn't this simplify everything?
Normal mapping is usually done in tangent space because the normal maps are given in this space. So if you pre-transform the (relatively little) input data to tangent space in the vertex shader, you don't need extra computation in the fragment shader. That requires that all input data is available, of course. I haven't done bump mapping with deferred shading, but using the model space seems to be a good idea. World space would probably be even better, because you'll need world space vectors in the end to render to the G-buffers.
If you pass model space vectors, I would recommend to perform the calculations in this space. Then the fragment shader would have to transform one normal from tangent space to model space. In the other case it would have to transform n light attributes from model space to tangent space, which should take n times longer.
If you don't need the inverse TBN matrix, a non-orthonormal coordinate system should be fine. At least I don't see any reason why it should not be.
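A minimal fragment-shader sketch of that model-space variant, shown for a single light and with assumed variable names:
uniform sampler2D normalMap;
varying vec2 uv;
varying vec3 lightDirModel;   // direction towards the light, in model space
varying vec3 tangentModel;    // per-vertex TBN vectors, in model space
varying vec3 bitangentModel;
varying vec3 normalModel;
void main()
{
    // Tangent-to-model matrix; since it is never inverted here,
    // the basis does not have to be orthonormal.
    mat3 TBN = mat3(tangentModel, bitangentModel, normalModel);
    vec3 n = normalize(TBN * (texture2D(normalMap, uv).xyz * 2.0 - 1.0));
    float diffuse = max(dot(n, normalize(lightDirModel)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}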

How to transform back-facing vertices in GLSL when creating a shadow volume

I'm writing a game using OpenGL and I am trying to implement shadow volumes.
I want to construct the shadow volume of a model on the GPU via a vertex shader. To that end, I represent the model with a VBO where:
Vertices are duplicated such that each triangle has its own unique three vertices
Each vertex has the normal of its triangle
For reasons I'm not going to get into, I was actually doing the above two points anyway, so I'm not too worried about the vertex duplication
Degenerate triangles are added to form quads inside the edges between each pair of "regular" triangles
Using this model format, inside the vertex shader I am able to find vertices that are part of triangles that face away from the light and move them back to form the shadow volume.
What I have left to figure out is what transformation exactly I should apply to the back-facing vertices.
I am able to detect when a vertex is facing away from the light, but I am unsure what transformation I should apply to it. This is what my vertex shader looks like so far:
uniform vec3 lightDir; // Parallel light.
// On the CPU this is represented in world
// space. After setting the camera with
// gluLookAt, the light vector is multiplied by
// the inverse of the modelview matrix to get
// it into eye space (I think that's what I'm
// working in :P ) before getting passed to
// this shader.
void main()
{
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 realLightDir = normalize(lightDir);
    float dotprod = dot(eyeNormal, realLightDir);
    if (dotprod <= 0.0)
    {
        // Facing away from the light
        // Need to translate the vertex along the light vector to
        // stretch the model into a shadow volume
        //---------------------------------//
        // This is where I'm getting stuck //
        //---------------------------------//
        // All I know is that I'll probably turn realLightDir into a
        // vec4
        gl_Position = ???;
    }
    else
    {
        gl_Position = ftransform();
    }
}
I've tried simply setting gl_Position to ftransform() - (vec4(realLightDir, 1.0) * someConstant), but this caused some kind of depth-testing bugs (some faces seemed to be visible behind others when I rendered the volume with colour), and someConstant didn't seem to affect how far the back-faces are extended.
Update - Jan. 22
Just wanted to clear up questions about what space I'm probably in. I must say that keeping track of what space I'm in is the greatest source of my shader headaches.
When rendering the scene, I first set up the camera using gluLookAt. The camera may be fixed or it may move around; it should not matter. I then use translation functions like glTranslated to position my model(s).
In the program (i.e. on the CPU) I represent the light vector in world space (three floats). I've found during development that to get this light vector in the right space of my shader I had to multiply it by the inverse of the modelview matrix after setting the camera and before positioning the models. So, my program code is like this:
Position camera (gluLookAt)
Take light vector, which is in world space, and multiply it by the inverse of the current modelview matrix and pass it to the shader
Transformations to position models
Drawing of models
Does this make anything clearer?
The ftransform result is in clip space, so this is not the space you want to apply realLightDir in. I'm not sure which space your light is in (your comment confuses me), but what is sure is that you want to add vectors that are in the same space.
On the CPU this is represented in world
space. After setting the camera with
gluLookAt, the light vector is multiplied by
the inverse of the modelview matrix to get
it into eye space (I think that's what I'm
working in :P ) before getting passed to
this shader.
Multiplying a vector by the inverse of the modelview matrix brings the vector from view space to model space. So you're saying your light vector (in world space) gets a transform applied that does view -> model. That makes little sense to me.
We have 4 spaces:
model space: the space where your gl_Vertex is defined in.
world space: a space that GL does not care about in general, that represents an arbitrary space to locate the models in. It's usually what the 3d engine works in (it maps to our general understanding of world coordinates).
view space: a space that corresponds to the referential of the viewer. 0,0,0 is where the viewer is, looking down Z. Obtained by multiplying gl_Vertex by the modelview matrix.
clip space: the magic space that the projection matrix brings us into. The result of ftransform is in this space (as is gl_ModelViewProjectionMatrix * gl_Vertex).
Can you clarify exactly which space your light direction is in ?
What you need to do, however, is perform the light vector addition in either model, world or view space: bring all the parts of your operation into the same space. E.g. for model space, just compute the light direction in model space on the CPU, and do:
vec4 vertexTemp = gl_Vertex + vec4(lightDirInModelSpace, 0.0) * someConst;
Then you can bring that new vertex position into clip space with
gl_Position = gl_ModelViewProjectionMatrix * vertexTemp;
Last bit: don't try to apply vector additions in clip space. It won't generally do what you think it should do, as at that point you are necessarily dealing with homogeneous coordinates with a non-uniform w.
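Putting that together, a minimal vertex-shader sketch with the extrusion done in model space (the uniform names and the sign convention of lightDirInModelSpace are assumptions):
uniform vec3 lightDirInModelSpace;  // assumed: direction the light rays travel, in model space
uniform float extrudeDistance;      // assumed: how far to push back-facing vertices
void main()
{
    // Back-facing with respect to the light when the stored face normal
    // points along the direction the light travels.
    if (dot(gl_Normal, lightDirInModelSpace) >= 0.0)
    {
        vec4 extruded = gl_Vertex + vec4(lightDirInModelSpace, 0.0) * extrudeDistance;
        gl_Position = gl_ModelViewProjectionMatrix * extruded;
    }
    else
    {
        gl_Position = ftransform();
    }
}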