Normal mapping and keeping the tangents - OpenGL

Let's say you have a 3D mesh with a normal map provided. The mesh also has tangents, bitangents and normals.
From the tangents, bitangents and normals you can build the TBN matrix, which transforms from tangent space to world space. That way, to get the real normal you just have to do something like this:
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 realNormal = TBN * normalFromTheNormalMap;
However, how do you get the real tangent and bitangent from this system?

You have to orthogonalize the vectors. A common way to do this is Gram–Schmidt orthonormalization.
This algorithm uses the fact that the dot product of 2 vectors is equal to the cosine of the angle between the 2 vectors multiplied by the magnitude (length) of both vectors:
dot( N, T ) == length( N ) * length( T ) * cos( angle_N_T )
It follows that the dot product of 2 unit vectors (normalized vectors) is equal to the cosine of the angle between the 2 vectors, because the length of a unit vector is 1.
uN = normalize( N )
uT = normalize( T )
cos( angle_T_N ) == dot( uT, uN )
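As a small GLSL illustration of that relationship (the vectors here are made-up example values):
vec3 T = vec3( 1.0, 0.0, 0.0 );                     // example tangent
vec3 N = normalize( vec3( 0.2, 0.0, 1.0 ) );        // example normal
float cosAngle = dot( normalize( T ), N );          // cosine of the angle between them
float angle    = acos( clamp( cosAngle, -1.0, 1.0 ) ); // angle in radians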
If realNormal is a normalized vector (its length is 1), and tangent and binormal are not parallel to it, then the realTangent and the realBinormal can be calculated like this:
realTangent = normalize( tangent - realNormal * dot(tangent, realNormal) );
realBinormal = binormal - realNormal * dot(binormal, realNormal);
realBinormal = normalize( realBinormal - realTangent * dot(realBinormal, realTangent) );
Since normalize(v) is just v divided by its length, and the length is sqrt(dot(v, v)), the normalize calls can also be written out explicitly with dot and inversesqrt:
realTangent = tangent - realNormal * dot(tangent, realNormal);
realTangent *= inversesqrt(dot(realTangent, realTangent));
realBinormal = binormal - realNormal * dot(binormal, realNormal);
realBinormal = realBinormal - realTangent * dot(realBinormal, realTangent);
realBinormal *= inversesqrt(dot(realBinormal, realBinormal));
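Putting it together, here is a minimal vertex-shader sketch that orthonormalizes the tangent and bitangent attributes against the normal before building the TBN matrix. The attribute and uniform names (inPosition, model, viewProjection, ...) are assumptions for this example, not part of the original question:
#version 330 core
in vec3 inPosition;   // assumed attribute names
in vec3 inNormal;
in vec3 inTangent;
in vec3 inBitangent;
uniform mat4 model;           // assumed uniforms
uniform mat4 viewProjection;
out mat3 TBN;
void main()
{
    // transform to world space (assuming no non-uniform scale in model)
    vec3 N = normalize( mat3( model ) * inNormal );
    vec3 T = normalize( mat3( model ) * inTangent );
    vec3 B = normalize( mat3( model ) * inBitangent );
    // Gram-Schmidt: make T and B orthogonal to N (and to each other)
    T = normalize( T - N * dot( T, N ) );
    B = normalize( B - N * dot( B, N ) - T * dot( B, T ) );
    TBN = mat3( T, B, N );
    gl_Position = viewProjection * model * vec4( inPosition, 1.0 );
}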
See further How to calculate Tangent and Binormal?.

Related

Verification of transformation matrix usage in a vertex shader. Correctness of normals transformation

I need to be able to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
vec3 normal, lightDir;
vec4 diffuse, ambient, globalAmbient;
float NdotL;
// Transformation part
normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
// Calculate color
lightDir = normalize(vec3(gl_LightSource[0].position));
NdotL = max(abs(dot(normal, lightDir)), 0.0);
diffuse = gl_Color * gl_LightSource[0].diffuse;
ambient = gl_Color * gl_LightSource[0].ambient;
globalAmbient = gl_LightModel.ambient * gl_Color;
gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all the transformations in the two lines after the // Transformation part comment. Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse transpose of the upper left 3*3 of the 4*4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But if a row vector is multiplied by a matrix from the left, the result is the same as multiplying the transposed matrix by the corresponding column vector from the right.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4*4 matrix transformationMatrix is an orthogonal matrix, which means its X, Y and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper left 3*3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note that this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97) and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal
m1 * (v * m2);
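A quick GLSL sketch that demonstrates the equivalence used above (the values are arbitrary examples):
mat3 m = mat3( 1.0, 2.0, 3.0,
               4.0, 5.0, 6.0,
               7.0, 8.0, 9.0 );  // example matrix, columns (1,2,3), (4,5,6), (7,8,9)
vec3 v = vec3( 1.0, 0.0, 2.0 );
vec3 a = v * m;                  // row vector times matrix
vec3 b = transpose( m ) * v;     // transposed matrix times column vector
// a and b are identical (7, 16, 25), while m * v is (15, 18, 21)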
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at this.

Volume rendering from inside volume

We've been doing lots of work trying to volume render 3D cloud fields in WebGL. The approach we've taken so far is outlined here - the start position of each ray is the current position in the front face of the volume cube, and the end position is calculated from a previous pass, which encodes the xyz values as a back-face texture.
How can we extend this to work when the camera is inside the volume? Do we need to create smaller volume cubes on the fly? Can we just change the shader to start marching from the camera instead of the front face, and project onto the back of the cube?
We're not really sure where to start with this!
Thanks in advance
Render only a single pass.
In that pass you render the back faces only. The camera position needs to be translated from world coordinates into the coordinate system built from the three axes of the volume box you render, including their sizes. The goal is to create a 4x4 matrix where the first three column vectors have the form vec4(..., 0), and their x, y, z components are the x-, y- and z-axis directions of the volume box, scaled by the box's extent along each axis. If the box is parallel to the x axis, that vector is (1,0,0); if it is stretched to (2,0,0), then that is its own x axis and it becomes column 0 of the matrix. Do the same for the y and z axes with their lengths. The last column vector of the matrix is the position of the box as vec4(tx,ty,tz,1). This matrix then defines a coordinate system, and you use it to transform the camera position into the uniform (0,0,0)-(1,1,1) box of the volume.
Create the inverse of that volume box matrix and multiply the camera position as vec4(campos, 1) from the right side onto invVolMatrix. Send the resulting vec3 as a uniform to the shader.
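For illustration, a hedged sketch of that localisation step written in GLSL syntax (it is normally done on the CPU; the variable names and box dimensions are assumptions for this example):
vec3 boxAxisX  = vec3( 2.0, 0.0, 0.0 );   // box x axis scaled by its size
vec3 boxAxisY  = vec3( 0.0, 1.0, 0.0 );
vec3 boxAxisZ  = vec3( 0.0, 0.0, 1.0 );
vec3 boxOrigin = vec3( -1.0, 0.0, 0.0 );  // world position of the box corner (0,0,0)
mat4 volMatrix = mat4( vec4( boxAxisX,  0.0 ),
                       vec4( boxAxisY,  0.0 ),
                       vec4( boxAxisZ,  0.0 ),
                       vec4( boxOrigin, 1.0 ) );
mat4 invVolMatrix = inverse( volMatrix );
vec3 localCamPos  = ( invVolMatrix * vec4( worldCamPos, 1.0 ) ).xyz; // send as uniform LOCAL_CAM_POS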
Render only the back faces with (0,0,0) to (1,1,1) coordinates on their respective volume box corners - as you already did. Now you have in your shader:
the uniform camera position (already localised to the box)
the interpolated back-face volume texture coordinate
the knowledge that your volume box is a unit cube in a local coordinate system with its diagonal from (0,0,0) to (1,1,1)
In the shader do:
varying vec3 vLocalUnitTexCoord; // backface interpolated coordinate
uniform vec3 LOCAL_CAM_POS; // localised camPos
struct AABB {
vec3 min; // (0,0,0)
vec3 max; // (1,1,1)
};
struct Ray {
vec3 origin; vec3 dir;
};
float getUnitAABBEntry( in Ray r ) {
AABB b;
b.min = vec3( 0 );
b.max = vec3( 1 );
// compute clipping for box.min and box.max corner
vec3 rInvDir = vec3( 1.0 ) / r.dir;
vec3 tMinima = ( b.min - r.origin ) * rInvDir;
vec3 tMaxima = ( b.max - r.origin ) * rInvDir;
// sort for nearest corner
vec3 tEntries = min( tMinima, tMaxima );
// find first real entry value of 3 t-distance values in vec3 container
vec2 tMaxEntryCandidates = max( vec2( tEntries.st ), vec2( tEntries.pp ) );
float tMaxEntry = max( tMaxEntryCandidates.s, tMaxEntryCandidates.t );
return tMaxEntry;
}
vec3 getCloserPos( in vec3 camera, in vec3 frontFaceIntersection, in float t ) {
// t > 0: camera is outside the box, start at the front-face entry point;
// t <= 0: camera is inside the box, start marching from the camera itself
float useFrontCoord = 0.5 + 0.5 * sign( t );
vec3 startPos = mix( camera, frontFaceIntersection, useFrontCoord );
return startPos;
}
void main(void)
{
Ray r;
r.origin = LOCAL_CAM_POS;
r.dir = normalize( vLocalUnitTexCoord - LOCAL_CAM_POS );
float t = getUnitAABBEntry( r );
vec3 frontFaceLocalUnitTexCoord = r.origin + r.dir * t;
vec3 startPos = getCloserPos( LOCAL_CAM_POS, frontFaceLocalUnitTexCoord, t );
// loop for integration follows here
vec3 start = startPos;
vec3 end = vLocalUnitTexCoord;
...for loop..etc...
}
Happy coding!

GLSL can still see triangles after normal mapping

I was under the assumption that normal mapping should eliminate the visibility of triangles on a mesh, as lighting will be calculated based on unique normals per fragment instead of per vertex. As you can see in the image below, the normal map is definitely working but triangles are still visible. Is this an error?
I compute tangents as follows :
vec3 vert1( vertices[a+1] - vertices[a] );
vec3 vert2( vertices[a+2] - vertices[a] );
vec2 uv1( uvs[a+1] - uvs[a] );
vec2 uv2( uvs[a+2] - uvs[a] );
float r = (uv1.x * uv2.y) - (uv1.y * uv2.x);
vec3 tangent = (vert1 * uv2.y - vert2 * uv1.y) * r;
Vertex Shader :
mat3 TBN_MATRIX;
TBN_MATRIX[0] = (MODEL_MATRIX * vec4( tangent,0 )).xyz;
TBN_MATRIX[2] = (MODEL_MATRIX * vec4( normal,0 )).xyz;
TBN_MATRIX[1] = cross( TBN_MATRIX[2], TBN_MATRIX[0] );
Fragment Shader :
fragment_normal = normalize( TBN_MATRIX * vec3(( 2 * texture( normal_map, uv_coordinates ).rgb ) - 1.0 ) );
My first thought is that a cross product is somehow not enough for the bitangent?

OpenGL, target spot-light "following me around the room"!

I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
// Compute vertex normal in eye space.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position in eye space.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute spot-light cone direction vector.
attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);
// Compute vector from eye to vertex.
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
// Return position.
gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at of course). Both position and lookAt are already in eye space. I computed eye space CPU-side by subtracting the camera position from them both.
In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform the vector as well and if so by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
// Compute N dot L.
vec3 N = normalize(attrib_Fragment_Normal);
vec3 L = normalize(attrib_Fragment_Light);
vec3 E = normalize(attrib_Fragment_Eye);
vec3 H = normalize(L + E);
float NdotL = clamp(dot(L,N), 0.0, 1.0);
float NdotH = clamp(dot(N,H), 0.0, 1.0);
// Compute ambient term.
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;
// Diffuse.
vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
// Specular.
float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;
// Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
float d = length(-attrib_Fragment_Light);
float attenuation = smoothstep( Light_Attenuation_Max,
Light_Attenuation_Min,
d);
// Adjust attenuation based on light cone.
vec3 S = normalize(attrib_Fragment_Light_Direction);
float LdotS = dot(-L, S);
float CosI = Light_Cone_Min - Light_Cone_Max;
attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);
// Final colour.
Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye-space CPU-side. So no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, light is in eye space, vertex is transformed into eye space, lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks here like you're subtracting Light_Position, which I assume you want to be a world space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable with the Model matrix, not the ModelView matrix, and using this vector as the basis for your light computation.
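A minimal sketch of that world-space approach; the Model, ViewProjection and Light_Position_World uniforms are assumptions for this example, not names from the original shader:
uniform mat4 Model;                  // model-to-world matrix (assumed uniform)
uniform mat4 ViewProjection;         // world-to-clip matrix (assumed uniform)
uniform vec3 Light_Position_World;   // light position in world space (assumed uniform)
attribute vec3 attrib_Position;
varying vec3 attrib_Fragment_Light;
void main()
{
    vec4 worldPosition = Model * vec4(attrib_Position, 1.0);
    // both operands are world-space vectors, so the subtraction is valid
    attrib_Fragment_Light = Light_Position_World - worldPosition.xyz;
    gl_Position = ViewProjection * worldPosition;
}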
You can't compute the eye-space position by just subtracting the camera position; you have to multiply by the modelview matrix.
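Alternatively, if the lighting stays in eye space, the light position must be transformed as a point (w = 1) and the cone direction as a direction (w = 0), so both end up in the same space as the eye-space vertex. A hedged vertex-shader sketch of that idea, where View, Light_Position_World and Light_LookAt_World are assumed uniforms:
uniform mat4 ModelView;              // model-to-eye matrix
uniform mat4 Projection;
uniform mat4 View;                   // world-to-eye matrix (assumed uniform)
uniform vec3 Light_Position_World;   // assumed world-space light inputs
uniform vec3 Light_LookAt_World;
attribute vec3 attrib_Position;
varying vec3 attrib_Fragment_Light;
varying vec3 attrib_Fragment_Light_Direction;
void main()
{
    vec4 eyePosition = ModelView * vec4(attrib_Position, 1.0);
    // transform the light position as a point (w = 1)
    vec3 eyeLightPos = (View * vec4(Light_Position_World, 1.0)).xyz;
    // transform the cone direction as a direction (w = 0)
    vec3 eyeLightDir = normalize((View * vec4(Light_LookAt_World - Light_Position_World, 0.0)).xyz);
    attrib_Fragment_Light = eyeLightPos - eyePosition.xyz;
    attrib_Fragment_Light_Direction = eyeLightDir;
    gl_Position = Projection * eyePosition;
}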

GLSL - Calculate Surface Normal

I have a simple vertex shader, written in GLSL, and I was wondering if someone could aid me in calculating the normals for the surface. I am 'upgrading' a flat surface, so the current light model looks... weird. Here is my current code:
varying vec4 oColor;
varying vec3 oEyeNormal;
varying vec4 oEyePosition;
uniform float Amplitude; // Amplitude of sine wave
uniform float Phase; // Phase of sine wave
uniform float Frequency; // Frequency of sine wave
varying float sinValue;
void main()
{
vec4 thisPos = gl_Vertex;
thisPos.z = sin( ( thisPos.x + Phase ) * Frequency) * Amplitude;
// Transform normal and position to eye space (for fragment shader)
oEyeNormal = normalize( vec3( gl_NormalMatrix * gl_Normal ) );
oEyePosition = gl_ModelViewMatrix * thisPos;
// Transform vertex to clip space for fragment shader
gl_Position = gl_ModelViewProjectionMatrix * thisPos;
sinValue = thisPos.z;
}
Does anyone have any ideas?
OK, let's just take this from the differential geometry perspective. You have a parametric surface with parameters s and t:
X(s,t) = ( s, t, A*sin((s+P)*F) )
So we first compute the tangents of this surface, being the partial derivatives after our two parameters:
Xs(s,t) = ( 1, 0, A*F*cos((s+P)*F) )
Xt(s,t) = ( 0, 1, 0 )
Then we just need to compute the cross product of these to get the normal:
N = Xs x Xt = ( -A*F*cos((s+P)*F), 0, 1 )
So your normal can be computed completely analytically; you don't actually need the gl_Normal attribute:
float angle = (thisPos.x + Phase) * Frequency;
thisPos.z = sin(angle) * Amplitude;
vec3 normal = normalize(vec3(-Amplitude*Frequency*cos(angle), 0.0, 1.0));
// Transform normal and position to eye space (for fragment shader)
oEyeNormal = normalize( gl_NormalMatrix * normal );
The normalization of normal might not be necessary (since we normalize the transformed normal anyway), but right at the moment I'm not sure if an unnormalized normal would behave correctly in the presence of non-uniform scaling. Of course, if you want the normal to point into the negative z-direction you need to negate it.
Actually, the detour over a parametric surface in space wasn't necessary. We can also just work with the sine curve inside the x-z plane, since the y component of the normal is zero anyway, as only z depends on x. We take the tangent to the curve z = A*sin((x+P)*F), whose slope is the derivative of z, giving the x-z vector (1, A*F*cos((x+P)*F)); the normal to this is then just (-A*F*cos((x+P)*F), 1) (swap the coordinates and negate one), which gives the x and z components of the (unnormalized) normal. No 3D vectors and partial derivatives, but the outcome is the same.
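Expressed in GLSL, that 2D reasoning yields the same normal as the code above (this is just an illustrative sketch using the same uniform names):
float angle = (thisPos.x + Phase) * Frequency;
vec2 tangent2D = vec2(1.0, Amplitude * Frequency * cos(angle)); // (x, z) tangent of the curve
vec2 normal2D  = vec2(-tangent2D.y, tangent2D.x);               // swap coordinates, negate one
vec3 normal    = normalize(vec3(normal2D.x, 0.0, normal2D.y));  // y component is zero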
Furthermore, you can tweak the performance a bit:
oEyeNormal = normalize(vec3(gl_NormalMatrix * gl_Normal));
There is no need to cast the result to a vec3, since gl_NormalMatrix is a 3x3 matrix.
There is no need to normalize your incoming normal in your vertex shader, since you don't do any length-based calculation in it. Some sources say that incoming normals should always be normalized by the application, so that there is no need for it at all in the vertex shader. But since that's out of the hands of the shader developer, I still normalize them when I calculate vertex-based lighting (Gouraud).