Transform a vec2 into another space - OpenGL

In an OpenGL fragment shader, I need to transform a vec2 that represents an xy pair into another coordinate space.
I have the mat4 transformation matrix for this, but can I simply transform like this:
vec2 transformed = (transformMat4 * vec4(xy, 0.0, 0.0)).xy ?
I guess the result would not be correct, since the 0s would get calculated into the x and y part.

4D vectors and matrices are used for 3D affine transformations.
You can use the equivalent 2D affine transformation, which is a mat3 applied to a 3D vector with the z-value set to 1 (the z-component is then discarded):
vec3 transformed3d = transformMat3 * vec3(x, y, 1.0);
vec2 transformed2d = transformed3d.xy;
Alternatively, you can use the aforementioned 3D affine transformation with a 4D vector and matrix by setting the z-value to 0 and the w-value to 1 (homogeneous coordinates), and then using the x and y values directly, as in a parallel projection:
vec4 transformed4d = transformMat4 * vec4(x, y, 0.0, 1.0);
vec2 transformed2d = transformed4d.xy;
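Put together, a minimal fragment-shader sketch of the first variant (assuming the application supplies a uniform named transformMat3 holding the 2D affine transform):
uniform mat3 transformMat3; // assumed: 2D affine transform set by the application
vec2 transformPoint(vec2 xy)
{
    return (transformMat3 * vec3(xy, 1.0)).xy; // z = 1 marks a point, so translation is applied
}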

Related

OpenGL Normal Mapping

I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there's something I think is not working the way it should. I'm using a geometry shader that outputs green vectors for the normals and red vectors for the tangents, to help me debug. Here I post three screenshots of my work.
Directly lit
Lit from the side
Here I actually tried calculating my normals and tangents in a different way (quite wrong)
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do the calculations per face, sum the vectors per vertex across all adjacent faces, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that take the whole object into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face not seem well lit?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position, 1.0));
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));
    view_position = TBN * viewPos;   // camera position
    light_position = TBN * lightPos; // light position
    fragment_position = TBN * fragment_position;
    gl_Position = projection * view * model * vec4(position, 1.0);
}
In the vertex shader I set up my TBN matrix and transform the light, fragment, and view vectors into tangent space; doing so, I won't have to do any other transformation in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0); // correct range
    material_color = texture2D(TextSampler, texture_coordinates.st); // diffuse map
    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
    vec3 lightDir = normalize(light_position - fragment_position);
    vec3 I_dif = vec3(0, 0, 0);
    float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);
    vec3 I_spe = vec3(0, 0, 0);
    float SpecularFactor = 0.0;
    if (DiffusiveFactor > 0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = reflect(-lightDir, Normal);
        light_reflect = normalize(light_reflect);
        SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
        if (SpecularFactor > 0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }
    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, on your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment shader.
The actual algorithm is (a short sketch follows the list):
Load the RGB value of the normal
Convert it to the [-1, 1] range
Rotate it by the model matrix
Use the new value in the shading calculations
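A minimal GLSL sketch of those four steps (normalMap, uv, model, and lightDir are assumed names):
vec3 n = texture(normalMap, uv).rgb;     // 1. load the rgb value of the normal
n = n * 2.0 - 1.0;                       // 2. convert from [0, 1] to [-1, 1]
n = normalize(mat3(model) * n);          // 3. rotate by the model matrix (assumes no non-uniform scale)
float diff = max(dot(n, lightDir), 0.0); // 4. use the new value in the shading calculation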
If you want continuous normals, then you need to make sure that the charts in texture space agree at their limits.
Mathematically, that means: if U and V are regions of R^2 that are mapped onto your shape by S, and f is the function mapping texture coordinates to the normal field N, then it should hold that:
If lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} \subset U and {y_1, y_2} \subset V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English, if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it is baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T x N and the basis {T, N, B} gives you the true space where the normals need to be expressed.
Assume that in tangent space x is sideways, y is forward, and z is up. Your transformed normal then becomes x*B + y*T + z*N.
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal and use this as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal read from the normal map.)
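A hedged GLSL sketch of option 1, following the convention above (the attribute and sampler names are assumptions):
vec3 T = normalize(tangentAttr);                 // per-vertex tangent passed to the shader
vec3 N = normalize(normalAttr);                  // per-vertex normal
vec3 B = normalize(cross(T, N));                 // binormal B = T x N
vec3 m = texture(normalMap, uv).rgb * 2.0 - 1.0; // normal-map normal (x, y, z)
vec3 shadingNormal = normalize(m.x * B + m.y * T + m.z * N); // the x*B + y*T + z*N combination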

shadow mapping - transforming a view space position to the shadow map space

I use deferred rendering and I store the fragment position in camera view space. When I perform the shadow calculation I need to transform that position from camera view space to shadow map space. I build the shadow matrix this way:
shadowMatrix = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;
shadowBiasMatrix shifts values from the [-1, 1] range to [0, 1] (a sketch of such a matrix follows below).
lightProjectionMatrix is an orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and contains the light direction.
inverseCameraViewMatrix transforms a fragment position from camera view space to world space.
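For reference, a bias matrix of this kind typically looks as follows (a sketch, written with GLSL's column-major mat4 constructor):
const mat4 shadowBias = mat4(0.5, 0.0, 0.0, 0.0,  // column 0: scale x by 0.5
                             0.0, 0.5, 0.0, 0.0,  // column 1: scale y by 0.5
                             0.0, 0.0, 0.5, 0.0,  // column 2: scale z by 0.5
                             0.5, 0.5, 0.5, 1.0); // column 3: translate by 0.5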
I wonder if it is correct to multiply the inverse camera view matrix with the other matrices, or should I use the inverse camera view matrix separately?
First scenario:
vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);
Second scenario, where the inverse camera view matrix is used separately:
vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;
Precomputing the shadow matrix in the described way is the correct approach and should work as expected.
Because of the associativity of matrix multiplication, the results of both scenarios are identical (ignoring floating-point precision) and are thus interchangeable.
But because these calculations are done in the fragment shader, it is best to premultiply the matrices in the main program to do as few operations per fragment as possible.
I'm also writing a deferred renderer currently and calculate my matrices in the same way without any problems.
// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);
// perspective divide
lightspace_pos/=lightspace_pos.w;
// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);
// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth = lightspace_pos.z;
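To finish the lookup, the two depths can then be compared with a small bias (a sketch; the bias value is an assumption and needs tuning per scene):
float bias = 0.0005;                                                 // assumed value
float lit = (fragment_depth - bias) <= shadowmap_depth ? 1.0 : 0.0;  // 1.0 = lit, 0.0 = in shadow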
I also found this tutorial using a similar approach, which could be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx
float readShadowMap(vec3 eyeDir)
{
mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir,1);
projectedEyeDir = projectedEyeDir/projectedEyeDir.w;
vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5,0.5) + vec2(0.5,0.5);
const float bias = 0.0001;
float depthValue = texture2D( tShadowMap, textureCoordinates ).r - bias;
return (projectedEyeDir.z * 0.5 + 0.5 < depthValue) ? 1.0 : 0.0;
}
The eyeDir that comes in as input is in View Space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from Camera View Space into World Space, then into Light View Space, then into Light Projection Space/Clip Space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.
Once we are in the right space we calculate the texture coordinates and we are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map to avoid a point ending up shading itself because of rounding errors. So we shift the whole map back a bit, so that all the values in the map are slightly smaller than they should be.

Rotation implementation errors

I am trying to implement a rotation for a camera with the following function
mat3 rotate(const float degrees, const vec3& axis) {
    mat3 matrix = mat3(cos(degrees), 0.0f, -sin(degrees),
                       0.0f, 1.0f, 0.0f,
                       sin(degrees), 0.0f, cos(degrees));
    mat4 m4 = mat4(matrix);
    return (m4 * vec4(axis, 1));
}
However, I am not able to convert the mat4 to a mat3. Also, the multiplication of a mat4 and a vec3 is giving compilation errors.
In the return statement you're multiplying a 4x4 matrix by a 4-dimensional vector.
return (m4 * vec4(axis,1));
This returns a vec4, yet your function states it returns a mat3.
But I really think you need to review what you're trying to do. A camera uses a 4x4 matrix to define its view matrix, and glm already supplies convenience functions for rotating matrices.
glm::mat4 viewMatrix( 1.0f );
viewMatrix *= glm::rotate( angle, axis );
There are rules for matrix multiplication, specifically: for AB, the number of columns in matrix A must be equal to the number of rows in matrix B. Multiplying a matrix by a vector is valid because a vector is like a one-dimensional matrix, but it must follow the same rule, so multiplying a mat4 (a 4x4 matrix) by a vec3 (a 3x1 matrix) is impossible because the number of columns is not equal to the number of rows. What you can do is create a vec4 (a 4x1 matrix), fill the last component with 1, and then multiply it with the mat4. You can read more about matrix multiplication here.
Another thing, as @David Saxon said: you have a function returning mat3 while your return statement looks like this, return (m4 * vec4(axis,1));, which results in a vec4 (a 4x1 matrix). This is probably the reason you're getting the compilation error.
If you only want to rotate a vector you don't have to use a 4x4 matrix; you can just multiply the mat3 by the vec3 you want to rotate, which results in a vec3 representing the rotated vector.
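As a sketch of that last point (written as GLSL, which GLM's constructors and operators mirror), a rotation about the Y axis applied directly to a vec3 could look like this; note that cos/sin expect radians:
mat3 rotationY(float degrees)
{
    float a = radians(degrees);        // convert to radians first
    return mat3(cos(a), 0.0, -sin(a),  // column 0
                0.0,    1.0,  0.0,     // column 1
                sin(a), 0.0,  cos(a)); // column 2
}
vec3 rotated = rotationY(45.0) * axis; // mat3 * vec3 is well defined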

normal mapping, TBN matrix calculation

I just want to be sure I understand the TBN matrix calculation correctly.
In vertex shader we usually use:
vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);
mat3 tbn = mat3(t, b, n);
As I understand it, this tbn matrix transforms a vector from tangent space to eye space. Actually we want the reverse: to transform a vector from eye space to tangent space. Thus we need to invert the tbn matrix:
tbn = transpose(tbn); // transpose should be OK here for doing the matrix inversion
note: tbn should contain only rotations; for such a case we can use the transpose to invert the matrix.
And we can transform our vectors:
vec3 lightT = tbn * light_vector;
... = tbn * ...
In several tutorials and source codes I've found that the authors use something like this:
light.x = dot(light, t);
light.y = dot(light, b);
light.z = dot(light, n);
The above code does the same as multiplying by the transposed tbn matrix.
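For example, a small sketch of the equivalence:
mat3 tbn = mat3(t, b, n);               // columns t, b, n: tangent space -> eye space
vec3 a = transpose(tbn) * light_vector; // eye space -> tangent space
vec3 c = vec3(dot(light_vector, t),     // same result, written as dot products
              dot(light_vector, b),
              dot(light_vector, n));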
The question:
Should we use the transposed tbn matrix, just as I explained above? Or am I missing something?
Note that with that solution we transform the vectors (light_vector) into TBN space in the vertex shader; then in the fragment shader we only have to fetch the normal from the normal map. The other option is to create a TBN matrix that transforms from TBN space into eye space, and then transform each normal read from the normal map in the fragment shader.
Transpose is not the inverse of a matrix!!!
My TBN matrices in the vertex shader look like this:
uniform mat4x4 tm_l2g_dir;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;
out smooth mat3 pixel_TBN;
void main()
{
    vec4 p;
    //...
    p.xyz = tan.xyz; p.w = 1.0; pixel_TBN[0] = normalize((tm_l2g_dir * p).xyz);
    p.xyz = bin.xyz; p.w = 1.0; pixel_TBN[1] = normalize((tm_l2g_dir * p).xyz);
    p.xyz = nor.xyz; p.w = 1.0; pixel_TBN[2] = normalize((tm_l2g_dir * p).xyz);
    //...
}
where:
tm_l2g_dir is the transformation matrix from local model space to global scene space without any translation (it changes only directions); in your code this is your normal matrix
tan, bin, nor are the vectors for the TBN matrix (stored as part of my models)
nor is the normal vector to the surface at the actual vertex position (it can be calculated as the cross product of two edge vectors at this vertex)
tan, bin are perpendicular vectors, usually parallel to the texture-mapping axes or to the tessellation of your model. If you choose the wrong tan/bin vectors, some lighting artifacts can occur. For example, if you have a cylinder, then bin is its rotation axis and tan is perpendicular to it, along the circle (the tangent).
tan, bin, nor should be perpendicular to each other
You can also calculate the TBN automatically, but that will cause some artifacts. To do it, you simply choose one of the edge vectors used for the normal computation as tan, and then bin = nor x tan.
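For example, a rough sketch of that automatic construction from two triangle edges e1 and e2 (the edge names are assumptions):
vec3 nor = normalize(cross(e1, e2));   // face normal from the two edges
vec3 tan = normalize(e1);              // pick one edge as the tangent
vec3 bin = normalize(cross(nor, tan)); // binormal completes the orthogonal basis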
In the fragment shader I use pixel_TBN together with the normal map to compute the real normal for the fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec2 pixel_txr;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_normal;
out layout(location=0) vec4 frag_col;
const vec4 v05=vec4(0.5,0.5,0.5,0.5);
//------------------------------------------------------------------
void main(void)
{
    vec4 col;
    vec3 normal;
    col = (texture(txr_normal, pixel_txr.st) - v05) * 2.0; // normal/bump mapping
    normal = pixel_TBN * col.xyz;
    // col = ... other stuff to compute col
    frag_col = col;
}
//------------------------------------------------------------------

glScalef effect on normal vectors

So I have a .3ds mesh that I imported into my project and it's quite large, so I wanted to scale it down using glScalef. However, the rendering algorithm I use in my shader makes use of the normal vectors, and after scaling, my rendering algorithm no longer works exactly as it should. So how do I remedy this? Is there a glScalef for normals as well?
Normal vectors are transformed by the transposed inverse of the modelview matrix. However, a second constraint is that normals must be of unit length, and scaling the modelview matrix changes that. So in your shader you should apply a normalization step:
#version 330
uniform mat4x4 MV;
uniform mat4x4 P;
uniform mat4x4 N; // = transpose(inv(MV));
in vec3 vertex_pos; // vertex position attribute
in vec3 vertex_normal; // vertex normal attribute
out vec3 trf_normal;
void main()
{
    trf_normal = normalize( (N * vec4(vertex_normal, 1)).xyz );
    gl_Position = P * MV * vec4(vertex_pos, 1.0);
}
Note that "normalization" is the process of turning a vector into colinear vector of its own with unit length, and has nothing to do with the concept of surface normals.
To transform normals, you need to multiply them by the inverse transpose of the transformation matrix. There are several explanations of why this is the case, the best one is probably here
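A short sketch of why the inverse transpose shows up: a normal n is defined by being perpendicular to every surface tangent t, i.e. n^T t = 0. If tangents are transformed by a matrix M (t' = M t), then the transformed normal n' must satisfy n'^T t' = 0, and choosing n' = (M^-1)^T n works, because n'^T (M t) = n^T M^-1 M t = n^T t = 0.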
The normal is transformed by MV^IT, the inverse transpose of the modelview matrix; read the Red Book, it explains everything.