Rotation implementation errors - C++

I am trying to implement a rotation for a camera with the following function
mat3 rotate(const float degrees, const vec3& axis) {
    mat3 matrix = mat3(cos(degrees), 0.0f, -sin(degrees),
                       0.0f,         1.0f,  0.0f,
                       sin(degrees), 0.0f,  cos(degrees));
    mat4 m4 = mat4(matrix);
    return (m4 * vec4(axis, 1));
}
However, I am not able to convert mat4 to mat3. Also, the multiplication of mat4 and vec3 is giving compilation errors.

In the return statement you're multiplying a 4x4 matrix by a 4-dimensional vector.
return (m4 * vec4(axis,1));
This returns a vec4, yet your function states it returns a mat3.
But I really think you need to review what you're trying to do. A camera uses a 4x4 matrix to define its view matrix, and GLM already supplies convenience functions for rotating matrices.
glm::mat4 viewMatrix( 1.0f );
viewMatrix *= glm::rotate( angle, axis );
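For completeness, a minimal sketch of what that could look like with the required headers (makeViewMatrix is a made-up name, and this assumes a reasonably recent GLM where angles are given in radians):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate(mat4, angle, axis)

// Hypothetical helper: build a view matrix rotated about 'axis'.
glm::mat4 makeViewMatrix(float angleRadians, const glm::vec3& axis)
{
    glm::mat4 viewMatrix(1.0f); // start from the identity
    viewMatrix = glm::rotate(viewMatrix, angleRadians, axis);
    return viewMatrix;
}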

There are rules for matrix multiplication; specifically, for AB the number of columns in matrix A must be equal to the number of rows in matrix B. Multiplying a matrix by a vector is valid because a vector is like a one-dimensional matrix; however, it must follow the above rule, so multiplying a mat4 (a 4x4 matrix) by a vec3 (a 3x1 matrix) is impossible because the number of columns is not equal to the number of rows. What you can do is create a vec4 (a 4x1 matrix) and fill the last component with 1, thus allowing you to multiply it with a mat4 matrix. You can read more about matrix multiplication here.
Another thing, as @David Saxon said, you have a function returning mat3 while your return statement looks like this: return (m4 * vec4(axis,1));, which results in a vec4 (a 4x1 matrix). This is probably the reason you're getting this compilation error.
If you only want to rotate a vector, you don't have to use a 4x4 matrix; you can just multiply the mat3 by the vec3 you want to rotate, which will result in a vec3 representing the rotated vector.
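A rough sketch of that approach (rotateVector is a made-up name; glm::rotate expects radians here, so the degrees are converted first):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helper: rotate a vector with a 3x3 rotation matrix.
glm::vec3 rotateVector(float degrees, const glm::vec3& axis, const glm::vec3& v)
{
    // Take the upper-left 3x3 of the 4x4 rotation built by glm::rotate.
    glm::mat3 rotation = glm::mat3(glm::rotate(glm::mat4(1.0f), glm::radians(degrees), axis));
    return rotation * v; // mat3 * vec3 -> vec3, the rotated vector
}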

Related

How to remove transformations from a matrix

I have a transformation matrix, which is a combination of three other transformation matrices.
glm::mat4 Matrix1 = position * rotation * scaling;
glm::mat4 Matrix2 = position * rotation * scaling;
glm::mat4 Matrix3 = position * rotation * scaling;
glm::mat4 transMatrix = Matrix1 * Matrix2 * Matrix3;
If sometime later I just want to remove the effect of Matrix1 from transMatrix, how can I do that?
In short, you may simply multiply by the inverse of Matrix1:
glm::mat4 Matrix2And3 = glm::inverse(Matrix1) * transMatrix;
Order of operations is important: if you wanted to remove Matrix3, you would need to compute transMatrix * inverse(Matrix3) instead. If it were Matrix2, you would need to remove Matrix1 (or Matrix3) first and then Matrix2.
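As a small illustration of that ordering (the helper names are made up; the matrices are assumed to be invertible):

#include <glm/glm.hpp>

// Given transMatrix = Matrix1 * Matrix2 * Matrix3, an outer factor is removed
// by multiplying with its inverse on the matching side.
glm::mat4 removeLeftFactor(const glm::mat4& transMatrix, const glm::mat4& Matrix1)
{
    return glm::inverse(Matrix1) * transMatrix; // leaves Matrix2 * Matrix3
}

glm::mat4 removeRightFactor(const glm::mat4& transMatrix, const glm::mat4& Matrix3)
{
    return transMatrix * glm::inverse(Matrix3); // leaves Matrix1 * Matrix2
}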
However, matrix inversion should be avoided at all costs, since it's very inefficient, and for your situation it is avoidable.
What you call Matrix is actually just a 3D Pose: Position + Rotation + Size.
Assuming you are using uniform scaling (Size = float), the mat3 component of your Pose is an orthogonal matrix. These types of matrices have a special property, which is:
Inverse(O) == Transpose(O)
Calculating the transpose of a matrix is a lot simpler than the inverse. This means you can do the following to achieve the same result, but a lot faster:
glm::mat4 inv = glm::mat4(glm::transpose(glm::mat3(Matrix1)));
inv[3] = glm::vec4(glm::mat3(inv) * -position1, 1.0f); // translation of the inverse is -R^T * t
glm::mat4 Matrix2And3 = inv * transMatrix;
If you want to go even further, I recommend you create a Pose class and provide cast operators for mat4 and mat3, to get full performance with ease.
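A possible sketch of such a Pose class (the names and layout here are just one way to do it, assuming uniform scale and GLM's quaternion type):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::quat, glm::mat3_cast

// Hypothetical Pose: position + rotation + uniform scale, convertible to mat3/mat4.
struct Pose
{
    glm::vec3 position{0.0f};
    glm::quat rotation{1.0f, 0.0f, 0.0f, 0.0f}; // identity (w, x, y, z)
    float     scale = 1.0f;

    explicit operator glm::mat3() const
    {
        return glm::mat3_cast(rotation) * scale; // rotation plus uniform scale
    }

    explicit operator glm::mat4() const
    {
        glm::mat4 m(glm::mat3(*this));    // upper-left 3x3
        m[3] = glm::vec4(position, 1.0f); // translation column
        return m;
    }

    // Cheap inverse: transpose the rotation part instead of a full matrix inverse.
    glm::mat4 inverseMatrix() const
    {
        glm::mat3 invRS = glm::transpose(glm::mat3_cast(rotation)) / scale;
        glm::mat4 inv(invRS);
        inv[3] = glm::vec4(invRS * -position, 1.0f);
        return inv;
    }
};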

Transform a vec2 into another space

In an OpenGL fragment shader, I need to transform a vec2 that represents an xy pair into another coordinate space.
I have the mat4 transformation matrix for this, but can I simply transform like this?
vec2 transformed = (transformMat4 * vec4(xy, 0.0, 0.0)).xy;
I guess the result would not be correct, since the 0s would get calculated into the x and y parts.
4D vectors and matrices are used for 3D affine transformations.
You can use the equivalent 2D affine transformation, which uses a 3D vector with the z-value set to 1 (and then discarded):
vec3 transformed3d = transformMat3 * vec3(x, y, 1.0);
vec2 transformed2d = transformed3d.xy;
Alternatively, you can use the aforementioned 3D affine transformation with a 4D vector and matrix by setting the z-value to zero and the w-value to 1 for homogeneous coordinates, and then using the x and y values directly as a parallel projection:
vec4 transformed4d = transformMat4 * vec4(x, y, 0.0, 1.0);
vec2 transformed2d = transformed4d.xy;
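If only the mat4 is available on the CPU side, one way to get the mat3 used in the first variant is to collapse it into a 2D affine matrix before uploading it. A rough GLM sketch (to2dAffine is a made-up name, and this assumes the mat4 only translates/rotates/scales within the xy-plane):

#include <glm/glm.hpp>

glm::mat3 to2dAffine(const glm::mat4& m)
{
    glm::mat3 r(1.0f);
    r[0] = glm::vec3(m[0].x, m[0].y, 0.0f); // x basis column
    r[1] = glm::vec3(m[1].x, m[1].y, 0.0f); // y basis column
    r[2] = glm::vec3(m[3].x, m[3].y, 1.0f); // translation column
    return r;
}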

glScalef effect on normal vectors

So I have a .3ds mesh that I imported into my project, and it's quite large, so I wanted to scale it down in size using glScalef. However, the rendering algorithm I use in my shader makes use of normal vector values, and after scaling, my rendering algorithm no longer works exactly as it should. How do I remedy this? Is there a glScalef for normals as well?
Normal vectors are transformed by the transposed inverse of the modelview matrix. However, there is a second constraint: normals must be of unit length, and scaling the modelview matrix changes that. So in your shader you should apply a normalization step:
#version 330
uniform mat4x4 MV;
uniform mat4x4 P;
uniform mat4x4 N; // = transpose(inverse(MV))
in vec3 vertex_pos;    // vertex position attribute
in vec3 vertex_normal; // vertex normal attribute
out vec3 trf_normal;
void main()
{
    trf_normal  = normalize((N * vec4(vertex_normal, 0.0)).xyz); // w = 0: directions are not translated
    gl_Position = P * MV * vec4(vertex_pos, 1.0);
}
Note that "normalization" is the process of turning a vector into colinear vector of its own with unit length, and has nothing to do with the concept of surface normals.
To transform normals, you need to multiply them by the inverse transpose of the transformation matrix. There are several explanations of why this is the case; the best one is probably here.
A normal is transformed by MV^IT, the inverse transpose of the modelview matrix; read the Red Book, it explains everything.
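On the CPU side, the N uniform from the shader above can be computed with GLM. A small sketch (computeNormalMatrix is a made-up name; glm::inverseTranspose lives in the gtc/matrix_inverse extension):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp> // glm::inverseTranspose

glm::mat4 computeNormalMatrix(const glm::mat4& modelView)
{
    return glm::inverseTranspose(modelView); // == transpose(inverse(MV))
}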

What exactly does the mat3(a mat4 matrix) statement in GLSL do?

I'm doing per-fragment lighting, and when correcting the normal vector I got this code: vec3 f_normal = mat3(MVI) * normal; where MVI is: mat4 MVI = transpose(inverse(ModelViewMatrix));. So what is returned by the mat3(MVI) statement?
mat3(MVI) * normal
Returns the upper 3x3 matrix from the 4x4 matrix and multiplies the normal by that. This matrix is called the 'normal matrix'. You use this to bring your normals from world space to eye space. The upper 3x3 portion of the matrix is important for scale and rotation, while the rest is only for translation (and normals are never translated).
To take a normal from world space to eye space, you just need the 3x3 inverse transpose of the modelview matrix, unless your matrix is orthonormal (no non-uniform scale), in which case the original matrix is the same as its inverse transpose.
From GLSL types: "There are no restrictions on size when doing matrix construction from another matrix. So you can construct a 4x2 matrix from a 2x4 matrix; only the corresponding elements are copied." So, you get MVI's top-left 3x3 submatrix as a result.
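For illustration, GLM mirrors this constructor behavior, so the same extraction can be sketched on the CPU side (normalMatrixFrom is a made-up name):

#include <glm/glm.hpp>

glm::mat3 normalMatrixFrom(const glm::mat4& modelView)
{
    glm::mat4 MVI = glm::transpose(glm::inverse(modelView));
    return glm::mat3(MVI); // keeps only the upper-left 3x3; translation is dropped
}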

How do you transform a directional light to screen coordinates in modern GLSL?

I am switching my shaders away from relying on the OpenGL fixed-function pipeline. Previously, OpenGL automatically transformed the lights for me, but now I need to do it myself.
I have the following GLSL code in my vertex shader:
//compute all of the light vectors; we want normalized vectors
//from the vertex towards the light
for (int i = 0; i < NUM_LIGHTS; i++)
{
    //if w is 0, it's a directional light: a vector
    //pointing from the world towards the light source
    if (u_lightPos[i].w == 0)
    {
        vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));
        v_lightDir[i] = transformedLight;
    }
    else
    {
        //this is a positional light; transform it by the view*projection matrix
        vec4 transformedLight = u_vp_matrix * u_lightPos[i];
        //subtract the vertex from the light to get a vector from the vertex to the light
        v_lightDir[i] = normalize(vec3(transformedLight - gl_Position));
    }
}
What do I replace the ??? with in the line vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));?
I already have the following matrices as inputs:
uniform mat4 u_mvp_matrix;//model matrix*view matrix*projection matrix
uniform mat4 u_vp_matrix;//view matrix*projection matrix
uniform mat3 u_normal_matrix;//normal matrix for this model
I don't think it's any of these. It's obviously not the mvp matrix or the normal matrix, because those are specific to the current model, and I'm trying to convert to view coordinates. I don't think it's the vp matrix either though, because that includes translations and I don't want to translate a vector.
Can I compute the matrix that I need from what is currently given to the shader, and if so, what do I need to compute? If not, what matrix do I need to add and how should it be computed?
You can use the same view_projection matrix. Yes, that matrix does include a translation, but because the light vector has w = 0 the matrix multiplication will not apply it.
You are supposed to transform the light position/direction into view space on the CPU, not in a shader. It's a uniform; it's computed once per frame per light. It doesn't change per vertex, so there's no need to have every vertex compute it; that's a waste of time.
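A rough sketch of that CPU-side approach with GLM (the function and variable names are placeholders):

#include <glm/glm.hpp>

// Transform one light into view space once per frame, then upload the result
// as the u_lightPos[i] uniform instead of transforming it per vertex.
glm::vec4 lightToViewSpace(const glm::mat4& viewMatrix, const glm::vec4& lightWorld)
{
    // Directional lights have w == 0, so the view matrix's translation
    // drops out automatically; positional lights have w == 1 and keep it.
    return viewMatrix * lightWorld;
}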