I have a transformation matrix, which is a combination of three other transformation matrices:
glm::mat4 Matrix1 = position1 * rotation1 * scaling1;
glm::mat4 Matrix2 = position2 * rotation2 * scaling2;
glm::mat4 Matrix3 = position3 * rotation3 * scaling3;
glm::mat4 transMatrix = Matrix1 * Matrix2 * Matrix3;
If some time later I just want to remove the effect of Matrix1 from transMatrix, how can I do that?
In short, you may simply multiply by the inverse of Matrix1:
glm::mat4 Matrix2And3 = glm::inverse(Matrix1) * transMatrix;
Order of operations is important: if you wanted to remove Matrix3 instead, you would compute transMatrix * glm::inverse(Matrix3). To remove Matrix2, you first have to remove Matrix1 (or Matrix3) and then Matrix2, as sketched below.
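A short sketch of all three cases, using the matrices defined above:

glm::mat4 without1 = glm::inverse(Matrix1) * transMatrix;  // leaves Matrix2 * Matrix3
glm::mat4 without3 = transMatrix * glm::inverse(Matrix3);  // leaves Matrix1 * Matrix2
// Removing the middle factor takes two steps: peel off Matrix1,
// peel off Matrix2, then re-apply Matrix1, leaving Matrix1 * Matrix3
glm::mat4 without2 = Matrix1 * glm::inverse(Matrix2) * glm::inverse(Matrix1) * transMatrix;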
However, general matrix inversion is expensive and should be avoided where possible, and in your situation it is avoidable.
What you call a Matrix here is really just a 3D pose: position + rotation + size.
Assuming you use no scaling (or factor a uniform scale out first), the mat3 part of your pose is an orthogonal matrix. These matrices have a special property:
Inverse(O) == Transpose(O)
Calculating the transpose of a matrix is much cheaper than calculating its inverse, so you can do the following to achieve the same result, a lot faster:
// The inverse of a rigid transform [R | t] is [R^T | -R^T * t]:
// the translation must be rotated as well, not just negated.
glm::vec3 position1 = glm::vec3(Matrix1[3]);  // translation part of Matrix1
glm::mat3 rotT = glm::transpose(glm::mat3(Matrix1));
glm::mat4 inv = glm::mat4(rotT);
inv[3] = glm::vec4(rotT * -position1, 1.0f);
glm::mat4 Matrix2And3 = inv * transMatrix;
If you want to go even further, I recommend creating a Pose class with conversion operators to mat4 and mat3, so you get this performance benefit everywhere with little effort.
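A minimal sketch of what such a Pose class could look like (the layout and member names here are my own, and it assumes a uniform scale):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct Pose {
    glm::vec3 position{0.0f};
    glm::quat rotation{1.0f, 0.0f, 0.0f, 0.0f};
    float     scale = 1.0f;

    explicit operator glm::mat3() const {
        return glm::mat3_cast(rotation) * scale;
    }
    explicit operator glm::mat4() const {
        glm::mat4 m(glm::mat3(*this));
        m[3] = glm::vec4(position, 1.0f);
        return m;
    }
    // Structural inverse: much cheaper than glm::inverse on the mat4.
    Pose inverse() const {
        Pose inv;
        inv.rotation = glm::conjugate(rotation);
        inv.scale    = 1.0f / scale;
        inv.position = inv.rotation * (-position * inv.scale);
        return inv;
    }
};

With this, glm::inverse never has to be called on the composed matrix at all.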
I need to be able to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
    vec3 normal, lightDir;
    vec4 diffuse, ambient, globalAmbient;
    float NdotL;
    // Transformation part
    normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
    gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
    // Calculate color
    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(abs(dot(normal, lightDir)), 0.0);
    diffuse = gl_Color * gl_LightSource[0].diffuse;
    ambient = gl_Color * gl_LightSource[0].ambient;
    globalAmbient = gl_LightModel.ambient * gl_Color;
    gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all transformations in lines 8-9 (the // Transformation part). Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse-transpose of the upper-left 3x3 of the 4x4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But if a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4x4 matrix transformationMatrix is an orthogonal matrix, meaning its X, Y, and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper-left 3x3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note, this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97), and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal
m1 * (v * m2);
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at this.
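As a concrete check of why the inverse-transpose is needed, here is a quick numeric example of my own (done on the CPU with GLM): take the non-uniform scale S = diag(2, 1, 1). The plane x + y = 0 has normal (1, 1, 0); scaling its points by S yields the plane x/2 + y = 0, whose true normal is proportional to (0.5, 1, 0).

#include <glm/glm.hpp>

glm::vec3 correctedNormal() {
    glm::mat3 S(1.0f);
    S[0][0] = 2.0f;                 // non-uniform scale: S = diag(2, 1, 1)
    glm::vec3 n(1.0f, 1.0f, 0.0f);  // normal of the plane x + y = 0

    glm::vec3 wrong = S * n;        // (2, 1, 0): tilted the wrong way
    (void)wrong;
    // the inverse-transpose gives (0.5, 1, 0), parallel to the true normal
    return glm::transpose(glm::inverse(S)) * n;
}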
I use deferred rendering and I store a fragment position in the camera view space. When I perform a shadow calculation I need to transform a camera view space to the shadow map space. I build a shadow matrix this way:
shadowMatrix = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;
shadowBiasMatrix shifts values from the [-1,1] range to [0,1].
lightProjectionMatrix is the orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and encodes the light direction.
inverseCameraViewMatrix transforms a fragment position from camera view space to world space.
I wonder if it is correct to multiply the inverse camera view matrix with the other matrices? Maybe I should apply the inverse camera view matrix separately?
First scenario:
vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);
Second scenario, where the inverse camera view matrix is applied separately:
vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;
Precomputing the shadow matrix in the described way is the correct approach and should work as expected.
Because of the associativity of matrix multiplication, the results of both scenarios are identical (ignoring floating-point precision) and thus interchangeable.
But because these calculations are done in the fragment shader, it is best to premultiply the matrices in the main program so that as few operations as possible are performed per fragment.
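For illustration, a minimal CPU-side sketch with GLM (the function name is my own; the matrix names follow the question, and the bias matrix maps [-1, 1] to [0, 1]):

#include <glm/glm.hpp>

// GLM's 16-scalar constructor fills the matrix column by column
const glm::mat4 shadowBiasMatrix(
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f);

glm::mat4 buildShadowMatrix(const glm::mat4& lightProjectionMatrix,
                            const glm::mat4& lightViewMatrix,
                            const glm::mat4& cameraViewMatrix) {
    // Premultiplied once per frame; the fragment shader then needs only
    // a single matrix-vector multiplication per fragment.
    return shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix *
           glm::inverse(cameraViewMatrix);
}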
I'm also writing a deferred renderer currently and calculate my matrices in the same way without any problems.
// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);
// perspective divide
lightspace_pos /= lightspace_pos.w;
// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);
// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth = lightspace_pos.z;
I also found this tutorial using a similar approach, which could be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx
float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
    mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
    projectedEyeDir = projectedEyeDir / projectedEyeDir.w;
    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5, 0.5) + vec2(0.5, 0.5);
    const float bias = 0.0001;
    // sample the shadow map's depth (red channel)
    float depthValue = texture2D(tShadowMap, textureCoordinates).r - bias;
    return (projectedEyeDir.z * 0.5 + 0.5 < depthValue) ? 1.0 : 0.0;  // 1.0 = lit
}
The eyeDir that comes in as input is in view space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from camera view space into world space, then into light view space, then into light projection space/clip space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.
Once we are in the right space we calculate the texture coordinates and we are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map so that, because of rounding errors, a point doesn't end up shading itself. We shift the whole map back a bit, making all the values in the map slightly smaller than they should be.
I am trying to implement a rotation for a camera with the following function:
mat3 rotate(const float degrees, const vec3& axis) {
    mat3 matrix = mat3(cos(degrees), 0.0f, -sin(degrees),
                       0.0f, 1.0f, 0.0f,
                       sin(degrees), 0.0f, cos(degrees));
    mat4 m4 = mat4(matrix);
    return (m4 * vec4(axis, 1));
}
However, I am not able to convert the mat4 to mat3. Also, multiplying a mat4 by a vec3 gives compilation errors.
In the return statement you're multiplying a 4x4 matrix by a 4 dimensional vector.
return (m4 * vec4(axis,1));
This returns a vec4, yet your function states it returns a mat3.
But I really think you need to review what you're trying to do. A camera uses a 4x4 matrix to define its view matrix, and GLM already supplies convenience functions for rotating matrices:
glm::mat4 viewMatrix( 1.0f );
viewMatrix *= glm::rotate( angle, axis );
There are rules for matrix multiplication; specifically, for A * B the number of columns in matrix A must equal the number of rows in matrix B. Multiplying a matrix by a vector is valid because a vector is just a one-column (or one-row) matrix, but it must follow the same rule. Multiplying a mat4 (a 4x4 matrix) by a vec3 (a 3x1 matrix) is therefore impossible, because the number of columns does not equal the number of rows. What you can do is create a vec4 (a 4x1 matrix), fill the last component with 1, and multiply that by the mat4. You can read more about matrix multiplication here.
Another thing: as @David Saxon said, you have a function returning mat3 while your return statement looks like return (m4 * vec4(axis,1));, which results in a vec4 (a 4x1 matrix). This is probably the reason you're getting the compilation error.
If you only want to rotate a vector, you don't have to use a 4x4 matrix; you can just multiply a mat3 by the vec3 you want to rotate, which results in a vec3 representing the rotated vector.
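For example, a small sketch with GLM (the helper name is my own; glm::rotate is the GLM library function and takes radians):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 rotateVector(float degrees, const glm::vec3& axis, const glm::vec3& v) {
    // glm::rotate returns a mat4; take its upper-left 3x3 for a pure rotation
    glm::mat3 rot(glm::rotate(glm::mat4(1.0f), glm::radians(degrees), axis));
    return rot * v;  // mat3 * vec3 -> vec3
}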
Given this vertex shader:
attribute vec3 vertex;
uniform mat4 mvp;
void main() {
    gl_Position = mvp * vec4(vertex, 1.0);
}
And this fragment shader:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Which is able to render the data below when the mvp matrix is the identity, or if the model matrix is a scale, rotate, or translate transform:
GLfloat values[] = {
    -1.0, -1.0, +0.0,
    +1.0, -1.0, +0.0,
    +0.0, +1.0, +0.0,
};
Why does the following usage of Qt's QMatrix4x4::lookAt and QMatrix4x4::perspective cause the scene to be rendered as if no object is there?
QMatrix4x4 model;
QMatrix4x4 view;
view.lookAt(
    QVector3D(0.0, 0.0, 10.0),  // Eye
    QVector3D(0.0, 0.0, 0.0),   // Focal Point
    QVector3D(0.0, 1.0, 0.0));  // Up vector
QMatrix4x4 proj;
// Window size is fixed at 800.0 by 600.0
proj.perspective(45.0, 800.0 / 600.0, 1.0, 100.0);
QMatrix4x4 mvp = (model * view * proj);
What I am looking for is not only how to fix the code but by which means I can attempt to debug these things in the future.
Just on a hunch I changed mvp to p * v * m and it fixed the issue. Why is it called MVP if you have to do the multiplication in the opposite order? I know matrix multiplication is not commutative; that is, if A and B are matrices, A * B != B * A in general.
It's called MVP because... somebody named it that way. ;)
It makes some sense, though. It basically lists the transformations in the order they are applied. You first apply the Model matrix to your vertices, then the View matrix to the result of that, then the projection matrix to the result of both.
Or mathematically, for an input vertex vObj, you could write:
vWorld = M * vObj
vEye = V * vWorld
vClip = P * vEye
If you substitute the equations, you get:
vClip = P * vEye = P * (V * vWorld) = P * (V * (M * vObj))
Matrix multiplications are associative, so this can be rewritten as:
P * (V * (M * vObj)) = (P * V * M) * vObj
Therefore, the combined matrix is calculated as P * V * M.
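Applied to the Qt code from the question, the fix is just the multiplication order (this matches the asker's own finding):

QMatrix4x4 mvp = proj * view * model;  // P * V * M, not M * V * P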
Reto Koradi is right. And it's not because of memory layout or anything like that; it's because OpenGL uses column vectors: a matrix with four rows and one column, i.e. a 4x1 matrix. Transformations are Matrix4x4 * Vector to meet the criteria for matrix multiplication (the result is again a column vector).
Another approach is to define the vector as a row (a 1x4 matrix); then all transformations become vWorld = vObj * M etc. to satisfy the multiplication rules, resulting in a row vector. All of a sudden, the last line above reads vClip = vObj * M * V * P.
Matrix multiplication itself is always the same; you should not need to care how matrices are stored in memory (well, unless it's a linear array and you need to address an element by row/column), but the transform matrices differ depending on the definition of the vector.
In OpenGL always compose transforms from right to left. Remember that left-most matrix is applied last.
For some reason (history?), vectors are usually treated as column vectors and transform matrices are applied from right to left, but as noted in the comment below, both approaches are possible in the OpenGL Shading Language (GLSL).
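GLM happens to support both conventions through its operator overloads, which makes the equivalence easy to see (a small sketch of my own):

#include <glm/glm.hpp>

void conventions(const glm::mat4& M, const glm::vec4& v) {
    glm::vec4 columnStyle = M * v;                  // v treated as a column vector
    glm::vec4 rowStyle    = v * glm::transpose(M);  // v treated as a row vector
    // columnStyle == rowStyle (up to floating-point precision)
    (void)columnStyle; (void)rowStyle;
}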
It's to do with the dimensionality of matrices and mathematical notation. The dimensions of a matrix are defined as rows x columns, so a 1x3 matrix is M = [a b c], and a 4x4 matrix is as you'd expect.
Multiplying two matrices of dimension AxB and CxD can only be done if B=C (row into column and sum the result).
A list of N vertices with XYZW coordinates can be defined as a matrix of size Nx4 or 4xN, but only 4xN works with the definition of the multiplication operator if the block of vertices comes after the matrix:
V' (4xN) = M (4x4) x V (4xN)
So vertices are considered as column vectors to make this notation work.
I'm updating an old OpenGL project and I'm replacing all the (deprecated) glMatrix() functions with matrices and quaternions, and I'm having trouble getting the rotation working.
My drawing looks like this:
//these two are supposedly working
mat4 mProjection = perspective(FOV, aspectRatio, near, far);
mat4 mView = lookAt(cameraPosition, cameraCenter, headsUp);
mat4 mModel = mat4(1.0f);
mat4 mMVP = mProjection * mView * mModel;
What I'm trying to do now is to apply rotation to an object around a specific point (like the object's center).
I tried:
mat4 mModelRotation = rotate(mModel, object->RotationY(), vec3(0.0, 1.0, 0.0)); //RotationY being an angle in degrees
mat4 mMVP = mProjection * mView * mModel * mModelRotation;
But this causes the object to rotate around one of its edges, not its center.
I'd like to know how I can apply quaternions to rotate the object around any point I pass as a parameter, for example.
I'm inexperienced with matrices, since I avoided them while I could still use the glMatrix() functions, so I don't understand much about the relation between them and spatial position, and trying to move to quaternions looks even more complicated.
I've read about the logic of quaternions and how to use them, technically, but I don't understand where their values come from.
For example:
//axis is a unit vector
local_rotation.w = cosf( fAngle/2)
local_rotation.x = axis.x * sinf( fAngle/2 )
local_rotation.y = axis.y * sinf( fAngle/2 )
local_rotation.z = axis.z * sinf( fAngle/2 )
total = local_rotation * total
I read this, and I have no clue what these values are. Axis is a unit vector... of what? fAngle, I assume, is the angle I want to rotate by, but since quaternions use an arbitrary axis, how do I get the value for each of the X, Y, and Z axes, and how do I specify it in the quaternion?
So, I'm looking for any practical example/tutorial of a Quaternion, so I can understand what's going on.
The only information I have when I want to rotate an object is the axis I want to rotate around (x, y, OR z; not all of them, though the final result may combine them) and a value in degrees.
I'm not much of a math person, so any tutorial that doesn't use shortcuts is highly appreciated.
OK, let's say that you have a model ML (the set of points of a model) and a point P around which to rotate ML.
All rotations are referred to the origin, so you need to move the set of points ML so that P becomes the origin, rotate all the points, and then move them back.
How do you do this? Simple: for each point ML(k) (a point in the set) you do:
ML(k) - P --> this moves the points, using the point P as the origin
then rotate:
ROT * (ML(k) - P)
and finally, you move it back:
ROT * (ML(k) - P) + P
With quaternions, you replace the matrix multiplication by the sandwich product with q and its conjugate q* (which equals the inverse q^-1 for a unit quaternion):
q * (ML(k) - P) * q* + P
That should work.
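Putting the whole answer into code, a minimal GLM sketch (the function names are my own; q must be a unit quaternion):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate `point` around `pivot` with a rotation matrix:
// translate to the origin, rotate, translate back.
glm::vec3 rotateAroundPoint(const glm::mat3& rot, const glm::vec3& point,
                            const glm::vec3& pivot) {
    return rot * (point - pivot) + pivot;
}

// The same with a unit quaternion; GLM's q * v computes the
// sandwich product q * v * q^-1 internally.
glm::vec3 rotateAroundPoint(const glm::quat& q, const glm::vec3& point,
                            const glm::vec3& pivot) {
    return q * (point - pivot) + pivot;
}

To build q from the axis and the angle in degrees the question mentions, glm::angleAxis(glm::radians(degrees), glm::normalize(axis)) fills in exactly the cos/sin components quoted in the question.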