Take gl_FragColor = v1 * v2, for example: I can't really work out how the multiplication is performed, and the reference only seems to explain multiplying a vector by a matrix.
PS: v1 and v2 are both of type vec4.
The * operator works component-wise for vectors like vec4.
vec4 a = vec4(1.0, 2.0, 3.0, 4.0);
vec4 b = vec4(0.1, 0.2, 0.3, 0.4);
vec4 c = a * b; // vec4(0.1, 0.4, 0.9, 1.6)
The GLSL Language Specification says under section 5.10 Vector and Matrix Operations:
With a few exceptions, operations are component-wise. Usually, when an
operator operates on a vector or matrix, it is operating independently
on each component of the vector or matrix, in a component-wise
fashion. [...] The exceptions are matrix multiplied by vector, vector
multiplied by matrix, and matrix multiplied by matrix. These do not
operate component-wise, but rather perform the correct linear
algebraic multiply.
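For instance, here is a minimal sketch contrasting the component-wise case with the linear-algebraic exception (the values are just placeholders):
// Component-wise: each result component is the product of the corresponding operand components.
vec4 v1 = vec4(1.0, 2.0, 3.0, 4.0);
vec4 v2 = vec4(0.5, 0.5, 0.5, 0.5);
vec4 componentWise = v1 * v2;        // vec4(0.5, 1.0, 1.5, 2.0)
// Exception: matrix * vector is a linear-algebraic multiply.
mat4 scale = mat4(2.0);              // 2.0 on the diagonal, i.e. a uniform scale
vec4 linearAlgebraic = scale * v1;   // vec4(2.0, 4.0, 6.0, 8.0)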
Related
I need to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure my approach is correct for the normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
    vec3 normal, lightDir;
    vec4 diffuse, ambient, globalAmbient;
    float NdotL;
    // Transformation part
    normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
    gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
    // Calculate color
    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(abs(dot(normal, lightDir)), 0.0);
    diffuse = gl_Color * gl_LightSource[0].diffuse;
    ambient = gl_Color * gl_LightSource[0].ambient;
    globalAmbient = gl_LightModel.ambient * gl_Color;
    gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all the transformations in lines 8-9 (the transformation part). Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse transpose of the upper left 3x3 of the 4x4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But if a vector is multiplied by a matrix from the left-hand side, the result corresponds to multiplying the transposed matrix by a column vector from the right-hand side.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4x4 matrix transformationMatrix is an orthogonal matrix, meaning the X, Y, and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper left 3x3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note, this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97) and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal
m1 * (v * m2);
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at this.
I am trying to implement a rotation for a camera with the following function
mat3 rotate(const float degrees, const vec3& axis) {
    mat3 matrix = mat3(cos(degrees), 0.0f, -sin(degrees),
                       0.0f,         1.0f,  0.0f,
                       sin(degrees), 0.0f,  cos(degrees));
    mat4 m4 = mat4(matrix);
    return (m4 * vec4(axis, 1));
}
However, I am not able to convert the mat4 to a mat3. Also, the multiplication of a mat4 and a vec3 is giving compilation errors.
In the return statement you're multiplying a 4x4 matrix by a 4 dimensional vector.
return (m4 * vec4(axis,1));
This returns a vec4, yet your function states it returns a mat3.
But I really think you need to review what you're trying to do. A camera uses a 4x4 matrix to define its view matrix, and glm already supplies convenience functions for rotating matrices.
glm::mat4 viewMatrix( 1.0f );
viewMatrix *= glm::rotate( angle, axis );
There are rules for matrix multiplication, specifically: for AB, the number of columns in matrix A must be equal to the number of rows in matrix B. Multiplying a matrix by a vector is valid because a vector is like a one-dimensional matrix, but it must follow the above rule. Multiplying a mat4 (a 4x4 matrix) by a vec3 (a 3x1 matrix) is therefore impossible, because the number of columns is not equal to the number of rows. What you can do is create a vec4 (a 4x1 matrix), fill the last component with 1, and multiply that with the mat4. You can read more about matrix multiplications here.
Another thing, as @David Saxon said, you have a function returning mat3 while your return statement looks like this: return (m4 * vec4(axis,1));, which results in a vec4 (a 4x1 matrix) and is probably the reason you're getting the compilation error.
If you only want to rotate a vector, you don't have to use a 4x4 matrix; you can just multiply the mat3 by the vec3 you want to rotate, which results in a vec3 representing the rotated vector.
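For example, here is a minimal GLSL-style sketch of rotating a vector about the Y axis with only a 3x3 matrix (rotateY is a hypothetical helper, and the angle is assumed to be in radians, since cos and sin expect radians rather than degrees):
vec3 rotateY(float radians, vec3 v) {
    // mat3 is constructed column by column.
    mat3 rotation = mat3(cos(radians), 0.0, -sin(radians),   // first column
                         0.0,          1.0,  0.0,            // second column
                         sin(radians), 0.0,  cos(radians));  // third column
    return rotation * v;   // a vec3: the rotated vector
}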
Given this vertex shader:
attribute vec3 vertex;
uniform mat4 mvp;
void main() {
gl_Position = mvp * vec4(vertex, 1.0);
}
And this fragment shader:
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Which is able to render the data below when the mvp matrix is the identity, or if the model matrix is a scale, rotate, or translate transform:
GLfloat values[] = {
-1.0, -1.0, +0.0,
+1.0, -1.0, +0.0,
+0.0, +1.0, +0.0,
};
Why does the following usage of Qt's QMatrix4x4::lookAt and QMatrix4x4::perspective cause the scene to be rendered as if no object is there?
QMatrix4x4 model;
QMatrix4x4 view;
view.lookAt(
QVector3D(0.0, 0.0, 10.0), // Eye
QVector3D(0.0, 0.0, 0.0), // Focal Point
QVector3D(0.0, 1.0, 0.0)); // Up vector
QMatrix4x4 proj;
// Window size is fixed at 800.0 by 600.0
proj.perspective(45.0, 800.0 / 600.0, 1.0, 100.0);
QMatrix4x4 mvp = (model * view * proj);
What I am looking for is not only how to fix the code but by which means I can attempt to debug these things in the future.
Just on a hunch I changed mvp to p * v * m and it fixed the issue. Why is it called mvp if you have to do the multiplication in the opposite order? I know matrix multiplication is not commutative; that is, if A and B are matrices, A * B != B * A in general.
It's called MVP because... somebody named it that way. ;)
It makes some sense, though. It basically lists the transformations in the order they are applied. You first apply the Model matrix to your vertices, then the View matrix to the result of that, then the projection matrix to the result of both.
Or mathematically, for an input vertex vObj, you could write:
vWorld = M * vObj
vEye = V * vWorld
vClip = P * vEye
If you substitute the equations, you get:
vClip = P * vEye = P * (V * vWorld) = P * (V * (M * vObj))
Matrix multiplications are associative, so this can be rewritten as:
P * (V * (M * vObj)) = (P * V * M) * vObj
Therefore, the combined matrix is calculated as P * V * M.
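As a sketch, if the three matrices were passed to the shader separately (the uniform names here are hypothetical), the same ordering shows up in GLSL:
uniform mat4 model;        // hypothetical separate uniforms
uniform mat4 view;
uniform mat4 projection;
attribute vec3 vertex;
void main() {
    // The projection is applied last, so it appears left-most.
    gl_Position = projection * view * model * vec4(vertex, 1.0);
}
Equivalently, on the application side the combined matrix would be built as proj * view * model before being uploaded as mvp.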
Reto Koradi is right. And it's not because of memory layout or anything like that; it's because OpenGL uses column vectors: a matrix with four rows and one column, i.e. a 4x1 matrix. Transformations are Matrix4x4 * Vector to meet the criteria for matrix multiplication (the result is a column vector again).
Another approach is to define the vector as a row (a 1x4 matrix); then all the transformations become vWorld = vObj * M etc. to meet the criteria for matrix multiplication, resulting in a row vector. Suddenly the last line is rewritten as vClip = vObj * M * V * P.
Matrix multiplication is always the same; you should not need to care how matrices are stored in memory (well, unless it's a linear array and you need to address an element at a row/column), but the transform matrices are different depending on the definition of the vector.
In OpenGL always compose transforms from right to left. Remember that left-most matrix is applied last.
For some reason (history?), vectors are usually considered column vectors and transform matrices are applied from right to left, but as noted in the comment below, it's possible to use both approaches in GL Shading Language (GLSL).
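A short GLSL sketch of the two conventions (m and v are placeholder uniforms):
uniform mat4 m;
uniform vec4 v;
void demo() {
    // Column-vector convention: the matrix is on the left.
    vec4 columnForm = m * v;
    // Row-vector convention: the vector is on the left. GLSL treats v as a
    // row vector here, so this is the same as transpose(m) * v.
    vec4 rowForm = v * m;
    // The two agree only if m is symmetric; to express the same transform
    // in both conventions, the matrix has to be transposed.
}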
It's to do with the dimensionality of matrices and mathematical notation. The dimensions of a matrix are defined as rows x columns. So a 1x3 matrix is M = [a b c]. Then a 4x4 matrix is as expected.
Multiplying two matrices of dimension AxB and CxD can only be done if B=C (row into column and sum the result).
A list of N vertices with XYZW coordinates can be defined as a matrix Nx4 or 4xN in size, but only 4xN works with the definition of the multiplication operator if the block of vertices comes after the matrix:
V' (4xN) = M (4x4) x V (4xN)
So vertices are considered as column vectors to make this notation work.
I'm currently implementing matrices in my engine. With the standard glTranslate and glRotate and then ftransform() in the shader it works; done manually it does not.
How I pass the matrix to the shader:
public static void loadMatrix(int location, Matrix4f matrix)
{
    FloatBuffer buffer = BufferUtils.createFloatBuffer(16);
    matrix.store(buffer);
    buffer.flip();
    glUniformMatrix4(location, false, buffer);
}
Sending viewMatrix:
shaderEngine.loadMatrix(glGetUniformLocation(shaderEngine.standard, "viewMatrix"), camera.viewMatrix);
shaderEngine.loadMatrix(glGetUniformLocation(shaderEngine.obj, "viewMatrix"), camera.viewMatrix);
System.out.println(camera.viewMatrix.toString());
In the shader I receive it with:
uniform mat4 viewMatrix;
and in the shader's main I set the fragment color:
gl_FragColor = vec4(viewMatrix[0][3] / -256,0,0,1);
Which is BLACK (so viewMatrix[0][3] == 0), while my matrix output in Java looks like this:
1.0 0.0 0.0 -128.0
0.0 1.0 0.0 -22.75
0.0 0.0 1.0 -128.0
0.0 0.0 0.0 1.0
Your confusion seems to come from how array subscripts address the elements of a matrix in GLSL. The first subscript is the column and the second is the row.
Thus, unless you transpose your matrix, column 1 row 4 == 0.0.
If you transpose your matrix or swap the subscripts, you will get -128.0.
That second parameter in the call to glUniformMatrix4 (...) allows you to transpose the matrix before GLSL gets its hands on it, by the way. This will allow you to treat everything as row-major if that is more natural to you.
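As a sketch of what that means for the matrix printed above (assuming it was uploaded column-major, i.e. with the transpose parameter left false):
uniform mat4 viewMatrix;
void main() {
    // viewMatrix[0] is the first COLUMN, so [0][3] is column 0, row 3: 0.0.
    // The translation sits in the fourth column, so swap the subscripts:
    float tx = viewMatrix[3][0];   // -128.0 for the matrix shown above
    gl_FragColor = vec4(tx / -256.0, 0.0, 0.0, 1.0);   // 0.5, a visible dark red
}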
My problem was that I was uploading the matrices to the shader while the bound program was 0.
My issue is that I have a (working) orthographic vertex and fragment shader pair that lets me specify the center X and Y of a sprite via 'translateX' and 'translateY' uniforms. I multiply by a hardcoded projectionMatrix and everything works as far as the orthographic operation goes. The incoming geometry for this shader is centered around 0, 0, 0.
I now want to figure out what that center point (0, 0, 0 in local coordinate space) becomes after the translations, because I need this information in the fragment shader. I assumed that I could create a vector at 0, 0, 0 and then apply the same translations. However, I'm NOT getting the correct answer.
My question: what am I doing wrong, and how can I even debug what's going on? I know that the value being computed must be wrong, but I have no insight into what it is. (My platform is Xcode 4.2 on OS X, developing for OpenGL ES 2.0 on iOS.)
Here's my vertex shader:
// Vertex Shader for pixel-accurate rendering
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
varying vec4 v_translatedOrigin;   // assumed declaration; it is used below but missing from the posted excerpt
uniform float translateX;
uniform float translateY;
// Set up orthographic projection
// this is for 640 x 960
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,        0.0, -1.0,
                              0.0,       2.0/640.0,  0.0, -1.0,
                              0.0,       0.0,       -1.0,  0.0,
                              0.0,       0.0,        0.0,  1.0);
void main()
{
    // Set position
    gl_Position = a_position;
    // Translate by the uniforms for offsetting
    gl_Position.x += translateX;
    gl_Position.y += translateY;
    // Translate
    gl_Position *= projectionMatrix;
    // Do all the same translations to a vector with origin at 0,0,0
    vec4 toPass = vec4(0, 0, 0, 1); // initialize. doesn't matter if w is 1 or 0
    toPass.x += translateX;
    toPass.y += translateY;
    toPass *= projectionMatrix;
    // this SHOULD pass the computed value to my fragment shader.
    // unfortunately, whatever value is sent, isn't right.
    //v_translatedOrigin = toPass;
    // instead, I use this as a workaround, since I do know the correct values for my
    // situation. of course this is hardcoded and is terrible.
    v_translatedOrigin = vec4(500.0, 200.0, 0.0, 0.0);
}
EDIT: In response to my orthographic matrix being wrong: the following is what Wikipedia has to say about ortho projections, and my -1's look right, because in my case the 4th element of the first row of my matrix should be -((right+left)/(right-left)); with right = 960 and left = 0, that is -(960/960), which is -1.
EDIT: I've possibly uncovered the root issue here - what do you think?
Why does your ortho matrix have -1's in the bottom of each column? Those should be zeros. Granted, that should not affect anything.
I'm more concerned about this:
gl_Position *= projectionMatrix;
What does that mean? Matrix multiplication is not commutative; M * a is not the same as a * M. So which side do you expect gl_Position to be multiplied on?
Oddly, the GLSL spec does not say (I filed a bug report on this). So you should go with what is guaranteed to work:
gl_Position = projectionMatrix * gl_Position;
Also, you should use proper vectorized code. You should have one translate uniform, which is a vec2. Then you can just do gl_Position.xy = a_position.xy + translate;. You'll have to fill in the Z and W with constants (gl_Position.zw = vec2(0, 1);).
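A minimal sketch of that suggestion (the translate and projectionMatrix uniforms are assumed replacements for the original translateX/translateY uniforms and the hardcoded constant):
attribute vec4 a_position;
uniform vec2 translate;          // one vec2 instead of translateX / translateY
uniform mat4 projectionMatrix;   // assumed uniform; must be laid out column-major
void main() {
    vec4 p = a_position;
    p.xy += translate;                     // one vectorized add
    gl_Position = projectionMatrix * p;    // matrix on the left, as above
}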
Matrices in GLSL are column major. The first four values are the first column of the matrix, not the first row. You are multiplying with a transposed ortho matrix.
I have to echo Nicol Bolas's sentiment. Two wrongs happening to make things work is frustrating, but doesn't make them any less wrong. The fact that things are showing up where you expect is likely because the translation portion of your matrix is 0, 0, 0.
The equation you posted is correct, but the notation is row major, and OpenGL is column major.
I run afoul of this stuff every new project I start. This site is a really good resource that helped me keep these things straight. They've got another page on projection matrices.
If you're not sure if your orthographic projection is correct (right now it isn't), try plugging the same values into glOrtho, and reading the values back out of GL_PROJECTION.