I have code similar to the one in this question:
some opengl and glm explanation
I have a combined matrix that I pass as a single uniform
//C++
mat4 combinedMatrix = projection * view * model;
//GLSL doesn't work
out_position = combinedMatrix * vec4(vertex, 1.0);
It doesn't work. But if I instead pass in each individual matrix and do all the multiplication in the shader:
//GLSL works
out_position = projection * view * model * vec4(vertex, 1.0);
It works.
I can't see anything wrong with my matrices in the C++ code.
The following works too:
//C++
mat4 combinedMatrix = projection * view * model;
vec4 p = combinedMatrix * v;
//pass in vertex p as a vec4
//GLSL works
out_position = vertex;
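Since matrix multiplication is associative, the two shader variants must agree mathematically. A small plain-Python sketch (hypothetical stand-in matrices; column-major layout, as used by GLM and OpenGL) confirms this, which suggests looking at how the combined matrix is uploaded (for example, the transpose argument of glUniformMatrix4fv) rather than at the math itself:

```python
# Column-major 4x4 helpers (the layout GLM and OpenGL use).
# Element (row r, col c) of matrix m lives at m[c][r].

def mat_mul(a, b):
    """C = A * B for 4x4 column-major matrices stored as lists of columns."""
    return [[sum(a[k][r] * b[c][k] for k in range(4)) for r in range(4)]
            for c in range(4)]

def mat_vec(m, v):
    """w = M * v for a column vector v."""
    return [sum(m[k][r] * v[k] for k in range(4)) for r in range(4)]

# Arbitrary stand-ins for projection, view and model.
P = [[(c * 4 + r) * 0.25 for r in range(4)] for c in range(4)]
V = [[1.0 if r == c else 0.1 * r for r in range(4)] for c in range(4)]
M = [[2.0 if r == c else 0.0 for r in range(4)] for c in range(4)]
vertex = [1.0, 2.0, 3.0, 1.0]

combined = mat_mul(mat_mul(P, V), M)            # "CPU" premultiplication
a = mat_vec(combined, vertex)                   # combinedMatrix * vec4(vertex, 1.0)
b = mat_vec(P, mat_vec(V, mat_vec(M, vertex)))  # projection * view * model * v
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

If both paths agree on paper but not on screen, the usual culprit is memory layout: uploading a row-major matrix without setting the transpose flag effectively transposes the whole combined transform.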
I think the problem could be in the matrix multiplication you do in your code.
How is the following multiplication performed?
mat4 combinedMatrix = projection * view * model
It looks quite odd to me; matrix multiplication cannot be done this way unless I am totally wrong.
This is the way I perform it:
void matrixMultiply(PATRIA_Matrix *result,
                    const PATRIA_Matrix *srcA,
                    const PATRIA_Matrix *srcB)
{
    PATRIA_Matrix tmp;
    int i;
    for (i = 0; i < 4; i++) {
        tmp.m[i][0] = (srcA->m[i][0] * srcB->m[0][0]) +
                      (srcA->m[i][1] * srcB->m[1][0]) +
                      (srcA->m[i][2] * srcB->m[2][0]) +
                      (srcA->m[i][3] * srcB->m[3][0]);
        tmp.m[i][1] = (srcA->m[i][0] * srcB->m[0][1]) +
                      (srcA->m[i][1] * srcB->m[1][1]) +
                      (srcA->m[i][2] * srcB->m[2][1]) +
                      (srcA->m[i][3] * srcB->m[3][1]);
        tmp.m[i][2] = (srcA->m[i][0] * srcB->m[0][2]) +
                      (srcA->m[i][1] * srcB->m[1][2]) +
                      (srcA->m[i][2] * srcB->m[2][2]) +
                      (srcA->m[i][3] * srcB->m[3][2]);
        tmp.m[i][3] = (srcA->m[i][0] * srcB->m[0][3]) +
                      (srcA->m[i][1] * srcB->m[1][3]) +
                      (srcA->m[i][2] * srcB->m[2][3]) +
                      (srcA->m[i][3] * srcB->m[3][3]);
    }
    memcpy(result, &tmp, sizeof(PATRIA_Matrix));
}
Probably I am wrong on this, but I am quite sure you should follow this path.
The way I see your example, it looks like a pointer multiplication to me :( (though I don't have the specifics of your mat4 matrix class/struct).
Related
I'm having trouble creating skeletal animation; my model has incorrect transformations:
void Animation(glm::mat4 a[])
{
float Factor= fmod(glfwGetTime(),1.0);
for(int b=0;b<BoneList.size();b++)
{
Bone *BoneT = &BoneList[b];
aiMatrix4x4 temp = inverse;
while(BoneT)
{
aiVector3D sc= BoneT->ScaleFrame[0] + (Factor * (BoneT->ScaleFrame[1] - BoneT->ScaleFrame[0]));
aiMatrix4x4 S=aiMatrix4x4();
S[0][0]=sc.x;
S[1][1]=sc.y;
S[2][2]=sc.z;
aiVector3D tr= BoneT->LocFrame[0] + (Factor * (BoneT->LocFrame[1] - BoneT->LocFrame[0]));
aiMatrix4x4 T=aiMatrix4x4();
T[0][3]=tr.x;
T[1][3]=tr.y;
T[2][3]=tr.z;
aiQuaternion R;
aiQuaternion::Interpolate(R, BoneT->RotFrame[0], BoneT->RotFrame[1], Factor);
R = R.Normalize();
temp*=BoneT->NodeTransform*(T* aiMatrix4x4(R.GetMatrix()) * S );
BoneT=BoneT->BoneParent;
}
temp*=BoneList[b].offset;
temp.Transpose();
ai_to_glm(temp,a[b]);
}
}
I'm creating a temp aiMatrix4x4 to preserve Assimp's matrix multiplication order, then I convert the aiMatrix4x4 to glm::mat4 using this function:
void ai_to_glm(const aiMatrix4x4 &from, glm::mat4 &to)
{
to[0][0] = from[0][0];
to[0][1] = from[0][1];
to[0][2] = from[0][2];
to[0][3] = from[0][3];
to[1][0] = from[1][0];
to[1][1] = from[1][1];
to[1][2] = from[1][2];
to[1][3] = from[1][3];
to[2][0] = from[2][0];
to[2][1] = from[2][1];
to[2][2] = from[2][2];
to[2][3] = from[2][3];
to[3][0] = from[3][0];
to[3][1] = from[3][1];
to[3][2] = from[3][2];
to[3][3] = from[3][3];
}
However, the end frame of the model looks like this:
I noticed that if I remove the translation matrix from the function, the model looks closer to what it was supposed to be.
The skinning is done in the shader:
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 Tpos;
layout (location = 2) in ivec4 Bones;
layout (location = 3) in vec4 Weight;
uniform mat4 view;
uniform mat4 proj;
uniform mat4 LocRot;
uniform mat4 Test[8];
out vec3 vertexColor;
out vec2 TextCoord;
void main()
{
mat4 BoneTransform = Test[Bones.x]* Weight.x;
BoneTransform += Test[Bones.y] * Weight.y;
BoneTransform += Test[Bones.z] * Weight.z;
BoneTransform += Test[Bones.w] * Weight.w;
gl_Position = proj * view *LocRot* BoneTransform *vec4(aPos, 1.0);
vertexColor = vec3(1,1,1);
TextCoord = Tpos;
}
and the uniform is accessed like this:
for(int i=0; i<4;i++)
{
glUniformMatrix4fv(glGetUniformLocation(ActiveShader->ID, "Test[0]")+i, 1, GL_FALSE, glm::value_ptr(AnimMatrix[i]));
}
What I am aware of:
- the inverse matrix is an identity matrix, which currently does nothing to this model
- some weight sums aren't equal to 1.0, but I don't think that is the problem
- changing the matrix multiplication order doesn't solve it
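On the weight-sum point: the shader's weighted blend only reproduces the bone transforms exactly when the weights sum to 1. A small sketch (plain Python, with identity bones as a stand-in) shows that unnormalized weights scale the whole vertex, so they can in fact deform the mesh:

```python
def blend(mats, weights):
    """BoneTransform = sum of w_i * M_i, as accumulated in the vertex shader."""
    return [[sum(w * m[c][r] for m, w in zip(mats, weights)) for r in range(4)]
            for c in range(4)]

def apply(m, v):
    """w = M * v for a column-major 4x4 matrix."""
    return [sum(m[k][r] * v[k] for k in range(4)) for r in range(4)]

identity = [[1.0 if r == c else 0.0 for r in range(4)] for c in range(4)]
v = [1.0, 2.0, 3.0, 1.0]

# Weights that sum to 1 reproduce the rest pose exactly...
ok = blend([identity] * 4, [0.4, 0.3, 0.2, 0.1])
assert all(abs(x - y) < 1e-9 for x, y in zip(apply(ok, v), v))

# ...but weights that sum to 0.8 scale every component by 0.8,
# so unnormalized weights really can distort the mesh.
bad = blend([identity] * 4, [0.4, 0.2, 0.1, 0.1])
assert all(abs(x - 0.8 * y) < 1e-9 for x, y in zip(apply(bad, v), v))
```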
The model is created and exported in Blender.
link to model https://send.firefox.com/download/fe0b85d3f4581630/#6S0Vr9EIjgLNN03rerMW0w
My bet is that the ai_to_glm function is at fault here, but I am not sure.
Edit: I noticed that the rotations are flipped as well, as shown in the images; however, multiplying by inverseai (the inverted root bone transformation) does nothing.
Update: I transposed the Assimp matrix before conversion and it fixed most problems, but the offsets and parent inheritance are still bugged.
Before any suspicion: I had no idea that I already had a Stack Overflow account, and I am answering this question from my real account.
Fixing this required multiple things:
Iterating over the bones and assigning children/parents used unstable pointers and got corrupted; solving that fixed the major issue.
I used codingadventures's answer from the question Matrix calculations for gpu skinning.
My ai_to_glm function was wrong, and after replacing it with
glm::mat4 ai_to_glm(aiMatrix4x4* from)
{
glm::mat4 to = glm::mat4(1.0f);
to[0][0] = (GLfloat)from->a1; to[0][1] = (GLfloat)from->b1; to[0][2] = (GLfloat)from->c1; to[0][3] = (GLfloat)from->d1;
to[1][0] = (GLfloat)from->a2; to[1][1] = (GLfloat)from->b2; to[1][2] = (GLfloat)from->c2; to[1][3] = (GLfloat)from->d2;
to[2][0] = (GLfloat)from->a3; to[2][1] = (GLfloat)from->b3; to[2][2] = (GLfloat)from->c3; to[2][3] = (GLfloat)from->d3;
to[3][0] = (GLfloat)from->a4; to[3][1] = (GLfloat)from->b4; to[3][2] = (GLfloat)from->c4; to[3][3] = (GLfloat)from->d4;
return to;
}
After doing that, it was fixed.
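A possible way to see why the transposing conversion matters: Assimp's aiMatrix4x4 is row-major, while glm::mat4 is column-major. Modeling both as nested lists in plain Python (a sketch, not the real types) shows that a one-to-one index copy drops the translation into the bottom row, while the transposing copy puts it in glm's translation column:

```python
def ai_to_glm_fixed(ai):
    """glm[c][r] = ai[r][c]: transpose while copying (as in the fixed version)."""
    return [[ai[r][c] for r in range(4)] for c in range(4)]

def ai_to_glm_naive(ai):
    """The original bug: to[i][j] = from[i][j], mixing the two conventions."""
    return [[ai[c][r] for r in range(4)] for c in range(4)]

# Row-major translation by (tx, ty, tz): offsets sit in the last column.
tx, ty, tz = 5.0, 6.0, 7.0
ai = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
ai[0][3], ai[1][3], ai[2][3] = tx, ty, tz

good = ai_to_glm_fixed(ai)
assert good[3][:3] == [tx, ty, tz]    # translation lands in glm column 3, as glm expects

bad = ai_to_glm_naive(ai)
assert bad[3][:3] == [0.0, 0.0, 0.0]  # translation ended up in the bottom row instead
```

That matches the symptom described above: with the naive copy, removing the translation made the model look closer to correct, because the misplaced translation was corrupting the w row.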
This is the code that produces the projection, view and model matrices that get sent to the shader:
GL.glEnable(GL.GL_BLEND)
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
print('{}, {}'.format(arguments['cameraXOffset'], arguments['cameraYOffset']))
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
The projection matrix goes from 0.0 to screen width, and from 0.0 to screen height. That allows me to use the actual width in pixels of the tiles (32x32) when determining the vertex floats. Also, when the user presses the wasd keys, the camera accumulates offsets that span the width or height of a tile (always 32). Unfortunately, to reflect that offset in the view matrix, it seems that I need to normalize it, and I can't figure out how to do it so a single movement in any cardinal direction spans a single tile and nothing more. It constantly accumulates an error, so at the end of the map in any direction it shows a band of background (white in this case, for now).
This is the most important part that determines how much it will scroll with the given camera offsets:
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
Can any of you figure out if that "normalization" for the sake of the view matrix is correct? Or is this a rounding issue? In that case, could I solve it somehow?
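For what it's worth, the arithmetic can be checked directly. With proj = ortho(0, width, 0, height), one world unit is one pixel, so a view translation of 32.0 should move exactly one tile with no normalization at all. A plain-Python sketch (assuming a hypothetical 800-pixel-wide screen and 32-pixel tiles) illustrates this:

```python
# ortho(0, width, ...) maps x to x * 2/width - 1 (NDC), so a view
# translation of t world units shifts everything by t * 2/width in NDC,
# which is exactly t pixels on screen.
width, tile = 800.0, 32.0

def ndc_shift(t_world):
    return t_world * 2.0 / width

def pixel_shift(t_world):
    return ndc_shift(t_world) * width / 2.0   # back from NDC to pixels

# Translating the view by 32 world units moves exactly one 32-px tile:
assert abs(pixel_shift(32.0) - 32.0) < 1e-9

# The "(offset / 32) / width" normalization yields a shift far smaller
# than one tile, which is one place accumulated error can come from:
normalized = (tile / 32.0) / width            # = 0.00125 per tile step
assert pixel_shift(normalized) != 32.0
```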
Vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
out vec4 v_Color;
out vec2 v_TexCoord;
uniform mat4 u_Proj;
uniform mat4 u_View;
uniform mat4 u_Model;
void main()
{
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
v_TexCoord = texCoord; v_Color = color;
}
FINAL VERSION:
Solved. As the commenter mentioned, I had to change this line in the vertex shader:
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
to:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
The final version of the code, which finally allows the user to scroll exactly one tile over:
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / arguments['screenWidth']
arguments['cameraYOffset'] = (float(-arguments['cameraYOffset']) / 32) / arguments['screenHeight']
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
You have to flip the order of the matrices when you transform the vertex coordinate to the clip space coordinate:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied by the matrix from the right.
If a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This also applies to matrix multiplication itself: in a chain of concatenated matrices, the first matrix to be applied to the vector has to be the rightmost one, and the last one the leftmost.
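The quoted 2×2 example can be verified numerically. This plain-Python sketch models mat2 as a list of columns (GLSL's constructor order) and reproduces both results, including the v * m == transpose(m) * v identity:

```python
# mat2(1., 2., 3., 4.) is column-major: columns (1, 2) and (3, 4).
m = [[1.0, 2.0], [3.0, 4.0]]   # m[col][row]
v = [10.0, 20.0]

def mat_vec(m, v):             # w = m * v (vector on the right)
    return [sum(m[k][r] * v[k] for k in range(2)) for r in range(2)]

def vec_mat(v, m):             # w = v * m (vector on the left)
    return [sum(v[k] * m[c][k] for k in range(2)) for c in range(2)]

def transpose(m):
    return [[m[c][r] for c in range(2)] for r in range(2)]

assert mat_vec(m, v) == [1.0*10 + 3.0*20, 2.0*10 + 4.0*20]   # [70, 100]
assert vec_mat(v, m) == [1.0*10 + 2.0*20, 3.0*10 + 4.0*20]   # [50, 110]
# Multiplying from the left equals multiplying the transpose from the right:
assert vec_mat(v, m) == mat_vec(transpose(m), v)
```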
I need to be able to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for the normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
vec3 normal, lightDir;
vec4 diffuse, ambient, globalAmbient;
float NdotL;
// Transformation part
normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
// Calculate color
lightDir = normalize(vec3(gl_LightSource[0].position));
NdotL = max(abs(dot(normal, lightDir)), 0.0);
diffuse = gl_Color * gl_LightSource[0].diffuse;
ambient = gl_Color * gl_LightSource[0].ambient;
globalAmbient = gl_LightModel.ambient * gl_Color;
gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all transformations in the two lines under // Transformation part. Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse transpose of the upper left 3*3 of the 4*4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But, if a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4*4 matrix transformationMatrix is an orthogonal matrix, meaning the X, Y, and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper left 3*3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note, this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97), and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal to
m1 * (v * m2);
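The effect of the inverse transpose can be demonstrated with a concrete non-uniform scale. This plain-Python sketch (diagonal matrices only, for brevity) shows that transforming the normal with the matrix itself breaks perpendicularity to the surface, while the inverse transpose preserves it:

```python
# Non-uniform scale M = diag(2, 1, 1). A tangent t and normal n start out
# perpendicular; transforming n with M itself breaks that, while the
# inverse-transpose of M preserves it.
M = [2.0, 1.0, 1.0]                    # diagonal of the scale matrix
M_inv_T = [1.0 / s for s in M]         # inverse-transpose of a diagonal matrix

t = [1.0, 1.0, 0.0]                    # tangent, lies in the surface
n = [1.0, -1.0, 0.0]                   # normal, perpendicular to t
assert sum(a * b for a, b in zip(t, n)) == 0.0

t2 = [s * a for s, a in zip(M, t)]     # tangent transforms with M: (2, 1, 0)

naive = [s * a for s, a in zip(M, n)]          # wrong: (2, -1, 0)
assert sum(a * b for a, b in zip(t2, naive)) != 0.0    # no longer perpendicular

correct = [s * a for s, a in zip(M_inv_T, n)]  # right: (0.5, -1, 0)
assert sum(a * b for a, b in zip(t2, correct)) == 0.0  # perpendicularity preserved
```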
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at this.
I'm taking a course in WebGL at NTNU. I'm currently exploring what the shaders do and how I can use them.
An example we have shows us that we compute a projection matrix, then set it in the vertex shader, then make a draw call. I wanted to try to do this matrix computation in a shader.
This means I have to put the code somewhere else than the main() function in the vertex shader, since that one is invoked many times per draw call.
Vertex shader:
uniform vec3 camRotation;
attribute vec3 position;
void main() {
// I want this code to run only once per draw call
float rX = camRotation[0];
float rY = camRotation[1];
float rZ = camRotation[2];
mat4 camMatrix = mat4(
cos(rY) * cos(rZ), cos(rZ) * sin(rX) * sin(rY) - cos(rX) * sin(rZ), sin(rX) * sin(rZ) + cos(rX) * cos(rZ) * sin(rY), 0, //
cos(rY) * sin(rZ), cos(rX) * cos(rZ) + sin(rX) * sin(rY) * sin(rZ), cos(rX) * sin(rY) * sin(rZ) - cos(rZ) * sin(rX), 0, //
-sin(rY), cos(rY) * sin(rX), cos(rX) * cos(rY), 0, //
0, 0, 0, 1
);
// End of code in question
gl_Position = camMatrix * vec4(position, 1);
gl_PointSize = 5.0;
}
Is it possible? Am I a fool for trying?
AFAIK, there's no way to do that. You should compute camMatrix in your JS code and pass it to the shader via a uniform:
uniform mat4 camMatrix;
attribute vec3 position;
void main() {
gl_Position = camMatrix * vec4(position, 1);
gl_PointSize = 5.0;
}
Now you need to compute the matrix in JS:
// assuming that program is your compiled shader program,
// gl is your WebGL context, and rX, rY, rZ hold the rotation angles
const cos = Math.cos;
const sin = Math.sin;
gl.uniformMatrix4fv(gl.getUniformLocation(program, 'camMatrix'), false, [
cos(rY) * cos(rZ), cos(rZ) * sin(rX) * sin(rY) - cos(rX) * sin(rZ), sin(rX) * sin(rZ) + cos(rX) * cos(rZ) * sin(rY), 0,
cos(rY) * sin(rZ), cos(rX) * cos(rZ) + sin(rX) * sin(rY) * sin(rZ), cos(rX) * sin(rY) * sin(rZ) - cos(rZ) * sin(rX), 0,
-sin(rY), cos(rY) * sin(rX), cos(rX) * cos(rY), 0,
0, 0, 0, 1
]);
No, it's not possible; the whole concept of shaders is to be vectorizable so they can run in parallel. Even if you could, there wouldn't be much gain, as the GPU's speed advantage is (among other things) inherently based on its capability to do computations in parallel. That aside, you usually have a combined view-projection matrix that remains static during all draw calls (of a frame) and a model/world matrix attached to each object you're drawing.
The projection matrix does what its name implies: it projects the points in either a perspective or an orthographic manner (you can think of it as the lens of your camera).
The view matrix is a transform that translates/rotates that projection (camera position and orientation), while the per-object world/model matrix contains the transformations (translation, rotation and scale) of the individual object.
In your shader you then transform your vertex position to world space using the per-object model/world matrix, and finally to clip space using the premultiplied ViewProjection matrix:
gl_Position = matViewProjection * (matWorld * vPosition);
As you're drawing points, depending on your use case you could reduce the world matrix to just a translation vector.
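That last point can be made concrete: for a translation-only matrix, M * v is just v plus the translation, so a point's world matrix carries no more information than a translation vector. A plain-Python sketch (column-major layout assumed):

```python
# For a translation-only world matrix, M * v is v + t (with w staying 1),
# which is why a point's world matrix can shrink to a translation vector.
def translate_matrix(t):
    """Column-major 4x4 identity with the translation in column 3."""
    m = [[1.0 if r == c else 0.0 for r in range(4)] for c in range(4)]
    m[3][:3] = t
    return m

def mat_vec(m, v):
    return [sum(m[k][r] * v[k] for k in range(4)) for r in range(4)]

t = [5.0, -2.0, 3.0]
v = [1.0, 2.0, 3.0, 1.0]
assert mat_vec(translate_matrix(t), v) == [v[i] + t[i] for i in range(3)] + [1.0]
```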
I want to calculate the tangentspace in GLSL.
Here is the important part from my code:
// variables passed from vertex to fragment program //
in vec3 vertexNormal;
in vec2 textureCoord;
in vec3 lightPosition;
in vec3 vertexPos;
in mat4 modelView;
in mat4 viewMatrix;
// TODO: textures for color and normals //
uniform sampler2D normal;
uniform sampler2D texture;
// this defines the fragment output //
out vec4 color;
void main() {
// ###### TANGENT SPACE STUFF ############
vec4 position_eye = modelView * vec4(vertexPos,1.0);
vec3 q0 = dFdx(position_eye.xyz);
vec3 q1 = dFdy(position_eye.xyz);
vec2 st0 = dFdx(textureCoord.st);
vec2 st1 = dFdy(textureCoord.st);
float Sx = ( q0.x * st1.t - q1.x * st0.t) / (st1.t * st0.s - st0.t * st1.s);
float Tx = (-q0.x * st1.s + q1.x * st0.s) / (st1.t * st0.s - st0.t * st1.s);
q0.x = st0.s * Sx + st0.t * Tx;
q1.x = st1.s * Sx + st1.t * Tx;
vec3 S = normalize( q0 * st1.t - q1 * st0.t);
vec3 T = normalize(-q0 * st1.s + q1 * st0.s);
vec3 n = texture2D(normal,textureCoord).xyz;
n = smoothstep(-1,1,n);
mat3 tbn = (mat3(S,T,n));
// #######################################
n = tbn * n; // transfer the read normal to worldSpace;
vec3 eyeDir = - (modelView * vec4(vertexPos,1.0)).xyz;
vec3 lightDir = (modelView * vec4(lightPosition.xyz, 1.0)).xyz;
After this code there is Phong shading, which is mixed with the texture. When applying the shaders to a regular texture without normal mapping, everything works fine.
I need to calculate this in the shader because of other dynamic parts later on.
Can someone tell me what is going wrong?
This is how it currently looks:
Can someone tell me what is going wrong?
You're trying to compute the tangent-space basis matrix in your shader; that's what's wrong. You can't actually do that.
dFdx/y computes the rate-of-change of the given value, locally in screen-space, across the surface of a primitive. In other words, it computes the derivative of the given value over the primitive. Your input values are linearly interpolated.
The derivative of a line is a constant. And linear interpolation produces linear results. Therefore, every fragment from each primitive will get the same derivative for the inputs/outputs. Therefore, every fragment will compute the same S and T values, since they're based entirely on the derivatives.
That's why you're getting a faceted surface: two of the three matrix components will be identical across a triangle's surface.
Your computation doesn't work because it can't work. You're going to have to do what everyone else does: calculate the TBN matrix offline and pass it as per-vertex attributes. Or use some known property of the mesh to compute it. But this? It isn't going to work.
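For reference, here is a sketch of the offline per-triangle computation the answer recommends (one common formulation, in plain Python; real pipelines then average these per vertex and orthonormalize against the normal):

```python
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Solve e1 = du1*T + dv1*B and e2 = du2*T + dv2*B for T and B."""
    e1, e2 = sub(p1, p0), sub(p2, p0)
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    T = scale(sub(scale(e1, dv2), scale(e2, dv1)), r)
    B = scale(sub(scale(e2, du1), scale(e1, du2)), r)
    return T, B

# A triangle in the XY plane with axis-aligned UVs:
p0, p1, p2 = [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 3.0, 0.0]
uv0, uv1, uv2 = [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]
T, B = triangle_tangent(p0, p1, p2, uv0, uv1, uv2)
assert T == [2.0, 0.0, 0.0] and B == [0.0, 3.0, 0.0]

# The solved T/B reproduce the position edge from the UV edge:
e1 = add(scale(T, uv1[0] - uv0[0]), scale(B, uv1[1] - uv0[1]))
assert e1 == sub(p1, p0)
```

Unlike the dFdx/dFdy approach, these values are computed once per triangle offline, so they can be averaged and smoothed across the mesh before being uploaded as vertex attributes.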