OpenGL/GLSL not working without setting transpose to GL_TRUE - opengl

GLM matrices don't seem to work without transposing. Given this transform:
glm::mat4 proj = glm::ortho(0.0f, 960.0f, 0.0f, 540.0f, -1.0f, 1.0f);
GL_TRUE has to be set:
glUniformMatrix4fv(GetUniformLocation(name), 1, GL_TRUE, &matrix[0][0]);
Isn't GLM already supposed to be in column-major form?

If you don't want to transpose the matrix, then the vector has to be multiplied to the matrix from the right in the shader code:
mat4 transformation;
vec4 vertexPosition;
gl_Position = transformation * vertexPosition;
Explanation:
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied to the matrix from the right.
If a vector is multiplied to a matrix from the left, the result corresponds to multiplying a column vector to the transposed matrix from the right:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
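The same rule can be reproduced on the CPU with GLM (a minimal sketch for illustration, not part of the quoted reference):
#include <glm/glm.hpp>
#include <cassert>

int main()
{
    glm::vec2 v(10.0f, 20.0f);
    glm::mat2 m(1.0f, 2.0f,    // first column
                3.0f, 4.0f);   // second column
    glm::vec2 a = v * m;                  // vector from the left            -> (50, 110)
    glm::vec2 b = glm::transpose(m) * v;  // transposed matrix from the right -> (50, 110)
    assert(a == b);                       // both expressions give the same result
    return 0;
}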
This means:
If a matrix is defined like this:
mat4 m44 = mat4(
    vec4( Xx, Xy, Xz, 0.0),
    vec4( Yx, Yy, Yz, 0.0),
    vec4( Zx, Zy, Zz, 0.0),
    vec4( Tx, Ty, Tz, 1.0) );
And the matrix uniform mat4 transformation is set like this (see glUniformMatrix4fv):
glUniformMatrix4fv( ...., 1, GL_FALSE, &(m44[0][0]) );
Then the vector has to be multiplied to the matrix from the right:
gl_Position = transformation * vertexPosition;
But of course, the matrix can be set up transposed:
mat4 m44 = mat4(
    vec4( Xx, Yx, Zx, Tx),
    vec4( Xy, Yy, Zy, Ty),
    vec4( Xz, Yz, Zz, Tz),
    vec4( 0.0, 0.0, 0.0, 1.0) );
Or it can be transposed when set to the uniform variable:
glUniformMatrix4fv( ...., 1, GL_TRUE, &(m44[0][0]) );
Then the vector has to be multiplied to the matrix from the left:
gl_Position = vertexPosition * transformation;
Note that the GLM API documentation refers to The OpenGL Shading Language specification 4.20.
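For completeness, a minimal C++ sketch (my own illustration, assuming an OpenGL loader such as GLAD is already initialized): since GLM stores matrices in column-major order, the matrix can be uploaded with GL_FALSE and the vector multiplied from the right in the shader:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void SetProjection(GLuint program, GLint location)   // hypothetical helper
{
    glm::mat4 proj = glm::ortho(0.0f, 960.0f, 0.0f, 540.0f, -1.0f, 1.0f);
    glUseProgram(program);
    // no transpose needed: glm::value_ptr(proj) points to the same column-major data as &proj[0][0]
    glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(proj));
    // in the vertex shader: gl_Position = proj * vertexPosition;
}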

Related

OpenGL texture transformation when two textures on same mesh

I have a situation where I have two textures on a single mesh. I want to transform these textures independently. I have base code where I was able to load and transform one texture. Now I have code to load two textures, but the issue is that when I try to transform the first texture, both of them get transformed, because we are modifying the texture coordinates.
The green one is the first texture and the star is the second texture.
I have no idea how to transform just the second texture. Guide me with any solution you have.
You can do it in many ways; one of them would be to have two different texture matrices and then pass them to the vertex shader.
#version 400 compatibility
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
out vec2 TexCoord;
out vec2 TexCoord2;
out vec3 Normal;
out vec3 FragPos;
uniform mat4 textureMatrix;
uniform mat4 textureMatrix2;
uniform mat4 NormalMatrix;
uniform mat4 ubo_model;
uniform mat4 ubo_view;
uniform mat4 ubo_projection;
void main()
{
    Normal = mat3(NormalMatrix) * aNormal;
    // each set of texture coordinates is transformed by its own matrix
    vec4 mTex  = textureMatrix  * vec4(aTexCoord.x, aTexCoord.y, 0.0, 1.0);
    vec4 mTex2 = textureMatrix2 * vec4(aTexCoord.x, aTexCoord.y, 0.0, 1.0);
    TexCoord  = mTex.xy;
    TexCoord2 = mTex2.xy;
    FragPos = vec3(ubo_model * vec4(aPos, 1.0));
    gl_Position = ubo_projection * ubo_view * vec4(FragPos, 1.0);
}
This is how you can create a texture matrix.
glm::mat4x4 GetTextureMatrix()
{
    glm::mat4x4 matrix = glm::mat4x4(1.0f);
    // offset to the rotation/scaling pivot
    matrix = glm::translate(matrix, glm::vec3(-PositionX + 0.5, PositionY + 0.5, 0.0));
    // scale and rotate around that pivot
    matrix = glm::scale(matrix, glm::vec3(1.0 / ScalingX, 1.0 / ScalingY, 0.0));
    matrix = glm::rotate(matrix, glm::radians(RotationX), glm::vec3(1.0, 0.0, 0.0));
    matrix = glm::rotate(matrix, glm::radians(RotationY), glm::vec3(0.0, 1.0, 0.0));
    matrix = glm::rotate(matrix, glm::radians(-RotationZ), glm::vec3(0.0, 0.0, 1.0));
    // move the pivot back and apply the texture's own translation
    matrix = glm::translate(matrix, glm::vec3(-PositionX - 0.5, -PositionY - 0.5, 0.0));
    matrix = glm::translate(matrix, glm::vec3(PositionX, PositionY, 0.0));
    return matrix;
}
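The two matrices then only have to be uploaded to their respective uniforms; a minimal sketch (the uniform names come from the shader above, the helper itself is hypothetical and assumes an OpenGL loader is already included):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void SetTextureMatrices(GLuint program, const glm::mat4 &tex1, const glm::mat4 &tex2)
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "textureMatrix"),  1, GL_FALSE, glm::value_ptr(tex1));
    glUniformMatrix4fv(glGetUniformLocation(program, "textureMatrix2"), 1, GL_FALSE, glm::value_ptr(tex2));
}
Each texture then gets its own position/rotation/scaling state, so transforming one no longer affects the other.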

How to rotate a 2D square in OpenGL?

I have a basic square. How can I rotate it?
let vertices = vec![
    x, y, uv1.0, uv1.1, layer,          // top left
    x + w, y, uv2.0, uv2.1, layer,      // top right
    x + w, y + h, uv3.0, uv3.1, layer,  // bottom right
    x, y + h, uv4.0, uv4.1, layer,      // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I rotate my square to make it look like this?
I know that I can achieve this by using transformation matrices. I did it in web development while working on SVG, but I have no idea how to get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
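For example, a GLM sketch in C++ (the question uses Rust, but cgmath and nalgebra-glm provide equivalent translate/rotate functions; names and parameters here are assumptions for illustration) that rotates the square around its own center rather than around the origin:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotation of a square with top-left corner (x, y) and size w x h around its center.
glm::mat4 SquareRotation(float x, float y, float w, float h, float angleDegrees)
{
    glm::vec3 center(x + w * 0.5f, y + h * 0.5f, 0.0f);
    glm::mat4 m(1.0f);
    m = glm::translate(m, center);                                      // 3. move the pivot back
    m = glm::rotate(m, glm::radians(angleDegrees), glm::vec3(0, 0, 1)); // 2. rotate around the z axis
    m = glm::translate(m, -center);                                     // 1. move the pivot to the origin
    return m;
}
The result is then combined with the projection before uploading, e.g. MVP = ortho * SquareRotation(...).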
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 360.0);
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
Thank you for all the answers

How to change ortho matrix size in shader

I tried scaling the orthographic projection matrix, but it seems it doesn't scale its dimensions. I am using an orthographic projection for directional lighting, but if the main camera is above the ground I want to make the range of my ortho matrix bigger.
For example, if the camera is at zero, the ortho matrix is
glm::ortho(-10.0f , 10.0f , -10.0f , 10.0f , 0.01f, 4000.0f)
but if the camera goes to 400 in the y direction, I want this matrix to be like
glm::ortho(-410.0f , 410.0f , -410.0f , 410.0f , 0.01f, 4000.0f)
but I want to do this in the shader, with matrix multiplication or addition.
The orthographic projection matrix can be computed as follows:
r = right, l = left, b = bottom, t = top, n = near, f = far
x:  2/(r-l)       0             0             0
y:  0             2/(t-b)       0             0
z:  0             0             -2/(f-n)      0
t:  -(r+l)/(r-l)  -(t+b)/(t-b)  -(f+n)/(f-n)  1
In glsl, for instance:
float l = -410.0f;
float r = 410.0f;
float b = -410.0f;
float t = 410.0f;
float n = 0.01f;
float f = 4000.0f;
mat4 projection = mat4(
vec4(2.0/(r-l), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(t-b), 0.0, 0.0),
vec4(0.0, 0.0, -2.0/(f-n), 0.0),
vec4(-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0)
);
Furthermore you can scale the orthographic projection matrix:
c++
glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
float scale = 10.0f / 410.0f;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;
glsl
float scale = 10.0 / 410.0;
mat4 scale_mat = mat4(
vec4(scale, 0.0, 0.0, 0.0),
vec4(0.0, scale, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
mat4 ortho_new = ortho * scale_mat;

What values should I send to the normalized view matrix so that tilemap scrolling only spans one tile?

This is the code that produces the projection, view and model matrices that get sent to the shader:
GL.glEnable(GL.GL_BLEND)
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
print('{}, {}'.format(arguments['cameraXOffset'], arguments['cameraYOffset']))
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
The projection matrix goes from 0.0 to screen width, and from 0.0 to screen height. That allows me to use the actual width in pixels of the tiles (32x32) when determining the vertex floats. Also, when the user presses the wasd keys, the camera accumulates offsets that span the width or height of a tile (always 32). Unfortunately, to reflect that offset in the view matrix, it seems that I need to normalize it, and I can't figure out how to do it so a single movement in any cardinal direction spans a single tile and nothing more. It constantly accumulates an error, so at the end of the map in any direction it shows a band of background (white in this case, for now).
This is the most important part that determines how much it will scroll with the given camera offsets:
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
Can any of you figure out if that "normalization" for the sake of the view matrix is correct? Or is this a rounding issue? In that case, could I solve it somehow?
Vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
out vec4 v_Color;
out vec2 v_TexCoord;
uniform mat4 u_Proj;
uniform mat4 u_View;
uniform mat4 u_Model;
void main()
{
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
v_TexCoord = texCoord; v_Color = color;
}
FINAL VERSION:
Solved. As mentioned by the commenter, I had to change this line in the vertex shader:
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
to:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
The final version of the code, which allows the user to scroll over exactly one tile:
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / arguments['screenWidth']
arguments['cameraYOffset'] = (float(-arguments['cameraYOffset']) / 32) / arguments['screenHeight']
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
You have to flip the order of the matrices when you transform the vertex coordinate to the clip space coordinate:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied to the matrix from the right.
If a vector is multiplied to a matrix from the left, the result corresponds to multiplying a column vector to the transposed matrix from the right:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This also applies to the matrix multiplication itself: in a row of concatenated matrices, the first matrix that has to be applied to the vector has to be the rightmost one, and the last matrix the leftmost.
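The same rule applies when the matrices are concatenated on the CPU; a small GLM sketch for illustration (not from the original answer):
#include <glm/glm.hpp>

// The model matrix is applied to the vertex first, so it is the rightmost factor;
// the projection is applied last, so it is the leftmost.
glm::mat4 BuildMvp(const glm::mat4 &proj, const glm::mat4 &view, const glm::mat4 &model)
{
    return proj * view * model;   // clip = proj * (view * (model * vertex))
}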

fast way to rotate a matrix about its current position

If I have a matrix then the point O(0,0,0) will be translated to some point P(x, y, z). So rotating a matrix about its current position is effectively multiplying the matrix by a rotation matrix about P.
So I want a function like:
mat4 rotate(mat4 matrix, vec3 axis, float angle);
my current idea is:
vec4 p = {0, 0, 0, 1};
p = p * matrix;
generate translation matrix T, from point p
generate rotation matrix R, from axis and angle
return matrix * T * R * -T;
but I feel like there should be a more efficient way to do this...
Yep, that's how I'd do it. But one subtle correction: reverse the order of -T and T:
return matrix * -T * R * T
You want to first 'undo' the translational origin of matrix, then rotate, then re-apply the translational origin. This is easier to see if you take, for example, a traditional scale/rotate/translate matrix (S * R2 * T) and expand it; then you can see that
(S * R2 * T) * -T * R * T
is doing what you want.
EDIT: With respect to efficiency, it totally depends on usage. No, this is not 'great' -- usually you have more information about matrix that allows you to do this in a less round-about way. E.g., if the matrix is constructed from S * R * T as above, obviously we could simply have changed the way the matrix is constructed in the first place -- S * R2 * R * T, injecting the rotation where it should be without having to 'undo' anything.
But unless you are doing this in real time on 10K+ matrices that need to be recomputed each time, it should not be a problem.
If matrix is coming from an unknown source and you need to modify it ex post facto, then indeed there really isn't any other choice.
In general, a transformation matrix (OpenGL/glsl/glm) is defined like this:
mat4 m44 = mat4(
    vec4( Xx, Xy, Xz, 0.0),   // x-axis
    vec4( Yx, Yy, Yz, 0.0),   // y-axis
    vec4( Zx, Zy, Zz, 0.0),   // z-axis
    vec4( Tx, Ty, Tz, 1.0) ); // translation
A translation matrix looks like this:
mat4 translate = mat4(
    vec4( 1.0, 0.0, 0.0, 0.0),
    vec4( 0.0, 1.0, 0.0, 0.0),
    vec4( 0.0, 0.0, 1.0, 0.0),
    vec4( Tx,  Ty,  Tz,  1.0) );
And a rotation matrix (e.g. around Y-Axis) looks like this:
float angle;
mat4 rotate = mat4(
vec4( cos(angle), 0, sin(angle), 0 ),
vec4( 0, 1, 0, 0 ),
vec4( -sin(angle), 0, cos(angle), 0 ),
vec4( 0, 0, 0, 1 ) );
A matrix multiplication C = A * B works like this:
mat4 A, B, C;
// C = A * B
for ( int k = 0; k < 4; ++ k )
    for ( int j = 0; j < 4; ++ j )
        C[k][j] = A[0][j] * B[k][0] + A[1][j] * B[k][1] + A[2][j] * B[k][2] + A[3][j] * B[k][3];
This means that the result of translate * rotate is:
mat4 m = mat4(
vec4( cos(angle), 0, sin(angle), 0 ),
vec4( 0, 1, 0, 0 ),
vec4( -sin(angle), 0, cos(angle), 0 ),
vec4( Tx, Ty, Tz, 1 ) );
This means that if you want to rotate a matrix M around its origin, you have to split the matrix into an "orientation" matrix and a "translation" matrix: rotate the orientation matrix and then add the translation back:
mat4 M, R;
float Tx = M[3][0];
float Ty = M[3][1];
float Tz = M[3][2];
M[3][0] = 0.0; M[3][1] = 0.0; M[3][2] = 0.0;
mat4 MR = R * M;
MR[3][0] = Tx; MR[3][1] = Ty; MR[3][2] = Tz;
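Put together as the function asked for in the question, a GLM sketch (my own illustration, assuming column-major matrices, the v' = M * v convention, and the angle in radians):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate "matrix" about its own position (its translation column),
// keeping that position fixed.
glm::mat4 rotateAboutPosition(const glm::mat4 &matrix, const glm::vec3 &axis, float angle)
{
    glm::vec3 p(matrix[3]);                       // current position (translation column)
    glm::mat4 r = glm::rotate(glm::mat4(1.0f), angle, axis);

    glm::mat4 m = matrix;
    m[3] = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);     // strip the translation
    glm::mat4 mr = r * m;                         // rotate the orientation part
    mr[3] = glm::vec4(p, 1.0f);                   // restore the translation
    return mr;
}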