How do you access the individual elements of a glsl mat4? - glsl

Is it possible to access the individual elements of a glsl mat4 type matrix? How?

Section 5.6 of the GLSL reference manual says you can access mat4 elements using operator[][]-style syntax in the following way:
mat4 m;
m[1] = vec4(2.0); // sets the second column to all 2.0
m[0][0] = 1.0; // sets the upper left element to 1.0
m[2][3] = 2.0; // sets the 4th element of the third column to 2.0
Remember, OpenGL defaults to column-major matrices, which means access is of the form mat[col][row]. In the example, m[2][3] sets the 4th row (index 3) of the 3rd column (index 2) to 2.0. In m[1] = vec4(2.0), an entire column is set at once: when only one index is used, it refers to a column, so m[1] is the second column vector.
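A minimal sketch of reading elements back out (the variable names are illustrative only): a single element uses the same [col][row] indexing, a single index gives a whole column, and a row has to be gathered manually:
mat4 m = mat4(1.0);                                   // identity matrix
float e  = m[2][3];                                   // element at column 2, row 3
vec4 col = m[1];                                      // the entire second column
vec4 row = vec4(m[0][1], m[1][1], m[2][1], m[3][1]);  // the second row, gathered element by element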

Related

GLSL vec2 use as array in float?

I'm trying to make something on Shadertoy: https://www.shadertoy.com/view/wsffDN
(original ref: https://www.shadertoy.com/view/3dtSD7)
In Buffer A, line 18, uv is declared as a vec2:
vec2 uv = (fragCoord.xy - iResolution.xy*.5) / iResolution.y;
but in this line
sceneColor = vec3((uv[0] + stagger) / initpack + 0.05*0., -0, 0.05);
uv[0] is used as a float.
How does this work, and what does the value of uv[0] become?
It is perfectly legal to access the components of any vec type (or mat type, for that matter) with array syntax. You can even use a non-constant array index (depending on the GLSL version; 1.30+ allows it). uv[0] does exactly what it looks like: it accesses the first component of the vector, the same component you get with uv.x.
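A minimal sketch of the equivalent ways to read that first component (the values are illustrative):
vec2 uv = vec2(0.25, 0.75);
float a = uv[0]; // 0.25, array-style access
float b = uv.x;  // the same component via swizzle
float c = uv.s;  // the same component via texture-coordinate naming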

Transformation in vertex shader only works with post-multiplying

I am currently in the process of learning OpenGL and GLSL in order to write a simple program that loads models, displays them on the screen, transforms them, etc.
As a first stage, I wrote a pure C++ program without using OpenGL.
It works great, and it uses a row-major matrix representation, so mat[i][j] means row i, column j:
class mat4
{
    vec4 _m[4]; // vec4 is a struct with 4 fields
    ...
};
This is the relevant matrix multiplication method:
mat4 operator*(const mat4& m) const
{
    mat4 a(0.0);
    for (int i = 0; i < 4; ++i)
    {
        for (int j = 0; j < 4; ++j)
        {
            for (int k = 0; k < 4; ++k)
            {
                a[i][j] += _m[i][k] * m[k][j];
            }
        }
    }
    return a;
}
In order to get from model space to clip space I do as follows in C++:
vec4 vertexInClipSpace = projectionMat4 * viewMat4 * modelMat4 * vertexInModelSpace;
Now, trying to implement that in a GLSL shader (version 1.50) yields weird results. It works, but only if I post-multiply the vertex instead of pre-multiplying it, and in addition transpose each of the matrices.
uniform mat4 m;
uniform mat4 v;
uniform mat4 p;

void main()
{
    // works ok, but using post-multiplication and transposed matrices:
    gl_Position = vec4(vertex, 1.0f) * m * v * p;
}
Although this is mathematically OK, since v2 = P * V * M * v1 is the same as transpose(v2) = transpose(v1) * transpose(M) * transpose(V) * transpose(P),
I am obviously missing something, because I have not seen a single reference where the vertex is post-multiplied in the vertex shader.
To sum up, here are specific questions:
Why does this work? Is it even legal to post-multiply in GLSL?
How can I pass my C++ matrices so that they work properly inside the shader?
Links to related Questions:
link 1
link 2
EDIT:
Problem was sort of "solved" by altering the "transpose" flag in the call to:
glUniformMatrix4fv(
    m_modelTransformID,
    1,
    GL_TRUE,
    &m[0][0]
);
Now the multiplication in the shader is a pre-multiplication:
gl_Position = MVP * vec4(vertex, 1.0f);
Which left me somewhat puzzled, as the mathematics doesn't seem to make sense for column-major matrices that are the transpose of row-major ones.
Could someone please explain?
Citing the OpenGL FAQ:
For programming purposes, OpenGL matrices are 16-value arrays with
base vectors laid out contiguously in memory. The translation
components occupy the 13th, 14th, and 15th elements of the 16-element
matrix, where indices are numbered from 1 to 16 as described in
section 2.11.2 of the OpenGL 2.1 Specification.
Column-major versus row-major is purely a notational convention. Note
that post-multiplying with column-major matrices produces the same
result as pre-multiplying with row-major matrices. The OpenGL
Specification and the OpenGL Reference Manual both use column-major
notation. You can use any notation, as long as it's clearly stated.
About some conventions:
Row vs Column Vector
Multiplying two matrices is possible only if the number of columns of the left matrix equals the number of rows of the right matrix:
MatL[r1,c] x MatR[c,r2]
So, if you are working on a piece of paper, considering that a vector is a one-dimensional matrix, if you want to multiply a 4-component vector by a 4x4 matrix, then the vector must be:
a row vector if you post-multiply the matrix
a column vector if you pre-multiply the matrix
In a computer you can treat 4 consecutive values as either a column or a row (there is no concept of dimension), so you can post-multiply or pre-multiply a vector by the same matrix. Implicitly you are sticking with one of the two conventions.
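A minimal GLSL sketch of the two conventions applied to the same matrix (M and v are placeholder uniforms); GLSL treats the vector as a row vector when it is on the left and as a column vector when it is on the right:
uniform mat4 M; // placeholder matrix
uniform vec4 v; // placeholder vector

void main()
{
    vec4 a = M * v;            // v treated as a column vector (pre-multiplication)
    vec4 b = v * transpose(M); // v treated as a row vector; b is identical to a
}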
Row Major vs Column Major layout
Computer memory is a continuous sequence of locations. The concept of multiple dimensions doesn't exist there; it's a pure convention. All matrix elements are stored contiguously in one-dimensional memory.
If you decide to store a two-dimensional entity, you have two conventions:
storing consecutive row elements in memory (row-major)
storing consecutive column elements in memory (column-major)
Incidentally, transposing a matrix stored in row-major order is equivalent to storing its elements in column-major order.
That implies that swapping the order of the multiplication between a vector and a matrix is equivalent to multiplying the same vector, in the same order, by the transposed matrix.
OpenGL
It doesn't officially prescribe any convention, as stated above. I suggest you think of the OpenGL convention as: the translation is stored in the last column and the matrix layout is column-major.
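As an illustration of that layout (a minimal sketch; tx, ty, tz are placeholder values), a translation matrix stored the way OpenGL expects it looks like this in memory, with the base vectors contiguous and the translation in elements 12, 13, 14 (0-based, i.e. the 13th-15th values the FAQ mentions):
float tx = 1.0f, ty = 2.0f, tz = 3.0f; // placeholder translation

float t[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,  // column 0: X basis vector
    0.0f, 1.0f, 0.0f, 0.0f,  // column 1: Y basis vector
    0.0f, 0.0f, 1.0f, 0.0f,  // column 2: Z basis vector
    tx,   ty,   tz,   1.0f   // column 3: translation
};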
Why does this work? Is it even legal to post-multiply in GLSL?
It is legal. As long as you are consistent across your code, either convention/multiplication order is fine.
How can I pass my C++ matrices so that they work properly inside the shader?
If you are using two different conventions in C++ and in the shader, then you can either transpose the matrix and keep the same multiplication order, or leave the matrix as-is and reverse the multiplication order.
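Concretely, when uploading a row-major C++ matrix to a shader that uses the usual M * vertex order, the two equivalent options look like this (a sketch; loc and m are placeholders for your uniform location and matrix):
// Option 1: let GL transpose the row-major data at upload time and keep
// pre-multiplication in the shader: gl_Position = MVP * vec4(vertex, 1.0);
glUniformMatrix4fv(loc, 1, GL_TRUE, &m[0][0]);

// Option 2: upload the data as-is and reverse the order in the shader instead:
// gl_Position = vec4(vertex, 1.0) * MVP;
glUniformMatrix4fv(loc, 1, GL_FALSE, &m[0][0]);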
If you have any gaps, see Understanding 4x4 homogenous transform matrices.
If you swap between column-major (OpenGL matrices) and row-major (DX and your matrices) order, that is the same as a transpose, so you're right about that. What you are missing is that:
For orthonormal transform matrices (pure rotations, with no translation or scale), transposing a matrix is the same as inverting it:
transpose(M) = inverse(M)
which I think is the answer to your question.
As for the other question, whether it is OK to post-multiply a vertex: that is only a matter of convention and it is not forbidden in GLSL. The whole point of GLSL is that you can do almost anything there.
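A quick GLSL illustration of that property for a pure rotation (a sketch; rotationZ and the angle are mine, not from the answer):
mat3 rotationZ(float angle)
{
    float c = cos(angle);
    float s = sin(angle);
    // columns listed left to right; the columns are orthonormal
    return mat3(c,   s,   0.0,
               -s,   c,   0.0,
                0.0, 0.0, 1.0);
}

void main()
{
    mat3 R    = rotationZ(0.5);
    mat3 Rinv = transpose(R); // equal to inverse(R), because R is orthonormal
}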

matrix order in skeletal animation using assimp

I followed this tutorial and got the expected output animation for a rigged model. The tutorial uses Assimp, GLSL and C++ to load a rigged model from a file. However, there were things that I couldn't figure out.
The first thing is that Assimp's transformation matrices are row-major, and the tutorial uses a Matrix4f class which takes those transformation matrices just as they are, i.e. in row-major order. The constructor of that Matrix4f class is given as:
Matrix4f(const aiMatrix4x4& AssimpMatrix)
{
    m[0][0] = AssimpMatrix.a1; m[0][1] = AssimpMatrix.a2; m[0][2] = AssimpMatrix.a3; m[0][3] = AssimpMatrix.a4;
    m[1][0] = AssimpMatrix.b1; m[1][1] = AssimpMatrix.b2; m[1][2] = AssimpMatrix.b3; m[1][3] = AssimpMatrix.b4;
    m[2][0] = AssimpMatrix.c1; m[2][1] = AssimpMatrix.c2; m[2][2] = AssimpMatrix.c3; m[2][3] = AssimpMatrix.c4;
    m[3][0] = AssimpMatrix.d1; m[3][1] = AssimpMatrix.d2; m[3][2] = AssimpMatrix.d3; m[3][3] = AssimpMatrix.d4;
}
However, in the tutorial, the calculations for the final node transformation are done expecting the matrices to be in column-major order, as shown below:
Matrix4f NodeTransformation;
NodeTransformation = TranslationM * RotationM * ScalingM; // note here
Matrix4f GlobalTransformation = ParentTransform * NodeTransformation;

if (m_BoneMapping.find(NodeName) != m_BoneMapping.end())
{
    unsigned int BoneIndex = m_BoneMapping[NodeName];
    m_BoneInfo[BoneIndex].FinalTransformation = m_GlobalInverseTransform * GlobalTransformation * m_BoneInfo[BoneIndex].BoneOffset;
    m_BoneInfo[BoneIndex].NodeTransformation = GlobalTransformation;
}
Finally, since the calculated matrices are in row-major order, this is indicated when passing them to the shader by setting the GL_TRUE transpose flag in the following function; that way OpenGL knows the data is row-major, since OpenGL itself uses column-major order.
void SetBoneTransform(unsigned int Index, const Matrix4f& Transform)
{
    glUniformMatrix4fv(m_boneLocation[Index], 1, GL_TRUE, (const GLfloat*)Transform);
}
So, how does the calculation done assuming column-major order,
transformation = translation * rotation * scale * vertices
yield correct output? I expected that, for the calculation to hold, each matrix should first be transposed to column order, then the above calculation performed, and finally the result transposed again to get back a row-order matrix, which is also discussed in this link. However, doing so produced a horrible output. Is there something I am missing here?
You are confusing two different things:
the layout the data has in memory (row vs. column major order)
the mathematical interpretation of the operations (things like multiplication order)
It is often claimed that when working with row-major vs. column-major, things have to be transposed and the matrix multiplication order has to be reversed. But this is not true.
What is true is that mathematically, transpose(A*B) = transpose(B) * transpose(A). However, that is irrelevant here, because the matrix storage order is independent of, and orthogonal to, the mathematical interpretation of the matrices.
What I mean by this is: in math, it is exactly defined what a row and a column of a matrix are, and each element can be uniquely addressed by these two "coordinates". All the matrix operations are defined based on this convention. For example, in C = A*B, the element in the first row and first column of C is calculated as the dot product of the first row of A (transposed to a column vector) and the first column of B.
Now, the matrix storage order just defines how the matrix data is laid out in memory. As a generalization, we could define a function f(row, col) mapping each (row, col) pair to some memory address. We could then write our matrix functions in terms of f, and we could change f to adopt row-major, column-major, or something else entirely (like a Z-order curve, if we want some fun).
It doesn't matter which f we actually use (as long as the mapping is bijective); the operation C = A*B will always have the same result. What changes is just the data in memory, but we also have to use f to interpret that data. We could just write a simple print function, also using f, to print the matrix as the 2D array of columns x rows that a typical human would expect.
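A small C++ sketch of that idea (the function and type names are mine, not from the answer): the multiplication code is written once against an abstract mapping f, and only the mapping changes between layouts; the mathematical result is identical.
#include <cstddef>

// Two possible layouts expressed as a mapping f(row, col) -> linear index
inline std::size_t rowMajor(std::size_t r, std::size_t c)    { return r * 4 + c; }
inline std::size_t columnMajor(std::size_t r, std::size_t c) { return c * 4 + r; }

using IndexFn = std::size_t (*)(std::size_t, std::size_t);

// C = A * B, with all three 4x4 matrices stored through the same mapping f.
// The mathematical result is the same whichever mapping is used; only the
// placement of the numbers in memory differs.
void multiply(const float* A, const float* B, float* C, IndexFn f)
{
    for (std::size_t r = 0; r < 4; ++r)
        for (std::size_t c = 0; c < 4; ++c)
        {
            float sum = 0.0f;
            for (std::size_t k = 0; k < 4; ++k)
                sum += A[f(r, k)] * B[f(k, c)];
            C[f(r, c)] = sum;
        }
}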
The confusion arises when you use a matrix in a different layout than the one the matrix functions were written for.
If you have a matrix library which internally assumes column-major layout, and you pass in data in row-major format, it is as if you had transposed that matrix beforehand; only at this point do things get screwed up.
To confuse things even more, there is another issue related to this: the matrix * vector vs. vector * matrix issue. Some people like to write x' = x * M (with x' and x being row vectors), while others like to write y' = N * y (with column vectors). It is clear that mathematically M*x = transpose(transpose(x) * transpose(M)), so people often confuse this with row- vs. column-major order effects as well, but it is also totally independent of that. It is just a matter of convention whether you use the one or the other.
So, to finally answer your question:
The transformation matrices created there are written for the matrix * vector convention, so Mparent * Mchild is the correct matrix multiplication order.
Up to this point, the actual data layout in memory does not matter at all. It only begins to matter because you are now interfacing with a different API, with its own conventions. GL's default order is column-major. The matrix class in use is written for a row-major memory layout. So you just transpose at this point, so that GL's interpretation of the matrix matches your other library's.
The alternative would be to not convert them and instead account for the implicit transpose this introduces, either by changing the multiplication order in the shader or by adjusting the operations which created the matrix in the first place. However, I would not recommend going down that path, because the resulting code will be totally unintuitive: in the end, it would mean working with column-major matrices in a matrix class that uses a row-major interpretation.
Yes, the memory layout is similar for glm and assimp: data.html
But, according to the doc page classai_matrix4x4t:
The assimp matrix is always row-major, whereas the glm matrix is always column-major, meaning you need to transpose on conversion:
inline static Mat4 Assimp2Glm(const aiMatrix4x4& from)
{
    return Mat4(
        (double)from.a1, (double)from.b1, (double)from.c1, (double)from.d1,
        (double)from.a2, (double)from.b2, (double)from.c2, (double)from.d2,
        (double)from.a3, (double)from.b3, (double)from.c3, (double)from.d3,
        (double)from.a4, (double)from.b4, (double)from.c4, (double)from.d4
    );
}

inline static aiMatrix4x4 Glm2Assimp(const Mat4& from)
{
    return aiMatrix4x4(
        from[0][0], from[1][0], from[2][0], from[3][0],
        from[0][1], from[1][1], from[2][1], from[3][1],
        from[0][2], from[1][2], from[2][2], from[3][2],
        from[0][3], from[1][3], from[2][3], from[3][3]
    );
}
PS: In Assimp, the letters a-d denote the row and the digits 1-4 denote the column.
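A hedged usage sketch, assuming Mat4 is an alias for a double-precision glm matrix (e.g. glm::dmat4) and node is an aiNode* obtained from the imported scene:
aiMatrix4x4 ai = node->mTransformation; // row-major, as Assimp stores it
Mat4 m = Assimp2Glm(ai);                // column-major, ready for glm-side math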

Reset vertex attributes with disabled array between two runs of the same shader?

I would like to plot a bunch of curves from multi-dimensional data. For each curve I have a dataset of M variables, where each variable is either a vector of length N or just a scalar value:
x1 = [x11,x12,.......,x1N] OR x1 = X1 (scalar value)
x2 = [x21,x22,.......,x2N] OR x2 = X2
....
xM = [xM1,xM2,.......,xMN] OR xM = XM
My curve shader takes three float attributes x,y,z which represent the variables that are currently on display.
For each curve and each x,y,z, I bind a vertex buffer containing the data for the respective variable to the attribute if the data is a vector. Drawing multiple curves with only vector data works fine.
If the data for some variable is just a scalar number, I disable the attribute array and set the attribute value (for example X1) with:
glDisableVertexAttribArray(xLocation);
glVertexAttrib1f(xLocation,X1);
Now to my question: it seems that all curves use the same value (the one set for the last curve that I draw) for any vertex attribute whose array is disabled, even though I reset the values between glDrawArrays() calls. Is it simply not possible to use more than one value for an attribute with a disabled array, or should it be possible and I have a bug?
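For reference, a minimal sketch of the per-curve setup described above (Curve, curves and the member names are placeholders, not the question's actual code); this is the intended pattern, not a diagnosis of the bug:
for (const Curve& curve : curves) // hypothetical per-curve data
{
    if (curve.xIsVector)
    {
        // vector-valued variable: source the attribute from its buffer
        glBindBuffer(GL_ARRAY_BUFFER, curve.xBuffer);
        glEnableVertexAttribArray(xLocation);
        glVertexAttribPointer(xLocation, 1, GL_FLOAT, GL_FALSE, 0, nullptr);
    }
    else
    {
        // scalar variable: disable the array and set the current (constant) value
        glDisableVertexAttribArray(xLocation);
        glVertexAttrib1f(xLocation, curve.xScalar);
    }
    glDrawArrays(GL_LINE_STRIP, 0, curve.vertexCount);
}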

Calculating divergence of vector in GLSL (or gradient of vector)

I have a situation in GLSL where I need to calculate the divergence of a vector in a fragment shader:
vec3 posVector;
Divergence is mathematically given by
div(F) = ∇ · F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z
i.e. the dot product between the gradient operator and the vector field.
Does anyone know how to compute this?
The divergence of the position vector is the divergence of the identity vector field
F: ℝ³ -> ℝ³, F(r) = r
and the divergence of that is both constant and known:
div(r) = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3.
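So for posVector itself there is nothing to compute: the divergence is the constant 3.0. If you ever need the divergence of a general vector field stored in a texture, a hedged sketch using central differences (field and texelSize are assumptions, shown for a 2D field; a 3D field sampled from a sampler3D works analogously) could look like this:
// Central-difference divergence of a 2D vector field stored in a texture.
// texelSize = 1.0 / textureResolution, so derivatives are taken in UV units.
float divergence(sampler2D field, vec2 uv, vec2 texelSize)
{
    float dFx_dx = (texture(field, uv + vec2(texelSize.x, 0.0)).x
                  - texture(field, uv - vec2(texelSize.x, 0.0)).x) / (2.0 * texelSize.x);
    float dFy_dy = (texture(field, uv + vec2(0.0, texelSize.y)).y
                  - texture(field, uv - vec2(0.0, texelSize.y)).y) / (2.0 * texelSize.y);
    return dFx_dx + dFy_dy;
}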