I'm getting thoroughly confused over matrix definitions. I have a matrix class, which holds a float[16] which I assumed is row-major, based on the following observations:
float matrixA[16] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
float matrixB[4][4] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 }, { 8, 9, 10, 11 }, { 12, 13, 14, 15 } };
matrixA and matrixB both have the same linear layout in memory (i.e. all numbers are in order). According to http://en.wikipedia.org/wiki/Row-major_order this indicates a row-major layout.
matrixA[0] == matrixB[0][0];
matrixA[3] == matrixB[0][3];
matrixA[4] == matrixB[1][0];
matrixA[7] == matrixB[1][3];
Therefore, matrixB[0] = row 0, matrixB[1] = row 1, etc. Again, this indicates row-major layout.
My problem / confusion comes when I create a translation matrix which looks like:
1, 0, 0, transX
0, 1, 0, transY
0, 0, 1, transZ
0, 0, 0, 1
Which is laid out in memory as, { 1, 0, 0, transX, 0, 1, 0, transY, 0, 0, 1, transZ, 0, 0, 0, 1 }.
Then when I call glUniformMatrix4fv, I need to set the transpose flag to GL_FALSE, indicating that it's column-major, or else transforms such as translate/scale don't get applied correctly:
If transpose is GL_FALSE, each matrix is assumed to be supplied in
column major order. If transpose is GL_TRUE, each matrix is assumed to
be supplied in row major order.
Why does my matrix, which appears to be row-major, need to be passed to OpenGL as column-major?
The matrix notation used in OpenGL documentation does not describe the in-memory layout of OpenGL matrices.
I think it'll be easier if you drop/forget about the entire "row/column-major" thing. That's because, in addition to the notation, the programmer can also decide how to lay the matrix out in memory (whether adjacent elements form rows or columns), which adds to the confusion.
OpenGL matrices have the same memory layout as DirectX matrices:
x.x x.y x.z 0
y.x y.y y.z 0
z.x z.y z.z 0
p.x p.y p.z 1
or
{ x.x x.y x.z 0 y.x y.y y.z 0 z.x z.y z.z 0 p.x p.y p.z 1 }
x, y, z are 3-component vectors describing the matrix's coordinate system (the local coordinate system relative to the global coordinate system).
p is a 3-component vector describing the origin of matrix coordinate system.
Which means that the translation matrix should be laid out in memory like this:
{ 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, transX, transY, transZ, 1 }.
Leave it at that, and the rest should be easy.
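To make that concrete, here is a minimal sketch of the layout in code (the variable names are mine, not from the question):

float transX = 10.0f, transY = 0.0f, transZ = -5.0f; // example values
float translation[16] = {
    1, 0, 0, 0,                // x basis vector
    0, 1, 0, 0,                // y basis vector
    0, 0, 1, 0,                // z basis vector
    transX, transY, transZ, 1  // origin (translation)
};
// Passed to glUniformMatrix4fv with transpose = GL_FALSE.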
--- citation from the old OpenGL FAQ ---
9.005 Are OpenGL matrices column-major or row-major?
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.
Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.
Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.
I'm going to update this 9-year-old answer.
A mathematical matrix is defined as an m x n matrix, where m is the number of rows and n is the number of columns. For the sake of completeness, rows are horizontal and columns are vertical. When denoting a matrix element in mathematical notation as Mij, the first index (i) is the row index and the second (j) is the column index. When two matrices are multiplied, i.e. A(m x n) * B(m1 x n1), the resulting matrix has the number of rows of the first argument (A) and the number of columns of the second (B), and the number of columns of the first argument (A) must match the number of rows of the second (B), so n == m1. Clear so far, yes?
Now, regarding in-memory layout. You can store a matrix in two ways: row-major and column-major. Row-major means that the rows are laid out one after another, linearly, so elements go from left to right, row after row, kind of like English text. Column-major means that the columns are laid out one after another, linearly, so elements start at the top left and go from top to bottom, column after column.
Example:
//matrix
|a11 a12 a13|
|a21 a22 a23|
|a31 a32 a33|
//row-major
[a11 a12 a13 a21 a22 a23 a31 a32 a33]
//column-major
[a11 a21 a31 a12 a22 a32 a13 a23 a33]
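In code, the two layouts differ only in how a two-dimensional index maps to a linear one; a small sketch (not part of the original example):

const int R = 3, C = 3;   // rows, columns
float rowMajor[R * C];    // a11 a12 a13 a21 a22 a23 a31 a32 a33
float colMajor[R * C];    // a11 a21 a31 a12 a22 a32 a13 a23 a33

int r = 1, c = 2;                 // element a23, using 0-based indices
float rm = rowMajor[r * C + c];   // rows are contiguous
float cm = colMajor[c * R + r];   // columns are contiguous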
Now, here's the fun part!
There are two ways to store a 3D transformation in a matrix.
As I mentioned before, a matrix in 3D essentially stores the coordinate system's basis vectors and position. So, you can store those vectors in either the rows or the columns of a matrix. When they're stored as columns, you multiply the matrix with a column vector, like this:
//convention #1
|vx.x vy.x vz.x pos.x| |p.x| |res.x|
|vx.y vy.y vz.y pos.y| |p.y| |res.y|
|vx.z vy.z vz.z pos.z| x |p.z| = |res.z|
| 0 0 0 1| | 1| |res.w|
However, you can also store those vectors as rows, and then you'll be multiplying a row vector with a matrix:
//convention #2 (uncommon)
| vx.x vx.y vx.z 0|
| vy.x vy.y vy.z 0|
|p.x p.y p.z 1| x | vz.x vz.y vz.z 0| = |res.x res.y res.z res.w|
|pos.x pos.y pos.z 1|
So. Convention #1 often appears in mathematical texts. Convention #2 appeared in the DirectX SDK at some point. Both are valid.
In regard to the question: if you're using convention #1, your matrices are column-major, and if you're using convention #2, they're row-major. However, the memory layout is the same in both cases:
[vx.x vx.y vx.z 0 vy.x vy.y vy.z 0 vz.x vz.y vz.z 0 pos.x pos.y pos.z 1]
Which is why I said, 9 years ago, that it is easier to memorize which element is which.
To summarize the answers by SigTerm and dsharlet: The usual way to transform a vector in GLSL is to right-multiply the transformation matrix by the vector:
mat4 T; vec4 v; vec4 v_transformed;
v_transformed = T*v;
In order for that to work, OpenGL expects the memory layout of T to be, as described by SigTerm,
{1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, transX, transY, transZ, 1 }
which is also called 'column major'. In your shader code (as indicated by your comments), however, you left-multiplied the transformation matrix by the vector:
v_transformed = v*T;
which only yields the correct result if T is transposed, i.e. has the layout
{ 1, 0, 0, transX, 0, 1, 0, transY, 0, 0, 1, transZ, 0, 0, 0, 1 }
(i.e. 'row major'). Since you already provided the correct layout to your shader, namely row major, it was not necessary to set the transpose flag of glUniformMatrix4fv.
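In other words, the same transform can be handed to OpenGL in either layout, as long as the transpose flag matches the data; a sketch (loc is a placeholder uniform location):

float tx = 1.0f, ty = 2.0f, tz = 3.0f;
float colMajor[16] = { 1, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   tx, ty, tz, 1 };
float rowMajor[16] = { 1, 0, 0, tx,  0, 1, 0, ty,  0, 0, 1, tz,  0, 0, 0, 1 };

// Both calls store the same matrix in the shader, for use as v_transformed = T*v:
glUniformMatrix4fv(loc, 1, GL_FALSE, colMajor); // data already column-major
glUniformMatrix4fv(loc, 1, GL_TRUE,  rowMajor); // ask GL to transpose it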
You are dealing with two separate issues.
First, your examples are dealing with the memory layout. Your [4][4] array is row major because you've used the convention established by C multi-dimensional arrays to match your linear array.
The second issue is a matter of convention for how you interpret matrices in your program. glUniformMatrix4fv is used to set a shader parameter. Whether your transform is computed for a row vector or column vector transform is a matter of how you use the matrix in your shader code. Because you say you need to use column vectors, I assume your shader code is using the matrix A and a column vector x to compute x' = A x.
I would argue that the documentation of glUniformMatrix is confusing. The description of the transpose parameter is a really roundabout way of just saying whether the matrix is transposed or not. OpenGL itself is just transporting that data to your shader; whether you want to transpose it or not is a matter of the convention you establish for your program.
This link has some good further discussion: http://steve.hollasch.net/cgindex/math/matrix/column-vec.html
I think that the existing answers here are very unhelpful, and I can see from the comments that people are left feeling confused after reading them, so here is another way of looking at this situation.
As a programmer, if I want to store an array in memory, I cannot store a rectangular grid of numbers, because computer memory doesn't work like that; I have to store the numbers in a linear sequence.
Let's say I have a 2x2 matrix and I initialize it in my code like this:
const matrix = [a, b, c, d];
I can successfully use this matrix in other parts of my code provided I know what each of the array elements represents.
The OpenGL specification defines what each index position represents, and this is all you need to know to construct an array and pass it to OpenGL and have it do what you expect.
The row- or column-major issue only comes into play when I want to write my matrix in a document that describes my code, because mathematicians write matrices as rectangular grids of numbers. However, this is just a convention, a way of writing things down, and has no impact on the code I write or the arrangement of numbers in memory on my computer. You could easily re-write these mathematics papers using some other notation, and they would work just as well.
For the array above, I have two options for writing this array in my documentation as a rectangular grid:
|a b| OR |a c|
|c d| |b d|
Whichever way I choose to write my documentation, this will have no impact on my code or the order of the numbers in memory on my computer, it's just documentation.
In order for people reading my documentation to know the order that I stored the values in the linear array in my program, I can specify that this is a column major or row major representation of the array as a matrix. If it is in column major order then I should traverse the columns to get the linear arrangement of numbers. If this is a row major representation then I should traverse the rows to get the linear arrangement of numbers.
In general, writing documentation in row major order makes life easier for programmers, because if I want to translate this matrix
|a b c|
|d e f|
|g h i|
into code, I can write it like this:
const matrix = [
  a, b, c,
  d, e, f,
  g, h, i,
];
For example:
GLM stores matrix values as m[4][4], but it treats matrices as if they have column-major order. For a two-dimensional C array m[x][y], x represents a row and y represents a column, which means the matrix represented by this array is in fact in row-major order. The trick is to treat m[x][y] as if x represents a column and y represents a row. It is like transposing the matrix without performing any additional operations to achieve it.
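A quick illustration of that convention; a sketch (glm::translate would normally fill this in for you):

#include <glm/glm.hpp>

glm::mat4 t(1.0f);                        // identity
t[3] = glm::vec4(2.0f, 3.0f, 4.0f, 1.0f); // t[3] is the 4th *column*: the translation
float tx = t[3][0];                       // [column][row], so this reads 2.0f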
What I currently have causes my 3D object to become flat, although it is looking at my target.
Vector4 up;
newMatrix.SetIdentity();
up.set_x(0);
up.set_y(1);
up.set_z(0);
Vector4 zaxis = player_->transform().GetTranslation() - spider_->transform().GetTranslation();
zaxis.Normalise();
Vector4 xaxis = CrossProduct(up, zaxis);
xaxis.Normalise();
Vector4 yaxis = CrossProduct(zaxis, xaxis);
newMatrix.set_m(0, 0, xaxis.x()); newMatrix.set_m(0, 1, xaxis.y()); newMatrix.set_m(0, 2, xaxis.z());
newMatrix.set_m(1, 0, yaxis.x()); newMatrix.set_m(1, 1, yaxis.y()); newMatrix.set_m(1, 2, yaxis.z());
newMatrix.set_m(2, 0, zaxis.x()); newMatrix.set_m(2, 1, zaxis.y()); newMatrix.set_m(2, 2, zaxis.z());
Excuse the method for putting values into the matrix, I'm working with what my framework gives me.
Vector4 Game::CrossProduct(Vector4 v1, Vector4 v2)
{
    Vector4 crossProduct;
    crossProduct.set_x((v1.y() * v2.z()) - (v2.y() * v2.z()));
    crossProduct.set_y((v1.z() * v2.x()) - (v1.z() * v2.x()));
    crossProduct.set_z((v1.x() * v2.y()) - (v1.x() * v2.y()));
    return crossProduct;
}
What am I doing wrong here?
Note that I have also tried adding the fourth row, with the 1 in the corner, just in case, and it made no change.
You've got a problem when (0,1,0) is close to parallel to the direction you want to look at. Then the cross product will fail, leading to one or two basis vectors being zero, which can produce a 2D appearance. But that would happen only if your objects are offset from each other only along the y axis. To avoid it, you can test the dot product between the up vector and the view direction; if the absolute value of the result is bigger than 0.7, use (1,0,0) instead (as the up, or right, or whatever...), as in the sketch below.
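A minimal sketch of that test, reusing the question's Vector4 interface (the DotProduct helper is hypothetical, since the question's framework isn't shown in full). Note also the operand pattern of a correct cross product: every component mixes v1 and v2, never the same vector twice:

// Standard cross product; each component uses both input vectors.
Vector4 Cross(const Vector4 &v1, const Vector4 &v2)
{
    Vector4 r;
    r.set_x((v1.y() * v2.z()) - (v1.z() * v2.y()));
    r.set_y((v1.z() * v2.x()) - (v1.x() * v2.z()));
    r.set_z((v1.x() * v2.y()) - (v1.y() * v2.x()));
    return r;
}

// Pick a safe up vector before building the basis (DotProduct is assumed).
Vector4 up;
up.set_x(0); up.set_y(1); up.set_z(0);
if (std::fabs(DotProduct(up, zaxis)) > 0.7f) {   // view nearly parallel to (0,1,0)
    up.set_x(1); up.set_y(0); up.set_z(0);       // fall back to (1,0,0)
}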
Also, as we know nothing about your notation, we cannot confirm that your matrix setup is correct (it might be, or it might not; it could be transposed, etc.). For more info, take a look at:
Understanding 4x4 homogenous transform matrices
But most likely your matrix also holds the perspective transform, and by setting its cells you are overriding it completely, leading to no perspective and hence 2D output. In that case you should multiply the original perspective matrix by your new matrix and use the result.
I have an Nx3 Eigen matrix.
I have an Nx1 Eigen matrix.
I'm trying to multiply each row of the Nx3 coefficient-wise by the corresponding scalar in the Nx1, so I can scale a bunch of 3D vectors.
I'm sure I'm overlooking something obvious but I can't get it to work.
#include <Eigen/Dense>
using namespace Eigen;

MatrixXf m(4, 3);
m << 1, 2, 3,
     4, 5, 6,
     7, 8, 9,
     10, 11, 12;
VectorXf dots(4); // the Nx1
dots << 2, 2, 2, 2;
I want the resulting matrix to be Nx3, like so:
2, 4, 6
8, 10, 12
14, 16, 18
20, 22, 24
You can use broadcasting in the array world:
m.array().colwise() *= dots.array();
or observe that all you want to do is apply a non-uniform scaling:
m = dots.asDiagonal() * m;
Both expressions will generate similar code.
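For completeness, a self-contained sketch (my own test harness, not from the original answer) checking that both forms agree:

#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main() {
    MatrixXf m(4, 3);
    m << 1, 2, 3,
         4, 5, 6,
         7, 8, 9,
         10, 11, 12;
    VectorXf dots(4);
    dots << 2, 2, 2, 2;

    MatrixXf a = dots.asDiagonal() * m;  // non-uniform scaling form
    MatrixXf b = m;
    b.array().colwise() *= dots.array(); // broadcasting form
    std::cout << (a - b).norm() << "\n"; // prints 0: both agree
}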
Okay, so I got something working. I'm probably doing something wrong, but this worked for me, so I thought I would share. I wrote my first line of C++ a week ago, so I figure I deserve some grace. Anyone with a better solution is encouraged to post.
// Coefficient-wise (not matrix) multiplication of an Nx3 by an Nx1, in place.
// For multiplying dot products by vectors.
void N3xNcoefIP(MatrixXf &A, MatrixXf &B) {
    // Replicate the Nx1 column across A's column count, then multiply
    // element-wise. Note A.cols(), not A.size(): size() is rows*cols in Eigen.
    A.array() *= B.replicate(1, A.cols()).array();
}
Using the mathematics library GLM, I use this code to combine the Euler angle rotations into a rotation matrix.
#include <GLM/gtc/matrix_transform.hpp>
using namespace glm;
mat4 matrix = rotate(mat4(1), X, vec3(1, 0, 0))
* rotate(mat4(1), Y, vec3(0, 1, 0))
* rotate(mat4(1), Z, vec3(0, 0, 1));
Does this result in an Euler angle sequence of XYZ or ZYX? I am not sure, since matrix multiplication does not behave the same as scalar multiplication.
Remember that matrix calculations in OpenGL use a notation known as column vectors (http://en.wikipedia.org/wiki/Column_vector). So, any point transformation is expressed by a system of linear equations, written in column vector notation like this:
[P'] = M.[P], where M = M1.M2.M3
This means that the first transformation applied to the point, expressed by the vector [P], is M3, after that M2, and at last M1.
Answering your question: the resulting Euler angle sequence is ZYX, since the Z rotation is the last matrix you write in the product, and therefore the first one applied to the points.
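One can check this numerically with GLM; a small sketch with arbitrary angles:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;

float X = 0.3f, Y = 0.5f, Z = 0.7f; // arbitrary angles, in radians
mat4 M = rotate(mat4(1), X, vec3(1, 0, 0))
       * rotate(mat4(1), Y, vec3(0, 1, 0))
       * rotate(mat4(1), Z, vec3(0, 0, 1));

vec4 p(1, 2, 3, 1);
vec4 a = M * p;                                   // combined transform
vec4 b = rotate(mat4(1), X, vec3(1, 0, 0)) *
        (rotate(mat4(1), Y, vec3(0, 1, 0)) *
        (rotate(mat4(1), Z, vec3(0, 0, 1)) * p)); // Z first, then Y, then X

// a and b are identical up to floating-point error.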
I'm currently in the process of writing a function to find an "exact" bounding-sphere for a set of points in 3D space. I think I have a decent understanding of the process so far, but I've gotten stuck.
Here's what I'm working with:
A) Points in 3D space
B) A 3x3 covariance matrix, stored in a 4x4 matrix class (referenced by cells m0, m1, m2, m3, m4, etc., instead of rows and cols)
I've found the 3 eigenvalues of the covariance matrix of the points, and I've set up a function to convert a matrix to reduced row echelon form (RREF) via Gaussian elimination.
I've tested both of those functions against figures in examples I've found online, and they appear to be working correctly.
The next step is to find the eigenvectors using the equation:
(M - λ*I)*V
... where M is the covariance matrix, λ is one of the eigenvalues, I is the identity matrix, and V is the eigenvector.
However, I don't seem to be constructing the 4x3 matrix correctly before rref'ing it, as the far-right column, where the eigenvector components should be calculated, is 0 both before and after running rref. I understand why it is zero afterwards (without any constants, the simplest solution to a linear system of equations is all coefficients equal to zero), but I'm at a loss as to what to put there.
Here's the function so far:
Vect eigenVector(const Matrix &M, const float eval) {
    Matrix A = Matrix(M);
    A -= Matrix(IDENTITY) * eval;
    A.rref();
    return Vect(A[m3], A[m7], A[m11]);
}
The 3x3 covariance matrix is passed as M, and the eigenvalue as eval. Matrix(IDENTITY) returns an identity matrix. m3,m7, and m11 correspond to the far-right column of a 4x3 matrix.
Here's the example 3x3 matrix (stored in a 4x4 matrix class) I'm using to test the functions:
Matrix(1.5f, 0.5f, 0.75f, 0,
0.5f, 0.5f, 0.25f, 0,
0.75f, 0.25f, 0.5f, 0,
0, 0, 0, 0);
I'm correctly (?) getting the eigenvalues 2.097, 0.3055, and 0.09756 from my other function.
eigenVector() above correctly subtracts the passed eigenvalue from the diagonal (elements 0,0; 1,1; 2,2).
Matrix A after rref():
[(1, 0, 0, -0),
(-0, 1, 0, -0),
(-0, -0, 1, -0),
(0, 0, 0, -2.09694)]
For the rref() function, I'm using a translated python function found here:
http://elonen.iki.fi/code/misc-notes/python-gaussj/index.html
What should the matrix I pass to rref() look like to get an eigenvector out?
Thanks
(M - λI)V is not an equation, it's just an expression. However, (M - λI)V = 0 is. And it's the equation that relates eigenvectors to eigenvalues.
So assuming your rref function works, I would imagine that you create an augmented matrix as [(M - λI) | 0], where 0 denotes a zero-vector. This sounds like what you're doing already, so I would have to assume that your rref function is broken. Or alternatively, it doesn't know how to handle 4x4 matrices (as opposed to 4x3 matrices, which is what it would expect for an augmented matrix).
Ah, with a few more hours of grueling research, I've managed to solve my problem.
The issue is that there is no "one" set of eigenvectors but rather an infinite number with varying magnitudes.
The method I chose was to use a REF (row echelon form) instead of RREF, leaving enough information in the matrix to allow me to substitute in an arbitrary value for z, and work backwards to solve for y and x. I then normalized the vector to get a unit eigenvector, which should work for my purposes.
My final code:
Vect eigenVector(const Matrix &M, const float eVal) {
    Matrix A = Matrix(M);
    A -= Matrix(IDENTITY) * eVal;
    A.ref();                          // row echelon form (not RREF)
    float K = 16;                     // arbitrary value for z
    float J = -K * A[m6];             // substitute in K to find J
    float I = -K * A[m2] - J * A[m1]; // substitute in K and J to find I
    Vect eVec = Vect(I, J, K);
    eVec.norm();                      // normalize eigenvector
    return eVec;
}
The only oddity is that the eigenvectors come out facing in the opposite direction than I expected (they were negated!), but that's a moot point: a negated eigenvector is still a valid eigenvector for the same eigenvalue. A quick sanity check is sketched below.
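As a quick sanity check for results like these, one can verify M*v ≈ λ*v directly; a sketch using plain arrays rather than the question's Matrix/Vect classes:

#include <cmath>

// v is an eigenvector of M for eigenvalue lambda iff M*v equals lambda*v
// component-wise. A negated v passes too: -v is an equally valid eigenvector.
bool isEigenpair(const float M[3][3], const float v[3],
                 float lambda, float eps = 1e-3f)
{
    for (int i = 0; i < 3; ++i) {
        float Mv = M[i][0] * v[0] + M[i][1] * v[1] + M[i][2] * v[2];
        if (std::fabs(Mv - lambda * v[i]) > eps) return false;
    }
    return true;
}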