2D Matrix to 3D Matrix - opengl

I have 2D transformations stored in a plain ol' 3x3 matrix. How can I reformat it into a matrix I can shove over to OpenGL in order to transform orthogonal shapes, polygons and suchlike?
How do I have to put the values so the transformations are preserved?
(On an unrelated note, is there a fast way to invert a 3x3 Matrix?)

Some explanation about transformation matrices: all the columns except the last one describe the orientation of a new coordinate system in the base of the current coordinate system. So the first column is the X vector of the new coordinate system as seen from the current one, the second is the new Y vector, and the third is the new Z. So far this only covers the rotation. The last column is the relative offset (translation). The last row and the bottom-right value are used for homogeneous transformations. It's best to leave the last row as 0, ..., 0, 1.
In your case you're missing the Z values, so we just insert an identity transform there, so that incoming values are left as they are.
Say this is your original matrix:
xx xy tx
yx yy ty
0 0 1
This matrix is missing the Z transformation. Inserting identity means: leave Z as is, and don't mix it with the rest. So ·z = z· = 0, except zz = 1. This gives you the following matrix (the arrows mark the inserted Z column and row):
      ↓
xx xy 0 tx
yx yy 0 ty
0  0  1 0  ←
0  0  0 1
You can apply that onto the current OpenGL matrix stack with glMultMatrix if you're not on an OpenGL 3 core profile. Be aware that OpenGL numbers the matrix in column-major order, i.e. the indices in the array go like this (hexadecimal digits):
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
This is contrary to the usual C notation, which is
0 1 2 3
4 5 6 7
8 9 a b
c d e f
With OpenGL 3 core and later you have to do matrix management and manipulation yourself anyway.
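For the pre-3.x path, here is a minimal sketch of the embedding plus the glMultMatrixf call (assuming the 2D transform is stored row-major in a plain float m3[3][3]; the function name is only illustrative):

#include <GL/gl.h>

void multiply2DTransform(const float m3[3][3])
{
    // Column-major 4x4: element (row r, column c) lives at index c*4 + r.
    GLfloat m4[16] = {
        m3[0][0], m3[1][0], 0.0f, 0.0f,   // column 0: new X axis
        m3[0][1], m3[1][1], 0.0f, 0.0f,   // column 1: new Y axis
        0.0f,     0.0f,     1.0f, 0.0f,   // column 2: identity Z
        m3[0][2], m3[1][2], 0.0f, 1.0f    // column 3: translation
    };
    glMultMatrixf(m4);
}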
EDIT for second part of question
If by inverting one means finding the matrix M^-1 for a given matrix M, so that M^-1 * M = M * M^-1 = I: for 3×3 matrices the determinant (adjugate) inversion method requires fewer operations than Gauss-Jordan elimination and is thus the most efficient way to do it. Already for 4×4 matrices, determinant inversion is slower than the other methods. http://www.sosmath.com/matrix/inverse/inverse.html
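A minimal sketch of such a determinant/adjugate inversion for a row-major 3×3 (the function name and the singularity threshold are illustrative, not taken from any particular library):

#include <cmath>

bool invert3x3(const double m[3][3], double inv[3][3])
{
    // Cofactors of the first row; Laplace expansion gives the determinant.
    double c00 =  m[1][1]*m[2][2] - m[1][2]*m[2][1];
    double c01 = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
    double c02 =  m[1][0]*m[2][1] - m[1][1]*m[2][0];
    double det = m[0][0]*c00 + m[0][1]*c01 + m[0][2]*c02;
    if (std::fabs(det) < 1e-12) return false;   // singular (or nearly so)

    double invDet = 1.0 / det;
    // inverse = adjugate / determinant (adjugate = transposed cofactor matrix)
    inv[0][0] = c00 * invDet;
    inv[1][0] = c01 * invDet;
    inv[2][0] = c02 * invDet;
    inv[0][1] = (m[0][2]*m[2][1] - m[0][1]*m[2][2]) * invDet;
    inv[1][1] = (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * invDet;
    inv[2][1] = (m[0][1]*m[2][0] - m[0][0]*m[2][1]) * invDet;
    inv[0][2] = (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * invDet;
    inv[1][2] = (m[0][2]*m[1][0] - m[0][0]*m[1][2]) * invDet;
    inv[2][2] = (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * invDet;
    return true;
}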
If you know that your matrix is orthonormal (a pure rotation plus a translation), you can do better: transpose the rotation block (everything except the bottom row and the rightmost column), and replace the rightmost column by -Rᵀ·t, i.e. the transposed rotation applied to the negated translation, leaving the very bottom-right element at 1. This exploits the fact that for orthonormal matrices M^-1 = M^T.
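For the 2D 3×3 case from the question, a sketch of that shortcut (assuming the upper-left 2×2 block really is orthonormal):

void invertRigid2D(const double m[3][3], double inv[3][3])
{
    // Transpose the orthonormal rotation block.
    inv[0][0] = m[0][0];  inv[0][1] = m[1][0];
    inv[1][0] = m[0][1];  inv[1][1] = m[1][1];
    // New translation column: -R^T * t.
    inv[0][2] = -(inv[0][0]*m[0][2] + inv[0][1]*m[1][2]);
    inv[1][2] = -(inv[1][0]*m[0][2] + inv[1][1]*m[1][2]);
    // Homogeneous bottom row stays (0, 0, 1).
    inv[2][0] = 0.0; inv[2][1] = 0.0; inv[2][2] = 1.0;
}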

Just add the fourth row and column. For example, given
2 3 3
3 2 4
0 0 1
Create the following
2 3 3 0
3 2 4 0
0 0 1 0
0 0 0 1
The transformation still occurs on the x-y plane even though it is now in three-space.

Creating a graph from only the edge lengths between nodes

If there is a graph with N nodes and I'm only given an N*N matrix of the distances from each node to every other (the diagonal is of course 0), what would be the most efficient way of generating a graph with as few edges as possible?
for n = 4 and the matrix
0 1 2 3
1 0 3 4
2 3 0 5
3 4 5 0
having only 3 edges would be enough, all connected to the 1st node:
the edge between 1 and 2 would have length 1
the edge between 1 and 3 would have length 2
the edge between 1 and 4 would have length 3
" N*N matrix of the distances from each node to every other (the diagonal is of course 0)"
This is called the adjacency matrix. It completely specifies the graph. Each non-zero value defines an edge between two vertices. You cannot reduce the number of edges below the number of non-zero values without changing the graph.
Simply create a full graph using the matrix as the (weighted) adjacency matrix, then remove unneeded edges.
Edge (a,b) is unneeded if and only if there is another path from a to b of the same length as (a,b). That is, if there is vertex c different from both a and b such that
distance(a,b) = distance(a,c) + distance(c,b)
This will not work if some distances not on the diagonal are zero or negative.
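A minimal sketch of that pruning (assuming exact distances; with floating-point values you'd compare against a tolerance):

#include <utility>
#include <vector>

std::vector<std::pair<int,int>> prunedEdges(const std::vector<std::vector<double>>& d)
{
    const int n = static_cast<int>(d.size());
    std::vector<std::pair<int,int>> edges;
    for (int a = 0; a < n; ++a) {
        for (int b = a + 1; b < n; ++b) {
            bool redundant = false;
            for (int c = 0; c < n && !redundant; ++c) {
                if (c == a || c == b) continue;
                redundant = (d[a][c] + d[c][b] == d[a][b]);  // same length via an intermediate vertex
            }
            if (!redundant)
                edges.emplace_back(a, b);  // keep edge (a, b) with length d[a][b]
        }
    }
    return edges;
}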

I am having trouble using Eigen translate() and rotate(); they don't behave as expected

I am trying to find transformation matrices between different coordinate frames. In order to rotate, we basically multiply the rotation matrices and append the translation vector to obtain the final homogeneous matrix.
Here I have attached a snippet of my code where tf_matrix and output are Eigen::Transform variables.
tf_matrix.setIdentity();
tf_matrix.rotate( output.rotation() );
tf_matrix.translate( output.translation() );
When I look at their outputs, it seems like Eigen is turning the rotation and translation into 4x4 matrices and multiplying them, instead of appending the translation vector.
Output:
//This is rotation matrix
output.rotation()
1 0 0
0 0.0707372 -0.997495
0 0.997495 0.0707372
//translation vector
output.translation()
0.3
0.3
0.3
//After applying rotate() and translate() tf_matrix.transform.matrix() looks like the below
1 0 0 0.3
0 0.0707372 -0.997495 -0.278027
0 0.997495 0.0707372 0.32047
0 0 0 1
//Printing just the tf_matrix.transform.rotation()
1 0 0
0 0.0707372 -0.997495
0 0.997495 0.0707372
//Printing just the tf_matrix.transform.translation()
0.3
-0.278027
0.32047
//Ideally it should look like the below
1 0 0 0.3
0 0.0707372 -0.997495 0.3
0 0.997495 0.0707372 0.3
0 0 0 1
What I tried
I tried to generate a simple 4x4 identity Eigen::Transform and append it to the output matrix after the rotation, but the value 1 of the identity matrix gets added.
I also tried tf_matrix.col(3) += output_matrix.col(3), but it faces similar issues as above.
I am not sure how to go about the rotation, because my understanding is that I just need to multiply the 3x3 rotation matrices and put the 3x1 translation vector into the final column of this matrix. It seems like Eigen should be able to handle this without me writing extra code. But these rotate() and translate() calls clearly don't give the right answer.
Could you please point out what I am missing, or whether there's a better way to go about it?
The order of operations is reversed from what you seem to expect: see here. Suppose you have a coordinate in R3 that you want to translate (matrix Mt) and then rotate (matrix Mr): you might expect to write Vec3 = Vec3 * Mt * Mr. Many game engines and math libraries (e.g. Ogre, XNA, CRYENGINE, Unity, I believe) use this order of operations. However, Eigen requires Vec3 = Mr * Mt * Vec3; in Eigen the coordinate being passed through is a column vector, while in those game engines it is a row vector. Correspondingly, the matrices in the two forms are transposes of one another.
To solve your problem:
Eigen::Translation3d translate(output.translation());  // wrap the offset so it can left-multiply
tf_matrix.setIdentity();
tf_matrix = output.rotation() * tf_matrix;
tf_matrix = translate * tf_matrix;
or
tf_matrix = translate * output.rotation();
The pretranslate() and prerotate() methods can also be used to do this.
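For example, a minimal sketch using pretranslate(), where rot and trans stand in for output.rotation() and output.translation() from the question:

#include <Eigen/Geometry>

Eigen::Affine3d makeTransform(const Eigen::Matrix3d& rot, const Eigen::Vector3d& trans)
{
    Eigen::Affine3d tf = Eigen::Affine3d::Identity();
    tf.rotate(rot);          // right-multiplies the rotation
    tf.pretranslate(trans);  // left-multiplies the translation, so it stays in the last column
    return tf;
}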

How to make an adjacency matrix in C++ for a surface

I am working on Graphite. It is written in C++ and has built-in libraries.
I am trying to make a surface, which would be a sphere. The sphere is in the form of a mesh.
How can I make adjacency matrix and degree matrix for a surface so that I can compute the Laplacian matrix?
I am working on a surface which I need to deform later; for that I need an adjacency matrix and a degree matrix for the surface.
I am using C++.
Thanks in advance.
How to create adjacency matrix
Here is how you can create an adjacency matrix:
#include <vector>

// vertices = number of vertices; creates an N x N matrix initialised to 0
std::vector<std::vector<int>> adjacencyMatrix(vertices, std::vector<int>(vertices, 0));
How to derive degree matrix
Consider the following adjacency matrix that represents an unweighted directed graph:
1 0 1
1 0 0
1 0 0
The index of the row represents the vertex. So, row 0 represents vertex 0, 1 represents vertex 1...
Vertex 0 is connected to itself and has an incoming edge from vertex 2. Similarly, vertex 1 has an incoming edge from vertex 0, and vertex 2 also has an incoming edge from vertex 0.
We have to find out how many edges end at each vertex. Using that information, we create following degree matrix:
2 0 0
0 1 0
0 0 1
The above shows that vertex 0 has 2 edges ending on it, vertex 1 has 1, and vertex 2 has 1.
Since each row in the adjacency matrix represents the incoming connections for that vertex, all you have to do is sum up each row and store the sums in another matrix (i.e. the degree matrix). Since row 0 had a sum of 2, you store the value 2 at the (0, 0) position of the degree matrix. Similarly, since row 1 had a sum of 1, you store that value at the (1, 1) position...
Let me know if you need me to actually code this. I'm assuming you understand and can take it from here.
Note: the above works for the adjacency matrix of an unweighted directed graph. You will have to modify it slightly for other types of graphs.
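A minimal sketch of that row-sum approach, assuming the adjacency matrix is stored as the vector<vector<int>> shown earlier:

#include <vector>

std::vector<std::vector<int>> degreeMatrix(const std::vector<std::vector<int>>& adj)
{
    const std::size_t n = adj.size();
    std::vector<std::vector<int>> degree(n, std::vector<int>(n, 0));
    for (std::size_t i = 0; i < n; ++i) {
        int sum = 0;
        for (std::size_t j = 0; j < n; ++j)
            sum += adj[i][j];   // row sum = number of edges ending at vertex i
        degree[i][i] = sum;     // degrees go on the diagonal
    }
    return degree;
}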
If you have an undirected graph with N nodes, you need an NxN matrix (initialized to 0).
For each edge (a,b) in your graph, mark the entries at (a,b) and (b,a) with 1 in the adjacency matrix, and increment the degree matrix by 1 at (a,a) and at (b,b).
For a simple graph (no multiple edges and self-loops) you have L = D - A.
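A minimal sketch for this undirected case, building A and the vertex degrees from an edge list and forming L = D - A (the names and the edge-list type are illustrative):

#include <utility>
#include <vector>

std::vector<std::vector<int>> laplacian(int n, const std::vector<std::pair<int,int>>& edges)
{
    std::vector<std::vector<int>> A(n, std::vector<int>(n, 0)), L(n, std::vector<int>(n, 0));
    std::vector<int> degree(n, 0);
    for (auto [a, b] : edges) {
        A[a][b] = A[b][a] = 1;   // undirected edge
        ++degree[a];
        ++degree[b];
    }
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            L[i][j] = (i == j ? degree[i] : 0) - A[i][j];   // L = D - A
    return L;
}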

Understand Translation Matrix in OpenGL

Assume we want to translate a point p(1, 2, 3, w=1) with a vector v(a, b, c, w=0) to a new point p'
Note: w=0 represents a vector and w=1 represents a point in OpenGL, please correct me if I'm wrong.
In Affine transformation definition, we have:
p + v = p'
=> p(1, 2, 3, 1) + v(a, b, c, 0) = p(1 + a, 2 + b, 3 + c, 1)
=> point + vector = point (everything works as expected)
In OpenGL, the translation matrix is as following:
1 0 0 a
0 1 0 b
0 0 1 c
0 0 0 1
I assume (a, b, c, 1) is the vector from the affine transformation definition.
Why do we have w=1 and not w=0, such as
1 0 0 a
0 1 0 b
0 0 1 c
0 0 0 0
Note: w=0 represents a vector and w=1 represents a point in OpenGL, please correct me if I'm wrong.
You are wrong. First of all, this doesn't really have anything to do with OpenGL. This is about homogeneous coordinates, which is a purely mathematical concept. It works by embedding an n-dimensional vector space into an (n+1)-dimensional vector space. In the 3D case, we use 4D homogeneous coordinates, with the definition that the homogeneous vector (x, y, z, w) represents the 3D point (x/w, y/w, z/w) in Cartesian coordinates.
As a result, for any w != 0 you get a certain finite point, and for w = 0 you are describing a point infinitely far away in a specific direction. This means that homogeneous coordinates are more powerful in the regard that they can actually describe infinitely far away points with finite coordinates (which comes in very handy for perspective transformations, where infinitely far away points are mapped to finite points, and vice versa).
You can, as a shortcut, imagine (x,y,z,0) as some direction vector. But for a point, it is not just w=1 but any w value not equal to 0. Conceptually, this means that any Cartesian 3D point is represented by a line in homogeneous space (we did go up one dimension, so this actually makes sense).
I assume (a, b, c, 1) is the vector from the affine transformation definition. Why do we have w=1, but not w=0?
Your assumption is wrong. One thing about homogeneous coordinates is that we do not apply the translation in the 4D space. We get the effect of a translation in the 3D space by actually doing a shearing operation in 4D space.
So what we really want to do in homogeneous space is
(x + w*a, y + w*b, z + w*c, w)
since the 3D interpretation of the resulting vector will then be
(x + w*a) / w == x/w + a
(y + w*b) / w == y/w + b
(z + w*c) / w == z/w + c
which will represent the translation that we were after.
So to try to make this even more clear:
What you wrote in your question:
p(1, 2, 3, 1) + v(a, b, c, 0) = p(1 + a, 2 + b, 3 + c, 1)
is explicitly not what we want to do. What you describe is an affine translation with respect to the 4D vector space.
But what we actually want is a translation in the 3D cartesian coordinates, so
(1, 2, 3) + (a, b, c) = (1 + a, 2 + b, 3 + c)
Applying your formula would actually mean doing a translation in homogeneous space, which would have the effect of a translation scaled by the w coordinate, while the formula I gave will always translate the point by (a,b,c), no matter what w we chose for the point.
This is of course not true if we choose w=0. Then we will get no change at all, which is also correct, because a translation will never change directions; your formula, however, would change the direction. Your formula is correct only for w=1, which is only a special case. But the key point here is that we are not doing a vector addition at all, but a matrix * vector multiplication. And homogeneous coordinates just allow us (among other, more powerful things) to represent a translation via matrix multiplication. But this does not mean that we can just interpret the last column as a translation vector as if we were doing vector addition.
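A small sketch to make the distinction concrete: applying the 4×4 translation matrix from the question to a homogeneous vector translates points (w = 1) but leaves directions (w = 0) unchanged.

#include <array>

using Vec4 = std::array<double, 4>;

// Rows of the translation matrix [1 0 0 a; 0 1 0 b; 0 0 1 c; 0 0 0 1] applied to v.
Vec4 translate(const Vec4& v, double a, double b, double c)
{
    return { v[0] + a * v[3],
             v[1] + b * v[3],
             v[2] + c * v[3],
             v[3] };
}

// translate({1, 2, 3, 1}, a, b, c) -> {1 + a, 2 + b, 3 + c, 1}   (point is moved)
// translate({1, 2, 3, 0}, a, b, c) -> {1, 2, 3, 0}               (direction is unchanged)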
Simple Answer
The reason is the way matrix multiplications work. If you multiply a matrix by a vector, the w-component of the result is the inner product of the 4th row of the matrix with the vector. After applying the transformation, a point should still be a point and a direction should still be a direction. If you set that 4th row to a zero vector, the resulting w will always be 0, and thus the resulting vector will have changed from a position (w=1) to a direction (w=0).
More detailed answer
The definition of an affine transformation is:
x' = A * x + t,
where A is a linear map and t a translation vector. Traditionally, linear maps are written by mathematicians in matrix form. Note that t is here, like x, a 3-dimensional vector. It would be cumbersome (and less general, thinking of projective mappings) if we always had to handle the linear mapping matrix and the translation vector separately. This can be solved by introducing an additional dimension to the mapping, the so-called homogeneous coordinate, which allows us to store the linear mapping as well as the translation vector in a combined 4x4 matrix. This is called an augmented matrix, and by definition,
[ x' ]   [ A | t ]   [ x ]
[    ] = [---+---] * [   ]
[ 1  ]   [ 0 | 1 ]   [ 1 ]
It should also be noted that affine transformations can now be combined very easily by just multiplying their augmented matrices, which would be hard to do in matrix-plus-vector notation.
One should also note that the bottom-right 1 is not part of the translation vector (which is still 3-dimensional), but part of the matrix augmentation.
You might also want to read the section about "Augmented matrix" here: https://en.wikipedia.org/wiki/Affine_transformation#Augmented_matrix
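To make the composition remark concrete, the product of two augmented matrices is again an augmented matrix (in LaTeX notation, with A_i the linear parts and t_i the translations):

\begin{bmatrix} A_1 & t_1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} A_2 & t_2 \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} A_1 A_2 & A_1 t_2 + t_1 \\ 0 & 1 \end{bmatrix}

i.e. the composed linear part is A_1 A_2 and the composed translation is A_1 t_2 + t_1.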

(Pseudo)-Inverse of N by N matrix with zero determinant

I would like to take the inverse of an nxn matrix to use in my GraphSLAM.
The issues that I encountered:
.inverse() from the Eigen library (3.1.2) doesn't allow zero values and returns NaN
The LAPACK (3.4.2) library doesn't allow a zero determinant, but allows zero values (I used the example code from Computing the inverse of a matrix using lapack in C)
Seldon library (5.1.2) wouldn't compile for some reason
Has anyone successfully implemented an n x n matrix inversion routine that allows negative values, zero values and a zero determinant? Any good library (C++) recommendations?
I am trying to calculate the omega matrix in the following for GraphSLAM:
http://www.acastano.com/others/udacity/cs_373_autonomous_car.html
Simple example:
[ 1 -1 0 0 ]
[ -1 2 -1 0 ]
[ 0 -1 1 0 ]
[ 0 0 0 0 ]
Real example would be 170x170 and contain 0's, negative values, bigger positive values.
Given simple example is used to debug the code.
I can calculate this in Matlab (Moore-Penrose pseudoinverse), but for some reason I'm not able to program this in C++.
A = [1 -1 0 0; -1 2 -1 0; 0 -1 1 0; 0 0 0 0]
B = pinv(A)
B=
[0.56 -0.12 -0.44 0]
[-0.12 0.22 -0.11 0]
[-0.44 -0.11 0.56 0]
[0 0 0 0]
For my application I can (temporarily) remove the dimension with zero's.
So I am going to remove the 4th column and the 4th row.
I can also do that for my 170x170 matrix, the 4x4 was just an example.
A:
[ 1 -1 0 ]
[ -1 2 -1 ]
[ 0 -1 1 ]
So one might think that removing the 4th column and the 4th row gets rid of the zero determinant. But I can still have a zero determinant if my matrix is as above. This happens when the sum of each row or each column is zero (which I will have all the time in GraphSLAM).
The LAPACK solution (Moore-Penrose inverse based) worked if the determinant was not zero (I used the example code from Computing the inverse of a matrix using lapack in C), but failed as a "pseudoinverse" when the determinant was zero.
SOLUTION (all credits to Frank Reininghaus): using SVD (singular value decomposition)
http://sourceware.org/ml/gsl-discuss/2008-q2/msg00013.html
Works with:
Zero values (even full 0 rows and full 0 columns)
Negative values
Determinant of zero
A^-1:
[0.56 -0.12 -0.44]
[-0.12 0.22 -0.11]
[-0.44 -0.11 0.56]
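A minimal sketch of that SVD-based pseudoinverse with Eigen's JacobiSVD (the tolerance for treating singular values as zero is an illustrative choice, not taken from the linked post):

#include <Eigen/Dense>

Eigen::MatrixXd pseudoInverse(const Eigen::MatrixXd& A, double tol = 1e-9)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd s = svd.singularValues();
    Eigen::VectorXd sInv = s;
    for (int i = 0; i < s.size(); ++i)
        sInv(i) = (s(i) > tol) ? 1.0 / s(i) : 0.0;   // invert only the non-negligible singular values
    // A+ = V * diag(sInv) * U^T
    return svd.matrixV() * sInv.asDiagonal() * svd.matrixU().transpose();
}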
If all you want is to solve problems of the form Ax=b (or equivalently to compute products of the form A^-1 * b), then I recommend you not compute the inverse or pseudo-inverse of A, but directly solve Ax=b using an appropriate rank-revealing solver. For instance, using Eigen:
x = A.colPivHouseholderQr().solve(b);                                 // rank-revealing QR
x = A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);  // least-squares solve via SVD
Your Matlab command does not calculate the inverse in your case because the matrix has determinant zero. The pinv command calculates the Moore-Penrose pseudoinverse; pinv(A) has some, but not all, of the properties of inv(A).
So you are not doing the same thing in C++ and in Matlab!
Previous
As in my comment, now as an answer: you must make sure that you only invert invertible matrices. That means
det A != 0
Your example matrix has a determinant of zero. This is not an invertible matrix. I hope you don't try it on this one!
For example, a given matrix has determinant zero if there is a full row or column of zero entries.
Are you sure it's because of the zero/negative values, and not because your matrix is non-invertible?
A matrix only has an inverse if its determinant is nonzero (mathworld link), and the matrix example you posted in the question has a zero determinant and so it has no inverse.
That should explain why those libraries do not allow you to take the inverse of the matrix given, but I can't say if the same reasoning holds for your full size 170x170 matrix.
If your matrices are covariance or weight matrices, you can use "generalized Cholesky inversion" instead of SVD. The results will be more acceptable for practical use.