I know what column-major is and how to deal with it. My question: what was the purpose of implementing the system that way? Were there any technical or conceptual restrictions?
By arranging matrices like that in memory, you have immediate access to the column vectors (obviously). When using right associative multiplication (i.e. what OpenGL and most other graphics systems do to allow for the easy chaining of transformations) the column vectors of a matrix are the basis vectors of the coordinate system the matrix is mapping to.
And having easy access to these basis vectors is kind of useful for further graphics operations, like setting up mirroring planes, billboards, etc.
TL;DR: When doing graphics programming you often want to use the basis vectors of a transformation for other things. If right associative multiplication is used, the basis vectors are the columns of the transformation matrices.
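For illustration, here is how those basis vectors fall straight out of the columns with GLM (which follows OpenGL's column-major, right-associative conventions); the billboard use is just a hypothetical example:

```cpp
#include <glm/glm.hpp>

// With column vectors and right-associative multiplication, the basis
// vectors of the space a model matrix maps into are simply its columns.
void extractBasis(const glm::mat4 &model)
{
    glm::vec3 right    = glm::vec3(model[0]); // 1st column: local X axis
    glm::vec3 up       = glm::vec3(model[1]); // 2nd column: local Y axis
    glm::vec3 forward  = glm::vec3(model[2]); // 3rd column: local Z axis
    glm::vec3 position = glm::vec3(model[3]); // 4th column: translation
    // `right` and `up` are exactly what you need to span a billboard
    // quad, no extra matrix math required.
}
```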
Related
Eigen has Eigen::SparseMatrix; what's the equivalent feature in GLM? I've looked through its documentation and googled, but couldn't find it. It's hard to believe GLM doesn't have a sparse matrix type.
But it's hard to believe glm doesn't have sparse matrix.
Why is that so hard to believe? Sparse matrices are outside of GLM's job description.
GLM is intended to mimic the OpenGL shading language's vector/matrix facilities. Obviously it adds its own stuff, but that's the core of the system.
Sparse matrices aren't part of GLSL, so they're not part of GLM. And sparse matrices are kind of outside of standard graphics work, at least as far as common 3D or 2D transformation tasks are concerned.
This is also why its predefined vector and matrix types only go up to 4.
GLM is not a generic matrix/vector library.
I've been messing around with OpenGL a little bit, and I don't fully understand what the matrices are for. Is it to provide animation for the objects or something?
Matrices are used to represent transformations of points and vectors in OpenGL. I suggest you brush up on some linear algebra and, in particular, learn about transformation matrices. You cannot be a good graphics programmer without understanding transformations!
For 3D vectors, a 4x4 matrix stores all the needed transforms (translate, rotate, scale, and project) nicely in one simple package. And not only that: you can cascade transformations together with a simple multiplication. I think that is the main reason for them. Of course, you can also have 3x3 rotation matrices, as well as quaternions, involved in transforms; still, a 4x4 matrix can store all those transforms, although extracting single operations back out of it can be pretty tricky.
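To make the cascading concrete, here is a sketch with GLM (the specific angles and offsets are made up for the example):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Compose translate * rotate * scale into a single 4x4 model matrix.
// Multiplying the combined matrix with a point applies scale first,
// then rotation, then translation.
glm::mat4 makeModelMatrix()
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    return T * R * S; // one matrix, three transforms
}
```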
The MultMatrix appears to only multiply 4x4 matrices, which makes sense for OpenGL purposes, but I was wondering if a more general matrix multiplication function existed within OpenGL.
No, as can easily be verified by looking at the documentation, including the GL Shading Language spec. The largest matrix data type is 4x4.
It is very true that there is a whole craft of getting GPUs to do more general-purpose math, including string and text manipulation for e.g. cryptographic purposes, by using OpenGL primitives in very tricky ways. However, you asked for a general-purpose matrix multiplication function.
OpenCL is a somewhat different story. It doesn't have a multiply primitive, but it's designed for general numeric computation, so examples and libraries are very common.
You can also easily code a general matrix multiply for NVIDIA processors in CUDA. Their tutorials include the design of such a routine.
A lot of people think that legacy OpenGL's (up to OpenGL 2.1) matrix multiplication would be in some way faster. This is not the case. The fixed-function pipeline's matrix manipulation functions are all executed on the CPU and only update the GPU matrix registers on demand before a draw call.
There's no benefit in using OpenGL for doing matrix multiplication. If you want to do GPGPU computing, you must do it using either OpenCL or compute shaders, and to actually benefit from it, it must be applied to a very well parallelized problem.
In newer OpenGL specifications, the matrix manipulation functions are removed. You need to calculate the transformation matrices by hand and pass them to the shaders. Although glRotate, glScale, etc. disappeared, I didn't see anything offered in their place...
My question:
how do you handle the transformations? Do you dig the theory and implement all by hand, or use some predefined libraries? Is there any "official" OpenGL solution?
For example, datenwolf points to his hand-made C library in this post. For Java users (Android) there is the AffineTransform class, but it applies to 3x3 matrices, so it takes extra effort to adapt it to OpenGL's mat4.
What is your solution?
how do you handle the transformations? Do you dig the theory and implement all by hand, or use some predefined libraries?
Either way goes. But the thing is: In a real program that deals with 3D geometry you need those transformation matrices for a lot more than just rendering stuff. Say you have some kind of physics simulation running. The position of rigid objects is usually represented by their transformation matrix. So if doing a physics sim, you've got that transformation matrix lying around somewhere anyway, so you just use that.
In fully integrated simulation engines you'll also want to avoid redundancies, so you take some physics simulation library like ODE, Bullet or so, and modify it in a way that it can work directly on your object-representing structures, without copying the data into library-specific records for processing and then back.
So you usually end up with some mixture. Some of the math comes in preexisting libraries, others you implement yourself.
I agree with datenwolf, but to give an example I use Eigen, which is a fantastic general purpose matrix math library.
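One way that can look in practice, using Eigen's Geometry module to build a transform and hand it to OpenGL (a minimal illustrative sketch, not a complete renderer):

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Build a rigid transform with Eigen; t.matrix() is a 4x4 matrix whose
// data() pointer gives 16 column-major floats, exactly the layout that
// glUniformMatrix4fv expects.
Eigen::Affine3f makeTransform()
{
    Eigen::Affine3f t = Eigen::Translation3f(0.0f, 1.0f, 0.0f)
                      * Eigen::AngleAxisf(0.5f, Eigen::Vector3f::UnitY());
    return t;
}
```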
As of OpenGL 3.0, the glTranslate(), glRotate(), ftransform(), etc. functions are deprecated, but they can still be used in the compatibility profile.
A better way is to use a math library like GLM (http://glm.g-truc.net/), which follows the GLSL specification.
The projection matrix, model matrix and view matrix are passed to the shader as uniform variables.
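A sketch of that hand-off with GLM (the uniform name uMVP is an assumption of this example, and a loader like GLAD or GLEW is assumed to provide the GL entry points):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build projection * view * model on the CPU and upload the result to a
// shader uniform; no fixed-function matrix stack involved.
void uploadMVP(GLuint program, const glm::mat4 &model)
{
    glm::mat4 proj = glm::perspective(glm::radians(60.0f),
                                      16.0f / 9.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // eye
                                 glm::vec3(0.0f),              // target
                                 glm::vec3(0.0f, 1.0f, 0.0f)); // up
    glm::mat4 mvp = proj * view * model;
    glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"),
                       1, GL_FALSE, glm::value_ptr(mvp));
}
```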
Hi, I've been doing some research on matrix inversion (linear algebra), and I want to use C++ template programming for the algorithm. What I found out is that there are a number of methods, like Gauss-Jordan elimination or LU decomposition, and I found the function lu_factorize in the C++ Boost library (uBLAS).
I want to know if there are other methods, and which ones are better (advantages/disadvantages), from a programmer's or a mathematician's perspective.
If there are no other, faster methods, is there already a matrix inversion function in the Boost library? I've searched a lot and didn't find one.
As you mention, the standard approach is to perform a LU factorization and then solve for the identity. This can be implemented using the LAPACK library, for example, with dgetrf (factor) and dgetri (compute inverse). Most other linear algebra libraries have roughly equivalent functions.
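For example, with LAPACK's C interface (LAPACKE), inversion via LU looks roughly like this; error handling is trimmed to the essentials:

```cpp
#include <lapacke.h>
#include <vector>

// Invert an n-by-n row-major matrix in place using LAPACK:
// dgetrf computes the LU factorization, dgetri the inverse from it.
// Returns false if the matrix is singular.
bool invertInPlace(std::vector<double> &a, int n)
{
    std::vector<lapack_int> ipiv(n);
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n, n,
                                     a.data(), n, ipiv.data());
    if (info != 0)
        return false; // exactly singular (or bad argument)
    info = LAPACKE_dgetri(LAPACK_ROW_MAJOR, n, a.data(), n, ipiv.data());
    return info == 0;
}
```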
There are some slower methods that degrade more gracefully when the matrix is singular or nearly singular, and are used for that reason. For example, the Moore-Penrose pseudoinverse is equal to the inverse if the matrix is invertible, and often useful even if the matrix is not invertible; it can be calculated using a Singular Value Decomposition.
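A sketch of that pseudoinverse with Eigen's SVD (the tolerance below is a common heuristic for deciding which singular values count as zero, not a canonical constant):

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <limits>

// Moore-Penrose pseudoinverse via SVD: invert the singular values above
// a tolerance, zero out the rest, and recompose V * S^+ * U^T.
Eigen::MatrixXd pseudoInverse(const Eigen::MatrixXd &a)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(a, Eigen::ComputeThinU |
                                             Eigen::ComputeThinV);
    const Eigen::VectorXd &sv = svd.singularValues();
    double tol = std::numeric_limits<double>::epsilon()
               * std::max(a.rows(), a.cols()) * sv.maxCoeff();
    Eigen::ArrayXd svInv = (sv.array() > tol)
                               .select(sv.array().inverse(), 0.0);
    return svd.matrixV() * svInv.matrix().asDiagonal()
         * svd.matrixU().transpose();
}
```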
I'd suggest you to take a look at Eigen source code.
Please Google or check Wikipedia for the buzzwords below.
First, make sure you really want the inverse. Solving a system does not require inverting a matrix. Matrix inversion can be performed by solving n systems, with unit basis vectors as right hand sides. So I'll focus on solving systems, because it is usually what you want.
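With Eigen, for instance, the "solve, don't invert" advice looks like this:

```cpp
#include <Eigen/Dense>

// Solve Ax = b through an LU factorization; no explicit inverse is ever
// formed. The factorization is O(n^3) once, then each right-hand side
// costs only O(n^2).
Eigen::VectorXd solveSystem(const Eigen::MatrixXd &A, const Eigen::VectorXd &b)
{
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
    return lu.solve(b);
}
```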
It depends on what "large" means. Methods based on decomposition must generally store the entire matrix. Once you have decomposed the matrix, you can solve for multiple right-hand sides at once (and thus invert the matrix easily). I won't discuss factorization methods here, as you're likely to know them already.
Please note that when a matrix is large, it is very likely to be ill-conditioned (the reciprocal of its condition number close to zero), which means that the matrix is "numerically non-invertible". Remedy: preconditioning. Check Wikipedia for this; the article is well written.
If the matrix is large, you don't want to store it. If it has a lot of zeros, it is a sparse matrix. Either it has structure (e.g. band-diagonal, block matrix, ...), and you have specialized methods for solving systems involving such matrices, or it does not.
When you're faced with a sparse matrix with no obvious structure, or with a matrix you don't want to store, you must use iterative methods. They only involve matrix-vector multiplications, which don't require a particular form of storage: you can compute the coefficients when you need them, or store non-zero coefficients the way you want, etc.
The methods are:
For symmetric positive definite matrices: the conjugate gradient method (see the sketch after this list). In short, solving Ax = b amounts to minimizing 1/2 x^T A x - x^T b.
The biconjugate gradient method for general matrices. It is unstable, though.
Minimum residual methods, or better yet, GMRES. Check the Wikipedia articles for details. You may want to experiment with the number of iterations before restarting the algorithm.
And finally, you can perform some sort of factorization with sparse matrices, with specially designed algorithms to minimize the number of non-zero elements to store.
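Here is a minimal sketch of the conjugate gradient case using Eigen's built-in iterative solver (it assumes A is symmetric positive definite; the iteration and tolerance settings are just illustrative tuning knobs):

```cpp
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>

// Solve Ax = b iteratively for a sparse SPD matrix. Only matrix-vector
// products are performed internally, so A is never factored or densified.
Eigen::VectorXd solveSparseSPD(const Eigen::SparseMatrix<double> &A,
                               const Eigen::VectorXd &b)
{
    Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
                             Eigen::Lower | Eigen::Upper> cg;
    cg.setMaxIterations(1000);
    cg.setTolerance(1e-10);
    cg.compute(A);
    return cg.solve(b);
}
```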
Depending on how large the matrix actually is, you probably need to keep only a small subset of the columns in memory at any given time. This might require overriding the low-level read and write operations on the matrix elements, which I'm not sure Eigen, an otherwise pretty decent library, will let you do.
For those very narrow cases where the matrix is really big, there is the STXXL library, designed for memory access to arrays that are mostly stored on disk.
EDIT: To be more precise, if you have a matrix that does not fit in the available RAM, the preferred approach is to do blockwise inversion. The matrix is split recursively until each block does fit in RAM (this is a tuning parameter of the algorithm, of course). The tricky part here is to avoid starving the CPU of matrices to invert while they are pulled in and out of disk. This might require investigating appropriate parallel filesystems, since even with STXXL this is likely to be the main bottleneck. Although, let me repeat the mantra: premature optimization is the root of all programming evil. This evil can only be banished with the cleansing ritual of Code, Execute, and Profile.
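For reference, the blockwise (Schur complement) inversion identity the recursion builds on, with M split into four blocks and A assumed invertible:

```latex
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad
S = D - C A^{-1} B, \qquad
M^{-1} = \begin{pmatrix}
A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\
-S^{-1} C A^{-1} & S^{-1}
\end{pmatrix}
```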
You might want to use a C++ wrapper around LAPACK. LAPACK is very mature code: well-tested, optimized, etc.
One such wrapper is the Intel Math Kernel Library.