Difference between glMultMatrix and glTranslate (or glRotate) - opengl

I'm used to the modern OpenGL syntax, but I have to work on existing code that uses the old syntax, which I find confusing despite the fact that it's supposed to be simpler!
I was wondering: what's the difference between using glMultMatrix (and passing a translation matrix) and using glTranslate and passing x, y, and z for the translation?
The same question applies to glMultMatrix (and passing a rotation matrix) versus using glRotate and passing an angle and axis.

With glMultMatrix, you need to build the matrix yourself (in column-major order) and pass it to OpenGL through that function. With glTranslate or glRotate, you only pass the parameters and let OpenGL construct the matrix for you.
After that, the processing is identical. Historically, glTranslate and glRotate were easier for beginners to understand and use, while more advanced users found glMultMatrix more natural, due to the fact that they typically accumulate transformations (e.g. in a scene graph) and use them to compose scenes.

Related

Intrinsic Parameters of Camera

I'm trying to do triangulation for 3D reconstruction and I came across an interesting observation which I cannot justify.
I have 2 sets of images. I know the correspondences and I'm finding the intrinsic and extrinsic parameters using a direct linear transformation. While I'm able to properly reconstruct the original scene, the intrinsic parameters are different even though the pictures are taken from the same camera. How is it possible to have different intrinsic parameters if the camera is the same? Also, if the intrinsic parameters are different, how am I able to reconstruct the scene perfectly?
Thank you
You haven't specified what you mean by "different", so I'm just going to point out two possible sources of differences that come to mind. Let's denote the matrix of intrinsic parameters by K.
The first possible difference could simply come from a scale factor. If the second time you estimate your intrinsics matrix you end up with a matrix
K_2 = lambda * K
then it doesn't make any difference when projecting or reprojecting, since for any 3D point X you'll have
K_2 * X = lambda * (K * X) // a point and lambda times that point are the same in projective geometry
The same thing happens when you backproject the point: you just obtain a direction, and then your estimation algorithm (e.g. least squares or a simpler geometric solution) takes care of estimating the depth.
The second reason for the difference you observe could just come from numerical imprecisions. Since you haven't given any information regarding the magnitude of the difference, I'm not sure if that is relevant to your case.

How to compare relative rotation between objects?

I have two objects in 3D space (using OpenGL to render everything) that are instances of the same class. These objects store xyz offsets and a standard 4x4 rotation matrix.
A stripped down example:
class object {
    float rotMatrix[16]; // standard 4x4 rotation matrix
    float xyzTrans[3];   // xyz offsets
    // a display list for drawing the object
};
I'm using GLUI for the UI controls, which makes storing the transformations in this format pretty simple.
The problem:
I need to define a "correct" orientation for one object with respect to the other. For example, if the first object is facing directly down the z-axis, and the second is the same but also rotated roughly 45 degrees around the x-axis, this would be deemed "correct" and my functions do what they need to do. This can vary, of course: maybe it's the same z but rotated on the x and y, or maybe even rotated a little around each axis. The definition of "correct" rotations will be stored in the object for comparison.
Ideally I'm hoping to do something like:
bool checkRotation(object* first, object* second) {
    // do some comparison stuff
    if (eachCheck < someTolerance)
        return true;
    return false;
}
Is this possible to do by comparing the two rotation matrices? Do I need to convert to quaternions and use those instead?
This question is the closest I've found to what I'm asking, but it's just different enough to be confusing.
Not a complete answer, but too long for a comment:
If you have two honest rotation matrices, then they should be invertible, with determinant 1. Call the matrices A and B. If you want to check that the images A(X) and B(X) of the same object X under the two rotations are "close", in the sense that you can go from A(X) to B(X) by a rotation around a specified axis, this is equivalent to checking whether the matrix obtained by taking A times the inverse of B is "nearly" a rotation around that axis. So this is probably the kind of thing you want to look at.
I'm not too familiar with the OpenGL matrix math functions so can't provide any code, sorry.

Is there any downside to using pre-multiplication order for matrices in GLSL?

The old matrix stack in OpenGL dictated that the matrix multiplication order had to be post-multiplication. Modern OpenGL defers these matrix operations to the shader, where we're now free to choose to use operator*(vec4,mat4) instead of operator*(mat4,vec4). I happen to like pre-multiplication better, as I find that it makes code more readable. E.g. with post-multiplication we have
mat4 mvp = vp * m;
while with pre-multiplication it becomes
mat4 mvp = m * vp;
which makes more sense to me.
Anyways, my question is: Is there any downside to this? Other than OpenGL people not being used to it? There doesn't seem to be any change in performance.
The old matrix stack in OpenGL dictated that the matrix multiplication order had to be post-multiplication.
No it didn't. The fixed-function matrix stack would have worked just as well with left-associative multiplication (e.g. glRotate: M = R · M), assuming row-major matrices and row vectors, i.e. writing everything transposed. In fact, mathematically this makes zero difference; it's exactly the same.
There doesn't seem to be any change in performance.
It's not a question of performance, but of convention. Most mathematical folk (computer scientists, physicists, mathematicians) are used to using column vectors and to reading expressions right to left. It's common notation, and that's why it's done that way.
Well, you can do what you want. Most people think of vectors as columns, but you can use row vectors. However, you should compute things like
mat4 modelView = view * model;
in the application. If you compute it in the vertex shader, it is evaluated for every vertex.

Transformation Matrix Multiplication by matrix type, C++

Theoretically, let us assume we were to hard-code matrix multiplications for each different combination of 3D homogeneous (4x4) transformation matrix (translation, rotation, scaling), and then also for each possible product of those (translation-rotation, translation-scaling, scaling-rotation)...
Suppose we were to handle matrix multiplication like that, a different function for each matrix type combination, where each matrix has an extra variable (type), and with the specific functions to use being determined at runtime (using a function pointer array). If we applied this kind of matrix multiplication, could it theoretically be faster than doing basic, standard 4x4 homogeneous matrix multiplication (which is still admittedly faster than generic 4x4 matrix multiplication)?
I'm doing this right now; it's kind of hellish to code. I'm going to test it against standard matrix multiplication in the end and compare results. I just wanted to see what other people think the results might be. Any ideas?
I think a better idea is to store only the position and orientation of an object instead of the whole matrix. You compute the matrix only once, for rendering, after all transformations. The transformations themselves are done by adding translations (for the position) and multiplying quaternions (for the orientation).

Optimize scene graph

I have a standard scene graph written in opengl with c++.
My scene graph has nodes and shapes.
Nodes are matrices, and they draw all of their children after applying their matrix.
void Node::draw(Affine3f amatrix) const
{
    amatrix = amatrix * matrix; // accumulate this node's transform
    for (Drawable* child : children)
    {
        child->draw(amatrix);
    }
}
Shapes are simply packaged vbos, they take the matrix from the draw call, set it as the uniform modelview matrix, and then draw the vbo.
void Shape::draw(Affine3f mat) const
{
    renderer.setModelView(mat);
    myVertices.draw();
}
I love this design: it is very simple and flexible. But it is also very inefficient, with tons of CPU-side matrix multiplications and tons of draw calls.
My question is:
How can I optimize this design, removing both unneeded matrix multiplications and unnecessary draw calls?
For example: not recalculating matrices on every draw (only recalculating the ones that changed), and batching the shapes so they can be drawn with a single call.
Some more information:
Shapes are static (for now); the vertices they contain will never change.
There is a mix of static geometry (living at the root node with no manipulation) and dynamic geometry (children of manipulated nodes).
For one thing, I would pass the incoming matrices by const reference. You are passing by value, and if some draw functions don't end up needing to do anything special with the matrix, that's a lot of unnecessary copying.
If you want to prevent matrix calculations if a matrix hasn't changed, you will need to have a "dirty" flag to determine whether or not a matrix's value changed since you last used it. RenderWare did something like this with its matrix stuff.
Otherwise, like in the comment, without seeing your overall design, there's nothing inherently wrong with what you have.
Are you drawing every element in the tree, even if it isn't visible? If so, you should check out octrees to filter invisible nodes.
You could also try to do most matrix computations in the shaders, by passing the matrices as uniforms. I also see that your matrices are affine, but maybe you still do an expensive inverse calculation in your implementation. If that's the case, you can check my tutorial to see how to make it cheap.