Find final world coordinates from model matrix or quaternion - C++

I am displaying an object in OpenGL using a model matrix, which I build from my pre-stored object location and a quaternion for the rotation. I need to find the final Cartesian coordinates in 3D of my object after the rotation and translation are applied (the coordinates the object appears at on the screen). How can I get those plain coordinates?

If I understand correctly, you have an object; if you rendered it without applying any transformation, its center would be at [0,0,0].
You have a point, [a,b,c], in 3D space. You apply a translation to the modelview matrix. Now, if you rendered the object, its center would be at [a,b,c] in world space coordinates.
You have a quaternion, [qw,qx,qy,qz]. You create a rotation matrix, M, from this and apply it to the modelview matrix. Now you want to know the new coordinates, [a',b',c'], of the object's center in world space.
If this is true, then the easiest way is to just do the matrix multiplication yourself:
a' = m11*a + m12*b + m13*c
b' = m21*a + m22*b + m23*c
c' = m31*a + m32*b + m33*c
where
        [m11 m12 m13]
    M = [m21 m22 m23]
        [m31 m32 m33]
But perhaps you're not actually building M. Another way would be to use the quaternion directly, although that essentially involves building the rotation matrix and then using it.
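For instance, a minimal sketch using GLM (GLM is an assumption here; any quaternion library works the same way), with the stored location and quaternion from above:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::vec3 center(1.0f, 2.0f, 3.0f); // your stored [a, b, c]
glm::quat q(0.707f, 0.0f, 0.707f, 0.0f); // your [qw, qx, qy, qz]; GLM's constructor takes (w, x, y, z)
glm::vec3 rotated = q * center; // equivalent to M * [a, b, c], giving [a', b', c']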
There should be no need to actually use gluProject. When you apply the rotation to the modelview matrix, the matrix multiplication has already been done for you, so you can read the result from the matrix itself:
double mv[16];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
// OpenGL stores matrices in column-major order: the translation
// column is elements 12, 13 and 14 (element 15 is the homogeneous w).
a' = mv[12];
b' = mv[13];
c' = mv[14];
This tells you where the modelview matrix is moving the model's origin.
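If you also keep the matrix on the CPU side as a glm::mat4 (an assumption here; modelView is a placeholder name), the same information is simply the matrix's fourth column:
glm::vec3 worldOrigin = glm::vec3(modelView[3]); // same values as mv[12], mv[13], mv[14]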

Re-implement gluProject() and apply everything but the viewport transform.
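For example, a minimal GLM sketch of that idea (proj, view and model are placeholder names): apply projection * view * model and the perspective divide, but stop before the viewport mapping:
#include <glm/glm.hpp>

glm::vec3 projectToNDC(const glm::mat4& proj, const glm::mat4& view,
                       const glm::mat4& model, const glm::vec3& p)
{
    glm::vec4 clip = proj * view * model * glm::vec4(p, 1.0f); // clip space
    return glm::vec3(clip) / clip.w; // normalized device coordinates in [-1, 1]
}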

Related

How do I maintain the relative transformation between 2 objects after changing the transformation of one of them, without a scenegraph?

Say I have 2 objects, a camera and a cube, both on the XZ plane; the cube has some arbitrary rotation, and the camera is facing the cube.
Now suppose a transformation R is applied to the camera so that it has a new rotation and position.
I want to move the cube in front of the camera using a transformation R1, such that in the view it looks exactly as it did before R was applied, meaning the relative distance, rotation and scale between the 2 objects remain the same after both R and R1.
Assume that there's no scenegraph that we can use.
I've posed the problem mainly in 2D but I'm trying to solve it in 3D, so rotations can have all yaw, pitch and roll, and translations can be anywhere in 3D space.
EDIT:
I forgot to add what I have done so far.
I figured out how to maintain the relative distance between the camera and the cube: I can project the cube's position to get a world-to-screen point, then unproject that screen point with the new camera position to get the new world position.
However, for the rotation I have tried the following:
I thought I could apply the same rotation as R in R1. This didn't work: it appears to work if the rotation happens in only one axis, but not if the rotation happens in more than one axis.
I thought I could take the delta rotation between the camera and the cube, apply the camera's rotation to the cube and then multiply by the delta rotation. This also didn't work.
Let M and V be the model and view matrices before you move the camera, and M2 and V2 the matrices after you move the camera. To be clear: the model matrix transforms coordinates from object-local coordinates into world coordinates; the view matrix transforms from world coordinates into the camera's eye-space coordinates. Consequently V*M*p transforms the position p into eye space.
For the position on the screen to stay constant, we need V*M*p = V2*M2*p to hold for all p (assuming the projection matrix doesn't change). Therefore V*M = V2*M2, or
M2 = inverse(V2)*V*M
If you apply the camera transformation on the right (V2 = V*R) then the above expression for M2 can be simplified:
M2 = inverse(R)*M
(that is you apply inverse(R) on the left of the model matrix to compensate).
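A minimal GLM sketch, assuming V, V2, M and R are glm::mat4 values as defined above:
glm::mat4 M2 = glm::inverse(V2) * V * M; // general case
glm::mat4 M2simplified = glm::inverse(R) * M; // when V2 = V * R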
Alternatively, ask yourself if you really need to keep the object coordinates in the world reference frame. It may be easier not to apply the view matrix when rendering that object at all; that would effectively keep it relative to the camera at all times without any additional tweaks. That would have better numerical stability too.

Implementing arcball rotation axis without projection matrix?

I have a question about implementing the arcball in OpenGL ES, using Android Studio.
After calculating the rotation axis, I need to transform the axis back through the rendering pipeline into object space, so that the rotation can be applied in object space.
This part would be written like:
obj_rotateAxis = normalize(vec3(inverse(mat3(camera->projMatrix) * mat3(camera->viewMatrix) * mat3(teapot->worldMatrix)) * rotateAxis));
However, I heard that the correct form should be like:
obj_rotateAxis = normalize(vec3(inverse(mat3(camera->viewMatrix) * mat3(teapot->worldMatrix)) * rotateAxis));
where projMatrix is discarded. Why do we not consider the projection matrix when implementing the arcball, even though the projection transform is applied to the object?
As far as I know, you use the arcball to compute a rotation you will apply to your object. When you rotate an object, you want it to rotate about the origin of the world (world matrix) or about the viewpoint (view matrix).
The projection matrix doesn't represent the actual location of your object; it only makes objects appear smaller the farther away they are. You don't want that depth effect to influence your rotation.
So you compute the rotation from the viewpoint or the origin, and then you let the projection matrix do its job at render time.
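As a minimal GLM sketch of the corrected formula (camera, teapot and rotateAxis are the question's names; angle, the arcball's rotation angle, is an assumption, as is the convention of post-multiplying the world matrix):
glm::mat3 viewWorld = glm::mat3(camera->viewMatrix) * glm::mat3(teapot->worldMatrix);
glm::vec3 objAxis = glm::normalize(glm::inverse(viewWorld) * rotateAxis);
glm::quat q = glm::angleAxis(angle, objAxis); // rotation in object space
teapot->worldMatrix = teapot->worldMatrix * glm::mat4_cast(q);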

3D rotation in OpenGL

So I'm trying to do some rotation operations on an image in OpenGL based on quaternion information, and I'm wondering: is there a way to define the location of my image by a vector (let's say (0, 0, 1)), and then apply the quaternion to that vector to rotate my image around an arbitrary origin? I've been using GLM for all the math work. (Using C++)
Or is there a better way to do this that I haven't figured out yet?
If you want to rotate around a point P = {x, y, z} then you can simply translate by -P, rotate around the origin and then translate back by P.
The order in which the transforms should be applied is:
scale -> translation to point of rotation -> rotation -> translation
So your final matrix should be computed:
glm::mat4 finalTransform = translationMat * rotationMat * translationToPointOfRotationMat * scaleMat;
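Put together, a self-contained GLM sketch of that composition (the pivot, angle, axis and scale are made-up example values):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::vec3 pivot(1.0f, 2.0f, 3.0f); // point of rotation P
glm::quat q = glm::angleAxis(glm::radians(45.0f), glm::vec3(0.0f, 0.0f, 1.0f));

glm::mat4 scaleMat = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
glm::mat4 translationToPointOfRotationMat = glm::translate(glm::mat4(1.0f), -pivot); // translate by -P
glm::mat4 rotationMat = glm::mat4_cast(q); // rotate around the origin
glm::mat4 translationMat = glm::translate(glm::mat4(1.0f), pivot); // translate back by P

glm::mat4 finalTransform = translationMat * rotationMat * translationToPointOfRotationMat * scaleMat;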

Camera projection matrix: why transpose rotation matrix?

In the following:
http://cvlab.epfl.ch/files/content/sites/cvlab2/files/data/strechamvs/rathaus.tar.gz
there's a README file that says:
a 3D point X will be projected into the images in the usual way:
x = K[R^T|-R^T t]X
I remember that the 3D-to-2D camera projection matrix requires the rotation R, not its transpose, i.e. I expect:
x = K[R|-R t]X
Why does it say R^T and not simply R?
It depends on the direction in which R was determined, i.e. whether it is the transformation of the camera in the global reference frame, or the transformation of the points into the camera's local reference frame.
The true answer is: don't worry, just check that what you've got is right.
Since R^T == R^{-1} for a rotation matrix, the upper formula simply expects the rotation in the reverse direction of the lower one. Just make sure to use the direction they expect as input.
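A small GLM sketch of the conversion between the two conventions (the function name and the camera-to-world interpretation of R and t are assumptions):
#include <glm/glm.hpp>

// Given the camera pose in world coordinates (X_world = R * X_cam + t),
// compute the world-to-camera extrinsics used in x = K [R^T | -R^T t] X.
void extrinsicsFromPose(const glm::mat3& R, const glm::vec3& t,
                        glm::mat3& Rout, glm::vec3& tout)
{
    Rout = glm::transpose(R); // equals inverse(R) for a rotation matrix
    tout = -(Rout * t);
}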

OpenGL ModelView Confusion

I am using OpenGL 2.0 with the fixed function pipeline. It seems that in OpenGL 2.0 the vertices are pushed through the modelview stack, which is basically (view matrix * model matrix). The model matrix doesn't really provide any transformation: it brings an object, say a cube, to be centered at (0,0,0) when the modelview matrix has the identity loaded. Also, the camera itself would be located at (0,0,0), looking down the negative z axis.
So if I use a translate call with the cube, am I really moving the cube in eye space?
From what I learned, the generalized viewing pipeline is:
vertices -> modeling matrix -> world space -> viewing matrix -> eye space -> projection matrix -> clip space -> normalization etc.
So if I do:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f); // 5 units in the negative z direction
drawCube(); // pseudocode for drawing the cube
would it move the cube from the center of eye space according to the translation?
I think my confusion is that I don't know what is loaded into the modelview matrix stack when the program starts; I assume it is an identity matrix, which brings everything to the center of eye space.
In a newly created OpenGL context all matrices are identity, i.e. vectors pass through untransformed. Fixed function OpenGL's vertex transformation skips the "world" step, collapsing object→world and world→eye into a single transformation. This is no big deal, however: lighting calculations are easiest in eye space anyway, and since fixed function OpenGL doesn't know shaders (except as an extension), there's no need to do things in world space.
glTranslate, glRotate and glScale don't transform objects. They manipulate the matrix on top of the stack currently active for manipulation. So ultimately they contribute to the transformation, but not at the object level; they act at the vertex (position) level.
it would move the cube from center of the eye space according to the translation?
Indeed, but what's "moved" (actually transformed) are the cube's vertices; and it may not be just a translation.
EDIT due to comment
The key thing to understand is transformation composition. First and foremost a transformation is a mapping
T: R^4 -> R^4, v ↦ T(v)
There's a subset of transformations, namely the linear transformations, which can be represented by matrix multiplication:
v' = T * v
One can concatenate transformations, i.e. v ↦ (T'∘T)(v); again, for the subset of linear transformations written in matrix form, you can expand this to
v' = T * v
v'' = T' * v'
=>
v'' = T' * T * v
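For example, with two concrete transformations in GLM (made-up values, used here just for illustration), composing first and applying once gives the same result as applying one after the other:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 Tprime = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec4 v(0.0f, 0.0f, 0.0f, 1.0f);
glm::vec4 stepwise = Tprime * (T * v); // v'' = T' * (T * v)
glm::vec4 composed = (Tprime * T) * v; // same result: v'' = (T' * T) * v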
Now let V denote the viewing transform and W the world transform. So the total transform is
M = V * W
The order of matrix multiplication matters (i.e. matrix multiplication is not commutative):
∃ M, N ∊ {Matrices}: M * N ≠ N * M
The view transform V is the transform of the whole world, moving it in such a way that the camera ends up at the origin, viewing down the negative Z axis. So let V' be the transform that moves "the camera" from the origin to its place in the world; the inverse of that movement moves the world so that the camera comes to rest at the origin. So
V = inv(V')
Last but not least given some matrices A, B, C for which holds
A = B * C
then
inv(A) = inv(C) * inv(B)
i.e. the order of operations is reversed. So if you "position" your "camera" using inverse OpenGL matrix operations, the order of the operations must be reversed. And since the overall order of operations matters, the viewing transformations must happen before the model transformations.
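A minimal fixed-function sketch of that ordering (the camera placement at (0, 0, 5), the cube's rotation, and the drawCube helper are made-up for illustration):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// viewing transform first: the inverse of "move the camera to (0, 0, 5)"
glTranslatef(0.0f, 0.0f, -5.0f);
// model transform second: place the cube in the world
glRotatef(30.0f, 0.0f, 1.0f, 0.0f);
drawCube(); // hypothetical helper that emits the cube's vertices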