Transforming bounding boxes relative to an object? - C++

I'm trying to implement AABBs/OOBBs with MathGeoLib because of how easy it makes operating with BBs (and because I wanted to test some things with that library).
The problem is that the engine's object transformations are based on glm, since we started with glm (and they work properly), but when it comes to transforming the OOBBs according to an object, it doesn't work very well.
What I basically do is pass the game object's translation, orientation and scale to a function (I tried to pass a global matrix, but it doesn't work; it seems to 'add' the transformation instead of setting it, and I can't access the OOBB's matrix). That function does the following:
glm::vec3 pos = passedPosition - OOBBPreviousPos;
glm::mat4 Transformation = glm::translate(glm::mat4(1.0f), pos) *
glm::mat4_cast(passedRot) * glm::scale(glm::mat4(1.0f), passedScale);
glm::mat4 resMat = glm::transpose(Transformation);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat));
Which basically transposes the glm matrix (I have seen that that's the way of 'translating' them), passes it as a float* and then constructs the MathGeoLib matrix from it. I have debugged it and the values seem to be right for the object, so the next thing I do is actually transform the OOBB and then recompute the AABB so it encloses the OOBB, like this:
m_OBB.Transform(mat);
m_AABB.SetNegativeInfinity(); //Sets AABB to "null"
m_AABB.Enclose(m_OBB);
The final behaviour is pretty strange; believe me when I say this is the closest I've been to having it right. I've spent days testing different things and nothing works better (passing global/local matrices directly, trying different ways of passing/constructing the transformation data, checking whether the glm-to-MathGeoLib conversion is correct...). It rotates, but not around its own axis, and the scaling goes haywire (although translation works). Its current behaviour can be seen here: https://gfycat.com/quarrelsomefineduck (blue cubes are AABBs, green ones are OOBBs).
Am I doing something wrong in the math or in the data transfer?

I kept looking into it, but then a friend pointed me in another direction, so I finally solved it (or rather, "worked around it") by storing the object's initial AABB and passing the game object's global matrix to the function mentioned above. Then, inside the function, I used another MathGeoLib function to transform the OOBB.
That function finally looks like:
glm::mat4 resMat = glm::transpose(GlobalMatrixPassed);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat)); //"Translate" glm matrix passed into a MathGeoLib one
m_OOBB.SetFrom(m_InitialAABB); //Set OOBB from the initial aabb
m_OOBB.Transform(mat); //Transform it
m_AABB.SetFrom(m_OOBB); //Set the AABB in function of the transformed OOBB

Related

What order should the view matrix be calculated in?

Currently I have been calculating the view matrix like this:
viewMatrix = cameraRot * cameraTrans
and the model matrix like this:
modelMatrix = modelTrans * modelScale
where cameraTrans and modelTrans are translation matrices, modelScale is a scaling matrix, and cameraRot and modelRot are rotation matrices produced by quaternions.
Is this correct? I've been googling this for a few hours, and no one mentions the order for the view matrix, just the model matrix. It all seems to work, but I wrote the matrix and quaternion implementations myself, so I can't tell if this is a bug.
(Note: The matrices are row major)
Let us talk about transformations between coordinate systems. Suppose you have a point defined in a local system. You want to describe it in a global system, so what you do is rotate this point, in order to align its axes, and then translate it to its final position. You can describe this mathematically by:
Pg = T*R*Pl, where M = T*R
In this way, M allows you to describe any point, defined in a local coordinate system, in a global coordinate system.
You can do the same with the camera, but what you really want is to do exactly the inverse of what you did before, i.e., you want to describe any point in the global coordinate system in the camera's local coordinate system:
Pc = X*Pg, but what is the value of X?
You know that:
Pg = Tc*Rc*Pc, so Pc = inv(Tc*Rc)*Pg
in other words:
X = inv(Tc*Rc) = inv(Rc) * inv(Tc)
Therefore, to describe a point, from its local coordinate system, in the camera coordinate system, you just need to concatenate those two matrices:
Pc = inv(Rc)*inv(Tc)*T*R*Pl, where
M' = inv(Rc)*inv(Tc)*T*R
Note that some systems (the glm library, for example) define this matrix (X) as lookAt, and its definition can be found here. I would suggest you read this article too.
What you have is correct.
modelMatrix = modelTranslation * modelRotation * modelScale; // M=TRS
viewMatrix = cameraOrientation * cameraTranslation; // V=OT
To make this easier to remember, first note that matrices are essentially applied backwards: the rightmost factor hits the vertex first. Let us consider M=SRT instead. You have a cube and you translate it first. But when you then rotate, it rotates around the original pivot point. Then, once you apply a scaling factor, the model will be skewed, because the scaling applies after the rotation. That is all hard to deal with; M=TRS is much easier for most purposes once you consider that. This is a little hard to describe in words, so let me know if you'd like some pictures.

How to rotate an object using the 3D graphics pipeline (Direct3D/OpenGL)?

I have some problems with trying to animate the rotation of mesh objects.
If the rotation is performed only once, all is fine. Meshes are rotated normally and the final image from the WebGL buffer looks pretty fine.
http://s22.postimg.org/nmzvt9zzl/311.png
But if the rotation is applied in a loop (recomputed on each new frame), the meshes start to look very weird; see the next screenshot:
http://s22.postimg.org/j2dpecga9/312.png
I won't provide the programming code here, because the issue depends on correct 3D graphics handling rather than on the code itself.
I think some OpenGL/Direct3D developers may be able to advise how to fix it, because this question relates to 3D programming in general and to some specific GL or D3D function/method. Also, I think the way rotation works is the same in both OpenGL and Direct3D, because of the underlying linear algebra and affine transformations.
In case you are really interested in what I'm using: the answer is WebGL.
Let me describe how I rotate an object.
The simple rotation is made using the quaternions. Any mesh object I define has its quaternion property.
To rotate the object, the rotate() method does the following:
// Some kind of pseudo-code
function rotateMesh( vector, angle ) {
var tempQuaternion = math.convertRotationToQuaternion( vector, angle );
this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
}
I use the following piece of code in the Renderer class to handle the mesh translation and rotation:
// Some kind of pseudo-code
// for each mesh, which is added to scene
modelViewMatrix = new IdentityMatrix()
translateMatrixByVector( modelViewMatrix, mesh.position )
modelViewMatrix.multiplyByMatrix( mesh.quaternion.toMatrix() )
So... I want to ask you whether the logic above is correct. If it is, I'll provide the source of the math functions used for quaternions, rotations, etc.
If the logic above is incorrect, I think it makes no sense to provide anything else, because the main logic needs to be fixed first.
Quaternion multiplication is not commutative, i.e., if A and B are quaternions then A * B != B * A. If you want to rotate quaternion A by quaternion B, you need to do A = B * A, so this:
this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
should have its arguments reversed.
In addition, as mentioned by @ratchet-freak in the comments, you should make sure your quaternions are always normalized; otherwise transformations other than rotation may happen.

How to calculate the new global position of a child object based on a relative change in the parent?

I have a spatial structure where I keep a few objects. All the positions of the objects are global.
I'm now trying to create a parent/child system, but I'm having trouble with the math. What I tried at first was: every time I move an object, I also move all of its children by the same amount. That works, but I also need rotation, so I tried using matrices. I built a model matrix for the child, using its position/rotation/scale relative to the parent:
glm::mat4 child_model;
//"this" is the parent
child_model = glm::translate(child_model, child_spatial.position - this->m_position);
child_model = child_model * glm::toMat4(glm::inverse(this->m_rotation) * child_spatial.rotation);
child_model = glm::scale(child_model, child_spatial.scale - this->m_scale);
I would then rotate/translate/scale the child matrix by the amount the parent was rotated/moved/scaled and then I would decompose the resulting matrix back to the global child:
child_model = child_model * glm::toMat4(this->m_rotation * rotation);
child_model = glm::translate(child_model, this->m_position + position);
child_model = glm::scale(child_model, this->m_scale * scale);
where position/rotation/scale are defined as:
//How much the parent changed
position = this->position - m_position;
rotation = glm::inverse(m_rotation) * this->rotation;
scale = this->scale - m_scale;
and:
glm::decompose(child_model, d_scale, d_rotation, d_translation, d_skew, d_perspective);
child_spatial.position = d_translation;
child_spatial.rotation = d_rotation;
child_spatial.scale = d_scale;
But it doesn't work and I'm not sure what is wrong. Everything just spins/moves out of control. What am I missing here?
This problem is similar to joint animation used for computer animations. First you have to use or construct transformation matrices for each object. However, the transformation matrix of each child object should be relative to its parent's coordinate system (i.e., the child's coordinates are expressed in the parent's coordinate space). We'll call this matrix a ToParentMatrix. In addition, each object needs another matrix which transforms it all the way to the root (which is in world space). We'll call this matrix a ToRootMatrix. When we multiply these two matrices we get a ToWorldMatrix, which contains the position and orientation in world space. So the transformation of each object to world space is as follows in a 2-joint hierarchy (root, child1 (of root) and child2 (of child1)):
RootTransform = ToWorldMatrix; // The root object is already in world space
Child1.ToRootMatrix = RootTransform;
Child1.ToWorldMatrix = Child1.ToRootMatrix*Child1.ToParentMatrix;
Child2.ToRootMatrix = Child1.ToWorldMatrix;
Child2.ToWorldMatrix = Child2.ToRootMatrix*Child2.ToParentMatrix;
For more information, search for joint (or skeletal) animation and forward kinematics. Also Frank Luna's book has two nice chapters about skeletal animation.
You are way overthinking things because you are missing some math fundamentals. There are tons of linear-algebra-for-3D-graphics tutorials and courses just a Google search away!
You need to read through some of them. Your mistake is that you think of a matrix as "how to apply scale/translate/rotate". You need to change that point of view. Matrices describe local spaces. It does not matter how you set them up. So stop thinking about "how things move". Think of every matrix as a space (basis). Multiplying a vector by a matrix moves it into that space. And then you move it to the next space. Think of a space as a relative coordinate system.
This is probably not very helpful, but what I am trying to say is: spend some time studying how linear algebra works! You will need it, and it is quite easy in the end. I have a hard time finding a good tutorial, but maybe https://www.khanacademy.org/math/linear-algebra looks good.

'Ray' creation for raypicking not fully working

I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned would be incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
QVector3D nearP = QVector3D(event->x()+camX, -event->y()-camY, -1.0);
QVector3D farP = QVector3D(event->x()+camX, -event->y()-camY, 1.0);
int i = -1;
for (int x = 0; x < tileCount; x++)
{
bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
if (rayInter == true)
i = x;
}
if (i != -1)
{
tiles[i]->showSelection();
}
else
{
for (int x = 0; x < tileCount; x++)
tiles[x]->hideSelection();
}
//tiles[0]->showSelection();
}
To repeat: I used to load the viewport, model and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window, all I got was values in the range of -2 to 2 for x, y and z on each mouse event. That's why I'm trying this method, but it doesn't work with camera rotation and zoom.
I don't want to do pixel colour picking because, who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (note this has not been tested directly). Make sure that all vectors are in the same space; if you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
use pos = gluUnproject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space; near being the value given to glFrustum() or gluPerspective()
origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace
the direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig)
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so this is optional.
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the QT datatypes for the matrices, and the logic pertaining to matrix transformations.
This particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1: x / width * 2 - 1, (height - y) / height * 2 - 1)
Grabbing the 4x4 view matrix (can be the one used when rendering, or recalculated)
Computing a new matrix equal to the inverse view matrix multiplied by the inverse projection matrix.
In order to build the ray, I had to do the following:
Take the previously calculated product of the two inverse matrices. Multiply it by a 4-component vector holding the previously calculated x and y coordinates, then -1 (the near plane in NDC), then 1 (the w component).
Then divide this vector by its last (w) component.
Create another 4-component vector just like the last, but with 1 instead of -1 for the z component.
Once again divide it by its w component.
Now the coordinates for the ray have been created at the far and near planes, so it can intersect with anything along it in the scene.
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into account to flip the origin of the y axis for a Cartesian system, since window coordinates start at the top left. Additionally, my retrieval of matrices was redundant, and also wrong, since they were never set up properly.
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, which led to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (I needed to multiply the matrices by a vector, not the reverse), that my near plane needs to be -1, not 0, and that the last component of the vector multiplied with the matrices (the w value) needed to be 1.
Credits goes to those individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
Lots of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions, one of which serves your purpose.

Creating the camera transform and view matrices

This is a topic that I come across often online, but most websites do a poor job properly explaining this. I'm currently creating my own Camera class as part of a 3D renderer built from scratch (how better to understand what happens, right?). But I've hit a snag with creating the World and View matrices for the camera.
As I understand it, the world matrix of a camera is essentially the matrix that places the camera where it needs to be in world space, which is only necessary when you need to render something at its position and according to its orientation. The view matrix, on the other hand, is the matrix that takes the camera from that position back to the origin of world space, facing along the z axis in one direction or another (negative for right-handed, positive for left-handed, I believe). Am I correct so far?
Given a camera with its position defined as m_Eye and a look-at target defined as m_LookAt, how do I generate the world matrix? More importantly, how do I generate the view matrix without having to perform an expensive inverse operation? I know that the rotation part's inverse is equal to its transpose, so I'm thinking that will factor into it. Regardless, this is the code I have been tinkering with. As an aside, I'm using a right-handed coordinate system.
The following code should generate the appropriate local coordinate axis:
m_W = AlgebraHelper::Normalize(m_Eye - m_LookAt);
m_U = AlgebraHelper::Normalize(m_Up.Cross(m_W));
m_V = m_W.Cross(m_U);
The following code is what I have, so far, for generating the World matrix (note, I also work with row-based matrices, so m_12 indicates the first row and second column):
Matrix4 matrix = Matrix4Identity;
matrix.m_11 = m_U.m_X;
matrix.m_12 = m_U.m_Y;
matrix.m_13 = m_U.m_Z;
matrix.m_21 = m_V.m_X;
matrix.m_22 = m_V.m_Y;
matrix.m_23 = m_V.m_Z;
matrix.m_31 = m_W.m_X;
matrix.m_32 = m_W.m_Y;
matrix.m_33 = m_W.m_Z;
matrix.m_41 = m_Eye.m_X;
matrix.m_42 = m_Eye.m_Y;
matrix.m_43 = m_Eye.m_Z;
Is this a good way to calculate the World matrix and, subsequently, how do I extract the View matrix?
Thank you in advance for any help in the matter.