New to OpenGL, I am trying to make sure I did this part right; I'm told to build the world matrix from the position, scaling and rotation information.
From the material I found online, my understanding is:
p^world = M^model * p^model
M^model = Scaling * Rotation * Translation
Therefore I coded the following:
glm::mat4 Model::GetWorldMatrix() const
{
    // #TODO 2, you must build the world matrix from the position, scaling and rotation informations
    glm::mat4 pModel = GetScaling() * GetRotationAngle() * GetPosition();
    glm::mat4 worldMatrix(1.0f);
    worldMatrix = worldMatrix * pModel;
    // #TODO 4 - Maybe you should use the parent world transform when you do hierarchical modeling
    return worldMatrix;
}

void Model::SetPosition(glm::vec3 position)
{
    mPosition = position;
}

void Model::SetScaling(glm::vec3 scaling)
{
    mScaling = scaling;
}

void Model::SetRotation(glm::vec3 axis, float angleDegrees)
{
    mRotationAxis = axis;
    mRotationAngleInDegrees = angleDegrees;
}
Is this correct?? Thank you for your time and help.
The way to do it is to store one 4x4 matrix for every model. This matrix is also called the model matrix.
All the important information (position, rotation, scale) about the object is stored in this matrix. If you want to translate, rotate or scale your object, you generate a transformation matrix and multiply it onto your model matrix from the left: model = trafo * model. You can generate these transformation matrices with GLM http://glm.g-truc.net/0.9.2/api/a00245.html .
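As a rough sketch with GLM, assuming the member names from your question (mPosition, mScaling, mRotationAxis, mRotationAngleInDegrees), the world/model matrix could be built like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 Model::GetWorldMatrix() const
{
    // One common convention: scale first, then rotate, then translate,
    // which with column vectors means worldMatrix = T * R * S.
    glm::mat4 t = glm::translate(glm::mat4(1.0f), mPosition);
    glm::mat4 r = glm::rotate(glm::mat4(1.0f),
                              glm::radians(mRotationAngleInDegrees), // recent GLM expects radians
                              mRotationAxis);
    glm::mat4 s = glm::scale(glm::mat4(1.0f), mScaling);
    return t * r * s;
}

For hierarchical modeling (the second TODO), you would then pre-multiply the result by the parent's world matrix.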
So, I have a C++ project which is simulating a car.
My program works in 2D (only in the XY plane); it is fed with odometry data from a rosbag, which gives it its position in the XY plane relative to the world origin. Everything is fine in 2D.
But when I move to 3D, meaning I can rotate my car around several axes, not only Z anymore, I realized that my car rotates around the world axes, when I would like it to turn around the vehicle's own axes.
To test it, I wrote some dummy code where my vehicle is supposed to do a 90 deg rotation on the Z axis and then a 90 deg rotation on the Y axis.
What can I do to have my vehicle rotate around its own axes? What would the math be?
In the following code, the two rotations are done around the world axes.
Here is a piece of my code to illustrate:
void NodeImpl::callback_odometry(motion_common_msg::Odometry input_msg)
{
    // Getting Direction3D
    // ---
    frame_id = input_msg.header.frame_id;
    stamp = input_msg.header.stamp;

    transform_from_odometry.setOrigin(new_position_3D);
    tf::Quaternion q;
    q.setRPY(0.0, 0.0, 0.0);
    transform_from_odometry.setRotation(q);

    if (input_msg.pos_x >= 5.0)
    {
        double cos90 = 0;
        double sin90 = 1;

        tf::Matrix3x3 Rz90;
        tf::Matrix3x3 Ry90;
        Rz90.setValue(cos90, -sin90, 0,
                      sin90,  cos90, 0,
                      0,      0,     1);
        Ry90.setValue( cos90, 0, sin90,
                       0,     1, 0,
                      -sin90, 0, cos90);

        tf::Quaternion qz90;
        tf::Quaternion qy90;
        Rz90.getRotation(qz90);
        Ry90.getRotation(qy90);
        qz90.normalize();
        qy90.normalize();

        tf::Transform tf_new_rot;
        tf_new_rot.setRotation(qz90);

        transform_from_odometry.setRotation(qy90);
        transform_from_odometry.mult(transform_from_odometry, tf_new_rot);
    }
    broadcast();
}

void NodeImpl::broadcast()
{
    static tf::TransformBroadcaster br;
    br.sendTransform(tf::StampedTransform(transform_from_odometry, stamp, frame_id, "ego_vehicle_rear_axis"));
}
I'm not sure which library you are using, so I'll try to give some generic advice on this.
Global vs. local rotations are just a matter of matrix multiplication order. Let R be the final rotation matrix. When you multiply the X, Y and Z rotation matrices in the order R = X*Y*Z, you get global rotations, whereas R = Z*Y*X gives you local rotations.
The problem with the above is that it limits your local rotations to the specific order Z-Y-X. For example, if you want to rotate first about the x-axis, then about the y-axis, and then about the z-axis, the above works fine. Anything else is not going to give you the results you want; you would have to change the order of the matrix multiplications.
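For instance, with GLM's column-vector convention this order difference boils down to pre- vs. post-multiplication; a rough sketch (the function names are mine, just for illustration):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotation about a fixed world-space axis: put the new rotation on the left.
void rotateGlobal(glm::mat4 &model, float angleRad, const glm::vec3 &axis)
{
    model = glm::rotate(glm::mat4(1.0f), angleRad, axis) * model;
}

// Rotation about the object's own (local) axis: put the new rotation on the right.
void rotateLocal(glm::mat4 &model, float angleRad, const glm::vec3 &axis)
{
    model = model * glm::rotate(glm::mat4(1.0f), angleRad, axis);
}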
If you want to rotate about an axis that is local to your object, let's say its y-axis, then you need to know where this axis is. You have to keep a reference to the current axes after each transformation and then use the Rotation matrix from axis and angle to rotate about your current local y-axis.
/* from the wiki link above */
Mat3 Axis_Matrix(float angle_, const Vec3 &axis_)
{
return Mat3{ cos(angle_)+pow(axis_.x,2)*(1.0-cos(angle_)) , (axis_.x*axis_.y*(1.0-cos(angle_)))-(axis_.z*sin(angle_)) , (axis_.x*axis_.z*(1.0-cos(angle_)))+(axis_.y*sin(angle_)),
(axis_.y*axis_.x*(1.0-cos(angle_)))+(axis_.z*sin(angle_)) , cos(angle_)+pow(axis_.y,2)*(1.0 - cos(angle_)) , (axis_.y*axis_.z*(1.0-cos(angle_)))-(axis_.x*sin(angle_)),
(axis_.z*axis_.x*(1.0-cos(angle_)))-(axis_.y*sin(angle_)) , (axis_.z*axis_.y*(1.0-cos(angle_)))+(axis_.x*sin(angle_)) , cos(angle_)+pow(axis_.z,2)*(1.0 - cos(angle_)) };
}
You can basically create your own structure that does all this:
struct CoordinateSystem
{
    Vec3 current_x_axis;
    Vec3 current_y_axis;
    Vec3 current_z_axis;
    Mat3 local;

    void setRotationX(float angle_)
    {
        local *= Axis_Matrix(angle_, current_x_axis);
        update();
    }

    void setRotationY(float angle_)
    {
        local *= Axis_Matrix(angle_, current_y_axis);
        update();
    }

    void setRotationZ(float angle_)
    {
        local *= Axis_Matrix(angle_, current_z_axis);
        update();
    }

    void update()
    {
        current_x_axis = {-1.f, 0.f, 0.f} * local;
        current_y_axis = { 0.f, 1.f, 0.f} * local;
        current_z_axis = { 0.f, 0.f,-1.f} * local;
    }
};
Then you can just say:
setRotationX(45);
setRotationZ(10);
setRotationY(62);
setRotationX(34);
setRotationY(81);
And all the rotations would be local to your object.
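Applied to the tf code in your question, a rough, untested sketch of the local-frame version would be (assuming transform_from_odometry already holds the current vehicle pose):

#include <cmath>
#include <tf/transform_datatypes.h>

// 90 deg about the vehicle's own Z axis, then 90 deg about its own Y axis:
// post-multiplying keeps each rotation in the vehicle's local frame.
tf::Quaternion qz90, qy90;
qz90.setRPY(0.0, 0.0, M_PI / 2.0);
qy90.setRPY(0.0, M_PI / 2.0, 0.0);

transform_from_odometry = transform_from_odometry
                        * tf::Transform(qz90)
                        * tf::Transform(qy90);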
I have two objects that I want to parent together so that Tri is a child of Torus. When I do so and multiply the matrices together by applying the parent's modelview to the child's, the child initially jumps in space, over to the right and up by a few units. Where do I insert an offset into this, and how do I calculate it?
obj = make_shared<Object>(*this);
obj->rename("tri");
obj->type->val_s = "tri";
obj->t->val_3 = glm::vec3(-4.f, 1.5f, 0.f);
allObj.push_back(obj);
obj = make_shared<Object>(*this);
obj->rename("torus");
obj->type->val_s = "obj";
obj->t->val_3 = glm::vec3(3.f, 2.f, 0.f);
allObj.push_back(obj);
//Matrix
scaleM = glm::scale(glm::mat4(), s->val_3);
rotationM = glm::toMat4(r_quat);
glm::vec3 usablePivot = t->val_3 - pivot->val_3;
glm::mat4 localAxis1M = glm::translate(glm::mat4(), usablePivot);
glm::mat4 localAxis2M = glm::translate(glm::mat4(), -usablePivot);
translationM = glm::translate(glm::mat4(), t->val_3);
modelM = translationM * localAxis2M * rotationM * scaleM * localAxis1M;
//View
usableView = myGL->ViewM;
//Projection
usableProjection = myGL->ProjectionM;
//MVP
if (parent == "world") { MVP = usableProjection * usableView * modelM; }
else { MVP = usableProjection * usableView * parentTo->modelM * modelM; }
Matrix order in my OpenGL apps:

- sub-object matrices
  - the lowest in the ownership hierarchy come first
  - they are local to their parents
  - they have an offset to the point (0,0,0) of the parent space
  - if they don't, and are instead local to the world, then that is the reason for the jump
    - in that case the sub-objects are not really sub-objects
    - handle them as normal objects without a parent
- object to world
  - the transform matrix of objects without a parent
  - they are local to your scene world
- then comes the camera view
  - it is the inverse of the camera space, where the Z axis is the view direction
- last is the projection
  - like gluPerspective(...)
  - this one is usually stored in the GL_PROJECTION matrix on the fixed pipeline, instead of GL_MODELVIEW
  - clipping is done by OpenGL separately via glViewport (your usableView, I think)

Look for more info here.

- you can check the content of your matrices
  - look at positions 12, 13, 14, where the offset vector is stored
  - do not forget that this vector is local to the parent coordinate system!!! so it is most likely rotated by it ...
- it is also a good idea to draw axis lines for each tested matrix in its parent space, to see if they are correct (see the sketch after this list)
  - I use red, green, blue lines for the x, y, z axes
  - just extract the origin of the coordinate system [12,13,14] and draw a line from it to the same point + a*axis_vector
    - a is the line length (big enough so you see a line, not a point)
    - the axis vectors are at positions x = [0,1,2], y = [4,5,6], z = [8,9,10]
  - do not forget to set the matrices to the parent coordinate system !!!
- if you handle the matrices yourself via GLSL, do not forget that direction vectors like normals are transformed without the offset
  - so take the whole transform matrix (everything multiplied together, without the projection and sometimes also without the camera)
  - set the offset to zero: [12,13,14] = (0.0, 0.0, 0.0)
  - and multiply by this matrix
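Here is a rough sketch of the axis-line idea above, using GLM plus old-style immediate-mode calls purely for illustration (the indices match the column-major layout described: offset at [12,13,14], axes at [0,1,2], [4,5,6], [8,9,10]):

#include <glm/glm.hpp>
#include <GL/gl.h>

// Draws the local axes of matrix m (red = x, green = y, blue = z), assuming
// the current GL matrices are already set to m's parent coordinate system.
// a is the line length.
void drawAxes(const glm::mat4 &m, float a)
{
    glm::vec3 origin(m[3]);                          // elements 12,13,14
    glm::vec3 xEnd = origin + a * glm::vec3(m[0]);   // elements 0,1,2
    glm::vec3 yEnd = origin + a * glm::vec3(m[1]);   // elements 4,5,6
    glm::vec3 zEnd = origin + a * glm::vec3(m[2]);   // elements 8,9,10

    glBegin(GL_LINES);
    glColor3f(1, 0, 0); glVertex3fv(&origin.x); glVertex3fv(&xEnd.x);
    glColor3f(0, 1, 0); glVertex3fv(&origin.x); glVertex3fv(&yEnd.x);
    glColor3f(0, 0, 1); glVertex3fv(&origin.x); glVertex3fv(&zEnd.x);
    glEnd();
}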
I would like to combine mouse and keyboard inputs with the Oculus Rift to create a smooth experience for the user. The goals are:
Positional movement 100% controlled by the keyboard relative to the direction the person is facing.
Orientation controlled 100% by HMD devices like the Oculus Rift.
Mouse orbit capabilities adding to the orientation of the person using the Oculus Rift. For example, if I am looking left I can still move my mouse to "move" more leftward.
Now, I have 100% working code for when someone doesn't have an Oculus Rift; I just don't know how to combine the orientation and other elements of the Oculus Rift with my already working code to get it to 100%.
Anyway, here is my working code for controlling the keyboard and mouse without the Oculus Rift:
Note that all of this code assumes a perspective mode of the camera:
/*
Variables
*/
glm::vec3 DirectionOfWhereCameraIsFacing;
glm::vec3 CenterOfWhatIsBeingLookedAt;
glm::vec3 PositionOfEyesOfPerson;
glm::vec3 CameraAxis;
glm::vec3 DirectionOfUpForPerson;
glm::quat CameraQuatPitch;
float Pitch;
float Yaw;
float Roll;
float MouseDampingRate;
float PhysicalMovementDampingRate;
glm::quat CameraQuatYaw;
glm::quat CameraQuatRoll;
glm::quat CameraQuatBothPitchAndYaw;
glm::vec3 CameraPositionDelta;
/*
Inside display update function.
*/
DirectionOfWhereCameraIsFacing = glm::normalize(CenterOfWhatIsBeingLookedAt - PositionOfEyesOfPerson);
CameraAxis = glm::cross(DirectionOfWhereCameraIsFacing, DirectionOfUpForPerson);
CameraQuatPitch = glm::angleAxis(Pitch, CameraAxis);
CameraQuatYaw = glm::angleAxis(Yaw, DirectionOfUpForPerson);
CameraQuatRoll = glm::angleAxis(Roll, CameraAxis);
CameraQuatBothPitchAndYaw = glm::cross(CameraQuatPitch, CameraQuatYaw);
CameraQuatBothPitchAndYaw = glm::normalize(CameraQuatBothPitchAndYaw);
DirectionOfWhereCameraIsFacing = glm::rotate(CameraQuatBothPitchAndYaw, DirectionOfWhereCameraIsFacing);
PositionOfEyesOfPerson += CameraPositionDelta;
CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
Yaw *= MouseDampingRate;
Pitch *= MouseDampingRate;
CameraPositionDelta = CameraPositionDelta * PhysicalMovementDampingRate;
View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
ProjectionViewMatrix = Projection * View;
The Oculus Rift provides orientation data via its SDK, which can be accessed like so:
/*
Variables
*/
ovrMatrix4f OculusRiftProjection;
glm::mat4 Projection;
OVR::Quatf OculusRiftOrientation;
glm::quat CurrentOrientation;
/*
Partial Code for retrieving projection and orientation data from Oculus SDK
*/
OculusRiftProjection = ovrMatrix4f_Projection(MainEyeRenderDesc[l_Eye].Desc.Fov, 10.0f, 6000.0f, true);
for (int o = 0; o < 4; o++) {
    for (int i = 0; i < 4; i++) {
        Projection[o][i] = OculusRiftProjection.M[o][i];
    }
}
Projection = glm::transpose(Projection);
OculusRiftOrientation = PredictedPose.Orientation.Conj();
CurrentOrientation.w = OculusRiftOrientation.w;
CurrentOrientation.x = OculusRiftOrientation.x;
CurrentOrientation.y = OculusRiftOrientation.y;
CurrentOrientation.z = OculusRiftOrientation.z;
CurrentOrientation = glm::normalize(CurrentOrientation);
After that last line, the GLM-based quaternion "CurrentOrientation" has the correct information which, if plugged straight into an existing MVP matrix structure and sent to OpenGL, will allow you to move your head around in the environment without issue.
Now, my problem is how to combine the two parts together successfully.
When I have done this in the past, the rotation gets stuck in place (when you turn your head left you keep rotating left, as opposed to rotating only by the amount you actually turned), and I can no longer accurately determine the direction the person is facing, so my position controls break.
So at that point, since I can no longer determine what is "forward", my position controls essentially become useless...
How can I successfully achieve my goals?
I've done some work on this by maintaining a 'camera' matrix which represents the position and orientation of the player, and then, during rendering, composing that with the most recent orientation data collected from the headset.
I have a single interaction class which is designed to pull input from a variety of sources, including keyboard and joystick (as well as a spacemouse, or a Razer Hydra).
You'll probably find it easier to maintain the state as a single combined matrix like I do, rather than trying to compose a lookat matrix every frame.
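In outline the idea is something like this rough sketch (the names here are illustrative, not taken from my actual code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 player(1.0f);   // position + orientation of the player in the world

// Keyboard moves the player along his own axes (post-multiply = local space).
void moveLocal(const glm::vec3 &delta)
{
    player = player * glm::translate(glm::mat4(1.0f), delta);
}

// Per frame: the modelview is the inverse of the player pose, pre-multiplied
// by the inverse of the head pose delivered by the SDK,
// i.e. the inverse of (player * headPose).
glm::mat4 buildModelview(const glm::quat &headsetOrientation)
{
    return glm::inverse(glm::mat4_cast(headsetOrientation)) * glm::inverse(player);
}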
If you look at my Rift.cpp base class for developing my examples, you'll see that I capture keyboard input and accumulate it in the CameraControl instance, so that during the later applyInteraction call we can apply the movement indicated by the keyboard along with the other inputs:
void RiftApp::onKey(int key, int scancode, int action, int mods) {
    ...
    // Allow the camera controller to intercept the input
    if (CameraControl::instance().onKey(player, key, scancode, action, mods)) {
        return;
    }
    ...
}
In my per-frame update code I query any other enabled devices and apply all the inputs to the matrix. Then I update the modelview matrix with the inverse of the player position:
void RiftApp::update() {
    ...
    CameraControl::instance().applyInteraction(player);
    gl::Stacks::modelview().top() = glm::inverse(player);
    ...
}
Finally, in my rendering code I have the following, which applies the headset orientation:
void RiftApp::draw() {
    gl::MatrixStack & mv = gl::Stacks::modelview();
    gl::MatrixStack & pr = gl::Stacks::projection();
    for_each_eye([&](ovrEyeType eye) {
        gl::Stacks::with_push(pr, mv, [&]{
            ovrPosef renderPose = ovrHmd_BeginEyeRender(hmd, eye);
            // Set up the per-eye modelview matrix
            {
                // Apply the head pose
                glm::mat4 m = Rift::fromOvr(renderPose);
                mv.preMultiply(glm::inverse(m));
                // Apply the per-eye offset
                glm::vec3 eyeOffset = Rift::fromOvr(erd.ViewAdjust);
                mv.preMultiply(glm::translate(glm::mat4(), eyeOffset));
            }
            // Render the scene to an offscreen buffer
            frameBuffers[eye].activate();
            renderScene();
            frameBuffers[eye].deactivate();
            ovrHmd_EndEyeRender(hmd, eye, renderPose, &eyeTextures[eye].Texture);
        });
        GL_CHECK_ERROR;
    });
    ...
}
I'm trying to implement skeletal animation in a small program I'm writing. The idea is to calculate the transformation matrix on the CPU every frame by interpolating keyframe data, then feeding this data to my vertex shader which multiplies my vertices by this matrix like this:
vec4 v = animationMatrices[int(boneIndices.x)] * gl_Vertex * boneWeights.x;
Where boneWeights and boneIndices are attributes and animationMatrices is a uniform array of transformation matrices updated every frame before drawing. (The idea is to have multiple bones affecting one vertex eventually, but right now I'm testing with one bone per vertex so just taking the weight.x and indices.x is enough).
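For reference, the eventual multi-bone version would presumably just be the weighted sum of the per-bone results, something like this (assuming boneIndices and boneWeights are vec4 attributes and four bones per vertex):

// Blend the transforms of up to four bones per vertex.
vec4 v = animationMatrices[int(boneIndices.x)] * gl_Vertex * boneWeights.x
       + animationMatrices[int(boneIndices.y)] * gl_Vertex * boneWeights.y
       + animationMatrices[int(boneIndices.z)] * gl_Vertex * boneWeights.z
       + animationMatrices[int(boneIndices.w)] * gl_Vertex * boneWeights.w;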
Now the problem is calculating the transformation matrix for each bone. My transformation matrix for a single joint is good; the problem is that it always uses (0,0,0) as the pivot instead of the joint's actual pivot. I took the joint matrices from the COLLADA file, which correctly shows my skeleton when I draw it like this:
public void Draw()
{
    GL.PushMatrix();
    drawBone(Root);
    GL.PopMatrix();
}

private void drawBone(Bone b)
{
    GL.PointSize(50);
    GL.MultMatrix(ref b.restMatrix);

    GL.Begin(BeginMode.Points);
    GL.Color3((byte)0, (byte)50, (byte)0);
    if (b.Name == "Blades")
    {
        GL.Vertex3(0, 0, 0);
    }
    GL.End();

    foreach (Bone bc in b.Children)
    {
        GL.PushMatrix();
        drawBone(bc);
        GL.PopMatrix();
    }
}
So now to calculate the actual matrix I've tried:
Matrix4 jointMatrix = b.restMatrixInv * boneTransform * b.restMatrix;
or, according to the COLLADA documentation (this doesn't really make sense to me):
Matrix4 jointMatrix = b.restMatrix * b.restMatrixInv * boneTransform;
And I know I also have to put the parent matrix in here somewhere, I'm guessing something like this:
Matrix4 jointMatrix = b.restMatrixInv * boneTransform * b.restMatrix * b.Parent.jointMatrix;
But at the moment I'm mostly just confused and any push in the right direction would help. I really need to get this order right...
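If I understand the usual approach correctly, the composition I'm trying to reproduce is roughly the following, sketched here with GLM's column-vector order just to spell it out (with OpenTK's row-vector convention the multiplications are mirrored; names are hypothetical, not my actual code, and parents are assumed to be processed before their children):

#include <glm/glm.hpp>

// Hypothetical joint data, just to make the order explicit.
struct Joint
{
    Joint*    parent = nullptr;
    glm::mat4 localAnim;    // interpolated keyframe transform, local to the parent
    glm::mat4 inverseBind;  // COLLADA INV_BIND_MATRIX for this joint
    glm::mat4 world;        // accumulated world transform
};

// Accumulate the hierarchy first, then append the inverse bind matrix so a
// bind-pose vertex is moved into joint space before being posed.
glm::mat4 ComputeSkinMatrix(Joint &j)
{
    glm::mat4 parentWorld = j.parent ? j.parent->world : glm::mat4(1.0f);
    j.world = parentWorld * j.localAnim;
    return j.world * j.inverseBind;
}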
I am writing a viewer for a proprietary mesh & animation format in OpenGL.
During rendering a transformation matrix is created for each bone (node) and is applied to the vertices that bone is attached to.
It is possible for a bone to be marked as "Billboarded" which as most everyone knows, means it should always face the camera.
So the idea is to generate a matrix for that bone which when used to transform the vertices it's attached to, causes the vertices to be billboarded.
On my test model it should look like this:
However currently it looks like this:
Note that despite its incorrect orientation, it is billboarded: no matter which direction the camera looks, those vertices always face that direction at that orientation.
My code for generating the matrix for bones marked as billboarded is:
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec4 camPos = vec4(-view[3].x, -view[3].y, -view[3].z,1);
vec3 camUp = vec3(view[0].y, view[1].y, view[2].y);
// zero the translation in the matrix, so we can use the matrix to transform
// the camera position to world coordinates using the view matrix
view[3].x = view[3].y = view[3].z = 0;
// the view matrix is how to get to the gluLookAt pos from what we gave as
// input for the camera position, so to go the other way we need to reverse
// the rotation. Transposing the matrix will do this.
{
float * matrix = (float*)&view;
float temp[16];
// copy this into temp
memcpy(temp, matrix, sizeof(float) * 16);
matrix[1] = temp[4]; matrix[4] = temp[1];
matrix[2] = temp[8]; matrix[8] = temp[2];
matrix[6] = temp[9]; matrix[9] = temp[6];
}
// get the correct position of the camera in world space
camPos = view * camPos;
//vec3 pos = pivot;
vec3 look = glm::normalize(vec3(camPos.x-pos.x,camPos.y-pos.y,camPos.z-pos.z));
vec3 right = glm::cross(camUp,look);
vec3 up = glm::cross(look,right);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
I am using GLM to do the math involved.
Though this part of the code is based on the tutorial here, other parts of the code are based on an open-source program similar to the one I'm building. However, that program was written for DirectX and I haven't had much luck directly converting it. The (working) DirectX code for billboarding looks like this:
D3DXMatrixRotationY(&CameraRotationMatrixY, -Camera.GetPitch());
D3DXMatrixRotationZ(&CameraRotationMatrixZ, Camera.GetYaw());
D3DXMatrixMultiply(&CameraRotationMatrix, &CameraRotationMatrixY, &CameraRotationMatrixZ);
D3DXQuaternionRotationMatrix(&CameraRotation, &CameraRotationMatrix);
D3DXMatrixTransformation(&CameraRotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &CameraRotation, NULL);
D3DXMatrixDecompose(&Scaling, &Rotation, &Translation, &BaseMatrix);
D3DXMatrixTransformation(&RotationMatrix, NULL, NULL, NULL, &ModelBaseData->PivotPoint, &Rotation, NULL);
D3DXMatrixMultiply(&TempMatrix, &CameraRotationMatrix, &RotationMatrix);
D3DXMatrixMultiply(&BaseMatrix, &TempMatrix, &BaseMatrix);
Note that the results are stored in BaseMatrix in the DirectX version.
EDIT2: Here's the code I came up with when I tried to modify my code according to datenwolf's suggestions. I'm pretty sure I made some mistakes still. This attempt creates heavily distorted results with one end of the object directly in the camera.
mat4 view;
glGetFloatv(GL_MODELVIEW_MATRIX, (float*)&view);
vec3 pos = vec3(calculatedMatrix[3].x,calculatedMatrix[3].y,calculatedMatrix[3].z);
mat4 inverted = glm::inverse(view);
vec4 plook = inverted * vec4(0,0,0,1);
vec3 look = vec3(plook.x,plook.y,plook.z);
vec3 right = orthogonalize(vec3(view[0].x,view[1].x,view[2].x),look);
vec3 up = orthogonalize(vec3(view[0].y,view[1].y,view[2].y),look);
mat4 bmatrix;
bmatrix[0].x = right.x;
bmatrix[0].y = right.y;
bmatrix[0].z = right.z;
bmatrix[0].w = 0;
bmatrix[1].x = up.x;
bmatrix[1].y = up.y;
bmatrix[1].z = up.z;
bmatrix[1].w = 0;
bmatrix[2].x = look.x;
bmatrix[2].y = look.y;
bmatrix[2].z = look.z;
bmatrix[2].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
calculatedMatrix = bmatrix;
vec3 orthogonalize(vec3 toOrtho, vec3 orthoAgainst) {
    float bottom = (orthoAgainst.x*orthoAgainst.x)+(orthoAgainst.y*orthoAgainst.y)+(orthoAgainst.z*orthoAgainst.z);
    float top = (toOrtho.x*orthoAgainst.x)+(toOrtho.y*orthoAgainst.y)+(toOrtho.z*orthoAgainst.z);
    return toOrtho - top/bottom*orthoAgainst;
}
Creating a parallel-to-view billboard matrix is as simple as setting the upper-left 3×3 submatrix of the total modelview matrix to identity. There are only some cases where you actually need the true look vector.
Anyway, you're thinking about this in a far too complicated way. All your tinkering with the matrix completely misses the point: the modelview transformation assumes that the camera is always at (0,0,0) and moves the world and the models in the opposite direction instead. What you are trying to do is find the vector in model space that points towards the camera, which is simply the vector that will point toward (0,0,0) after the transformation.
So all we have to do is invert the modelview matrix and transform (0,0,0,1) with it. That's your look vector. For your right and up vectors, orthogonalize the 1st (X) and 2nd (Y) columns of the modelview matrix against that look vector.
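A rough sketch of that recipe with GLM, just to illustrate the idea (not a verified drop-in fix; the axis conventions of your model format may still require swapping the columns):

#include <glm/glm.hpp>

// Recover the camera position and axes from the inverse modelview, then build
// an orientation whose Z column points at the camera. `pos` is the billboard
// pivot in the space the modelview maps from.
glm::mat4 billboardMatrix(const glm::mat4 &modelview, const glm::vec3 &pos)
{
    glm::mat4 camToWorld = glm::inverse(modelview);

    // The camera sits at (0,0,0) in eye space, so this is its position.
    glm::vec3 camPos = glm::vec3(camToWorld * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
    glm::vec3 look   = glm::normalize(camPos - pos);

    // The camera's own X axis, orthogonalized against the look vector.
    glm::vec3 camRight = glm::vec3(camToWorld[0]);
    glm::vec3 right    = glm::normalize(camRight - look * glm::dot(camRight, look));
    glm::vec3 up       = glm::cross(look, right);

    glm::mat4 b(1.0f);
    b[0] = glm::vec4(right, 0.0f);
    b[1] = glm::vec4(up,    0.0f);
    b[2] = glm::vec4(look,  0.0f);
    b[3] = glm::vec4(pos,   1.0f);
    return b;
}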
Figured it out myself. It turns out the model format I'm using uses different axes for billboarding. Most billboarding implementations (including the one I used) use the X,Y coordinates to position the billboarded object. The format I was reading uses Y and Z.
The thing to look for is that there was a billboarding effect, but it faced the wrong direction. To fix this I played with the different camera vectors until I arrived at the correct matrix calculation:
bmatrix[1].x = right.x;
bmatrix[1].y = right.y;
bmatrix[1].z = right.z;
bmatrix[1].w = 0;
bmatrix[2].x = up.x;
bmatrix[2].y = up.y;
bmatrix[2].z = up.z;
bmatrix[2].w = 0;
bmatrix[0].x = look.x;
bmatrix[0].y = look.y;
bmatrix[0].z = look.z;
bmatrix[0].w = 0;
bmatrix[3].x = pos.x;
bmatrix[3].y = pos.y;
bmatrix[3].z = pos.z;
bmatrix[3].w = 1;
My attempts to follow datenwolf's advice did not succeed, and at this time he hasn't offered any additional explanation, so I'm unsure why. Thanks anyway!