Orbit camera around sphere while looking at the center (using quaternions) - C++

I have looked at a ton of quaternion examples on several sites including this one, but found none that answer this, so here goes...
I want to orbit my camera around a large sphere, and have the camera always point at the center of the sphere. To make things easy, the sphere is located at {0,0,0} in the world. I am using a quaternion for camera orientation and a vector for camera position. The problem is that the camera position orbits the sphere perfectly, as it should, but the camera always looks in one constant direction, straight ahead, instead of adjusting to point at the center as it orbits.
This must be something simple, but I am new to quaternions... what am I missing?
I'm using C++, DirectX9. Here is my code:
// Note: g_camRotAngleYawDir and g_camRotAnglePitchDir are set to either 1.0f, -1.0f according to keypresses, otherwise equal 0.0f
// Also, g_camOrientationQuat is just an identity quat to begin with.
float theta = 0.05f;
D3DXQUATERNION g_deltaQuat( 0.0f, 0.0f, 0.0f, 1.0f );
D3DXQuaternionRotationYawPitchRoll(&g_deltaQuat, g_camRotAngleYawDir * theta, g_camRotAnglePitchDir * theta, g_camRotAngleRollDir * theta);
D3DXQUATERNION tempQuat = g_camOrientationQuat;
D3DXQuaternionMultiply(&g_camOrientationQuat, &tempQuat, &g_deltaQuat);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
[EDIT - Feb 13, 2012]
OK, here's my understanding so far:
Move the camera position using an angular delta each frame.
Get a vector from the center to the camera position.
Call quaternionRotationBetweenVectors with a z-facing unit vector and that target vector.
Then use the result of that function for the orientation of the view matrix, with the camera position in the translation portion of the view matrix.
Here's the new code (called every frame)...
// This part orbits the position around the sphere according to deltas for yaw, pitch, roll
D3DXQuaternionRotationYawPitchRoll(&deltaQuat, yawDelta, pitchDelta, rollDelta);
D3DXMatrixRotationQuaternion(&mat1, &deltaQuat);
D3DXVec3Transform(&g_camPosition, &g_camPosition, &mat1);
// This part adjusts the orientation of the camera to point at the center of the sphere
dir1 = normalize(vec3(0.0f, 0.0f, 0.0f) - g_camPosition);
QuaternionRotationBetweenVectors(&g_camOrientationQuat, vec3(0.0f, 0.0f, 1.0f), &dir1);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
...I tried that solution out, without success. What am I doing wrong?

It should be as easy as doing the following whenever you update the position (assuming the camera is pointing along the z-axis):
direction = normalize(center - position)
orientation = quaternionRotationBetweenVectors(vec3(0,0,1), direction)
It is fairly easy to find examples of how to implement quaternionRotationBetweenVectors. You could start with the question "Finding quaternion representing the rotation from one vector to another".
Here's an untested sketch of an implementation using the DirectX9 API:
D3DXQUATERNION* quaternionRotationBetweenVectors(__inout D3DXQUATERNION* result, __in const D3DXVECTOR3* v1, __in const D3DXVECTOR3* v2)
{
    // The rotation axis is perpendicular to both input vectors.
    D3DXVECTOR3 axis;
    D3DXVec3Cross(&axis, v1, v2);
    D3DXVec3Normalize(&axis, &axis);
    // Clamp the dot product to [-1, 1] so floating-point drift can't push acosf out of its domain.
    float dot = D3DXVec3Dot(v1, v2);
    if (dot > 1.0f) dot = 1.0f;
    else if (dot < -1.0f) dot = -1.0f;
    float angle = acosf(dot);
    D3DXQuaternionRotationAxis(result, &axis, angle);
    return result;
}
This implementation expects v1 and v2 to be normalized, and it does not handle the degenerate case where the vectors are parallel (the cross product is then zero and cannot be normalized).
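To tie that back to the view matrix: a view matrix is the inverse of the camera's world transform, so its translation should hold the rotated, negated camera position rather than the raw position. Below is an untested sketch of how the per-frame update could fit together, reusing the question's variable names; treat it as an assumption about the wiring, not verified code.
// Untested sketch: aim the camera at the origin, then build the view matrix as
// the inverse of the camera's world transform (world = rotation * translation in
// D3DX's row-vector convention, so view = translation(-pos) * rotation(q^-1)).
D3DXVECTOR3 center(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 dir = center - g_camPosition;
D3DXVec3Normalize(&dir, &dir);
D3DXVECTOR3 forward(0.0f, 0.0f, 1.0f); // the camera's default facing direction
quaternionRotationBetweenVectors(&g_camOrientationQuat, &forward, &dir);

D3DXQUATERNION invOrient;
D3DXQuaternionInverse(&invOrient, &g_camOrientationQuat);
D3DXMATRIX rotation, translation;
D3DXMatrixRotationQuaternion(&rotation, &invOrient);
D3DXMatrixTranslation(&translation, -g_camPosition.x, -g_camPosition.y, -g_camPosition.z);
D3DXMatrixMultiply(&g_viewMatrix, &translation, &rotation); // translate first, then rotate
g_direct3DDevice9->SetTransform(D3DTS_VIEW, &g_viewMatrix);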

You need to show how you calculate the angles. Is there a reason why you aren't using D3DXMatrixLookAtLH?
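For reference, a minimal sketch of what the D3DXMatrixLookAtLH route might look like with the question's variables (assuming a left-handed setup with Y as world up):
// Untested sketch: build the view matrix directly from eye/target/up.
D3DXVECTOR3 eye = g_camPosition;        // current position on the orbit
D3DXVECTOR3 at(0.0f, 0.0f, 0.0f);       // center of the sphere
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);       // world up (assumed)
D3DXMatrixLookAtLH(&g_viewMatrix, &eye, &at, &up);
g_direct3DDevice9->SetTransform(D3DTS_VIEW, &g_viewMatrix);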

Related

Is there any way to extract a transform matrix from a view matrix in glm?

I need to extract the transform matrix from my camera to assign it to a mesh.
I'm working on a computer graphics project at school; the objective is to simulate the arms of a character in a first-person perspective.
My camera implementation includes a vector3 for the camera position, so I can assign that to my mesh. The problem is that I can't yet extract the rotation of the camera from my view matrix.
I calculate my final pitch and yaw in the rotation function this way (x and y are the current mouse position on the screen):
m_yaw += (x - m_mouseLastPosition.x) * m_rotateSpeed;
m_pitch -= (y - m_mouseLastPosition.y) * m_rotateSpeed;
This is how I update the view matrix when it changes:
glm::vec3 newFront;
newFront.x = -cos(glm::radians(m_yaw)) * cos(glm::radians(m_pitch));
newFront.y = sin(glm::radians(m_yaw)) * cos(glm::radians(m_pitch));
newFront.z = sin(glm::radians(m_pitch));
m_front = glm::normalize(newFront);
m_right = glm::normalize(glm::cross(m_front, m_worldUp));
m_up = glm::normalize(glm::cross(m_right, m_front));
m_viewMatrix = glm::lookAt(m_position, (m_position + m_front), m_up);
Right now I can assign the position of the camera to my mesh, like this
m_mesh.m_transform = glm::translate(glm::mat4(1.0f), m_camera.m_position);
I can assign the camera position successfully, but not rotation.
What I expect is to assign the full camera transform to my mesh, or to extract the rotation independently and assign it to the mesh afterwards.
The steps I have always followed to set up the model-view-projection matrix (which doesn't mean they're 100% right), and which appear to be what you are having trouble with, are:
// Eye position is in world coordinate system, as is scene_center. up_vector is normalized.
glm::dmat4 view = glm::lookAt(eye_position, scene_center, up_vector);
glm::dmat4 proj = glm::perspective(field_of_view, aspect, near_x, far_x);
// This converts the model from its units to the units of the world coordinate system
glm::dmat4 model = glm::scale(glm::dmat4(1.0), glm::dvec3(1.0, 1.0, 1.0));
// Add model level rotations here utilizing glm::rotate
// offset is where the object's origin should be mapped to in the world coordinate system
model = glm::translate(model, offset);
// Order of course matters here.
glm::dmat4 mvp = proj * view * model;
Hope that helps.
Thank you so much for your answers, they helped a lot.
I managed to solve my problem in a very simple way: I just had to directly assign the final transform to my mesh using the separate properties of my camera.
GLM is new to me, and I wasn't familiar with the way it handles matrix multiplications.
The final code takes a translation matrix built from the camera position, then rotates that matrix around the Y axis by my yaw, and finally rotates the result around the X axis by my pitch:
m_mesh.m_transform = glm::rotate(
    glm::rotate(
        glm::translate(glm::mat4(1.0f), camera.m_position),
        glm::radians(-camera.m_yaw), glm::vec3(0.0f, 1.0f, 0.0f)),
    glm::radians(-camera.m_pitch), glm::vec3(1.0f, 0.0f, 0.0f));
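An alternative worth noting, sketched here as an untested idea: since the view matrix is the inverse of the camera's world transform, inverting it recovers the full camera transform (rotation and translation) in one call, assuming the camera's m_viewMatrix member shown above is accessible.
// Untested sketch: inverse(view) is the camera's world transform, so it can be
// assigned to the mesh directly instead of rebuilding it from position/yaw/pitch.
m_mesh.m_transform = glm::inverse(m_camera.m_viewMatrix);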

Rotating object around another object

So I have a camera object in my scenegraph which I want to rotate around another object and still look at it.
So far, the code I've tried for translating its position just keeps moving the camera a small amount to the right and back again.
Here is the code I've tried using in my game update loop:
//ang is set to 75.0f
camera.position += camera.right * glm::vec3(cos(ang * deltaTime), 1.0f, 1.0f);
I'm not really sure where I'm going wrong. I've looked at other code that rotates around an object and it uses cosine and sine, but since I'm only translating along the x axis I thought I would only need this.
First you have to create a rotated vector. This can be done with glm::rotateZ. Note that since GLM version 0.9.6 the angle has to be given in radians.
// glm::rotateZ lives in the extension header <glm/gtx/rotate_vector.hpp>
float ang = ....;            // angle per second in radians
float timeSinceStart = ....; // seconds since the start of the animation
float dist = ....;           // distance from the camera to the target
glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
Furthermore, you must know the point around which the camera should turn. Probably the position of the object:
glm::vec3 objectPosition = .....; // position of the object where the camera looks to
The new position of the camera is the position of the object, displaced by the rotation vector:
camera.position = objectPosition + cameraVec;
The target of the camera has to be the object position, because the camera should look at the object:
camera.front = glm::normalize(objectPosition - camera.position);
The up vector of the camera should be the z-axis (rotation axis):
camera.up = glm::vec3(0.0f, 0.0f, 1.0f);
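Putting those pieces together, one update step might look like the sketch below. The Camera type and its position/front/up fields are assumptions that mirror the snippets above, and the view matrix is built with glm::lookAt at the end.
#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp>     // glm::rotateZ
#include <glm/gtc/matrix_transform.hpp>  // glm::lookAt

// Sketch of one frame of an orbiting camera (field names assumed, not from the question).
glm::mat4 UpdateOrbit(Camera& camera, const glm::vec3& objectPosition,
                      float ang, float timeSinceStart, float dist)
{
    // Rotate the offset vector around the z axis over time.
    glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
    // Place the camera relative to the object and aim it back at the object.
    camera.position = objectPosition + cameraVec;
    camera.front = glm::normalize(objectPosition - camera.position);
    camera.up = glm::vec3(0.0f, 0.0f, 1.0f);  // the rotation axis doubles as the up vector
    // Build the view matrix for rendering.
    return glm::lookAt(camera.position, objectPosition, camera.up);
}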

Window coordinates to camera angles?

So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
mHorizontalAngle += offsetHorizontalAngle;
mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
mVerticalAngle += offsetVerticalAngle;
mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
Quaternion rotation;
rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how, in my example game, to get the actual angles. All I have is the current and previous mouse location in window coordinates; how can I get proper angles from that?
EDIT: On second thought, my "RotateCamera()" can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? I can't just sum them up endlessly.
Take a cross-section of the viewing frustum (shown as a figure in the original answer, with a blue circle marking your mouse position):
Theta is half of your FOV
p is your projection plane distance (don't worry - it will cancel out)
From simple ratios it is clear that x / (w/2) = (p * tan(psi)) / (p * tan(theta)), where x is the horizontal distance of the mouse from the center of the screen and w is the screen width in pixels.
But from simple trigonometry the p terms cancel, giving tan(psi) = (2 * x / w) * tan(theta).
So psi = atan((2 * x / w) * tan(theta)).
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula holds for the vertical angle: phi = atan((2 * y / h) * tan(theta) / A), where A is your aspect ratio (width / height), h is the screen height in pixels, and y is the vertical distance of the mouse from the center of the screen.
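As a rough sketch of the horizontal formula in code (the function and parameter names here are made up for illustration):
#include <cmath>

// Sketch: convert a horizontal mouse coordinate into the angle psi described above.
// x          - mouse x position in pixels, measured from the left edge of the window
// width      - window width in pixels
// halfFovRad - theta, half of the horizontal field of view, in radians
float WindowXToAngle(float x, float width, float halfFovRad)
{
    float centered = x - 0.5f * width;  // 0 at the center of the window
    return std::atan((2.0f * centered / width) * std::tan(halfFovRad));
}
Evaluating this for the previous and current mouse x and subtracting the two results gives the horizontal angle offset to feed into RotateCamera(); note the result is in radians, while RotateCamera() above accumulates degrees.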

Issue with GLM Camera X,Y Rotation introducing Z Rotation

So I've been having trouble with a camera I've implemented in OpenGL and C++ using the GLM library. The type of camera I'm aiming for is a fly around camera which will allow easy exploration of a 3D world. I have managed to get the camera pretty much working, it's nice and smooth, looks around and the movement seems to be nice and correct.
The only problem I seem to have is that rotation about the camera's X and Y axes (looking up and down) introduces some rotation about its Z axis. This has the result of causing the world to slightly roll whilst travelling about.
As an example... if I have a square quad in front of the camera and move the camera in a circular motion, so as if looking around in a circle with your head, once the motion is complete the quad will have rolled slightly as if you've tilted your head.
My camera is currently a component which I can attach to an object/entity in my scene. Each entity has a "Frame" which is basically the model matrix for that entity. The Frame contains the following attributes:
glm::mat4 m_Matrix;
glm::vec3 m_Position;
glm::vec3 m_Up;
glm::vec3 m_Forward;
These are then used by the camera to create the appropriate viewMatrix like this:
const glm::mat4& CameraComponent::GetViewMatrix()
{
//Get the transform of the object
const Frame& transform = GetOwnerGO()->GetTransform();
//Update the viewMatrix
m_ViewMatrix = glm::lookAt(transform.GetPosition(), //position of camera
transform.GetPosition() + transform.GetForward(), //position to look at
transform.GetUp()); //up vector
//return reference to the view matrix
return m_ViewMatrix;
}
And now... here are my rotate X and Y methods within the Frame object, which I'm guessing is where the problem lies:
void Frame::RotateX( float delta )
{
glm::vec3 cross = glm::normalize(glm::cross(m_Up, m_Forward)); //calculate x axis
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, cross);
m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f))); //Rotate forward vector by new rotation
m_Up = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Up, 0.0f))); //Rotate up vector by new rotation
}
void Frame::RotateY( float delta )
{
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
//Rotate forward vector by new rotation
m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f)));
}
So somewhere in there, there's a problem which I've been searching around trying to fix. I've been messing with it for a few days now, trying random things, but I either get the same result, or the Z-axis rotation is fixed but other bugs appear, such as incorrect X/Y rotation and camera movement.
I had a look at gimbal lock but from what I understood of it, this problem didn't seem quite like gimbal lock to me. But I may be wrong.
Store the current pitch/yaw angles and generate the camera matrix on-the-fly instead of trying to accumulate small changes on the intermediate vectors.
In your RotateY function, change it from this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
to this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, glm::vec3(0,1,0));
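As a sketch of the first suggestion (store the angles and rebuild the orientation every frame), assuming yaw and pitch are kept as plain floats in radians (names here are hypothetical):
// Sketch: rebuild the orientation from stored yaw/pitch instead of accumulating
// small rotations into m_Forward and m_Up, which is how roll creeps in.
glm::mat4 BuildOrientation(float yawRadians, float pitchRadians)
{
    glm::mat4 yaw   = glm::rotate(glm::mat4(1.0f), yawRadians,   glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 pitch = glm::rotate(glm::mat4(1.0f), pitchRadians, glm::vec3(1.0f, 0.0f, 0.0f));
    return yaw * pitch;  // yaw about the world Y axis, then pitch about the camera's local X axis
}

// Forward and up can then be derived fresh each frame:
// m_Forward = glm::vec3(BuildOrientation(yaw, pitch) * glm::vec4(0.0f, 0.0f, -1.0f, 0.0f));
// m_Up      = glm::vec3(BuildOrientation(yaw, pitch) * glm::vec4(0.0f, 1.0f,  0.0f, 0.0f));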

C++ OpenGL Quaternion for Camera flips it upside down

It does look at the target when I move the target to the right, and keeps looking at it until the direction gets 180 degrees away from the -z axis, at which point it decides to go the other way.
Matrix4x4 camera::GetViewMat()
{
Matrix4x4 oRotate, oView;
oView.SetIdentity();
Vector3 lookAtDir = m_targetPosition - m_camPosition;
Vector3 lookAtHorizontal = Vector3(lookAtDir.GetX(), 0.0f, lookAtDir.GetZ());
lookAtHorizontal.Normalize();
float angle = acosf(Vector3(0.0f, 0.0f, -1.0f).Dot(lookAtHorizontal));
Quaternions horizontalOrient(angle, Vector3(0.0f, 1.0f, 0.0f));
ori = horizontalOrient;
ori.Conjugate();
oRotate = ori.ToMatrix();
Vector3 inverseTranslate = Vector3(-m_camPosition.GetX(), -m_camPosition.GetY(), -m_camPosition.GetZ());
oRotate.Transform(inverseTranslate);
oRotate.Set(0, 3, inverseTranslate.GetX());
oRotate.Set(1, 3, inverseTranslate.GetY());
oRotate.Set(2, 3, inverseTranslate.GetZ());
oView = oRotate;
return oView;
}
As promised, a bit of code showing the way I'd make a camera look at a specific point in space at all times.
First of all, we'd need a method to construct a quaternion from an angle and an axis. I happen to have that on pastebin; the angle input is in radians:
http://pastebin.com/vLcx4Qqh
Make sure you don't input the axis (0,0,0), which wouldn't make any sense whatsoever.
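Since the pastebin contents aren't reproduced here, a minimal sketch of such a constructor is shown below; the quaternion and vec3 member names are assumptions based on how they are used in the code that follows.
#include <cmath>

// Sketch of an angle/axis quaternion constructor: q = (cos(angle/2), axis * sin(angle/2)).
// angle is in radians; axis must be non-zero and is normalized here.
quaternion::quaternion(float angle, const vec3& axis)
{
    vec3 n = axis.normalize();   // assumes normalize() returns a unit-length copy, as used below
    float half = 0.5f * angle;
    float s = std::sin(half);
    w = std::cos(half);
    x = n.x * s;
    y = n.y * s;
    z = n.z * s;
}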
Now the actual update method: we need to get the quaternion rotating the camera from its default orientation to pointing towards the target point. PLEASE note I just wrote this off the top of my head; it probably needs a little debugging and may need a little optimization, but this should at least give you a push in the right direction.
void camera::update()
{
// First get the direction from the camera's position to the target point
vec3 lookAtDir = m_targetPoint - m_position;
// I'm going to divide the vector into two 'components', the Y axis rotation
// and the Up/Down rotation, like a regular camera would work.
// First to calculate the rotation around the Y axis, so we zero out the y
// component:
vec3 lookAtHorizontal = vec3(lookAtDir.x, 0.0f, lookAtDir.z).normalize();
// Get the quaternion from 'default' direction to the horizontal direction
// In this case, 'default' direction is along the -z axis, like most OpenGL
// programs. Make sure the projection matrix works according to this.
float angle = acos(vec3(0.0f, 0.0f, -1.0f).dot(lookAtHorizontal));
quaternion horizontalOrient(angle, vec3(0.0f, 1.0f, 0.0f));
// Since we already stripped the Y component, we can simply get the up/down
// rotation from it as well.
angle = acos(lookAtDir.normalize().dot(lookAtHorizontal));
if(angle) horizontalOrient *= quaternion(angle, lookAtDir.cross(lookAtHorizontal));
// ...
m_orientation = horizontalOrient;
}
Now to actually take m_orientation and m_position and get the world -> camera matrix
// First inverse each element (-position and inverse the quaternion),
// the position is rotated since the position within a matrix is 'added' last
// to the output vector, so it needs to account for rotation of the space.
mat3 rotationMatrix = m_orientation.inverse().toMatrix();
vec3 inverseTranslate = rotationMatrix * -m_position; // Note the minus
mat4 matrix = mat4(rotationMatrix); // expand the 3x3 rotation to a 4x4; the last entry (bottom right of the matrix) is a 1.0f like an identity matrix would be.
// This bit is row-major in my case, you just need to set the translation of the matrix.
matrix[3] = inverseTranslate.x;
matrix[7] = inverseTranslate.y;
matrix[11] = inverseTranslate.z;
EDIT: I think it should be obvious, but just for completeness: .dot() takes the dot product of the vectors, .cross() takes the cross product; the object executing the method is vector A, and the parameter of the method is vector B.