Determine angle between camera position and world point in 3D engine - C++

I want to determine the horizontal and vertical angle, from a camera's position to a world point, in respect to the camera's forward axis.
My linear algebra is a bit rusty, but given the camera's forward, up, and right vector, for example:
camForward = [0 0 1];
camUp = [0 1 0];
camRight = [1 0 0];
And the camera position and world point, for example:
camPosition = [1 2 3];
worldPoint = [5 6 4];
The sought-after angles should be determinable by first taking the difference of the positions:
delta = worldPoint-camPosition;
Then projecting it on the camera axes using the dot products:
deltaHorizontal = dot(delta,camRight);
deltaVertical = dot(delta,camUp);
deltaDepth = dot(delta,camForward);
And finally computing angles as:
angleHorizontal = atan(deltaHorizontal/deltaDepth);
angleVertical = atan(deltaVertical/deltaDepth);
In the example case, this yields ~76° for both angles, which seems reasonable; varying the positions and axes also seems to give reasonable results.
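For reference, the derivation above translates almost directly into GLM. A minimal sketch (names are illustrative; the two-argument glm::atan, i.e. atan2, is used instead of plain atan so the angles stay valid when the point lies behind the camera):
#include <glm/glm.hpp>

struct CameraAngles { float horizontal, vertical; };

CameraAngles anglesToPoint(const glm::vec3& camPosition, const glm::vec3& camRight,
                           const glm::vec3& camUp, const glm::vec3& camForward,
                           const glm::vec3& worldPoint)
{
    glm::vec3 delta = worldPoint - camPosition;          // vector from camera to point
    float deltaHorizontal = glm::dot(delta, camRight);   // projection on the right axis
    float deltaVertical   = glm::dot(delta, camUp);      // projection on the up axis
    float deltaDepth      = glm::dot(delta, camForward); // projection on the forward axis
    // Two-argument atan (atan2) remains correct for deltaDepth <= 0.
    // For the example values above, both angles come out at ~76° (returned in radians).
    return { glm::atan(deltaHorizontal, deltaDepth),
             glm::atan(deltaVertical, deltaDepth) };
}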
Thus, if I am not getting the angles I expect, it should be because I am using incorrect positions and/or camera axes. It is worth noting that the 3D engine is using OpenGL and GLM.
I am fairly certain that the positions are correct, as moving around in the scene and inspecting the positions in relation to known reference points gives consistent and correct results, which leads me to believe that I am using the wrong camera axes. To get the angles I am using (the equivalent of):
glm::vec3 worldPoint = glm::unProject(glm::vec3(windowX, windowY, windowZ), viewMatrix, projectionMatrix, glm::vec4(0, 0, windowWidth, windowHeight));
glm::vec3 delta = worldPoint - cameraData->position; // worldPoint - camPosition, as in the derivation above
float horizontalDistance = glm::dot(delta, cameraData->right);
float verticalDistance = glm::dot(delta, cameraData->up);
float depthDistance = glm::dot(delta, cameraData->forward);
float horizontalAngle = glm::atan(horizontalDistance / depthDistance);
float verticalAngle = glm::atan(verticalDistance / depthDistance);
Each frame, forward, up, and right are read from a view matrix, viewMatrix, which in turn is produced by converting a quaternion, Q, that holds the mouse-controlled camera rotation:
void updateView(CameraData * cameraData, MouseData * mouseData, MouseParameters * mouseParameters){
    float deltaX = mouseData->currentX - mouseData->lastX;
    float deltaY = mouseData->currentY - mouseData->lastY;
    mouseData->lastX = mouseData->currentX;
    mouseData->lastY = mouseData->currentY;

    float pitch = mouseParameters->sensitivityY * deltaY;
    float yaw = mouseParameters->sensitivityX * deltaX;

    glm::quat pitch_Q = glm::quat(glm::vec3(pitch, 0.0f, 0.0f));
    glm::quat yaw_Q = glm::quat(glm::vec3(0.0f, yaw, 0.0f));
    cameraData->Q = pitch_Q * cameraData->Q * yaw_Q;
    cameraData->Q = glm::normalize(cameraData->Q);

    glm::mat4 rotation = glm::toMat4(cameraData->Q);
    glm::mat4 translation = glm::mat4(1.0f);
    translation = glm::translate(translation, -(cameraData->position));
    cameraData->viewMatrix = rotation * translation;

    cameraData->forward = (cameraData->viewMatrix)[2];
    cameraData->up = (cameraData->viewMatrix)[1];
    cameraData->right = (cameraData->viewMatrix)[0];
}
However, something goes wrong, and the correct angles are seemingly only produced while looking along, or perpendicular to, the world z-axis ([0 0 1]). Where am I mistaken?
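One thing worth double-checking in that regard: since viewMatrix = rotation * translation maps world space into camera space, its rotation block is the transpose of the camera's orientation, so the camera's world-space axes are the rows of that block, while viewMatrix[0], viewMatrix[1] and viewMatrix[2] return its columns (GLM matrices are column-major). A hedged sketch of extracting the rows instead:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp> // glm::row

// Camera axes in world space, read from the ROWS of the view matrix's
// rotation block. Columns only coincide with rows when the rotation is
// symmetric (e.g. the identity), which would match the observed behavior.
void extractCameraAxes(const glm::mat4& viewMatrix,
                       glm::vec3& right, glm::vec3& up, glm::vec3& forward)
{
    right   = glm::vec3(glm::row(viewMatrix, 0));
    up      = glm::vec3(glm::row(viewMatrix, 1));
    forward = glm::vec3(glm::row(viewMatrix, 2)); // sign depends on your handedness convention
}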

Related

Quaternion-based First Person View Camera

I have been learning OpenGL by following the tutorial located at https://paroj.github.io/gltut/.
Having gotten past the basics, I got a bit stuck at understanding quaternions and their relation to spatial orientation and transformations, especially from world space to camera space and vice versa. In the chapter Camera-Relative Orientation, the author makes a camera which rotates a model in world space relative to the camera orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess would be that if we applied the offset to camera space, we would get the first-person camera. Is this correct? Instead, the offset is applied to the model in world space, making the spaceship spin relative to that space, and not to camera space. We just observe it spin from camera space.
Inspired by at least some understanding of quaternions (or so I thought), I tried to implement the first person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
Position is modified in reaction to keyboard actions, while the orientation changes due to mouse movement on screen.
Note: GLM overloads * operator for glm::quat * glm::vec3 with the relation for rotating a vector by a quaternion (more compact form of v' = qvq^-1)
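A two-line illustration of that overload (hypothetical values, not from the code below):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate the -Z unit vector by 90 degrees around +Y; the overloaded
// operator* applies q * v * q^-1 and yields approximately (-1, 0, 0).
glm::quat q = glm::angleAxis(glm::radians(90.0f), glm::vec3(0, 1, 0));
glm::vec3 v = q * glm::vec3(0, 0, -1);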
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
Orientation and position information is passed to glm::lookAt for constructing the world-to-camera transformation, like so:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining the model, view and projection matrices and passing the result to the vertex shader displays everything okay, the way one would expect to see things from a first-person POV. However, things get messy when I add mouse movements, tracking the amount of movement in the x and y directions. I want to rotate around the world y-axis and the local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What would the problem be here?
I admit, I don't have enough knowledge to debug the mouse movement code properly; I mainly followed the lines saying "right multiply to apply the offset in world space, left multiply to do it in camera space."
I feel like I know things half-way, drawing conclusions from a plethora of e-resources on the subject, while getting more educated and more confused at the same time.
Thanks for any answers.
To rotate a glm quaternion representing orientation:
// Precomputation:
// pitch (rotation around x, in radians),
// yaw   (rotation around y, in radians),
// roll  (rotation around z, in radians)
// are computed/incremented by mouse/keyboard events
To compute the view matrix:
void CameraFPSQuaternion::UpdateView()
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
If you want to store the quaternion, then you recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around the camera's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));
    m_orientation = glm::normalize(qPitch) * m_orientation;
    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
If you want to apply a rotation speed around a given axis, you can use slerp (glm::mix on quaternions interpolates spherically):
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat m_d_orientation = qPitch * qYaw;

    // Interpolate from the identity quaternion (w = 1) toward this frame's delta
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), m_d_orientation, deltaTimeSeconds);
    m_orientation = glm::normalize(delta) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
The problem lay with the usage of glm::lookAt for constructing the view matrix. Instead, I am now constructing the view matrix like so:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
For translation, I'm now left-multiplying with the inverse of the orientation instead of the orientation itself.
glm::quat invOrient = glm::conjugate(orientation);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else is the same, apart from some further offset quaternion normalizations in the mouse movement code.
The camera now behaves and feels like a first-person camera.
I still don't properly understand the difference between view matrix and lookAt matrix, if there is any. But that's the topic for another question.
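For what it's worth, glm::lookAt produces exactly this kind of rotation-times-translation view matrix. A simplified sketch of what the right-handed version constructs (mirroring the GLM source in spirit):
#include <glm/glm.hpp>

// lookAt builds an orthonormal camera basis, places it in the rows of the
// rotation block, and appends a translation by -eye: the same
// rotate * translate product as above, assembled in one step.
glm::mat4 lookAtSketch(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& up)
{
    glm::vec3 f = glm::normalize(center - eye);      // forward
    glm::vec3 s = glm::normalize(glm::cross(f, up)); // right ("side")
    glm::vec3 u = glm::cross(s, f);                  // true up

    glm::mat4 m(1.0f);
    m[0][0] = s.x;  m[1][0] = s.y;  m[2][0] = s.z;  // row 0: right
    m[0][1] = u.x;  m[1][1] = u.y;  m[2][1] = u.z;  // row 1: up
    m[0][2] = -f.x; m[1][2] = -f.y; m[2][2] = -f.z; // row 2: -forward
    m[3][0] = -glm::dot(s, eye);                    // translation: -eye in the camera basis
    m[3][1] = -glm::dot(u, eye);
    m[3][2] =  glm::dot(f, eye);
    return m;
}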

OpenGL transforming objects with multiple rotations of different axes

I am building a modeling program and I'd like to do transformations on objects in their own space, and then assign that single object to a group which rotates around another axis. However, I'd also like to be able to keep doing transformations in the object's own space while it is part of the group.
Manipulating the individual object, I pick the object's center.
glm::mat4 transform(1.0f);
transform = glm::translate(transform, -obj.meshCenter);
glm::mat4 transform1(1.0f);
transform1 = glm::translate(transform1, obj.meshCenter);
obj.rotation = transform1 * obj.thisRot * transform;
I then send this off to the shader:
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.translation * obj.rotation * objscale));
I would now like to rotate this object around another axis, say an axis at (5, 0, 0), by 45 degrees.
I now have:
glm::mat4 groupR(1.0f);
groupR = glm::rotate(groupR, glm::radians(45.0f), glm::vec3(5, 0, 0)); // note: glm::rotate expects the angle in radians
obj.groupRotation = groupR;
glUniformMatrix4fv(modelLoc, 1, GL_FALSE,
glm::value_ptr(obj.groupRotation*obj.translation*obj.rotation*objscale)
I've now moved the object from its local space to the group space.
I'm having a bit of difficulty now doing transformations in the object's own space when combined with the group's rotation.
I've had limited success when I set the groupR axis to (0,1,0) like so:
///Translating object in its own space///
glm::mat4 R = obj.groupRotation;
obj.translation = glm::inverse(R) * obj.translate * R;
The problem here is that this will only translate the object correctly in its own space if the axis of rotation of R (the group's rotation) is equal to (0,1,0):
///Rotating object in its own space///
glm::mat4 R = obj.groupRotation;
obj.rotation = glm::inverse(R) * obj.rot * R;
Again, the rotations are incorrect. I'm thinking that maybe I have to undo the group rotation's axis translation and then re-apply it somewhere?
Let's assume we have an object that is moved, rotated and scaled, and we define a transformation matrix as follows:
glm::mat4 objTrans ...; // translation
glm::mat4 objRot ...; // rotation
glm::mat4 objScale ...; // scaling
glm::mat4 objMat = objTrans * objRot * objScale;
And we have a rotation matrix that we want to apply to the object. In this case it is a rotation around the Z-axis:
float angle ...; // rotation angle
glm::mat4 rotMat = glm::rotate( angle, glm::vec3( 0.0, 0.0, 1.0 ) );
We have several rotations we can do with this information.
First we want to rotate the object on its local axis:
glm::mat4 modelMat = objMat * rotMat;
A rotation around the world's origin can be performed like this:
glm::mat4 modelMat = rotMat * objMat;
In order to rotate around the origin of the object in the world coordinate system, we must eliminate the object's own rotation:
glm::mat4 modelMat = objMat * (glm::inverse(objRot) * rotMat * objRot);
For a rotation around the world's origin in relation to the object, you have to do the opposite:
glm::mat4 modelMat = (objRot * rotMat * glm::inverse(objRot)) * objMat;
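To make the four cases concrete, a small usage sketch (the object transform here is hypothetical):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical object: translated to (5, 0, 0) and rotated 90 deg about Y.
glm::mat4 objTrans = glm::translate(glm::mat4(1.0f), glm::vec3(5, 0, 0));
glm::mat4 objRot   = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 1, 0));
glm::mat4 objMat   = objTrans * objRot;

// The same 45 deg Z rotation, applied in each of the four frames above:
glm::mat4 rotMat      = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0, 0, 1));
glm::mat4 localAxis   = objMat * rotMat;                                   // object's local axis
glm::mat4 worldOrigin = rotMat * objMat;                                   // world origin
glm::mat4 objOrigin   = objMat * (glm::inverse(objRot) * rotMat * objRot); // object origin, world-aligned axis
glm::mat4 worldRelObj = (objRot * rotMat * glm::inverse(objRot)) * objMat; // world origin, object-relative axis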
If you have a complete transformation matrix for an object and you do not know the rotation part, then it can be easily determined.
Note that a transformation matrix usually looks like this:
( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )
To generate a rotation-only matrix you have to extract the normalized axis vectors:
glm::mat4 a ...; // any matrix
glm::vec3 x = glm::normalize( glm::vec3( a[0][0], a[0][1], a[0][2] ) );
glm::vec3 y = glm::normalize( glm::vec3( a[1][0], a[1][1], a[1][2] ) );
glm::vec3 z = glm::normalize( glm::vec3( a[2][0], a[2][1], a[2][2] ) );

glm::mat4 r(1.0f);
r[0][0] = x[0]; r[0][1] = x[1]; r[0][2] = x[2]; r[0][3] = 0.0f;
r[1][0] = y[0]; r[1][1] = y[1]; r[1][2] = y[2]; r[1][3] = 0.0f;
r[2][0] = z[0]; r[2][1] = z[1]; r[2][2] = z[2]; r[2][3] = 0.0f;
r[3][0] = 0.0f; r[3][1] = 0.0f; r[3][2] = 0.0f; r[3][3] = 1.0f;
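The same extraction can be written more compactly with GLM's column access; a behavior-equivalent sketch:
#include <glm/glm.hpp>

// Rotation-only matrix from an arbitrary transformation matrix:
// normalize the three axis columns and drop the translation.
glm::mat4 extractRotation(const glm::mat4& a)
{
    glm::mat4 r(1.0f);
    r[0] = glm::vec4(glm::normalize(glm::vec3(a[0])), 0.0f); // X axis
    r[1] = glm::vec4(glm::normalize(glm::vec3(a[1])), 0.0f); // Y axis
    r[2] = glm::vec4(glm::normalize(glm::vec3(a[2])), 0.0f); // Z axis
    return r;
}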
Here is a partial answer describing the behavior I want and the setup I used. This seems to be what I need to do to get the correct transforms in object space while the object is part of a group rotation.
Here I have a model composed of 7 individual meshes that is rotated as a group around the point (0, 5, 0) on the y-axis; this is just an arbitrary rotation I chose for testing.
for (int i = 0; i < models.at(currentSelectedPointer.x)->meshes.size(); i++)
{
    glm::mat4 rotMat(1.0f);
    rotMat = glm::translate(rotMat, glm::vec3(5, 0, 0));
    rotMat = glm::rotate(rotMat, f, glm::vec3(0, 1.0, 0.0));
    rotMat = glm::translate(rotMat, glm::vec3(-5, 0, 0));
    models.at(currentSelectedPointer.x)->meshes.at(i).groupRotation = rotMat;
}
All the meshes now rotate around (0, 5, 0) as a group on the y-axis, rather than each rotating individually at (0, 5, 0).
To do the correct rotation transform on a single object in its own object space, I have to undo the location of the group rotation's origin. (Sorry for the messy code, but I did it in steps like this to keep everything separated and easily dissectable.) Also, the individual object has an identity matrix for both its translation and its scale.
// These will be used to shift the groupRotation origin back to the
// world origin in order to rotate around the object's origin.
glm::mat4 gotoGroupAxis(1.0f);
gotoGroupAxis = glm::translate(gotoGroupAxis, glm::vec3(5, 0, 0));
glm::mat4 goBack(1.0f);
goBack = glm::translate(goBack, glm::vec3(-5, 0, 0));

// Group rotation and its inverse
glm::mat4 tempGroupRot = goBack * obj.groupRotation * gotoGroupAxis;
glm::mat4 tempGroupRotInverse = glm::inverse(tempGroupRot);

// thisRot and lastRot are matrix variables I use to accumulate and
// save rotations
obj.thisRot = tempGroupRotInverse * glm::toMat4(currentRotation) * tempGroupRot * obj.lastRot;

// Now I translate the object's rotation origin to its center.
glm::mat4 transform = glm::translate(glm::mat4(1.0f), -obj.meshCenter);
glm::mat4 transform1 = glm::translate(glm::mat4(1.0f), obj.meshCenter);

// Finally I rotate the object in its own space.
obj.rotation = transform1 * obj.thisRot * transform;
Update:
// Translation works as well with
obj.finalTranslation = tempGroupRotInverse * obj.translation * tempGroupRot;
This is only a partial answer, because I'm going to be doing transforms on both the object level and the group level, and I'm almost certain that something will go wrong down the line that hasn't been taken into account by the answer I've posted.

Rotate geometry to align to a direction vector

I've been trying to get my generated geometry to align with a direction vector. To illustrate what my current problem is:
A = Correctly aligned geometry ( just a triangle for testing )
B = Incorrectly aligned geometry
My current solution in code for this triangle example (this code is run for all the nodes you see on screen, starting at the split; I am using the GLM math library):
glm::vec3 v1, v2, v3;
v1.x = -0.25f; v1.y = 0.0f; v1.z = -0.25f;
v2.x =  0.25f; v2.y = 0.0f; v2.z = -0.25f;
v3.x =  0.0f;  v3.y = 0.0f; v3.z =  0.25f;

glm::mat4x4 translate = glm::translate(glm::mat4x4(1.0f), sp.position);
glm::mat4x4 rotate = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), sp.direction, glm::vec3(0.0f, 1.0f, 0.0f));

v1 = glm::vec3(translate * rotate * glm::vec4(v1, 1.0f));
v2 = glm::vec3(translate * rotate * glm::vec4(v2, 1.0f));
v3 = glm::vec3(translate * rotate * glm::vec4(v3, 1.0f));
The direction vector values for point A: (x, y, z) = (0.000000000, 0.788205445, 0.615412235)
The direction vector values for point B: (x, y, z) = (0.0543831661, 0.788205445, -0.613004684)
Edit 1 (24/11/2013 @ 20:36):
A and B do not have any relation, both are generated separately. When generating A or B only a position and direction is known.
I've been looking at solutions posted here:
Quaternions, rotate a model and align with a direction
Direct3D Rotation Matrix from Vector and vice-versa
Direction Vector To Rotation Matrix
But I haven't been able to successfully rotate my geometry to align with my direction vector. I feel like I'm doing something rather basic wrong.
Any help would be greatly appreciated!
If A and B are unit vectors and you want a rotation matrix R that transforms B so that it aligns with A, then start by computing C = B x A (the cross product of B and A). C is the axis of rotation, and arcsin(|C|) is the necessary rotation angle; for angles that may exceed 90°, use atan2(|C|, dot(B, A)) instead, since arcsin alone cannot distinguish an angle from its supplement.
From these you can build the required rotation matrix. It looks like glm has support for this, so I won't explain further.
NB: if you are doing many, many of these in performance-critical code, you can gain a bit of speed by noting that |C| = sin(theta) and sqrt(1 - |C|^2) = cos(theta), and computing the matrix yourself with these known values of sin(theta) and cos(theta); for this see for example this discussion. The glm routine will take your angle arcsin(|C|) and proceed immediately to compute its sin and cos, a small waste since you already knew these and the operations are relatively expensive.
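A sketch of that optimization, assuming A and B are unit vectors and not parallel or anti-parallel (those cases need separate handling): Rodrigues' formula assembled directly from the known sin and cos, with no trig calls at all:
#include <glm/glm.hpp>

// Rotation matrix taking unit vector b onto unit vector a.
// axis k = (b x a)/|b x a|, sin(theta) = |b x a|, cos(theta) = dot(b, a).
glm::mat3 alignRotation(const glm::vec3& b, const glm::vec3& a)
{
    glm::vec3 axis = glm::cross(b, a);
    float s = glm::length(axis); // sin(theta), no arcsin needed
    float c = glm::dot(b, a);    // cos(theta), no sqrt needed either
    glm::vec3 k = axis / s;      // unit rotation axis
    float t = 1.0f - c;
    // Rodrigues' rotation matrix, written column by column (GLM is column-major).
    return glm::mat3(
        t * k.x * k.x + c,       t * k.x * k.y + s * k.z, t * k.x * k.z - s * k.y,  // column 0
        t * k.x * k.y - s * k.z, t * k.y * k.y + c,       t * k.y * k.z + s * k.x,  // column 1
        t * k.x * k.z + s * k.y, t * k.y * k.z - s * k.x, t * k.z * k.z + c);       // column 2
}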
If the rotation is about some point p other than the origin, then let T be a translation that takes p to the origin, and find X = T^-1 R T. This X will be the transformation you want.

Calculating The Camera View Matrix

I have an issue where my camera seems to orbit the origin. It makes me think that I have my matrix multiplication backwards. However, it seems correct to me, and if I reverse it, it sorts out the orbit issue, but then I get another issue where I think my translation is backwards.
glm::mat4 Camera::updateDelta(const float *positionVec3, const float *rotationVec3)
{
    // Rotation Axis
    const glm::vec3 xAxis(1.0f, 0.0f, 0.0f);
    const glm::vec3 yAxis(0.0f, 1.0f, 0.0f);
    const glm::vec3 zAxis(0.0f, 0.0f, 1.0f); // Should this be -1?

    // Accumulate Rotations
    m_rotation.x += rotationVec3[0]; // pitch
    m_rotation.y += rotationVec3[1]; // yaw
    m_rotation.z += rotationVec3[2]; // roll

    // Calculate Rotation
    glm::mat4 rotViewMat;
    rotViewMat = glm::rotate(rotViewMat, m_rotation.x, xAxis);
    rotViewMat = glm::rotate(rotViewMat, m_rotation.y, yAxis);
    rotViewMat = glm::rotate(rotViewMat, m_rotation.z, zAxis);

    // Updated direction vectors
    m_forward = glm::vec3(rotViewMat[0][2], rotViewMat[1][2], rotViewMat[2][2]);
    m_up = glm::vec3(rotViewMat[0][1], rotViewMat[1][1], rotViewMat[2][1]);
    m_right = glm::vec3(rotViewMat[0][0], rotViewMat[1][0], rotViewMat[2][0]);

    m_forward = glm::normalize(m_forward);
    m_up = glm::normalize(m_up);
    m_right = glm::normalize(m_right);

    // Calculate Position
    m_position += (m_forward * positionVec3[2]);
    m_position += (m_up * positionVec3[1]);
    m_position += (m_right * positionVec3[0]);
    m_position += glm::vec3(positionVec3[0], positionVec3[1], positionVec3[2]);

    glm::mat4 translateViewMat;
    translateViewMat = glm::translate(translateViewMat, m_position);

    // Calculate view matrix.
    //m_viewMat = rotViewMat * translateViewMat;
    m_viewMat = translateViewMat * rotViewMat;

    // Return View Proj
    return m_projMat * m_viewMat;
}
Everywhere else I do the matrix multiplication in reverse, which gives me the correct answer, but this function seems to want the opposite order.
When calculating an object's position in 3D space, I do this:
m_worldMat = transMat * rotMat * scale;
Which works as expected.
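One identity that may be relevant to the ordering question (a hedged note; transMat and rotMat are borrowed from the line above): a view matrix is the inverse of the camera's world transform, and inverting a product reverses the order of its factors, which is why a view matrix naturally comes out as rotation-then-translation while object matrices are translation-then-rotation:
#include <glm/glm.hpp>

// If the camera were an ordinary object, its world transform would be
// transMat * rotMat. The view matrix is that transform's inverse:
//   view = inverse(transMat * rotMat) = inverse(rotMat) * inverse(transMat)
glm::mat4 viewFromCameraWorld(const glm::mat4& transMat, const glm::mat4& rotMat)
{
    return glm::inverse(rotMat) * glm::inverse(transMat);
}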
There seem to be a few things wrong with the code.
One: rotViewMat is used in the first glm::rotate call before it is initialized. What value does it get initially? Is it the identity matrix?
Two: Rotation does not have the mathematical properties you seem to assume. Rotating around x, then around y, then around z (each at a constant velocity) is not the same as rotating around any single axis ("orbiting"); it is a weird wobble. Because of the subsequent rotations, your accumulated x rotation ("pitch"), for example, may actually be causing movement in an entirely different direction (consider what happens to x when the accumulated y rotation is close to 90 degrees). Another way of saying this is that rotation is non-commutative; see:
https://physics.stackexchange.com/questions/48345/non-commutative-property-of-rotation and
https://physics.stackexchange.com/questions/10362/how-does-non-commutativity-lead-to-uncertainty/10368#10368.
See also: http://en.wikipedia.org/wiki/Rotation_matrix#Sequential_angles and http://en.wikipedia.org/wiki/Euler_angles. Since Euler angles (roll-pitch-yaw) are not vectors, it does not make sense to add a velocity vector to them.
What you probably want is this:
glm::mat4 Camera::updateDelta(const float *positionVec3, const glm::vec3 &axisVec3, const float angularVelocity)
{
    ...
    glm::mat4 rotViewMat(1.0f); // initialized to the identity matrix
    rotViewMat = glm::rotate(rotViewMat, angularVelocity, axisVec3);
    ...
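Fleshed out, such an updateDelta might look like the following sketch (the member names mirror the question's code but are assumptions, not a definitive implementation):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp>    // glm::row
#include <glm/gtc/matrix_transform.hpp> // glm::translate
#include <glm/gtc/quaternion.hpp>       // glm::quat, glm::angleAxis, glm::mat4_cast

struct Camera
{
    glm::quat m_orientation{1.0f, 0.0f, 0.0f, 0.0f}; // identity (w first)
    glm::vec3 m_position{0.0f};
    glm::mat4 m_viewMat{1.0f};
    glm::mat4 m_projMat{1.0f};

    glm::mat4 updateDelta(const glm::vec3& moveDelta, const glm::vec3& axis,
                          float angularVelocity, float deltaTime)
    {
        // One incremental axis-angle rotation per frame; the orientation is
        // accumulated as a quaternion, so Euler angles are never summed.
        glm::quat delta = glm::angleAxis(angularVelocity * deltaTime, glm::normalize(axis));
        m_orientation = glm::normalize(delta * m_orientation);

        glm::mat4 rotViewMat = glm::mat4_cast(m_orientation);

        // Move along the camera's local axes (the rows of the view rotation).
        m_position += glm::vec3(glm::row(rotViewMat, 0)) * moveDelta.x;
        m_position += glm::vec3(glm::row(rotViewMat, 1)) * moveDelta.y;
        m_position += glm::vec3(glm::row(rotViewMat, 2)) * moveDelta.z;

        m_viewMat = rotViewMat * glm::translate(glm::mat4(1.0f), -m_position);
        return m_projMat * m_viewMat;
    }
};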

OpenGL - Camera Orbiting a Point with Quaternions

So I currently use quaternions to store and modify the orientation of the objects in my OpenGL scene, as well as the orientation of the camera. When rotating these objects directly (i.e. saying I want to rotate the camera Z amount around the Z-axis, or I want to rotate an object X around the X-axis and then translate it T along its local Z-axis), I have no problems, so I can only assume my fundamental rotation code is correct.
However, I am now trying to implement a function to make my camera orbit an arbitrary point in space, and am having quite a hard time of it. Here is what I have come up with so far, which doesn't work (this takes place within the Camera class).
//Get the inverse of the orientation, which should represent the orientation
//"from" the focal point to the camera
Quaternion InverseOrient = m_Orientation;
InverseOrient.Invert();
///Rotation
//Create change quaternions for each axis
Quaternion xOffset = Quaternion();
xOffset.FromAxisAngle(xChange * m_TurnSpeed, 1.0, 0.0, 0.0);
Quaternion yOffset = Quaternion();
yOffset.FromAxisAngle(yChange * m_TurnSpeed, 0.0, 1.0, 0.0);
Quaternion zOffset = Quaternion();
zOffset.FromAxisAngle(zChange * m_TurnSpeed, 0.0, 0.0, 1.0);
//Multiply the change quats into the inversed orientation quat
InverseOrient = yOffset * zOffset * xOffset * InverseOrient;
//Translate according to the focal distance
//Start with a vector relative to the position being looked at
sf::Vector3<float> RelativePos(0, 0, -m_FocalDistance);
//Rotate according to the quaternion
RelativePos = InverseOrient.MultVect(RelativePos);
//Add that relative position to the focal point
m_Position.x = m_FocalPoint->x + RelativePos.x;
m_Position.y = m_FocalPoint->y + RelativePos.y;
m_Position.z = m_FocalPoint->z + RelativePos.z;
//Now set the orientation to the inverse of the quaternion
//used to position the camera
m_Orientation = InverseOrient;
m_Orientation.Invert();
What ends up happening is that the camera rotates around some other point - certainly not the object, but apparently not itself either, as though it were looping through space in a spiral path.
So this is clearly not the way to go about orbiting a camera around a point, but what is?
I would operate on the camera first in spherical coordinates and convert to quaternions as necessary.
Given the following assumptions:
The camera has no roll
The point you are looking at is [x, y, z]
You have yaw, pitch angles
[0, 1, 0] is "up"
Here is how to calculate some important values:
The view vector: v = [vx, vy, vz] = [cos(yaw)*cos(pitch), sin(pitch), -sin(yaw)*cos(pitch)]
The camera location: p = [x, y, z] - r*v (where r is the orbit radius, i.e. the distance from the camera to the point)
The right vector: cross product of v with [0, 1, 0]
The up vector: cross product of the right vector with v (right x v, so that it points upward)
Your view quaternion is [0, vx, vy, vz] (the view vector as a pure quaternion, with a 0 w-component)
Now in your simulation you can operate on pitch/yaw, which are pretty intuitive. If you want to do interpolation, convert the before and after pitch/yaw pairs into quaternions and do quaternion spherical linear interpolation (slerp).
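Putting those formulas into code, a minimal sketch (names are illustrative; conventions exactly as listed above):
#include <cmath>
#include <glm/glm.hpp>

struct OrbitBasis { glm::vec3 position, view, right, up; };

// Orbit camera from spherical coordinates: given the focal point, the
// orbit radius r, and yaw/pitch, derive the camera basis and position.
OrbitBasis orbit(const glm::vec3& focalPoint, float r, float yaw, float pitch)
{
    OrbitBasis b;
    b.view = glm::vec3(std::cos(yaw) * std::cos(pitch),
                       std::sin(pitch),
                       -std::sin(yaw) * std::cos(pitch));             // unit view vector
    b.position = focalPoint - r * b.view;                             // camera sits behind the focus
    b.right = glm::normalize(glm::cross(b.view, glm::vec3(0, 1, 0))); // v x world-up
    b.up = glm::cross(b.right, b.view);                               // right x v points upward
    return b;
}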