I'm attempting to implement a camera controller for a first-person, mouse-look camera in OpenGL. This is a simple problem when the camera is always oriented normally (camera up vector = world Y axis). However, I'm having real trouble getting everything working properly with a camera that can be used seamlessly for any orientation. The purpose is to allow a player to move around an entire planet. An additional requirement is that the look direction remain the same relative to the ground as the camera's orientation changes. For example, as you walk "down" the side of a planet from a pole, the look direction is automatically rotated with you.
So far, I've attempted a number of different things to get this working, but as I see it, there should be two different ways of doing this. The first is to do regular camera rotation based on yaw and pitch angles from the world axes, and then transform the resulting look direction by the camera orientation to obtain the final look direction. The second approach is to rotate the camera with yaw and pitch angles about calculated up and right vectors. The up vector is easy here; it's just the orientation. I haven't found a right vector that works correctly, though.
OK, here's the code for these two approaches.
Common code
// m_orientation calculated from planet center to current position
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    m_vertical = -MAX_VERTICAL;
}
// code from either implementation
m_view = glm::lookAt(m_position, m_position + m_direction, m_orientation);
First approach with yaw, pitch about world axes and then transform
// check for m_orientation != WORLD_UP...
glm::vec3 axis = glm::normalize(glm::cross(WORLD_UP, m_orientation));
float angle_degrees = acosf(m_orientation.y) * RADS_TO_DEGREES;
glm::mat4 trans = glm::rotate(glm::mat4(), angle_degrees, axis);
// can also be determined with two rotation matrices about world axes, end result is identical
m_direction = glm::vec3(cosf(m_vertical) * sinf(m_horizontal),
                        sinf(m_vertical),
                        cosf(m_vertical) * cosf(m_horizontal));
m_direction = glm::vec3(trans * glm::vec4(m_direction));
Second approach with yaw and pitch about appropriate up and right vectors
m_right = ??? // tried literally everything
glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal, m_orientation);
glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical, m_right);
glm::mat4 trans = yaw * pitch;
m_direction = glm::vec3(trans[2]); // z axis
OK, so here's the problem. The first approach works almost perfectly, but near the south pole of a planet (within ~15 degrees of orientation = (0,-1,0); the effect gets stronger the closer you are), the camera is automatically rotated toward the south pole as the orientation changes. So if the camera orientation does not change, the camera works perfectly even near the south pole. Any change in orientation results in the camera rotating toward the south pole; the more the orientation changes, the more the camera rotates. I have tried removing either the pitch or the yaw from the world-axis camera rotation, and this effect appears only with the pitch calculation included. With only yaw, the camera behaves perfectly (lacking any pitch control, of course). As far as I can tell, my transformation to go from regular up = (0,1,0) to the current orientation is incorrect. Any help on that?
Now the other way of doing things appears to work somewhat correctly, but I simply have not found a good right vector. Everything I've tried results in strange behavior of both horizontal and vertical movements. The most obvious solution, the cross product of the previous frame's direction and the current orientation, doesn't work. Any suggestions for a good right vector?
I'm also happy to see completely different solutions to this problem. I know it's possible, but no amount of searching has given me a good solution. Thanks very much in advance.
Edit 1: Tried a few more things in response to Paweł Stawarz
Results in incorrect orientation of camera and weird mouse movement. I made sure my matrix multiplication was in the correct order. I also tried the transpose.
m_view = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), m_direction, m_up);
m_view = trans * m_view; //trans is rotation from orientation=(0,1,0) to orientation=m_orientation
Results in the same problem as previously, with the camera rotating toward the south pole by itself. Also, the vertical mouse rotation is not correct; it causes the camera to go in circles.
m_view = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), m_direction, m_up);
m_view = trans * m_view;
m_direction = glm::vec3(m_view[2]);
m_view = glm::lookAt(m_position, m_direction + m_position, m_orientation);
Edit 2: Using the RIGHT vector method, with no transformation between orientations, is working a little better. However, it causes the camera yaw to oscillate wildly as the pitch nears vertical (starting at least 5 degrees away from vertical). In addition, the range of motion for the pitch is not adjusted by the orientation, so for example on the side of the planet, vertical motion is restricted to the arc from directly in front of you to directly behind you (~(0,1,0) to ~(0,-1,0)).
glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal * ONEEIGHTY_PI, m_orientation);
glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical * -ONEEIGHTY_PI, m_right);
glm::mat4 cam = pitch * yaw;
m_right = glm::vec3(cam[0]);
m_up = glm::vec3(cam[1]);
m_direction = glm::vec3(cam[2]);
m_view = glm::lookAt(m_position, m_direction + m_position, m_up);
m_vp = m_perspective * m_view;
Solved it. Needed a different transformation. See here for a pretty good explanation.
glm::mat4 trans;
float factor = 1.0f;
float real_vertical = vertical;
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    m_vertical = -MAX_VERTICAL;
}
glm::quat world_axes_rotation = glm::angleAxis(m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
world_axes_rotation = glm::normalize(world_axes_rotation);
world_axes_rotation = glm::rotate(world_axes_rotation, m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));
m_pole = glm::normalize(m_pole - glm::dot(m_orientation, m_pole) * m_orientation);
glm::mat4 local_transform;
local_transform[0] = glm::vec4(m_pole.x, m_pole.y, m_pole.z, 0.0f);
local_transform[1] = glm::vec4(m_orientation.x, m_orientation.y, m_orientation.z, 0.0f);
glm::vec3 tmp = glm::cross(m_pole, m_orientation);
local_transform[2] = glm::vec4(tmp.x, tmp.y, tmp.z, 0.0f);
local_transform[3] = glm::vec4(m_position.x, m_position.y, m_position.z, 1.0f);
world_axes_rotation = glm::normalize(world_axes_rotation);
m_view = local_transform * glm::mat4_cast(world_axes_rotation);
m_direction = -1.0f * glm::vec3(m_view[2]);
m_up = glm::vec3(m_view[1]);
m_right = glm::vec3(m_view[0]);
m_view = glm::inverse(m_view);
If we keep things simple, by using the standard approach:
Move the camera to the current player position
Rotate it towards where the player is looking at
The camera rotation is described by:
The UP vector, which is the normalized vector that starts at (p0x,p0y,p0z) (where p0 is the position of the center of the planet) and goes through (p1x,p1y,p1z) (where p1 is the place the player is currently at),
the RIGHT vector, which is perpendicular to the UP vector and perpendicular to the direction the player is looking - the LOOK vector (in the case where he's looking straight ahead - perpendicular to his direction).
Since the UP vector can be calculated straight from the player's current position, you have to get the LOOK and RIGHT vectors. Both are cross products of the corresponding other vectors.
Note also, that allowing the player to look up/down and pan his head, can (and probably will) in fact change the UP vector.
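A minimal GLM sketch of those vectors; planetCenter, playerPos, and lookDir are illustrative names for the quantities described above, not the original code:
glm::vec3 up = glm::normalize(playerPos - planetCenter); // p0 -> p1
glm::vec3 right = glm::normalize(glm::cross(lookDir, up)); // perpendicular to UP and LOOK
glm::vec3 look = glm::cross(up, right); // re-orthogonalized forward
glm::mat4 view = glm::lookAt(playerPos, playerPos + look, up);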
I am trying to translate my camera's position along an orientation defined in a glm::quat.
void Camera::TranslateCameraAlongZ(float distance) {
    glm::vec3 direction = glm::normalize(rotation * glm::vec3(0.0f, 0.0f, 1.0f));
    position += direction * distance;
}
This works fine when rotation is the identity quaternion. The X and Z translations do not work when the rotation is anything else. When the camera is rotated 45 degrees to the left and I call TranslateCameraAlongZ(-0.1f), I am translated backwards and to the left when I should be going backwards and to the right. All translations that are not at perfect 90-degree increments are similarly messed up. What am I doing wrong here, and what is the simplest way to fix it? In case they might be relevant, here are the function which generates my view matrix and the function that rotates my camera:
glm::mat4 Camera::GetView() {
    view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
    return view;
}
void Camera::RotateCameraDeg(float x, float y, float z) {
    rotation = glm::normalize(glm::angleAxis(x, glm::vec3(1.0f, 0.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(y, glm::vec3(0.0f, 1.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(z, glm::vec3(0.0f, 0.0f, 1.0f)) * rotation);
    std::cout << glm::eulerAngles(rotation).x << " " << glm::eulerAngles(rotation).y << " " << glm::eulerAngles(rotation).z << "\n";
}
I'm guessing that "rotation" is the inverse of the camera's rotation (thinking of the camera as an object in the world). That is, "rotation" takes an object from world space to camera space. If you want to move the camera's world space position forwards, you need to take the local forward vector (0,0,1) or (0,0,-1)?, multiply it by the inverse of "rotation" (to move the vector from camera space to world space), then scale and add to the position.
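A minimal sketch of that suggestion, assuming "rotation" really is the world-to-camera quaternion (GLM can rotate a vec3 by a quaternion directly):
void Camera::TranslateCameraAlongZ(float distance) {
    // Take the camera-space forward axis into world space with the inverse rotation
    glm::vec3 worldDir = glm::normalize(glm::inverse(rotation) * glm::vec3(0.0f, 0.0f, 1.0f));
    position += worldDir * distance;
}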
I'm creating the view matrix for my camera using its current orientation (quaternion) and its current position.
void Camera::updateViewMatrix()
{
    view = glm::gtx::quaternion::toMat4(orientation);
    // Include rotation (Free Look Camera)
    view[3][0] = -glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position);
    view[3][1] = -glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position);
    view[3][2] = -glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position);
    // Ignore rotation (FPS Camera)
    //view[3][0] = -position.x;
    //view[3][1] = -position.y;
    //view[3][2] = -position.z;
    view[3][3] = 1.0f;
}
There is a problem with this in that I do not believe the quaternion-to-matrix calculation is giving the correct answer. Translating the camera works as expected, but rotating it causes incorrect behavior.
I am rotating the camera using the difference between the current mouse position and the centre of the screen (resetting the mouse position each frame):
int xPos;
int yPos;
glfwGetMousePos(&xPos, &yPos);
int centreX = 800 / 2;
int centreY = 600 / 2;
rotate(xPos - centreX, yPos - centreY);
// Reset mouse position for next frame
glfwSetMousePos(800 / 2, 600 / 2);
The rotation takes place in this method:
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    // Apply rotation speed to the rotation
    yawDegrees *= lookSensitivity;
    pitchDegrees *= lookSensitivity;
    if (isLookInverted)
    {
        pitchDegrees = -pitchDegrees;
    }
    pitchAccum += pitchDegrees;
    // Stop the camera from looking any higher than 90 degrees
    if (pitchAccum > 90.0f)
    {
        //pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = 90.0f;
    }
    // Stop the camera from looking any lower than 90 degrees
    if (pitchAccum < -90.0f)
    {
        //pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = -90.0f;
    }
    yawAccum += yawDegrees;
    if (yawAccum > 360.0f)
    {
        yawAccum -= 360.0f;
    }
    if (yawAccum < -360.0f)
    {
        yawAccum += 360.0f;
    }
    float yaw = yawDegrees * DEG2RAD;
    float pitch = pitchDegrees * DEG2RAD;
    glm::quat rotation;
    // Rotate the camera about the world Y axis (if the mouse has moved in any x direction)
    rotation = glm::gtx::quaternion::angleAxis(yaw, 0.0f, 1.0f, 0.0f);
    // Concatenate quaternions
    orientation = orientation * rotation;
    // Rotate the camera about the world X axis (if the mouse has moved in any y direction)
    rotation = glm::gtx::quaternion::angleAxis(pitch, 1.0f, 0.0f, 0.0f);
    // Concatenate quaternions
    orientation = orientation * rotation;
}
Am I concatenating the quaternions correctly for the correct orientation?
There is also a problem with the pitch accumulation in that it restricts my view to ~±5 degrees rather than ±90. What could be the cause of that?
EDIT:
I have solved the problem for the pitch accumulation so that its range is [-90, 90]. It turns out that glm's angleAxis takes degrees and not radians, and the order of multiplication for the quaternion concatenation was incorrect.
// Rotate the camera about the world Y axis
// N.B. 'angleAxis' method takes angle in degrees (not in radians)
rotation = glm::gtx::quaternion::angleAxis(yawDegrees, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref rotation, ref orientation)
orientation = orientation * rotation;
// Rotate the camera about the world X axis
rotation = glm::gtx::quaternion::angleAxis(pitchDegrees, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref orientation, ref rotation)
orientation = rotation * orientation;
The problem that remains is that the view matrix rotation appears to rotate the drawn object and not look around like a normal FPS camera.
I have uploaded a video to YouTube to demonstrate the problem. I move the mouse around to change the camera's orientation but the triangle appears to rotate instead.
YouTube video demonstrating camera orientation problem
EDIT 2:
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    // Apply rotation speed to the rotation
    yawDegrees *= lookSensitivity;
    pitchDegrees *= lookSensitivity;
    if (isLookInverted)
    {
        pitchDegrees = -pitchDegrees;
    }
    pitchAccum += pitchDegrees;
    // Stop the camera from looking any higher than 90 degrees
    if (pitchAccum > 90.0f)
    {
        pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = 90.0f;
    }
    // Stop the camera from looking any lower than 90 degrees
    else if (pitchAccum < -90.0f)
    {
        pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = -90.0f;
    }
    // 'pitchAccum' range is [-90, 90]
    //printf("pitchAccum %f \n", pitchAccum);
    yawAccum += yawDegrees;
    if (yawAccum > 360.0f)
    {
        yawAccum -= 360.0f;
    }
    else if (yawAccum < -360.0f)
    {
        yawAccum += 360.0f;
    }
    orientation =
        glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
        glm::gtx::quaternion::angleAxis(yawAccum, 0.0f, 1.0f, 0.0f);
}
EDIT 3:
The following multiplication order allows the camera to rotate around its own axis but face the wrong direction:
glm::mat4 translation;
translation = glm::translate(translation, position);
view = glm::gtx::quaternion::toMat4(orientation) * translation;
EDIT 4:
The following will work (applying the translation matrix, based on the position, after the rotation):
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -position);
view *= translation;
I can't get the dot product with each orientation axis to work, though:
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
glm::vec3 p(
    glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position),
    glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position),
    glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position)
);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -p);
view *= translation;
In order to give you a definite answer, I think that we would need the code that shows how you're actually supplying the view matrix and vertices to OpenGL. However, the symptom sounds pretty typical of incorrect matrix order.
Consider some variables:
V represents the inverse of the current orientation of the camera (the quaternion).
T represents the translation matrix holding the position of the camera. This should be an identity matrix with negation of the camera's position going down the fourth column (assuming that we're right-multiplying column vectors).
U represents the inverse of the change in orientation.
p represents a vertex in world space.
Note: all of the matrices are inverse matrices because the transformations will be applied to the vertex, not the camera, but the end result is the same.
By default the OpenGL camera is at the origin looking down the negative-z axis. When the view isn't changing (U==I), then the vertex's transformation from world coordinates to camera coordinates should be: p'=TVp. You first orient the camera (by rotating the world in the opposite direction) and then translate the camera into position (by shifting the world in the opposite direction).
Now there are a few places to put U. If we put U to the right of V, then we get the behavior of a first-person view. When you move the mouse up, whatever is currently in view rotates downward around the camera. When you move the mouse right, whatever is in view rotates to the left around the camera.
If we put U between T and V, then the camera turns relative to the world's axes instead of the camera's. This is strange behavior. If V happens to turn the camera off to the side, then moving the mouse up and down will make the world seem to 'roll' instead of 'pitch' or 'yaw'.
If we put U left of T, then the camera rotates around the world's axes around the world's origin. This can be even stranger because it makes the camera fly through the world faster the farther the camera is from the origin. However, because the rotation is around the origin, if the camera happens to be looking at the origin, objects there will just appear to be turning around. This is sort of what you're seeing because of the dot products that you're taking to rotate the camera's position.
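At the quaternion level, the first two placements look roughly like this; 'delta' stands in for U, the inverse of this frame's orientation change (a sketch, not the poster's code):
orientation = orientation * delta; // U right of V: first-person turning about the camera's own axes
orientation = delta * orientation; // U left of V (between T and V): turning about the world's axes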
You check to make sure that pitchAccum stays within [-90,90], but you've commented out the portion that would make use of that fact. This seems odd to me.
The way that you left-multiply pitch but right-multiply yaw makes it so that your quaternions aren't doing much for you. They're just holding your Euler angles. Unless orientation changes are coming in from other places, you could simply say that orientation = glm::gtx::quaternion::angleAxis(pitchAccum*DEG2RAD, 1.0f, 0.0f, 0.0f) * glm::gtx::quaternion::angleAxis(yawAccum*DEG2RAD, 0.0f, 1.0f, 0.0f); and overwrite the old orientation completely.
From what I understand in this tutorial, there might be a reason why the pitch angle is restricted to 90 degrees.
Whether we use quaternions or a lookAt matrix, in the end we give the camera an initial orientation. With quaternions, this is the initial value of the orientation; with lookAt, it is the initial value of the up vector.
If the direction facing towards the camera is parallel to this initial vector, then the cross product of the two will be zero, which means the camera can have any orientation if the pitch is 90 or -90 degrees.
In the internal implementation of toMat4(orientation), this would result in one of your x_dir/y_dir/z_dir vectors being a zero vector, which would mean that you can have any orientation. This is also discussed in this book, which says that if the Y angle is 90 degrees, a degree of freedom is lost (Edward Angel and Dave Shreiner, Interactive Computer Graphics: A Top-Down Approach with WebGL, Seventh Edition, Addison-Wesley 2015), which is discussed as gimbal lock.
I can see that you are aware of this problem, but in your code the pitch angle is still set to 90 degrees if it overflows 90, leaving your camera in an invalid state. You should consider something like this instead:
if (pitchAccum > 89.999f && pitchAccum <= 90.0f)
{
    pitchAccum = 90.001f;
}
else if (pitchAccum < -89.999f && pitchAccum >= -90.0f)
{
    pitchAccum = -90.001f;
}
if (pitchAccum >= 360.0f)
{
    pitchAccum = 0.0f;
}
else if (pitchAccum <= -360.0f)
{
    pitchAccum = 0.0f;
}
Or you can define another custom action of your choice when pitchAccum is 90 degrees.
The camera does look at the target when I move the target to the right, and keeps looking at it until the target passes 180 degrees around from the -z axis, at which point the camera decides to go the other way.
Matrix4x4 camera::GetViewMat()
{
    Matrix4x4 oRotate, oView;
    oView.SetIdentity();
    Vector3 lookAtDir = m_targetPosition - m_camPosition;
    Vector3 lookAtHorizontal = Vector3(lookAtDir.GetX(), 0.0f, lookAtDir.GetZ());
    lookAtHorizontal.Normalize();
    float angle = acosf(Vector3(0.0f, 0.0f, -1.0f).Dot(lookAtHorizontal));
    Quaternions horizontalOrient(angle, Vector3(0.0f, 1.0f, 0.0f));
    ori = horizontalOrient;
    ori.Conjugate();
    oRotate = ori.ToMatrix();
    Vector3 inverseTranslate = Vector3(-m_camPosition.GetX(), -m_camPosition.GetY(), -m_camPosition.GetZ());
    oRotate.Transform(inverseTranslate);
    oRotate.Set(0, 3, inverseTranslate.GetX());
    oRotate.Set(1, 3, inverseTranslate.GetY());
    oRotate.Set(2, 3, inverseTranslate.GetZ());
    oView = oRotate;
    return oView;
}
As promised, a bit of code showing the way I'd make a camera look at a specific point in space at all times.
First of all, we need a method to construct a quaternion from an angle and an axis; I happen to have that on pastebin (the angle input is in radians):
http://pastebin.com/vLcx4Qqh
Make sure you don't input the axis (0,0,0), which wouldn't make any sense whatsoever.
Now the actual update method: we need to get the quaternion rotating the camera from the default orientation to pointing towards the target point. PLEASE note I just wrote this off the top of my head; it probably needs a little debugging and may need a little optimization, but this should at least give you a push in the right direction.
void camera::update()
{
    // First get the direction from the camera's position to the target point
    vec3 lookAtDir = m_targetPoint - m_position;
    // I'm going to divide the vector into two 'components', the Y axis rotation
    // and the Up/Down rotation, like a regular camera would work.
    // First to calculate the rotation around the Y axis, so we zero out the y
    // component:
    vec3 lookAtHorizontal = vec3(lookAtDir.x, 0.0f, lookAtDir.z).normalize();
    // Get the quaternion from 'default' direction to the horizontal direction
    // In this case, 'default' direction is along the -z axis, like most OpenGL
    // programs. Make sure the projection matrix works according to this.
    float angle = acos(vec3(0.0f, 0.0f, -1.0f).dot(lookAtHorizontal));
    quaternion horizontalOrient(angle, vec3(0.0f, 1.0f, 0.0f));
    // Since we already stripped the Y component, we can simply get the up/down
    // rotation from it as well.
    angle = acos(lookAtDir.normalize().dot(lookAtHorizontal));
    if(angle) horizontalOrient *= quaternion(angle, lookAtDir.cross(lookAtHorizontal));
    // ...
    m_orientation = horizontalOrient;
}
Now to actually take m_orientation and m_position and get the world -> camera matrix:
// First inverse each element (-position and inverse the quaternion),
// the position is rotated since the position within a matrix is 'added' last
// to the output vector, so it needs to account for rotation of the space.
mat3 rotationMatrix = m_orientation.inverse().toMatrix();
vec3 inverseTranslate = rotationMatrix * -m_position; // Note the minus
mat4 matrix = mat4(rotationMatrix); // the 3x3 matrix is expanded; the last entry (bottom right of the matrix) is a 1.0f like an identity matrix would be.
// This bit is row-major in my case, you just need to set the translation of the matrix.
matrix[3] = inverseTranslate.x;
matrix[7] = inverseTranslate.y;
matrix[11] = inverseTranslate.z;
EDIT: I think it should be obvious, but just for completeness: .dot() takes the dot product of the vectors, .cross() takes the cross product; the object executing the method is vector A, and the parameter of the method is vector B.
I have looked at a ton of quaternion examples on several sites including this one, but found none that answer this, so here goes...
I want to orbit my camera around a large sphere, and have the camera always point at the center of the sphere. To make things easy, the sphere is located at {0,0,0} in the world. I am using a quaternion for camera orientation, and a vector for camera position. The problem is that the camera position orbits the sphere perfectly as it should, but always looks one constant direction straight forward instead of adjusting to point to the center as it orbits.
This must be something simple, but I am new to quaternions... what am I missing?
I'm using C++, DirectX9. Here is my code:
// Note: g_camRotAngleYawDir and g_camRotAnglePitchDir are set to either 1.0f, -1.0f according to keypresses, otherwise equal 0.0f
// Also, g_camOrientationQuat is just an identity quat to begin with.
float theta = 0.05f;
D3DXQUATERNION g_deltaQuat( 0.0f, 0.0f, 0.0f, 1.0f );
D3DXQuaternionRotationYawPitchRoll(&g_deltaQuat, g_camRotAngleYawDir * theta, g_camRotAnglePitchDir * theta, g_camRotAngleRollDir * theta);
D3DXQUATERNION tempQuat = g_camOrientationQuat;
D3DXQuaternionMultiply(&g_camOrientationQuat, &tempQuat, &g_deltaQuat);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
[EDIT - Feb 13, 2012]
Ok, here's my understanding so far:
Move the camera using an angular delta each frame.
Get a vector from center to camera-pos.
Call quaternionRotationBetweenVectors with a z-facing unit vector and the target vector.
Then use the result of that function for the orientation of the view matrix, and the camera-position goes in the translation portion of the view matrix.
Here's the new code (called every frame)...
// This part orbits the position around the sphere according to deltas for yaw, pitch, roll
D3DXQuaternionRotationYawPitchRoll(&deltaQuat, yawDelta, pitchDelta, rollDelta);
D3DXMatrixRotationQuaternion(&mat1, &deltaQuat);
D3DXVec3Transform(&g_camPosition, &g_camPosition, &mat1);
// This part adjusts the orientation of the camera to point at the center of the sphere
dir1 = normalize(vec3(0.0f, 0.0f, 0.0f) - g_camPosition);
QuaternionRotationBetweenVectors(&g_camOrientationQuat, vec3(0.0f, 0.0f, 1.0f), &dir1);
D3DXMatrixRotationQuaternion(&g_viewMatrix, &g_camOrientationQuat);
g_viewMatrix._41 = g_camPosition.x;
g_viewMatrix._42 = g_camPosition.y;
g_viewMatrix._43 = g_camPosition.z;
g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );
...I tried that solution out, without success. What am I doing wrong?
It should be as easy as doing the following whenever you update the position (assuming the camera is pointing along the z-axis):
direction = normalize(center - position)
orientation = quaternionRotationBetweenVectors(vec3(0,0,1), direction)
It is fairly easy to find examples of how to implement quaternionRotationBetweenVectors. You could start with the question "Finding quaternion representing the rotation from one vector to another".
Here's an untested sketch of an implementation using the DirectX9 API:
D3DXQUATERNION* quaternionRotationBetweenVectors(__inout D3DXQUATERNION* result, __in const D3DXVECTOR3* v1, __in const D3DXVECTOR3* v2)
{
    D3DXVECTOR3 axis;
    D3DXVec3Cross(&axis, v1, v2);
    D3DXVec3Normalize(&axis, &axis);
    float angle = acos(D3DXVec3Dot(v1, v2));
    D3DXQuaternionRotationAxis(result, &axis, angle);
    return result;
}
This implementation expects v1, v2 to be normalized.
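A hypothetical per-frame use of this sketch for the orbiting camera above, pointing at the sphere's centre at the origin (both inputs normalized, as required):
D3DXVECTOR3 dir = -g_camPosition; // centre (0,0,0) minus camera position
D3DXVec3Normalize(&dir, &dir);
D3DXVECTOR3 forward(0.0f, 0.0f, 1.0f); // the assumed default look direction
quaternionRotationBetweenVectors(&g_camOrientationQuat, &forward, &dir);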
You need to show how you calculate the angles. Is there a reason why you aren't using D3DXMatrixLookAtLH?
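For reference, a minimal D3DXMatrixLookAtLH version of the same camera would look something like this (a sketch that bypasses the quaternion entirely; variable names are taken from the question):
D3DXVECTOR3 eye = g_camPosition;
D3DXVECTOR3 at(0.0f, 0.0f, 0.0f); // the sphere's centre
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&g_viewMatrix, &eye, &at, &up);
g_direct3DDevice9->SetTransform(D3DTS_VIEW, &g_viewMatrix);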