OpenGL ArcBall for rotating mesh - C++

I am using legacy OpenGL to draw a mesh. I am now trying to implement an arcball class to rotate the object with the mouse. However, when I move the mouse, the object either doesn't rotate or rotates by far too large an angle.
This is the method that is called when the mouse is clicked:
void ArcBall::startRotation(int xPos, int yPos) {
int x = xPos - context->getWidth() / 2;
int y = context->getHeight() / 2 - yPos;
startVector = ArcBall::mapCoordinates(x, y).normalized();
endVector = startVector;
rotating = true;
}
This method is meant simply to map the mouse coordinates so they are centered at the center of the screen, then map them onto the bounding sphere, resulting in a starting vector.
This is the method that is called when the mouse moves:
void ArcBall::updateRotation(int xPos, int yPos) {
int x = xPos - context->getWidth() / 2;
int y = context->getHeight() / 2 - yPos;
endVector = mapCoordinates(x, y).normalized();
rotationAxis = QVector3D::crossProduct(endVector, startVector).normalized();
angle = (float)qRadiansToDegrees(acos(QVector3D::dotProduct(startVector, endVector)));
rotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
startVector = endVector;
}
This method is again meant to map the mouse coordinates so they are centered at the middle of the screen, then compute the new vector, and derive a rotation axis and angle from these two vectors.
I then use
glMultMatrixf(ArcBall::rotation.data());
to apply the rotation

I recommend storing the mouse position at the point where you initially click in the view, then calculating the amount of mouse movement in window coordinates. The distance of the movement has to be mapped to an angle, and the rotation axis is perpendicular (normal) to the direction of the mouse movement. The result is a rotation of the object similar to this WebGL demo.
Store the current mouse position in startRotation. Note that this stores the coordinates of the mouse position, not a normalized vector:
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
startVector = QVector3D(ndcX, ndcY, 0.0);
Get the current position in updateRotation:
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
endVector = QVector3D(ndcX, ndcY, 0.0);
Calculate the vector from the start position to the end position:
QVector3D direction = endVector - startVector;
The rotation axis is normal to the direction of movement:
rotationAxis = QVector3D(-direction.y(), direction.x(), 0.0).normalized();
Note that even though direction has type QVector3D, it is still a two-dimensional vector: it lies in the XY plane of the viewport, represents the mouse movement on the viewport, and has a z coordinate of 0. A two-dimensional vector (x, y) is rotated 90 degrees counterclockwise by mapping it to (-y, x).
The length of the direction vector represents the angle of rotation. A mouse motion across the entire screen results in a vector of length 2.0. So if dragging across the full screen should result in a full rotation, the length of the vector has to be multiplied by PI; if it should result in a half rotation, by PI/2:
angle = (float)qRadiansToDegrees(direction.length() * 3.141593);
Finally the new rotation has to be applied to the existing rotation and not to the model:
QMatrix4x4 addRotation;
addRotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
rotation = addRotation * rotation;
Final code listing of the methods startRotation and updateRotation:
void ArcBall::startRotation(int xPos, int yPos) {
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
startVector = QVector3D(ndcX, ndcY, 0.0);
endVector = startVector;
rotating = true;
}
void ArcBall::updateRotation(int xPos, int yPos) {
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
endVector = QVector3D(ndcX, ndcY, 0.0);
QVector3D direction = endVector - startVector;
rotationAxis = QVector3D(-direction.y(), direction.x(), 0.0).normalized();
angle = (float)qRadiansToDegrees(direction.length() * 3.141593);
QMatrix4x4 addRotation;
addRotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
rotation = addRotation * rotation;
startVector = endVector;
}
If you instead want a rotation around the object's up axis plus a tilt along the view-space x axis, the calculation is different. First apply the rotation matrix around the y axis (up vector), then the current view matrix, and finally the rotation about the x axis:
view-matrix = rotate-X * view-matrix * rotate-Y
The updateRotation function then has to look like this:
void ArcBall::updateRotation(int xPos, int yPos) {
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
endVector = QVector3D(ndcX, ndcY, 0.0);
QVector3D direction = endVector - startVector;
float angleY = (float)qRadiansToDegrees(-direction.x() * 3.141593);
float angleX = (float)qRadiansToDegrees(-direction.y() * 3.141593);
QMatrix4x4 rotationX;
rotationX.rotate(angleX, 1.0f, 0.0f, 0.0f);
QMatrix4x4 rotationUp;
rotationUp.rotate(angleY, 0.0f, 1.0f, 0.0f);
rotation = rotationX * rotation * rotationUp;
startVector = endVector;
}

Related

OpenGL 2D Circle - Rotated AABB Collision

I have trouble figuring out a way to detect collision between a circle and a rotated rectangle. My approach was to first rotate the circle and the rectangle by -angle, where angle is the amount of radians the rectangle is rotated. Therefore, the rectangle and the circle are aligned with the axes, so I can perform the basic circle - AABB collision detection.
bool CheckCollision(float circleX, float circleY, float radius, float left, float bottom, float width, float height, float angle){
// Rotating the circle and the rectangle with -angle
circleX = circleX * cos(-angle) - circleY * sin(-angle);
circleY = circleX * sin(-angle) + circleY * cos(-angle);
left = left * cos(-angle) - bottom * sin(-angle);
bottom = left * sin(-angle) + bottom * cos(-angle);
glm::vec2 center(circleX, circleY);
// calculate AABB info (center, half-extents)
glm::vec2 aabb_half_extents(width / 2.0f, height / 2.0f);
glm::vec2 aabb_center(
left + aabb_half_extents.x,
bottom + aabb_half_extents.y
);
// get difference vector between both centers
glm::vec2 difference = center - aabb_center;
glm::vec2 clamped = glm::clamp(difference, -aabb_half_extents, aabb_half_extents);
// add clamped value to AABB_center and we get the value of box closest to circle
glm::vec2 closest = aabb_center + clamped;
// retrieve vector between center circle and closest point AABB and check if length <= radius
difference = closest - center;
return glm::length(difference) < radius;
}
Let the rectangle center be (rcx, rcy). Set the coordinate origin at this point and rotate the circle center about it (cx, cy are coordinates relative to the rectangle center):
cx = (circleX - rcx) * cos(-angle) - (circleY - rcy) * sin(-angle);
cy = (circleX - rcx) * sin(-angle) + (circleY - rcy) * cos(-angle);
Now get the squared distance from the circle center to the rectangle's closest point (zero means the circle center is inside the rectangle):
dx = max(Abs(cx) - rect_width / 2, 0)
dy = max(Abs(cy) - rect_height / 2, 0)
SquaredDistance = dx * dx + dy * dy
Then compare it with the squared radius.

opengl camera zoom to cursor, avoiding > 90deg fov

I'm trying to set up a Google Maps-style zoom-to-cursor control for my OpenGL camera. I'm using a similar method to the one suggested here. Basically, I get the position of the cursor and calculate the width/height of my perspective view at that depth using some trigonometry. I then change the field of view and calculate how much I need to translate in order to keep the point under the cursor in the same apparent position on the screen. That part works pretty well.
The issue is that I want to limit the fov to be less than 90 degrees. When it ends up >90, I cut it in half and then translate everything away from the camera so that the resulting scene looks the same as with the larger fov. The equation to find that necessary translation isn't working, which is strange because it comes from pretty simple algebra. I can't find my mistake. Here's the relevant code.
void Visual::scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
glm::mat4 modelview = view*model;
glm::vec4 viewport = { 0.0, 0.0, width, height };
float winX = cursorPrevX;
float winY = viewport[3] - cursorPrevY;
float winZ;
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
glm::vec3 screenCoords = { winX, winY, winZ };
glm::vec3 cursorPosition = glm::unProject(screenCoords, modelview, projection, viewport);
if (isinf(cursorPosition[2]) || isnan(cursorPosition[2])) {
cursorPosition[2] = 0.0;
}
float zoomFactor = 1.1;
// = zooming in
if (yoffset > 0.0)
zoomFactor = 1/1.1;
//the width and height of the perspective view, at the depth of the cursor position
glm::vec2 fovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
camera.setZoomFromFov(fovXY.y * zoomFactor, cursorPosition[2] - zTranslate);
//don't want fov to be greater than 90, so cut it in half and move the world farther away from the camera to compensate
//not working...
if (camera.Zoom > 90.0 && zTranslate*2 > MAX_DEPTH) {
float prevZoom = camera.Zoom;
camera.Zoom *= .5;
//need increased distance between camera and world origin, so that view does not appear to change when fov is reduced
zTranslate = cursorPosition[2] - tan(glm::radians(prevZoom)) / tan(glm::radians(camera.Zoom) * (cursorPosition[2] - zTranslate));
}
else if (camera.Zoom > 90.0) {
camera.Zoom = 90.0;
}
glm::vec2 newFovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
//translate so that position under the cursor does not appear to move.
xTranslate += (newFovXY.x - fovXY.x) * (winX / width - .5);
yTranslate += (newFovXY.y - fovXY.y) * (winY / height - .5);
updateView = true;
}
The definition of my view matrix, called every iteration of the main loop:
void Visual::setView() {
view = glm::mat4();
view = glm::translate(view, { xTranslate,yTranslate,zTranslate });
view = glm::rotate(view, glm::radians(camera.inclination), glm::vec3(1.f, 0.f, 0.f));
view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f, 0.f, 1.f));
camera.Right = glm::column(view, 0).xyz();
camera.Up = glm::column(view, 1).xyz();
camera.Front = -glm::column(view, 2).xyz(); // minus because OpenGL camera looks towards negative Z.
camera.Position = glm::column(view, 3).xyz();
updateView = false;
}
Field of view helper functions.
glm::vec2 getFovXY(float depth, float aspectRatio) {
float fovY = tan(glm::radians(Zoom / 2)) * depth;
float fovX = fovY * aspectRatio;
return glm::vec2{ 2*fovX , 2*fovY };
}
//you have a desired fov, and you want to set the zoom to achieve that.
//factor of 1/2 inside the atan because we actually need the half-fov. Keep full-fov as input for consistency
void setZoomFromFov(float fovY, float depth) {
Zoom = glm::degrees(2 * atan(fovY / (2 * depth)));
}
The equations I'm using can be found from the diagram here. Since I want to have the same field of view dimensions before and after the angle is changed, I start with
fovY = tan(theta1) * d1 = tan(theta2) * d2
d2 = (tan(theta1) / tan(theta2)) * d1
d1 = distance between camera and cursor position, before fov change = cursorPosition[2] - zTranslate
d2 = distance after
theta1 = fov angle before
theta2 = fov angle after = theta1 * .5
Appreciate the help.

Arcball camera inverting at 90 deg azimuth

I'm attempting to implement an arcball style camera. I use glm::lookAt to keep the camera pointed at a target, and then move it around the surface of a sphere using azimuth/inclination angles to rotate the view.
I'm running into an issue where the view gets flipped upside down when the azimuth approaches 90 degrees.
Here's the relevant code:
Get the projection and view matrices. Runs in the main loop:
void Visual::updateModelViewProjection()
{
model = glm::mat4();
projection = glm::mat4();
view = glm::mat4();
projection = glm::perspective
(
(float)glm::radians(camera.Zoom),
(float)width / height, // aspect ratio
0.1f, // near clipping plane
10000.0f // far clipping plane
);
view = glm::lookAt(camera.Position, camera.Target, camera.Up);
}
Mouse move event, for camera rotation
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
if (leftMousePressed)
{
...
}
if (rightMousePressed)
{
GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
GLfloat yoffset = (cursorPrevY - ypos) / 4.0;
camera.inclination += yoffset;
camera.azimuth -= xoffset;
if (camera.inclination > 89.0f)
camera.inclination = 89.0f;
if (camera.inclination < 1.0f)
camera.inclination = 1.0f;
if (camera.azimuth > 359.0f)
camera.azimuth = 359.0f;
if (camera.azimuth < 1.0f)
camera.azimuth = 1.0f;
float radius = glm::distance(camera.Position, camera.Target);
camera.Position[0] = camera.Target[0] + radius * cos(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
camera.Position[1] = camera.Target[1] + radius * sin(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
camera.Position[2] = camera.Target[2] + radius * cos(glm::radians(camera.inclination));
camera.updateCameraVectors();
}
cursorPrevX = xpos;
cursorPrevY = ypos;
}
Calculate camera orientation vectors
void updateCameraVectors()
{
Front = glm::normalize(Target-Position);
Right = glm::rotate(glm::normalize(glm::cross(Front, {0.0, 1.0, 0.0})), glm::radians(90.0f), Front);
Up = glm::normalize(glm::cross(Front, Right));
}
I'm pretty sure it's related to the way I calculate my camera's right vector, but I cannot figure out how to compensate.
Has anyone run into this before? Any suggestions?
It's a common mistake to use lookAt for rotating the camera. You should not: the backward/right/up directions are the columns of your view matrix. If you already have them, then you don't even need lookAt, which merely tries to redo some of your calculations. On the other hand, lookAt doesn't help you find those vectors in the first place.
Instead build the view matrix first as a composition of translations and rotations, and then extract those vectors from its columns:
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
...
if (rightMousePressed)
{
GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
GLfloat yoffset = (cursorPrevY - ypos) / 4.0;
camera.inclination = std::clamp(camera.inclination + yoffset, -90.f, 90.f);
camera.azimuth = fmodf(camera.azimuth + xoffset, 360.f);
view = glm::mat4();
view = glm::translate(view, glm::vec3(0.f, 0.f, camera.radius)); // add camera.radius to control the distance-from-target
view = glm::rotate(view, glm::radians(camera.inclination + 90.f), glm::vec3(1.f,0.f,0.f));
view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f,0.f,1.f));
view = glm::translate(view, camera.Target);
camera.Right = glm::column(view, 0);
camera.Up = glm::column(view, 1);
camera.Front = -glm::column(view, 2); // minus because OpenGL camera looks towards negative Z.
camera.Position = glm::column(view, 3);
view = glm::inverse(view);
}
...
}
Then remove the code that calculates view and the direction vectors from updateModelViewProjection and updateCameraVectors.
Disclaimer: this code is untested. You might need to fix a minus sign somewhere, order of operations, or the conventions might mismatch (Z is up or Y is up, etc...).

FPS Camera using JOGL and gluLookAt

In display I have:
Camera c = new Camera(canvas);
glu.gluLookAt(c.eyeX, c.eyeY, c.eyeZ, c.point_X, c.point_Y, c.point_Z, 0, 1, 0);
those variables are from an object from my Camera class which has:
float eyeX = 5.0f, eyeY = 5.0f, eyeZ = 5.0f;
float camYaw = 0.0f; //camera rotation in X axis
float camPitch = 0.0f; //camera rotation in Y axis
float point_X = 10.0f;
float point_Y = 5.0f;
float point_Z = 5.0f;
I calculate the delta of the mouse movement; camYaw goes from 0 to 360 degrees
and camPitch from -90 to 90 degrees (or 0 to 2*PI and -PI/2 to PI/2).
This works fine, but when I pass the calculated point_X, point_Y, point_Z to gluLookAt() it moves the camera in a strange way (it seems to rotate the camera on an invisible sphere whose size depends on the radius used in the equations):
public void updateCamera() {
float radius = 5.0f;
point_X = (float) (radius * Math.cos(camYaw) * Math.sin(camPitch));
point_Z = (float) (radius * Math.sin(camPitch) * Math.sin(camYaw));
point_Y = (float) (radius * Math.cos(camPitch));
}
I'm trying to convert polar to Cartesian coordinates.
The larger the radius, the better it "works".
Changing from degrees to radians still doesn't help.

OpenGL Matrix Camera controls, local rotation not functioning properly

So I'm trying to figure out how to manually create a camera class that builds a local frame for camera transformations. I've created a player object based on OpenGL SuperBible's GLFrame class.
I got keyboard keys mapped to the MoveUp, MoveRight and MoveForward functions and the horizontal and vertical mouse movements are mapped to the xRot variable and rotateLocalY function. This is done to create a FPS style camera.
The problem however is in the RotateLocalY. Translation works fine and so does the vertical mouse movement but the horizontal movement scales all my objects down or up in a weird way. Besides the scaling, the rotation also seems to restrict itself to 180 degrees and rotates around the world origin (0.0) instead of my player's local position.
I figured the scaling had something to do with normalizing vectors, but the GLFrame class (which I used for reference) never normalizes any vectors, and that class works just fine. Normalizing most of my vectors only solved the scaling; all the other problems remained, so I'm guessing one piece of code is causing all of them?
I can't seem to figure out where the problem lies, I'll post all the appropriate code here and a screenshot to show the scaling.
Player object
Player::Player()
{
location[0] = 0.0f; location[1] = 0.0f; location[2] = 0.0f;
up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
forward[0] = 0.0f; forward[1] = 0.0f; forward[2] = -1.0f;
}
// Does all the camera transformation. Should be called before scene rendering!
void Player::ApplyTransform()
{
M3DMatrix44f cameraMatrix;
this->getTransformationMatrix(cameraMatrix);
glRotatef(xAngle, 1.0f, 0.0f, 0.0f);
glMultMatrixf(cameraMatrix);
}
void Player::MoveForward(GLfloat delta)
{
location[0] += forward[0] * delta;
location[1] += forward[1] * delta;
location[2] += forward[2] * delta;
}
void Player::MoveUp(GLfloat delta)
{
location[0] += up[0] * delta;
location[1] += up[1] * delta;
location[2] += up[2] * delta;
}
void Player::MoveRight(GLfloat delta)
{
// Get X axis vector first via cross product
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, forward);
location[0] += xAxis[0] * delta;
location[1] += xAxis[1] * delta;
location[2] += xAxis[2] * delta;
}
void Player::RotateLocalY(GLfloat angle)
{
// Calculate a rotation matrix first
M3DMatrix44f rotationMatrix;
// Rotate around the up vector
m3dRotationMatrix44(rotationMatrix, angle, up[0], up[1], up[2]); // Use up vector to get correct rotations even with multiple rotations used.
// Get new forward vector out of the rotation matrix
M3DVector3f newForward;
newForward[0] = rotationMatrix[0] * forward[0] + rotationMatrix[4] * forward[1] + rotationMatrix[8] * forward[2];
newForward[1] = rotationMatrix[1] * forward[1] + rotationMatrix[5] * forward[1] + rotationMatrix[9] * forward[2];
newForward[2] = rotationMatrix[2] * forward[2] + rotationMatrix[6] * forward[1] + rotationMatrix[10] * forward[2];
m3dCopyVector3(forward, newForward);
}
void Player::getTransformationMatrix(M3DMatrix44f matrix)
{
// Get Z axis (Z axis is reversed with camera transformations)
M3DVector3f zAxis;
zAxis[0] = -forward[0];
zAxis[1] = -forward[1];
zAxis[2] = -forward[2];
// Get X axis
M3DVector3f xAxis;
m3dCrossProduct(xAxis, up, zAxis);
// Fill in X column in transformation matrix
m3dSetMatrixColumn44(matrix, xAxis, 0); // first column
matrix[3] = 0.0f; // Set 4th value to 0
// Fill in the Y column
m3dSetMatrixColumn44(matrix, up, 1); // 2nd column
matrix[7] = 0.0f;
// Fill in the Z column
m3dSetMatrixColumn44(matrix, zAxis, 2); // 3rd column
matrix[11] = 0.0f;
// Do the translation
M3DVector3f negativeLocation; // Required for camera transform (right handed OpenGL system. Looking down negative Z axis)
negativeLocation[0] = -location[0];
negativeLocation[1] = -location[1];
negativeLocation[2] = -location[2];
m3dSetMatrixColumn44(matrix, negativeLocation, 3); // 4th column
matrix[15] = 1.0f;
}
Player object header
class Player
{
public:
//////////////////////////////////////
// Variables
M3DVector3f location;
M3DVector3f up;
M3DVector3f forward;
GLfloat xAngle; // Used for FPS divided X angle rotation (can't combine yaw and pitch since we'll also get a Roll which we don't want for FPS)
/////////////////////////////////////
// Functions
Player();
void ApplyTransform();
void MoveForward(GLfloat delta);
void MoveUp(GLfloat delta);
void MoveRight(GLfloat delta);
void RotateLocalY(GLfloat angle); // Only need rotation on local axis for FPS camera style. Then a translation on world X axis. (done in apply transform)
private:
void getTransformationMatrix(M3DMatrix44f matrix);
};
Applying transformations
// Clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Apply camera transforms
player.ApplyTransform();
// Set up lights
...
// Use shaders
...
// Render the scene
RenderScene();
// Do post rendering operations
glutSwapBuffers();
and mouse
float mouseSensitivity = 500.0f;
float horizontal = (width / 2) - mouseX;
float vertical = (height / 2) - mouseY;
horizontal /= mouseSensitivity;
vertical /= (mouseSensitivity / 25);
player.xAngle += -vertical;
player.RotateLocalY(horizontal);
glutWarpPointer((width / 2), (height / 2));
Honestly, I think you are taking a way too complicated approach to your problem. There are many ways to create a camera. My favorite is using an R3 vector and a quaternion, but you could also work with an R3 vector and two floats (pitch and yaw).
The setup with two angles is simple:
glLoadIdentity();
glTranslatef(-pos[0], -pos[1], -pos[2]);
glRotatef(-yaw, 0.0f, 0.0f, 1.0f);
glRotatef(-pitch, 0.0f, 1.0f, 0.0f);
The tricky part now is moving the camera. You must do something along the lines of:
float ds = speed * dt;
position += transform_y(pitch, transform_z(yaw, Vector3(ds, 0, 0)));
How to do the transforms I would have to look up, but you could do it with a rotation matrix.
Rotation is trivial, just add or subtract from the pitch and yaw values.
I like using a quaternion for the orientation because it is general and thus you have a camera (any entity that is) that independent of any movement scheme. In this case you have a camera that looks like so:
class Camera
{
public:
// lots of stuff omitted
void setup();
void move_local(Vector3f value);
void rotate(float dy, float dz);
private:
mx::Vector3f position;
mx::Quaternionf orientation;
};
Then the setup code shamelessly uses gluLookAt; you could build a transformation matrix out of it instead, but I never got that to work right.
void Camera::setup()
{
// projection related stuff
mx::Vector3f eye = position;
mx::Vector3f forward = mx::transform(orientation, mx::Vector3f(1, 0, 0));
mx::Vector3f center = eye + forward;
mx::Vector3f up = mx::transform(orientation, mx::Vector3f(0, 0, 1));
gluLookAt(eye(0), eye(1), eye(2), center(0), center(1), center(2), up(0), up(1), up(2));
}
Moving the camera in local frame is also simple:
void Camera::move_local(Vector3f value)
{
position += mx::transform(orientation, value);
}
The rotation is also straightforward.
void Camera::rotate(float dy, float dz)
{
mx::Quaternionf o = orientation;
o = mx::axis_angle_to_quaternion(dz, mx::Vector3f(0, 0, 1)) * o;
o = o * mx::axis_angle_to_quaternion(dy, mx::Vector3f(0, 1, 0));
orientation = o;
}
(Shameless plug):
If you are asking what math library I use, it is mathex. I wrote it...