Arcball Camera: how to obtain right, direction and up - C++

I'm trying to implement a camera that follows a moving object. I've implemented these functions:
void Camera::espheric_yaw(float degrees, glm::vec3 center_point)
{
    float lim_yaw = glm::radians(89.0f);
    float radians = glm::radians(degrees);
    absoluteYaw += radians;
    // ... clamp absoluteYaw to [-lim_yaw, lim_yaw]

    float radius = 10.0f;
    float camX = cos(absoluteYaw) * cos(absoluteRoll) * radius;
    float camY = sin(absoluteRoll) * radius;
    float camZ = sin(absoluteYaw) * cos(absoluteRoll) * radius;

    eyes.x = camX;
    eyes.y = camY;
    eyes.z = camZ;

    lookAt = center_point;
    view = glm::normalize(lookAt - eyes);
    up = glm::vec3(0, 1, 0);
    right = glm::normalize(glm::cross(view, up));
}
I want to use this function (and its pitch counterpart) for a camera that follows a moving 3D model. Right now it works when center_point is (0,1,0). I think I'm getting the position right, but the up vector is clearly not always (0,1,0).
How can I get the up, view and right vectors for the camera? And if I update the camera's eyes position this way, how will the camera move when the other object (centered at the center_point parameter) moves?
The idea is to update this on every mouse input, passing the center of the moving object as center_point, and then call gluLookAt with my camera's eyes, up and look-at values (where the look-at point is eyes + view).

Following a moving object is a matter of pointing the camera at that object. This is what the typical lookAt function does. See the maths here and then use glm::lookAt().
The 'arcball' technique is for rotating with the mouse. See some maths here.
The idea is to get two vectors (first, second) from positions on screen. For each vector, X and Y are taken from the pixels the mouse has travelled and the size of the window, and Z is calculated by the 'trackball' maths. After normalizing the two vectors, their cross product gives the axis of rotation in camera coordinates, and their dot product gives the cosine of the angle. Now you can rotate the camera with glm::rotate().
If you go another route (e.g. calculating the camera matrix on your own), then you must update the "up" direction of the camera yourself. Remember it is perpendicular to the other two axes of the camera; see the sketch below.
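As a minimal sketch of that last point, assuming GLM is available: re-derive right and up from the view direction every time the camera or the target moves, instead of keeping up fixed at (0,1,0). The function name followTarget and its by-reference outputs are only illustrative, not part of the question's code.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rebuild the camera basis (and a view matrix) so the camera keeps pointing
// at a moving target. worldUp is only a reference direction; the real 'up'
// is re-derived so it stays perpendicular to view and right.
glm::mat4 followTarget(const glm::vec3& eyes, const glm::vec3& target,
                       glm::vec3& view, glm::vec3& right, glm::vec3& up)
{
    const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
    view  = glm::normalize(target - eyes);              // forward direction
    right = glm::normalize(glm::cross(view, worldUp));  // degenerates if view is (anti)parallel to worldUp
    up    = glm::cross(right, view);                    // the true camera up, generally not (0,1,0)
    return glm::lookAt(eyes, target, up);
}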

Related

OpenGL camera rotation using gluLookAt

I am trying to use gluLookAt to implement an FPS style camera in OpenGL fixed function pipeline. The mouse should rotate the camera in any given direction.
I store the position of the camera:
float xP;
float yP;
float zP;
I store the look at coordinates:
float xL;
float yL;
float zL;
The up vector is always set to (0,1,0).
I use this camera as follows: gluLookAt(xP,yP,zP, xL,yL,zL, 0,1,0);
I want my camera to be able to move in yaw and pitch, but not roll.
After every frame, I reset the coordinates of the mouse to the middle of the screen. From this I am able to get a change in both x and y.
How can I convert the change in x and y after each frame to appropriately change the lookat coordinates (xL, yL, zL) to rotate the camera?
Start with a set of vectors:
fwd = (0, 0, -1);
rht = (1, 0, 0);
up = (0, 1, 0);
Given that your x and y, taken from the mouse positions you mentioned, are small enough, you can take them directly as yaw and pitch rotations respectively. With the yaw value rotate the rht and fwd vectors around the up vector, then rotate the fwd vector around rht with the pitch value. This way you'll have a new forward direction for your camera (the fwd vector) from which you can derive a new look-at point (L = P + fwd in your case).
You have to remember to restrict the pitch rotation so that the fwd and up vectors never become parallel. You can prevent that by recreating the up vector every time you do a pitch rotation: simply take the cross product of the rht and fwd vectors. A side note though: this way up will not always be (0,1,0).
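A minimal sketch of that per-frame update, assuming GLM and that dx and dy are the mouse deltas already converted to radians (the function name applyMouseLook and the sign convention are assumptions, not from the question):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// fwd, rht, up are the camera basis vectors kept between frames,
// starting as (0,0,-1), (1,0,0), (0,1,0).
void applyMouseLook(float dx, float dy, glm::vec3& fwd, glm::vec3& rht, glm::vec3& up)
{
    // Yaw: rotate fwd and rht around the current up vector (flip signs to taste).
    glm::mat4 yaw = glm::rotate(glm::mat4(1.0f), -dx, up);
    fwd = glm::vec3(yaw * glm::vec4(fwd, 0.0f));
    rht = glm::vec3(yaw * glm::vec4(rht, 0.0f));

    // Pitch: rotate fwd around the right vector.
    glm::mat4 pitch = glm::rotate(glm::mat4(1.0f), -dy, rht);
    fwd = glm::vec3(pitch * glm::vec4(fwd, 0.0f));

    // Recreate up so the basis stays orthonormal; it will no longer be (0,1,0).
    fwd = glm::normalize(fwd);
    up  = glm::normalize(glm::cross(rht, fwd));

    // The new look-at point is then P + fwd, e.g.
    // gluLookAt(P.x, P.y, P.z, P.x + fwd.x, P.y + fwd.y, P.z + fwd.z, up.x, up.y, up.z);
}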

GLM: How can I keep the clicked point under mouse when rotating?

I am trying to rotate a mesh about its origin using a standard arcball rotation. Whenever I click on the 3D object, I cast a ray from the mouse to find the intersection point. I measure the distance of that intersection point from the 3D object's origin to create a sphere that is used as the "arcball" for that rotation (until the mouse is released). The reason I do this at the start of every rotation is that I want the clicked point to remain under the mouse. The object rotates correctly; it's just that the magnitude of the rotation is smaller, and as a result the clicked point does not remain under the mouse. The following example tries to rotate a cube. I paint a black rectangle on the texture to keep track of the initially clicked point. Here is a video of the problem:
http://youtu.be/x8rsqq1Qdfo
This is the function that gets the arcball vector based on the mouse position and the radius of the sphere (arcballRadius) calculated on click. (I am worried that this function does not consider the location of the cube object, although it so happens that the cube is located at (0,0,z).)
/**
 * Get a normalized vector from the center of the virtual ball O to a
 * point P on the virtual ball surface, such that P is aligned on
 * screen's (X,Y) coordinates. If (X,Y) is too far away from the
 * sphere, return the nearest point on the virtual ball surface.
 */
glm::vec3 get_arcball_vector(double x, double y) {
    glm::vec3 P = glm::vec3(x, y, 0);
    float OP_squared = P.x * P.x + P.y * P.y;
    if (OP_squared <= arcballRadius * arcballRadius) {
        P.z = sqrt(arcballRadius * arcballRadius - OP_squared); // Pythagoras
    } else {
        static int i;
        std::cout << i++ << " Nearest point" << std::endl;
        P = glm::normalize(P); // nearest point
    }
    return P;
}
Whenever the mouse is moved:
// get two vectors, one for the previous point and one for the current point
glm::vec3 va = glm::normalize(get_arcball_vector(prevMousePos.x, prevMousePos.y)); // previous point
glm::vec3 vb = glm::normalize(get_arcball_vector(mousePos.x, mousePos.y));         // current point
float angle = acos(glm::dot(va, vb)); // angle between those two vectors based on the dot product
// since this axis is in camera coordinates it must be converted before being applied to the object
glm::vec3 axis_in_camera_coord = glm::cross(va, vb);
glm::mat3 camera2object = glm::inverse(glm::mat3(viewMatrix) * glm::mat3(cube.modelMatrix));
glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;
// apply rotation to cube's matrix
cube.modelMatrix = glm::rotate(cube.modelMatrix, angle, axis_in_object_coord);
How can I make the clicked point remain under the mouse?
Set the arcball radius to the distance between the point clicked and the center of the object. In other words the first point is a raycast on the cube and the subsequent points will be raycasts on an imaginary sphere centered on the object and with the above mentioned radius.
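A rough sketch of that on-click setup (the names mouseRay, rayIntersectsCube and cubeCenter are placeholders for the raycast and object data the question already has):

// On mouse press: the first raycast hits the cube; every later drag position
// is intersected with an imaginary sphere of this radius around the object.
glm::vec3 hitPoint;
if (rayIntersectsCube(mouseRay, hitPoint)) {
    arcballRadius = glm::length(hitPoint - cubeCenter);
}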
P.S.: Check out my arcball rotation code on codereview.SE. It doesn't mess with acos and axis-angle and only normalizes twice.

Rotation About Incorrect Axes (First-Person Camera Implementation)

I am implementing a first-person camera to move about a scene using the arrow keys on my keyboard. It seems to work OK when I am only rotating about a single axis (X or Y); however, if I rotate about both axes it also gives me rotation about the third, Z, axis. I am fairly sure the problem is that my camera does not rotate about global axes but about its local ones, resulting in 'roll' when I just want yaw and pitch. In my code I deduce a forward vector from the X and Y rotations, stored in two variables. The most relevant code snippet is as follows:
glm::mat4 CameraManager::rotateWorld(float angle, glm::vec3 rot){
    static float yRot = 0.0f;
    static float xRot = 0.0f;

    glm::vec3 degrees = rot * angle;
    glm::vec3 radians = glm::vec3(degrees.x * (M_PI/180.0f),
                                  degrees.y * (M_PI/180.0f),
                                  degrees.z * (M_PI/180.0f));
    yRot += radians.y;
    xRot += radians.x;

    forwardVector = glm::vec3(sinf(yRot) * cosf(xRot),
                              -sinf(xRot),
                              -cosf(yRot) * cosf(xRot));

    return glm::rotate(glm::mat4(1.0f), angle, rot);
}
The rotateWorld function is complemented by the moveForward function:
glm::mat4 CameraManager::moveForward(float dist){
    glm::vec3 translation = forwardVector / glm::vec3(sqrt(forwardVector.x * forwardVector.x +
                                                           forwardVector.y * forwardVector.y +
                                                           forwardVector.z * forwardVector.z)) * dist;
    return glm::translate(glm::mat4(1.0f), -translation);
}
where yRot is equivalent to yaw and xRot is equivalent to pitch.
The rotation and translation matrices are simply multiplied together in the main section of the program.
I then go on to multiply a distance d by this vector to update the position.
xRot and yRot are static doubles that get incremented/decremented when the user presses an arrow key.
When the program starts, this is the view: the plane and the monkey head are facing the 'right way' up. Incrementing/decrementing the pitch and yaw individually works as expected, but when I, say, increase the pitch and then the yaw, the scene flips sideways! (Picture below.) Any ideas how to fix this?
If I understand you correctly, the problem you're experiencing is that your "up" vector is not always pointing vertically upwards with respect to the intended Y axis of your viewing plane.
Determining a correct "up" vector usually requires a combination of cross-product operations between the vector you have and the viewport's X and Y axes.
You may find some useful hints in the documentation for the gluLookAt function whose purpose is to calculate a view matrix with desired orientation (i.e. without roll) given an eye position and the coordinates of the intended centre of the field.
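A minimal sketch of that idea, assuming GLM; forwardVector is the vector computed in rotateWorld() above, and the function name rebuildCameraBasis is just for illustration:

#include <glm/glm.hpp>

// Re-derive a roll-free camera basis from the forward vector and a fixed
// world-up reference, which is essentially what gluLookAt does internally.
void rebuildCameraBasis(const glm::vec3& forwardVector,
                        glm::vec3& right, glm::vec3& up)
{
    const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
    glm::vec3 f = glm::normalize(forwardVector);
    right = glm::normalize(glm::cross(f, worldUp)); // camera X axis
    up    = glm::cross(right, f);                   // camera Y axis, no roll
}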

How to properly rotate a quaternion along all axes?

I want to code a first person camera with its rotation stored in a quaternion. Unfortunately there is something wrong with the rotation.
The following function is responsible for rotating the camera. The parameters Mouse and Speed pass the mouse movement and the rotation speed. The function fetches the rotation quaternion, rotates it and stores the result. By the way, I'm using Bullet Physics; that is where the types and functions come from.
void Rotate(vec2 Mouse, float Speed)
{
    btTransform transform = camera->getWorldTransform();
    btQuaternion rotation = transform.getRotation();

    Mouse = Mouse * Speed;                    // apply mouse sensitivity
    btQuaternion change(Mouse.y, Mouse.x, 0); // create quaternion from angles
    rotation = change * rotation;             // rotate camera by that

    transform.setRotation(rotation);
    camera->setWorldTransform(transform);
}
To illustrate the resulting camera rotation when the mouse moves, I show you a hand drawing. On the left side the wrong rotation the camera actually performs is shown. On the right side the desired correct case is shown. The arrows indicate how the camera is rotated when moving the mouse up (in orange) and down (in blue).
As you can see, as long as the yaw is zero, the rotation is correct. But the more yaw there is, the smaller the circles in which the camera rotates become. In contrast, the circles should always run along the whole sphere, like lines of longitude.
I am not very familiar with quaternions, so here I ask how to correctly rotate them.
I found out how to properly rotate a quaternion on my own. The key was to find vectors for the axes I want to rotate around. Those are used to create quaternions from axis and angle, where the angle is the amount to rotate around that axis.
The following code shows what I ended up with. It also allows to roll the camera, which might be useful some time.
void Rotate(btVector3 Amount, float Sensitivity)
{
    // fetch current rotation
    btTransform transform = camera->getWorldTransform();
    btQuaternion rotation = transform.getRotation();

    // apply mouse sensitivity
    Amount *= Sensitivity;

    // create orientation vectors
    btVector3 up(0, 1, 0);
    btVector3 lookat = quatRotate(rotation, btVector3(0, 0, 1));
    btVector3 forward = btVector3(lookat.getX(), 0, lookat.getZ()).normalize();
    btVector3 side = btCross(up, forward);

    // rotate camera with quaternions created from axis and angle
    rotation = btQuaternion(up, Amount.getY()) * rotation;
    rotation = btQuaternion(side, Amount.getX()) * rotation;
    rotation = btQuaternion(forward, Amount.getZ()) * rotation;

    // set new rotation
    transform.setRotation(rotation);
    camera->setWorldTransform(transform);
}
Since I rarely found information about quaternion rotation, I'll spend some time further explaining the code above.
Fetching and setting the rotation is specific to the physics engine and isn't related to this question, so I won't elaborate on it. The next part, multiplying the amount by the mouse sensitivity, should be really clear. Let's continue with the direction vectors.
The up vector depends on your own implementation. Most conveniently, the positive Y axis points up, therefore we end up with 0, 1, 0.
The lookat vector represents the direction the camera looks at. We simply rotate a unit vector pointing forward by the camera rotation quaternion. Again, the forward pointing vector depends on your conventions. If the Y axis is up, the positive Z axis might point forward, which is 0, 0, 1.
Do not mix that up with the next vector. It's named forward and refers to the camera orientation; we just need to project the lookat vector onto the ground. In this case, we simply take the lookat vector and ignore its up-pointing component. For neatness we normalize that vector.
The side vector points leftwards from the camera orientation. Therefore it lies perpendicular to both the up and the forward vector and we can use the cross product to compute it.
Given those vectors, we can correctly rotate the camera quaternion around them. Which axis you start with, X, Y or Z, depends on the Euler angle sequence, which is, again, a convention that varies from application to application. Since I want the rotations to be applied in Y X Z order, I do the following.
First, rotate the camera around the up axis by the amount for the Y rotation. This is yaw.
Then rotate around the side axis, which points leftwards, by the X amount. It's pitch.
And lastly, rotate around the forward vector by the Z amount to apply roll.
To apply those rotations, we need to multiply the quaternions created from axis and angle with the current camera rotation. Lastly, we apply the resulting quaternion to the body in the physics simulation.
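For instance, a call driven by mouse input might look like this (a sketch only; mouseDeltaX, mouseDeltaY and sensitivity are assumed names, with the deltas in radians):

// X component = pitch, Y component = yaw, Z component = roll,
// matching the order described above.
Rotate(btVector3(mouseDeltaY, mouseDeltaX, 0.0f), sensitivity);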
Matrices and pitch/yaw/roll both have their limitations, so I do not use them anymore; instead I use quaternions. I rotate the view vector and then recalculate first the camera vectors, then the view matrix, based on the rotated view vector.
void Camera::rotateViewVector(glm::quat quat) {
    glm::quat rotatedViewQuat;
    quat = glm::normalize(quat);
    m_viewVector = glm::normalize(m_viewVector);

    // build a pure quaternion (w = 0) from the view vector
    glm::quat viewQuat(0.0f, m_viewVector.x, m_viewVector.y, m_viewVector.z);
    viewQuat = glm::normalize(viewQuat);

    // rotate the view vector: q * v * conjugate(q)
    rotatedViewQuat = (quat * viewQuat) * glm::conjugate(quat);
    rotatedViewQuat = glm::normalize(rotatedViewQuat);
    m_viewVector = glm::normalize(glm::vec3(rotatedViewQuat.x, rotatedViewQuat.y, rotatedViewQuat.z));

    // re-derive right and up from the rotated view vector
    m_rightVector = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), m_viewVector));
    m_upVector = glm::normalize(glm::cross(m_viewVector, m_rightVector));
}
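One possible way to drive this from mouse input (just a sketch; mouseDeltaX, mouseDeltaY in radians and the getRightVector() accessor are assumptions, not part of the code above):

// build incremental yaw and pitch quaternions from mouse deltas
glm::quat yaw   = glm::angleAxis(-mouseDeltaX, glm::vec3(0.0f, 1.0f, 0.0f)); // yaw around world up
glm::quat pitch = glm::angleAxis(-mouseDeltaY, camera.getRightVector());     // pitch around camera right
camera.rotateViewVector(yaw * pitch);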

Converting from a crystal ball interface to FPS-style camera?

I currently have a crystal-ball interface camera set-up where the camera always looks at the origin, and pressing left, right, up or down simply moves around the object.
I want to change this so that the camera can move around freely around the 3D environment.
I currently have two functions, left and up, that have been implemented for the crystal-ball interface I mentioned.
I want the left and right key to strafe left/right, while up/down to rise and sink the camera. How exactly would I go about changing it?
Also, what would be the proper way to move the camera forward and backward? I was thinking maybe dragging the mouse could equate to moving forward/backward?
void Transform::left(float degrees, vec3& eye, vec3& up) {
    eye = eye * rotate(degrees, up);
}

void Transform::up(float degrees, vec3& eye, vec3& up) {
    vec3 ortho_axis = glm::cross(eye, up);
    ortho_axis = glm::normalize(ortho_axis);
    eye = eye * rotate(degrees, ortho_axis);
    up = up * rotate(degrees, ortho_axis);
}
Basically throw away the entire code and start over. The camera in your scene behaves like any other object and thus has a position and orientation. (Or one transform matrix.) All you need to do is apply the opposite of the camera transformation.
glTranslatef(-camera.x, -camera.y, -camera.z);
glRotatef(-camera.angle, camera.axis.x, camera.axis.y, camera.axis.z);
You get the idea.
What I do in my code, since I have quaternions for rotation and converting them into a matrix or axis & angle is expensive, is compute the forward and up vectors for the camera and feed those into gluLookAt.
Vector3f f = transform(camera.orientation, Vector3f(1, 0, 0));
Vector3f u = transform(camera.orientation, Vector3f(0, 0, 1));
Vector3f p = camera.position;
Vector3f c = p + f;
gluLookAt(p.x, p.y, p.z, c.x, c.y, c.z, u.x, u.y, u.z);
A common setup for FPS controls is to have movement bound to keys (Strafe Left/Right, Move Forward/Back) and looking around tied to the mouse (x-axis: rotate Left/Right, y-axis: pitch up/down).
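A rough sketch of those bindings, using the question's eye/up conventions (just a sketch; the key* flags, the view vector and speed are assumed names, not part of the question's code):

// move along the camera's own axes each frame
vec3 right = glm::normalize(glm::cross(view, up)); // view = direction the camera faces

if (keyLeft)       eye -= right * speed;                 // strafe left
if (keyRight)      eye += right * speed;                 // strafe right
if (keyUp)         eye += up * speed;                    // rise
if (keyDown)       eye -= up * speed;                    // sink
if (mouseDragFwd)  eye += glm::normalize(view) * speed;  // move forward
if (mouseDragBack) eye -= glm::normalize(view) * speed;  // move back

// keep looking in the same direction from the new position
vec3 center = eye + view;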
HTH