I am trying to translate my camera's position along an orientation defined in a glm::quat.
void Camera::TranslateCameraAlongZ(float distance) {
    glm::vec3 direction = glm::normalize(rotation * glm::vec3(0.0f, 0.0f, 1.0f));
    position += direction * distance;
}
This works fine when rotation is the identity quaternion, but the X and Z translations break when the rotation is anything else. When the camera is rotated 45 degrees to the left and I call TranslateCameraAlongZ(-0.1f), I am translated backwards and to the left when I should be going backwards and to the right. All translations that are not at perfect 90-degree increments are similarly messed up. What am I doing wrong here, and what is the simplest way to fix it? In case they might be relevant, here is the function that generates my view matrix and the function that rotates my camera:
glm::mat4 Camera::GetView() {
    view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
    return view;
}
void Camera::RotateCameraDeg(float x, float y, float z) {
    rotation = glm::normalize(glm::angleAxis(x, glm::vec3(1.0f, 0.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(y, glm::vec3(0.0f, 1.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(z, glm::vec3(0.0f, 0.0f, 1.0f)) * rotation);
    std::cout << glm::eulerAngles(rotation).x << " " << glm::eulerAngles(rotation).y << " " << glm::eulerAngles(rotation).z << "\n";
}
I'm guessing that "rotation" is the inverse of the camera's rotation (thinking of the camera as an object in the world). That is, "rotation" takes an object from world space to camera space. If you want to move the camera's world space position forwards, you need to take the local forward vector (0,0,1) or (0,0,-1)?, multiply it by the inverse of "rotation" (to move the vector from camera space to world space), then scale and add to the position.
I want to control my camera so that it can rotate around the model.
The theoretical code should be:
// `camera_rotation_angle_x_` and `camera_rotation_angle_y_` are initialized to 0, and can be modified by the user.
glm::mat4 CreateViewMatrix(glm::vec3 eye_pos, glm::vec3 scene_center, glm::vec3 up_vec) {
    auto eye_transform = glm::translate(glm::mat4(1.0f), -scene_center); // recenter
    eye_transform = glm::rotate(eye_transform, camera_rotation_angle_x_, glm::vec3(1.0f, 0.0f, 0.0f));
    eye_transform = glm::rotate(eye_transform, camera_rotation_angle_y_, glm::vec3(0.0f, 1.0f, 0.0f));
    eye_transform = glm::translate(eye_transform, scene_center); // move back
    eye_pos = eye_transform * glm::vec4(eye_pos, 1.0f);
    up_vec = eye_transform * glm::vec4(up_vec, 0.0f);
    return glm::lookAt(eye_pos, scene_center, up_vec);
}
But the two lines "recenter" and "move back" must be written as follows to rotate correctly, otherwise the distance from the camera to the center will vary when the rotation parameters change:
auto eye_transform = glm::translate(glm::mat4(1.0f), scene_center); // recenter *The sign has changed*
...
eye_transform = glm::translate(eye_transform, -scene_center); // move back *The sign has changed*
...
// Correctness test: the rotation logic is correct only if the distance remains constant.
cout << "eye_pos: " << eye_pos[0] << ", " << eye_pos[1] << ", " << eye_pos[2] << endl;
cout << "distance: " << (sqrt(
        pow(eye_pos[0] - scene_center[0], 2)
        + pow(eye_pos[1] - scene_center[1], 2)
        + pow(eye_pos[2] - scene_center[2], 2)
    )) << endl;
Subtracting the center first and then adding it back is the correct logic; adding first and then subtracting makes no sense.
So what's going wrong that I have to write code with logic errors in order for it to work properly?
The caller's code is below, maybe the bug is here?
// `kEyePos`, `kSceneCenter`, `kUpVec`, `kFovY`, `kAspect`, `kDistanceEyeToBack` and `kLightPos` are constants throughout the lifetime of the program
UniformBufferObject ubo{};
ubo.model = glm::mat4(1.0f);
ubo.view = CreateViewMatrix(kEyePos, kSceneCenter, kUpVec);
ubo.proj = glm::perspective(kFovY, kAspect, 0.1f, kDistanceEyeToBack);
// GLM was originally designed for OpenGL, where the Y coordinate of the clip coordinates is inverted.
// The easiest way to compensate for that is to flip the sign on the scaling factor of the Y axis in the projection matrix.
// Because of the Y-flip we did in the projection matrix, the vertices are now being drawn in counter-clockwise order instead of clockwise order.
// This causes backface culling to kick in and prevents any geometry from being drawn.
// You should modify the frontFace in `VkPipelineRasterizationStateCreateInfo` to `VK_FRONT_FACE_COUNTER_CLOCKWISE` to correct this.
ubo.proj[1][1] *= -1;
ubo.light_pos = glm::vec4(kLightPos, 1.0f); // The w component of point is 1
memcpy(vk_buffer_->GetUniformBufferMapped(frame_index), &ubo, sizeof(ubo));
I found that the problem was with the order of matrix construction, not with the glm::lookAt method.
m = glm::rotate(m, angle, up_vec)
is equivalent to
m = m * glm::rotate(glm::mat4(1), angle, up_vec)
, not
m = glm::rotate(glm::mat4(1), angle, up_vec) * m
as I thought.
This easily explains why swapping "recenter" and "move back" works properly.
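If you want to convince yourself of this, here is a small standalone check (my sketch, not from the original post) that compares glm::rotate against both multiplication orders:
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
    float angle = 0.5f; // radians
    glm::vec3 axis(0.0f, 1.0f, 0.0f);
    glm::mat4 r = glm::rotate(glm::mat4(1.0f), angle, axis);

    glm::vec4 p(1.0f, 0.0f, 0.0f, 1.0f);
    glm::vec4 viaRotate = glm::rotate(m, angle, axis) * p; // what glm::rotate(m, ...) does
    glm::vec4 viaPost   = (m * r) * p;                     // m * R
    glm::vec4 viaPre    = (r * m) * p;                     // R * m

    std::printf("rotate vs m*R: %f\n", glm::length(viaRotate - viaPost)); // ~0
    std::printf("rotate vs R*m: %f\n", glm::length(viaRotate - viaPre));  // clearly not 0
    return 0;
}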
The correct code is as follows:
glm::mat4 VulkanRendering::CreateViewMatrix(glm::vec3 eye_pos, glm::vec3 scene_center, glm::vec3 up_vec) const {
    // Create the transform matrix in reverse order.
    // First rotate around the X axis and then around the Y axis, otherwise it does not match the practice of most games.
    auto view_transform = glm::translate(glm::mat4(1.0f), scene_center); // last: move back
    view_transform = glm::rotate(view_transform, camera_rotation_angle_y_, glm::vec3(0.0f, 1.0f, 0.0f));
    view_transform = glm::rotate(view_transform, camera_rotation_angle_x_, glm::vec3(1.0f, 0.0f, 0.0f));
    view_transform = glm::translate(view_transform, -scene_center); // first: recenter
    eye_pos = view_transform * glm::vec4(eye_pos, 1.0f); // The w component of a point is 1
    up_vec = view_transform * glm::vec4(up_vec, 0.0f);   // The w component of a vector is 0
    return glm::lookAt(eye_pos, scene_center, up_vec);
}
So I have a camera object in my scenegraph which I want to rotate around another object and still look at it.
So far, the code I've tried for translating its position just keeps moving it back and forth a small amount to the right and back.
Here is the code I've tried using in my game update loop:
//ang is set to 75.0f
camera.position += camera.right * glm::vec3(cos(ang * deltaTime), 1.0f, 1.0f);
I'm not really sure where I'm going wrong. I've looked at other code that rotates around an object and it uses cosine and sine, but since I'm only translating along the x axis I thought I would only need this.
First you have to create a rotated vector. This can be done with glm::rotateZ. Note that since GLM version 0.9.6 the angle has to be given in radians.
float ang = ....; // angle per second in radians
float timeSinceStart = ....; // seconds since the start of the animation
float dist = ....; // distance from the camera to the target
glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
Furthermore, you must know the point around which the camera should turn. Probably the position of the object:
glm::vec3 objectPosition = .....; // position of the object where the camera looks to
The new position of the camera is the position of the object, displaced by the rotation vector:
camera.position = objectPosition + cameraVec;
The target of the camera has to be the object position, because the camera should look at the object:
camera.front = glm::normalize(objectPosition - camera.position);
The up vector of the camera should be the z-axis (rotation axis):
camera.up = glm::vec3(0.0f, 0.0f, 1.0f);
So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
    mHorizontalAngle += offsetHorizontalAngle;
    mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
    mVerticalAngle += offsetVerticalAngle;
    mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
    return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get the actual angles in my example game. All I have is the current and previous mouse location in window coordinates, so how can I get proper angles from that?
EDIT: On second thought, my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? I can't just sum them up endlessly.
Take a horizontal cross section of the viewing frustum through the camera and your mouse position, where:
theta is half of your FOV
p is your projection plane distance (don't worry - it will cancel out)
x is the mouse offset from the centre of the screen in pixels, x_res is the screen width in pixels, and d is the corresponding offset on the projection plane
From simple ratios it is clear that:
d / (p * tan(theta)) = x / (x_res / 2)
But from simple trigonometry:
tan(psi) = d / p
So:
psi = atan((2 * x / x_res) * tan(theta))
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle phi:
phi = atan((2 * y / y_res) * tan(theta) / A)
Where A is your aspect ratio (width / height)
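Translated into code, the per-axis angle for a given mouse position could be computed like this (a sketch based on the formulas above; it assumes theta is half the horizontal FOV and that mouse coordinates are in pixels with the origin at the top-left of the window):
#include <cmath>

// psi for a horizontal mouse position, phi for a vertical one.
float HorizontalAngle(float mouseX, float screenWidth, float halfFov) {
    float x = mouseX - screenWidth * 0.5f;       // offset from the centre
    return std::atan((2.0f * x / screenWidth) * std::tan(halfFov));
}

float VerticalAngle(float mouseY, float screenWidth, float screenHeight, float halfFov) {
    float aspect = screenWidth / screenHeight;   // A = width / height
    float y = screenHeight * 0.5f - mouseY;      // flip so "up" is positive
    return std::atan((2.0f * y / screenHeight) * std::tan(halfFov) / aspect);
}

// Per frame, the rotation to apply is the difference between the angles of
// the current and the previous mouse position, e.g.
//   offsetHorizontalAngle = HorizontalAngle(xNow, w, halfFov) - HorizontalAngle(xPrev, w, halfFov);
//   offsetVerticalAngle   = VerticalAngle(yNow, w, h, halfFov) - VerticalAngle(yPrev, w, h, halfFov);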
I'm creating the view matrix for my camera using its current orientation (quaternion) and its current position.
void Camera::updateViewMatrix()
{
    view = glm::gtx::quaternion::toMat4(orientation);

    // Include rotation (Free Look Camera)
    view[3][0] = -glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position);
    view[3][1] = -glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position);
    view[3][2] = -glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position);

    // Ignore rotation (FPS Camera)
    //view[3][0] = -position.x;
    //view[3][1] = -position.y;
    //view[3][2] = -position.z;

    view[3][3] = 1.0f;
}
There is a problem with this in that I do not believe the quaternion-to-matrix calculation is giving the correct answer. Translating the camera works as expected but rotating it causes incorrect behavior.
I am rotating the camera using the difference between the current mouse position and the centre of the screen (resetting the mouse position each frame):
int xPos;
int yPos;
glfwGetMousePos(&xPos, &yPos);
int centreX = 800 / 2;
int centreY = 600 / 2;
rotate(xPos - centreX, yPos - centreY);
// Reset mouse position for next frame
glfwSetMousePos(800 / 2, 600 / 2);
The rotation takes place in this method
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    // Apply rotation speed to the rotation
    yawDegrees *= lookSensitivity;
    pitchDegrees *= lookSensitivity;

    if (isLookInverted)
    {
        pitchDegrees = -pitchDegrees;
    }

    pitchAccum += pitchDegrees;

    // Stop the camera from looking any higher than 90 degrees
    if (pitchAccum > 90.0f)
    {
        //pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = 90.0f;
    }

    // Stop the camera from looking any lower than 90 degrees
    if (pitchAccum < -90.0f)
    {
        //pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = -90.0f;
    }

    yawAccum += yawDegrees;
    if (yawAccum > 360.0f)
    {
        yawAccum -= 360.0f;
    }
    if (yawAccum < -360.0f)
    {
        yawAccum += 360.0f;
    }

    float yaw = yawDegrees * DEG2RAD;
    float pitch = pitchDegrees * DEG2RAD;
    glm::quat rotation;

    // Rotate the camera about the world Y axis (if the mouse has moved in any x direction)
    rotation = glm::gtx::quaternion::angleAxis(yaw, 0.0f, 1.0f, 0.0f);
    // Concatenate quaternions
    orientation = orientation * rotation;

    // Rotate the camera about the world X axis (if the mouse has moved in any y direction)
    rotation = glm::gtx::quaternion::angleAxis(pitch, 1.0f, 0.0f, 0.0f);
    // Concatenate quaternions
    orientation = orientation * rotation;
}
Am I concatenating the quaternions correctly for the correct orientation?
There is also a problem with the pitch accumulation in that it restricts my view to ~±5 degrees rather than ±90. What could be the cause of that?
EDIT:
I have solved the problem with the pitch accumulation so that its range is [-90, 90]. It turns out that glm's angleAxis takes the angle in degrees (not radians), and the order of multiplication for the quaternion concatenation was incorrect.
// Rotate the camera about the world Y axis
// N.B. 'angleAxis' method takes angle in degrees (not in radians)
rotation = glm::gtx::quaternion::angleAxis(yawDegrees, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref rotation, ref orientation)
orientation = orientation * rotation;
// Rotate the camera about the world X axis
rotation = glm::gtx::quaternion::angleAxis(pitchDegrees, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref orientation, ref rotation)
orientation = rotation * orientation;
The problem that remains is that the view matrix rotation appears to rotate the drawn object and not look around like a normal FPS camera.
I have uploaded a video to YouTube to demonstrate the problem. I move the mouse around to change the camera's orientation but the triangle appears to rotate instead.
YouTube video demonstrating camera orientation problem
EDIT 2:
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    // Apply rotation speed to the rotation
    yawDegrees *= lookSensitivity;
    pitchDegrees *= lookSensitivity;

    if (isLookInverted)
    {
        pitchDegrees = -pitchDegrees;
    }

    pitchAccum += pitchDegrees;

    // Stop the camera from looking any higher than 90 degrees
    if (pitchAccum > 90.0f)
    {
        pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = 90.0f;
    }
    // Stop the camera from looking any lower than 90 degrees
    else if (pitchAccum < -90.0f)
    {
        pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
        pitchAccum = -90.0f;
    }
    // 'pitchAccum' range is [-90, 90]
    //printf("pitchAccum %f \n", pitchAccum);

    yawAccum += yawDegrees;
    if (yawAccum > 360.0f)
    {
        yawAccum -= 360.0f;
    }
    else if (yawAccum < -360.0f)
    {
        yawAccum += 360.0f;
    }

    orientation =
        glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
        glm::gtx::quaternion::angleAxis(yawAccum, 0.0f, 1.0f, 0.0f);
}
EDIT3:
The following multiplication order allows the camera to rotate around its own axis but face the wrong direction:
glm::mat4 translation;
translation = glm::translate(translation, position);
view = glm::gtx::quaternion::toMat4(orientation) * translation;
EDIT4:
The following will work (applying the translation matrix, built from the position, after the rotation):
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -position);
view *= translation;
I can't get the dot product with each orientation axis to work, though:
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
glm::vec3 p(
    glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position),
    glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position),
    glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position)
);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -p);
view *= translation;
In order to give you a definite answer, I think that we would need the code that shows how you're actually supplying the view matrix and vertices to OpenGL. However, the symptom sounds pretty typical of incorrect matrix order.
Consider some variables:
V represents the inverse of the current orientation of the camera (the quaternion).
T represents the translation matrix holding the position of the camera. This should be an identity matrix with negation of the camera's position going down the fourth column (assuming that we're right-multiplying column vectors).
U represents the inverse of the change in orientation.
p represents a vertex in world space.
Note: all of the matrices are inverse matrices because the transformations will be applied to the vertex, not the camera, but the end result is the same.
By default the OpenGL camera is at the origin looking down the negative-z axis. When the view isn't changing (U==I), then the vertex's transformation from world coordinates to camera coordinates should be: p'=TVp. You first orient the camera (by rotating the world in the opposite direction) and then translate the camera into position (by shifting the world in the opposite direction).
Now there are a few places to put U. If we put U to the right of V, then we get the behavior of a first-person view. When you move the mouse up, whatever is currently in view rotates downward around the camera. When you move the mouse right, whatever is in view rotates to the left around the camera.
If we put U between T and V, then the camera turns relative to the world's axes instead of the camera's. This is strange behavior. If V happens to turn the camera off to the side, then moving the mouse up and down will make the world seem to 'roll' instead of 'pitch' or 'yaw'.
If we put U left of T, then the camera rotates about the world's axes, around the world's origin. This can be even stranger because it makes the camera fly through the world faster the farther the camera is from the origin. However, because the rotation is around the origin, if the camera happens to be looking at the origin, objects there will just appear to be turning around. This is sort of what you're seeing because of the dot products that you're taking to rotate the camera's position.
You check to make sure that pitchAccum stays within [-90,90], but you've commented out the portion that would make use of that fact. This seems odd to me.
The way that you left-multiply pitch but right-multiply yaw makes it so that your quaternions aren't doing much for you. They're just holding your Euler angles. Unless orientation changes are coming in from other places, you could simply say that orientation = glm::gtx::quaternion::angleAxis(pitchAccum*DEG2RAD, 1.0f, 0.0f, 0.0f) * glm::gtx::quaternion::angleAxis(yawAccum*DEG2RAD, 0.0f, 1.0f, 0.0f); and overwrite the old orientation completely.
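Putting that together with the working view construction from EDIT4, a minimal sketch (assuming a radians-based glm::angleAxis; pitchAccum, yawAccum and position are taken to be the camera members from the question) could look like this:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::translate
#include <glm/gtc/quaternion.hpp>       // glm::angleAxis, glm::mat4_cast

// Rebuild the orientation from the accumulated angles each frame,
// then build view = rotation * translation(-position).
glm::mat4 BuildFpsView(float pitchAccum, float yawAccum, const glm::vec3& position) {
    glm::quat orientation =
        glm::angleAxis(glm::radians(pitchAccum), glm::vec3(1.0f, 0.0f, 0.0f)) *
        glm::angleAxis(glm::radians(yawAccum),   glm::vec3(0.0f, 1.0f, 0.0f));

    return glm::mat4_cast(orientation) * glm::translate(glm::mat4(1.0f), -position);
}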
From what I understand in this tutorial, there might be a reason why pitch angle is restricted at 90 degrees.
Regardless of whether you use quaternions or a look-at matrix, in the end we give an initial orientation to the camera. With quaternions, this is the initial value of the orientation; with lookAt, it is the initial value of the up vector.
If the direction the camera is facing is parallel to this initial vector, then their cross product will be zero, which means the camera can have any orientation when the pitch is 90 or -90 degrees.
In the internal implementation of toMat4(orientation) this would result in one of your x_dir/y_dir/z_dir vectors being a zero vector, which would mean that you can have any orientation. This is also discussed in this book, which says that a degree of freedom is lost if the Y angle is 90 degrees (Edward Angel and Dave Shreiner, Interactive Computer Graphics: A Top-Down Approach with WebGL, Seventh Edition, Addison-Wesley, 2015), a problem known as gimbal lock.
I can see that you are aware of this problem, but in your code the pitch angle is still clamped to exactly 90 degrees when it overflows 90, leaving your camera in this degenerate state. You should consider something like this instead:
if (pitchAccum > 89.999f && pitchAccum <= 90.0f)
{
    pitchAccum = 90.001f;
}
else if (pitchAccum < -89.999f && pitchAccum >= -90.0f)
{
    pitchAccum = -90.001f;
}

if (pitchAccum >= 360.0f)
{
    pitchAccum = 0.0f;
}
else if (pitchAccum <= -360.0f)
{
    pitchAccum = 0.0f;
}
Or you can define another custom action of your choice when pitchAccum is 90 degrees.
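One simple custom action (my sketch, not part of the original answer) is to clamp the accumulated pitch strictly inside ±90 degrees so the view direction never becomes parallel to the up vector:
#include <algorithm> // std::clamp, C++17

// Returns the pitch clamped to [-89.9, 89.9] degrees so the forward vector
// never becomes parallel to the up vector.
float ClampPitch(float pitchAccumDegrees) {
    return std::clamp(pitchAccumDegrees, -89.9f, 89.9f);
}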
It does look at the target when I move the target to the right, and it keeps looking at it until the target gets 180 degrees from the -z axis, at which point it decides to go the other way.
Matrix4x4 camera::GetViewMat()
{
    Matrix4x4 oRotate, oView;
    oView.SetIdentity();

    Vector3 lookAtDir = m_targetPosition - m_camPosition;
    Vector3 lookAtHorizontal = Vector3(lookAtDir.GetX(), 0.0f, lookAtDir.GetZ());
    lookAtHorizontal.Normalize();

    float angle = acosf(Vector3(0.0f, 0.0f, -1.0f).Dot(lookAtHorizontal));
    Quaternions horizontalOrient(angle, Vector3(0.0f, 1.0f, 0.0f));

    ori = horizontalOrient;
    ori.Conjugate();
    oRotate = ori.ToMatrix();

    Vector3 inverseTranslate = Vector3(-m_camPosition.GetX(), -m_camPosition.GetY(), -m_camPosition.GetZ());
    oRotate.Transform(inverseTranslate);
    oRotate.Set(0, 3, inverseTranslate.GetX());
    oRotate.Set(1, 3, inverseTranslate.GetY());
    oRotate.Set(2, 3, inverseTranslate.GetZ());

    oView = oRotate;
    return oView;
}
As promised, a bit of code showing the way I'd make a camera look at a specific point in space at all times.
First of all, we'd need a method to construct a quaternion from an angle and an axis. I happen to have that on pastebin; the angle input is in radians:
http://pastebin.com/vLcx4Qqh
Make sure you don't input the axis (0,0,0), which wouldn't make any sense whatsoever.
Now the actual update method. We need to get the quaternion rotating the camera from its default orientation to pointing towards the target point. PLEASE note I just wrote this off the top of my head; it probably needs a little debugging and may need a little optimization, but this should at least give you a push in the right direction.
void camera::update()
{
    // First get the direction from the camera's position to the target point
    vec3 lookAtDir = m_targetPoint - m_position;

    // I'm going to divide the vector into two 'components', the Y axis rotation
    // and the Up/Down rotation, like a regular camera would work.

    // First to calculate the rotation around the Y axis, so we zero out the y
    // component:
    vec3 lookAtHorizontal = vec3(lookAtDir.x, 0.0f, lookAtDir.z).normalize();

    // Get the quaternion from 'default' direction to the horizontal direction
    // In this case, 'default' direction is along the -z axis, like most OpenGL
    // programs. Make sure the projection matrix works according to this.
    float angle = acos(vec3(0.0f, 0.0f, -1.0f).dot(lookAtHorizontal));
    quaternion horizontalOrient(angle, vec3(0.0f, 1.0f, 0.0f));

    // Since we already stripped the Y component, we can simply get the up/down
    // rotation from it as well.
    angle = acos(lookAtDir.normalize().dot(lookAtHorizontal));
    if(angle) horizontalOrient *= quaternion(angle, lookAtDir.cross(lookAtHorizontal));

    // ...
    m_orientation = horizontalOrient;
}
Now to actually take m_orientation and m_position and get the world -> camera matrix
// First invert each element (negate the position and invert the quaternion);
// the position is rotated since the position within a matrix is 'added' last
// to the output vector, so it needs to account for the rotation of the space.
mat3 rotationMatrix = m_orientation.inverse().toMatrix();
vec3 inverseTranslate = rotationMatrix * -m_position; // Note the minus
mat4 matrix = mat4(rotationMatrix); // just means the 3x3 matrix is expanded; the last entry (bottom right of the matrix) is a 1.0f like an identity matrix would be.
// This bit is row-major in my case, you just need to set the translation of the matrix.
matrix[3] = inverseTranslate.x;
matrix[7] = inverseTranslate.y;
matrix[11] = inverseTranslate.z;
EDIT: I think it should be obvious, but just for completeness: .dot() takes the dot product of the vectors and .cross() takes the cross product; the object executing the method is vector A, and the parameter of the method is vector B.
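For comparison (my addition, not part of the answer above), the same "always face the target" orientation and world-to-camera matrix can be built with stock GLM by going through glm::lookAt and extracting the quaternion from its rotation part:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt
#include <glm/gtc/quaternion.hpp>       // glm::quat_cast

// Builds the world -> camera matrix for a camera at `position` looking at
// `target`, and extracts the orientation quaternion from its rotation part.
// `position`, `target` and `worldUp` are assumed inputs, not names from the
// answer above.
void LookAtTarget(const glm::vec3& position, const glm::vec3& target, const glm::vec3& worldUp,
                  glm::mat4& outView, glm::quat& outOrientation) {
    outView = glm::lookAt(position, target, worldUp);    // world -> camera
    outOrientation = glm::quat_cast(glm::mat3(outView)); // rotation part only
}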