I'm trying to implement a camera by myself in OpenGL (I use GLFW and glm).
For now I don't have a class for it; I will create one later. Here is my attempt at coding the camera movements. It works fine with simple mouse movements, but otherwise the camera tilts sideways. I'm still new to OpenGL, so I don't have much to show, but my problem is illustrated here: http://imgur.com/a/p9xXQ
I have a few (global, for now) variables:
float lastX = 0.0f, lastY = 0.0f, yaw = 0.0f, pitch = 0.0f;
glm::vec3 cameraPos(0.0f, 0.0f, 3.0f);
glm::vec3 cameraUp(0.0f, 1.0f, 0.0f); // As a reminder, x points to the right, y points upwards and z points towards you
glm::vec3 cameraFront(0.0f, 0.0f, -1.0f);
With these, I can create a view matrix this way:
glm::mat4 view;
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
I want to be able to rotate my camera horizontally (yaw) and vertically (pitch), i.e. look left, right, up and down on my screen. For this, it should be enough to rotate the cameraFront and cameraUp vectors appropriately and then rebuild the view matrix from the updated vectors.
My cursor position callback, together with the rotation helper it uses, looks like this:
glm::vec3 rotateAroundAxis(glm::vec3 toRotate, float angle, glm::vec3 axisDirection, glm::vec3 axisPoint) { // angle in radians
    toRotate -= axisPoint;
    glm::mat4 rotationMatrix(1.0f);
    rotationMatrix = glm::rotate(rotationMatrix, angle, axisDirection);
    glm::vec4 result = rotationMatrix * glm::vec4(toRotate, 1.0f);
    toRotate = glm::vec3(result.x, result.y, result.z);
    toRotate += axisPoint;
    return toRotate;
}
void mouseCallback(GLFWwindow* window, double xpos, double ypos) {
    const float maxPitch = float(M_PI) - float(M_PI) / 180.0f;
    glm::vec3 cameraRight = -glm::cross(cameraUp, cameraFront);
    float xOffset = xpos - lastX;
    float yOffset = ypos - lastY;
    lastX = xpos;
    lastY = ypos;
    float sensitivity = 0.0005f;
    xOffset *= sensitivity;
    yOffset *= sensitivity;
    yaw += xOffset; // useless here
    pitch += yOffset;
    if (pitch > maxPitch) {
        yOffset = 0.0f;
    }
    if (pitch < -maxPitch) {
        yOffset = 0.0f;
    }
    cameraFront = rotateAroundAxis(cameraFront, -xOffset, cameraUp, cameraPos);
    cameraFront = rotateAroundAxis(cameraFront, -yOffset, cameraRight, cameraPos);
    cameraUp = rotateAroundAxis(cameraUp, -yOffset, cameraRight, cameraPos);
}
As I said, it works fine for simple up-down, left-right camera movements, but when I start to move my mouse in circles or like a madman, the camera starts to rotate longitudinally (roll).
I've tried to force cameraRight.y = cameraPos.y so that the cameraRight vector doesn't tilt upwards/downwards due to numerical errors, but it doesn't solve the problem. I've also tried adding a (global) cameraRight vector to keep track of it, instead of computing it every time, so that the end of the function looks like this:
cameraFront = rotateAroundAxis(cameraFront, -xOffset, cameraUp, cameraPos);
cameraRight = rotateAroundAxis(cameraRight, -xOffset, cameraUp, cameraPos);
cameraFront = rotateAroundAxis(cameraFront, -yOffset, cameraRight, cameraPos);
cameraUp = rotateAroundAxis(cameraUp, -yOffset, cameraRight, cameraPos);
but it doesn't solve the problem. Any advice?
It seems you have the global X axis pointing to the right, the Y axis going into the screen, and the Z axis going up, and your local camera axis system is similar.
The desired behaviour is to rotate the camera about its current position: a left-right mouse movement is a rotation around the global Z axis, and an up-down mouse movement is a rotation around the local X axis. Think about these rotations until you understand them well, and why one is around a global direction but the other around a local one. Imagining a security camera and its movements helps to visualize the axis systems and rotations.
The goal is to get the parameters that define the view transformation passed to the lookAt function.
First, rotate around the local X axis. We convert this local axis into the global system by inverting the current view matrix (what you call view):
glm::vec3 currGlobalX = glm::normalize(glm::vec3(glm::inverse(view) * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f)));
We need to rotate not only the cameraUp vector, but also the current target defined in global coordinates, which is what you compute as cameraPos + cameraFront (call it currentTarget):
cameraUp = rotateAroundAxis(cameraUp, -yOffset, currGlobalX, glm::vec3(0.0f, 0.0f, 0.0f)); // a direction vector, no translation needed
cameraUp = glm::normalize(cameraUp);
currentTarget = rotateAroundAxis(currentTarget, -yOffset, currGlobalX, cameraPos); // a point, translation needed
Now rotate around the global Z axis:
cameraUp = rotateAroundAxis(cameraUp, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, 0.0f, 0.0f)); // a direction vector, no translation needed
cameraUp = glm::normalize(cameraUp);
currentTarget = rotateAroundAxis(currentTarget, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), cameraPos); // a point, translation needed
Finally, update view:
view = glm::lookAt(cameraPos, currentTarget, cameraUp);
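Putting those pieces together, a minimal sketch of the whole callback could look like this (assuming the Z-up convention above, that currentTarget starts out as cameraPos + cameraFront, and that rotateAroundAxis and the globals from the question are available):
// Sketch only: yaw around the global Z axis, pitch around the camera's local X axis.
void mouseCallback(GLFWwindow* window, double xpos, double ypos) {
    float xOffset = float(xpos - lastX) * 0.0005f;
    float yOffset = float(ypos - lastY) * 0.0005f;
    lastX = xpos;
    lastY = ypos;
    // Local X axis of the camera, expressed in global coordinates.
    glm::vec3 currGlobalX = glm::normalize(glm::vec3(glm::inverse(view) * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f)));
    // Pitch: rotate the up vector (a direction) and the target (a point) around local X.
    cameraUp = glm::normalize(rotateAroundAxis(cameraUp, -yOffset, currGlobalX, glm::vec3(0.0f)));
    currentTarget = rotateAroundAxis(currentTarget, -yOffset, currGlobalX, cameraPos);
    // Yaw: rotate both around the global Z axis.
    cameraUp = glm::normalize(rotateAroundAxis(cameraUp, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f)));
    currentTarget = rotateAroundAxis(currentTarget, -xOffset, glm::vec3(0.0f, 0.0f, 1.0f), cameraPos);
    view = glm::lookAt(cameraPos, currentTarget, cameraUp);
}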
Related
When the camera is moved around, why are my starting rays still stuck at the origin (0, 0, 0) even though the camera position has been updated?
It works fine if I start the program with my camera position at the default (0, 0, 0). But once I move my camera, for instance pan to the right, and click some more, the lines still come from (0, 0, 0) when they should start from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates then normalize
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;
// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;
// perspective divide
startRay /= startRay.w;
endRay /= endRay.w;
glm::vec3 direction = glm::vec3(endRay - startRay);
// start the ray points from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = glm::vec3(startPos + direction * someLength);
In the first screenshot I click some rays; in the second I move my camera to the right and click some more, but the initial starting rays are still at (0, 0, 0). What I'm looking for is for the rays to come out from wherever the camera position is, as in the third image, i.e. the red rays. Sorry for the confusion: the red lines are supposed to shoot out into the distance, not up.
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, adapted from code from Scratchapixel and learnopengl:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip we would multiply by projection so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // convert point to forwards
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is usually multiply by view so
    // eye space to world space is inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
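As a usage sketch (assuming the window, proj, view and camera objects from the question, and a hypothetical drawLine call), the segment could be built like this:
// Sketch: build a world-space ray segment from the current cursor position.
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
glm::vec3 dir = rayCast(xpos, ypos, proj, view);  // normalized world-space direction
glm::vec3 startPos = camera.GetPosition();        // the ray starts at the camera, not at the origin
glm::vec3 endPos = startPos + dir * 100.0f;       // pick any length you like
// drawLine(startPos, endPos);                    // hypothetical draw call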
I'm trying to implement a bullet, so I have this free-movement first-person camera. I got this camera from learnopengl.com; this is the code:
// Movement directions used by ProcessKeyboard (from the learnopengl camera header, added here so the snippet compiles)
enum Camera_Movement { FORWARD, BACKWARD, LEFT, RIGHT };

// Default camera values
const float YAW = -90.0f;
const float PITCH = 0.0f;
const float SPEED = 2.5f;
const float SENSITIVITY = 0.1f;
const float ZOOM = 45.0f;

// An abstract camera class that processes input and calculates the corresponding Euler Angles, Vectors and Matrices for use in OpenGL
class Camera
{
public:
    // Camera Attributes
    glm::vec3 Position;
    glm::vec3 Front;
    glm::vec3 Up;
    glm::vec3 Right;
    glm::vec3 WorldUp;
    // Euler Angles
    float Yaw;
    float Pitch;
    // Camera options
    float MovementSpeed;
    float MouseSensitivity;
    float Zoom;

    // Constructor with vectors
    Camera(glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f), float yaw = YAW, float pitch = PITCH) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
    {
        Position = position;
        WorldUp = up;
        Yaw = yaw;
        Pitch = pitch;
        updateCameraVectors();
    }
    // Constructor with scalar values
    Camera(float posX, float posY, float posZ, float upX, float upY, float upZ, float yaw, float pitch) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
    {
        Position = glm::vec3(posX, posY, posZ);
        WorldUp = glm::vec3(upX, upY, upZ);
        Yaw = yaw;
        Pitch = pitch;
        updateCameraVectors();
    }
    // Returns the view matrix calculated using Euler Angles and the LookAt Matrix
    glm::mat4 GetViewMatrix()
    {
        return glm::lookAt(Position, Position + Front, Up);
    }
    // Processes input received from any keyboard-like input system. Accepts input parameter in the form of camera defined ENUM (to abstract it from windowing systems)
    void ProcessKeyboard(Camera_Movement direction, float deltaTime)
    {
        float velocity = MovementSpeed * deltaTime;
        if (direction == FORWARD)
            Position += Front * velocity;
        if (direction == BACKWARD)
            Position -= Front * velocity;
        if (direction == LEFT)
            Position -= Right * velocity;
        if (direction == RIGHT)
            Position += Right * velocity;
    }
    // Processes input received from a mouse input system. Expects the offset value in both the x and y direction.
    void ProcessMouseMovement(float xoffset, float yoffset, GLboolean constrainPitch = true)
    {
        xoffset *= MouseSensitivity;
        yoffset *= MouseSensitivity;
        Yaw += xoffset;
        Pitch += yoffset;
        // Make sure that when pitch is out of bounds, screen doesn't get flipped
        if (constrainPitch)
        {
            if (Pitch > 89.0f)
                Pitch = 89.0f;
            if (Pitch < -89.0f)
                Pitch = -89.0f;
        }
        // Update Front, Right and Up Vectors using the updated Euler angles
        updateCameraVectors();
    }
    // Processes input received from a mouse scroll-wheel event. Only requires input on the vertical wheel-axis
    void ProcessMouseScroll(float yoffset)
    {
        if (Zoom >= 1.0f && Zoom <= 45.0f)
            Zoom -= yoffset;
        if (Zoom <= 1.0f)
            Zoom = 1.0f;
        if (Zoom >= 45.0f)
            Zoom = 45.0f;
    }
private:
    // Calculates the front vector from the Camera's (updated) Euler Angles
    void updateCameraVectors()
    {
        // Calculate the new Front vector
        glm::vec3 front;
        front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
        front.y = sin(glm::radians(Pitch));
        front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
        Front = glm::normalize(front);
        // Also re-calculate the Right and Up vector
        Right = glm::normalize(glm::cross(Front, WorldUp)); // Normalize the vectors, because their length gets closer to 0 the more you look up or down which results in slower movement.
        Up = glm::normalize(glm::cross(Right, Front));
    }
};
So now I want to create a bullet that starts from
model = glm::translate(model, camara.Position+7.0f*camara.Front);
The issue is that as I move the camera, the object rotates with it. I know why, but I don't know how to fix it. I have tried something like this:
model = glm::rotate(model, glm::radians(camara.Pitch), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, -glm::radians(camara.Yaw), glm::vec3(0.0f, 1.0f, 0.0f));
trying to sync the rotations but it's not working.
I want to store the position because then I want the bullets to go straight no matter where I move. Thank you.
(Screenshots: this is how I always want it to look, vs. how it rotates as I move.)
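One way to get that behaviour (a sketch only, with a hypothetical Bullet struct; the names are illustrative, not from the original code) is to copy the camera's position and front vector once, at the moment of firing, and never read the camera again while the bullet is in flight:
// Sketch: freeze the spawn position and direction at fire time.
struct Bullet {
    glm::vec3 position;
    glm::vec3 direction; // fixed at fire time, independent of later camera motion
};

Bullet fire(const Camera& camara) {
    Bullet b;
    b.position = camara.Position + 7.0f * camara.Front; // same offset as in the question
    b.direction = glm::normalize(camara.Front);
    return b;
}

// Each frame: move along the stored direction and build the model matrix from it.
void updateBullet(Bullet& b, float deltaTime, float speed, glm::mat4& model) {
    b.position += b.direction * speed * deltaTime;
    model = glm::translate(glm::mat4(1.0f), b.position); // no camera-dependent rotation needed
}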
I am having trouble understanding how to translate the camera. I can already rotate the camera successfully, but I am still confused about translating it. I include the code for rotating the camera, since both translating and rotating use the lookAt function. The homework says that translating the camera means both the eye and the center should be moved by the same amount. I understand I can change the parameters of the lookAt function to implement this.
The definition of the lookAt function is below:
Lookat(cameraPos, center, up)
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 10.0f);
glm::vec3 center(0.0f, 0.0f, 0.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
modelViewProjectionMatrix.Perspective(glm::radians(fov), float(width) / float(height), 0.1f, 100.0f);
modelViewProjectionMatrix.LookAt(cameraPos, center, cameraUp);
void CursorPositionCallback(GLFWwindow* lWindow, double xpos, double ypos)
{
    int state = glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT);
    if (state == GLFW_PRESS)
    {
        if (firstMouse)
        {
            lastX = xpos;
            lastY = ypos;
            firstMouse = false;
        }
        float xoffset = xpos - lastX;
        float yoffset = lastY - ypos;
        lastX = xpos;
        lastY = ypos;
        yaw += xoffset;
        pitch += yoffset;
        glm::vec3 front;
        front.x = center[0] + 5.0f * cos(glm::radians(yaw)) * cos(glm::radians(pitch));
        front.y = center[1] + 5.0f * sin(glm::radians(pitch));
        front.z = center[1] + 5.0f * sin(glm::radians(yaw)) * cos(glm::radians(pitch));
        cameraPos = front;
    }
}
If you want to translate the camera by an offset, then you have to add the same vector (glm::vec3 offset) to the camera position (cameraPos) and the camera target (center):
center = center + offset;
cameraPos = cameraPos + offset;
When you calculate a new target of the camera (center) from a pitch and yaw angle, then you have to update the up vector (cameraUp) of the camera, too:
glm::vec3 front(
    cos(glm::radians(pitch)) * cos(glm::radians(yaw)),
    sin(glm::radians(pitch)),
    cos(glm::radians(pitch)) * sin(glm::radians(yaw))
);
glm::vec3 up(
    -sin(glm::radians(pitch)) * cos(glm::radians(yaw)),
    cos(glm::radians(pitch)),
    -sin(glm::radians(pitch)) * sin(glm::radians(yaw))
);
cameraPos = center + front * 5.0f;
cameraUp = up;
To translate the camera along the x axis (from left to right) in view space, you have to calculate the right vector as the cross product of the vector to the target (front) and the up vector (cameraUp or up):
glm::vec3 right = glm::cross(front, up);
The y axis (from bottom to top) in view space is the up vector.
To translate by the scalars (float trans_x) and (float trans_y), the scaled right and up vectors have to be added to the camera position (cameraPos) and the camera target (center):
center = center + right * trans_x + up * trans_y;
cameraPos = cameraPos + right * trans_x + up * trans_y;
Use the manipulated vectors to set the view matrix:
modelViewProjectionMatrix.LookAt(cameraPos, center, cameraUp);
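Pulling those pieces together, a small helper could look roughly like this (a sketch; it assumes front and up from the rotation step are still available, e.g. as globals, alongside cameraPos, center, cameraUp and the modelViewProjectionMatrix wrapper):
// Sketch: pan the camera in view space by (trans_x, trans_y).
void panCamera(float trans_x, float trans_y)
{
    glm::vec3 right = glm::normalize(glm::cross(front, up)); // view-space x axis
    glm::vec3 offset = right * trans_x + glm::normalize(up) * trans_y;
    center = center + offset;        // move the target ...
    cameraPos = cameraPos + offset;  // ... and the eye by the same amount
    modelViewProjectionMatrix.LookAt(cameraPos, center, cameraUp);
}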
My goal is to navigate the viewport using the mouse.
Every frame that the mouse moves, I recalculate the cameraFront and cameraUp vectors and finally the view matrix. The problem is that the view matrix sometimes creates a rotation around the z axis (roll), which I don't expect.
I am not sure what I am doing wrong.
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }
    float xoffset = xpos - lastX;
    float yoffset = ypos - lastY;
    lastX = xpos;
    lastY = ypos;
    float sensitivity = 0.05;
    xoffset *= sensitivity;
    yoffset *= sensitivity;
    glm::quat rotY = glm::angleAxis(glm::radians(xoffset), cameraUp);
    cameraFront = glm::normalize(rotY * cameraFront);
    glm::vec3 rightAxis = glm::cross(cameraUp, cameraFront);
    glm::quat rotX = glm::angleAxis(glm::radians(yoffset), rightAxis);
    cameraFront = glm::normalize(rotX * cameraFront);
    cameraUp = glm::normalize(glm::cross(cameraFront, rightAxis));
}
In the while loop I recalculate the view matrix:
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
I am learning OpenGL from the tutorial, which shows an example of how to navigate in the scene, but I am trying to do it differently.
Can anyone see my mistake?
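For what it's worth, one common way to keep roll from accumulating (a sketch, not the only possible fix) is to yaw around a fixed world up axis instead of the drifting cameraUp, and to rebuild cameraUp from cross products every frame:
// Sketch: same structure as the callback above, but yaw uses the fixed world up
// and cameraUp is rebuilt from cross products, so no roll can accumulate.
// (Flip the signs of xoffset/yoffset if the controls feel inverted.)
const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
glm::quat rotY = glm::angleAxis(glm::radians(xoffset), worldUp);
cameraFront = glm::normalize(rotY * cameraFront);
glm::vec3 rightAxis = glm::normalize(glm::cross(cameraFront, worldUp));
glm::quat rotX = glm::angleAxis(glm::radians(yoffset), rightAxis);
cameraFront = glm::normalize(rotX * cameraFront);
cameraUp = glm::normalize(glm::cross(rightAxis, cameraFront));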
I'm currently trying to rotate the camera around its local axis based on keyboard/mouse input. The code I currently have uses DirectXMath and works nicely; however, it rotates around the world axis rather than the camera's local axis. Because of this, some of the rotations are not as expected and cause issues as the camera rotates. For example, when we tilt the camera, the Y axis changes, and we then want to rotate around a different axis to get the expected results.
What am I doing wrong in the code, or what do I need to change, in order to rotate around the camera's local axis?
Here, vector.x, vector.y, vector.z is the vector to rotate around, e.g. (1.0f, 0.0f, 0.0f).
//define our camera matrix
XMFLOAT4X4 cameraMatrix;
//position, lookat, up values for the camera
XMFLOAT3 position;
XMFLOAT3 up;
XMFLOAT3 lookat;
void Camera::rotate(XMFLOAT3 vector, float theta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //set our view quaternion to our current camera's lookat position
    XMVECTOR viewQuaternion = XMQuaternionIdentity();
    viewQuaternion = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f);
    //set the rotation vector based on our parameter, i.e (1.0f, 0.0f, 0.0f)
    //to rotate around the x axis
    XMVECTOR rotationVector = XMVectorSet(vector.x, vector.y, vector.z, 0.0f);
    //create a rotation quaternion to rotate around our vector, with a specified angle, theta
    XMVECTOR rotationQuaternion = XMVectorSet(
        XMVectorGetX(rotationVector) * sin(theta / 2),
        XMVectorGetY(rotationVector) * sin(theta / 2),
        XMVectorGetZ(rotationVector) * sin(theta / 2),
        cos(theta / 2));
    //get our rotation quaternion inverse
    XMVECTOR rotationInverse = XMQuaternionInverse(rotationQuaternion);
    //new view quaternion = [ newView = ROTATION * VIEW * INVERSE ROTATION ]
    //multiply our rotation quaternion with our view quaternion
    XMVECTOR newViewQuaternion = XMQuaternionMultiply(rotationQuaternion, viewQuaternion);
    //multiply the result of our calculation above with the inverse rotation
    //to get our new view values
    newViewQuaternion = XMQuaternionMultiply(newViewQuaternion, rotationInverse);
    //take the new lookat values from our newViewQuaternion and put them into the camera
    lookat = XMFLOAT3(XMVectorGetX(newViewQuaternion), XMVectorGetY(newViewQuaternion), XMVectorGetZ(newViewQuaternion));
    //build our camera matrix using XMMatrixLookAtLH
    XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
        XMVectorSet(position.x, position.y, position.z, 0.0f),
        XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
        XMVectorSet(up.x, up.y, up.z, 0.0f)));
}
The view matrix is then set:
//store our camera's matrix inside the view matrix
XMStoreFloat4x4(&_view, camera->getCameraMatrix() );
Edit:
I have tried an alternative solution without using quaternions, and it seems I can get the camera to rotate correctly around its own axis; however, the camera's lookat values now never change, and after I stop using the mouse/keyboard, it snaps back to its original position.
void Camera::update(float delta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //do we have a rotation?
    //this is set as we try to rotate, around a current axis such as
    //(1.0f, 0.0f, 0.0f)
    if (rotationVector.x != 0.0f || rotationVector.y != 0.0f || rotationVector.z != 0.0f) {
        //yes, we have an axis to rotate around
        //create our axis vector to rotate around
        XMVECTOR axisVector = XMVectorSet(rotationVector.x, rotationVector.y, rotationVector.z, 0.0f);
        //create our rotation matrix using XMMatrixRotationAxis, and rotate around this axis with a specified angle theta
        XMMATRIX rotationMatrix = XMMatrixRotationAxis(axisVector, 2.0 * delta);
        //create our camera's view matrix
        XMMATRIX viewMatrix = XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f));
        //multiply our camera's view matrix by the rotation matrix
        //make sure the rotation is on the right to ensure local axis rotation
        XMMATRIX finalCameraMatrix = viewMatrix * rotationMatrix;
        /* this piece of code allows the camera to correctly rotate and it doesn't
           snap back to its original position, as the lookat coordinates are being set
           each time. However, this will make the camera rotate around the world axis
           rather than the local axis. Which brings us to the same problem we had
           with the quaternion rotation */
        //XMVECTOR look = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0);
        //XMVECTOR finalLook = XMVector3Transform(look, rotationMatrix);
        //lookat.x = XMVectorGetX(finalLook);
        //lookat.y = XMVectorGetY(finalLook);
        //lookat.z = XMVectorGetZ(finalLook);
        //finally store the finalCameraMatrix into our camera matrix
        XMStoreFloat4x4(&cameraMatrix, finalCameraMatrix);
    } else {
        //no rotation, don't apply the rotation matrix
        XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f)));
    }
}
An example can be seen here: https://i.gyazo.com/f83204389551eff427446e06624b2cf9.mp4
I think I am missing setting the actual lookat value to the new lookat value, but I'm not sure how to calculate the new value or extract it from the new view matrix (which I have already tried).
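For reference, a sketch of one way to keep the rotation local and persistent (a hypothetical rotateLocal member, not taken from the original code): take the axes from the camera's current orientation, rotate the stored look direction and up vector, and write the results back to the members so they survive to the next frame.
//Sketch only: rotate the look direction and up vector around the camera's
//*local* right/up axes (expressed in world space), then persist the result.
void Camera::rotateLocal(float pitchDelta, float yawDelta) {
    XMVECTOR pos = XMLoadFloat3(&position);
    XMVECTOR look = XMVector3Normalize(XMVectorSubtract(XMLoadFloat3(&lookat), pos));
    XMVECTOR upV = XMVector3Normalize(XMLoadFloat3(&up));
    //local right axis in world space (left-handed: up x look)
    XMVECTOR right = XMVector3Normalize(XMVector3Cross(upV, look));
    //pitch around the local right axis, then yaw around the local up axis
    XMMATRIX rotation = XMMatrixRotationAxis(right, pitchDelta) * XMMatrixRotationAxis(upV, yawDelta);
    look = XMVector3Normalize(XMVector3TransformNormal(look, rotation));
    upV = XMVector3Normalize(XMVector3TransformNormal(upV, rotation));
    //persist the new orientation so the camera doesn't snap back next frame
    XMStoreFloat3(&lookat, XMVectorAdd(pos, look));
    XMStoreFloat3(&up, upV);
    XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(pos, XMVectorAdd(pos, look), upV));
}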