I've been working on a 3D camera in OpenGL using C++.
When I look around with the camera, it sometimes picks up unexpected roll, especially when I move the mouse in circles.
I suspect a floating-point error, but I don't know how to detect it.
Here is the camera class:
#ifndef CAMERA_H
#define CAMERA_H

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/rotate_vector.hpp>
#include <glm/gtx/euler_angles.hpp>
#include <glm/gtx/string_cast.hpp>
#include <iostream>

using glm::vec3;
using glm::mat4;
using glm::quat;

enum CamDirection {
    CAM_FORWARD,
    CAM_BACKWARD,
    CAM_LEFT,
    CAM_RIGHT
};

class Camera {
public:
    void cameraUpdate();
    mat4 getViewMatrix();
    Camera();
    Camera(vec3 startPosition);
    void move(CamDirection dir, GLfloat deltaTime);
    void look(double xOffset, double yOffset);
    void update();
private:
    mat4 viewMatrix;
    const GLfloat camSpeed = 5.05f;
};

mat4 Camera::getViewMatrix() {
    return viewMatrix;
}

Camera::Camera() {}

Camera::Camera(vec3 startPos):
    viewMatrix(glm::lookAt(startPos, vec3(0.0f, 0.0f, 0.0f), vec3(0.0f, 1.0f, 0.0f)))
{}

void Camera::move(CamDirection dir, GLfloat deltaTime) {
    mat4 trans;
    const vec3 camForward = vec3(viewMatrix[0][2], viewMatrix[1][2], viewMatrix[2][2]);
    const vec3 camRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
    if (dir == CAM_FORWARD)
        trans = glm::translate(trans, (camSpeed * deltaTime) * camForward);
    else if (dir == CAM_BACKWARD)
        trans = glm::translate(trans, -1 * (camSpeed * deltaTime) * camForward);
    else if (dir == CAM_RIGHT)
        trans = glm::translate(trans, -1 * (camSpeed * deltaTime) * camRight);
    else
        trans = glm::translate(trans, (camSpeed * deltaTime) * camRight);
    viewMatrix *= trans;
}

void Camera::look(double xOffset, double yOffset) {
    // 2 * acos(q[3])
    quat rotation = glm::angleAxis((GLfloat)xOffset, vec3( 0.0f, 1.0f, 0.0f));
    viewMatrix = glm::mat4_cast(rotation) * viewMatrix;
    rotation = glm::angleAxis((GLfloat)yOffset, vec3(-1.0f, 0.0f, 0.0f));
    mat4 rotMatrix = glm::mat4_cast(rotation);
    viewMatrix = rotMatrix * viewMatrix;
}

void Camera::update() {
}

#endif // CAMERA_H
I managed to figure it out, although I had to completely rewrite the camera to do it.
My problem was on these lines:
quat rotation = glm::angleAxis((GLfloat)xOffset, vec3( 0.0f, 1.0f, 0.0f));
viewMatrix = glm::mat4_cast(rotation) * viewMatrix;
rotation = glm::angleAxis((GLfloat)yOffset, vec3(-1.0f, 0.0f, 0.0f));
mat4 rotMatrix = glm::mat4_cast(rotation);
Building an intermediate quaternion to store orientation worked instead, and I could replace the look method with this:
quat pitch = quat(vec3(-yOffset, 0.0f, 0.0f));
quat yaw = quat(vec3(0.f, xOffset, 0.f));
orientation = pitch * orientation * yaw;
By multiplying the orientation this way on the last line (pitch on the left, yaw on the right), the pitch is applied in the camera's local frame while the yaw stays in the world frame, so no unintended roll can accumulate.
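For reference, here is a minimal sketch of how the rewritten pieces can fit together; the orientation and position members (and the normalize call) are assumptions on my part, since the full rewrite isn't shown:

quat orientation;  // accumulated camera rotation, starting from the identity quaternion
vec3 position;     // camera position in world space

void Camera::look(double xOffset, double yOffset) {
    quat pitch = quat(vec3((GLfloat)-yOffset, 0.0f, 0.0f));
    quat yaw   = quat(vec3(0.0f, (GLfloat)xOffset, 0.0f));
    // pitch on the left (local frame), yaw on the right (world frame)
    orientation = glm::normalize(pitch * orientation * yaw);
}

mat4 Camera::getViewMatrix() {
    // rotation from the quaternion, then translation by the negated position
    return glm::mat4_cast(orientation) * glm::translate(mat4(1.0f), -position);
}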
There are two problems in that code:
First, if xOffset and yOffset are just screen-pixel differences (obtained from mouse positions), you MUST apply a factor that converts them to angles. There are better ways; for example, form two vectors from the center of the window to the mouse positions (previous and current) and calculate the angle between them via the dot product. Depending on your GLM configuration (older versions defaulted to degrees, newer ones use radians), an unscaled xOffset may be a huge angle, not a smooth rotation.
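As a sketch of this first point, assuming the raw pixel deltas are called xOffsetPixels and yOffsetPixels (hypothetical names) and GLM is in radians mode:

const float sensitivity = 0.005f; // radians per pixel of mouse movement; tune to taste
float xAngle = (float)xOffsetPixels * sensitivity;
float yAngle = (float)yOffsetPixels * sensitivity;
camera.look(xAngle, yAngle);      // now the offsets really are angles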
Second, accumulating rotations with newViewMatrix = thisMouseRotation * oldViewMatrix degenerates the matrix after enough movements. This is due to the limited precision of computer number representations: e.g. 10 / 3 = 3.333, but 3.333 * 3 = 9.999 != 10.
Solutions:
A) Store the rotation in a quaternion. Initialize it once and update it for every rotation: newQuater = thisMoveQuater * oldQuater. From time to time, "normalize" the quaternion to minimize precision issues. The view matrix is then calculated as viewMatrix = Mat4x4FromQuaternion * translationMatrix, so we avoid the previous viewMatrix and its accumulated error.
B) Accumulate the angles of rotation around each of the X, Y, Z axes, and rebuild the rotation matrix from those accumulated angles each time it's needed. Perhaps you also clamp each angle to a step of something like 0.2 degrees; this way the user can reach exactly the same position as several rotations before.
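A minimal sketch of option A, assuming an accumulated orientation quaternion, a per-move moveQuat, and a cameraPos vector (all hypothetical names):

// accumulate this frame's rotation, renormalizing to fight floating-point drift
orientation = glm::normalize(moveQuat * orientation);
// rebuild the view matrix from scratch: rotation from the quaternion, then the translation
glm::mat4 viewMatrix = glm::mat4_cast(orientation)
                     * glm::translate(glm::mat4(1.0f), -cameraPos);

Option B is the same idea with plain angle accumulators, e.g. rebuilding the matrix with glm::yawPitchRoll(yawAngle, pitchAngle, 0.0f) from glm/gtx/euler_angles.hpp whenever it's needed.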
Related
When the camera is moved around, why are my starting rays still stuck at the origin (0, 0, 0), even though the camera position has been updated?
It works fine if I start the program with my camera position at the default (0, 0, 0). But once I move my camera, for instance pan to the right, and click some more, the lines still come from (0, 0, 0) when they should start from wherever the camera is. Am I doing something terribly wrong? I've checked to make sure they're being updated in the main loop. I've used the code snippet below, referenced from:
picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone
// 1. Get mouse coordinates then normalize
float x = (2.0f * lastX) / width - 1.0f;
float y = 1.0f - (2.0f * lastY) / height;
// 2. Move from clip space to world space
glm::mat4 inverseWorldMatrix = glm::inverse(proj * view);
glm::vec4 near_vec = glm::vec4(x, y, -1.0f, 1.0f);
glm::vec4 far_vec = glm::vec4(x, y, 1.0f, 1.0f);
glm::vec4 startRay = inverseWorldMatrix * near_vec;
glm::vec4 endRay = inverseWorldMatrix * far_vec;
// perspective divide
startRay /= startRay.w;
endRay /= endRay.w;
glm::vec3 direction = glm::vec3(endRay - startRay);
// start the ray points from the camera position
glm::vec3 startPos = glm::vec3(camera.GetPosition());
glm::vec3 endPos = glm::vec3(startPos + direction * someLength);
In the first screenshot I click some rays; in the second I move my camera to the right and click some more, but the initial starting rays are still at (0, 0, 0). What I'm looking for is for the rays to come out of wherever the camera is, i.e. the red rays in the third image. Sorry for the confusion: the red lines are supposed to shoot out into the distance, not up.
// and these are my matrices
// projection
glm::mat4 proj = glm::perspective(glm::radians(camera.GetFov()), (float)width / height, 0.1f, 100.0f);
// view
glm::mat4 view = camera.GetViewMatrix(); // This returns glm::lookAt(this->Position, this->Position + this->Front, this->Up);
// model
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f));
It's hard to tell where in the code the problem lies. But I use this function for ray casting, adapted from code from scratch-a-pixel and learnopengl:
vec3 rayCast(double xpos, double ypos, mat4 projection, mat4 view) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip space would be a multiply by projection, so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // convert the point to a forward direction
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is usually a multiply by view, so
    // eye space to world space is the inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
where you can draw your line with startPos = camera.Position and endPos = camera.Position + rayCast(...) * scalar_amount.
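For example, a hypothetical usage, where mouseX, mouseY, proj, and view are assumed from the snippets above:

glm::vec3 rayDir = rayCast(mouseX, mouseY, proj, view); // normalized world-space direction
glm::vec3 startPos = glm::vec3(camera.GetPosition());   // the ray starts at the camera
glm::vec3 endPos = startPos + rayDir * 100.0f;          // and extends into the scene
// draw your line from startPos to endPos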
I create a cube as normal, using 8 vertex points that outline a cube, and use indices to draw each individual triangle. However, when I create my camera matrix and rotate it using GLM's lookAt function, it rotates the screen positions, not the world positions.
glm::mat4 Projection = glm::mat4(1);
Projection = glm::perspective(glm::radians(60.0f), (float)window_width / (float)window_height, 0.1f, 100.0f);

const float radius = 10.0f;
float camX = sin(glfwGetTime()) * radius;
float camZ = cos(glfwGetTime()) * radius;

glm::mat4 View = glm::mat4(1);
View = glm::lookAt(
    glm::vec3(camX, 0, camZ),
    glm::vec3(0, 0, 0),
    glm::vec3(0, 1, 0)
);

glm::mat4 Model = glm::mat4(1);
glm::mat4 mvp = Projection * View * Model;
Then in GLSL:
uniform mat4 camera_mat4;

void main()
{
    vec4 pos = vec4(vertexPosition_modelspace, 1.0) * camera_mat4;
    gl_Position.xyzw = pos;
}
Example: GLM rotating screen coordinates not cube
How can I rotate the camera left around the scene? I need to rotate the "eye" vector around the "up" vector!?
void Transform::left(float degrees, vec3& eye, vec3& up) {
    float c = cosf(degrees * (pi / 180));
    float s = sinf(degrees * (pi / 180));
}

mat4 Transform::lookAt(vec3 eye, vec3 up) {
    glm::mat4 view = glm::lookAt(
        glm::vec3(eye.x, eye.y, eye.z),
        glm::vec3(0.0f, 0.0f, 0.0f),
        glm::vec3(up.x, up.y, up.z)
    );
    return view;
}
Calculate a rotated eye vector by applying the rotation to the original, unrotated eye vector, and pass that into the lookAt function.
float c = cosf(angle * (pi / 180));
float s = sinf(angle * (pi / 180));
// rotate the eye position around the Y (up) axis
glm::vec3 rotatedEye = glm::vec3(eye.x * c + eye.z * s, eye.y, -eye.x * s + eye.z * c);
glm::mat4 view = lookAt(rotatedEye, up);
Note that each time your camera vectors change you will need to calculate a new view matrix.
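If the up vector is not the world Y axis, the same idea can be written with GLM's vector-rotation helper; a sketch, assuming glm/gtx/rotate_vector.hpp is available and the angle is in degrees as above:

#include <glm/gtx/rotate_vector.hpp>

// rotate the eye position around an arbitrary (normalized) up axis
glm::vec3 rotatedEye = glm::rotate(eye, glm::radians(angle), glm::normalize(up));
glm::mat4 view = lookAt(rotatedEye, up);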
I am trying to implement a Camera class so I can walk around and look at the world, as follows:
#ifndef _CAMERA_H_
#define _CAMERA_H_

#include <glm/glm.hpp>

class Camera
{
public:
    Camera();
    ~Camera();
    void Update(const glm::vec2& newXY);
    // if by == 0.0, the const class speed is used to scale the movement
    void MoveForward(const float by = 0.0f);
    void MoveBackward(const float by = 0.0f);
    void MoveLeft(const float by = 0.0f);
    void MoveRight(const float by = 0.0f);
    void MoveUp(const float by = 0.0f);
    void MoveDown(const float by = 0.0f);
    void Speed(const float speed = 0.0f);
    glm::vec3& GetCurrentPosition();
    glm::vec3& GetCurrentDirection();
    glm::mat4 GetWorldToView() const;
private:
    glm::vec3 position, viewDirection, strafeDir;
    glm::vec2 oldXY;
    float speed;
    const glm::vec3 up;
};

#endif
#include "Camera.h"
#include <glm\gtx\transform.hpp>
Camera::Camera()
    : up(0.0f, 1.0f, 0.0f), viewDirection(0.0f, 0.0f, -1.0f),
      speed(0.1f)
{
}
Camera::~Camera()
{
}
void Camera::Update(const glm::vec2& newXY)
{
    glm::vec2 delta = newXY - oldXY;
    // ignore large jumps, e.g. the first update or the cursor re-entering the window
    if (glm::length(delta) < 50.f)
    {
        strafeDir = glm::cross(viewDirection, up);
        glm::mat4 rotation = glm::rotate(-delta.x * speed, up) *
                             glm::rotate(-delta.y * speed, strafeDir);
        viewDirection = glm::mat3(rotation) * viewDirection;
    }
    oldXY = newXY;
}
void Camera::Speed(const float speed)
{
    this->speed = speed;
}
void Camera::MoveForward(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += s * viewDirection;
}

void Camera::MoveBackward(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * viewDirection;
}

void Camera::MoveLeft(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * strafeDir;
}

void Camera::MoveRight(const float by)
{
    float s = by == 0.0f ? speed : by;
    // note the positive sign: strafeDir = cross(viewDirection, up) already points right
    position += s * strafeDir;
}

void Camera::MoveUp(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += s * up;
}

void Camera::MoveDown(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * up;
}
glm::vec3& Camera::GetCurrentPosition()
{
    return position;
}

glm::vec3& Camera::GetCurrentDirection()
{
    return viewDirection;
}

glm::mat4 Camera::GetWorldToView() const
{
    return glm::lookAt(position, position + viewDirection, up);
}
and I update and render as follows:
void Game::OnUpdate()
{
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUniformMatrix4fv(program->GetUniformLocation("modelToViewWorld"), 1, GL_FALSE, &cam.GetWorldToView()[0][0]);
}

void Game::OnRender()
{
    model->Draw();
}
Where the vertex shader looks like:
#version 410

layout (location = 0) in vec3 inVertex;
layout (location = 1) in vec2 inTexture;
layout (location = 2) in vec3 inNormal;

uniform mat4 modelToViewWorld;

void main()
{
    gl_Position = vec4(mat3(modelToViewWorld) * inVertex, 1);
}
But I am moving/rotating the Model itself, not the camera around it. What am I doing wrong here?
I think the problem is that you are not inverting the view matrix. The model-view matrix is just the product of a model->world transformation and a world->view transformation. The first takes coordinates in the model's local space and transforms them to world space, so it needs no inversion. The second, however, is built from the camera's placement in world space, and since it maps the opposite way, from world space into the camera's local coordinate system, it needs to be inverted.
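As a sketch of that point (camPos and camOrientation are hypothetical names for the camera's world-space placement):

// cameraToWorld places the camera in the world; the view matrix is its inverse
glm::mat4 cameraToWorld = glm::translate(glm::mat4(1.0f), camPos) * glm::mat4_cast(camOrientation);
glm::mat4 view = glm::inverse(cameraToWorld);
// glm::lookAt builds exactly this inverse for you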
You are not rotating the model, you are rotating the view direction.
viewDirection = glm::mat3(rotation) * viewDirection;
What you want to do is to rotate the center of the camera around the object and then set the direction of the camera towards the object.
For example:
position = vec3( radius * cos(t), radius * sin(t), 0);
direction = normalize(-position);
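Putting that together as a sketch (radius and t are assumptions; the orbit is in the XZ plane rather than XY so the (0, 1, 0) up vector never becomes parallel to the view direction):

float t = (float)glfwGetTime();                  // any steadily increasing parameter works
glm::vec3 position = glm::vec3(radius * cos(t), 0.0f, radius * sin(t));
glm::vec3 direction = glm::normalize(-position); // always face the object at the origin
glm::mat4 view = glm::lookAt(position, position + direction, glm::vec3(0.0f, 1.0f, 0.0f));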
Edit: okay, I've now written the code completely intuitively, and this is the result:
http://i.imgur.com/x5arJE9.jpg
The cube is at (0, 0, 0).
As you can see, the camera position is negative on the Z axis, suggesting that I'm viewing along the positive Z axis, which does not match up (fw, the forward vector, is negative).
The cube colors also suggest that I'm on the positive Z axis, looking in the negative direction, and that the positive X axis is to the right (in model space).
The direction vectors are calculated like this:
public virtual Vector3 Right
{
    get { return Vector3.Transform(Vector3.UnitX, Rotation); }
}

public virtual Vector3 Forward
{
    get { return Vector3.Transform(-Vector3.UnitZ, Rotation); }
}

public virtual Vector3 Up
{
    get { return Vector3.Transform(Vector3.UnitY, Rotation); }
}
Rotation is a Quaternion.
This is how the view and model matrices are created:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
Projection:
private void SetupProjection()
{
    if (GameObject != null)
    {
        AspectRatio = GameObject.App.Window.Width / (float)GameObject.App.Window.Height;
        projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
    }
}
Matrix multiplication:
public Matrix4 GetModelViewProjectionMatrix(Transform model)
{
    return model.GetMatrix() * Transform.GetMatrix() * projectionMatrix;
}
Shader:
[Shader vertex]
#version 150 core
in vec3 pos;
in vec4 color;
uniform float _time;
uniform mat4 _modelViewProjection;
out vec4 vColor;
void main() {
    gl_Position = _modelViewProjection * vec4(pos, 1);
    vColor = color;
}
OpenTK matrices are row-major (effectively transposed relative to GLM), hence the reversed multiplication order.
Any idea why the axes/locations are all messed up?
End of edit. Original Post:
Have a look at this image: http://i.imgur.com/Cjjr8jz.jpg
As you can see, while the forward vector (of the camera) is positive on the Z axis and the red cube is on the negative X axis,
float[] points = {
// position (3) Color (3)
-s, s, z, 1.0f, 0.0f, 0.0f, // Red point
s, s, z, 0.0f, 1.0f, 0.0f, // Green point
s, -s, z, 0.0f, 0.0f, 1.0f, // Blue point
-s, -s, z, 1.0f, 1.0f, 0.0f, // Yellow point
};
(cubes are created in the geometry shader around those points)
the camera x position seems to be inverted. In other words, if I increase the camera position along its local x axis, it will move to the left, and vice versa.
I pass the transformation matrix like this:
if (DefaultAttributeLocations.TryGetValue("modelViewProjectionMatrix", out loc))
{
    if (loc >= 0)
    {
        Matrix4 mvMatrix = Camera.GetMatrix() * projectionMatrix;
        GL.UniformMatrix4(loc, false, ref mvMatrix);
    }
}
The GetMatrix() method looks like this:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
And the projection matrix:
private void SetupProjection()
{
    AspectRatio = Window.Width / (float)Window.Height;
    projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
}
I don't see what I'm doing wrong :/
It's a little hard to tell from the code, but I believe this is because in OpenGL, the default forward vector of the camera is negative along the Z axis - yours is positive, which means you're looking at the model from the back. That would be why the X coordinate seems inverted.
Although this question is a few years old, I'd still like to give my input.
The reason you're experiencing this bug is that OpenTK's matrices are row-major. All this really means is that you have to do all matrix math in reverse. For example, the transformation matrix will be multiplied like so:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Quaternion rotation, Vector3 scale)
{
return Matrix4.CreateScale(scale) *
Matrix4.CreateFromQuaternion(rotation) *
Matrix4.CreateTranslation(position);
}
This goes for any matrix, so if you're using Vector3s instead of Quaternions for your rotation, it would look like this:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Vector3 rotation, Vector3 scale)
{
return Matrix4.CreateScale(scale) *
Matrix4.CreateRotationZ(rotation.Z) *
Matrix4.CreateRotationY(rotation.Y) *
Matrix4.CreateRotationX(rotation.X) *
Matrix4.CreateTranslation(position);
}
Note that your vertex shader will still be multiplied like this:
void main()
{
    gl_Position = projection * view * transform * vec4(position, 1.0f);
}
I hope this helps!