Edit: okay, I've now written the code the way that seems intuitive to me, and this is the result:
http://i.imgur.com/x5arJE9.jpg
The Cube is at 0,0,0
As you can see, the camera position is negative on the z axis, suggesting that I'm viewing along the positive z axis, which does not match up (the forward vector is negative).
The cube colors, however, suggest that I'm on the positive z axis, looking in the negative z direction. Also, the positive x axis is to the right (in model space).
The direction vectors are calculated like this:
public virtual Vector3 Right
{
    get
    {
        return Vector3.Transform(Vector3.UnitX, Rotation);
    }
}

public virtual Vector3 Forward
{
    get
    {
        return Vector3.Transform(-Vector3.UnitZ, Rotation);
    }
}

public virtual Vector3 Up
{
    get
    {
        return Vector3.Transform(Vector3.UnitY, Rotation);
    }
}
Rotation is a Quaternion.
This is how the view and model matrices are created:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
Projection:
private void SetupProjection()
{
    if (GameObject != null)
    {
        AspectRatio = GameObject.App.Window.Width / (float)GameObject.App.Window.Height;
        projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
    }
}
Matrix multiplication:
public Matrix4 GetModelViewProjectionMatrix(Transform model)
{
    return model.GetMatrix() * Transform.GetMatrix() * projectionMatrix;
}
Shader:
[Shader vertex]
#version 150 core

in vec3 pos;
in vec4 color;

uniform float _time;
uniform mat4 _modelViewProjection;

out vec4 vColor;

void main() {
    gl_Position = _modelViewProjection * vec4(pos, 1);
    vColor = color;
}
OpenTK matrices are row-major (transposed relative to the column-major convention), hence the reversed multiplication order.
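As a side note on why reversing the order works (a sketch of the math via the transpose identity): with row vectors, a vertex is transformed as $v' = v\,M_{model}\,M_{view}\,M_{proj}$, which is just the transpose of the column-vector form $v'^{\top} = M_{proj}^{\top}\,M_{view}^{\top}\,M_{model}^{\top}\,v^{\top}$, since $(AB)^{\top} = B^{\top}A^{\top}$.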
Any idea why the axes/locations are all messed up?
End of edit. Original Post:
Have a look at this image: http://i.imgur.com/Cjjr8jz.jpg
As you can see, while the forward vector (of the camera) is positive along the z axis and the red cube is on the negative x axis,
float[] points = {
    // position (3)       color (3)
    -s,  s, z,  1.0f, 0.0f, 0.0f, // Red point
     s,  s, z,  0.0f, 1.0f, 0.0f, // Green point
     s, -s, z,  0.0f, 0.0f, 1.0f, // Blue point
    -s, -s, z,  1.0f, 1.0f, 0.0f, // Yellow point
};
(cubes are created in the geometry shader around those points)
the camera x position seems to be inverted. In other words, if I increase the camera position along its local x axis, it will move to the left, and vice versa.
I pass the transformation matrix like this:
if (DefaultAttributeLocations.TryGetValue("modelViewProjectionMatrix", out loc))
{
    if (loc >= 0)
    {
        Matrix4 mvMatrix = Camera.GetMatrix() * projectionMatrix;
        GL.UniformMatrix4(loc, false, ref mvMatrix);
    }
}
The GetMatrix() method looks like this:
public virtual Matrix4 GetMatrix()
{
    Matrix4 translation = Matrix4.CreateTranslation(Position);
    Matrix4 rotation = Matrix4.CreateFromQuaternion(Rotation);
    return translation * rotation;
}
And the projection matrix:
private void SetupProjection()
{
    AspectRatio = Window.Width / (float)Window.Height;
    projectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)((Math.PI * Fov) / 180), AspectRatio, ZNear, ZFar);
}
I don't see what I'm doing wrong :/
It's a little hard to tell from the code, but I believe this is because in OpenGL, the default forward vector of the camera is negative along the Z axis - yours is positive, which means you're looking at the model from the back. That would be why the X coordinate seems inverted.
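To see why viewing from the back mirrors X (a sketch of the reasoning, independent of the code above): turning the camera around corresponds to a 180° rotation about the Y axis, which maps $(x, y, z) \mapsto (-x, y, -z)$, so left and right swap while up stays up.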
Although this question is a few years old, I'd still like to give my input.
The reason you're experiencing this bug is that OpenTK's matrices are row-major. All this really means is that you have to do all matrix math in reverse. For example, the transformation matrix will be multiplied like so:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Quaternion rotation, Vector3 scale)
{
    return Matrix4.CreateScale(scale) *
           Matrix4.CreateFromQuaternion(rotation) *
           Matrix4.CreateTranslation(position);
}
This goes for any matrix, so if you're using Vector3s instead of Quaternions for your rotation, it would look like this:
public static Matrix4 CreateTransformationMatrix(Vector3 position, Vector3 rotation, Vector3 scale)
{
    return Matrix4.CreateScale(scale) *
           Matrix4.CreateRotationZ(rotation.Z) *
           Matrix4.CreateRotationY(rotation.Y) *
           Matrix4.CreateRotationX(rotation.X) *
           Matrix4.CreateTranslation(position);
}
Note that your vertex shader will still be multiplied like this:
void main()
{
    gl_Position = projection * view * transform * vec4(position, 1.0f);
}
I hope this helps!
Related
I've been working on a 3d camera in opengl using C++.
When I look around with the camera, sometimes there will be unexpected roll in the camera, especially when I am rotating the camera in circles.
I suspect this is a floating point error, but I don't know how to detect it.
Here is the camera class:
#ifndef CAMERA_H
#define CAMERA_H
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/rotate_vector.hpp>
#include <glm/gtx/euler_angles.hpp>
#include <glm/gtx/string_cast.hpp>
#include <iostream>
using glm::vec3;
using glm::mat4;
using glm::quat;
enum CamDirection {
    CAM_FORWARD,
    CAM_BACKWARD,
    CAM_LEFT,
    CAM_RIGHT
};

class Camera {
public:
    void cameraUpdate();
    mat4 getViewMatrix();
    Camera();
    Camera(vec3 startPosition);
    void move(CamDirection dir, GLfloat deltaTime);
    void look(double xOffset, double yOffset);
    void update();
private:
    mat4 viewMatrix;
    const GLfloat camSpeed = 5.05f;
};

mat4 Camera::getViewMatrix() {
    return viewMatrix;
}

Camera::Camera() {}

Camera::Camera(vec3 startPos):
    viewMatrix(glm::lookAt(startPos, vec3(0.0f, 0.0f, 0.0f), vec3(0.0f, 1.0f, 0.0f)))
{}

void Camera::move(CamDirection dir, GLfloat deltaTime) {
    mat4 trans;
    const vec3 camForward = vec3(viewMatrix[0][2], viewMatrix[1][2], viewMatrix[2][2]);
    const vec3 camRight   = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
    if (dir == CAM_FORWARD)
        trans = glm::translate(trans, (camSpeed * deltaTime) * camForward);
    else if (dir == CAM_BACKWARD)
        trans = glm::translate(trans, -1 * (camSpeed * deltaTime) * camForward);
    else if (dir == CAM_RIGHT)
        trans = glm::translate(trans, -1 * (camSpeed * deltaTime) * camRight);
    else
        trans = glm::translate(trans, (camSpeed * deltaTime) * camRight);
    viewMatrix *= trans;
}

void Camera::look(double xOffset, double yOffset) {
    // 2 * acos(q[3])
    quat rotation = glm::angleAxis((GLfloat)xOffset, vec3(0.0f, 1.0f, 0.0f));
    viewMatrix = glm::mat4_cast(rotation) * viewMatrix;
    rotation = glm::angleAxis((GLfloat)yOffset, vec3(-1.0f, 0.0f, 0.0f));
    mat4 rotMatrix = glm::mat4_cast(rotation);
    viewMatrix = rotMatrix * viewMatrix;
}

void Camera::update() {
}
#endif // CAMERA_H
I managed to figure it out, although I had to completely rewrite the camera to do so.
My problem was on these lines:
quat rotation = glm::angleAxis((GLfloat)xOffset, vec3( 0.0f, 1.0f, 0.0f));
viewMatrix = glm::mat4_cast(rotation) * viewMatrix;
rotation = glm::angleAxis((GLfloat)yOffset, vec3(-1.0f, 0.0f, 0.0f));
mat4 rotMatrix = glm::mat4_cast(rotation);
Building an intermediate quaternion to store orientation worked instead, and I could replace the look method with this:
quat pitch = quat(vec3(-yOffset, 0.0f, 0.0f));
quat yaw = quat(vec3(0.f, xOffset, 0.f));
orientation = pitch * orientation * yaw;
By multiplying the orientation this way on the last line, no unintended roll can happen.
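For reference, a minimal sketch of how the rewritten camera could look (assuming GLM, with orientation and position stored as members; these member names are illustrative, not the exact rewrite):

quat orientation; // accumulated camera orientation
vec3 position;    // camera position in world space

void Camera::look(double xOffset, double yOffset) {
    quat pitch = quat(vec3((float)-yOffset, 0.0f, 0.0f)); // rotation about the camera's X axis
    quat yaw   = quat(vec3(0.0f, (float)xOffset, 0.0f));  // rotation about the world's Y axis
    // pitch on the left, yaw on the right, as in the lines above:
    // yaw stays about the world Y axis, so no roll can accumulate
    orientation = pitch * orientation * yaw;
}

mat4 Camera::getViewMatrix() {
    // rotate, then translate by the negated position: view = R * T(-position)
    return glm::mat4_cast(orientation) * glm::translate(mat4(1.0f), -position);
}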
There are two problems in that code:
First, if xOffset and yOffset are just screen-pixel differences (obtained from mouse positions), you MUST apply a factor that translates them to angles. There are better ways; for example, form two vectors from the center of the window to the mouse positions (previous and current) and calculate the angle between them via the dot product. Depending on how glm is configured (degrees is the default, but you can set radians), an unscaled xOffset may be a huge angle, not a smooth rotation.
Second, accumulating rotations by newViewMatrix = thisMouseRotation * oldViewMatrix degenerates the matrix after some movements. This is due to computers' limited number representation: e.g. 10/3 = 3.333, but 3.333 * 3 = 9.999 != 10.
Solutions:
A) Store the rotation in a quaternion. Initialize the quaternion and update it for every rotation: newQuater = thisMoveQuater * oldQuater. From time to time, "normalize" the quaternion so as to minimize number issues. The viewMatrix is then calculated by viewMatrix = Mat4x4FromQuaternion * translationMatrix, so we avoid the previous viewMatrix and its issues. (A sketch of this approach follows below.)
B) Accumulate the angles of rotation around each X, Y, Z axis, and calculate the rotation matrix from these accumulated angles each time it's needed. Perhaps you clamp each angle value to something like 0.2 degrees. This way the user can achieve the same position as several rotations before.
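A minimal sketch of approach A, assuming GLM and that the mouse offsets have already been converted to radians (all names are illustrative):

glm::quat orientation; // accumulated rotation, starts as identity

void applyMouseRotation(float yawRad, float pitchRad) {
    // the incremental rotation for this mouse move (thisMoveQuater)
    glm::quat thisMove = glm::angleAxis(yawRad,   glm::vec3(0.0f, 1.0f, 0.0f))
                       * glm::angleAxis(pitchRad, glm::vec3(1.0f, 0.0f, 0.0f));
    orientation = thisMove * orientation;      // newQuater = thisMoveQuater * oldQuater
    orientation = glm::normalize(orientation); // renormalize to fight numeric drift
}

glm::mat4 viewMatrix(const glm::vec3& position) {
    // viewMatrix = Mat4x4FromQuaternion * translationMatrix
    return glm::mat4_cast(orientation) * glm::translate(glm::mat4(1.0f), -position);
}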
I've been stuck on this for two days now, and I'm unsure where else to look. I'm rendering two 3D cubes using OpenGL and trying to apply a local rotation to each cube in the scene in response to me pressing a button.
I've got to the point where my cubes rotate in 3D space, but they're both rotating about the world-space origin instead of their own local origins.
(couple second video)
https://www.youtube.com/watch?v=3mrK4_cCvUw
After scouring the internet, the appropriate formula for calculating the MVP seems to be as follows:
auto const model = TranslationMatrix * RotationMatrix * ScaleMatrix;
auto const modelview = projection * view * model;
Each of my cubes has its own "model", which is defined as follows:
struct model
{
    glm::vec3 translation;
    glm::quat rotation;
    glm::vec3 scale = glm::vec3{1.0f};
};
When I press a button on my keyboard, I create a quaternion representing the new angle and multiply it with the previous rotation quaternion, updating it in place.
The function looks like this:
template <typename TData>
void rotate_entity(TData& data, ecst::entity_id const eid, float const angle,
                   glm::vec3 const& axis) const
{
    auto& m = data.get(ct::model, eid);
    auto const q = glm::angleAxis(glm::degrees(angle), axis);
    m.rotation = q * m.rotation;
    // I'm a bit unsure on this last line above; I've also tried the following without fully understanding the difference:
    // m.rotation = m.rotation * q;
}
The axis is provided by the user like so:
// inside user-input handling function
float constexpr ANGLE = 0.2f;
...
// y-rotation
case SDLK_u: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, 1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
case SDLK_i: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, -1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
My GLSL vertex shader is pretty straightforward from what I've found in the example code out there:
// attributes input to the vertex shader
in vec4 a_position; // position value

// output of the vertex shader - input to fragment shader
out vec3 v_uv;

uniform mat4 u_mvmatrix;

void main()
{
    gl_Position = u_mvmatrix * a_position;
    v_uv = vec3(a_position.x, a_position.y, a_position.z);
}
Inside my draw code, the exact code I'm using to calculate the MVP for each cube is:
...
auto const& model = shape.model();
auto const tmatrix = glm::translate(glm::mat4{}, model.translation);
auto const rmatrix = glm::toMat4(model.rotation);
auto const smatrix = glm::scale(glm::mat4{}, model.scale);
auto const mmatrix = tmatrix * rmatrix * smatrix;
auto const mvmatrix = projection * view * mmatrix;
// simple wrapper that does logging and forwards to glUniformMatrix4fv()
p.set_uniform_matrix_4fv(logger, "u_mvmatrix", mvmatrix);
Earlier in my program, I calculate my view/projection matrices like so:
auto const windowheight = static_cast<GLfloat>(hw.h);
auto const windowwidth = static_cast<GLfloat>(hw.w);
auto projection = glm::perspective(60.0f, (windowwidth / windowheight), 0.1f, 100.0f);
auto view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 1.0f), // camera position
glm::vec3(0.0f, 0.0f, -1.0f), // look at origin
glm::vec3(0.0f, 1.0f, 0.0f)); // "up" vector
The positions of my cubes in world-space are on the Z axis, so they should be visible:
cube0.set_world_position(0.0f, 0.0f, 0.0f, 1.0f);
cube1.set_world_position(-0.7f, 0.7f, 0.0f, 1.0f);
// I call set_world_position() exactly once, before my game enters its main loop,
// and I never call it again; it just modifies the vertex used as the center of the shape.
// It doesn't modify the model matrix at all.
So, my question is: what is the appropriate way to update a rotation for an object?
Should I be storing a quaternion directly in my object's "model"?
Should I be storing my translation and scaling as separate vec3's?
Is there an easier way to do this? I've been reading and re-reading anything I can find, but I don't see anyone doing this in the same way.
This tutorial is a bit short on details, specifically how to apply a rotation to an existing rotation (I believe this is just multiplying the quaternions together, which is what I'm doing inside rotate_entity(...) above).
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/
https://github.com/opengl-tutorials/ogl/blob/master/tutorial17_rotations/tutorial17.cpp#L306-L311
Does it make more sense to store the resulting "MVP" matrix myself as my "model" and apply glm::translate/glm::scale/glm::rotate operations on the MVP matrix directly? (I tried this last option earlier, but I couldn't figure out how to get that to work either.)
Thanks!
edit: better link
Generally, you don't want to modify the position of your model's individual vertices on the CPU. That's the entire purpose of the vertex program. The purpose of the model matrix is to position the model in the world in the vertex program.
To rotate a model around its center, you need to first move the center to the origin, then rotate it, then move the center to its final position. So let's say you have a cube that stretches from (0,0,0) to (1,1,1). You need to:
Translate the cube by (-0.5, -0.5, -0.5)
Rotate by the angle
Translate the cube by (0.5, 0.5, 0.5)
Translate the cube to wherever it belongs in the scene
You can combine the last 2 translations into a single one, and of course, you can collapse all of these transformations into a single matrix that is your model matrix.
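As a sketch of those steps in GLM terms (assuming the unit cube above and reusing the model struct from the question; the variable names are illustrative):

glm::mat4 const toOrigin   = glm::translate(glm::mat4{}, glm::vec3{-0.5f});
glm::mat4 const rotation   = glm::toMat4(model.rotation);
glm::mat4 const toPosition = glm::translate(glm::mat4{}, glm::vec3{0.5f} + model.translation);
// Read right-to-left: center the cube on the origin, rotate, then place it in the scene.
glm::mat4 const modelMatrix = toPosition * rotation * toOrigin;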
Hello, I have an issue with the return value of the glm lookAt function. When I execute in debug mode, at this point
... Result[3][2] = dot(f, eye); ...
of the glm function I get a wrong value in the translation z position of the matrix. The value is -2, which shows me that the forward and eye vectors are opposed. My eye, center and up vectors are eye(0,0,2), center(0,0,-1) and up(0,1,0). The camera coordinate vectors are: f(0,0,-1), s(1,0,0) and u(0,1,0). And the vantage point the user looks at is (0,0,0). So the right view matrix should be this one:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
but i get this one:
1 -0 0 -0
-0 1 -0 -0
0 0 1 -2
0 0 0 1
My code is:
struct camera {
    vec3 position = vec3(0.0f);       // position of the camera
    vec3 view_direction = vec3(0.0f); // forward vector (orientation)
    vec3 side = vec3(0.0f);           // right vector (side)
    vec3 up = vec3(0.0f, 1.0f, 0.0f); // up vector
    float speed = 0.1;
    float yaw = 0.0f;                 // y-rotation
    float cam_yaw_speed = 10.0f;      // 10 degrees per second
    float pitch = 0.0f;               // x-rotation
    float roll = 0.0f;
    ...

    // calculate the orientation vector (forward)
    vec3 getOrientation(vec3 vantage_point) {
        // calc the difference and normalize the resulting vector
        vec3 result = vantage_point - position;
        result = normalize(result);
        return result;
    }

    // calculate the lookat matrix from the position, the computed forward vector and the up vector
    mat4 look_at_point(vec3 vantage_point) {
        view_direction = getOrientation(vantage_point);
        // calculate the lookat matrix
        return lookAt(position, position + view_direction, up);
    }
};
I have tried to figure out how to manage this problem, but I still have no idea. Can someone help me?
The part of the main function where I execute main_cam.look_at_point(vantage_point) is shown below:
...
GLfloat points[] = {
     0.0f,  0.5f, 0.0f,
     0.5f,  0.0f, 0.0f,
    -0.5f,  0.0f, 0.0f };
float speed = 1.0f; // move at 1 unit per second
float last_position = 0.0f;
// init camera
main_cam.position = vec3(0.0f, 0.0f, 2.0f); // don't start at zero, or will be too close
main_cam.speed = 1.0f; // 1 unit per second
main_cam.cam_yaw_speed = 10.0f; // 10 degrees per second
vec3 vantage_point = vec3(0.0f, 0.0f, 0.0f);
mat4 T = translate(mat4(1.0), main_cam.position);
//mat4 R = rotate(mat4(), -main_cam.yaw, vec3(0.0, 1.0, 0.0));
mat4 R = main_cam.look_at_point(vantage_point);
mat4 view_matrix = R * T;
// input variables
float near = 0.1f; // clipping plane
float far = 100.0f; // clipping plane
float fov = 67.0f * ONE_DEG_IN_RAD; // convert 67 degrees to radians
float aspect = (float)g_gl_width / (float)g_gl_height; // aspect ratio
mat4 proj_matrix = perspective(fov, aspect, near, far);
use_shader_program(shader_program);
set_uniform_matrix4fv(shader_program, "view", 1, GL_FALSE, &view_matrix[0][0]);
set_uniform_matrix4fv(shader_program, "proj", 1, GL_FALSE, &proj_matrix[0][0]);
...
Testing with the rotate function of glm, the triangle is shown correctly:
Triangle shown with the rotate function of glm
I suspect that the problem is here:
mat4 view_matrix = R * T; // <---
The matrix returned by lookAt already does the translation.
Try manually applying the transformation to the (0,0,0) point that is inside your triangle. T will translate it to (0,0,2), but now it coincides with the camera, so R will send it back to (0,0,0). Now you get a division-by-zero accident in the perspective divide.
So remove the multiplication by T:
mat4 view_matrix = R;
Now (0,0,0) will be mapped to (0,0,-2), which is in the direction camera is looking. (In camera space the center-of-projection is at (0,0,0) and the camera is looking towards the negative Z direction).
EDIT: I want to point out that calculating the view_direction from the vantage_point and then feeding position + view_direction back into lookAt is a rather contrived way of achieving your goal. What you do in the getOrientation function is what lookAt already does inside. Instead, you can get the view_direction from the result of lookAt:
mat4 look_at_point(vec3 vantage_point) {
    // calculate the lookat matrix
    mat4 M = lookAt(position, vantage_point, up);
    // the third row of the rotation part holds the negated forward direction
    // (glm is column-major, so that row is M[0][2], M[1][2], M[2][2])
    view_direction = -vec3(M[0][2], M[1][2], M[2][2]);
    return M;
}
However, considering that you're ultimately trying to implement yaw/pitch/roll camera controls, you are better off not using lookAt at all.
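For example, a minimal sketch of a yaw/pitch view matrix built directly from the struct's angle members (assuming glm/gtx/euler_angles.hpp is available and that pitch and yaw are already in radians):

mat4 yaw_pitch_view() {
    // inverse camera rotation (pitch about X, yaw about Y), then inverse translation
    mat4 R = glm::eulerAngleXY(-pitch, -yaw);
    return R * glm::translate(mat4(1.0f), -position);
}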
The symptom is that the "position of the camera" seems to be mirrored around the x axis (negative z instead of positive z) and the "orientation of the camera" opposes the expected one. In other words, I have to rotate the camera by 180 degrees and move it forwards to see any renderings.
In all the OpenGL camera tutorials I have seen, there was always a positive z coordinate for the camera position. Maybe there is only a single sign mistake in the code, but I do not see it. I am also posting the corresponding shader code. My objects are rendered at world coordinate z = 0.1.
The initialization of the camera instance is show in the following lines
m_viewMatrix = math::Matrix4D::lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp);
where
m_cameraForward(math::Vector3D(0.0f, 0.0f, -1.0f)),
m_cameraRight(math::Vector3D (1.0f, 0.0f, 0.0f)),
m_cameraUp(math::Vector3D(0.0f, 1.0f, 0.0f)),
m_cameraPosition(math::Vector3D(0.0f, 0.0f, 20.0f))
The result is a black screen. When I change the camera position to
m_cameraPosition(math::Vector3D(0.0f, 0.0f, -20.0f)
everything works fine.
The function lookAt is given by the following lines:
Matrix4D Matrix4D::lookAt(
    const Vector3D& f_cameraPosition_r,
    const Vector3D& f_targetPosition_r,
    const Vector3D& f_upDirection_r)
{
    const math::Vector3D l_forwardDirection = (f_targetPosition_r - f_cameraPosition_r).normalized();
    const math::Vector3D l_rightDirection = f_upDirection_r.cross(l_forwardDirection).normalized();
    const math::Vector3D l_upDirection = l_forwardDirection.cross(l_rightDirection); // is normalized

    return math::Matrix4D(
        l_rightDirection.x,   l_rightDirection.y,   l_rightDirection.z,   l_rightDirection.dot(f_cameraPosition_r * (-1.0f)),
        l_upDirection.x,      l_upDirection.y,      l_upDirection.z,      l_upDirection.dot(f_cameraPosition_r * (-1.0f)),
        l_forwardDirection.x, l_forwardDirection.y, l_forwardDirection.z, l_forwardDirection.dot(f_cameraPosition_r * (-1.0f)),
        0.0f, 0.0f, 0.0f, 1.0f);
}
The memory layout of the matrix4d is column major, as expected by OpenGl.
All other functions like dot and cross are unit tested.
vertex shader:
#version 430

in layout (location = 0) vec3 position;
in layout (location = 1) vec4 color;
in layout (location = 2) vec3 normal;

uniform mat4 pr_matrix;             // projection matrix
uniform mat4 vw_matrix = mat4(1.0); // view matrix <------
uniform mat4 ml_matrix = mat4(1.0); // model matrix

out vec4 colorOut;

void main()
{
    gl_Position = pr_matrix * vw_matrix * ml_matrix * vec4(position, 1.0);
    colorOut = color;
}
fragment shader:
#version 430

out vec4 color;
in vec4 colorOut;

void main()
{
    color = colorOut;
}
edit (added perspective matrix):
Matrix4D Matrix4D::perspectiveProjection(
    const float f_viewportWidth_f,
    const float f_viewportHeight_f,
    const float f_nearPlaneDistance_f,
    const float f_farPlaneDistance_f,
    const float f_radFieldOfViewY_f)
{
    const float l_aspectRatio_f = f_viewportWidth_f / f_viewportHeight_f;
    const float l_tanHalfFovy_f = tan(f_radFieldOfViewY_f * 0.5f);
    const float l_frustumLength = f_farPlaneDistance_f - f_nearPlaneDistance_f;

    const float l_scaleX = 1.0f / (l_aspectRatio_f * l_tanHalfFovy_f);
    const float l_scaleY = 1.0f / l_tanHalfFovy_f;
    const float l_scaleZ = -(f_farPlaneDistance_f + f_nearPlaneDistance_f) / l_frustumLength;
    const float l_value32 = -(2.0f * f_farPlaneDistance_f * f_nearPlaneDistance_f) / l_frustumLength;

    return Matrix4D(
        l_scaleX, +0.0f,    +0.0f,    +0.0f,
        +0.0f,    l_scaleY, +0.0f,    +0.0f,
        +0.0f,    +0.0f,    l_scaleZ, l_value32,
        +0.0f,    +0.0f,    -1.0f,    +0.0f);
}
Your projection matrix is following the "classic" OpenGL conventions: viewing direction is (0,0,-1) in eye space (last row of the matrix).
However, your view matrix does not follow that convention: you must put the negated forward direction into the matrix (also for calculating the translation z component there). In its current form, the view matrix just rotates so that the forward direction is mapped to +z.
Negating this of course means that you will use a right-handed coordinate system for world space (which is what classic GL did). If you don't want that, you can also just change the projection matrix to actually look at +z.
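Concretely, a sketch of what that change could look like in the lookAt from the question (only the forward row and its translation term change; this is a sketch, not a drop-in tested fix):

return math::Matrix4D(
     l_rightDirection.x,    l_rightDirection.y,    l_rightDirection.z,   -l_rightDirection.dot(f_cameraPosition_r),
     l_upDirection.x,       l_upDirection.y,       l_upDirection.z,      -l_upDirection.dot(f_cameraPosition_r),
    -l_forwardDirection.x, -l_forwardDirection.y, -l_forwardDirection.z,  l_forwardDirection.dot(f_cameraPosition_r),
     0.0f, 0.0f, 0.0f, 1.0f);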
I am trying to implement a Camera class so I can walk and look on the world as follows:
#ifndef _CAMERA_H_
#define _CAMERA_H_

#include <glm/glm.hpp>

class Camera
{
public:
    Camera();
    ~Camera();

    void Update(const glm::vec2& newXY);

    // if by == 0.0, the class's speed constant is used for scaling
    void MoveForward(const float by = 0.0f);
    void MoveBackward(const float by = 0.0f);
    void MoveLeft(const float by = 0.0f);
    void MoveRight(const float by = 0.0f);
    void MoveUp(const float by = 0.0f);
    void MoveDown(const float by = 0.0f);

    void Speed(const float speed = 0.0f);

    glm::vec3& GetCurrentPosition();
    glm::vec3& GetCurrentDirection();
    glm::mat4 GetWorldToView() const;

private:
    glm::vec3 position, viewDirection, strafeDir;
    glm::vec2 oldYX;
    float speed;
    const glm::vec3 up;
};

#endif
#include "Camera.h"
#include <glm\gtx\transform.hpp>
Camera::Camera()
:up(0.0f, 1.0f, 0.0), viewDirection(0.0f, 0.0f, -1.0f),
speed(0.1f)
{
}
Camera::~Camera()
{
}
void Camera::Update(const glm::vec2& newXY)
{
glm::vec2 delta = newXY - oldYX;
auto length = glm::length(delta);
if (glm::length(delta) < 50.f)
{
strafeDir = glm::cross(viewDirection, up);
glm::mat4 rotation = glm::rotate(-delta.x * speed, up) *
glm::rotate(-delta.y * speed, strafeDir);
viewDirection = glm::mat3(rotation) * viewDirection;
}
oldYX = newXY;
}
void Camera::Speed(const float speed)
{
    this->speed = speed;
}

void Camera::MoveForward(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += s * viewDirection;
}

void Camera::MoveBackward(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * viewDirection;
}

void Camera::MoveLeft(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * strafeDir;
}

void Camera::MoveRight(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += s * strafeDir; // note: was -s, which duplicated MoveLeft
}

void Camera::MoveUp(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += s * up;
}

void Camera::MoveDown(const float by)
{
    float s = by == 0.0f ? speed : by;
    position += -s * up;
}
glm::vec3& Camera::GetCurrentPosition()
{
    return position;
}

glm::vec3& Camera::GetCurrentDirection()
{
    return viewDirection;
}

glm::mat4 Camera::GetWorldToView() const
{
    return glm::lookAt(position, position + viewDirection, up);
}
and I update and render as follow :
void Game::OnUpdate()
{
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUniformMatrix4fv(program->GetUniformLocation("modelToViewWorld"), 1, GL_FALSE, &cam.GetWorldToView()[0][0]);
}

void Game::OnRender()
{
    model->Draw();
}
Where the vertex shader looks like:
#version 410

layout (location = 0) in vec3 inVertex;
layout (location = 1) in vec2 inTexture;
layout (location = 2) in vec3 inNormal;

uniform mat4 modelToViewWorld;

void main()
{
    gl_Position = vec4(mat3(modelToViewWorld) * inVertex, 1);
}
But I am moving/rotating the Model itself, not the camera around it. What am I doing wrong here?
I think the problem is that you are not inverting the view matrix. The model-view matrix is just the product of a model-to-world matrix and a world-to-view matrix. The first one takes coordinates in the model's local space and transforms them to world space, so it needs no inversion. The second one, however, is the opposite: it maps world coordinates into the camera's local coordinate system, so the camera's world transform needs to be inverted.
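As a sketch of that idea (GLM assumed; the camera transform names are illustrative):

// build the camera's world transform, then invert it to get world -> view
glm::mat4 const cameraToWorld = glm::translate(glm::mat4(1.0f), camPosition) * glm::mat4_cast(camOrientation);
glm::mat4 const view = glm::inverse(cameraToWorld);
glm::mat4 const modelView = view * modelToWorld; // model -> view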
You are not rotating the model, you are rotating the view direction.
viewDirection = glm::mat3(rotation) * viewDirection;
What you want to do is to rotate the center of the camera around the object and then set the direction of the camera towards the object.
For example:
position = vec3( radius * cos(t), radius * sin(t), 0);
direction = normalize(-position);
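Put together as a minimal sketch (GLM assumed; radius and the time source are illustrative):

float const radius = 5.0f;
float const t = getTimeSeconds(); // hypothetical time source
glm::vec3 const position = glm::vec3(radius * cos(t), radius * sin(t), 0.0f);
glm::vec3 const direction = glm::normalize(-position); // always look back at the object
glm::mat4 const view = glm::lookAt(position, position + direction, glm::vec3(0.0f, 1.0f, 0.0f));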