GLM Vector Mathematics - OpenGL

I have a cube rendered on the screen which represents a car (or similar).
Using projection/model matrices and GLM, I am able to move it back and forth along the axes and rotate it left or right.
I'm having trouble with the vector mathematics needed to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated 30 degrees to the right, moving forwards should make it travel along that 30-degree heading).
I hope I've explained that correctly.
This is what I've managed to do so far in terms of using glm to move the cube:
glm::vec3 vel; // velocity vector

void renderMovingCube() {
    glUseProgram(movingCubeShader.handle());

    GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
    glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

    glm::mat4 viewMatrixMovingCube;
    viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

    vel.x = cos(rotX); vel.y = sin(rotX);
    vel *= moveCube;

    // move cube
    ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos * vel);
    // bring ground and cube to bottom of screen
    ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
    ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn
    glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader

    movingCube.render(); // draw
    glUseProgram(0);
}
keyboard input:
void keyboard()
{
    char BACKWARD = keys['S']; char FORWARD = keys['W'];
    char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D'];

    if (FORWARD) // W - move forwards
    {
        globalPos += vel;
        //globalPos.z -= moveCube;
        BACKWARD = false;
    }
    if (BACKWARD) // S - move backwards
    {
        globalPos.z += moveCube;
        FORWARD = false;
    }
    if (ROT_LEFT) // A - turn left
    {
        rotX += 0.01f;
        ROT_LEFT = false;
    }
    if (ROT_RIGHT) // D - turn right
    {
        rotX -= 0.01f;
        ROT_RIGHT = false;
    }
}
Where am I going wrong with my vectors? I can change the direction of the cube (which works), but I then want it to move forwards in that direction.
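For reference, a minimal sketch of how a heading angle is commonly turned into a forward vector, assuming the cube turns about the Y axis and initially faces down -Z (the helper name is illustrative, not from the code above):

#include <cmath>
#include <glm/glm.hpp>

// Hypothetical helper: forward vector for a yaw angle (radians) about +Y,
// starting from an initial facing of -Z.
glm::vec3 forwardFromYaw(float yaw) {
    return glm::vec3(-std::sin(yaw), 0.0f, -std::cos(yaw));
}

// Each frame, the position would then advance along the current heading:
// globalPos += forwardFromYaw(rotX) * moveCube;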

Related

Flying camera using glm is acting weird

I'm working on a Vulkan application and want to implement a "flying camera" that can move around anywhere freely.
I placed a cube in the middle of the scene that I can fly around.
The camera position works perfectly, but the rotation acts weirdly as soon as I move around the cube.
When I'm on the opposite side of the cube, the UP and DOWN directions I give via my mouse are inverted, and when I'm on either side of the cube they just don't work at all. Anywhere in between, it just does weird circles. Note that this only affects up and down movements, not left or right.
Here is a demonstration of how it looks for me, where I only move my mouse up and down, in this order, on every side of the cube (except when moving around it). I apologize for the bad frame rate; I had to convert it to a GIF and lower the quality:
https://imgur.com/HxknkQV
Quick explanation of the video:
Firstly, while it might have looked like I moved my mouse left and right while looking at the sides, I didn't. I only went up and down every time, except when I moved into the positions. Secondly, while the rotation might have looked like it worked on the opposite side of the cube, it was actually inverted.
This is the code for my Camera:
#pragma once
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/rotate_vector.hpp>

class Camera {
private:
    glm::mat4 _model;
    glm::mat4 _view;
    glm::mat4 _projection;

    glm::vec3 _position;
    glm::vec3 _up;
    glm::vec3 _moveSpeed = glm::vec3(0.08f);
    float _mouseSens = 0.005f;

public:
    glm::vec3 _direction;

    enum MovementType { FORWARD, BACKWARD, LEFT, RIGHT, UP, DOWN };

    Camera(uint32_t width, uint32_t height) {
        _model = glm::mat4(1.0f);
        _projection = glm::perspective(glm::radians(90.0f), width / (float)height, 0.01f, 100.0f);
        _direction = glm::vec3(-2, 0, 0);
        _up = glm::vec3(0.0f, 0.0f, 1.0f);
        _position = glm::vec3(2.0f, 0.0f, 0.0f);
        _projection[1][1] *= -1; // because Vulkan uses a different axis convention, hence the up vector differing from an OpenGL application
    }

    void rotate(float amount, glm::vec3 axis) {
        _direction = glm::rotate(_direction, amount * _mouseSens, axis);
    }

    void move(MovementType movement) {
        switch (movement)
        {
        case FORWARD:
            _position += _direction * _moveSpeed;
            break;
        case BACKWARD:
            _position -= _direction * _moveSpeed;
            break;
        case LEFT:
            _position -= glm::normalize(glm::cross(_direction, _up)) * _moveSpeed;
            break;
        case RIGHT:
            _position += glm::normalize(glm::cross(_direction, _up)) * _moveSpeed;
            break;
        case UP:
            _position += _up * _moveSpeed;
            break;
        case DOWN:
            _position -= _up * _moveSpeed;
            break;
        }
    }

    glm::mat4 getMVP() {
        _view = glm::lookAt(_position, _position + _direction, _up);
        return _projection * _view * _model;
    }
};
Any help would be gladly appreciated, as I am not really that good at vector and matrix calculations and really don't know how to fix this. Thanks.
It looks to me as if you were rotating the camera in world space (but I can't tell for sure because the code that invokes Camera::rotate is not included in your question).
If my assumption is correct, rotating in camera space should solve the problem. I.e. assuming that Camera::rotate performs a rotation relative to the axes of the current camera's space, you'll have to transform that back into world space, which can be done with the inverse of _view:
void rotate(float amount, glm::vec3 axis) {
    auto directionCamSpace = glm::rotate(_direction, amount * _mouseSens, axis);
    _directionWorldSpace = glm::mat3(glm::inverse(_view)) * directionCamSpace;
}
And then use _directionWorldSpace with glm::lookAt:
glm::mat4 getMVP() {
    _view = glm::lookAt(_position, _position + _directionWorldSpace, _up);
    return _projection * _view * _model;
}
I am afraid this might not lead you to the final solution of your problem yet, and that further/other artefacts may occur, but it should at least get you one step further.
The best way to implement such a camera would probably be to use quaternions to track the rotation of the camera, rotate the camera's coordinate system with the accumulated quaternion rotations and then use glm::inverse to compute the view matrix from the rotated camera's coordinate system. (You wouldn't need glm::lookAt at all with that approach.)
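As a rough illustration of that quaternion approach (a hedged sketch with assumed member names, not code from the question): accumulate the orientation as a quaternion alongside the position, build the camera's world matrix from both, and invert it to get the view matrix:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

struct QuatCamera {
    glm::vec3 position{2.0f, 0.0f, 0.0f};
    glm::quat orientation{1.0f, 0.0f, 0.0f, 0.0f}; // identity (w, x, y, z)

    // Accumulate a rotation about one of the camera's own local axes.
    void rotateLocal(float angleRad, const glm::vec3& localAxis) {
        orientation = glm::normalize(orientation * glm::angleAxis(angleRad, localAxis));
    }

    // The camera's world transform is T(position) * R(orientation);
    // the view matrix is its inverse, so glm::lookAt is not needed at all.
    glm::mat4 view() const {
        glm::mat4 world = glm::translate(glm::mat4(1.0f), position) * glm::mat4_cast(orientation);
        return glm::inverse(world);
    }
};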

Quaternion-based First Person View Camera

I have been learning OpenGL by following the tutorial, located at https://paroj.github.io/gltut/.
Passing the basics, I got a bit stuck at understanding quaternions and their relation to spatial orientation and transformations, especially from world- to camera-space and vice versa. In the chapter Camera-Relative Orientation, the author makes a camera, which rotates a model in world space relative to the camera orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess would be that if we applied the offset to camera space, we would get the first-person camera. Is this correct? Instead, the offset is applied to the model in world space, making the spaceship spin relative to that space, and not to camera space. We just observe it spin from camera space.
Inspired by at least some understanding of quaternions (or so I thought), I tried to implement the first person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
Position is modified in reaction to keyboard actions, while the orientation changes due to mouse movement on screen.
Note: GLM overloads * operator for glm::quat * glm::vec3 with the relation for rotating a vector by a quaternion (more compact form of v' = qvq^-1)
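(A quick sanity check of that overload, not from the question: rotating -Z by +90 degrees about +Y should yield -X.)

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat q = glm::angleAxis(glm::radians(90.0f), glm::vec3(0, 1, 0));
glm::vec3 v = q * glm::vec3(0, 0, -1); // v == (-1, 0, 0)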
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;

if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
Orientation and position information is passed to glm::lookAt for constructing the world-to-camera transformation, like so:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining model, view and projection matrices and passing the result to the vertex shader displays everything okay - the way one would expect to see things from a first-person POV. However, things get messy when I add mouse movement, tracking the amount of movement in the x and y directions. I want to rotate around the world y-axis and the local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What would the problem be here?
I admit, I don't have enough knowledge to debug the mouse movement code properly, I mainly followed the lines, saying "right multiply to apply the offset in world space, left multiply to do it in camera space."
I feel like I know things half-way, drawing conclusions from a plethora of e-resources on the subject, while getting more educated and more confused at the same time.
Thanks for any answers.
To rotate a glm quaternion representing orientation:
// Precomputation:
// pitch (rotation around x, in radians),
// yaw   (rotation around y, in radians),
// roll  (rotation around z, in radians)
// are computed/incremented by mouse/keyboard events
To compute view matrix:
void CameraFPSQuaternion::UpdateView()
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
If you want to store the quaternion, then you recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around the camera's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));

    m_orientation = glm::normalize(qPitch) * m_orientation;
    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
If you want to give a rotation speed around a given axis, you use slerp:
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat m_d_orientation = qPitch * qYaw;

    // interpolate from the identity quaternion towards the delta rotation
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), m_d_orientation, deltaTimeSeconds);

    m_orientation = glm::normalize(delta) * m_orientation;
    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
The problem lay with the use of glm::lookAt for constructing the view matrix. Instead, I am now constructing the view matrix like so:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
For translation, I'm now left-multiplying with the inverse of the orientation instead of the orientation itself.
glm::quat invOrient = glm::conjugate(orientation);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else is the same, apart from some further offset quaternion normalizations in the mouse movement code.
The camera now behaves and feels like a first-person camera.
I still don't properly understand the difference between the view matrix and the lookAt matrix, if there is any. But that's a topic for another question.

Object keeps its distance from the camera (C++ OpenGL)

I want to keep the object permanently at a certain distance from the camera. How can I do this? I tried this:
vec3 obj_pos = -cam->Get_CameraPos();
obj_pos.z -= 10.0f;
...
o_modelMatrix = glm::translate(o_modelMatrix, obj_pos);
but it's not working; the object simply stays at a fixed position and doesn't move.
Full code of render:
void MasterRenderer::renderPlane() {
    PlaneShader->useShaderProgram();

    glm::mat4 o_modelMatrix;
    glm::mat4 o_view = cam->Get_ViewMatrix();
    glm::mat4 o_projection = glm::perspective(static_cast<GLfloat>(glm::radians(cam->Get_fov())),
        static_cast<GLfloat>(WIDTH) / static_cast<GLfloat>(HEIGHT), 0.1f, 1000.0f);

    glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID, "projection"), 1, GL_FALSE, glm::value_ptr(o_projection));
    glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID, "view"), 1, GL_FALSE, glm::value_ptr(o_view));

    vec3 eye_pos = vec3(o_view[3][0], o_view[3][1], o_view[3][2]); // or cam->getCameraPosition();
    glm::vec3 losDirection = glm::normalize(vec3(0.0f, 0.0f, -1.0f) - eye_pos);
    vec3 obj_pos = eye_pos + losDirection * 1.0f;

    b_modelMatrix = scale(o_modelMatrix, vec3(20.0f));
    b_modelMatrix = glm::translate(b_modelMatrix, obj_pos);
    glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID,
        "model"), 1, GL_FALSE, glm::value_ptr(o_modelMatrix));
    ...
    /// draw
Maybe this is a shot from the hip, but I suppose that you have set up a lookAt matrix and that the position of your object is defined in world coordinates.
Commonly a camera is defined by an eye position, a target (center) position and an up vector.
The direction in which the camera looks is the line of sight, which is the unit vector from the eye position to the target position.
Calculate the line of sight:
glm::vec3 cameraPosition = ...; // the eye position
glm::vec3 cameraTarget = ...;   // the target (center) position
glm::vec3 losDirection = glm::normalize(cameraTarget - cameraPosition);
Possibly the camera class knows the direction of view (line of sight), then you can skip this calculation.
If the object is always to be placed a certain distance in front of the camera, the position of the object is the position of the camera plus a distance in the direction of the line of sight:
float distance = ...;
glm::vec3 objectPosition = cameraPosition + losDirection * distance;

glm::mat4 modelPosMat = glm::translate(glm::mat4(1.0f), objectPosition);
glm::mat4 objectModelMat = ...; // initial model matrix of the object
o_modelMatrix = modelPosMat * objectModelMat;
Note that objectModelMat is the identity matrix glm::mat4(1.0f) if the object has no further transformations.
So you want to move the object with the camera (instead of moving the camera with the object, like a camera follow). If this is just for some GUI stuff you can use different static view matrices for it. But if you want to do this the way you suggested, then this is the way:
definitions
First we need a few 3D 4x4 homogeneous transform matrices (read the link to see how to dissect/construct what you need). So let's define the matrices we need:
C - inverse camera matrix (no projection)
M - direct object matrix
R - direct object rotation
Each matrix holds 4 vectors: X, Y, Z are the axes of the coordinate system it represents and O is the origin. A direct matrix directly represents the coordinate system; an inverse matrix is the inverse of such a matrix.
Math
So we want to construct M so that it is placed at some distance d directly in front of C and has rotation R. I assume you are using a perspective projection and that C's viewing direction is the -Z axis. So what you need to do is compute the position of M. That is easy; you just do this:
iC = inverse(C); // get the direct matrix of camera
M = R; // set rotation of object
M.O = iC.O - d*iC.Z; // set position of object
Here M.O = (M[12], M[13], M[14]) and iC.Z = (iC[8], iC[9], iC[10]), so if you have direct access to your matrices you can do this on your own in case GLM does not provide element access.
Beware that all of this is for the standard OpenGL matrix convention and multiplication order. If you use the DirectX convention instead, then M, R are inverse and C is a direct matrix, so you would need to change the equations accordingly. Sorry, I do not use GLM, so I am not confident generating code for you.
In case you want to apply the camera rotation to the object rotation too, you need to change M = R to M = R*iC or M = iC*R, depending on which effect you want to achieve.
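Since the answer stops short of GLM code, here is a hedged sketch of what the position equation might look like with GLM, assuming column-major glm::mat4 (column 3 holds the origin, column 2 the Z axis); viewMat, objRotation and d are illustrative names, not from the question:

#include <glm/glm.hpp>

// viewMat: the camera's view (inverse camera) matrix C
// objRotation: the object's desired rotation R
// d: distance to keep in front of the camera
glm::mat4 placeInFrontOfCamera(const glm::mat4& viewMat, const glm::mat4& objRotation, float d) {
    glm::mat4 iC = glm::inverse(viewMat);         // direct camera matrix
    glm::mat4 M = objRotation;                    // M = R: set the object's rotation
    glm::vec3 camOrigin = glm::vec3(iC[3]);       // iC.O: camera position
    glm::vec3 camZ = glm::vec3(iC[2]);            // iC.Z: camera's Z axis
    M[3] = glm::vec4(camOrigin - d * camZ, 1.0f); // M.O = iC.O - d*iC.Z
    return M;
}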
It works fine with addition instead of multiplication:
obj_pos = glm::normalize(glm::cross(vec3(0.0f, 0.0f, -1.0f), vec3(0.0f, 1.0f, 0.0f)));
o_modelMatrix[3][0] = camera_pos.x;
o_modelMatrix[3][1] = camera_pos.y;
o_modelMatrix[3][2] = camera_pos.z + distance;
o_modelMatrix = glm::translate(o_modelMatrix, obj_pos);

Cascaded Shadow maps not quite right

Ok. So, I've been messing around with shadows in my game engine for the last week. I've mostly implemented cascading shadow maps (CSM), but I'm having a bit of a problem with shadowing that I just can't seem to solve.
The only light in this scene is a directional light (sun), pointing along {-0.1, -0.25, -0.65}. I calculate 4 sets of frustum bounds for the four splits of my CSMs with this code:
// each projection matrix calculated with same near plane, different far
Frustum make_worldFrustum(const glm::mat4& _invProjView) {
    Frustum fr; glm::vec4 temp;

    temp = _invProjView * glm::vec4(-1, -1, -1, 1);
    fr.xyz = glm::vec3(temp) / temp.w;

    temp = _invProjView * glm::vec4(-1, -1, 1, 1);
    fr.xyZ = glm::vec3(temp) / temp.w;

    ...etc 6 more times for ndc cube

    return fr;
}
For the light, I get a view matrix like this:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
I then create each ortho matrix from the bounds of each frustum:
lightMatVec.clear();

for (auto& frus : cam.frusVec) {
    glm::vec3 arr[8] {
        glm::vec3(viewMat * glm::vec4(frus.xyz, 1)),
        glm::vec3(viewMat * glm::vec4(frus.xyZ, 1)),
        etc...
    };

    glm::vec3 minO = { INFINITY,  INFINITY,  INFINITY};
    glm::vec3 maxO = {-INFINITY, -INFINITY, -INFINITY};
    for (auto& vec : arr) {
        minO = glm::min(minO, vec);
        maxO = glm::max(maxO, vec);
    }

    glm::mat4 projMat = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, minO.z, maxO.z);
    lightMatVec.push_back(projMat * viewMat);
}
I have a 4-layer TEXTURE_2D_ARRAY bound to 4 framebuffers that I draw the scene into with a very simple vertex shader (frag disabled or punchthrough alpha).
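For context, a minimal sketch of how such a depth texture array and its framebuffers are typically created (standard GL calls with illustrative variable names, not the code from this question):

const int shadowSize = 1024; // illustrative resolution

// 4-layer depth texture array, suitable for a sampler2DArrayShadow
GLuint shadowTex;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, shadowTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
             shadowSize, shadowSize, 4, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);

// one framebuffer per cascade, each targeting one layer of the array
GLuint shadowFbos[4];
glGenFramebuffers(4, shadowFbos);
for (int i = 0; i < 4; ++i) {
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbos[i]);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowTex, 0, i);
    glDrawBuffer(GL_NONE); // depth-only pass, no color attachment
}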
I then draw the final scene. The vertex shader outputs four shadow texcoords:
out vec3 slShadcrd[4];

// stuff

for (int i = 0; i < 4; i++) {
    vec4 sc = WorldBlock.skylMatArr[i] * vec4(world_pos, 1);
    slShadcrd[i] = sc.xyz / sc.w * 0.5f + 0.5f;
}
And a fragment shader, which determines the split to use with:
int csmIndex = 0;
for (uint i = 0u; i < CameraBlock.csmCnt; i++) {
    if (-view_pos.z > CameraBlock.csmSplits[i]) csmIndex++;
    else break;
}
And samples the shadow map array with this function:
float sample_shadow(vec3 _sc, int _csmIndex, sampler2DArrayShadow _tex) {
    return texture(_tex, vec4(_sc.xy, _csmIndex, _sc.z)).r;
}
And this is the scene I get (with each split slightly tinted and the 4 depth layers overlaid):
Great! Looks good.
But, if I turn the camera slightly to the right:
Then shadows start disappearing (and depending on the angle, appearing where they shouldn't be).
I have GL_DEPTH_CLAMP enabled, so that isn't the issue. I'm culling front faces, but turning that off doesn't make a difference to this issue.
What am I missing? I feel like it's an issue with one of my projections, but they all look right to me. Thanks!
EDIT:
All four of the light's frustums drawn. They are all there, but only z changes relative to the camera (see comment below):
EDIT:
Probably more useful, this is how the frustums look when I only update them once, when the camera is at (0,0,0) and pointing forwards (0,1,0). Also I drew them with depth testing this time.
IMPORTANT EDIT:
It seems that this issue is directly related to the light's view matrix, currently:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
Changing the values for eye and target seems to affect the buggered shadows. But I don't know what I should actually be setting this to? Should be easy for someone with a better understanding than me :D
Solved it! It was indeed an issue with the light's view matrix! All I had to do was replace camPos with the centre point of each frustum! Meaning that each split's light matrix needed a different view matrix. So I just create each view matrix like this...
glm::mat4 viewMat = glm::lookAt(frusCentre, frusCentre+lightDir, {0,0,1});
And get frusCentre simply...
glm::vec3 calc_frusCentre(const Frustum& _frus) {
    glm::vec3 min( INFINITY,  INFINITY,  INFINITY);
    glm::vec3 max(-INFINITY, -INFINITY, -INFINITY);

    for (auto& vec : {_frus.xyz, _frus.xyZ, _frus.xYz, _frus.xYZ,
                      _frus.Xyz, _frus.XyZ, _frus.XYz, _frus.XYZ}) {
        min = glm::min(min, vec);
        max = glm::max(max, vec);
    }

    return (min + max) / 2.f;
}
And bam! Everything works spectacularly!
EDIT (Last one!):
What I had was not quite right. The view matrix should actually be:
glm::lookAt(frusCentre-lightDir, frusCentre, {0,0,1});

3D Camera Rotation in OpenGL: How to prevent camera jitter?

I'm fairly new to OpenGL and 3D programming but I've begun to implement camera rotation using quaternions based on the tutorial from http://www.cprogramming.com/tutorial/3d/quaternions.html . This is all written in Java using JOGL.
I realise these kinds of questions get asked quite a lot, but I've been searching around and can't find a solution that works, so I figured it might be a problem with my code specifically.
So the problem is that there is jittering and odd rotation if I do two different successive rotations on one or more axes. The first rotation along an axis, either negatively or positively, works fine. However, if I rotate positively along an axis and then negatively along that same axis, the rotation jitters back and forth, as if it were alternating between a positive and a negative rotation.
If I automate the rotation (e.g. rotate left 500 times, then rotate right 500 times) it appears to work properly, which led me to think this might be related to the key presses. However, the rotation is also incorrect (for lack of a better word) if I rotate around the x axis and then rotate around the y axis afterwards.
Anyway, I have a renderer class with the following display loop for drawing 'scene nodes':
private void render(GLAutoDrawable drawable) {
    GL2 gl = drawable.getGL().getGL2();
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);

    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(70, Constants.viewWidth / Constants.viewHeight, 0.1, 30000);
    gl.glScalef(1.0f, -1.0f, 1.0f); // flip the y axis

    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();

    camera.rotateCamera();
    glu.gluLookAt(camera.getCamX(), camera.getCamY(), camera.getCamZ(),
                  camera.getViewX(), camera.getViewY(), camera.getViewZ(),
                  0, 1, 0);

    drawSceneNodes(gl);
}

private void drawSceneNodes(GL2 gl) {
    if (currentEvent != null) {
        ArrayList<SceneNode> sceneNodes = currentEvent.getSceneNodes();
        for (SceneNode sceneNode : sceneNodes) {
            sceneNode.update(gl);
        }
    }
    if (renderQueue.size() > 0) {
        currentEvent = renderQueue.remove(0);
    }
}
Rotation is performed in the camera class as follows:
public class Camera {
    private double width;
    private double height;
    private double rotation = 0;
    private Vector3D cam = new Vector3D(0, 0, 0);
    private Vector3D view = new Vector3D(0, 0, 0);
    private Vector3D axis = new Vector3D(0, 0, 0);
    private Rotation total = new Rotation(0, 0, 0, 1, true);

    public Camera(GL2 gl, Vector3D cam, Vector3D view, int width, int height) {
        this.cam = cam;
        this.view = view;
        this.width = width;
        this.height = height;
    }

    public void rotateCamera() {
        if (rotation != 0) {
            // generate local quaternion from new axis and new rotation
            Rotation local = new Rotation(Math.cos(rotation/2),
                                          Math.sin(rotation/2 * axis.getX()),
                                          Math.sin(rotation/2 * axis.getY()),
                                          Math.sin(rotation/2 * axis.getZ()), true);
            // multiply local quaternion and total quaternion
            total = total.applyTo(local);
            // rotate the position of the camera with the new total quaternion
            cam = rotatePoint(cam);
            // set next rotation to 0
            rotation = 0;
        }
    }

    public Vector3D rotatePoint(Vector3D point) {
        // set world centre to origin, i.e. (width/2, height/2, 0) to (0, 0, 0)
        point = new Vector3D(point.getX() - width/2, point.getY() - height/2, point.getZ());
        // rotate point
        point = total.applyTo(point);
        // set point in world coordinates, i.e. (0, 0, 0) to (width/2, height/2, 0)
        return new Vector3D(point.getX() + width/2, point.getY() + height/2, point.getZ());
    }

    public void setAxis(Vector3D axis) {
        this.axis = axis;
    }

    public void setRotation(double rotation) {
        this.rotation = rotation;
    }
}
The method rotateCamera generates the new permanent quaternion from the new rotation and the previous rotations, while the method rotatePoint merely multiplies a point by the rotation matrix generated from the permanent quaternion.
The axis of rotation and the angle of rotation are set by simple key presses as follows:
@Override
public void keyPressed(KeyEvent e) {
    if (e.getKeyCode() == KeyEvent.VK_W) {
        camera.setAxis(new float[] {1, 0, 0});
        camera.setRotation(0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_A) {
        camera.setAxis(new float[] {0, 1, 0});
        camera.setRotation(0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_S) {
        camera.setAxis(new float[] {1, 0, 0});
        camera.setRotation(-0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_D) {
        camera.setAxis(new float[] {0, 1, 0});
        camera.setRotation(-0.1f);
    }
}
I hope I've provided enough detail. Any help would be very much appreciated.
About the jittering: I don't see any render loop in your code. How is the render method triggered? By a timer or by an event?
Your messed-up rotations when rotating about two axes are probably related to the fact that you need to rotate the axis of the second rotation along with the total rotation about the first axis. You cannot just apply the rotation about the X or Y axis of the global coordinate system; you must apply the rotation about the camera's up and right axes.
I suggest that you create a camera class that stores the up, right and view direction vectors of the camera and apply your rotations directly to those axes. If this is an FPS like camera, then you'll want to rotate the camera horizontally (looking left / right) about the absolute Y axis and not the up vector. This will also result in a new right axis of the camera. Then, you rotate the camera vertically (looking up / down) about the new right axis. However, you must be careful when the camera looks directly up or down, as in this case you can't use the cross product of the view direction and up vectors to obtain the right vector.
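To illustrate that suggestion with a hedged sketch (in C++/GLM rather than the asker's JOGL; the struct and member names are assumptions): store the camera's view, up and right vectors, yaw about the absolute Y axis, then pitch about the resulting right axis:

#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp> // glm::rotate(vec3, angle, axis)

struct FpsAxes {
    glm::vec3 view {0, 0, -1};
    glm::vec3 up {0, 1, 0};
    glm::vec3 right {1, 0, 0};

    void yawWorld(float angleRad) {
        // look left/right: rotate about the absolute Y axis...
        view = glm::normalize(glm::rotate(view, angleRad, glm::vec3(0, 1, 0)));
        // ...which also yields a new right axis for the camera
        right = glm::normalize(glm::cross(view, up));
    }

    void pitchLocal(float angleRad) {
        // look up/down: rotate about the camera's new right axis;
        // recomputing up from right x view avoids the degenerate
        // cross product when looking straight up or down
        view = glm::normalize(glm::rotate(view, angleRad, right));
        up = glm::normalize(glm::cross(right, view));
    }
};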