I loaded an object from a .obj file.
I am trying to apply a glm::rotate, glm::translate, glm::scale to it.
The movement (translation and rotation) is driven by keyboard input like this:
// speed is 10
// angle starts at 0
if (keys[UP]) {
    movex += speed * sin(angle);
    movez += speed * cos(angle);
}
if (keys[DOWN]) {
    movex -= speed * sin(angle);
    movez -= speed * cos(angle);
}
if (keys[RIGHT]) {
    angle -= PI / 180;
}
if (keys[LEFT]) {
    angle += PI / 180;
}
and then, the transformations:
// use shader
glUseProgram(gl_program_shader);
// send the uniform matrices to the shader
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(model_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "view_matrix"), 1, false, glm::value_ptr(view_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "projection_matrix"), 1, false, glm::value_ptr(projection_matrix));
glm::mat4 rotate = glm::rotate(model_matrix, angle, glm::vec3(0, 1, 0));
glm::mat4 translate = glm::translate(model_matrix, glm::vec3(RADIUS + movex, 0, -RADIUS + movez));
glm::mat4 scale = glm::scale(model_matrix, glm::vec3(20.0, 20.0, 20.0));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(translate * scale * rotate));
glBindVertexArray(ironMan->vao);
glDrawElements(GL_TRIANGLES, ironMan->num_indices, GL_UNSIGNED_INT, 0);
If I don't use KEY_RIGHT / KEY_LEFT, it works as expected (it translates the object forward and backward).
If I do use them, the object rotates around its own center, but when I press KEY_UP / KEY_DOWN it translates... well, not as expected.
I added a case to the notifyKeyPressed() function to print out the movex, movez and angle values, and it seems the angle is not what it should be:
when the IronMan has his back to me, angle = 0 - this is the start point;
when the IronMan faces to the right, the angle should be around -PI / 2 (-1.57), but it is around -80/-90.
I thought this was because of the glm::scale, but I changed the values and nothing changed.
Any idea why the angle is acting like this?
Your rotation center stays at the origin. You need to translate your object's rotation center to the origin, apply your scale and rotation, translate back, and then apply your translation.
EDIT
More importantly for your exact problem: in the GLM versions before 0.9.6 (which you appear to be using), glm::rotate expects the angle in degrees, while your angle variable accumulates radians (PI / 180 per keypress). Either pass glm::degrees(angle), accumulate degrees instead, or define GLM_FORCE_RADIANS (modern GLM always uses radians).
Related
I have been learning OpenGL by following the tutorial, located at https://paroj.github.io/gltut/.
Passing the basics, I got a bit stuck at understanding quaternions and their relation to spatial orientation and transformations, especially from world- to camera-space and vice versa. In the chapter Camera-Relative Orientation, the author makes a camera, which rotates a model in world space relative to the camera orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess would be that if we applied the offset to camera space, we would get the first-person camera. Is this correct? Instead, the offset is applied to the model in world space, making the spaceship spin relative to that space, and not to camera space. We just observe it spin from camera space.
Inspired by at least some understanding of quaternions (or so I thought), I tried to implement the first person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
Position is modified in reaction to keyboard actions, while the orientation changes due to mouse movement on screen.
Note: GLM overloads * operator for glm::quat * glm::vec3 with the relation for rotating a vector by a quaternion (more compact form of v' = qvq^-1)
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
Orientation and position information is passed to glm::lookAt matrix for constructing the world-to-camera transformation, like so:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining model, view and projection matrices and passing the result to vertex shader displays everything okay - the way one would expect to see things from the first-person POV. However, things get messy when I add mouse movements, tracking the amount of movement in x and y directions. I want to rotate around the world y-axis and local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What would the problem be here?
I admit, I don't have enough knowledge to debug the mouse movement code properly, I mainly followed the lines, saying "right multiply to apply the offset in world space, left multiply to do it in camera space."
I feel like I know things half-way, drawing conclusions from a plethora of e-resources on the subject, while getting more educated and more confused at the same time.
Thanks for any answers.
To rotate a glm quaternion representing orientation:
//Precomputation:
//pitch (rot around x in radians),
//yaw (rot around y in radians),
//roll (rot around z in radians)
//are computed/incremented by mouse/keyboard events
To compute view matrix:
void CameraFPSQuaternion::UpdateView()
{
    //FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    //For a FPS camera we can omit roll
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
If you want to store the quaternion, then you recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around cam's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));
    m_orientation = glm::normalize(qPitch) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
If you want to give a rotation speed around a given axis, you use slerp:
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    //FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    //For a FPS camera we can omit roll
    glm::quat qDelta = qPitch * qYaw;
    //interpolate from the identity quaternion (glm::quat's constructor order is w, x, y, z)
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), qDelta, deltaTimeSeconds);
    m_orientation = glm::normalize(delta) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
The problem lay with the usage of glm::lookAt for constructing the view matrix. Instead, I am now constructing the view matrix like so:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
For translation, I now left-multiply by the inverse (conjugate) of the orientation instead of the orientation itself:
glm::quat invOrient = glm::conjugate(orientation);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else is the same, apart from some further offset quaternion normalizations in the mouse movement code.
The camera now behaves and feels like a first-person camera.
I still don't properly understand the difference between view matrix and lookAt matrix, if there is any. But that's the topic for another question.
I'm trying to set up a google maps style zoom-to-cursor control for my opengl camera. I'm using a similar method to the one suggested here. Basically, I get the position of the cursor, and calculate the width/height of my perspective view at that depth using some trigonometry. I then change the field of view, and calculate how to much I need to translate in order to keep the point under the cursor in the same apparent position on the screen. That part works pretty well.
The issue is that I want to limit the fov to be less than 90 degrees. When it ends up >90, I cut it in half and then translate everything away from the camera so that the resulting scene looks the same as with the larger fov. The equation to find that necessary translation isn't working, which is strange because it comes from pretty simple algebra. I can't find my mistake. Here's the relevant code.
void Visual::scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    glm::mat4 modelview = view * model;
    glm::vec4 viewport = { 0.0, 0.0, width, height };

    float winX = cursorPrevX;
    float winY = viewport[3] - cursorPrevY;
    float winZ;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    glm::vec3 screenCoords = { winX, winY, winZ };
    glm::vec3 cursorPosition = glm::unProject(screenCoords, modelview, projection, viewport);
    if (isinf(cursorPosition[2]) || isnan(cursorPosition[2])) {
        cursorPosition[2] = 0.0;
    }

    float zoomFactor = 1.1;
    if (yoffset > 0.0) // zooming in
        zoomFactor = 1 / 1.1;

    //the width and height of the perspective view, at the depth of the cursor position
    glm::vec2 fovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
    camera.setZoomFromFov(fovXY.y * zoomFactor, cursorPosition[2] - zTranslate);

    //don't want fov to be greater than 90, so cut it in half and move the world farther away from the camera to compensate
    //not working...
    if (camera.Zoom > 90.0 && zTranslate * 2 > MAX_DEPTH) {
        float prevZoom = camera.Zoom;
        camera.Zoom *= .5;
        //need increased distance between camera and world origin, so that view does not appear to change when fov is reduced
        zTranslate = cursorPosition[2] - tan(glm::radians(prevZoom)) / tan(glm::radians(camera.Zoom) * (cursorPosition[2] - zTranslate));
    }
    else if (camera.Zoom > 90.0) {
        camera.Zoom = 90.0;
    }

    glm::vec2 newFovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
    //translate so that position under the cursor does not appear to move.
    xTranslate += (newFovXY.x - fovXY.x) * (winX / width - .5);
    yTranslate += (newFovXY.y - fovXY.y) * (winY / height - .5);
    updateView = true;
}
The definition of my view matrix, called every iteration of the main loop:
void Visual::setView() {
    view = glm::mat4();
    view = glm::translate(view, { xTranslate, yTranslate, zTranslate });
    view = glm::rotate(view, glm::radians(camera.inclination), glm::vec3(1.f, 0.f, 0.f));
    view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f, 0.f, 1.f));

    camera.Right = glm::column(view, 0).xyz();
    camera.Up = glm::column(view, 1).xyz();
    camera.Front = -glm::column(view, 2).xyz(); // minus because OpenGL camera looks towards negative Z.
    camera.Position = glm::column(view, 3).xyz();

    updateView = false;
}
Field of view helper functions.
glm::vec2 getFovXY(float depth, float aspectRatio) {
    float fovY = tan(glm::radians(Zoom / 2)) * depth;
    float fovX = fovY * aspectRatio;
    return glm::vec2{ 2 * fovX, 2 * fovY };
}

//you have a desired fov, and you want to set the zoom to achieve that.
//factor of 1/2 inside the atan because we actually need the half-fov. Keep full-fov as input for consistency
void setZoomFromFov(float fovY, float depth) {
    Zoom = glm::degrees(2 * atan(fovY / (2 * depth)));
}
The equations I'm using can be found from the diagram here. Since I want the field of view dimensions to be the same before and after the angle change, I start with

fovY = tan(theta1) * d1 = tan(theta2) * d2
d2 = (tan(theta1) / tan(theta2)) * d1

where

d1 = distance between camera and cursor position before the fov change = cursorPosition[2] - zTranslate
d2 = distance after
theta1 = fov angle before
theta2 = fov angle after = theta1 * .5
Appreciate the help.
I am trying to rotate the camera using quaternions, but I am having problems doing it.
The first thing I noticed is that when I execute this, the camera position and the camera look-at point become almost the same - in some cases they are identical - and then I get precision problems, and all the other problems related to that, when I try to move the camera.
if (Input::getInstance()->isMouseDown(SDL_BUTTON_RIGHT - 1)) {
    //log_.debug("Camera Mouse Left down");
    glm::vec2 mouseDelta = glm::ivec2(oldX, oldY) - Input::getInstance()->getMousePosition();
    glm::quat q1 = glm::quat(glm::vec3(glm::radians(mouseDelta.y), glm::radians(mouseDelta.x), 0.0f));
    cameraLook_ = q1 * (direction * mouseSensitivity_) * glm::conjugate(q1) + cameraPosition_;
    //cameraLook_ = glm::rotate(cameraLook_, mouseDelta.x * delta, glm::vec3(0, 1, 0));
    //cameraLook_ = glm::rotate(cameraLook_, mouseDelta.y * delta, glm::vec3(0, 0, 1));
}
I switched back to matrices for my solution, because for me they are easier to work with at the moment. There are a few things I need to understand about quaternions before I can fix my old problem.
glm::mat4 rotationMatrix = glm::translate(cameraPosition_ - glm::vec3(1.0));
rotationMatrix *= glm::rotate(mouseDelta.x, glm::vec3(0, 1, 0));
rotationMatrix *= glm::rotate(mouseDelta.y, glm::vec3(0, 0, 1));
rotationMatrix *= glm::translate(-cameraPosition_ + glm::vec3(1.0));
cameraLook_ = glm::vec3(rotationMatrix * glm::vec4(cameraLook_, 1.0f));
I am trying an approach many people seem to find good: I call gluUnProject twice with different z-values and then calculate the direction vector for the ray from those two resulting points.
I read this question and tried to use the structure there for my own code:
glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
But this only returns the Homogeneous Clip Coordinates (I believe), since they only range from -1 to 1 on every axis.
How to actually get coordinates from which I can create a ray?
EDIT: This is how I construct the matrices:
Matrix projectionMatrix = vecmath.perspectiveMatrix(60f, aspect, 0.1f,
100f);
//The matrix of the camera = viewMatrix
setTransformation(vecmath.lookatMatrix(eye, center, up));
//And every object sets a ModelMatrix in it's display method
Matrix modelMatrix = parentMatrix.mult(vecmath
.translationMatrix(translation));
modelMatrix = modelMatrix.mult(vecmath.rotationMatrix(1, 0, 1, angle));
EDIT 2:
This is how the function looks right now:
private void calcMouseInWorldPosition(float mouseX, float mouseY, Matrix proj, Matrix view) {
    Vector start = vecmath.vector(0, 0, 0);
    Vector end = vecmath.vector(0, 0, 0);

    FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16);
    modelBuffer.put(view.asArray());
    modelBuffer.rewind();
    FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16);
    projBuffer.put(proj.asArray());
    projBuffer.rewind();
    FloatBuffer startBuffer = BufferUtils.createFloatBuffer(16);
    FloatBuffer endBuffer = BufferUtils.createFloatBuffer(16);
    IntBuffer viewBuffer = BufferUtils.createIntBuffer(16);

    //The two calls for the projection and modelView matrix are disabled here,
    //as I use my own matrices in this case
    // glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
    // glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
    glGetInteger(GL_VIEWPORT, viewBuffer);

    //I know this is really ugly and bad, but I know that the height and width is always 600
    //and this is just for testing purposes
    mouseY = 600 - mouseY;

    gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
    gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);

    start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
    end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
    direction = vecmath.vector(end.x() - start.x(), end.y() - start.y(), end.z() - start.z());
}
I'm trying to use my own projection and view matrix, but this only seems to give weirder results.
With the glGet... calls I get this for a click in the upper right corner:
start: (0.97333336, -0.98, -1.0)
end: (0.97333336, -0.98, 1.0)
When I use my own stuff I get this for the same position:
start: (-2.4399707, -0.55425626, -14.202201)
end: (-2.4399707, -0.55425626, -16.198204)
Now I actually need a modelView matrix instead of just the view matrix, but I don't know how I am supposed to get it, since it is altered and created anew in every display call of every object.
But is this really the problem? In this tutorial he says "Normally, to get into clip space from eye space we multiply the vector by a projection matrix. We can go backwards by multiplying by the inverse of this matrix." and in the next step he multiplies again by the inverse of the view matrix, so I thought this is what I should actually do?
EDIT 3:
Here I tried what user42813 suggested:
Matrix view = cam.getTransformation();
view = view.invertRigid();
mouseY = height - mouseY - 1;

//Here I only use these values, because the Z and W values would be 0
//following your suggestion, so there is no use adding them here
float tempX = view.get(0, 0) * mouseX + view.get(1, 0) * mouseY;
float tempY = view.get(0, 1) * mouseX + view.get(1, 1) * mouseY;
float tempZ = view.get(0, 2) * mouseX + view.get(1, 2) * mouseY;

origin = vecmath.vector(tempX, tempY, tempZ);
direction = cam.getDirection();
direction = cam.getDirection();
But now the direction and origin values are always the same:
origin: (-0.04557252, -0.0020000197, -0.9989586)
direction: (-0.04557252, -0.0020000197, -0.9989586)
Ok, I finally managed to work this out; maybe this will help someone.
I found a formula for this and applied it to the coordinates I was getting, which ranged from -1 to 1:
float tempX = (float) (start.x() * 0.1f * Math.tan(Math.PI * 60f / 360));
float tempY = (float) (start.y() * 0.1f * Math.tan(Math.PI * 60f / 360) * height / width);
float tempZ = -0.1f;

direction = vecmath.vector(tempX, tempY, tempZ); //create a new vector with these x, y, z
direction = view.transformDirection(direction); //multiply this new vector by the INVERTED viewMatrix
origin = view.getPosition(); //set the origin to the position values of the matrix (the right column)
I don't really use deprecated OpenGL, but I'll share my thoughts.
First, it would be helpful if you showed us how you build your view matrix.
Second, the view matrix you have is in the local space of the camera. Typically you would multiply your mouseX and (screenHeight - mouseY - 1) by the view matrix (or, I think, the inverse of that matrix - sorry, not sure!); then you have the mouse coordinates in camera space. Then you add the forward vector to the vector created by the mouse, and you have your ray. It would look something like this:
float mouseCoord[] = { mouseX, screenHeight - mouseY - 1, 0, 0 }; /* 0, 0 because we are multiplying by a 4x4 matrix. */
mouseCoord = multiply( ViewMatrix /*Or: inverse(ViewMatrix)*/, mouseCoord );
float ray[] = add( mouseCoord, forwardVector );
I have 3 transformations in the following order and with the following variables:
glTranslate(dirX, dirY, dirZ);
glRotate(angleX, 1, 0, 0);
glRotate(angleY, 0, 1, 0);
With these, I'm able to transform my ModelView in 3D, achieving several effects (translating the object around space, rotating the object around its center, zooming in and out from the origin).
With the same variables, and using gluLookAt(), I want to achieve the last two (rotating around the center of the object, zooming from the origin):
target = object_position;

pos.x = zoom * sin(phi) * cos(theta);
pos.y = zoom * cos(phi);
pos.z = zoom * sin(phi) * sin(theta);
pos += target;

gluLookAt(pos, target, vec3(0, 1, 0)); // up vector is fixed...
The code above creates a 'camera' that looks at the object's center and can rotate around it (using spherical coordinates).
http://mathworld.wolfram.com/SphericalCoordinates.html