OpenGL weird rotation result with mouse - C++

So I'm working on a 3D painting application. I managed to make a basic renderer and model loader.
I created a camera system that I can use to navigate around the scene with the mouse/keyboard, but that's not what I want, so I made the camera static and now I'm trying to rotate/pan/zoom the model itself. I managed to implement panning and zooming: for panning I change the x/y position according to the mouse, and for zooming I add to or subtract from the z-axis according to the mouse scroll.
But now I want to be able to rotate the 3D model with the mouse. Example: when I hold the right mouse button and move the mouse up, the model should rotate on its x-axis (pitch), and if I move the mouse left/right it should rotate on its y-axis (yaw). I just couldn't get it to work.
In the code below I get the xpos/ypos of the cursor on the screen, calculate the offset, and try to rotate the cube. The problem is that I can't rotate the cube cleanly: if I move the mouse up, the model rotates on both the x-axis and the y-axis with a little tilt, and vice versa.
This is the code in my rendering loop:
shader.use();
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
    (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.01f, 100.0f);
shader.setMat4f("projection", projection);
glm::mat4 view = camera.getViewMatrix();
shader.setMat4f("view", view);
glm::mat4 modelm = glm::mat4(1.0f);
modelm = glm::translate(modelm, object_translation);
// Should rotate the cube according to mouse movement.
// Note: glm::vec3(0.0f) is the zero vector, which is not a valid rotation axis.
//modelm = glm::rotate(modelm, glm::radians(angle), glm::vec3(0.0f));
shader.setMat4f("model", modelm);
renderer.draw(model, shader);
This is the callback where I handle the mouse movement:
void mouseCallback(GLFWwindow* window, double xpos, double ypos)
{
    if (is_rotating)
    {
        if (is_first_mouse)
        {
            lastX = xpos;
            lastY = ypos;
            is_first_mouse = false;
        }
        // xpos and ypos are the cursor coordinates passed in by this callback
        double xoffset = xpos - lastX;
        double yoffset = lastY - ypos;
        lastX = xpos;
        lastY = ypos;
        object_rotation.y += xoffset; // If I use yoffset the rotation flips
        object_rotation.x += yoffset;
        rotation_angle += (xoffset + yoffset) * 0.25f;
    }
}
Mouse panning works fine too; I can't say the same for the rotation.

I fixed it. After some research and asking around, I was told that you can only apply one rotation at a time, and I was trying to do both the x-axis and y-axis rotations in the same call. Once I separated the two rotations, so the object rotates on the x-axis first and then on the y-axis, the problem was solved.
The code should be like this:
shader.use();
glm::mat4 projection = glm::perspective(glm::radians(45.0f), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.01f, 100.0f);
shader.setMat4f("projection", projection);
glm::mat4 view = camera.getViewMatrix();
shader.setMat4f("view", view);
glm::mat4 modelm = glm::mat4(1.0f);
modelm = glm::scale(modelm, glm::vec3(1.0f));
modelm = glm::translate(modelm, glm::vec3(0.0f, 0.0f, -5.0f));
// Handle x-axis rotation (glm::rotate normalizes the axis internally, so only
// the sign of object_rotation_x matters; a zero value makes the axis degenerate)
modelm = glm::rotate(modelm, glm::radians(object_orientation_angle_x), glm::vec3(object_rotation_x, 0.0f, 0.0f));
// Handle y-axis rotation
modelm = glm::rotate(modelm, glm::radians(object_orientation_angle_y), glm::vec3(0.0f, object_rotation_y, 0.0f));
shader.setMat4f("model", modelm);
renderer.draw(model, shader);
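For completeness, a minimal sketch of a callback that feeds those two angles (the 0.25f sensitivity factor is an arbitrary choice, and object_rotation_x/object_rotation_y are assumed to stay at 1.0f so the axes keep their direction):
void mouseCallback(GLFWwindow* window, double xpos, double ypos)
{
    if (!is_rotating)
        return;
    if (is_first_mouse)
    {
        lastX = xpos;
        lastY = ypos;
        is_first_mouse = false;
    }
    double xoffset = xpos - lastX;
    double yoffset = lastY - ypos;
    lastX = xpos;
    lastY = ypos;
    // Horizontal motion drives yaw, vertical motion drives pitch.
    object_orientation_angle_y += (float)xoffset * 0.25f;
    object_orientation_angle_x += (float)yoffset * 0.25f;
}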

You are storing your rotation as Euler angles inside object_rotation.
I'd advise you to use:
glm::mat4 rotation = glm::eulerAngleYX(object_rotation.y, object_rotation.x); // applies pitch (X) first, then yaw (Y)
or
glm::mat4 rotation = glm::eulerAngleXY(object_rotation.x, object_rotation.y); // applies yaw (Y) first, then pitch (X)
In your case both should do the job. In the future, I'd advise you to store this information inside your camera (eye, up, center) instead of your object; everything becomes simpler.
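A minimal sketch of how that plugs into the render loop from above (glm::eulerAngleYX lives in <glm/gtx/euler_angles.hpp>; object_rotation is assumed to hold the accumulated angles in radians):
glm::mat4 modelm = glm::mat4(1.0f);
modelm = glm::translate(modelm, object_translation);
// Convert with glm::radians first if the callback accumulates degrees.
modelm = modelm * glm::eulerAngleYX(object_rotation.y, object_rotation.x);
shader.setMat4f("model", modelm);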

Related

Quaternion-based First Person View Camera

I have been learning OpenGL by following the tutorial located at https://paroj.github.io/gltut/.
Having passed the basics, I got a bit stuck understanding quaternions and their relation to spatial orientation and transformations, especially from world space to camera space and vice versa. In the chapter Camera-Relative Orientation, the author makes a camera which rotates a model in world space relative to the camera orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess would be that if we applied the offset to camera space, we would get the first-person camera. Is this correct? Instead, the offset is applied to the model in world space, making the spaceship spin relative to that space, and not to camera space. We just observe it spin from camera space.
Inspired by at least some understanding of quaternions (or so I thought), I tried to implement the first person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
Position is modified in reaction to keyboard actions, while the orientation changes due to mouse movement on screen.
Note: GLM overloads * operator for glm::quat * glm::vec3 with the relation for rotating a vector by a quaternion (more compact form of v' = qvq^-1)
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
Orientation and position information is passed to glm::lookAt matrix for constructing the world-to-camera transformation, like so:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining model, view and projection matrices and passing the result to the vertex shader displays everything okay - the way one would expect to see things from a first-person POV. However, things get messy when I add mouse movements, tracking the amount of movement in the x and y directions. I want to rotate around the world y-axis and the local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What would the problem be here?
I admit, I don't have enough knowledge to debug the mouse movement code properly; I mainly followed the rule saying "right multiply to apply the offset in world space, left multiply to do it in camera space."
I feel like I know things half-way, drawing conclusions from a plethora of e-resources on the subject, while getting more educated and more confused at the same time.
Thanks for any answers.
To rotate a glm quaternion representing orientation:
//Precomputation:
//pitch (rot around x in radians),
//yaw (rot around y in radians),
//roll (rot around z in radians)
//are computed/incremented by mouse/keyboard events
To compute the view matrix:
void CameraFPSQuaternion::UpdateView()
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
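A hypothetical per-frame usage, borrowing the setMat4f shader wrapper from the first question (the names here are assumptions, and viewMatrix is assumed to be accessible):
camera.UpdateView();                        // recompute viewMatrix from pitch/yaw
shader.setMat4f("view", camera.viewMatrix); // upload to the shader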
If you want to store the quaternion, then you recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around the camera's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));
    m_orientation = glm::normalize(qPitch) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
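A matching RotateYaw is not part of the original answer, but following the qPitch * qYaw convention above it would right-multiply, which applies the rotation about the world Y axis:
void CameraFPSQuaternion::RotateYaw(float rads) // rotate around the world Y axis
{
    glm::quat qYaw = glm::angleAxis(rads, glm::vec3(0, 1, 0));
    // Right-multiplying keeps the combined orientation in the
    // qPitch * qYaw form that UpdateView builds.
    m_orientation = m_orientation * glm::normalize(qYaw);

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}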
If you want to give a rotation speed around a given axis, you use slerp:
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll
    glm::quat m_d_orientation = qPitch * qYaw;

    // Interpolate from the identity quaternion (w = 1, not the zero quaternion)
    // toward the per-second rotation delta; glm::mix on quaternions
    // interpolates spherically.
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), m_d_orientation, deltaTimeSeconds);

    m_orientation = glm::normalize(delta) * m_orientation;
    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
The problem lay with the usage of glm::lookAt for constructing the view matrix. Instead, I am now constructing the view matrix like so:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
For translation, I now rotate the movement axes by the inverse (conjugate) of the orientation instead of the orientation itself:
glm::quat invOrient = glm::conjugate(orientation);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else is the same, apart from some further offset quaternion normalizations in the mouse movement code.
The camera now behaves and feels like a first-person camera.
I still don't properly understand the difference between view matrix and lookAt matrix, if there is any. But that's the topic for another question.
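They build the same kind of matrix; here is a sketch (not from the original post) comparing the two forms, assuming orientation stores the world-to-camera rotation as in the code above:
// Form used above: rotation from the quaternion, then translation.
glm::mat4 view1 = glm::mat4_cast(orientation)
                * glm::translate(glm::mat4(1.0f), -position);

// glm::lookAt builds the same rotation * translation matrix from an eye
// point, a target point, and an up vector. With a consistent orientation,
// the camera's world-space forward/up axes come from the inverse rotation:
glm::quat invOrient = glm::conjugate(orientation);
glm::vec3 forward = invOrient * glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 up      = invOrient * glm::vec3(0.0f, 1.0f, 0.0f);
glm::mat4 view2 = glm::lookAt(position, position + forward, up); // equal to view1 up to float error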

Arcball camera inverting at 90 deg azimuth

I'm attempting to implement an arcball style camera. I use glm::lookAt to keep the camera pointed at a target, and then move it around the surface of a sphere using azimuth/inclination angles to rotate the view.
I'm running into an issue where the view gets flipped upside down when the azimuth approaches 90 degrees.
Here's the relevant code:
Get projection and view matrices. Runs in the main loop:
void Visual::updateModelViewProjection()
{
    model = glm::mat4();
    projection = glm::mat4();
    view = glm::mat4();

    projection = glm::perspective
    (
        (float)glm::radians(camera.Zoom),
        (float)width / height, // aspect ratio
        0.1f,                  // near clipping plane
        10000.0f               // far clipping plane
    );

    view = glm::lookAt(camera.Position, camera.Target, camera.Up);
}
Mouse move event, for camera rotation
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (leftMousePressed)
    {
        ...
    }
    if (rightMousePressed)
    {
        GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
        GLfloat yoffset = (cursorPrevY - ypos) / 4.0;

        camera.inclination += yoffset;
        camera.azimuth -= xoffset;

        if (camera.inclination > 89.0f)
            camera.inclination = 89.0f;
        if (camera.inclination < 1.0f)
            camera.inclination = 1.0f;
        if (camera.azimuth > 359.0f)
            camera.azimuth = 359.0f;
        if (camera.azimuth < 1.0f)
            camera.azimuth = 1.0f;

        float radius = glm::distance(camera.Position, camera.Target);
        camera.Position[0] = camera.Target[0] + radius * cos(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
        camera.Position[1] = camera.Target[1] + radius * sin(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
        camera.Position[2] = camera.Target[2] + radius * cos(glm::radians(camera.inclination));
        camera.updateCameraVectors();
    }
    cursorPrevX = xpos;
    cursorPrevY = ypos;
}
Calculate camera orientation vectors
void updateCameraVectors()
{
    Front = glm::normalize(Target - Position);
    Right = glm::rotate(glm::normalize(glm::cross(Front, {0.0, 1.0, 0.0})), glm::radians(90.0f), Front);
    Up = glm::normalize(glm::cross(Front, Right));
}
I'm pretty sure it's related to the way I calculate my camera's right vector, but I cannot figure out how to compensate.
Has anyone run into this before? Any suggestions?
It's a common mistake to use lookAt for rotating the camera. You should not. The backward/right/up directions are the columns of your view matrix. If you already have them then you don't even need lookAt, which tries to redo some of your calculations. On the other hand, lookAt doesn't help you in finding those vectors in the first place.
Instead build the view matrix first as a composition of translations and rotations, and then extract those vectors from its columns:
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
    ...
    if (rightMousePressed)
    {
        GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
        GLfloat yoffset = (cursorPrevY - ypos) / 4.0;

        // std::clamp needs <algorithm> (C++17); glm::column needs <glm/gtc/matrix_access.hpp>
        camera.inclination = std::clamp(camera.inclination + yoffset, -90.f, 90.f);
        camera.azimuth = fmodf(camera.azimuth + xoffset, 360.f);

        // Build the camera-to-world matrix as a composition of translations and rotations...
        view = glm::mat4();
        view = glm::translate(view, glm::vec3(0.f, 0.f, camera.radius)); // camera.radius controls the distance from the target
        view = glm::rotate(view, glm::radians(camera.inclination + 90.f), glm::vec3(1.f, 0.f, 0.f));
        view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f, 0.f, 1.f));
        view = glm::translate(view, camera.Target);

        // ...then extract the direction vectors from its columns.
        camera.Right = glm::vec3(glm::column(view, 0));
        camera.Up = glm::vec3(glm::column(view, 1));
        camera.Front = -glm::vec3(glm::column(view, 2)); // minus because the OpenGL camera looks towards negative Z
        camera.Position = glm::vec3(glm::column(view, 3));

        // Invert the camera-to-world matrix to get the world-to-camera (view) matrix.
        view = glm::inverse(view);
    }
    ...
}
Then remove the code that calculates view and the direction vectors from updateModelViewProjection and updateCameraVectors.
Disclaimer: this code is untested. You might need to fix a minus sign somewhere, order of operations, or the conventions might mismatch (Z is up or Y is up, etc...).

Camera rotation and orientation in variable gravity

I'm attempting to implement a camera controller for a first person, mouse-look based camera for OpenGL. This is a simple problem when the camera is always oriented normally (camera up vector = world Y axis). However, I'm having real trouble getting everything working properly with a camera that can be used seamlessly for any orientation. The purpose is to allow a player to move around an entire planet. An additional requirement is that the direction remain the same relative to the orientation as the camera's orientation changes. An example would be, if you're walking around a planet, the direction remains the same relative to the ground, so as you go "down" along the side from a pole, the direction is also automatically rotated.
So far, I've attempted a number of different things to get this working, but as I see it, there should be two different ways of doing it. The first is to do a regular camera rotation based on yaw and pitch angles from the world axes, and then transform the resulting look direction by the camera orientation to obtain the final look direction. The second approach is to rotate the camera with yaw and pitch angles based on calculated up and right vectors. The up vector is easy here; it's just the orientation. However, I haven't found any right vector that works correctly.
OK, here's the code for these two approaches.
Common code
// m_orientation calculated from planet center to current position
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    m_vertical = -MAX_VERTICAL;
}
// code from either implementation
m_view = glm::lookAt(m_position, m_position + m_direction, m_orientation);
First approach with yaw, pitch about world axes and then transform
// check for m_orientation != WORLD_UP...
glm::vec3 axis = glm::normalize(glm::cross(WORLD_UP, m_orientation));
float angle_degrees = acosf(m_orientation.y) * RADS_TO_DEGREES;
glm::mat4 trans = glm::rotate(glm::mat4(), angle_degrees, axis);

// can also be determined with two rotation matrices about world axes; the end result is identical
m_direction = glm::vec3(cosf(m_vertical) * sinf(m_horizontal),
                        sinf(m_vertical),
                        cosf(m_vertical) * cosf(m_horizontal));
m_direction = glm::vec3(trans * glm::vec4(m_direction, 0.0f)); // w = 0: direction, not position
Second approach with yaw and pitch about appropriate up and right vectors
m_right = ??? // tried literally everything
glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal, m_orientation);
glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical, m_right);
glm::mat4 trans = yaw * pitch;
m_direction = glm::vec3(trans[2]); // z axis
OK, so here's the problem. The first approach works almost perfectly, but near the south pole of a planet (within ~15 degrees of orientation = (0,-1,0); the effect gets stronger the closer you are), the camera is automatically rotated toward the south pole as the orientation changes. So if the camera orientation does not change, the camera works perfectly even near the south pole. Any change in orientation results in the camera rotating toward the south pole; the more the orientation changes, the more the camera rotates. I have tried removing either the pitch or the yaw from the world-axis camera rotation, and this effect appears only with the pitch calculation included. With only yaw, the camera behaves perfectly (lacking any pitch control, of course). As far as I can tell, my transformation to go from regular up = (0,1,0) to the current orientation is incorrect. Any help on that?
The other way of doing things appears to work somewhat correctly, but I simply have not found a good right vector. Everything I've tried results in strange behavior of both horizontal and vertical movements. The most obvious solution, taking the cross product of the previous frame's direction and the current orientation to produce the right vector, doesn't work. Any suggestions for a good right vector?
I'm also happy to see completely different solutions to this problem. I know it's possible, but no amount of searching has given me a good solution. Thanks very much in advance.
Edit 1: Tried a few more things in response to Paweł Stawarz
Results in incorrect orientation of camera and weird mouse movement. I made sure my matrix multiplication was in the correct order. I also tried the transpose.
m_view = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), m_direction, m_up);
m_view = trans * m_view; //trans is rotation from orientation=(0,1,0) to orientation=m_orientation
Results in the same problem as previously, with the camera rotating toward the south pole by itself. Also the vertical mouse rotation is not correct; it causes the camera to go in circles.
m_view = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), m_direction, m_up);
m_view = trans * m_view;
m_direction = glm::vec3(m_view[2]);
m_view = glm::lookAt(m_position, m_direction + m_position, m_orientation);
Edit 2: Using the RIGHT vector method with no transformation between orientations works a little better. However, it causes the camera yaw to oscillate wildly when the pitch is near vertical (even when still at least 5 degrees away from vertical). In addition, the range of motion for pitch is not adjusted by the orientation, so, for example, on the side of the planet vertical motion is restricted to the arc from directly in front of you to directly behind you (~(0,1,0) to ~(0,-1,0)).
glm::mat4 yaw = glm::rotate(glm::mat4(), m_horizontal * ONEEIGHTY_PI, m_orientation);
glm::mat4 pitch = glm::rotate(glm::mat4(), m_vertical * -ONEEIGHTY_PI, m_right);
glm::mat4 cam = pitch * yaw;
m_right = glm::vec3(cam[0]);
m_up = glm::vec3(cam[1]);
m_direction = glm::vec3(cam[2]);
m_view = glm::lookAt(m_position, m_direction + m_position, m_up);
m_vp = m_perspective * m_view;
Solved it. Needed a different transformation. See here for a pretty good explanation.
glm::mat4 trans;
float factor = 1.0f;
float real_vertical = vertical;
m_horizontal += horizontal;
m_vertical += vertical;
while (m_horizontal > TWO_PI) {
    m_horizontal -= TWO_PI;
}
while (m_horizontal < -TWO_PI) {
    m_horizontal += TWO_PI;
}
if (m_vertical > MAX_VERTICAL) {
    m_vertical = MAX_VERTICAL;
}
else if (m_vertical < -MAX_VERTICAL) {
    m_vertical = -MAX_VERTICAL;
}

// Mouse-look rotation about the world axes: yaw about Y, then pitch about X
glm::quat world_axes_rotation = glm::angleAxis(m_horizontal * ONEEIGHTY_PI, glm::vec3(0.0f, 1.0f, 0.0f));
world_axes_rotation = glm::normalize(world_axes_rotation);
world_axes_rotation = glm::rotate(world_axes_rotation, m_vertical * ONEEIGHTY_PI, glm::vec3(1.0f, 0.0f, 0.0f));

// Re-orthogonalize the pole direction against the current orientation (Gram-Schmidt step)
m_pole = glm::normalize(m_pole - glm::dot(m_orientation, m_pole) * m_orientation);

// Build the local frame: X = pole, Y = orientation (up), Z = their cross product
glm::mat4 local_transform;
local_transform[0] = glm::vec4(m_pole.x, m_pole.y, m_pole.z, 0.0f);
local_transform[1] = glm::vec4(m_orientation.x, m_orientation.y, m_orientation.z, 0.0f);
glm::vec3 tmp = glm::cross(m_pole, m_orientation);
local_transform[2] = glm::vec4(tmp.x, tmp.y, tmp.z, 0.0f);
local_transform[3] = glm::vec4(m_position.x, m_position.y, m_position.z, 1.0f);

world_axes_rotation = glm::normalize(world_axes_rotation);
m_view = local_transform * glm::mat4_cast(world_axes_rotation);

// Extract the camera axes from the camera-to-world matrix, then invert it
// to get the world-to-camera (view) matrix.
m_direction = -1.0f * glm::vec3(m_view[2]);
m_up = glm::vec3(m_view[1]);
m_right = glm::vec3(m_view[0]);
m_view = glm::inverse(m_view);
If we keep things simple, by using the standard approach:
- Move the camera to the current player position
- Rotate it towards where the player is looking
the camera rotation is described by:
- the UP vector, which is the normalized vector that starts at (p0x, p0y, p0z) (where p0 is the center of the planet) and goes through (p1x, p1y, p1z) (where p1 is the player's current position);
- the RIGHT vector, which is perpendicular to the UP vector and to the direction the player is looking - the LOOK vector (in the case where he's looking straight ahead, perpendicular to his direction).
Since the UP vector can be calculated straight from the player's current position, you only have to get the LOOK and RIGHT vectors. Both are cross products of corresponding other vectors, as sketched below.
Note also that allowing the player to look up/down and pan his head can (and probably will) in fact change the UP vector.
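A minimal sketch of those cross products (p0, p1, and look are hypothetical names for the planet center, the player position, and the current view direction):
// p0 = planet center, p1 = player position, look = current view direction (all glm::vec3)
glm::vec3 up    = glm::normalize(p1 - p0);               // from the planet center through the player
glm::vec3 right = glm::normalize(glm::cross(look, up));  // perpendicular to both LOOK and UP
look            = glm::normalize(glm::cross(up, right)); // re-orthogonalized LOOK, parallel to the ground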

Window coordinates to camera angles?

So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
    mHorizontalAngle += offsetHorizontalAngle;
    mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
    mVerticalAngle += offsetVerticalAngle;
    mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
    return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get the actual angles in my example game. All I have is the current and previous mouse location in window coordinates; how can I get proper angles from that?
EDIT: on second thought, my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees. So how do I accumulate the angles properly? I can't just sum them up endlessly.
Take a cross section of the viewing frustum (in the original figure, the blue circle is your mouse position):
theta is half of your (horizontal) FOV
p is your projection plane distance (don't worry - it will cancel out)
From simple ratios it is clear that the mouse's offset d on the projection plane satisfies d / (p * tan(theta)) = 2*x/w - 1, for a mouse at window coordinate x in a window of width w. But from simple trigonometry, tan(psi) = d / p. So:
psi = atan((2*x/w - 1) * tan(theta))
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle phi:
phi = atan((1 - 2*y/h) * tan(theta) / A)
Where A is your aspect ratio (width / height) and h is the window height.
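Those formulas translate directly into code; a small sketch (windowXToAngle is a hypothetical helper, and w and halfFov stand for the window width and half the horizontal FOV in radians):
#include <cmath>

// Convert a window x-coordinate into a horizontal camera angle (radians).
float windowXToAngle(float x, float w, float halfFov)
{
    return std::atan((2.0f * x / w - 1.0f) * std::tan(halfFov));
}

// Per mouse move, the yaw offset is the difference between the two angles:
// float offsetHorizontalAngle = windowXToAngle(xpos, w, halfFov)
//                             - windowXToAngle(lastX, w, halfFov);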

Zoom in on current mouse position in OpenGL using GLM functionality

I'm struggling with the task of zooming in on the current mouse position in OpenGL. I've tried a lot of different things and read other posts on this, but I couldn't adapt the possible solutions to my specific problem. As far as I understand it, you have to get the current window coordinates of the mouse cursor, then unproject them to get world coordinates, and finally translate to those world coordinates.
To find the current mouse positions, I use the following code in my GLUT mouse callback function every time the right mouse button is clicked.
if (button == 2)
{
    mouse_current_x = x;
    mouse_current_y = y;
    ...
Next up, I unproject the current mouse positions in my display function before setting up the ModelView and Projection matrices, which also seems to work perfectly fine:
// Unproject Window Coordinates
float mouse_current_z;
glReadPixels(mouse_current_x, mouse_current_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3(mouse_current_x, mouse_current_y, mouse_current_z);
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, (float)width, (float)height);
glm::vec3 worldCoordinates = glm::unProject(windowCoordinates, modelViewMatrix, projectionMatrix, viewport);
printf("(%f, %f, %f)\n", worldCoordinates.x, worldCoordinates.y, worldCoordinates.z);
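One thing worth double-checking here (a common pitfall, not necessarily your bug): GLUT reports cursor positions with the origin at the top-left, while glReadPixels and glm::unProject expect window coordinates with the origin at the bottom-left, so the y-coordinate may need flipping first:
// Possible fix if the depth read or unproject looks wrong:
float flipped_y = (float)height - (float)mouse_current_y - 1.0f;
glReadPixels(mouse_current_x, (GLint)flipped_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &mouse_current_z);
glm::vec3 windowCoordinates = glm::vec3((float)mouse_current_x, flipped_y, mouse_current_z);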
Now the translation is where the trouble starts. Currently I'm drawing a cube with dimensions (dimensionX, dimensionY, dimensionZ) and translate to the center of that cube, so my zooming happens to the center point as well. I'm achieving zooming by translating in z-direction (dolly):
// Set the ModelView matrix
modelViewMatrix = glm::mat4(1.0); // Start with the identity as the transformation matrix
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(0.0, 0.0, -translate_z)); // Zoom in or out by translating in z-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene in x-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene in y-direction based on user input
modelViewMatrix = glm::rotate(modelViewMatrix, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by 90 degrees in negative x-direction for a frontal look at the scene
modelViewMatrix = glm::translate(modelViewMatrix, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to the center of the cube
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
I tried to replace the translation to the center of the cube by translating to the worldCoordinates vector, but this didn't work. I also tried to scale the vector by width or height.
Am I missing out on some essential step here?
Maybe this won't work in your case, but to me this seems like the best way to handle it: use gluLookAt() to look at the xyz position of the mouse click that you have already found, then change gluPerspective() to a smaller angle of view to achieve the actual zoom.
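Since the question already uses GLM rather than GLU, here is a sketch of the same idea with the GLM equivalents (eyePosition, zoomFactor, and baseFov are assumptions; worldCoordinates is the unprojected point from above):
// Look at the unprojected click point, then zoom by narrowing the field of view.
glm::mat4 viewMatrix = glm::lookAt(eyePosition, worldCoordinates, glm::vec3(0.0f, 1.0f, 0.0f));
float baseFov = 45.0f;            // assumed base field of view in degrees
float fov = baseFov / zoomFactor; // zoomFactor > 1 zooms in
glm::mat4 projectionMatrix = glm::perspective(glm::radians(fov), (float)width / height, 0.1f, 10000.0f);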