Camera and mouse-picking difficulties - opengl

I'm working on a chess game in 3D space. I'm trying to figure out how I could move the pieces around, but the camera and the different mouse modes are giving me a headache.
Here is an example screenshot:
The Idea:
There are 2 camera/mouse input modes. One lets me move freely through the space and look around (an unlocked camera, or FPS camera for short); the other locks the screen in place: I can't move, and mouse movement won't rotate the view (a locked camera, sort of a menu camera without a menu). In locked mode I want to select the square under a piece and move it to a different one, via a ray I cast into the scene toward my cursor position.
What I Have:
A camera class that works as intended, based on this code from learnopengl.com:
https://learnopengl.com/code_viewer_gh.php?code=includes/learnopengl/camera.h
A mouse ray class made according to this tutorial:
https://antongerdelan.net/opengl/raycasting.html
The mouse-ray direction calculation, put into code (the last function returns the direction correctly; I've tested it with a locked camera). This is updated every render:
glm::vec3 NormalizedDeviceCoordinates() {
    float x = (2.f * this->mouseX) / this->frameBufferWidth - 1.0f;
    float y = 1.0f - (2.0f * this->mouseY) / this->frameBufferHeight;
    float z = 1.0f;
    return glm::vec3(x, y, z);
}

glm::vec4 HomogeneousClipCoordinates(glm::vec3 NormalizedDeviceCoords) {
    return glm::vec4(NormalizedDeviceCoords.x, NormalizedDeviceCoords.y, -1.f, 1.f);
}

glm::vec4 EyeCoordinates(glm::vec4 HomogenousClipCoords) {
    glm::vec4 ray_eye = glm::inverse(projectionMatrix) * HomogenousClipCoords;
    return glm::vec4(ray_eye.x, ray_eye.y, -1.f, 0.f);
}

glm::vec3 WorldCoordinates(glm::vec4 EyeCoords) {
    glm::vec3 ray_wor = glm::vec3(glm::inverse(viewMatrix) * EyeCoords);
    ray_wor = glm::normalize(ray_wor);
    return ray_wor;
}

glm::vec3 calculateMouseRay() {
    return WorldCoordinates(EyeCoordinates(HomogeneousClipCoordinates(NormalizedDeviceCoordinates())));
}
I also have a keyboard input function that switches between the 2 modes when the M key is pressed. Differences between unlocked and locked mode:
glfwSetInputMode(this->window, GLFW_CURSOR, GLFW_CURSOR_DISABLED) / glfwSetInputMode(this->window, GLFW_CURSOR, GLFW_CURSOR_NORMAL)
allows camera movement and rotation / does not allow movement or camera rotation
The problem:
I genuinely don't know how to describe it, so I made a video and uploaded it to YouTube.
Here is the link:
https://youtu.be/4s-M6vHxvCc
Now, time for explaining:
The black and white cube you see is my attempt at tracking the direction ray (simply put, I draw the cube and send a transformation matrix to the shader, which transforms it to camera location + ray direction vector). For now the cube is shown in both modes (unlocked and locked).
In the first half of the video I'm in the unlocked mode. You can see me trying to rotate left and right in an attempt to show that the cube is somewhat stuck at this angle and won't move beyond it.
In the second half of the video (after the cube "snaps" to position) I switch to locked mode. You can also see my cursor as well as the cube. They aren't aligned, but the cube is in fact replicating the cursor movement. (It's also worth pointing out that there is an offset between the cursor and the cube's position, likely due to the previous mouse rotation from unlocked mode; I don't know how to account for that.)
Possible solutions/reasons/ideas:
I'm not completely certain of this, but I think there is more than one problem overlapping here.
First, I think the cube is somehow locked to a horizontal plane (unintentionally). If I move with the movement keys, the cube moves along with me, but whenever I move the mouse it always moves along that one plane.
Second, it logically shouldn't be a horizontal plane, but rather the plane the first frame was rendered to (English is not my native language).
Third, if the second assumption is correct, I need to somehow move this plane according to the mouse rotation (it might be better to say camera direction), which I don't know how to do (or at which point I should add it to the equation at all).
Afterword:
You may have noticed the problem with the first half of the video (unlocked mode). Put in video game terms (this is an assumption): I have both the FPS camera (which usually has its "cursor" in the middle of the screen) and some sort of menu camera (which traces the real cursor position) working at the same time. That is bad, because what I want is the cube being drawn in the center of the screen (camera direction), and only when I switch to locked mode should the cube start moving based on the cursor position. But for example purposes you can see that there is a different problem, mentioned above.
I will be grateful to anyone who can answer the question or point me in the right direction. If you need more information, ask away.
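One likely contributing cause worth checking: with GLFW_CURSOR_DISABLED, GLFW reports a virtual cursor position that is unbounded, so after enough mouse motion the values fed into the NDC conversion can leave [-1, 1] and the ray gets pinned at an extreme angle. A minimal sketch of one way to handle it (CameraMode and pickNdc are illustrative names, not part of the code above): in unlocked mode cast the pick ray through the screen center (i.e. along the camera's view direction) and only use the real cursor position in locked mode.

```cpp
enum class CameraMode { Unlocked, Locked };

// In unlocked (FPS) mode, cast the pick ray through the screen center;
// only in locked mode use the real cursor position. This avoids feeding
// GLFW's unbounded virtual cursor coordinates into the NDC conversion.
void pickNdc(CameraMode mode, double mouseX, double mouseY,
             int fbWidth, int fbHeight, float& ndcX, float& ndcY) {
    if (mode == CameraMode::Unlocked) {
        ndcX = 0.0f;   // screen center == camera view direction
        ndcY = 0.0f;
    } else {
        ndcX = static_cast<float>(2.0 * mouseX / fbWidth - 1.0);
        ndcY = static_cast<float>(1.0 - 2.0 * mouseY / fbHeight);
    }
}
```

Resetting the cursor to the window center with glfwSetCursorPos when switching to locked mode may also remove the cursor/cube offset noticed in the video, though that part is an untested guess.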

Related

Cross Product Confusion

I have a question regarding a tutorial that I have been following on the rotation of the camera's view direction in OpenGL.
Whilst I appreciate that prospective respondents are not the authors of the tutorial, I think this is a situation most intermediate-to-experienced graphics programmers have encountered before, so I will seek the advice of members on this topic.
Here is a link to the video to which I refer: https://www.youtube.com/watch?v=7oNLw9Bct1k.
The goal of the tutorial is to create a basic first person camera which the user controls via movement of the mouse.
Here is the function that handles cursor movement (with some variables/members renamed to conform with my personal conventions):
glm::vec2 Movement { OldCursorPosition - NewCursorPosition };

// camera rotates on y-axis when mouse moved left/right (orientation is { 0.0F, 1.0F, 0.0F }):
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.x) / 2, MVP.view.orientation)
                     * glm::vec4(MVP.view.direction, 0.0F);

glm::vec3 RotateAround { glm::cross(MVP.view.direction, MVP.view.orientation) };

/* why is the camera rotating around the cross product of view direction and view orientation
   rather than just rotating around the x-axis when the mouse is moved up/down..? */
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
                     * glm::vec4(MVP.view.direction, 0.0F);

OldCursorPosition = NewCursorPosition;
What I struggle to understand is why obtaining the cross product is even required. What I would naturally expect is for the camera to rotate around the y-axis when the mouse is moved from left to right, and for the camera to rotate around the x-axis when the mouse is moved up and down. I just can't get my head around why the cross product is even relevant.
From my understanding, the cross product will return a vector which is perpendicular to two other vectors; in this case that is the cross product of the view direction and view orientation, but why would one want a cross product of these two vectors? Shouldn't the camera just rotate on the x-axis for up/down movement and then on the y-axis for left/right movement...? What am I missing/overlooking here?
Finally, when I run the program, I can't visually detect any rotation on the z-axis, despite the fact that the rotation axis 'RotateAround' has a z-value greater than or less than 0 on every call to the function subsequent to the first (which suggests that the camera should rotate at least partially on the z-axis).
Perhaps this is just due to my lack of intuition, but if I change the line:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
To:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, glm::vec3(1.0F, 0.0F, 0.0F))
* glm::vec4(MVP.view.direction, 0.0F);
So that the rotation only happens on the x-axis rather than partially on the x-axis and partially on the z-axis, and then run the program, I can't really notice much of a difference in the workings of the camera. It feels like maybe there is a difference, but I can't articulate what it is.
The problem here is frame of reference.
rather than just rotating around x-axis when mouse is moved up/down..?
Which x axis do you mean? If it's an axis of the global frame of reference, or one parallel to it, then yes. If it's the x axis of a frame of reference partially constrained by the camera's orientation, then in general the answer is no. It depends on the order in which the rotations are done and on whether the view matrix is saved between movements.
Given that in the code the view direction is modified by each rotation, it changes over time. If the camera made a 180-degree turn around the x axis, the direction of its local x axis would flip to the opposite one.
If the camera rotates around the y axis (I assume ISO directions for a ground vehicle), the local directions change as well. If the camera rotates around the global y by 90 degrees, then around the global x by 45 degrees, the result is a view tilted 45 degrees sideways.
The order of rotations in the constrained frame of reference for ground-bound vehicles (and probably for the character of a classic 3D shooter) is: around y, around x, around z. For aerial vehicles with airplane-like controls it is: around z, around x, around y. In orbital space, z and x are swapped, if I remember right (z points down).
You have to do the cross product because after multiple mouse moves the camera is now differently oriented. The original x-axis you wanted to rotate around is NOT the same x-axis you want to rotate around now. You must calculate the vector that is currently pointed straight out the side of the camera and rotate around that. This is considered the "right" vector. This is the cross product of the view and up vectors, where view is the "target" vector (down the camera's z axis, where you are looking) and up is straight up out of the camera up the camera's y-axis. These axes must be updated as the camera moves. Calculating the view and up vectors does not require a cross product as you should be applying rotations to these depending on your movements along the way. The view and up should update by rotations, but if you want to rotate around the x-axis (pitch) you must do a cross product.
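The answer above can be made concrete with a few lines of plain C++ (Vec3 and cross here are stand-ins for glm::vec3 and glm::cross): with view (0,0,-1) and up (0,1,0), the right vector is (1,0,0); after a 180-degree yaw the view becomes (0,0,1) and the right vector flips to (-1,0,0). Pitching around the fixed world x-axis would therefore turn the wrong way once the camera has yawed.

```cpp
// Plain stand-ins for glm::vec3 / glm::cross. The camera's "right"
// axis is cross(view, up); it changes as the camera yaws, which is
// why a fixed world x-axis is the wrong pitch axis.
struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```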

Dragging a 3D lever (based on orientation & viewing angle)

In my project (C++/UE4), I have a lever mesh sticking out of the floor. Holding the left mouse button on this lever and moving the mouse initiates a dragging operation. This dragging operation is responsible for calculating the 2D delta mouse movements, and utilizing this data to rotate the lever *in local space*, which can only rotate on a single axis (negative or positive, but still only one axis).
But what if, instead of being in front of the lever, I'm actually behind it? What if I'm on one of its sides? What if the lever is actually sticking out of a wall instead of the floor?... How do I make it so that mouse movements actually rotate the lever appropriate to the angle at which it is viewed from, regardless of the lever's orientation?
To further explain myself...
Here's a list of scenarios, and how I'd like the mouse to control them:
When the lever's on the FLOOR and you are in FRONT of it:
If you move the mouse UP (-Y), it should rotate away from the camera
If you move the mouse DOWN (+Y), it should rotate toward the camera
When the lever's on the FLOOR and you are BEHIND it:
If you move the mouse UP (-Y), it should rotate away from the camera
(which is the opposite world-space direction of when you are in front of it)
If you move the mouse DOWN (+Y), it should rotate toward the camera
(which is the opposite world-space direction of when you are in front of it)
When the lever's on the FLOOR and you are BESIDE it:
If you move the mouse LEFT (-X), it should rotate to the LEFT of the camera
(which is the opposite direction of when you are on the other side of it)
If you move the mouse RIGHT (+X), it should rotate to the RIGHT of the camera
(which is the opposite direction of when you are on the other side of it)
When the lever's on a WALL and you are in FRONT of it:
If you move the mouse UP, it should rotate UP (toward the sky)
If you move the mouse DOWN, it should rotate DOWN (toward the floor)
When the lever's on the WALL and you are BESIDE it:
Same as when it's on the wall and you are in front of it
PLEASE NOTE if it helps at all, that UE4 does have built-in 2D/3D Vector math functions, as well as easy ways to project and deproject coordinates to/from the 3D world or 2D screen. Because of this, I always know the exact world-space and screen-space coordinates of the mouse location, the lever's pivot (base) location, and the lever's handle (top) location, as well as the amount (delta) that the mouse has moved each frame.
Get the pivot of the lever (the point around which it rotates) and project it to screen coordinates. Then, when you first click, store the coordinates of the click.
Now, when you need to know which way to rotate, compute the dot product between the vector from the pivot to the first click and the vector from the pivot to the current location (normalize both vectors before the dot product). This gives you cos(angle) of the mouse movement, and you can take arccos(value) to get the angle and use it to move the lever in 3D. It will be a bit wonky, since the angle on screen is not the same as the projected angle, but it's easier to control this way (if you move the mouse 90 degrees, the lever moves 90 degrees, even if they don't align properly). Play with the setup and see what works best for your project.
Another way to do it: when you first click, store the point at the end of the lever (or better, the point where you clicked on the lever) in 3D space. Use the camera projection plane to move the point in 3D (you can use the camera up vector, after making it orthogonal to the camera view direction, then take view direction cross up to get the right direction). Apply the mouse delta movements to the point, then project it onto the rotation plane and move the lever to align with the projected point (the math is similar to the one above, just with the 3D points instead of the screen projections).
Caution: this doesn't work well if the camera is very close to the plane of rotation since it's not always clear if the lever should move forward or backwards.
I'm not an expert on unreal-engine4(just learning myself) but all these are basic vector math and should be supported well. Check out dot product and cross product on wikipedia since they are super useful for these kind of tricks.
Here's one approach:
When the user clicks on the lever, suppose there is a plane through the pivot of the lever whose normal is the direction from the camera to the pivot. Calculate the intersection point of the cursor's ray and that plane.
FVector rayOrigin;
FVector rayDirection;
FVector cameraPosition;
FVector leverPivotPosition;
FVector planeNormal = (leverPivotPosition-cameraPosition).GetSafeNormal(0.0001f);
float t = ((leverPivotPosition - rayOrigin) | planeNormal) / (planeNormal | rayDirection);
FVector planeHitPosition = rayOrigin + rayDirection * t;
Do a scalar projection of that onto the local top/bottom axis of the lever. Let's assume it's the local up/down axis:
FVector leverLocalUpDirectionNormalized;
float scalarPosition = planeHitPosition | leverLocalUpDirectionNormalized;
Then, in each subsequent frame where the lever is held down, calculate the new scalarPosition for that frame. As scalarPosition increases between frames, the lever should rotate towards its up side; as it decreases, the lever should rotate towards its down side.

How to move an object depending on the camera in OpenGL.

As shown in the image below.
The user moves the ball by changing the x, y, z coordinates, which correspond to right/left, up/down, near/far movements respectively. But when we change the camera from position A to position B, things look weird. Right doesn't look right any more; that's because the ball still moves in the previous coordinate frame, shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect the way its displacement looks?
A simple example: if we place the camera so it is looking down the positive X axis, changes in the z coordinate now look like right and left movements. In reality, changing z should always be near and far.
Thought I would answer it here:
I solved it by simply multiplying the camera's modelview matrix with the ball's coordinates.
Here is the code:
glGetDoublev( GL_MODELVIEW_MATRIX, cam );
matsumX=cam[0]*tx+cam[1]*ty+cam[2]*tz+cam[3];
matsumY=cam[4]*tx+cam[5]*ty+cam[6]*tz+cam[7];
matsumZ=cam[8]*tx+cam[9]*ty+cam[10]*tz+cam[11];
where tx,ty,tz are ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates which change according the camera.
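A note on why the snippet above works: glGetDoublev returns the matrix in column-major order, so indexing cam[0], cam[1], cam[2] walks down a column. The code is therefore applying the transpose of the modelview matrix, which for the rotation part is its inverse, and that is exactly the transform that maps a camera-space displacement (right/up/forward) into world space. A standalone sketch of the same idea (cameraToWorld is an illustrative name; it deliberately ignores the translation part, which is what you usually want for a displacement rather than a position):

```cpp
// Transform a displacement given in camera space (right, up, forward)
// into world space, using the transpose of the modelview rotation
// (the transpose of a pure rotation matrix is its inverse).
// 'mv' is column-major, as returned by glGetDoublev(GL_MODELVIEW_MATRIX, mv).
void cameraToWorld(const double mv[16], double tx, double ty, double tz,
                   double& wx, double& wy, double& wz) {
    wx = mv[0] * tx + mv[1] * ty + mv[2]  * tz;  // column 0 . t
    wy = mv[4] * tx + mv[5] * ty + mv[6]  * tz;  // column 1 . t
    wz = mv[8] * tx + mv[9] * ty + mv[10] * tz;  // column 2 . t
}
```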

How do I find intersection of mouse click and a 3D mesh?

In my program I'm loading in a 3D mesh to view and interact with. The user can rotate and scale the view. I will be using a rotation matrix for the rotation and calling multmatrix to rotate the view, and scaling using glScalef. The user can also paint the mesh, and this is why I need to translate the mouse coordinates to see if it intersects with the mesh.
I've read http://www.opengl.org/resources/faq/technical/selection.htm and the method where I use gluUnproject at the near and far plane and subtracting, and I have some success with it, but only when gluLookAt's position is (0, 0, z) where z can be any reasonable number. When I move the position to say (0, 1, z), it goes haywire and returns an intersection where there is only empty space, and returns no intersection where the mesh is clearly underneath the mouse.
This is how I'm making the ray from the mouse click to the scene:
float hx, hy;
hx = mouse_x;
hy = mouse_y;
GLdouble m1x,m1y,m1z,m2x,m2y,m2z;
GLint viewport[4];
GLdouble projMatrix[16], mvMatrix[16];
glGetIntegerv(GL_VIEWPORT,viewport);
glGetDoublev(GL_MODELVIEW_MATRIX,mvMatrix);
glGetDoublev(GL_PROJECTION_MATRIX,projMatrix);
//unproject to find actual coordinates
gluUnProject(hx,scrHeight-hy,2.0,mvMatrix,projMatrix,viewport,&m1x,&m1y,&m1z);
gluUnProject(hx,scrHeight-hy,10.0,mvMatrix,projMatrix,viewport,&m2x,&m2y,&m2z);
gmVector3 dir = gmVector3(m2x,m2y,m2z) - gmVector3(m1x,m1y,m1z);
dir.normalize();
gmVector3 point;
bool intersected = findIntersection(gmVector3(0,0,5), dir, point);
I'm using glFrustum if that makes any difference.
The findIntersection code is really long and I'm pretty confident it works, so I won't post it unless someone wants to see it. The gist of it is that for every face in the mesh, find intersection between the ray and the plane, then see if the intersection point is inside the face.
I believe that it has to do with the camera's position and look at vector, but I don't know how, and what to do with them so that the mouse clicks on the mesh properly. Can anyone help me with this?
I also haven't yet made the rotation matrix or done anything with glScalef, so can anyone give me insight into this? Like, does unproject account for the multmatrix and glScalef calls when calculating?
Many thanks!
The solution is ray tracing. The ray you shoot is defined by two points: the first is the origin of the camera, the second is the mouse position projected onto the view plane in the scene (the plane you describe with glFrustum). The intersection of this ray with your model is the point where your mouse click hit the model.
It turns out that when making the ray from the camera into the scene using the ray dir, I should have used:
bool intersected = findIntersection(gmVector3(m1x,m1y,m1z), dir, point);
(Notice the different vector being passed to the function.) This solved my problem, and it didn't have anything to do with gluLookAt after all!
Also, for the second part of the question I asked: yes, unproject does take the glScalef and glMultMatrix calls into account.
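The per-face test described above (intersect the ray with the face's plane, then check whether the hit point lies inside the face) is commonly folded into a single step with the Möller-Trumbore algorithm. A self-contained sketch, not the asker's actual findIntersection (P3 and rayTriangle are illustrative names):

```cpp
#include <cmath>

struct P3 { double x, y, z; };

P3 sub(P3 a, P3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
P3 crs(P3 a, P3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
double dt(P3 a, P3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test: returns true and sets t if the ray
// origin + dir * t hits triangle (v0, v1, v2) with t > 0.
bool rayTriangle(P3 origin, P3 dir, P3 v0, P3 v1, P3 v2, double& t) {
    const double eps = 1e-9;
    P3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    P3 p = crs(dir, e2);
    double det = dt(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    P3 s = sub(origin, v0);
    double u = dt(s, p) * inv;                // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;
    P3 q = crs(s, e1);
    double v = dt(dir, q) * inv;              // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;
    t = dt(e2, q) * inv;
    return t > eps;
}
```

Running this over every face and keeping the hit with the smallest t gives the visible (front-most) intersection.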

OpenGL Rotations around World Origin when they should be around Local Origin

I'm implementing a simple camera system in OpenGL. I set up gluPerspective under the projection matrix and then use gluLookAt on the ModelView matrix. After this I have my main render loop which checks for keyboard events and, if any of the arrow keys are pressed, modifies angular and forward speeds (I only rotate through the y axis and move through the z (forwards)). Then I move the view using the following code (deltaTime is the amount of time since the last frame was rendered in seconds, in order to decouple movement from framerate):
//place our camera
newTime = RunTime(); //get the time since app start
deltaTime = newTime - time; //get the time since the last frame was rendered
time = newTime;
glRotatef(view.angularSpeed*deltaTime,0,1,0); //rotate
glTranslatef(0,0,view.forwardSpeed*deltaTime); //move forwards
//draw our vertices
draw();
//swap buffers
Swap_Buffers();
Then the code loops around again. My draw algorithm begins with a glPushMatrix() and ends in a glPopMatrix().
Each call to glRotatef() and glTranslatef() pushes the view forwards by the forwards speed in the direction of view.
However, when I run the code my object is drawn in the correct place, but when I move, the movement uses the orientation of the world origin (0,0,0, facing along the Z axis) as opposed to the local orientation (where I'm pointing), and when I rotate, the rotation is done about (0,0,0) and not the position of the camera.
I end up with this strange effect of my camera orbiting (0,0,0) as opposed to rotating on the spot.
I do not call glLoadIdentity() at all anywhere inside the loop, and I am sure that the Matrix Mode is set to GL_MODELVIEW for the entire loop.
Another odd effect: if I put a glLoadIdentity() call inside the draw() function (between the PushMatrix and PopMatrix calls), the screen just goes black, and no matter where I look I can't find the object I draw.
Does anybody know what I've messed up in order to make this orbit (0,0,0) instead of rotate on the spot?
glRotatef() rotates the ModelView matrix around the world origin, so to rotate around an arbitrary point you need to translate your matrix so that the point is at the origin, rotate, and then translate back to where you started.
I think what you need is this
float x, y, z;//point you want to rotate around
glTranslatef(0,0,view.forwardSpeed*deltaTime); //move forwards
glTranslatef(x,y,z); //translate to origin
glRotatef(view.angularSpeed*deltaTime,0,1,0); //rotate
glTranslatef(-x,-y,-z); //translate back
//draw our vertices
draw();
//swap buffers
Swap_Buffers();
Swap your rotate and translate calls around :)
Since they post-multiply the matrix stack, the last call made is the 'first' to be applied, conceptually, if you care about that sort of thing.
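The post-multiplication rule is easy to confirm numerically in 2D: calling glTranslatef and then glRotatef yields M = T * R, so a drawn vertex is rotated first and translated second, i.e. the last call issued is the first applied. A sketch with plain functions standing in for the matrix stack (the function names are illustrative):

```cpp
struct P2 { double x, y; };

P2 rotate90(P2 p) { return { -p.y, p.x }; }
P2 translate(P2 p, double dx, double dy) { return { p.x + dx, p.y + dy }; }

// glTranslatef(5,0,0); glRotatef(90,0,0,1); draw(p);
// => M = T * R, so the vertex is rotated first, then translated.
P2 translateThenRotateCalls(P2 p) { return translate(rotate90(p), 5.0, 0.0); }

// glRotatef(90,0,0,1); glTranslatef(5,0,0); draw(p);
// => M = R * T, so the vertex is translated first, then rotated.
P2 rotateThenTranslateCalls(P2 p) { return rotate90(translate(p, 5.0, 0.0)); }
```

The two call orders send the same point to very different places, which is exactly the orbit-vs-rotate-in-place difference described in the question.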
I think you first have to translate your camera to the point (0,0,0), then rotate, then translate it back.