Dragging a 3D lever (based on orientation & viewing angle) - C++

In my project (C++/UE4), I have a lever mesh sticking out of the floor. Holding the left mouse button on this lever and moving the mouse initiates a dragging operation. This dragging operation is responsible for calculating the 2D delta mouse movements, and utilizing this data to rotate the lever *in local space*, which can only rotate on a single axis (negative or positive, but still only one axis).
But what if, instead of being in front of the lever, I'm actually behind it? What if I'm off to one of its sides? What if the lever is sticking out of a wall instead of the floor? How do I make mouse movements rotate the lever appropriately for the angle from which it is viewed, regardless of the lever's orientation?
To further explain myself...
Here's a list of scenarios, and how I'd like the mouse to control them:
When the lever's on the FLOOR and you are in FRONT of it:
If you move the mouse UP (-Y), it should rotate away from the camera
If you move the mouse DOWN (+Y), it should rotate toward the camera
When the lever's on the FLOOR and you are BEHIND it:
If you move the mouse UP (-Y), it should rotate away from the camera
(which is the opposite world-space direction of when you are in front of it)
If you move the mouse DOWN (+Y), it should rotate toward the camera
(which is the opposite world-space direction of when you are in front of it)
When the lever's on the FLOOR and you are BESIDE it:
If you move the mouse LEFT (-X), it should rotate to the LEFT of the camera
(which is the opposite direction of when you are on the other side of it)
If you move the mouse RIGHT (+X), it should rotate to the RIGHT of the camera
(which is the opposite direction of when you are on the other side of it)
When the lever's on a WALL and you are in FRONT of it:
If you move the mouse UP, it should rotate UP (toward the sky)
If you move the mouse DOWN, it should rotate DOWN (toward the floor)
When the lever's on the WALL and you are BESIDE it:
Same as when it's on the wall and you are in front of it
PLEASE NOTE, if it helps at all, that UE4 has built-in 2D/3D vector math functions, as well as easy ways to project and deproject coordinates to/from the 3D world or the 2D screen. Because of this, I always know the exact world-space and screen-space coordinates of the mouse location, the lever's pivot (base) location, and the lever's handle (top) location, as well as the amount (delta) the mouse has moved each frame.

Get the pivot of the lever (the point around which it rotates) and project it to screen coordinates. When you first click, store the coordinates of the click.
When you need to know which way to rotate, compute the dot product between the vector from the pivot to the first click and the vector from the pivot to the current mouse location (normalize both vectors before the dot product). This gives you cos(angle) of the mouse's movement, and you can take arccos(value) to get the angle and use it to move the lever in 3D. It will be a bit wonky, since the angle on screen is not the same as the projected angle, but it's easier to control this way (if you move the mouse 90 degrees, the lever moves 90 degrees, even if they don't align perfectly). Play with the setup and see what works best for your project.
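That first approach can be sketched in plain C++ like this. The Vec2 type and function name are placeholders, not UE4 API; note that arccos alone gives an unsigned angle, so the sign of the 2D cross product is used here to supply the rotation direction:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Signed angle (radians) between (pivot -> first click) and
// (pivot -> current mouse), both in screen space. The z component of the
// 2D cross product tells which way around the pivot the mouse travelled.
float SignedScreenAngle(Vec2 pivot, Vec2 firstClick, Vec2 current) {
    Vec2 a{ firstClick.x - pivot.x, firstClick.y - pivot.y };
    Vec2 b{ current.x - pivot.x,    current.y - pivot.y };
    float lenA = std::sqrt(a.x * a.x + a.y * a.y);
    float lenB = std::sqrt(b.x * b.x + b.y * b.y);
    if (lenA == 0.f || lenB == 0.f) return 0.f;
    float cosAngle = (a.x * b.x + a.y * b.y) / (lenA * lenB);
    cosAngle = std::fmax(-1.f, std::fmin(1.f, cosAngle)); // clamp for acos
    float angle = std::acos(cosAngle);
    float cross = a.x * b.y - a.y * b.x; // z of the 2D cross product
    return cross < 0.f ? -angle : angle;
}
```

Feed the returned angle (scaled however feels right) into the lever's single-axis local rotation.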
Another way to do it is this: When you first click you store the point of the end of the lever (or even better the point where you clicked on the lever) in 3d space. You use the camera projection plane to move the point in 3d (you can use camera up vector, after you make it orthogonal to the camera view direction, then take the view direction cross up vector to get the right direction). You apply the mouse delta movements to the point, then project it into the rotation plane and move the lever to align the point to the projected one (the math is similar to the one above, just use the 3d points instead of the screen projections).
Caution: this doesn't work well if the camera is very close to the plane of rotation since it's not always clear if the lever should move forward or backwards.
I'm not an expert on Unreal Engine 4 (just learning myself), but all of this is basic vector math and should be well supported. Check out the dot product and cross product on Wikipedia; they are super useful for these kinds of tricks.
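The second approach hinges on projecting the dragged 3D point back onto the lever's rotation plane, which is one dot product away. An engine-free sketch (the Vec3 type and function names are mine, not UE4's; if I recall correctly, UE4 ships an equivalent as FVector::PointPlaneProject):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Project a dragged point back onto the lever's rotation plane:
// the plane through the pivot with the given unit normal.
Vec3 ProjectOntoRotationPlane(Vec3 point, Vec3 pivot, Vec3 planeNormal) {
    float dist = Dot(Sub(point, pivot), planeNormal); // signed distance to plane
    return { point.x - planeNormal.x * dist,
             point.y - planeNormal.y * dist,
             point.z - planeNormal.z * dist };
}
```

Align the lever toward the projected point using the same dot-product angle math as above, just with 3D vectors.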

Here's one approach:
When the user clicks on the lever, imagine a plane through the lever's pivot whose normal is the direction from the camera to the pivot. Calculate the intersection point of the cursor's ray and that plane:
// Filled in from the deprojected cursor ray and the current view:
FVector rayOrigin;
FVector rayDirection;
FVector cameraPosition;
FVector leverPivotPosition;
// Plane normal points from the camera toward the pivot.
FVector planeNormal = (leverPivotPosition - cameraPosition).GetSafeNormal(0.0001f);
// Ray/plane intersection; UE4's operator| is the dot product.
float t = ((leverPivotPosition - rayOrigin) | planeNormal) / (planeNormal | rayDirection);
FVector planeHitPosition = rayOrigin + rayDirection * t;
Do a scalar projection of that onto the lever's local top/bottom axis. Let's assume it's the local up/down axis:
FVector leverLocalUpDirectionNormalized;
float scalarPosition = planeHitPosition | leverLocalUpDirectionNormalized;
Then, in each subsequent frame while the mouse button is held, calculate the new scalarPosition for that frame. As scalarPosition increases between frames, the lever should rotate toward its up side; as it decreases, it should rotate toward its down side.
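The two steps above can be combined into one engine-free helper. Everything below (the Vec3 type and function names) is my own illustration rather than UE4 API, with operator| replaced by an explicit Dot:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Intersect the cursor ray with the camera-facing plane through the pivot,
// then measure the hit along the lever's local "up" axis. Compare the
// returned scalar between frames to decide the rotation direction.
float LeverScalarPosition(Vec3 rayOrigin, Vec3 rayDir,
                          Vec3 pivot, Vec3 planeNormal, Vec3 leverUp) {
    float denom = Dot(planeNormal, rayDir);
    // Ray parallel to the plane: no usable hit this frame.
    if (std::fabs(denom) < 1e-6f) return 0.f;
    float t = Dot(Sub(pivot, rayOrigin), planeNormal) / denom;
    Vec3 hit { rayOrigin.x + rayDir.x * t,
               rayOrigin.y + rayDir.y * t,
               rayOrigin.z + rayDir.z * t };
    return Dot(hit, leverUp);
}
```

In practice you would cache the previous frame's scalar and rotate the lever by some function of the per-frame difference.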

Related

Camera and mouse-picking difficulties

I'm working on a chess game in 3D space. I'm trying to figure out how I could move the pieces around, but the camera and different mouse modes are giving me a headache.
Here is an example screenshot:
The Idea:
There are 2 camera/mouse input modes. One lets me move freely in space and look around (an unlocked/FPS camera, in short); the other locks the screen in: I can't move, and mouse movement won't rotate the view (a locked camera, sort of a menu camera without the menu). In locked mode I want to be able to select the square under a piece and move it to a different one via a ray I'm casting into the scene toward my cursor position.
What I Have:
Camera class that is working as intended made according to this code from learnopengl.com
https://learnopengl.com/code_viewer_gh.php?code=includes/learnopengl/camera.h
A mouse ray class made according to this tutorial:
https://antongerdelan.net/opengl/raycasting.html
Mouse direction calculation put into code (the last function returns the direction correctly, as I've tested it with a locked camera; this is updated every render):
glm::vec3 NormalizedDeviceCoordinates() {
    float x = (2.f * this->mouseX) / this->frameBufferWidth - 1.f;
    float y = 1.f - (2.f * this->mouseY) / this->frameBufferHeight;
    float z = 1.f;
    return glm::vec3(x, y, z);
}

glm::vec4 HomogeneousClipCoordinates(glm::vec3 NormalizedDeviceCoords) {
    return glm::vec4(NormalizedDeviceCoords.x, NormalizedDeviceCoords.y, -1.f, 1.f);
}

glm::vec4 EyeCoordinates(glm::vec4 HomogeneousClipCoords) {
    glm::vec4 ray_eye = glm::inverse(projectionMatrix) * HomogeneousClipCoords;
    return glm::vec4(ray_eye.x, ray_eye.y, -1.f, 0.f);
}

glm::vec3 WorldCoordinates(glm::vec4 EyeCoords) {
    // mat4 * vec4 yields a vec4; convert explicitly before normalizing.
    glm::vec3 ray_wor = glm::vec3(glm::inverse(viewMatrix) * EyeCoords);
    return glm::normalize(ray_wor);
}

glm::vec3 calculateMouseRay() {
    return WorldCoordinates(EyeCoordinates(HomogeneousClipCoordinates(NormalizedDeviceCoordinates())));
}
I also have a keyboard input function that switches between the two modes when the M key is pressed. Differences between unlocked and locked mode:
glfwSetInputMode(this->window, GLFW_CURSOR, GLFW_CURSOR_DISABLED) / glfwSetInputMode(this->window, GLFW_CURSOR, GLFW_CURSOR_NORMAL)
allows camera movement and rotation / does not allow movement or camera rotation
The problem:
I genuinely don't know how to describe it, so I made a video and uploaded it to YouTube.
Here is the link:
https://youtu.be/4s-M6vHxvCc
Now, time for explaining:
The black and white cube you see is my attempt at tracking the direction ray (simply put, I draw the cube and send a transformation matrix to the shader, which transforms it to the camera location + the ray direction vector); for now the cube is shown in both modes (unlocked and locked).
In the first half of the video I'm in unlocked mode. You can see me trying to rotate left and right in an attempt to show that the cube is somewhat stuck at this angle and won't move beyond it.
In the second half of the video (after the cube "snaps" into position) I switch to locked mode. You can also see my cursor as well as the cube. They aren't aligned, but the cube is in fact replicating the cursor movement. (It is also worth pointing out that there is an offset between the cursor and the position, likely due to previous mouse rotation from unlocked mode; I don't know how to account for that.)
Possible solutions/reasons/ideas:
I'm not completely certain about this, but I think there is more than one problem overlapping.
First, I think the cube is somehow (unintentionally) locked to a horizontal plane. If I move with the movement keys, the cube moves along with me, but whenever I move the mouse it always moves along that one plane.
Secondly, it logically should not be a horizontal plane, but rather the plane to which the first frame was rendered (English is not my native language).
Thirdly, if the second assumption is correct, I need to somehow move this plane according to the mouse rotation (it might be better to say the camera direction), which I don't know how to do (or at which point I should add it to the equation at all).
Afterword:
You may have noticed the problem with the first half of the video (unlocked mode). Put in video game terms (this is an assumption), I have both an FPS camera (which usually has a "cursor" in the middle of the screen) and some sort of menu camera (which traces the real cursor position) working at the same time. That's bad, because what I want is the cube being drawn in the center of the screen (camera direction), and only when I switch to locked mode should the cube start moving based off of the cursor position. But for example purposes you can see that there is a different problem, mentioned above.
I will be grateful to anyone who can answer the question or point me in the right direction; if you need more information, ask away.

Cross Product Confusion

I have a question regarding a tutorial that I have been following on the rotation of the camera's view direction in OpenGL.
Whilst I appreciate that prospective respondents are not the authors of the tutorial, I think this is a situation that most intermediate-to-experienced graphics programmers have encountered before, so I will seek the advice of members on this topic.
Here is a link to the video to which I refer: https://www.youtube.com/watch?v=7oNLw9Bct1k.
The goal of the tutorial is to create a basic first person camera which the user controls via movement of the mouse.
Here is the function that handles cursor movement (with some variables/members renamed to conform with my personal conventions):
glm::vec2 Movement { OldCursorPosition - NewCursorPosition };

// camera rotates on y-axis when mouse moved left/right (orientation is { 0.0F, 1.0F, 0.0F }):
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.x) / 2, MVP.view.orientation)
                     * glm::vec4(MVP.view.direction, 0.0F);

glm::vec3 RotateAround { glm::cross(MVP.view.direction, MVP.view.orientation) };

/* why is the camera rotating around the cross product of the view direction and
   view orientation, rather than just rotating around the x-axis, when the mouse
   is moved up/down? */
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
                     * glm::vec4(MVP.view.direction, 0.0F);

OldCursorPosition = NewCursorPosition;
What I struggle to understand is why obtaining the cross product is even required. What I would naturally expect is for the camera to rotate around the y-axis when the mouse is moved from left to right, and for the camera to rotate around the x-axis when the mouse is moved up and down. I just can't get my head around why the cross product is even relevant.
From my understanding, the cross product will return a vector which is perpendicular to two other vectors; in this case that is the cross product of the view direction and view orientation, but why would one want a cross product of these two vectors? Shouldn't the camera just rotate on the x-axis for up/down movement and then on the y-axis for left/right movement...? What am I missing/overlooking here?
Finally, when I run the program, I can't visually detect any rotation on the z-axis, despite the fact that the rotation axis 'RotateAround' has a z-value greater than or less than 0 on every call to the function subsequent to the first (which suggests that the camera should rotate at least partially on the z-axis).
Perhaps this is just due to my lack of intuition, but if I change the line:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
To:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, glm::vec3(1.0F, 0.0F, 0.0F))
* glm::vec4(MVP.view.direction, 0.0F);
So that the rotation only happens on the x-axis rather than partially on the x-axis and partially on the z-axis, and then run the program, I can't really notice much of a difference in the workings of the camera. It feels like maybe there is a difference, but I can't articulate what it is.
The problem here is frame of reference.
rather than just rotating around x-axis when mouse is moved up/down..?
What do you consider the x-axis? If it's an axis of the global frame of reference (or one parallel to it), then yes. If it's the x-axis of a frame of reference partially constrained by the camera's orientation, then in general the answer is no. It depends on the order in which the rotations are applied, and on whether the view matrix is saved between movements.
Given that in the code the view gets modified by each rotation, it changes over time. If the camera were to turn 180 degrees around the y-axis, the direction of its local x-axis would flip to the opposite one.
If the camera rotates around the y-axis (I assume ISO directions for a ground vehicle), the direction changes as well. For example, if the camera rotates around the global y-axis by 90 degrees and then around the global x-axis by 45 degrees, the result is a view tilted 45 degrees sideways.
The order of rotations in a constrained frame of reference for ground-bound vehicles (and, likely, for a classic 3D-shooter character) is: around y, then around x, then around z. For aerial vehicles with airplane-like controls it is around z, then x, then y. In orbital space, z and x are swapped, if I remember right (z points down).
You have to do the cross product because after multiple mouse moves the camera is now differently oriented. The original x-axis you wanted to rotate around is NOT the same x-axis you want to rotate around now. You must calculate the vector that is currently pointed straight out the side of the camera and rotate around that. This is considered the "right" vector. This is the cross product of the view and up vectors, where view is the "target" vector (down the camera's z axis, where you are looking) and up is straight up out of the camera up the camera's y-axis. These axes must be updated as the camera moves. Calculating the view and up vectors does not require a cross product as you should be applying rotations to these depending on your movements along the way. The view and up should update by rotations, but if you want to rotate around the x-axis (pitch) you must do a cross product.
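To make the answers concrete: the "right" vector is the cross product of the view and up vectors, and it follows the camera as the view direction changes. A minimal sketch with a hand-rolled Vec3 (glm::cross does the same):

```cpp
struct Vec3 { float x, y, z; };

// Standard 3D cross product: a vector perpendicular to both inputs.
Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// With the camera looking down -z and up along +y, "right" is +x.
// After a 90-degree yaw (now looking down -x), "right" becomes -z:
// pitching around the fixed world x-axis at that point would roll the
// view instead, which is why the axis must be recomputed every frame.
```

This is exactly what the RotateAround variable in the question's code computes each frame.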

Names for camera moves

I've got a 3D scene and want to offer an API to control the camera. The camera is currently described by its own position, a look-at point in the scene somewhere along the z axis of the camera frame of reference, an “up” vector describing the y axis of the camera frame of reference, and a field-of-view angle. I'd like to provide at least the following operations:
Two-dimensional operations (mouse drag or arrow keys)
Keep look-at point and rotate camera around that. This can also feel like rotating the object, with the look-at point describing its centre. I think that at some point I've heard this described as the camera “orbiting” around the centre of the scene.
Keep camera position, and rotate camera around that point. Colloquially I'd call this “looking around”. With a cinema camera this might perhaps be called pan and tilt, but in 3d modelling “panning” is usually something else, see below. Using aircraft principal directions, this would be a pitch-and-yaw movement of the camera.
Move camera position and look-at point in parallel. This can also feel like translating the object parallel to the view plane. As far as I know this is usually called “panning” in 3d modelling contexts.
One-dimensional operations (e.g. mouse wheel)
Keep look-at point and move camera closer to that, by a given factor. This is perhaps what most people would consider a “zoom” except for those who know about real cameras, see below.
Keep all positions, but change field-of-view angle. This is what a “real” zoom would be: changing the focal length of the lens but nothing else.
Move both look-at point and camera along the line connecting them, by a given distance. At first this feels very much like the first item above, but since it changes the look-at point, subsequent rotations will behave differently. I see this as complementing the last point of the 2D operations above, since together they allow me to move camera and look-at point together in all three directions. A cinema cameraman might call this a "dolly" shot, but I guess a dolly might also be associated with the other translation moves parallel to the viewing plane.
Keep look-at point, but change camera distance from it and field-of-view angle in such a way that projected sizes in the plane of the look-at point remain unchanged. This would be a dolly zoom in cinematic contexts, but might also be used to adjust for the viewer's screen size and distance from screen, to make the field-of-view match the user's environment.
Rotate around z axis in camera frame of reference. Using aircraft principal directions, this would be a roll motion of the camera. But it could also feel like a rotation of the object within the image plane.
What would be a consistent, unambiguous, concise set of function names to describe all of the above operations? Perhaps something already established by some existing API?
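One of these operations has a precise invariant worth writing down: the dolly zoom keeps the frustum width at the look-at distance, 2·d·tan(fov/2), constant, so the new field of view follows directly from the new distance. A minimal sketch (the function name is mine):

```cpp
#include <cmath>

// Given the old camera distance and vertical FOV (radians), return the FOV
// that keeps projected sizes at the look-at point unchanged after moving
// the camera to newDistance: 2 * d * tan(fov / 2) == constant.
double DollyZoomFov(double oldDistance, double oldFovRadians, double newDistance) {
    double halfWidth = oldDistance * std::tan(oldFovRadians / 2.0);
    return 2.0 * std::atan(halfWidth / newDistance);
}
```

Pulling the camera back (larger distance) narrows the FOV, which is what produces the classic dolly-zoom look.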

How to move an object depending on the camera in OpenGL.

As shown in the image below, the user moves the ball by changing its x, y, z coordinates, which correspond to right/left, up/down, and near/far movements respectively. But when we change the camera from position A to position B, things look weird: "right" doesn't look right any more, because the ball still moves in the previous coordinate frame, shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect the way its displacement looks?
A simple example: if we place the camera so that it is looking from the positive X axis, changes in the z coordinate will now look like right and left movements. In reality, changing z should always be near and far.
Thought I would answer it here:
I solved it by simply multiplying the ball's coordinates by the camera's modelview matrix.
Here is the code:
GLdouble cam[16];
glGetDoublev( GL_MODELVIEW_MATRIX, cam );
// Note: OpenGL stores matrices column-major, so cam[0..3] is the first
// column; indexing it row-wise like this multiplies by the transpose,
// which for the (orthonormal) rotation part is the inverse rotation.
matsumX = cam[0]*tx + cam[1]*ty + cam[2]*tz + cam[3];
matsumY = cam[4]*tx + cam[5]*ty + cam[6]*tz + cam[7];
matsumZ = cam[8]*tx + cam[9]*ty + cam[10]*tz + cam[11];
where tx, ty, tz are the ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates, which change according to the camera.
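An equivalent way to think about it, sketched below with a hand-rolled Vec3: take the camera's right/up/forward basis vectors and move the ball along those instead of the world axes (the function and parameter names are mine, not from the question):

```cpp
struct Vec3 { float x, y, z; };

// Map a camera-relative input (x = right, y = up, z = forward) into a
// world-space displacement using the camera's unit basis vectors.
Vec3 CameraRelativeMove(Vec3 input, Vec3 right, Vec3 up, Vec3 forward) {
    return { input.x * right.x + input.y * up.x + input.z * forward.x,
             input.x * right.y + input.y * up.y + input.z * forward.y,
             input.x * right.z + input.y * up.z + input.z * forward.z };
}
```

For a camera sitting on the positive X axis looking at the origin (forward = (-1,0,0), up = (0,1,0), right = (0,0,-1)), pressing "right" moves the ball along world -z, which is exactly the corrected behaviour the asker wanted.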

Determining Point From Edge With Rotation

I'm writing a screensaver with a bouncing ball (x and y, does not bounce in Z) in C++ using OpenGL. When this ball touches the edges of the screen, a small patch of damage will appear on the ball. (When the ball is damaged enough, it will explode.) Finding the part of the ball to damage is the easy part when the ball isn't rotating.
The algorithm I decided on keeps the positions of the left-most, right-most, top-most and bottom-most vertices. For every collision, I obviously need to know which screen edge was hit. Before the ball could roll, when it touched a screen edge, if it hit the left edge I knew the left-most vertex was the point on the ball that took the hit. From there, I get all vertices within distance d of that point. I don't need the actual vertex that was hit, just the point on the ball's surface.
Doing this, I don't need to read all the vertices, translate them by the ball's x, y position, and see which are off-screen. Doing that would solve all my problems but would be slow as hell.
Currently, the ball's rotation is controlled by pitch, yaw and roll. The problem is: what point on the ball's outer surface has touched the edge of the screen, given my yaw, pitch and roll angles? I've looked into keeping up, right and direction vectors, but I'm totally new to this and, as someone might notice, totally lost. I've read the rotation matrix article on Wikipedia several times and am still drawing a blank. If I got rid of one rotation angle it would be much simpler, but I would prefer not to.
If you have your rotation angles, then you can recreate the modelview matrix in your code. With that matrix you can apply the rotation to the vertices of the mesh (simply by multiplication) and then find the left-most (or whatever) vertex as you did before.
This article explains how to construct the rotation matrix from the angles you have.
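A cheaper variant of the same idea, sketched below: instead of rotating every vertex, rotate the screen-edge direction by the inverse (transpose) of the ball's rotation matrix and then look for the vertex farthest along that model-space direction. The angle order assumed here is yaw (y), then pitch (x), then roll (z); adjust to match your own convention.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; }; // m[row][col], column-vector convention

Mat3 Mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 RotX(float a) { float c=std::cos(a), s=std::sin(a); return {{{1,0,0},{0,c,-s},{0,s,c}}}; }
Mat3 RotY(float a) { float c=std::cos(a), s=std::sin(a); return {{{c,0,s},{0,1,0},{-s,0,c}}}; }
Mat3 RotZ(float a) { float c=std::cos(a), s=std::sin(a); return {{{c,-s,0},{s,c,0},{0,0,1}}}; }

// World rotation of the ball: yaw, then pitch, then roll.
Mat3 BallRotation(float yaw, float pitch, float roll) {
    return Mul(RotY(yaw), Mul(RotX(pitch), RotZ(roll)));
}

// For a pure rotation, inverse == transpose: map a world-space edge
// direction (e.g. {-1,0,0} for the left screen edge) into model space.
Vec3 InverseRotate(const Mat3& r, Vec3 v) {
    return { r.m[0][0]*v.x + r.m[1][0]*v.y + r.m[2][0]*v.z,
             r.m[0][1]*v.x + r.m[1][1]*v.y + r.m[2][1]*v.z,
             r.m[0][2]*v.x + r.m[1][2]*v.y + r.m[2][2]*v.z };
}
```

The vertex whose model-space position has the largest dot product with the returned direction is the contact point; no per-frame rotation of the whole mesh is needed.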