I'm making a game in C++. It is a card game. I have made 13 cards that rotate about a point to arc out and form your hand. I need a way to figure out which card the user clicks on. My cards are basically rectangles rotated about a point that is in the center of the cards. I was thinking of getting the mouse point and rotating it about my central point, but I'm not sure how to rotate a point about a point. Thanks.
Rotating a point a around a point p
The trick is to reduce rotating around a point to rotating around the origin by doing translations.
Subtract p from a (move to the origin)
Rotate by angle
Add p to the resulting point (move back)
Formula for rotating (x, y) around the origin by an angle θ:
x' = x*cos(θ) - y*sin(θ)
y' = x*sin(θ) + y*cos(θ)
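Here is a minimal C++ sketch of those three steps (the Point struct and function name are just illustrative):

#include <cmath>

struct Point { float x, y; };

// Rotate point a around point p by 'angle' radians (counter-clockwise).
Point rotateAround(Point a, Point p, float angle)
{
    // 1. Translate so that p sits at the origin.
    float x = a.x - p.x;
    float y = a.y - p.y;

    // 2. Rotate around the origin.
    float s = std::sin(angle);
    float c = std::cos(angle);
    float rx = x * c - y * s;
    float ry = x * s + y * c;

    // 3. Translate back.
    return { rx + p.x, ry + p.y };
}

For your hit test you would rotate the mouse point by the negative of a card's angle around that card's center, then do an ordinary axis-aligned rectangle check against the unrotated card.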
If (x0, y0) is the central point and (xm, ym) is where the mouse is, you could calculate the angle of the mouse with respect to the center point by translating (x0, y0) to the origin (0, 0) and then converting to polar coordinates.
Translate to origin:
(x', y') = (xm - x0, ym - y0)
Convert from rectangular to polar (x, y) → (r, θ):
r = sqrt(x'^2 + y'^2)
θ = atan2(y', x')   (the quadrant-aware form of tan⁻¹(y' / x'))
The angle θ should be enough to tell you which card is selected.
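For example, a small sketch of that idea in C++, assuming the 13 cards are fanned evenly between two known angles (all the names here are illustrative, not from your code):

#include <cmath>

// Return the index of the card under the mouse, or -1 if the mouse is outside the fan.
int cardFromAngle(float xm, float ym, float x0, float y0,
                  float fanStartAngle, float fanEndAngle)
{
    float dx = xm - x0;
    float dy = ym - y0;

    // atan2 handles all four quadrants, unlike tan⁻¹(y/x).
    float theta = std::atan2(dy, dx);

    if (theta < fanStartAngle || theta > fanEndAngle)
        return -1;

    float step = (fanEndAngle - fanStartAngle) / 13.0f;
    int index = static_cast<int>((theta - fanStartAngle) / step);
    return index < 13 ? index : 12;
}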
You've gotten a couple of possibilities already. This is a rather different one that depends more on programming and less on trig.
The idea would be to draw each card in a unique, solid, color in a back buffer. Check the color at the point that matches the mouse click, and you have the card. At one time this would have been grossly impractical -- but with modern graphics hardware this can be quite competitive.
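A rough sketch of that idea with legacy OpenGL (glReadPixels is real API; drawCardFlat is a hypothetical helper that draws card i with the current color and no texturing or lighting):

#include <GL/gl.h>

void drawCardFlat(int index);   // provided elsewhere: draws card 'index' untextured/unlit

// Returns the index of the clicked card, or -1 if no card was hit.
int pickCard(int mouseX, int mouseY, int windowHeight, int cardCount)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < cardCount; ++i)
    {
        // Encode the card index in the red channel (enough for up to 255 cards).
        glColor3ub(static_cast<GLubyte>(i + 1), 0, 0);
        drawCardFlat(i);
    }

    // Window coordinates are usually top-left based; OpenGL reads from the bottom-left.
    GLubyte pixel[3] = { 0, 0, 0 };
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);

    return pixel[0] == 0 ? -1 : pixel[0] - 1;
}

Don't swap buffers after this pass; just redraw the real frame on top.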
You could check whether the point is inside the card's polygon. You can find a pretty fast implementation here.
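If your cards stay convex quads, the check can be as simple as this sketch (types and names are just for illustration): the point is inside when it lies on the same side of every edge.

struct Point2 { float x, y; };

// Point-in-convex-polygon test via 2D cross products. 'corners' must be the
// polygon's vertices in order (clockwise or counter-clockwise).
bool pointInConvexPolygon(Point2 p, const Point2* corners, int n)
{
    bool anyNegative = false, anyPositive = false;
    for (int i = 0; i < n; ++i)
    {
        Point2 a = corners[i];
        Point2 b = corners[(i + 1) % n];
        // z component of the cross product of edge (a->b) and (a->p).
        float cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        if (cross < 0) anyNegative = true;
        if (cross > 0) anyPositive = true;
    }
    return !(anyNegative && anyPositive);
}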
I have implemented camera rotation around a centre entity and now want to add camera translation. I cannot just do centre.xy += mouse.delta.xy because, if the camera is rotated to face along the z axis and I drag to the right, this will obviously move the camera towards me (since the x coordinate is being incremented). In that instance, the centre.z attribute would need to be increased instead. I suppose I need to incorporate the camera's pitch, yaw and roll into this calculation, but I am not sure how to go about it... any suggestions/links?
I also tried playing around with ray casting (which I have implemented), in place of the mouse delta, but to no avail.
EDIT - simple method:
val right = Vector3f(viewMatrix.m00(), viewMatrix.m01(), viewMatrix.m02()).mul(lmb.delta.x)
val up = Vector3f(viewMatrix.m10(), viewMatrix.m11(), viewMatrix.m12()).mul(lmb.delta.y)
val delta = right.add(up)
center.add(delta)
You did not write a lot about how you represent your camera, but I assume the following:
The camera is represented by a focus point centre and three Euler angles that describe the rotation about that focus point. Probably also a distance to the focus point.
I'll explain two ways - one rather simple and one more sophisticated.
Simple Way
Let's recap what you were trying to do:
centre.xy += mouse.delta.xy
This fails when the camera is not aligned with the coordinate system. A more general formulation of this approach would be:
centre += mouse.delta.x * right + mouse.delta.y * up
Here, right is a world-space vector pointing to the right side of the screen and up is a world-space vector pointing upwards. Depending on your mouse delta, you may instead want a down vector.
So, where do we get those vectors from? Easy. The view matrix has all we need. The first row (its first three entries) is the right vector, and the second row is the up vector. So, simply get the view matrix, read those vectors, and update the focus centre. You might also want to apply some scale factor.
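For example, with GLM (column-major, so the element access is view[col][row]; if your math library is row-major the indices swap), a sketch could look like this; panSpeed is an assumed scale factor:

#include <glm/glm.hpp>

void panSimple(glm::vec3& centre, const glm::mat4& view,
               float mouseDeltaX, float mouseDeltaY, float panSpeed)
{
    // First row of the view matrix = camera right vector in world space,
    // second row = camera up vector.
    glm::vec3 right(view[0][0], view[1][0], view[2][0]);
    glm::vec3 up   (view[0][1], view[1][1], view[2][1]);

    centre += (mouseDeltaX * right + mouseDeltaY * up) * panSpeed;
}

Flip the sign of either delta if the motion feels inverted.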
More Sophisticated
In many applications, the panning functionality is designed in a way such that a certain 3D point under the mouse stays under the mouse during panning. This can be achieved in the following way:
First, we need the depth of the 3D point that we want to keep under the mouse. Two common options are the depth of the focus point or the actual depth of the 3D scene under the mouse (which you get from the depth map). I will explain the former.
We first need this depth in Normalized Device Coordinates. To do this, we first calculate the view-projection matrix:
VP = ProjectionMatrix * ViewMatrix
Then, we transform the focus point into clip space:
focusClip = VP * (focus, 1)
(focus, 1) is a 4D vector with a 1 as its last component. Finally, we derive NDC depth as
focusDepthNDC = focusClip.z / focusClip.w
OK, now we have the depth, so we can calculate the 3D point that we want to keep under the mouse. First, let's invert the view-projection matrix, because this allows us to go from clip space back to world space:
VPInv = inverse(VP)
Then, the point under the mouse is (I'll call it x):
x = VPInv * (mouseStartNDC.x, mouseStartNDC.y, focusDepthNDC, 1)
mouseStartNDC is the mouse position before the shift. Keep in mind that this needs to be in normalized device coordinates. If you only have screen space coordinates, then:
ndcX = 2 * screenX / windowWidth - 1
ndcY = -2 * screenY / windowHeight + 1
x is again a 4D vector. Do the perspective divide:
x *= 1.0 / x.w
Now we have our 3D point. We just need to find a shift of the camera that keeps this position under the mouse at the mouse location after the shift:
newX = VPInv * (mouseEndNDC.x, mouseEndNDC.y, focusDepthNDC, 1)
Do the perspective divide again:
newX *= 1.0 / newX.w
And finally update your camera center:
centre += (x - newX).xyz
This approach works with any camera model that you can express in matrix form.
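Putting the sophisticated version together, a sketch with GLM (mouseStartNDC and mouseEndNDC are the mouse positions before and after the drag, already in normalized device coordinates):

#include <glm/glm.hpp>

void panKeepPointUnderMouse(glm::vec3& centre,
                            const glm::mat4& projection,
                            const glm::mat4& view,
                            const glm::vec3& focus,
                            glm::vec2 mouseStartNDC,
                            glm::vec2 mouseEndNDC)
{
    glm::mat4 VP    = projection * view;
    glm::mat4 VPInv = glm::inverse(VP);

    // NDC depth of the focus point.
    glm::vec4 focusClip = VP * glm::vec4(focus, 1.0f);
    float focusDepthNDC = focusClip.z / focusClip.w;

    // World-space point under the mouse before the drag...
    glm::vec4 x = VPInv * glm::vec4(mouseStartNDC, focusDepthNDC, 1.0f);
    x /= x.w;

    // ...and the world-space point under the mouse after the drag.
    glm::vec4 newX = VPInv * glm::vec4(mouseEndNDC, focusDepthNDC, 1.0f);
    newX /= newX.w;

    // Shift the focus so the original point stays under the mouse.
    centre += glm::vec3(x - newX);
}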
In my project (C++/UE4), I have a lever mesh sticking out of the floor. Holding the left mouse button on this lever and moving the mouse initiates a dragging operation. This dragging operation is responsible for calculating the 2D delta mouse movements, and utilizing this data to rotate the lever *in local space*, which can only rotate on a single axis (negative or positive, but still only one axis).
But what if, instead of being in front of the lever, I'm actually behind it? What if I'm on one of its sides? What if the lever is actually sticking out of a wall instead of the floor?... How do I make it so that mouse movements actually rotate the lever appropriate to the angle at which it is viewed from, regardless of the lever's orientation?
To further explain myself...
Here's a list of scenarios, and how I'd like the mouse to control them:
When the lever's on the FLOOR and you are in FRONT of it:
If you move the mouse UP (-Y), it should rotate away from the camera
If you move the mouse DOWN (+Y), it should rotate toward the camera
When the lever's on the FLOOR and you are BEHIND it:
If you move the mouse UP (-Y), it should rotate away from the camera
(which is the opposite world-space direction of when you are in front of it)
If you move the mouse DOWN (+Y), it should rotate toward the camera
(which is the opposite world-space direction of when you are in front of it)
When the lever's on the FLOOR and you are BESIDE it:
If you move the mouse LEFT (-X), it should rotate to the LEFT of the camera
(which is the opposite direction of when you are on the other side of it)
If you move the mouse RIGHT (+X), it should rotate to the RIGHT of the camera
(which is the opposite direction of when you are on the other side of it)
When the lever's on a WALL and you are in FRONT of it:
If you move the mouse UP, it should rotate UP (toward the sky)
If you move the mouse DOWN, it should rotate DOWN (toward the floor)
When the lever's on the WALL and you are BESIDE it:
Same as when it's on the wall and you are in front of it
PLEASE NOTE if it helps at all, that UE4 does have built-in 2D/3D Vector math functions, as well as easy ways to project and deproject coordinates to/from the 3D world or 2D screen. Because of this, I always know the exact world-space and screen-space coordinates of the mouse location, the lever's pivot (base) location, and the lever's handle (top) location, as well as the amount (delta) that the mouse has moved each frame.
Get the pivot of the lever (the point around which it rotates) and project it to screen coordinates. Then, when you first click, store the screen coordinates of the click.
Now, when you need to know which way to rotate, compute the dot product between the vector from the pivot to the first click and the vector from the pivot to the current location (normalize both vectors before the dot product). This gives you cos(angle) of how far the mouse has moved; take arccos(value) to get the angle and use it to move the lever in 3D. It will be a bit wonky, since the angle on screen is not the same as the projected angle, but it's easier to control this way (if you move the mouse 90 degrees, the lever moves 90 degrees, even if they don't align perfectly). Play with the setup and see what works best for your project.
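A rough UE4-flavored sketch of that (PivotScreen, ClickScreen and MouseScreen are assumed to come from ProjectWorldToScreen; only the operators and FMath calls are real API):

FVector2D ToClick = (ClickScreen - PivotScreen).GetSafeNormal();
FVector2D ToMouse = (MouseScreen - PivotScreen).GetSafeNormal();

float CosAngle = ToClick | ToMouse;   // '|' is the dot product
float Angle    = FMath::Acos(FMath::Clamp(CosAngle, -1.f, 1.f));

// The sign of the 2D cross product tells you which way the mouse went
// around the pivot, i.e. which direction to rotate the lever.
float Sign = FVector2D::CrossProduct(ToClick, ToMouse) > 0.f ? 1.f : -1.f;
float SignedAngleDegrees = Sign * FMath::RadiansToDegrees(Angle);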
Another way to do it is this: when you first click, store the point at the end of the lever (or, even better, the point where you clicked on the lever) in 3D space. Use the camera projection plane to move that point in 3D (take the camera up vector, make it orthogonal to the camera view direction, then take view direction cross up to get the right direction). Apply the mouse delta movements to the point, project it onto the lever's rotation plane, and rotate the lever to align with the projected point (the math is similar to the above, just with 3D points instead of screen projections).
Caution: this doesn't work well if the camera is very close to the plane of rotation since it's not always clear if the lever should move forward or backwards.
I'm not an expert on Unreal Engine 4 (just learning myself), but all of this is basic vector math and should be well supported. Check out the dot product and cross product on Wikipedia, since they are super useful for these kinds of tricks.
Here's one approach:
When the user clicks on the lever, imagine a plane through the pivot of the lever whose normal is the direction from the camera to the pivot. Calculate the intersection point of the cursor's ray with that plane.
FVector rayOrigin;            // cursor ray origin (e.g. from DeprojectMousePositionToWorld)
FVector rayDirection;         // cursor ray direction (normalized)
FVector cameraPosition;
FVector leverPivotPosition;

// Plane through the pivot, facing the camera.
FVector planeNormal = (leverPivotPosition - cameraPosition).GetSafeNormal(0.0001f);

// Ray/plane intersection ('|' is UE4's dot product operator).
float t = ((leverPivotPosition - rayOrigin) | planeNormal) / (planeNormal | rayDirection);
FVector planeHitPosition = rayOrigin + rayDirection * t;
Do a scalar projection of that onto the local top/bottom axis of the lever. Let's assume it's the local up/down axis:
FVector leverLocalUpDirectionNormalized;
float scalarPosition = planeHitPosition | leverLocalUpDirectionNormalized;
Then, in each subsequent frame while the lever is held down, calculate the new scalarPosition for that frame. As scalarPosition increases between frames, the lever should rotate towards its up side; as it decreases, it should rotate towards its down side.
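For example, per frame while the button is held (LastScalarPosition, RotationAxis, DegreesPerUnit and LeverComponent are assumed names for your own state; AddLocalRotation is the UE4 call that rotates in local space):

float NewScalarPosition = planeHitPosition | leverLocalUpDirectionNormalized;
float Delta = NewScalarPosition - LastScalarPosition;
LastScalarPosition = NewScalarPosition;

// Rotate the lever around its single local axis by an amount proportional
// to how far the projected hit point moved along the up/down axis.
LeverComponent->AddLocalRotation(
    FQuat(RotationAxis, FMath::DegreesToRadians(Delta * DegreesPerUnit)));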
I will start by apologizing: I highly doubt I will have any of the correct terminology. Unfortunately, after a few hours of raw testing and mashing my head against the wall, I can't figure this out.
I'm working with an engine that orients its models using a bottom-aligned system, meaning that along the z axis (in a z-up system), z - radius = origin; in other words, if the model is sitting at 0,0,0, all the tris are in positive-z space.
I am integrating Bullet, which uses a center-aligned system, meaning that the object's origin is at the center of mass (in this case a simple AABB cube).
The problem is that the yaw/pitch/roll and origin I pass into the renderer are offset by radius in the +z direction. The biggest issue comes when the pitch or roll becomes something other than 0: Bullet's center-aligned system rotates pitch and roll around the center, while the renderer rotates around the bottom, so there is a clear difference in where the model and the bounding box line up.
So, is there an algorithm to convert between these two forms of orientation?
OK I figured out my own issue.
So, for anyone who stumbles upon this, I'll explain exactly what is happening and how to fix it.
Simply put, my question was how to convert from world space (the x, y, z planes) to local space (relative x, y, z planes). If you take an arrow and face it toward (0, 0, 0) while its origin is somewhere in positive space, say (1, 0, 0), the arrow is oriented in world space; that means if you move forward you move along the x plane, left along the y plane, and up along the z plane.
Now, if that arrow is rotated in yaw by some degrees so that it points at (1, 1, 0), then when you move forward you are no longer moving along just the x plane. This is what's called moving in local space, meaning you are moving along the planes that are relative to the yaw and pitch of your node (object, model, etc.).
So in my case I have Bullet, which works in "world space", and my renderer, which works in "model space", so I just need to put a matrix between them that converts the two.
To convert the spaces, I need a transformation matrix that maps one into the other.
So here's the link to the source I found that does this conversion and explains the relevant math fairly easily. http://www.codinglabs.net/article_world_view_projection_matrix.aspx
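For anyone who wants the offset part spelled out, here is a small sketch with Bullet types (this assumes a z-up model whose bottom is 'radius' below the center of mass; adjust the axis for your engine):

#include <btBulletDynamicsCommon.h>

// Convert Bullet's center-of-mass transform into a bottom-aligned origin for
// the renderer: rotate the local "center to bottom" offset by the body's
// orientation and add it to the physics origin. Pass the same rotation on.
void physicsToRenderOrigin(const btTransform& bodyTransform, btScalar radius,
                           btVector3& renderOrigin, btQuaternion& renderRotation)
{
    renderRotation = bodyTransform.getRotation();

    btVector3 localBottomOffset(0, 0, -radius);   // center of mass -> bottom (z-up)

    renderOrigin = bodyTransform.getOrigin()
                 + quatRotate(renderRotation, localBottomOffset);
}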
thanx every one
chasester
I'm writing a 2D game using a wrapper of OpenGLES. There is a camera aiming at a bunch of textures, which are the sprites for the game. The user should be able to move the view around by moving their fingers around on the screen. The problem is, the camera is about 100 units away from the textures, so when the finger is slid across the screen to pan the camera, the sprites move faster than the finger due to parallax effect.
So basically, I need to convert 2D screen coordinates, to 3D coordinates at a specific z distance away (in my case 100 away because that's how far away the textures are).
There are some "Unproject" functions in C#, but I'm using C++ so I need the math behind this function. I'm extremely new to 3D stuff and I'm very bad at math so if you can explain like you are explaining to a 10 year old that would be much appreciated.
If I can do this, I can pan the camera at such a speed that it looks like the distant sprites are panning with the user's finger.
For picking purposes, there are better ways than doing a reverse projection. See this answer: https://stackoverflow.com/a/1114023/252687
In general, you will have to scale your finger-movement distance to use it on a far-away plane (z units away).
i.e., if l is the amount of finger movement and you want to find the effect z units away, the length is l' = l/z.
But please check the effect and adjust l' (double/halve, etc.) to get the desired behaviour.
Found the answer at:
Wikipedia
It has the following formula:
To determine which screen x-coordinate Bx corresponds to a point at (Ax, Az), multiply the point coordinates by the ratio of the focal length to the subject distance:
Bx = Ax * (Bz / Az)
where
Bx is the screen x coordinate
Ax is the model x coordinate
Bz is the focal length—the axial distance from the camera center to the image plane
Az is the subject distance.
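As a small sketch of how that applies here, the inverse direction gives the world-space movement that corresponds to a finger movement on screen (focalLength is Bz, in the same screen units as the finger movement, and subjectDistance is Az, 100 in this case; both are assumed known):

// Pinhole relation: Bx = Ax * Bz / Az, rearranged as Ax = Bx * Az / Bz.
float screenDeltaToWorldDelta(float screenDelta,      // Bx: finger movement on screen
                              float focalLength,      // Bz
                              float subjectDistance)  // Az (100 here)
{
    return screenDelta * subjectDistance / focalLength;
}

Pan the camera by that world-space amount and the sprites at that depth should stay roughly under the finger.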
Heyo,
I'm currently working on a project where I need to place the camera such that the full motion of a character would be viewable without moving the camera. I have the position where the character starts, as well as the maximum distance that the character will travel in all three directions (X,Y, & Z). I also have the field of view (which is 90 degrees).
Is there an equation that'll figure out where I need to place the camera so it won't have to move to see the full motion?
Note: this is using OpenGL.
Clarification: The camera should be "in front" of the character that's in the motion, not above.
It'll also be moving along a ground plane.
If you make a bounding sphere of the points, all you need to do is keep the camera at a distance greater than or equal to the radius of the bounding sphere / sin(FOV/2).
For example, if you have a bounding sphere with radius Radius, and a specified Field of View FOV, your camera just needs to be at a point "Dist" away, pointing towards the center of the bounding sphere.
The equation for calculating the distance is:
Dist = Radius / sin( FOV/2 );
This will work in 3D, for a camera at any orientation.
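A sketch in C++ of building that sphere from the data you listed (start position plus maximum travel along each axis; the struct and names are illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

void cameraForMotion(Vec3 start, Vec3 maxTravel, float fovRadians,
                     Vec3& sphereCenter, float& dist)
{
    // Center of the axis-aligned box swept by the motion.
    sphereCenter = { start.x + maxTravel.x * 0.5f,
                     start.y + maxTravel.y * 0.5f,
                     start.z + maxTravel.z * 0.5f };

    // Half the box diagonal encloses every reachable point.
    float radius = 0.5f * std::sqrt(maxTravel.x * maxTravel.x +
                                    maxTravel.y * maxTravel.y +
                                    maxTravel.z * maxTravel.z);

    // Place the camera at least this far from sphereCenter, looking at it.
    dist = radius / std::sin(fovRadians * 0.5f);
}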
Simply having the maximum range of (X, Y, Z) is not on its own sufficient, because the viewing port is essentially pyramid shaped, with the apex of the pyramid being at the eye position.
For the sake of argument, let's assume that all movement is in the (X, Z) plane (i.e. the ground), and the eye is directly above the origin 10m along the Y axis.
Assuming a square viewport, with your 90° field of view you'd be able to see from ±10m along both the X and Z axes, but only for objects that are on the ground (Y = 0). As soon as they come off the ground, your visible extent is reduced: if an object is 1m off the ground, your (X, Z) extent is only ±9m.
Clearly a real camera could be placed anywhere in the scene, facing any direction. Even the "roll" angle of the camera changes how much is visible. There are actually infinitely many such camera positions, so you will need to constrain your criteria somewhat.
Take the line segment from the start point to the end point. Construct a plane orthogonal to this line segment through the segment's midpoint. Then position the camera somewhere in this plane, at a distance of more than the following from the intersection point of the plane and the line, looking at that intersection point. The up vector of the camera must lie in the plane, and the horizontal field of view must be 90 degrees.
distance = sqrt(dx^2 + dy^2 + dz^2) / 2
These camera positions will all have the start point and the end point on the left and right borders of the viewport, vertically centered.
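For concreteness, a small sketch of that construction (Vec3 and the names are illustrative; viewDirectionInPlane is any unit vector orthogonal to the start-end segment):

#include <cmath>

struct Vec3 { float x, y, z; };

void placeCamera(Vec3 startPoint, Vec3 endPoint, Vec3 viewDirectionInPlane,
                 Vec3& cameraPos, Vec3& lookAt)
{
    float dx = endPoint.x - startPoint.x;
    float dy = endPoint.y - startPoint.y;
    float dz = endPoint.z - startPoint.z;

    // distance = sqrt(dx^2 + dy^2 + dz^2) / 2; use at least this much.
    float distance = 0.5f * std::sqrt(dx * dx + dy * dy + dz * dz);

    // Midpoint of the segment: the point the camera looks at.
    lookAt = { (startPoint.x + endPoint.x) * 0.5f,
               (startPoint.y + endPoint.y) * 0.5f,
               (startPoint.z + endPoint.z) * 0.5f };

    // Back away from the midpoint within the orthogonal plane.
    cameraPos = { lookAt.x - viewDirectionInPlane.x * distance,
                  lookAt.y - viewDirectionInPlane.y * distance,
                  lookAt.z - viewDirectionInPlane.z * distance };
}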
Another solution might be to write a function that takes the startpoint, the endpoint, and the desired position of both points on the screen. Then just solve the projection equation for the camera transformation.
It depends. For example, if the object is going to move in a plane, you can just place the camera outside a ball circumscribing its movement area (this relies on the FOV being 90, which is a fortunate angle).
If the object is gonna move in 3D, it's much more difficult. It would help if you'd specify the region where the object moves (cube vs. ball...) and the direction you want to see it from.