How to calculate which direction to rotate? - C++

I'm trying to implement a simple AI system in my DirectX application. I can get my AI to rotate and face the direction I want it to face, but I can't figure out how to determine which way it should rotate to reach that direction (i.e. should it rotate left or rotate right?).
Here is the code I've got which works out the angle it needs to rotate by to face the direction it's given:
D3DXVECTOR3 incident = destination - position;   // vector from the AI's position to its destination
float top = D3DXVec3Dot(&incident, &forwardVec);
float bottom = sqrt((incident.x * incident.x) + (incident.y * incident.y) + (incident.z * incident.z)) *
               sqrt((forwardVec.x * forwardVec.x) + (forwardVec.y * forwardVec.y) + (forwardVec.z * forwardVec.z));
float remainingAngle = acos(top / bottom) * 180.0f / PI;   // angle between forwardVec and incident, in degrees
The forwardVec is a D3DXVECTOR3 of which way the AI is currently facing.

The dot product rule only tells you the shortest angle between the two vectors (which is always 180° or less), not which way to go. Do you have a way to get a direction angle out of a D3DXVECTOR3 (i.e. a polar-form kind of thing)? If so, you can compute (desired angle) - (current angle), wrap the result into the range -180° to 180°, and (with the usual counterclockwise-positive convention) go counterclockwise if it's positive, clockwise if it's negative.
I have a feeling that the cross product might also give a method, but I'd have to sit down with a piece of paper to work it out.
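For what it's worth, here is a rough sketch of the cross-product idea, reusing the question's variables and assuming the AI turns about the world Y axis (which sign means "left" depends on the handedness of your coordinate system, so test it):
D3DXVECTOR3 toTarget = destination - position;
D3DXVECTOR3 side;
D3DXVec3Cross(&side, &forwardVec, &toTarget);   // axis of the shortest rotation from forwardVec to toTarget
bool turnLeft = (side.y > 0.0f);                // the sign of the Y component picks the side to turn towards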

Let's suppose that straight ahead is 0° and you're counting degrees in a clockwise fashion.
If you need to turn 180° or less, you turn right.
If you need to turn more than 180°, you turn left instead, and that left turn is (360 - value) degrees.
I hope this answers your question.
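A minimal sketch of that rule (desired and current are headings in degrees, and turnRightBy/turnLeftBy are hypothetical helpers):
float turn = fmod(desired - current + 360.0f, 360.0f);   // clockwise angle to the target, wrapped into [0, 360)
if (turn <= 180.0f)
    turnRightBy(turn);                                   // shorter to go clockwise
else
    turnLeftBy(360.0f - turn);                           // shorter to go counterclockwise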

The angle between 2 normalized vectors:
double GetAng(const D3DXVECTOR3& Xi_V1, const D3DXVECTOR3& Xi_V2)
{
    D3DXVECTOR3 l_Axis;
    D3DXVec3Cross(&l_Axis, &Xi_V1, &Xi_V2);                               // rotation axis; its length is |v1||v2|*sin(angle)
    return atan2(D3DXVec3Length(&l_Axis), D3DXVec3Dot(&Xi_V1, &Xi_V2));   // atan2(sin, cos) of the angle
}
The returned angle is between 0 and PI and is the magnitude of the shortest angular rotation from v1 to v2 (the cross-product length is never negative, so atan2 cannot return a negative value here). To know which way to rotate, check the sign of l_Axis against your rotation/up axis.
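For example, a rough signed variant for the question's setup (an assumption on my part: the AI turns about the world Y axis, so the sign of l_Axis.y decides the direction; flip it if your convention differs):
double GetSignedAng(const D3DXVECTOR3& Xi_V1, const D3DXVECTOR3& Xi_V2)
{
    D3DXVECTOR3 l_Axis;
    D3DXVec3Cross(&l_Axis, &Xi_V1, &Xi_V2);
    double ang = atan2(D3DXVec3Length(&l_Axis), D3DXVec3Dot(&Xi_V1, &Xi_V2));
    return (l_Axis.y >= 0.0f) ? ang : -ang;   // sign taken from the rotation axis
}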

Related

Quaternion rotation ignoring yaw

I'm working with quaternions and an LSM6DSO32 sensor (gyro + accel). I fuse the data coming from the sensor, and after that I have a quaternion; everything works well.
Now I'd like to detect whether my quaternion has rotated more than 90° relative to an initial quaternion. Here is what I do: q1 is my initial quaternion and q2 is the quaternion coming from my fused data. To detect whether q2 has rotated more than 90° from q1 I do:
q_conj = conjugateQuaternion(q2);
q_multiply = multiplyQuaternion(q1, q_conj);
float angle = (2 * acos(q_multiply.element.w)) * RAD_TO_DEG;
if (angle > 90.0f)
    // do something
This works very well: I can detect whether q2 has rotated more than 90°. But my "problem" is that I also detect 90° rotations in yaw, and I don't want to include yaw in my test. Is it possible to nullify the yaw (the z component of my rotation) without modifying the w, x and y components?
My final objective is to detect a rotation of more than 90°, but without caring about yaw, and I don't want to use Euler angles because I want to avoid gimbal lock.
Edit: I want to calculate the magnitude of the rotation between q1 and q2, ignoring yaw.
The "yaw" of a quaternion generally means q_yaw in a quaternion formed by q_roll * q_pitch * q_yaw. So that quaternion without its yaw would be q_roll * q_pitch. If you have the pitch and roll values at hand, the easiest thing to do is just to reconstruct the quaternion while ignoring q_yaw.
However, if we are really dealing with a completely arbitrary quaternion, we'll have to get from q_roll * q_pitch * q_yaw to q_roll * q_pitch.
We can do it by appending the opposite transformation at the end of the equation: q_roll * q_pitch * q_yaw * conj(q_yaw). q_yaw * conj(q_yaw) is guaranteed to be the identity quaternion as long as we are only dealing with normalized quaternions, and since we are dealing with rotations, that's a safe enough assumption.
In other words, removing the "Yaw" of a quaternion would involve:
Find the yaw of the quaternion
Multiply the quaternion by the conjugate of that.
So we need to find the yaw of the quaternion, which is how much the forward vector is rotated around the up axis by that quaternion.
The simplest way to do that is to just try it out, and measure the result:
Transform a reference forward vector (on the ground plane) by the quaternion
Take that and project it back on the ground plane.
Get the angle between this projection and the reference vector.
Form a "Yaw" quaternion with that angle around the Up axis.
Putting all this together, and assuming you are using a Y=up system of coordinates, it would look roughly like this:
quat remove_yaw(quat q) {
    vec3 forward{0, 0, -1};
    vec3 up{0, 1, 0};

    vec3 transformed = q.rotate(forward);                // where the quaternion sends "forward"
    vec3 projected = transformed.project_on_plane(up);   // drop the vertical component

    if (length(projected) < epsilon) {
        // TODO: unsolvable (pitch is at +/-90 degrees), what should happen here?
    }

    projected = normalize(projected);
    float theta = acos(dot(projected, forward));         // unsigned yaw angle
    if (dot(cross(forward, projected), up) < 0) {
        theta = -theta;                                  // acos() loses the sign; recover it from the cross product
    }

    quat yaw_quat = quat::from_axis_angle(up, theta);
    return multiply(q, conjugate(yaw_quat));
}
This can be simplified a bit, obviously. For example, the conjugate of an axis-angle quaternion is the same thing as a quaternion of the negative angle around the same axis, and I'm sure there are other possible simplifications here. However, I wanted to illustrate the principle as clearly as possible.
There's also a singularity when the pitch is exactly ±90°. In these cases the yaw is gimbal-locked into being indistinguishable from roll, so you'll have to figure out what you want to do when length(projected) < epsilon.
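Tying it back to the original test, a hypothetical usage sketch, reusing the same pseudo-types (and assuming the quaternion exposes a plain w component) together with the asker's RAD_TO_DEG constant:
quat q1_no_yaw = remove_yaw(q1);
quat q2_no_yaw = remove_yaw(q2);
quat delta = multiply(q1_no_yaw, conjugate(q2_no_yaw));
float angle = 2.0f * acos(fabs(delta.w)) * RAD_TO_DEG;   // fabs() guards against the double cover (q and -q are the same rotation)
if (angle > 90.0f) {
    // rotated more than 90 degrees, yaw ignored
}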

Rotating a matrix in the direction of a vector?

I have a player in the shape of a sphere that can move around freely in the directions x and z.
The player's current speed is stored in a vector that is added to the player's position on every frame:
m_position += m_speed;
I also have a rotation matrix that I'd like to rotate in the direction that the player's moving in (imagine how a ball would rotate if it rolled on the floor).
Here's a short video to help visualise the problem: http://imgur.com/YrTG2al
Notice in the video when I start moving up and down (Z) as opposed to left and right (X) the rotation axis no longer matches the player's movement.
Code used to produce the results:
glm::vec3 UP = glm::vec3(0, 1, 0);
float rollSpeed = fabs(m_Speed.x + m_Speed.z);
if (rollSpeed > 0.0f) {
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, glm::cross(UP, glm::normalize(m_Speed)));
}
Thankful for any help.
Your rollSpeed computation is wrong -- e.g., if the signs of m_Speed.x and m_Speed.z are different, they will partly cancel out. You need the norm of the speed in the ground (XZ) plane:
float rollSpeed = sqrt(m_Speed.x * m_Speed.x + m_Speed.z * m_Speed.z);
To be more general about it, you can reuse your cross product instead. That way, your math is less likely to get out of sync -- something like:
glm::vec3 rollAxis = glm::cross(UP, m_Speed);
float rollSpeed = glm::length(rollAxis);
if (rollSpeed > 0.0f) {   // guard against a zero-length axis when the ball isn't moving
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, rollAxis);
}
rollSpeed should be the size of the speed vector.
float rollSpeed = glm::length(m_Speed);
The matrix transform expects an angle. The angle of rotation will depend on the size of your ball, but say its radius is r; then the angle (in radians) you need is
angle = rollSpeed/r;
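Putting the two suggestions together, a hedged sketch (assumptions: Y is up, the ball only moves in the XZ plane as in the question, and m_Radius is a hypothetical member holding the ball's radius):
glm::vec3 rollAxis = glm::cross(UP, m_Speed);   // lies in the ground plane, perpendicular to the motion
float distance = glm::length(m_Speed);          // distance rolled this frame
if (distance > 0.0f) {
    float angle = distance / m_Radius;          // arc length / radius = rotation angle in radians
    m_RotationMatrix = glm::rotate(m_RotationMatrix, angle, glm::normalize(rollAxis));
}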
If I understood correctly, you need a rotation matrix that works for any axis direction (x, y, z).
I think you could write a rotate() method per axis (x, y, z), then check which axis your direction vector points along (direction.x, direction.y or direction.z) and build the rotation matrix from that component.

OpenGL camera/direction vector

I've been trying to figure out what 2 portions of code are doing in this tutorial: Keyboard and Mouse.
Specifically:
// Direction : Spherical coordinates to Cartesian coordinates conversion
glm::vec3 direction(
    cos(verticalAngle) * sin(horizontalAngle),
    sin(verticalAngle),
    cos(verticalAngle) * cos(horizontalAngle)
);
and
// Right vector
glm::vec3 right = glm::vec3(
    sin(horizontalAngle - 3.14f/2.0f),
    0,
    cos(horizontalAngle - 3.14f/2.0f)
);
I don't see how the first one is spherical -> cartesian. When I look it up, I get:
x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)
I've read up on Euler angles, axis-angle and quaternions, but none of those have shed light on what this is doing, or I'm just not able to grasp what I'm reading. ;)
On the 2nd one, shouldn't the right vector just be 90 degrees to the right of the direction vector?
This has a lot to do with the tutorial maker, and how he has decided to use spherical coordinates to generate his viewing angles. His approach is interesting, but remember that you can come up with your own!
// Direction : Spherical coordinates to Cartesian coordinates conversion
glm::vec3 direction(
    cos(verticalAngle) * sin(horizontalAngle),
    sin(verticalAngle),
    cos(verticalAngle) * cos(horizontalAngle)
);
The reason this looks different from the formula you looked up is that it's the same idea converted into a different space: r = 1, the vertical angle is measured from the horizontal (ground) plane rather than from the pole, and the axes are relabeled so that Y is up. The author simply wants the camera perspective to be "straight ahead" when verticalAngle == 0 && horizontalAngle == 0.
Work it out yourself!
x = cos(0) * sin(0) = 0
y = sin(0) = 0
z = cos(0) * cos(0) = 1
So, in this instance, the "look" vector of the camera is pointed directly into the z-axis, which in the case of a typical OpenGL application, would generally be considered as straight ahead (ie: Y-Axis is usually up and down).
Try calculating different angles and see how that would make the camera look vector spin around.
In the second instance, the author has taken some liberties with the formula, and defined it in a way which would only be useful in first-person games / applications. There are some 3D situations in which the camera can be oriented in a different way (a flight simulator, for example). Regardless, it's still the same idea. The author is just adjusting spherical coordinates to his own needs.
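To connect that back to your second question: the right vector here is just the direction formula evaluated on the ground plane, 90° away in horizontalAngle. A small sketch of that equivalence (glm::half_pi comes from <glm/gtc/constants.hpp>):
// direction evaluated at verticalAngle = 0 and horizontalAngle - pi/2:
glm::vec3 right(
    cos(0.0f) * sin(horizontalAngle - glm::half_pi<float>()),   // = sin(horizontalAngle - pi/2)
    sin(0.0f),                                                  // = 0
    cos(0.0f) * cos(horizontalAngle - glm::half_pi<float>())    // = cos(horizontalAngle - pi/2)
);
// which reduces to the tutorial's hard-coded form; the tutorial then builds up = glm::cross(right, direction).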
Personally, I prefer to use Euler angles for camera angles. It's a little more work to set up (you'll need to do some matrix math), but it's a different way of solving the same problem, and it can be more useful in situations that go beyond the typical FPS game.
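For illustration, a minimal sketch of how that matrix-based alternative could look (my own assumption, using glm::yawPitchRoll from <glm/gtx/euler_angles.hpp>; the signs and axes may need flipping to match a particular convention, such as the tutorial's):
glm::mat4 rotation = glm::yawPitchRoll(horizontalAngle, verticalAngle, 0.0f);
glm::vec3 direction = glm::vec3(rotation * glm::vec4(0, 0, 1, 0));   // rotate the "straight ahead" +Z axis
glm::vec3 right     = glm::vec3(rotation * glm::vec4(1, 0, 0, 0));   // rotate the +X axis to get "right"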

Discontinuity in gluLookAt

This is how I calculate my line of sight vector and the up vector.
ly = sin(inclination);
lx = cos(inclination)*sin(azimuth);
lz = cos(inclination)*cos(azimuth);
uy = sin(inclination + M_PI / 2.0);
ux = cos(inclination + M_PI / 2.0)*sin(azimuth + M_PI);
uz = cos(inclination + M_PI / 2.0)*cos(azimuth + M_PI);
inclination is the angle of the line of sight vector from the xz plane and azimuth is the angle in the xz plane.
This works fine until my inclination reaches 225 degrees. At that point there is a discontinuity in the rotation for some reason. (Note: by 225 degrees, I mean it's past the upside-down point.)
Any ideas as to why this is so?
EDIT: Never mind, figured it out. The azimuth does not need the 180° offset for the up vector.
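For reference, a sketch of what that fix would look like with the same variables (only the azimuth offset removed; you can check that the result stays perpendicular to the line-of-sight vector):
uy = sin(inclination + M_PI / 2.0);
ux = cos(inclination + M_PI / 2.0) * sin(azimuth);
uz = cos(inclination + M_PI / 2.0) * cos(azimuth);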
I think you are talking about a limit angle of 90 degrees (pi/2). What you get is normal behavior. When using gluLookAt, you specify an 'up' vector, which is used to determine the roll of the camera. In the special case where you are looking straight up or down, the 'up' vector becomes parallel to the eye direction vector, so it is not possible to determine the roll of the camera (the problem has an infinite number of solutions, and gluLookAt picks an arbitrary one). Maybe you should compute this 'up' vector from your inclination and azimuth.

Convert Mouse pos into direction and back

I want to ask what would be the best formula to convert the mouse X,Y position into one of 16 directions relative to the player position.
I work in C++ with SFML 1.6, so I can get every position easily, but I don't know how to convert them based on the angle from the player position or something like that. (I was never good at math, and for more than 4 directions the if statements look too complex.)
Also, I want to send the direction to the server, which converts it back into a delta X,Y so it can do something like:
player.Move(deltaX * speed * GetElapsedTime(), ...Y);
The "easiest" way would be to convert your two sets of co-ordinates (one for current player position, one for current mouse position) into an angle relative to the player's position, where an angle of 0 is the line straight ahead of the player (or north, depending on how your game works). Then each of your sixteen directions would translate to a given 22.5 degree interval.
However, since you said you're bad at math, I imagine you're looking for something more concrete than that.
You could use atan2 to get the angle between the mouse position and the positive X axis:
#include <cmath>
float player_x = ...;
float player_y = ...;
float mouse_x = ...;
float mouse_y = ...;
float angle = std::atan2(mouse_y - player_y, mouse_x - player_x);
The angle returned by std::atan2() is a value between -M_PI (exclusive) and M_PI (inclusive):
-M_PI        Left  (-180°)
-0.5 * M_PI  Down  (-90°)
 0           Right (0°)
 0.5 * M_PI  Up    (90°)
 M_PI        Left  (180°)
You can transform this value depending on how you want your mapping to "one of 16 directions", i.e., depending on what value you want to assign to which discrete direction.
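For instance, here is a hedged sketch of one possible mapping (assumption: index 0 means "right" and indices increase counterclockwise in 22.5° steps; note that with SFML's screen coordinates, where y grows downward, the rotation sense is mirrored on screen):
const float PI = 3.14159265f;
float angle = std::atan2(mouse_y - player_y, mouse_x - player_x);                      // in (-PI, PI]
float normalized = (angle < 0.0f) ? angle + 2.0f * PI : angle;                         // wrap into [0, 2*PI)
int dir = static_cast<int>(std::floor(normalized / (2.0f * PI / 16.0f) + 0.5f)) % 16;  // nearest of the 16 sectors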
Given the angle, getting a unit vector to represent the X/Y delta is quite easy, too:
float dx = std::cos(angle);
float dy = std::sin(angle);
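On the server side, a sketch of going from the 16-direction index back to a delta, plugged into the Move call from the question (same sector convention as above):
const float PI = 3.14159265f;
float angle = dir * (2.0f * PI / 16.0f);   // sector index back to radians
float deltaX = std::cos(angle);
float deltaY = std::sin(angle);
player.Move(deltaX * speed * GetElapsedTime(), deltaY * speed * GetElapsedTime());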