Rotation, atan2, CCW/CW continuity - C++

This is what I have
I have a plane in 2D (X, Y).
I set its destination by clicking on the screen (X', Y').
I calculate the angle it needs to turn to face this destination with:
// Calculate the angle between the plane's position and the destination point
CVector3 facingVec = m_vDestination - m_vPosition;
fAngle = -Math::radiansToDegrees( (float)atan2f(m_vDestination.x - m_vPosition.x, m_vDestination.y - m_vPosition.y) );
// This doesn't work: when rotating from e.g. 350 degrees to 0,
// the plane goes all the way around: 360, 350, 340, 330,
// ..., 120, ..., 100, 90, ... down to zero
float angleToTurn = fAngle - m_fRotationAngle;
if (angleToTurn < 0)
{
    angleToTurn += 360.0f;
}
m_fRotationAngle += angleToTurn / 5;
// Move the unit towards the calculated angle m_fRotationAngle
m_vDirection.x = (-sin(Math::degreesToRadians(m_fRotationAngle)));
m_vDirection.y = (cos(Math::degreesToRadians(m_fRotationAngle)));
m_vPosition += ( 2 * m_vDirection * fDelta);
This is how it looks
YT Video - sorry for the demo version, I couldn't get anything free at the moment.
This is what I need
I need this to behave properly. Let's say the plane is rotated to angle 350.
I set the destination and the new angle should be 15.
Instead of going 350, 340, 330, 320, 310, 300, 290, ..., 10, 0, 15
it should continue 350, 0, 15, taking the short way through zero.
Hope you can help me out with this, guys. I've already dropped the Bézier approach, and I've been struggling with this for a few days now.

If I read this correctly, you're trying to find the smallest angle to interpolate between the two vectors? If so, the following algorithm should work:
Find the angle of the first vector, relative to the fixed vector [1, 0]. This is a1.
Find the angle of the second vector, relative to the fixed vector [1, 0]. This is a2.
Let da = a2 - a1.
If da > 180, da -= 360;
else if da < -180, da += 360;
You need to calculate both angles with respect to the same fixed third vector [1, 0] so you can determine whether to rotate left or right.
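In C++, the delta computation might look like this minimal sketch (shortestAngleDelta is a hypothetical helper; the usage line reuses the question's variable names as an assumption about the surrounding code):
#include <cmath>

// Sketch: shortest signed angular difference, in degrees, wrapped to [-180, 180].
// Positive means rotate counter-clockwise, negative clockwise.
float shortestAngleDelta(float from, float to)
{
    float da = std::fmod(to - from, 360.0f); // now within (-360, 360)
    if (da > 180.0f)
        da -= 360.0f;
    else if (da < -180.0f)
        da += 360.0f;
    return da;
}

// Possible usage in the question's update step:
// m_fRotationAngle += shortestAngleDelta(m_fRotationAngle, fAngle) / 5.0f;
With 350 -> 15 this yields da = 25, so the plane turns through 360/0 instead of going all the way around.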
Edit: I saw your YouTube link was broken; now I see it's working again. I think my answer is what you're after.

Related

Ellipse rotated not centered

I am trying to draw a rotated ellipse that is not centered at the origin (in C++).
So far my code "works":
for (double i = 0; i <= 360; i = i + 1) {
    theta = i * pi / 180;
    x = (polygonList[compt]->a_coeff / 2) * sin(theta) + polygonList[compt]->centroid->datapointx;
    y = (polygonList[compt]->b_coeff / 2) * cos(theta) + polygonList[compt]->centroid->datapointy;
    xTmp = (x - polygonList[compt]->centroid->datapointx) * cos(angle1) - (y - polygonList[compt]->centroid->datapointy) * sin(angle1) + polygonList[compt]->centroid->datapointx;
    yTmp = (x - polygonList[compt]->centroid->datapointx) * sin(angle1) + (y - polygonList[compt]->centroid->datapointy) * cos(angle1) + polygonList[compt]->centroid->datapointy;
}
polygonList is a list of "blocs", each of which will be replaced by an ellipse of the same area.
My issue is that the angles are not quite exact: if I were to fit a protractor to the shape of my ellipse, the protractor would obviously get squeezed, and so would the angles (is that clear?).
Here is an example: I am trying to set a point on the top ellipse (E1) that lies on a line drawn between the centroid of E1 and any point on the second ellipse (E2). In this example, the point on E2 lies at an angle of ~220-230 degrees. I am able to catch this angle, and the angle seems OK.
The problem is that if I try to project this point onto E1 using this angle of ~225 degrees, I end up on the second red circle on top. It looks like my angle is now ~265 degrees, but in fact, if I shape the protractor to fit my ellipse, I get the right angle (~225, cf. img 2).
It is a bit hard to see the angle on that re-shaped protractor, but it does show ~225 degrees.
My conclusion is that the ellipse is drawn as if I had drawn a circle and then compressed it, which changes the spacing between the angles.
Could someone tell me how I could fix that?
PS: to draw those ellipses I just use a for loop which plots a dot at every angle (from 0 to 360). We can clearly see in the first picture that the distances between the dots differ depending on whether we are at 0 or at 90 degrees.
Your parametrisation is exactly that: a circle is just the case of an ellipse whose axes are equal. It sounds like you need to use the rational representation of the ellipse instead of the standard one: https://en.m.wikipedia.org/wiki/Ellipse
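For illustration, a dot can be placed at a true geometric angle phi (measured at the centre of the axis-aligned ellipse, before the rotation step) using the polar form r(phi) = a*b / sqrt((b*cos(phi))^2 + (a*sin(phi))^2). This is a minimal sketch under that assumption; a and b stand for the semi-axes (a_coeff/2 and b_coeff/2 in the question's naming), phi is measured from the +x axis in the standard convention, and the centroid translation plus the xTmp/yTmp rotation would still be applied afterwards:
#include <cmath>

// Sketch: point on an axis-aligned, origin-centred ellipse at the true polar
// angle phi (radians); a and b are the semi-axes.
void ellipsePointAtTrueAngle(double a, double b, double phi,
                             double &x, double &y)
{
    double bc = b * std::cos(phi);
    double as = a * std::sin(phi);
    double r = (a * b) / std::sqrt(bc * bc + as * as); // polar form r(phi)
    x = r * std::cos(phi);
    y = r * std::sin(phi);
}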
So, I asked the question above so that I could find a possible overlap between two ellipses by checking the distance between any point on E2 and its projection on E1: if the distance between the centroid of E1 and the projected dot on E1 is larger than the distance from the centroid of E1 to a dot on E2, I'll assume an overlap. I reckon this solution has never been tried (or I haven't searched enough) and should work fine. But before it could work, I needed to get those angles right.
I have found a way to avoid using angles and projected dots altogether, by checking the foci:
the sum of the distances from the foci A and B to any point on the ellipse is constant (let's call it DE1 for E1).
I then check the sum of the distances between my foci and any point on E2. If that sum becomes less than DE1, I'll assume a connection.
So far it seems to work fine :)
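A minimal sketch of that foci test, assuming E1's foci f1 and f2 and the constant sum DE1 (E1's major-axis length) are already known, and that points on E2 are sampled the same way as in the drawing loop:
#include <cmath>

struct Point { double x, y; };

// Sketch: a point p (e.g. sampled on E2) lies on or inside E1 exactly when
// the sum of its distances to E1's foci is at most DE1.
bool touchesE1(const Point &f1, const Point &f2, double DE1, const Point &p)
{
    double sum = std::hypot(p.x - f1.x, p.y - f1.y)
               + std::hypot(p.x - f2.x, p.y - f2.y);
    return sum <= DE1;
}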
I'll put that here for anyone in need.
Flo

Adjusting glRotate, using dot product

Introducing:
I'm developing a little tower defense game in OpenGL, and currently I'm despairing over a little problem...
I want the projectiles from the tower to aim with the head facing the unit. So my problem is more a mathematical one, but it belongs with OpenGL :)
I had the following idea: I could use a dot product to get an angle of rotation around the X axis, so that the head points straight down or lies flat to the ground depending on the distance, and after that an additional angle of rotation around the Y axis, so that the head of the arrow is always adjusted toward the unit it's aiming at.
My code for the angle of rotation around the X axis (I called it m_fYNeigung because the height (Y) of the head changes by rotating around the X axis) looks like this:
plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_fYNeigung =
RADIANS_TO_DEGREES (acos ((float)
(
(faTowerPosition[0]) * (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[0]) +
(faTowerPosition[1] - 1) * (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[1]) +
(faTowerPosition[2]) * (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[2])
)
/
(
fabs (faTowerPosition[0]) * fabs (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[0]) +
fabs (faTowerPosition[1] - 1) * fabs (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[1]) +
fabs (faTowerPosition[2]) * fabs (plocalTowerArray[(sizeMapIndexY * 12) + sizeMapIndexX].Projektils[byteProjectilIndex].m_faProDirectionVector[2])
)
));
where faTowerPosition is the first vector, which points down from the top of the tower (the arrow also starts at faTowerPosition[X/Y/Z]); the second vector for the dot product is m_faProDirectionVector, a normalized direction vector describing the route of the arrow from the tower to the unit.
The OpenGL drawing part looks as simple as this:
for (sizeJ = 0; sizeJ < localTowerArray[sizeI].m_byteProjectilAmount; sizeJ++)
{
    if (localTowerArray[sizeI].Projektils[sizeJ].m_bOnFlight == true)
    {
        glPushMatrix();
        glTranslatef (localTowerArray[sizeI].Projektils[sizeJ].m_faProPosition[0], localTowerArray[sizeI].Projektils[sizeJ].m_faProPosition[1], localTowerArray[sizeI].Projektils[sizeJ].m_faProPosition[2]);
        //glRotatef (360.0f - localTowerArray[sizeI].Projektils[sizeJ].m_fXNeigung, 0, 1, 0);
        glRotatef (localTowerArray[sizeI].Projektils[sizeJ].m_fYNeigung, 1, 0, 0);
        DrawWaveFrontObject (m_pArrowProjektilObject);
        glPopMatrix();
    }
}
Just ignore the calculations I'm doing to the angle; I only did that to experiment with the behaviour of the arrows. I noticed that the arrows appear to act differently depending on the coordinates the tower was built on (the buildable map is scaled by x: -3.4 to 3.4 and z: 4 to -4): the quadrants -x/z, -z/x, z/x, -z/-x all seem to behave differently, and the behaviour also differs depending on whether the unit is running on the left or the right side of the tower. So what did I forget in using the dot product this way?
First of all, your code is very difficult to understand, so I'm guessing a lot in trying to answer you. If I assume something wrong, my apologies.
I am assuming that you want to use Euler angle rotations to align your projectiles correctly. So, first you will do an X rotation and after that a Y rotation.
To do an X rotation, your vectors for the dot product must lie in the YZ plane, and assuming that your projectile starts out pointing in the Z direction, your first vector is (0, 0, 1). The second vector, as you said, is a vector pointing at the unit and can be expressed as (px, py, pz). You must project this vector onto the YZ plane to get the second vector for your dot product, so it will be (0, py, pz).
Now, to calculate the dot product you apply the following formula:
x1*x2 + y1*y2 + z1*z2 = |p1| * |p2| * cos a, where |p1| and |p2| are the moduli of the vectors (their lengths).
In this example the first vector is unitary, but the second is not: |p2| = sqrt(py^2 + pz^2). Therefore:
cos a = pz / sqrt(py^2 + pz^2), so a = acos(pz / sqrt(py^2 + pz^2))
This will give you the rotation angle around the X axis. Do the same calculation to obtain the Y rotation angle.
PS: After I wrote this answer, I noticed that you use the function "fabs". I guess you want to find the modulus of your second vector, but fabs gives you the absolute value of a scalar. To calculate the modulus of a vector (its length) you need to use the formula cited above.
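As a concrete sketch of this recipe (hypothetical names; the target direction is assumed normalized and the projectile to start out facing +Z):
#include <cmath>

// Sketch: Euler X-rotation angle, in degrees, between the rest direction
// (0, 0, 1) and the target direction projected onto the YZ plane.
float xRotationDegrees(float dy, float dz)
{
    float len = std::sqrt(dy * dy + dz * dz); // modulus of (0, dy, dz)
    if (len == 0.0f)
        return 0.0f;                          // target lies on the X axis
    float a = std::acos(dz / len);            // cos a = dz / |(0, dy, dz)|
    a = a * 180.0f / 3.14159265f;             // radians to degrees
    return (dy < 0.0f) ? -a : a;              // acos drops the sign; restore it
}
The same pattern, with the direction projected onto the XZ plane instead, gives the Y rotation angle.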

Ray Tracing: Sphere distortion due to Camera Movement

I am building a ray tracer from scratch. My question is:
When I change the camera coordinates, the sphere changes to an ellipse. I don't understand why this is happening.
Here are some images to show the artifacts:
Sphere: 1 1 -1 1.0 (Center, radius)
Camera: 0 0 5 0 0 0 0 1 0 45.0 1.0 (eyepos, lookat, up, foy, aspect)
But when I change the camera coordinates, the sphere looks distorted, as shown below:
Camera: -2 -2 2 0 0 0 0 1 0 45.0 1.0
I don't understand what is wrong. If someone can help that would be great!
I set my imagePlane as follows:
//Computing u,v,w axes coordinates of Camera as follows:
{
    Vector a = Normalize(eye - lookat); //Camera_eye - Camera_lookAt
    Vector b = up; //Camera Up Vector
    m_w = a;
    m_u = b.cross(m_w);
    m_u.normalize();
    m_v = m_w.cross(m_u);
}
After that I compute directions for each pixel from the Camera position (eye) as mentioned below:
//Then Computing direction as follows:
int half_w = m_width * 0.5;
int half_h = m_height * 0.5;
double half_fy = fovy() * 0.5;
double angle = tan( ( M_PI * half_fy) / (double)180.0 );
for (int k = 0; k < pixels.size(); k++) {
    double j = pixels[k].x(); //width
    double i = pixels[k].y(); //height
    double XX = aspect() * angle * ( (j - half_w) / (double)half_w );
    double YY = angle * ( (half_h - i) / (double)half_h );
    Vector dir = (m_u * XX + m_v * YY) - m_w;
    directions.push_back(dir);
}
After that:
for each dir:
    Ray ray(eye, dir);
    int depth = 0;
    t_color += Trace(g_primitive, ray, depth);
After playing with it a lot, and with the help of the comments from all of you, I was able to get my ray tracer working properly. Sorry for answering late, but I would like to close this thread with a few remarks.
So, the above-mentioned code is perfectly correct. Based on my own assumptions (as mentioned in the comments above) I decided to set my camera parameters like that.
The problem I mentioned above is normal behaviour of the camera (as also mentioned in the comments).
I have got good results now, but there are a few things to check while coding a ray tracer:
1) Always make sure to handle the radians-to-degrees (or vice versa) conversion when computing the FOV and ASPECT RATIO. I did it as follows:
double angle = tan((M_PI * 0.5 * fovy) / 180.0);
double y = angle;
double x = aspect * angle;
2) While computing triangle intersections, make sure to implement the cross product properly (see the sketch after this list).
3) When intersecting different objects, make sure to keep the intersection at the minimum distance from the camera.
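For point 2, a self-contained sketch of the cross product (Vec3 is a hypothetical type):
struct Vec3 { double x, y, z; };

// Sketch: right-handed cross product c = a x b.
Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    Vec3 c;
    c.x = a.y * b.z - a.z * b.y;
    c.y = a.z * b.x - a.x * b.z;
    c.z = a.x * b.y - a.y * b.x;
    return c;
}
For point 3, this just means keeping, among all hits with t > 0, the one with the smallest t.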
Here's the result I got:
Above is a very simple model (courtesy UC Berkeley), which I ray traced.
This is the correct behavior. Get a camera with a wide angle lens, put the sphere near the edge of the field of view and take a picture. Then in a photo app draw a circle on top of the photo of the sphere and you will see that it's not a circular projection.
This effect will be magnified by the fact that you set aspect to 1.0 but your image is not square.
A few things to fix:
A direction vector is (to - from). You have (from - to), so a is pointing backward. You'll want to add m_w at the end rather than subtract it. Also, this fix will rotate your m_u, m_v by 180 degrees, which means you'll also want to change (j - half_w) to (half_w - j).
Separately, putting all the pixels and all the directions into lists is not as efficient as just looping over the x, y values.
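Applied to the question's snippets, the two fixes might look like this sketch (same member names as the question; the Vector API is assumed):
// Camera basis: direction is (to - from), so m_w now points forward.
Vector a = Normalize(lookat - eye);
m_w = a;
m_u = up.cross(m_w);
m_u.normalize();
m_v = m_w.cross(m_u);

// Per pixel: the horizontal term is flipped, and m_w is added, not subtracted.
double XX = aspect() * angle * ((half_w - j) / (double)half_w);
double YY = angle * ((half_h - i) / (double)half_h);
Vector dir = (m_u * XX + m_v * YY) + m_w;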

Two points rotating around same center but distance grows

I want to achieve two points rotating around each other, so I use a rotation matrix. However, I now have the problem that the distance between the points is growing (see attached video 1). The distance should stay constant over my whole simulation.
Here is the code I use for calculating the speed, where p0 and p1 are the two points:
double xPos = p0.x + p1.x;
double yPos = p0.y + p1.y;
//The center between p0 and p1
xPos /= 2;
yPos /= 2;
//the rotation angle
double omega = 0.1;
//calculate the new positions
double x0new = xPos + (p0.x - xPos) * std::cos(omega) - (p0.y - yPos) * std::sin(omega);
double y0new = yPos + (p0.x - xPos) * std::sin(omega) + (p0.y - yPos) * std::cos(omega);
double x1new = xPos + (p1.x - xPos) * std::cos(omega) - (p1.y - yPos) * std::sin(omega);
double y1new = yPos + (p1.x - xPos) * std::sin(omega) + (p1.y - yPos) * std::cos(omega);
//the speed is exactly the difference, as I integrate one timestep
p0.setSpeed(p0.x - x0new, p0.y - y0new);
p1.setSpeed(p1.x - x1new, p1.y - y1new);
I then integrate the speed over exactly one timestep. What is wrong in my calculation?
Update
It seems that my integration is wrong. If I set the positions directly, it works perfectly. However, I do not know what is wrong with this integration:
setSpeed(ux, uy) {
    ux_ = ux;
    uy_ = uy;
}
// integrate one timestep t = 1
move() {
    x = x + ux_;
    y = y + uy_;
}
Video of my behaviour
There's nothing clearly wrong in this code, but the "speed" integration that isn't shown suggests that you might be integrating linearly between the old and the new position, which would make the orbits expand when speed > nominal_speed and contract when speed < nominal_speed.
As I suspected: the integration actually extrapolates along the line segment between the old and the new position, both of which are supposed to be at a fixed distance from the center (a physical simulation would probably make the trajectory elliptical...).
Thus, if the extrapolation factor were 0, the new position would be on the calculated perimeter. If it were < 0 (and > -1), you'd be interpolating inside the expected trajectory.
      O        This beautiful ascii art is trying to illustrate the integration:
     /         x is the original position, o is the new one and O is the
    / ___----- "integrated" value, and the arc is a perfect circle :)
   o--         Only at the calculated position o, there is no expansion.
 --/
 / /
/ /
| /
x
At first glance, the main reason is that you update the p0 and p1 coordinates in each iteration. That accumulates inaccuracies, which possibly come from setSpeed.
Instead, you should use the constant initial coordinates of p0 and p1 and increase the omega angle, as in the sketch below.
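A minimal sketch of that idea: store the initial offsets from the centre once, accumulate omega each step, and recompute the positions from those constants, so round-off can no longer build up (names are hypothetical):
#include <cmath>

// Sketch: position of a point whose *initial* offset from the rotation
// centre (cx, cy) was (dx0, dy0), after rotating by omegaTotal radians.
void rotatedPosition(double cx, double cy, double dx0, double dy0,
                     double omegaTotal, double &x, double &y)
{
    x = cx + dx0 * std::cos(omegaTotal) - dy0 * std::sin(omegaTotal);
    y = cy + dx0 * std::sin(omegaTotal) + dy0 * std::cos(omegaTotal);
}

// Per timestep: omegaTotal += 0.1; then call rotatedPosition for p0 and p1.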

Direct3D & iPhone Accelerometer Matrix

I am using a WinSock connection to get the accelerometer info off an iPhone and into a Direct3D application. I have modified Apple's GLGravity sample code to get my helicopter moving in relation to gravity; however, I need to "cap" the movement so the helicopter can't fly upside down! I have tried to limit the output of the accelerometer like so:
if (y < -0.38f) {
    y = -0.38f;
}
Except this doesn't seem to work!? The only thing I can think of is that I need to modify the custom matrix, but I can't seem to get my head around what I need to change. The matrix code is below.
_x = acceleration.x;
_y = acceleration.y;
_z = acceleration.z;
float length;
D3DXMATRIX matrix, t;
memset(matrix, '\0', sizeof(matrix));
D3DXMatrixIdentity(&matrix);
// Make sure acceleration value is big enough.
length = sqrtf(_x * _x + _y * _y + _z * _z);
if (length >= 0.1f && kInFlight == TRUE) { // We have an acceleration value good enough to work with.
    matrix._44 = 1.0f;
    // First matrix column is a gravity vector.
    matrix._11 = _x / length;
    matrix._12 = _y / length;
    matrix._13 = _z / length;
    // Second matrix column is an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz},
    // defined by the equation Gx * x + Gy * y + Gz * z = 0, in which we set x = 0 and y = 1.
    matrix._21 = 0.0f;
    matrix._22 = 1.0f;
    matrix._23 = -_y / _z;
    length = sqrtf(matrix._21 * matrix._21 + matrix._22 * matrix._22 + matrix._23 * matrix._23);
    matrix._21 /= length;
    matrix._22 /= length;
    matrix._23 /= length;
    // Set third matrix column as a cross product of the first two.
    matrix._31 = matrix._12 * matrix._23 - matrix._13 * matrix._22;
    matrix._32 = matrix._21 * matrix._13 - matrix._23 * matrix._11;
    matrix._33 = matrix._11 * matrix._22 - matrix._12 * matrix._21;
}
If anyone can help it would be much appreciated!
I think double integration is probably over-complicating things. If I understand the problem correctly, the iPhone is giving you a vector of values from the accelerometers. Assuming the user isn't waving it around, that vector will be of roughly constant length, and pointing directly downwards with gravity.
There is one major problem with this, and that is that you can't tell when the user rotates the phone around the horizontal. Imagine you lie your phone on the table, with the bottom facing you as you're sitting in front of it; the gravity vector would be (0, -1, 0). Now rotate your phone around 90 degrees so the bottom is facing off to your left, but is still flat on the table. The gravity vector is still going to be (0, -1, 0). But you'd really want your helicopter to have turned with the phone. It's a basic limitation of the fact that the iPhone only has a 2D accelerometer, and it's extrapolating a 3D gravity vector from that.
So let's assume that you've told the user they're not allowed to rotate their phone like that, and they have to keep it with the bottom point to you. That's fine, you can still get a lot of control from that.
Next, you need to cap the input such that the helicopter never goes more than 90 degrees over on its side. Imagine the vector that you're given as a stick attached to your phone, dangling with gravity. The vector you have describes the direction of gravity relative to the phone's flat surface. If it were (0, -1, 0), the stick is pointing directly downwards (-y). If it were (1, 0, 0), the stick is pointing to the right of the phone (+x), which implies that the phone has been twisted 90 degrees clockwise (looking away from you at the phone).
Assume in this metaphor that the stick has full rotational freedom. It can be pointing in any direction from the phone. So moving the stick around describes the surface of a sphere. But crucially, you only want the stick to be able to move around the lower half of that sphere. If the user twists the phone so that the stick would be in the upper half of the sphere, you want it to cap such that it's pointing somewhere around the equator of the sphere.
You can achieve this quite cleanly by using polar co-ordinates. 3D vectors and polar co-ordinates are interchangeable - you can convert to and from without losing any information.
Convert the vector you have (normalised, of course) into a set of 3D polar co-ordinates (you should be able to find this logic on the web quite easily). This will give you an angle around the horizontal plane, an angle for the vertical plane (and a distance from the origin, which for a normalised vector should be 1.0). If the vertical angle is positive, the vector is in the upper half of the sphere; if negative, it's in the lower half. Then cap the vertical angle so that it is always zero or less (and so in the lower half of the sphere). Then you can take the horizontal angle and the capped vertical angle and convert them back into a vector.
This new vector, if plugged into the matrix code you already have, will give you the correct orientation, limited to the range of motion you need. It will also be stable if the user turns their phone slightly beyond the 90 degree mark - this logic will keep your directional vector as close to the user's current orientation as possible, without going beyond the limit you set.
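A sketch of that clamp, assuming a normalized input vector and, as above, a resting gravity vector of (0, -1, 0):
#include <cmath>

// Sketch: clamp a normalized vector to the lower half of the sphere by
// capping its elevation (the angle above the horizontal XZ plane) at zero.
void clampToLowerHemisphere(float &x, float &y, float &z)
{
    float azimuth = std::atan2(z, x);   // angle around the horizontal plane
    float elevation = std::asin(y);     // positive = upper half of the sphere
    if (elevation > 0.0f)
        elevation = 0.0f;               // pin to the equator
    x = std::cos(elevation) * std::cos(azimuth);
    y = std::sin(elevation);
    z = std::cos(elevation) * std::sin(azimuth);
}
Plugging the clamped vector into the existing matrix code then limits the helicopter's range of motion as described.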
Try normalizing the acceleration vector first. (edit: after you check the length) (edit edit: I guess I need to learn how to read... how do I delete my answer?)
So if I understand this correctly, the iPhone is feeding you accelerometer data, saying how hard you're moving the iPhone in 3 axes.
I'm not familiar with that Apple sample, so I don't know what it's doing. However, it sounds like you're mapping acceleration directly to orientation, but I think what you want to do is doubly integrate the acceleration in order to obtain a position, and look at changes in position in order to orient the helicopter. Basically, this is more of a physics problem than a Direct3D problem.
It looks like you are using the acceleration vector from the phone to define one axis of an orthogonal frame of reference. And I suppose +Y points towards the ground, so you are concerned about the case when the vector points towards the sky.
Consider the case when the iPhone reports {0, -6.0, 0}. You will change this vector to {0, -.38, 0}. But they both normalize to {0, -1.0, 0}. So the effect of clamping y at -.38 is influenced by the magnitude of the other two components of the vector.
What you really want is to limit the angle of the vector to the XZ plane when Y is negative.
Say you want to limit it to no more than 30 degrees from the XZ plane when Y is negative. First normalize the vector, then:
const float limitAngle = 30.f * PI / 180.f; // angle in radians
const float sinLimitAngle = sinf(limitAngle);
const float XZLimitLength = sqrtf(1 - sinLimitAngle * sinLimitAngle);
if (_y < -sinLimitAngle)
{
    _y = -sinLimitAngle;
    float XZlengthScale = XZLimitLength / sqrtf(_x * _x + _z * _z);
    _x *= XZlengthScale;
    _z *= XZlengthScale;
}