Make sprite walk along an angle - C++

I am trying to make a sprite walk towards another sprite:
movement_angle = atan2(y - target->y, x - target->x);
if (isWalkable(game, delta))
{
    y -= ((int)(movementspeed) * delta) * sin(movement_angle);
    x -= ((int)(movementspeed) * delta) * cos(movement_angle);
}
I am calculating the angle and adding the resulting offsets to my x and y coordinates.
But the movement is not right: the sprite does not follow the path I expect, takes odd turns, and so on.

It's hard to say exactly what's going wrong from your code, but here are some things you could look at:
To get a vector from a point to the target, you need to subtract the point's x from target->x and the point's y from target->y.
At least movement_angle needs to be a floating-point number, though you'll likely want to use doubles for all your geometry values.
Why bother with trigonometric functions at all? You want to walk directly from point (x, y) to (target->x, target->y), so why not just build a vector, normalize it, and multiply by your movementspeed and delta?
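For illustration, here is a minimal sketch of that vector approach; the helper name and the double types are assumptions, not from the question:

#include <cmath>

// Step a point (x, y) toward (tx, ty) by movementspeed * delta,
// with no trigonometry involved.
void stepToward(double &x, double &y, double tx, double ty,
                double movementspeed, double delta)
{
    double dx = tx - x;
    double dy = ty - y;
    double length = std::sqrt(dx * dx + dy * dy);
    if (length > 0.0)   // already at the target; avoid dividing by zero
    {
        x += (dx / length) * movementspeed * delta;
        y += (dy / length) * movementspeed * delta;
    }
}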

Well, you are casting part of the result to an int: (int)(movementspeed) truncates the fractional part before the multiplication by delta and the sine/cosine. I think that may be causing your problem.
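A minimal sketch of that fix, assuming movementspeed is meant to stay fractional: drop the cast and keep the whole update in floating point, rounding (if at all) only when drawing:

// inside the isWalkable(...) block, with no (int) cast:
y -= movementspeed * delta * sin(movement_angle);
x -= movementspeed * delta * cos(movement_angle);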

Related

C++ dot product to get if one object is in front of another?

I'm trying to use Vector2::Dot and Math::Acos to determine whether one game object is in front of another. With what I have, I'm able to get whether the first object's forward vector is at a perpendicular angle to the second actor's forward vector:
float dotResult = Vector2::Dot(GetForwardDir(), secondActor->GetForward());
float angle = Math::Acos(dotResult);
if (angle <= minAngle)
    // do something
Where GetForwardDir() is:
return Vector2(Math::Cos(mRotation), -Math::Sin(mRotation));
However, this does not account for where the second actor is relative to the first actor. I don't think I'm understanding something here about dotResult and the Acos angle.
How can I use this to get whether the first actor is in front of the second?
You probably want a cross product for this. The dot product will give you a scaled cosine of the angle between the two vectors. A cross product will give you the sine, which is more useful here.
Basically, make two vectors:
ActorA position + ActorA.Forward vector
ActorA position + (ActorB.position - ActorA.position)
Or, just
ActorA.forward X normalize(ActorB.Position - ActorA.position);
Since you're doing this in 2D, the result will be a scalar. If the sign of the result is positive, then ActorB is within 180º (±90º) of ActorA's forward vector, from ActorA's position.
Technically, you're calculating an orthogonal vector, the Z component of which will tell you if the target is in front of or behind the viewer, but the X and Y components of that vector are zero, because orthogonal.
Purists will say that the cross product does not exist in 2D, and they're technically correct, but this is how it's done.
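A minimal C++ sketch of that 2D "cross product" (the perp-dot product); the Vec2 struct and function names are illustrative, not from the question's engine:

#include <cmath>

struct Vec2 { float x, y; };

// 2D cross product (perp-dot): positive when b points counterclockwise of a.
float cross2D(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// The sign test from the answer: which side of actorA's forward is actorB on?
bool isOnPositiveSide(Vec2 aPos, Vec2 aForward, Vec2 bPos)
{
    Vec2 toB = { bPos.x - aPos.x, bPos.y - aPos.y };
    float len = std::sqrt(toB.x * toB.x + toB.y * toB.y);
    if (len == 0.0f)
        return false;                        // coincident positions
    Vec2 n = { toB.x / len, toB.y / len };   // normalize(ActorB.Position - ActorA.position)
    return cross2D(aForward, n) > 0.0f;
}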

'Ray' creation for raypicking not fully working

I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned were incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x() + camX, -event->y() - camY, -1.0);
    QVector3D farP  = QVector3D(event->x() + camX, -event->y() - camY,  1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat: I used to load up the viewport, model, and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were values in the range of -2 to 2 for x, y, and z on each mouse event. That is why I'm trying this method, but this method doesn't work with camera rotation and zoom.
I don't want to do pixel color picking because, who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (not directly tested). Make sure that all vectors are in the same space; if you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
1. Use pos = gluUnProject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space, near being the value given to glFrustum() or gluPerspective().
2. The origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace.
3. The direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig).
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so this is optional.
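As an illustration of those steps, here is an untested C++ sketch using the common equivalent variant that unprojects the mouse at both the near (winz = 0) and far (winz = 1) planes and takes their difference as the direction; the function name is made up:

#include <cmath>
#include <GL/glu.h>

// Build a picking ray from a mouse position, assuming the GL model-view,
// projection, and viewport state match what was used for rendering.
bool buildPickRay(double mouseX, double mouseY,
                  double rayOrig[3], double rayDir[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble winY = view[3] - mouseY;   // window y starts at the top, GL's at the bottom
    GLdouble nx, ny, nz, fx, fy, fz;
    if (!gluUnProject(mouseX, winY, 0.0, model, proj, view, &nx, &ny, &nz) ||
        !gluUnProject(mouseX, winY, 1.0, model, proj, view, &fx, &fy, &fz))
        return false;

    rayOrig[0] = nx; rayOrig[1] = ny; rayOrig[2] = nz;
    double dx = fx - nx, dy = fy - ny, dz = fz - nz;
    double len = std::sqrt(dx*dx + dy*dy + dz*dz);
    rayDir[0] = dx / len; rayDir[1] = dy / len; rayDir[2] = dz / len;
    return true;
}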
OK, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the Qt data types for the matrices and with the logic pertaining to matrix transformations.
The particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1): ndcX = x / screenWidth * 2 - 1, ndcY = (screenHeight - y) / screenHeight * 2 - 1
grabbing the 4x4 matrix for my view matrix (can be the one used when rendering, or re calculated)
In a new matrix, store the inverse view matrix multiplied by the inverse projection matrix (equivalently, the inverse of projection * view).
In order to build the ray, I had to do the following:
Multiply that inverse matrix by a 4-component vector holding the previously calculated x and y coordinates, -1 for z, and 1 for w.
Then divide this vector by its last (w) component.
Create another 4-component vector just like the last, but with 1 for z instead of -1.
Once again, divide it by its w component.
Now the endpoints of the ray on the near and far planes have been created, so it can intersect with anything along it in the scene.
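Here is a minimal Qt-flavored sketch of those steps; the view and projection matrix names are assumptions standing in for whatever the renderer actually uses:

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>

// Unproject an NDC point through inverse(projection * view).
QVector3D unprojectNdc(float ndcX, float ndcY, float ndcZ,
                       const QMatrix4x4 &view, const QMatrix4x4 &projection)
{
    QMatrix4x4 inv = (projection * view).inverted();
    QVector4D p = inv * QVector4D(ndcX, ndcY, ndcZ, 1.0f);
    return p.toVector3D() / p.w();   // divide by the w component
}

// Usage for a click at (mx, my) in a w-by-h window:
// float ndcX = 2.0f * mx / w - 1.0f;
// float ndcY = (h - my) / float(h) * 2.0f - 1.0f;  // flip the y axis
// QVector3D nearP = unprojectNdc(ndcX, ndcY, -1.0f, view, projection);
// QVector3D farP  = unprojectNdc(ndcX, ndcY,  1.0f, view, projection);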
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into consideration when switching the origin of the y axis for a Cartesian system, since window coordinates have the y axis starting at the top left. Additionally, my retrieval of the matrices was redundant, and also wrong, since they were never set properly in the first place.
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, which led to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (you need to multiply the matrix by the vector, not the reverse), that my near-plane z needed to be -1, not 0, and that the last component of the vector multiplied with the matrices (the value "w") needed to be 1.
Credits goes to those individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at http://www.realtimerendering.com/intersections.html, which is a lot of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions; one of them serves your purpose.

Robust atan(y,x) on GLSL for converting XY coordinate to angle

In GLSL (specifically the 3.00 version I'm using), there are two versions of
atan(): atan(y_over_x) can only return angles between -PI/2 and PI/2, while atan(y, x) can take all four quadrants into account, so the angle range covers everything from -PI to PI, much like atan2() in C++.
I would like to use the second atan to convert XY coordinates to angle.
However, atan() in GLSL, besides not being able to handle x = 0, is not very stable. Especially where x is close to zero, the division can overflow, resulting in an angle on the opposite side (you get something close to -PI/2 where you expect approximately PI/2).
What is a good, simple implementation that we can build on top of GLSL atan(y,x) to make it more robust?
I'm going to answer my own question to share my knowledge. We first notice that the instability happens when x is near zero. However, we can also translate that as abs(x) << abs(y). So first we divide the plane (assuming we are on a unit circle) into two regions: one where |x| <= |y| and another where |x| > |y|.
We know that atan(x,y) is much more stable in the first region (|x| <= |y|): when x is close to zero we simply have something close to atan(0.0), which is very stable numerically, while the usual atan(y,x) is more stable in the second region. You can also convince yourself of this relationship:
atan(x,y) = PI/2 - atan(y,x)
holds for all (x,y) except the origin, where it is undefined, and we are talking about atan(y,x), which is able to return angle values in the entire range -PI to PI, not atan(y_over_x), which only returns angles between -PI/2 and PI/2. Therefore, our robust atan2() routine for GLSL is quite simple:
const float PI = 3.1415926535897932384626433832795; // GLSL defines no built-in PI

float atan2(in float y, in float x)
{
    bool s = (abs(x) > abs(y));
    return mix(PI/2.0 - atan(x,y), atan(y,x), s);
}
As a side note, the identity for the one-argument mathematical function atan(x) is actually:
atan(x) + atan(1/x) = sgn(x) * PI/2
which is true because its range is (-PI/2, PI/2).
Depending on your targeted platform, this might be a solved problem. The OpenGL spec for atan(y, x) specifies that it should work in all quadrants, leaving behavior undefined only when x and y are both 0.
So one would expect any decent implementation to be stable near all axes, as this is the whole purpose behind 2-argument atan (or atan2).
The questioner/answerer is correct in that some implementations do take shortcuts. However, the accepted solution makes the assumption that a bad implementation will always be unstable when x is near zero: on some hardware (my Galaxy S4 for example) the value is stable when x is near zero, but unstable when y is near zero.
To test your GLSL renderer's implementation of atan(y,x), here's a WebGL test pattern. Follow the link below; as long as your OpenGL implementation is decent, you should see the expected pattern.
Test pattern using native atan(y,x): http://glslsandbox.com/e#26563.2
If all is well, you should see 8 distinct colors (ignoring the center).
The linked demo samples atan(y,x) for several values of x and y, including 0, very large, and very small values. The central box is atan(0., 0.), which is undefined mathematically, and implementations vary. I've seen 0 (red), PI/2 (green), and NaN (black) on the hardware I've tested.
Here's a test page for the accepted solution. Note: the host's WebGL version lacks mix(float,float,bool), so I added an implementation that matches the spec.
Test pattern using atan2(y,x) from accepted answer: http://glslsandbox.com/e#26666.0
Your proposed solution still fails in the case x = y = 0. Here both of the atan() functions return NaN.
Further, I would not rely on mix to switch between the two cases. I am not sure how it is implemented/compiled, but the IEEE float rules for x*NaN and x+NaN result again in NaN. So if your compiler really used mix/interpolation, the result would be NaN for x = 0 or y = 0.
Here is another fix which solved the problem for me:
float atan2(in float y, in float x)
{
    return x == 0.0 ? sign(y) * PI/2 : atan(y, x);
}
When x = 0, the angle can be ±π/2. Which of the two depends on y alone. If y = 0 too, the angle can be arbitrary (the vector has length 0), and sign(y) returns 0 in that case, which is just fine.
Sometimes the best way to improve the performance of a piece of code is to avoid calling it in the first place. For example, one of the reasons you might want to determine the angle of a vector is so that you can use this angle to construct a rotation matrix using combinations of the angle's sine and cosine. However, the sine and cosine of a vector's angle (relative to the origin) are already hidden in plain sight inside the vector itself. All you need to do is create a normalized version of the vector by dividing each vector coordinate by the total length of the vector. Here's the two-dimensional example to calculate the sine and cosine of the angle of vector [ x y ]:
double length = sqrt(x*x + y*y);  // assumes (x, y) is not the zero vector
double cosAngle = x / length;
double sinAngle = y / length;
Once you have the sine and cosine values, you can directly populate a rotation matrix with them to perform a clockwise or counterclockwise rotation of arbitrary vectors by the same angle, or you can concatenate a second rotation matrix to rotate to an angle other than zero. In this case, you can think of the rotation matrix as "normalizing" the angle to zero for an arbitrary vector. This approach extends to the three-dimensional (or N-dimensional) case as well, although, for example, you will have three angles and six sine/cosine values to calculate (one angle per plane) for a 3D rotation.
In situations where you can use this approach, you get a big win by bypassing the atan calculation completely, which is possible since the only reason you wanted to determine the angle was to calculate the sine and cosine values. By skipping the conversion to angle space and back, you not only avoid worrying about division by zero, but you also improve precision for angles which are near the poles and would otherwise suffer from being multiplied/divided by large numbers. I've successfully used this approach in a GLSL program which rotates a scene to zero degrees to simplify a computation.
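A minimal C++ sketch of the idea (names are illustrative): rotate an arbitrary point by the angle of a vector (x, y) without ever calling atan:

#include <cmath>

// Rotate point (px, py) counterclockwise by the angle of vector (x, y),
// without computing that angle. Assumes (x, y) is not the zero vector.
void rotateByVectorAngle(double x, double y, double &px, double &py)
{
    double length = std::sqrt(x * x + y * y);
    double c = x / length;           // cosine of the vector's angle
    double s = y / length;           // sine of the vector's angle
    double rx = c * px - s * py;     // standard 2D rotation matrix
    double ry = s * px + c * py;
    px = rx;
    py = ry;
}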
It can be easy to get so caught up in an immediate problem that you can lose sight of why you need this information in the first place. Not that this works in every case, but sometimes it helps to think out of the box...
A formula that gives an angle in all four quadrants for any values of the coordinates x and y; for x = y = 0 the result is undefined:
f(x,y) = PI - PI/2*(1 + sign(x))*(1 - sign(y^2)) - PI/4*(2 + sign(x))*sign(y) - sign(x*y)*atan((abs(x) - abs(y))/(abs(x) + abs(y)))
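For readers who want to verify it, here is a direct C++ transcription of that formula; it appears to return angles in [0, 2*PI), and is undefined at the origin:

#include <cmath>

double sgn(double v) { return (v > 0) - (v < 0); }

// Direct transcription of the four-quadrant formula above.
// Undefined for x == y == 0 (the division becomes 0/0).
double fourQuadrantAngle(double x, double y)
{
    const double PI = 3.141592653589793;
    return PI - PI/2 * (1 + sgn(x)) * (1 - sgn(y*y))
              - PI/4 * (2 + sgn(x)) * sgn(y)
              - sgn(x*y) * std::atan((std::fabs(x) - std::fabs(y))
                                     / (std::fabs(x) + std::fabs(y)));
}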

2d rotation on set of points causes skewing distortion

I'm writing an application in OpenGL (though I don't think this problem is related to that). I have some 2d point set data that I need to rotate. It later gets projected into 3d.
I apply my rotation using this formula:
x' = x cos f - y sin f
y' = y cos f + x sin f
Where 'f' is the angle. When I rotate the point set, the result is skewed. The severity of the effect varies with the angle.
It's hard to describe, so I have pictures.
The red things are some simple geometry. The 2D point sets are the vertices of the white polylines you see around them. The first picture shows the undistorted point sets, and the second picture shows them after rotation. It's not just skew that occurs with the rotation; sometimes displacement seems to occur as well.
The code itself is trivial:
double cosTheta = cos(2.4);
double sinTheta = sin(2.4);
CalcSimplePolyCentroid(listHullVx, xlate);
for (size_t j = 0; j < listHullVx.size(); j++) {
    // translate
    listHullVx[j] = listHullVx[j] - xlate;
    // rotate
    double xPrev = listHullVx[j].x;
    double yPrev = listHullVx[j].y;
    listHullVx[j].x = (xPrev * cosTheta) - (yPrev * sinTheta);
    listHullVx[j].y = (yPrev * cosTheta) + (xPrev * sinTheta);
    // translate
    listHullVx[j] = listHullVx[j] + xlate;
}
If I comment out the code under '//rotate' above, the output of the application is the first image. And adding it back in gives the second image. There's literally nothing else that's going on (afaik).
The data types being used are all doubles, so I don't think it's a precision issue. Does anyone have any idea why rotation would cause skewing like the above pictures show?
EDIT
filipe's comment below was correct. This probably has nothing to do with the rotation, and I hadn't provided enough information for the problem.
The geometry I've shown in the pictures represents buildings. They're generated from lon/lat map coordinates. In the point data I used for the transform, I forgot to apply an actual projection to Cartesian coordinate space and just mapped x->lon, y->lat, and I think this is the reason I'm seeing the distortion. I'm going to request that this question be deleted, since I don't think it'll be useful to anyone else.
Update:
As a result of your comments, it turned out that the bug is unlikely to be in the presented code.
One final hint: the standard transform formulas are only valid if the coordinate system is Cartesian;
on iOS you sometimes have an inverted y axis.

Quaternion - Rotate To

I have some object in world space, let's say at (0,0,0) and want to rotate it to face (10,10,10).
How do I do this using quaternions?
This question doesn't quite make sense. You said that you want an object to "face" a specific point, but that doesn't give enough information.
First, what does it mean to face that direction? In OpenGL, it means that the -z axis in the local reference frame is aligned with the specified direction in some external reference frame. In order to make this alignment happen, we need to know what direction the relevant axis of the object is currently "facing".
However, that still doesn't define a unique transformation. Even if you know what direction to make the -z axis point, the object is still free to spin around that axis. This is why the function gluLookAt() requires that you provide an 'at' direction and an 'up' direction.
The next thing that we need to know is what format does the end-result need to be in? The orientation of an object is often stored in quaternion format. However, if you want to graphically rotate the object, then you might need a rotation matrix.
So let's make a few assumptions. I'll assume that your object is centered at the world's point c and has the default alignment. I.e., the object's x, y, and z axes are aligned with the world's x, y, and z axes. This means that the orientation of the object, relative to the world, can be represented as the identity matrix, or the identity quaternion: [1 0 0 0] (using the quaternion convention where w comes first).
If you want the shortest rotation that will align the object's -z axis with point p := [p.x p.y p.z], then you will rotate by φ around axis a. Now we'll find those values. First we find axis a by normalizing the vector p-c, taking the cross product with the unit-length -z vector, and normalizing again:
a = normalize( crossProduct(-z, normalize(p-c) ) );
The shortest angle between those two unit vectors is found by taking the inverse cosine of their dot product:
φ = acos( dotProduct(-z, normalize(p-c)) );
Unfortunately, this is a measure of the absolute value of the angle formed by the two vectors. We need to figure out whether it's positive or negative when rotating around a. There must be a more elegant way, but the first way that comes to mind is to find a third axis, perpendicular to both a and -z, and then take the sign from its dot product with our target axis. Viz:
b = crossProduct(a, -z);
if (dotProduct(b, normalize(p-c)) < 0) φ = -φ;
Once we have our axis and angle, turning it into a quaternion is easy:
q = [cos(φ/2), sin(φ/2)·a];
This new quaternion represents the new orientation of the object. It can be converted into a matrix for rendering purposes, or you can use it to directly rotate the object's vertices, if desired, using the rules of quaternion multiplication.
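A hedged C++ sketch of those steps; the Vec3/Quat structs and function names are illustrative, not from any particular library. Note that with the axis taken from the cross product, the acos angle can be used directly:

#include <cmath>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };   // w-first convention, as above

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(Vec3 v) {
    double l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Quaternion rotating the object's -z axis to face point p from center c.
// The caller must handle the degenerate case where p - c is parallel to z.
Quat faceToward(Vec3 c, Vec3 p) {
    const Vec3 minusZ = { 0.0, 0.0, -1.0 };
    Vec3 dir = normalize(sub(p, c));
    Vec3 a = normalize(cross(minusZ, dir));   // rotation axis
    double phi = std::acos(dot(minusZ, dir)); // rotation angle
    double s = std::sin(phi / 2.0);
    return { std::cos(phi / 2.0), s * a.x, s * a.y, s * a.z };
}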
An example of calculating the Quaternion that represents the rotation between two vectors can be found in the OGRE source code for the Ogre::Vector3 class.
In response to your clarification, and just to answer this, I've shamelessly copied from here a very interesting and neat algorithm for finding the quaternion between two vectors, one that I had never seen before. Mathematically it seems valid, and since your question is about the mathematics behind it, I'm sure you'll be able to convert this pseudocode into C++.
quaternion q;
vector3 c = cross(v1, v2);
q.v = c;
if ( vectors are known to be unit length ) {
    q.w = 1 + dot(v1, v2);
} else {
    q.w = sqrt(v1.length_squared() * v2.length_squared()) + dot(v1, v2);
}
q.normalize();
return q;
Let me know if you need help clarifying any bits of that pseudocode. Should be straightforward though.
dot(a,b) = a1*b1 + a2*b2 + ... + an*bn
and
cross(a,b) = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1), the standard cross product.
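A possible C++ rendering of that pseudocode (the struct and function names are assumptions, not a specific engine's API):

#include <cmath>

struct Vector3 {
    double x, y, z;
    double lengthSquared() const { return x*x + y*y + z*z; }
};
struct Quaternion { double w, x, y, z; };

Vector3 cross(Vector3 a, Vector3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
double dot(Vector3 a, Vector3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Shortest-arc quaternion from v1 to v2, per the pseudocode above.
// Degenerate when v1 and v2 point in exactly opposite directions.
Quaternion fromTwoVectors(Vector3 v1, Vector3 v2, bool unitLength = false)
{
    Vector3 c = cross(v1, v2);
    double w = unitLength
        ? 1.0 + dot(v1, v2)
        : std::sqrt(v1.lengthSquared() * v2.lengthSquared()) + dot(v1, v2);
    double n = std::sqrt(w*w + c.x*c.x + c.y*c.y + c.z*c.z);   // normalize
    return { w / n, c.x / n, c.y / n, c.z / n };
}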
You may want to use SLERP (Spherical Linear Interpolation). See this article for a reference on how to do it in C++.
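If you go that route, here is a minimal, untested sketch of SLERP between two unit quaternions (the Quat struct is illustrative):

#include <cmath>

struct Quat { double w, x, y, z; };

// Spherical linear interpolation from a to b, t in [0, 1].
Quat slerp(Quat a, Quat b, double t)
{
    double d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (d < 0.0) {                        // take the shorter arc
        d = -d;
        b = { -b.w, -b.x, -b.y, -b.z };
    }
    double ka, kb;
    if (d > 0.9995) {                     // nearly parallel: plain lerp is safe
        ka = 1.0 - t;
        kb = t;
    } else {
        double theta = std::acos(d);
        ka = std::sin((1.0 - t) * theta) / std::sin(theta);
        kb = std::sin(t * theta) / std::sin(theta);
    }
    Quat q = { ka*a.w + kb*b.w, ka*a.x + kb*b.x,
               ka*a.y + kb*b.y, ka*a.z + kb*b.z };
    double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return { q.w/n, q.x/n, q.y/n, q.z/n };
}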