How to get (X,Y) vectors from position and landmark - c++

I currently have an agent in a map, whose position is known as myPos=(myX,myY), but whose orientation myOri=(oriX,oriY) is unknown. I also see a landmark at position lm=(lmX,lmY) and I have both Cartesian and polar coordinates from my point of view to the landmark, as relLM=(relX,relY) and polLM=(r,theta), respectively.
My goal is to find how my orientation vector relates to the X and Y axes, as XX=(xX, xY) and YY=(yX, yY). Assume for the following examples that X grows to the right and Y grows upwards, and that an agent with 0 rotation faces along the X axis (so an agent looking right has XX=(1,0) and YY=(0,1)). This follows the usual convention where a rotation of 0 points along the X axis, PI/2 along Y, PI along -X, 3PI/2 along -Y, and 2PI along X again.
Example: if myOri=(1,1) (the agent is facing top right), then XX=(1, -1) (since the X axis is top-right to him) and YY=(1, 1) (the Y axis is top-left). In the picture, X and Y are shown in red and green; my agent is in blue and the landmark in pink. Hence, our initial data are myPos=(0,-2), lm=(0,-1), relLM=(~0.7,~0.7).
Knowing myPos and lm, as well as relLM, this should be possible. However, I'm having trouble finding the appropriate vectors. What is the correct algorithm?
bool someFunction(Landmark *lm, Vector2f myPos, Vector2f *xx, Vector2f *yy)
{
    // Vector from agent to landmark
    Vector2f agentToLandmark(lm->position.x - myPos.x,
                             lm->position.y - myPos.y);
    // Vector from agent to landmark, regarding agent's orientation
    Vector2f agentFacingLandmark = lm->relPosition.toCartesian();
    // Set the XX and YY values
    // how?
}
My problem is actually in 3D, but using 2D makes the problem easier to explain.

Finding myOri
relLM is the landmark expressed in the agent's frame, so its polar angle theta = atan2(relY, relX) is the bearing of the landmark relative to the agent's heading. In the world frame, the bearing of the landmark is phi = atan2(lmY - myY, lmX - myX). The heading angle is therefore alpha = phi - theta, and myOri = (cos(alpha), sin(alpha)). Since myOri only needs to indicate a direction, any positive multiple of this vector works just as well.
Finding xx and yy
I think your definition of xx is simply a vector representing the x-axis from the POV of the agent, and likewise yy for the y-axis. This is easily achieved. With myOri = (cos alpha, sin alpha), the world x-axis expressed in the agent's frame is (cos alpha, -sin alpha), i.e. myOri mirrored at the x-axis: xx = (myOri.x, -myOri.y). The world y-axis expressed in the agent's frame is (sin alpha, cos alpha), i.e. myOri with its components swapped: yy = (myOri.y, myOri.x). Checking against your example: myOri = (1,1) gives xx = (1,-1) and yy = (1,1), as expected.
Note that this is only a guess at what you mean. I might have misunderstood something; just let me know if that's the case.
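Putting the two steps together, here is a minimal C++ sketch of the question's someFunction (assuming relLM is the landmark's Cartesian position in the agent's frame; the signature is simplified to take both positions directly, and the tiny Vector2f struct stands in for whatever vector type is in use):

#include <cmath>

struct Vector2f { float x, y; };

// Recover the agent's heading from the landmark, then express the world
// X and Y axes in the agent's frame.
bool someFunction(Vector2f lmPos, Vector2f relLM, Vector2f myPos,
                  Vector2f *xx, Vector2f *yy)
{
    // Bearing of the landmark in world coordinates...
    float phi = std::atan2(lmPos.y - myPos.y, lmPos.x - myPos.x);
    // ...and in the agent's coordinates (the polar angle theta of relLM).
    float theta = std::atan2(relLM.y, relLM.x);

    // The agent's heading is the difference of the two bearings.
    float alpha = phi - theta;
    Vector2f myOri = { std::cos(alpha), std::sin(alpha) };

    // World x-axis in the agent's frame: mirror myOri at the x-axis.
    *xx = { myOri.x, -myOri.y };
    // World y-axis in the agent's frame: swap myOri's components.
    *yy = { myOri.y, myOri.x };
    return true;
}

With the question's example (myPos=(0,-2), lm=(0,-1), relLM=(0.7,0.7)), this yields myOri proportional to (1,1), xx proportional to (1,-1) and yy proportional to (1,1).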

Related

Find a 2D point in space based on angle and distance

Ok... so I made a quick diagram to sort of explain what I'm hoping to accomplish. Sadly, math is not my forte and I'm hoping one of you wizards can give me the correct formulas :) This is for a C++ program, but really I'm looking for the formulas rather than C++ code.
Ok, now basically, the red circle is our (0,0) point, where I'm standing. The blue circle is 300 units above us, at what I would assume is a 0-degree angle. I want to know how I can find the x,y of a point in this chart using an angle of my choice as well as a distance of my choice.
I would want to know how to find the x,y of the green circle, which is, let's say, 225 degrees and 500 units away.
So I assume I have to figure out a way to trace a circle whose points are all 500 units away from (0,0), then pick a place on that circle based on the angle I want? But yeah, no idea where to go from there.
A point on a plane can be expressed in two main mathematical representations: Cartesian (thus x,y) and polar, which uses a distance from the center and an angle. Typically r and a Greek letter, but let's use w.
Definitions
Under common conventions, r is the distance from the center (0,0) to your point, and
angles are measured going counterclockwise (for positive values, clockwise for negative ones), with 0 being the horizontal on the right-hand side.
Remarks
Note a few things about angles in polar representations:
angles can be expressed in radians as well, with π being the same angle as 180°, thus π/2 is 90° and so on. π≈3.14 is defined by 2π = the circumference of a circle of radius 1.
angles can be represented modulo a full circle. A full circle is either 2π or 360°, thus +90° is the same as -270°, and +180° and -180° are the same, as well as 3π/4 and -5π/4, 2π and 0, 360° and 0°, etc. You can consider angles in [-π,π] (that is, [-180,180]) or [0,2π] (i.e. [0,360]), or not restrict them at all; it doesn't matter.
when your point is in the center (r=0) then the angle w is not really defined.
r is by definition non-negative. If you are handed a negative r, you can change its sign and add half a turn (π or 180°) to w to get coordinates for the same point.
Points on your graph
red: x=0, y=0, or r=0 and w = any value
blue: x=0, y=300, or r=300 and w=90°
green: x=-400, y=-400, or r=565 and w=225° (approximate values, I didn't do the actual measurements)
Note that for the blue point you can have w=-270°, and for the green w=-135°, etc.
Going from one representation to the other
Finally, you need trigonometry formulas to go back and forth between representations. The easier transformation is from polar to Cartesian:
x=r*cos(w)
y=r*sin(w)
Since cos² + sin² = 1 (Pythagoras), you can see that x² + y² = r²cos²(w) + r²sin²(w) = r². Thus, to get r, use:
r=sqrt(x²+y²)
And finally, to get the angle, we use sin/cos = tan, where tan is another trigonometric function. From y/x = r sin(w) / (r cos(w)) = tan(w), you get:
w = arctan(y/x) [mod π]
tan is a function modulo π, instead of 2π. arctan simply means the inverse of the function tan, and is sometimes written tan^-1 or atan.
By inverting the tangent, you get a result between -π/2 and π/2 (or -90° and 90°): you sometimes need to add π to the result. This is needed for angles in [π/2,π] and [-π,-π/2] ([90,180] and [-180,-90]). These values are characterized by the sign of the cos: since x = r cos(w), you know x is negative for all these angles. Try looking at where these angles are on your graph, it's really straightforward. Thus:
w = arctan(y/x) + (π if x < 0)
Finally, you cannot divide by x if it is 0. In that corner case, you have:
if y > 0, w = π/2
if y < 0, w = -π/2
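In C++, std::atan2 performs exactly this quadrant correction (including the x = 0 corner case) for you. A minimal conversion sketch, assuming radians throughout:

#include <cmath>

// Polar (r, w) -> Cartesian (x, y), with w in radians.
void polarToCartesian(double r, double w, double &x, double &y)
{
    x = r * std::cos(w);
    y = r * std::sin(w);
}

// Cartesian (x, y) -> polar (r, w). std::atan2(y, x) applies the
// sign-based correction described above and returns w in (-pi, pi].
void cartesianToPolar(double x, double y, double &r, double &w)
{
    r = std::sqrt(x * x + y * y);
    w = std::atan2(y, x);
}

For the green point, cartesianToPolar(-400, -400, r, w) gives r ≈ 565.7 and w ≈ -2.356 rad, i.e. -135°, the same direction as 225°.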
It seems that, given polar coordinates, you want to obtain Cartesian coordinates. It's some simple mathematics and should be easy to do.
To convert polar (r, O) coordinates to Cartesian (x, y) coordinates:
x = r * cos(O)
y = r * sin(O)
where O is theta, not zero
reference: http://www.mathsisfun.com/polar-cartesian-coordinates.html

An inconsistency in my understanding of the GLM lookAt function

Firstly, if you would like an explanation of the GLM lookAt algorithm, please look at the answer provided on this question: https://stackoverflow.com/a/19740748/1525061
mat4x4 lookAt(vec3 const & eye, vec3 const & center, vec3 const & up)
{
    vec3 f = normalize(center - eye);
    vec3 u = normalize(up);
    vec3 s = normalize(cross(f, u));
    u = cross(s, f);

    mat4x4 Result(1);
    Result[0][0] = s.x;
    Result[1][0] = s.y;
    Result[2][0] = s.z;
    Result[0][1] = u.x;
    Result[1][1] = u.y;
    Result[2][1] = u.z;
    Result[0][2] =-f.x;
    Result[1][2] =-f.y;
    Result[2][2] =-f.z;
    Result[3][0] =-dot(s, eye);
    Result[3][1] =-dot(u, eye);
    Result[3][2] = dot(f, eye);
    return Result;
}
Now I'm going to tell you why I seem to be having a conceptual issue with this algorithm. There are two parts to this view matrix: the translation and the rotation. The translation does the correct inverse transformation, bringing the camera position to the origin instead of the origin to the camera. Similarly, you would expect the rotation that the camera defines to be inverted before being put into this view matrix as well. I can't see that happening here; that's my issue.
Consider the forward vector: this is where your camera looks. Consequently, this forward vector needs to be mapped to the -Z axis, which is the forward direction used by OpenGL. The way this view matrix is supposed to work is by creating an orthonormal basis in the columns of the view matrix, so when you multiply a vertex on the right-hand side of this matrix, you are essentially just converting its coordinates to those of different axes.
When I play through the rotation that results from this transformation in my mind, I see a rotation that is not the inverse rotation of the camera, as is supposed to happen, but rather the non-inverse. That is, instead of finding the camera forward being mapped to the -Z axis, I find the -Z axis being mapped to the camera forward.
If you don't understand what I mean, consider a 2D example of the same type of thing happening here. Let's say the forward vector is (sqrt(2)/2, sqrt(2)/2), i.e. the sin/cos of 45 degrees, and let's also say a side vector for this 2D camera is the sin/cos of -45 degrees. We want to map this forward vector to (0,1), the positive Y axis. The positive Y axis can be thought of as the analogue of the -Z axis in OpenGL space. Let's consider a vertex in the same direction as our forward vector, namely (1,1). By the logic of GLM's lookAt, we should be able to map (1,1) to the Y axis by using a 2x2 matrix that consists of the forward vector in the first column and the side vector in the second column. This is equivalent to the calculation http://www.wolframalpha.com/input/?i=%28sqr%282%29%2F2+%2C+sqr%282%29%2F2%29++1+%2B+%28sqr%282%29%2F2%2C+-sqr%282%29%2F2+%29+1.
Note that you don't get your (1,1) vertex mapped to the positive Y axis like you wanted; instead, it is mapped to the positive X axis. You might also consider what happens to a vertex on the positive Y axis under this transformation. Sure enough, it is transformed to the forward vector.
Therefore it seems like something very fishy is going on with the GLM algorithm. However, I doubt this algorithm is incorrect since it is so popular. What am I missing?
Have a look at GLU source code in Mesa: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
First, in the implementation of gluPerspective, notice that the -1 uses the indices [2][3] and the -2 * zNear * zFar / (zFar - zNear) uses [3][2]. This implies that the indexing is [column][row].
Now in the implementation of gluLookAt, the first row is set to side, the next one to up and the final one to -forward. This gives you the rotation matrix which is post-multiplied by the translation that brings the eye to the origin.
GLM seems to be using the same [column][row] indexing (from the code). And the piece you just posted for lookAt is consistent with the more standard gluLookAt (including the translational part). So at least GLM and GLU agree.
Let's then derive the full construction step by step, writing C for the center position and E for the eye position.
1. Move the whole scene to put the eye position at the origin, i.e. apply a translation of -E.
2. Rotate the scene to align the axes of the camera with the standard (x, y, z) axes.
2.1 Compute a positive orthonormal basis for the camera:
f = normalize(C - E) (pointing towards the center)
s = normalize(f x u) (pointing to the right side of the eye)
u = s x f (pointing up)
with this, (s, u, -f) is a positive orthonormal basis for the camera.
2.2 Find the rotation matrix R that maps the (s, u, -f) axes to the standard ones (x, y, z). The inverse rotation matrix R^-1 does the opposite, aligning the standard axes with the camera ones, which by definition means that:
         ( sx  ux  -fx )
R^-1  =  ( sy  uy  -fy )
         ( sz  uz  -fz )
Since R^-1 = R^T, we have:
       (  sx   sy   sz )
R   =  (  ux   uy   uz )
       ( -fx  -fy  -fz )
3. Combine the translation with the rotation. A point M is mapped by the "look at" transform to R (M - E) = R M - R E = R M + t, with t = -R E. So the final 4x4 transform matrix for "look at" is indeed:
       (  sx   sy   sz   tx )     (  sx   sy   sz   -s.E )
L   =  (  ux   uy   uz   ty )  =  (  ux   uy   uz   -u.E )
       ( -fx  -fy  -fz   tz )     ( -fx  -fy  -fz    f.E )
       (   0    0    0    1 )     (   0    0    0      1 )
So when you write:
That is, instead of finding the camera forward being mapped to the -Z
axis, I find the -Z axis being mapped to the camera forward.
it is very surprising, because by construction, the "look at" transform maps the camera forward axis to the -z axis. This "look at" transform should be thought of as moving the whole scene to align the camera with the standard origin/axes; that's really what it does.
Using your 2D example:
By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y
axis by using a 2x2 matrix that consists of the forward vector in the
first column and the side vector in the second column.
It's the opposite: following the construction I described, you need a 2x2 matrix with the forward and side vectors as rows, not columns, to map (1, 1) and the other vector to the y and x axes. To use the definition of the matrix coefficients, you need the images of the standard basis vectors under your transform; these directly give the columns of the matrix. But since what you are looking for is the opposite (mapping your vectors to the standard basis vectors), you have to invert the transformation (transpose it, since it's a rotation), and your reference vectors then become rows instead of columns.
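To make this concrete, here is a small sketch of the 2D example with the question's numbers: placing side and forward as rows (not columns) maps the vertex (1,1) onto the +Y axis, as desired.

#include <cmath>
#include <cstdio>

int main()
{
    const double h = std::sqrt(2.0) / 2.0;
    const double f[2] = { h,  h };    // camera forward, 45 degrees
    const double s[2] = { h, -h };    // camera side, -45 degrees
    const double v[2] = { 1.0, 1.0 }; // vertex along the forward direction

    // Rows of the rotation matrix: s maps to x, f maps to y.
    double x = s[0] * v[0] + s[1] * v[1];
    double y = f[0] * v[0] + f[1] * v[1];
    std::printf("(%g, %g)\n", x, y);  // prints (0, 1.41421): the +Y axis
}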
These might give some further insight into your fishy issue:
glm::lookAt vertical camera flips when z <= 0
The answer might be of interest to you?

Rotate a 3D point around another one

I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny), the coordinates are no longer accurate. I commented the line I think the fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;
    float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
    float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
    float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; // <- the fault must be here
    x_n = cos(w_bx)*sin(w_by)*length_+x_m;
    z_n = sin(w_bx)*sin(w_by)*length_+z_m;
    y_n = cos(w_by)*length_+y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see the link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct: what you compute are two inclination angles (one measured against the x axis, the other against the y axis)
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore, in this case, using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(wn_x) * b
b = RotationMatrixAroundY(wn_y) * b
n = m + b
See Wikipedia's basic rotation matrices.
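Here is a minimal sketch of that matrix-based fix, reusing the question's parameter names; unlike the original, it returns the result through reference parameters rather than globals, and the two basic rotations are written out inline:

#include <cmath>

// Rotate p around m: first around the X axis by w_nx, then around the
// Y axis by w_ny (b = p - m, rotate, add m back).
void rotate(float x_m, float y_m, float z_m,
            float x_p, float y_p, float z_p,
            float w_nx, float w_ny,
            float &x_n, float &y_n, float &z_n)
{
    // Difference vector b = p - m.
    float bx = x_p - x_m;
    float by = y_p - y_m;
    float bz = z_p - z_m;

    // Rotate b around the X axis by w_nx.
    float by1 = by * std::cos(w_nx) - bz * std::sin(w_nx);
    float bz1 = by * std::sin(w_nx) + bz * std::cos(w_nx);

    // Rotate the result around the Y axis by w_ny.
    float bx2 =  bx * std::cos(w_ny) + bz1 * std::sin(w_ny);
    float bz2 = -bx * std::sin(w_ny) + bz1 * std::cos(w_ny);

    // Add m back.
    x_n = bx2 + x_m;
    y_n = by1 + y_m;
    z_n = bz2 + z_m;
}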
Try to use vector math. Decide in which order you rotate, first along x, then along y perhaps.
If you rotate around the z-axis [z' = z]:
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for the y-axis [y'' = y']:
x'' = x'*cos b - z'*sin b;
z'' = x'*sin b + z'*cos b;
And again for the x-axis [x''' = x'']:
y''' = y''*cos c - z''*sin c;
z''' = y''*sin c + z''*cos c;
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny can't be measured relative to the fixed xyz coordinate system, but relative to the coordinate system that results from applying the rotation by w_nx.
As kakTuZ observed, your code converts the point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, it's ok with me.
The result of not rotating the second reference axis by the first angle is that two points 1 km apart at the equator move closer to each other as they approach the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia ...
But when you work with OpenGL, you really should use OpenGL functions like glRotatef. These functions will be calculated on the GPU, not on the CPU as your function is. The doc is here.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix that is on top of the stack at the point it is rendered. Obtain that matrix with glGetFloatv, and then multiply it using either your own vector-matrix multiplication function or one of the many you can find online.
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
This is a good starting point.

c++ opengl converting model coordinates to world coordinates for collision detection

(This is all in ortho mode, origin is in the top left corner, x is positive to the right, y is positive down the y axis)
I have a rectangle in world space, which can have a rotation m_rotation (in degrees).
I can work with the rectangle fine, it rotates, scales, everything you could want it to do.
The part that I am getting really confused on is calculating the rectangle's world coordinates from its local coordinates.
I've been trying to use the formula:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points,
(x', y') are the rotated coordinates,
and t is the angle measured in radians
from the x-axis. The rotation is
counter-clockwise as written.
-credits duffymo
I tried implementing the formula like this:
//GLfloat Ax = getLocalVertices()[BOTTOM_LEFT].x * cosf(DEG_TO_RAD( m_orientation )) - getLocalVertices()[BOTTOM_LEFT].y * sinf(DEG_TO_RAD( m_orientation ));
//GLfloat Ay = getLocalVertices()[BOTTOM_LEFT].x * sinf(DEG_TO_RAD( m_orientation )) + getLocalVertices()[BOTTOM_LEFT].y * cosf(DEG_TO_RAD( m_orientation ));
//Vector3D BL = Vector3D(Ax,Ay,0);
I create a vector to the translated point and store it in the rectangle's world_vertice member variable. That's fine. However, in my main draw loop, I draw a line from (0,0,0) to the vector BL, and it seems as if the line is going in a circle from the point on the rectangle (the rectangle's bottom left corner) around the origin of the world coordinates.
Basically, as m_orientation gets bigger, it draws a huge circle around the (0,0,0) world-coordinate origin. Edit: when m_orientation = 360, it gets set back to 0.
I feel like I am doing this part wrong:
and t is the angle measured in radians
from the x-axis.
Possibly I am not supposed to use m_orientation (the rectangle's rotation angle) in this formula?
Thanks!
Edit: the reason I am doing this is collision detection. I need to know where the coordinates of the rectangles (soon to be rigid bodies) lie in world-coordinate space for collision detection.
What you are doing is a rotation (a special linear transformation) of a vector by an angle Q in 2D. It keeps the vector's length and changes its direction around the origin.
(A linear transformation is additive, L(m + n) = L(m) + L(n) for vectors m and n, and homogeneous, L(k*m) = k*L(m) for a vector m and a scalar k.) So:
You split your vector into two pieces, so that m[1, 0] + n[0, 1] = your vector. Then the rotation is applied to each piece, after which your vector takes the form:
m[cosQ, sinQ] + n[-sinQ, cosQ] = [m*cosQ - n*sinQ, m*sinQ + n*cosQ]
You can also look at the Wikipedia article on rotation.
If you are trying to obtain the eye coordinates corresponding to your object coordinates, you should multiply your object coordinates by the model-view matrix in OpenGL.
For M the model-view matrix and [x y z w]^T your object coordinates:
M [x y z w]^T = eye coordinates of [x y z w]^T
This seems to be overcomplicating things somewhat: typically you would store an object's world position and orientation separately from its set of local coordinates. Rotating the object is done in model space, and therefore the position is unchanged; the world position of each coordinate is the same whether you do a rotation or not. Add the world position to the rotated local position to translate the local coordinates to world space.
Any rotation occurs around a specific origin, and the typical sin/cos formula presumes (0,0) is that origin. If the coordinate system in use doesn't have (0,0) as the origin, you must translate to one that does, perform the rotation, then transform back. Usually model space is defined so that (0,0) is the origin of the model, making this step trivial.
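As a sketch of that recipe (the names localToWorld, local and worldPos are illustrative): rotate the local-space corner around the model origin, then translate by the rectangle's world position.

#include <cmath>

struct Vec2 { float x, y; };

// Rotate a local-space vertex around the model origin by the given
// angle in degrees, then translate it by the object's world position.
Vec2 localToWorld(Vec2 local, Vec2 worldPos, float orientationDegrees)
{
    float t = orientationDegrees * 3.14159265f / 180.0f;
    float rx = local.x * std::cos(t) - local.y * std::sin(t);
    float ry = local.x * std::sin(t) + local.y * std::cos(t);
    return { rx + worldPos.x, ry + worldPos.y };
}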

How to orbit around the Z-axis in 3D

I'm primarily a Flash AS3 dev, but I'm jumping into openFrameworks and having trouble using 3D (these examples are in AS).
In 2D you can simulate an object orbiting a point by using Math.sin() and Math.cos(), like so:
function update(event:Event):void
{
    dot.x = xCenter + Math.cos(angle*Math.PI/180) * range;
    dot.y = yCenter + Math.sin(angle*Math.PI/180) * range;
    angle += speed;
}
I am wondering how I would translate this into a 3D orbit, if I wanted to also orbit in the third dimension.
function update(event:Event):void
{
    ...
    dot.z = zCenter + Math.sin(angle*Math.PI/180) * range;
    // is this valid?
}
Any help is greatly appreciated.
If you are orbiting around the z-axis, you are leaving your z-coordinate fixed and changing your x- and y-coordinates. So your first code sample is what you are looking for.
To rotate around the x-axis (or y-axis), just replace x (or y) with z. Use cos on whichever axis you want to be at 0 degrees; the choice is arbitrary.
If what you actually want is to orbit an object around a point in 3D space, you'll need two angles to describe the orbit: its inclination angle and its azimuth angle. See here and here.
For reference, those equations are (where θ and φ are your angles)
x = x0 + r sin(θ) cos(φ)
y = y0 + r sin(θ) sin(φ)
z = z0 + r cos(θ)
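In C++ (keeping to the document's language), a sketch of those equations might look like this; Vec3 and orbitPosition are illustrative names, with theta the inclination and phi the azimuth, both in radians:

#include <cmath>

struct Vec3 { float x, y, z; };

// Position on a sphere of radius r around `center`, given inclination
// theta and azimuth phi.
Vec3 orbitPosition(Vec3 center, float r, float theta, float phi)
{
    return { center.x + r * std::sin(theta) * std::cos(phi),
             center.y + r * std::sin(theta) * std::sin(phi),
             center.z + r * std::cos(theta) };
}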
If you are orbiting around the Z axis, then you just use your first code sample and leave the Z coordinate as is.
I would pick two perpendicular unit vectors v and w that define the plane in which to orbit, then loop over the angle and take the proper combination of v and w to build your vector p = a*v + b*w.
More details are coming.
EDIT:
This might be of help
http://en.wikipedia.org/wiki/Orbit_equation
EDIT: I think it is actually
center + sin(angle) * v * radius1 + cos(angle) * w * radius2
Here v and w are your unit vectors for the circle.
In 2D they were (1,0) and (0,1).
In 3D you will need to compute them - depends on orientation of the plane.
If you set radius1 = radius2, you will get a circle; otherwise, you get an ellipse.
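A short sketch of this parametrization (Vec3 and orbitPoint are illustrative names; v and w must be perpendicular unit vectors spanning the orbit plane):

#include <cmath>

struct Vec3 { float x, y, z; };

// Point on an ellipse in the plane spanned by perpendicular unit
// vectors v and w, centered at `center`.
Vec3 orbitPoint(Vec3 center, Vec3 v, Vec3 w,
                float angle, float radius1, float radius2)
{
    float a = std::sin(angle) * radius1;
    float b = std::cos(angle) * radius2;
    return { center.x + a * v.x + b * w.x,
             center.y + a * v.y + b * w.y,
             center.z + a * v.z + b * w.z };
}

// Example: a circle of radius 5 in a plane tilted 45 degrees about x:
// v = (1, 0, 0), w = (0, 0.707f, 0.707f), radius1 = radius2 = 5.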
If you just want the orbit to happen in an angled plane and don't mind it being elliptic, you can just do something like z = 0.2*x + 0.2*y, or any combination you fancy, after you have determined the x and y coordinates.