Angles of 3D vector - getting both - c++

I have an object A with a speed. The speed is specified as a 3D vector a = (x, y, z), and the position as a 3D point A [X, Y, Z]. I need to find out whether the current speed leads this object to another object B at position B [X, Y, Z].
I've successfully implemented this in 2 dimensions, ignoring the third one:
/*A is projectile, B is static object*/
//entity is object A
// - .v[3] is the speed vector
//position[3] is array of coordinates of object B
double vector[3]; //This is the vector c = A-B
this->entityVector(-1, entity.id, vector); //Fills the correct data
double distance = vector_size(vector); //This is distance |AB|
double speed = vector_size(entity.v); //This is size of speed vector a
float dist_angle = (float)atan2(vector[2],vector[0])*(180.0/M_PI); //Get angle of vector c as seen from Y axis - using X, Z
float speed_angle = (float)atan2((double)entity.v[2],entity.v[0])*(180.0/M_PI); //Get angle of vector a seen from Y axis - using X, Z
dist_angle = deg180to360(dist_angle); //Converts value to 0-360
speed_angle = deg180to360(speed_angle); //Converts value to 0-360
int diff = abs((int)compare_degrees(dist_angle, speed_angle)); //Returns the difference of vectors direction
I need to create the very same comparison to make it work in 3D - right now, the Y positions and Y vector coordinates are ignored.
What calculation should I do to get the second angle?
Edit based on answer:
I am using spherical coordinates and comparing their angles to check if two vectors are pointing in the same direction. With one vector being A-B and the other being A's speed, I'm checking whether A is heading toward B.

I'm assuming the "second angle" you're looking for is φ. That is to say, you're using spherical coordinates:
(x,y,z) => (r,θ,φ)
r = sqrt(x^2 + y^2 + z^2)
θ = cos^-1(z/r)
φ = tan^-1(y/x)
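For example, a minimal C++ sketch of that conversion (my own illustrative helper, not from the question's code; atan2 is used for φ so the angle lands in the correct quadrant):
#include <cmath>

// Convert a 3D vector to spherical coordinates (r, theta, phi).
// theta is measured from the +z axis, phi is the angle in the x-y plane.
void toSpherical(const double v[3], double &r, double &theta, double &phi)
{
    r     = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    theta = std::acos(v[2] / r);       // undefined if r == 0
    phi   = std::atan2(v[1], v[0]);    // full-quadrant version of tan^-1(y/x)
}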
However, if all you want to do is find if A is moving with velocity a towards B, you can use a dot product for a basic answer.
1st vector: B - A (vector pointing from A to B)
2nd vector: a (velocity)
dot product: a * (B-A)
If the dot product is 0, it means that you're not getting any closer - you're moving around a sphere of constant radius ||B-A|| with B at the center. If the dot product > 0, you're moving towards the point, and if the dot product < 0, you're moving away from it.
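A minimal sketch of that test, using plain arrays as stand-ins for your position and speed data (the function name is just illustrative):
#include <cmath>

// Returns > 0 if velocity a moves A towards B, < 0 if away, ~0 if tangential.
double approachRate(const double A[3], const double B[3], const double a[3])
{
    double AB[3] = { B[0] - A[0], B[1] - A[1], B[2] - A[2] };   // vector A -> B
    return a[0]*AB[0] + a[1]*AB[1] + a[2]*AB[2];                // dot product
}

If you also need the actual angle between the two directions, divide the dot product by the product of the two vector lengths and take acos of the result.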

Related

How can I iterate a coordinate sphere using an expanding spherical sector (cone)?

Given an integer 3D coordinate system, a center point P, a vector in some direction V, and a max sphere radius R:
I want to iterate over only integer points in a fashion that starts at P and goes along direction V until reaching the max radius R.
Then, for some small angle T iterate all points within the cone (or spherical sector) around V.
Incrementally expand T until T is pi/2 radians and every point within the sphere has been iterated.
I need to do this with O(1) space complexity. So the order of the points can't be precomputed/sorted but must result naturally from some math.
Example:
// Vector3 represents coordinates x, y, z
// where (typically) x is left/right, y is up/down, z is depth
Vector3 center = Vector3(0, 0, 0); // could be anything
Vector3 direction = Vector3(0, 100, 0); // could be anything
int radius = 4;
double piHalf = acos(0.0); // half of pi
std::queue<Vector3> list;
for (double angle = 0; angle < piHalf; angle += .1)
{
    int x = // confusion begins here
    int y = // ..
    int z = // ..
    list.push(Vector3(x, y, z));
}
See picture for this example
The first coordinates that should be caught are:
A(0,0,0), C(0,1,0), D(0,2,0), E(0,3,0), B(0,4,0)
Then, expanding the angle somewhat (orange cone):
K(-1,0,3), X(1,0,3), (0,1,3), (0,-1,3)
Expanding the angle a bit more (green cone):
F(1,1,3), (-1,-1,3), (1,-1,3) (-1,1,3)
My guess for what would be next is:
L(1,0,2), (-1,0,2), (0,1,2), (0,-1,2)
M(2,0,3) would be hit somewhat after
Extra notes and observations:
A cone will hit a max of four points at its base if the vector is perpendicular to an axis and originates at an integer point. It may also hit points along the cone wall depending on the angle.
I am trying to do this in C++.
I am aware of how to check whether a point X is within any given cone or spherical sector by comparing the angle between V and PX with T, and I am currently using this knowledge for a lesser solution.
This is not a homework question, I am working on a 3D video game~
iterate all integer positions Q in your sphere
Three nested for loops through x, y, z in the range <P-R, P+R> will do. Just check that each point is inside the sphere:
u=(x,y,z)-P;
dot(u,u) <= R*R
test if point Q is exactly on V
simply by checking the angle between PQ and V with a dot product:
u = Q-P
u = u/|u|
v = V/|V|
if (dot(u,v)==1) point Q is on V
test if point Q is exactly on the surface of the "cone"
again by checking the angle between PQ and V with a dot product:
u = Q-P
u = u/|u|
v = V/|V|
if (dot(u,v)==cos(T/2)) point Q is on "cone"
where I assume T is the full "cone" angle, not the half angle.
Beware: you need to use floats/doubles for this and make the comparisons with some margin for error, like:
if (fabs(dot(u,v)-1.0 )<1e-6) point Q is on V
if (fabs(dot(u,v)-cos(T/2))<1e-6) point Q is on "cone"
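Putting the answer's three tests together (sphere membership, on-axis, on-cone-surface), here is a minimal C++ sketch; the Vec3 struct and helpers are placeholders, and this only classifies points, it does not produce the ordered iteration asked about:
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double len(const Vec3 &a)                { return std::sqrt(dot(a, a)); }

// Classify every integer point inside the sphere (P, R) against the axis V
// and the surface of the cone of full angle T around V.
void scanSphere(const Vec3 &P, const Vec3 &V, int R, double T)
{
    const double eps = 1e-6;
    Vec3 v = { V.x/len(V), V.y/len(V), V.z/len(V) };          // unit axis
    for (int x = (int)P.x - R; x <= (int)P.x + R; ++x)
    for (int y = (int)P.y - R; y <= (int)P.y + R; ++y)
    for (int z = (int)P.z - R; z <= (int)P.z + R; ++z)
    {
        Vec3 u = { x - P.x, y - P.y, z - P.z };
        if (dot(u, u) > double(R) * R) continue;              // outside the sphere
        double l = len(u);
        if (l == 0.0) continue;                               // Q == P, the center itself
        Vec3 un = { u.x/l, u.y/l, u.z/l };
        double c = dot(un, v);
        if (std::fabs(c - 1.0) < eps)                { /* Q lies exactly on the axis V */ }
        else if (std::fabs(c - std::cos(T/2)) < eps) { /* Q lies on the cone surface */ }
    }
}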

creating my own projection matrix in opengl

Here is my reasoning:
OpenGL draws everything within a 2x2x2 cube.
The x, y values inside this cube determine where the point is drawn on the screen. The z value is used for other stuff...
If you want the z value to have some effect on perspective, you need to transform the scene (usually with a matrix) so that it gives the illusion of distant objects being smaller.
The z values of the cube go from -1 to 1.
Now I want it so that objects that are at z = 1 are infinitely zoomed, and objects that are at z = 0 are normal size, and objects that are at z = -1 are 1/2 size.
When I say an object is zoomed, I mean that the (x,y) coordinates of its points are multiplied by scaler zoom factor, which is based on its z coordinate.
If a point lies outside the 2x2x2 cube, I still want the calculations to be done on it as long as it is between z = -1 and z = 1. Since the z value doesn't change, I don't care what happens to any points that are not within this range, as long as their z value is not changed.
Generalized point transformation:
If I have a point P = (x, y, z), and -1 <= z <= 1 then:
the Zoom Factor, S = 1 / (1 - z)
so the translation is as follows:
(x, y, z) ==> (x * S, y * S, z)
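A minimal sketch of that per-point transform (a hypothetical helper, essentially what the GLSL-function approach mentioned below would compute; it is not the matrix being asked for):
struct Point3 { float x, y, z; };

// Applies the described zoom: S = 1 / (1 - z), scales x and y, leaves z alone.
// Only meaningful for z in [-1, 1); at z == 1 the factor blows up ("infinitely zoomed").
Point3 zoomPoint(Point3 p)
{
    float S = 1.0f / (1.0f - p.z);
    return { p.x * S, p.y * S, p.z };
}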
Creating the matrix?
This is where I am having issues. I don't know how to create a matrix so that it will transform a generalized point to have the desired effect.
I am considering not using a matrix and applying this transformation via a function in GLSL...
If someone has insight on how to create such a matrix, I would like to know.

Distance between two cells in a 2D matrix

I have a 2D matrix represented as a vector of values, an index representing the first cell, and a pair of coordinates representing the second cell.
vector<double> matrix;
auto index = 10;
auto x1 = index % width;
auto y1 = index / width;
auto x2 = ...
auto y2 = ...
I need to find the distance between these two cells, where the distance is equal to 1 for the first "ring" of the 8 neighboring cells, 2 for the second ring, and so on.
Is there a faster way than the Euclidean distance?
What you need is something like a modified Manhattan Distance. I think there may be a specific name for your use case, but I don't know it. Anyway, this is how I'd do it.
Suppose the two points are x rows away and y columns away. Then x+y is the Manhattan Distance. But in your case, diagonal movements are also allowed. So, if you moved diagonally towards the point initially, you'd cover the smaller of x and y, with some amount remaining in the other. You can then move horizontally/vertically to cover the remaining distance. Hence, the distance by your metric would be max(x,y).
Given points (x1,y1) and (x2,y2), the answer would be max(|x1-x2|,|y1-y2|)
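A minimal sketch of that metric (commonly known as the Chebyshev distance), assuming width is the row length of the flattened matrix as in the question:
#include <cstdlib>   // std::abs
#include <algorithm> // std::max

// Ring distance between the cell at `index` and the cell at (x2, y2).
int ringDistance(int index, int width, int x2, int y2)
{
    int x1 = index % width;
    int y1 = index / width;
    return std::max(std::abs(x1 - x2), std::abs(y1 - y2));
}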

How to find correct rotation from one vector to another?

I have two objects, and each object has two vectors:
normal vector
up vector
Like on this image:
The up vector is perpendicular to the normal vector. Now I want to find the unique rotation from one object to the other. How can I do that?
I have a method to find the rotation from one vector to another, and it works. The problem is that I need to take care of both vectors: the normal vector and the up vector. If I use this method to rotate the normal vector of object one onto the normal of object two, the up vector could end up pointing the wrong way, and the up vectors need to be parallel as well.
Here is the code for finding the shortest rotation:
GE::Quat GE::Quat::fromTo(const Vector3 &v1, const Vector3 &v2)
{
    Vector3 a = Vector3::cross(v1, v2);
    Quat q;
    float dot = Vector3::dot(v1, v2);
    if ( dot >= 1 )
    {
        q = Quat(0,0,0,1);
    }
    else if ( dot < -0.999999 )
    {
        Vector3 axis = Vector3::cross(Vector3(1,0,0),v2);
        if (axis.length() == 0) // pick another if colinear
            axis = Vector3::cross(Vector3(0,1,0),v2);
        axis.normalize();
        q = Quat::axisToQuat(axis,180);
    }
    else
    {
        float s = sqrt( (1+dot)*2 );
        float invs = 1 / s;
        Vector3 c = Vector3::cross(v1, v2);
        q.x = c.x * invs;
        q.y = c.y * invs;
        q.z = c.z * invs;
        q.w = s * 0.5f;
    }
    q.normalize();
    return q;
}
What should I change/add to this code, to find the correct rotation?
Before we begin, I will assume that both the up vector and the normal vector are normalized and orthogonal to each other (their dot product is zero).
Let's say that you want to rotate your yellow plate to be aligned with the rose (red?) plate. So our reference will be the vectors of the yellow plate, and we will call this coordinate system XYZ, where Z -> normal yellow vector, Y -> up yellow vector and X -> YxZ (cross product).
In the same way, for the rose plate, the rotated coordinate system will be called X'Y'Z', where Z' -> normal rose vector, Y' -> up rose vector and X' -> Y'xZ' (cross product).
OK, to find the rotation matrix, we only need to make sure that our normal yellow vector becomes the normal rose vector, that our up yellow vector is transformed into the up rose vector, and so on, i.e.:
RyellowTOrose = |X'x Y'x Z'x|
                |X'y Y'y Z'y|
                |X'z Y'z Z'z|
In other words, once your primitives are expressed in the coordinates of the yellow system, applying this transformation rotates them to be aligned with the rose coordinate system.
If your up and normal vectors aren't orthogonal, you can easily correct one of them. Just take the cross product of normal and up (call the result C, for convenience), then take the cross product of C and normal again to get a corrected up vector.
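A minimal sketch of that correction, assuming the Vector3 type with the cross/normalize members used in the question's code:
// Re-orthogonalize `up` against `normal`: C = normal x up, corrected up = C x normal.
Vector3 correctUp(const Vector3 &normal, const Vector3 &up)
{
    Vector3 C = Vector3::cross(normal, up);
    Vector3 corrected = Vector3::cross(C, normal);
    corrected.normalize();
    return corrected;
}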
First of all, I make the claim that there is only one such transformation that will align the orientation of the two objects. So we needn't worry about finding the shortest one.
Let the object that will be rotated be called a, and call the object that stays stationary b. Let x and y be the normal and up vectors respectively for a, and similarly let u and v be these vectors for b. I will assume x, y, u, and v are unit length, that x is orthogonal to y, and that u is orthogonal to v. If any of this is not the case, code can be written to correct it (via planar projection and normalization).
Now let's construct matrices defining the "world space" orientation of a and b. (Let ^ denote the cross product.) Construct z as x ^ y, and construct w as u ^ v. Writing x, y, z and u, v, w to the columns of each matrix gives us the two matrices, call them A and B respectively. (The cross product here gives us a unit-length and mutually orthogonal vector, since the same is true of the operands.)
The change of coordinate system transformation to obtain B in terms of A is A^-1 (the inverse of matrix A; here ^ denotes an exponent rather than the cross product). In this case A^-1 can be computed as A^T, the transpose, since A is an orthogonal matrix by construction. Then the physical transformation to B is just matrix B itself. So, transforming an object by A^-1, and then by B, will give the desired result. However, these transformations can be concatenated into one transformation by multiplying B on the right into A^-1 on the left.
You end up with this matrix (writing · for the dot product and × for the cross product):
| x·u       x·v       x·(u×v)     |
| y·u       y·v       y·(u×v)     |
| (x×y)·u   (x×y)·v   (x×y)·(u×v) |
In components, x·u = x0*u0 + x1*u1 + x2*u2, u×v = (u1*v2 - u2*v1, u2*v0 - u0*v2, u0*v1 - u1*v0), and x×y = (x1*y2 - x2*y1, x2*y0 - x0*y2, x0*y1 - x1*y0).
The quaternion code rotates just one vector onto another, without taking the "up" vector into account.
In your case, simply build a rotation matrix from 3 orthogonal vectors:
normalized (unit) direction vector
normalized (unit) up vector
cross product of direction and up vectors.
Then you will have R1 and R2 matrices (3x3) representing the orientation of the object in the two cases.
To find the rotation from R1 to R2, just do:
R1_to_R2 = R2 * R1.inversed()
The matrix R1_to_R2 is the transformation matrix from one orientation to the other. NOTE: R1.inversed() here can be replaced with R1.transposed(), since the inverse of a rotation matrix is its transpose.
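A minimal sketch of this recipe with plain structs and arrays (the V3/Mat3 types and the column layout are assumptions; any layout works as long as R1 and R2 are built the same way):
#include <array>

struct V3 { double x, y, z; };
using Mat3 = std::array<std::array<double, 3>, 3>;   // row-major 3x3

V3 cross(const V3 &a, const V3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Orientation matrix with the frame vectors in its columns:
// column 0 = dir x up, column 1 = up, column 2 = dir.
// dir and up are assumed to be unit length and orthogonal.
Mat3 basis(const V3 &dir, const V3 &up) {
    V3 s = cross(dir, up);
    Mat3 m{};
    m[0][0] = s.x; m[0][1] = up.x; m[0][2] = dir.x;
    m[1][0] = s.y; m[1][1] = up.y; m[1][2] = dir.y;
    m[2][0] = s.z; m[2][1] = up.z; m[2][2] = dir.z;
    return m;
}

// R1_to_R2 = R2 * R1^T  (the transpose is the inverse for a rotation matrix).
Mat3 mulTransposed(const Mat3 &R2, const Mat3 &R1) {
    Mat3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                out[i][j] += R2[i][k] * R1[j][k];   // R1^T[k][j] == R1[j][k]
    return out;
}

// Usage: Mat3 R1_to_R2 = mulTransposed(basis(dir2, up2), basis(dir1, up1));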

What is wrong with my Z-buffer calculations?

I am implementing a Z-buffer to determine which pixels should be drawn in a simple scene filled with triangles. I have structural representations of a triangle, a vertex, a vector (the mathematical (x, y, z) kind, of course), as well as a function that draws an individual pixel to the screen. Here are the structures I have:
struct vertex{
    float x, y, z;
    ... //other members for lighting, etc. that will be used later and are not relevant here
};
struct myVector{
    float x, y, z;
};
struct triangle{
    ... //other stuff
    vertex v[3];
};
Unfortunately, as I scan convert my triangles to the screen, which relies on calculating depths to determine what is visible and gets to be drawn, I am getting incorrect/unrealistic Z values (e.g., the depth at a point in the triangle is out of bounds of the depths of all 3 of its vertices)! I have been looking through my code over and over and cannot figure out whether my math is off or I have a careless mistake somewhere, so I will try to present exactly what I am trying to do in the hopes that someone else can see something that I don't. (And I have looked carefully at making sure that floating point values remain floating point values, that I am passing in arguments correctly, etc., so this is really baffling!)
Overall, my scan conversion algorithm fills pixels across a scan line like this (pseudocode):
for all triangles{
    ... //Do edge-related sorting stuff, etc...get ready to fill pixels
    float zInit; //the very first z-value, with a longer calculation
    float zPrev; //the "zk" needed when interpolating "zk+1" across a scan line
    for(xPos = currentX at left side edge; xPos != currentX at right side edge; currentX++){
        *if this is the first pixel across the scan line, calculate zInit and draw pixel/store z if depth is less
         than the current zBuffer value at this point. Then set zPrev = zInit.
        *otherwise, interpolate zNext using zPrev. Draw pixel/store z if depth < current zBuffer value at
         this point. Then set zPrev = zNext.
    }
    ... //other scan conversion stuff...update x values, etc.
}
To get the value of zInit for each scan line, I consider the plane equation Ax + By + Cz + D = 0 and rearrange it to get z = -1*(Ax + By + D)/C, where x and y are plugged in as the current x value across a scan line and the current scan line value itself, respectively.
For subsequent z values across a scan line, I interpolate as zk+1 = zk - A/C, where A and C come from the plane equation.
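To spell out why that per-pixel increment is constant: from the plane equation, z(x, y) = -(A*x + B*y + D)/C, so stepping one pixel in x gives
z(x+1, y) - z(x, y) = -(A*(x+1) + B*y + D)/C + (A*x + B*y + D)/C = -A/C
which is exactly the zk+1 = zk - A/C update.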
To get the A, B and C for these z calculations, I need the normal vector of the plane defined by the 3 vertices (the array vertex v[3]) of the current triangle. To get this normal (which I named planeNormal in the code), I defined a cross product function:
myVector cross(float x1, float y1, float z1, float x2, float y2, float z2)
{
    float crX = (y1*z2) - (z1*y2);
    float crY = (z1*x2) - (x1*z2);
    float crZ = (x1*y2) - (y1*x2);
    myVector res;
    res.x = crX;
    res.y = crY;
    res.z = crZ;
    return res;
}
To get the D value for the plane equation/my z calculations, I use the plane equation A(x-x1) + B(y-y1) + C(z-z1) = 0, where (x1, y1, z1) is just a reference point in the plane. I just chose the triangle vertex v[0] for the reference point and rearranged:
Ax + By + Cz = Ax1 + By1 + Cz1
Thus, D = Ax1 + By1 + Cz1
So, finally, to get the A, B, C, and D for the z calculations, I did this for each triangle, where trianglelist[nt] is the triangle at current index nt in the overall triangle array for the scene:
float pA = planeNormal.x;
float pB = planeNormal.y;
float pC = planeNormal.z;
float pD = (pA*trianglelist[nt].v[0].x)+(pB*trianglelist[nt].v[0].y)+(pC*trianglelist[nt].v[0].z);
From here, within the scan conversion algorithm I described, I calculated the zs:
zInit = -1*((pA*cx)+(pB*scanLine)+(pD))/(pC); //cx is current x value; scanLine is current y value
...
...
float zNext = zPrev - (pA/pC);
Alas, after all that careful work, something is off! In some triangles, the depth values come out realistic (except for the sign). With triangle given by the vertices (200, 10, 75), (75, 200, 75) and (15, 60, 75), all depths come out as -75. The same happened for other triangles with all vertices at the same depth. But with the vertices (390, 300, 105), (170, 360, 80), (190, 240, 25), all of the z values are over 300! The very first one comes out as 310.5, and the rest just get bigger, with a max around 365. This should not happen when the deepest vertex is at z = 105!!! So, after all of the rambling, can anyone see what might have caused this? I wouldn't be surprised if it's a sign-related thing, but where (after all, the absolute values are right in the constant depth cases)?
The correct equations are:
n = cross (v[2] - v[0], v[1] - v[0]);
D = - dot (n, v[0]);
Note the minus sign.
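Folded back into the variable names from the question, the corrected setup would look something like this (a sketch; the edge order follows the answer above, and the triangle's winding determines which way the normal points):
// Normal from two edge vectors of the triangle.
myVector planeNormal = cross(
    trianglelist[nt].v[2].x - trianglelist[nt].v[0].x,
    trianglelist[nt].v[2].y - trianglelist[nt].v[0].y,
    trianglelist[nt].v[2].z - trianglelist[nt].v[0].z,
    trianglelist[nt].v[1].x - trianglelist[nt].v[0].x,
    trianglelist[nt].v[1].y - trianglelist[nt].v[0].y,
    trianglelist[nt].v[1].z - trianglelist[nt].v[0].z);

float pA = planeNormal.x;
float pB = planeNormal.y;
float pC = planeNormal.z;
// Note the minus sign: Ax + By + Cz + D = 0  =>  D = -(A*x0 + B*y0 + C*z0)
float pD = -((pA*trianglelist[nt].v[0].x) + (pB*trianglelist[nt].v[0].y) + (pC*trianglelist[nt].v[0].z));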
You should have a look at www.scratchapixel.com, particularly this lesson:
http://scratchapixel.com/lessons/3d-advanced-lessons/perspective-and-orthographic-projection-matrix/
It contains a self-contained program that shows you how to project vertices.