Velocity components in 3d space - angle

I am trying to calculate the x, y, z velocity components for a projectile being shot from a cannon. I have the magnitude (power) of the cannon and the zRotation, yRotation, and xRotation of the cannon. The calculations for zRotation and yRotation are fine, but I cannot figure out how to account for the third angle. Thanks.
PS: These are my calculations for velocity with yaw & pitch
cannon->magnitude * cosf(degToRads(cannon->zRotation)) * sinf(degToRads(cannon->yRotation)),
cannon->magnitude * sinf(degToRads(cannon->zRotation)),
cannon->magnitude * cosf(degToRads(cannon->zRotation)) * cosf(degToRads(cannon->yRotation)));
What I have attempted for all three angles:
cannon->magnitude * sinf(degToRads(zAngle)) * cosf(degToRads(xAngle)) * cosf(degToRads(yAngle)),
cannon->magnitude * sinf(degToRads(zAngle)),
cannon->magnitude * cosf(degToRads(zAngle)) * cosf(degToRads(xAngle))* sinf(abs(degToRads(yAngle))));

Without a diagram indicating how the angles are oriented, and without knowing units, I can't be 100% confident that my answer will satisfy you.
That said, if you are given three angles, each associated with one of the three spatial directions, then you may be dealing with something like the "Direction Cosines" example given on this webpage. If this is the case, then your velocity vector will consist of these three components:
v_x=|v_initial|*cos(xAngle)
v_y=|v_initial|*cos(yAngle)
v_z=|v_initial|*cos(zAngle)
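If that's the case, here is a minimal C++ sketch of the decomposition (degToRads and the vector type are modeled on the question's code; I'm assuming each angle is measured between the velocity vector and the corresponding axis):

#include <cmath>

struct Vec3 { float x, y, z; };

static float degToRads(float degrees) { return degrees * 3.14159265f / 180.0f; }

// Direction-cosine decomposition: each component is the magnitude
// scaled by the cosine of the angle between the velocity and that axis.
Vec3 velocityFromDirectionCosines(float magnitude, float xAngle, float yAngle, float zAngle)
{
    return Vec3{ magnitude * std::cos(degToRads(xAngle)),
                 magnitude * std::cos(degToRads(yAngle)),
                 magnitude * std::cos(degToRads(zAngle)) };
}

Note that the three angles are not independent: cos^2(xAngle) + cos^2(yAngle) + cos^2(zAngle) must equal 1, or the resulting vector's length won't match the cannon's power.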

How to create point cloud from rgb & depth images?

For a university project I am currently working on, I have to create a point cloud by reading images from this dataset. These are basically video frames, and for each frame there is an rgb image along with a corresponding depth image.
I am familiar with the equation z = f*b/d; however, I am unable to figure out how the data should be interpreted. Information about the camera that was used to take the video is not provided, and the project also states the following:
"Consider a horizontal/vertical field of view of the camera 48.6/62
degrees respectively"
I have little to no experience in computer vision, and I have never encountered two fields of view being used before. Assuming I use the depth from the image as-is (for the z coordinate), how would I go about calculating the x and y coordinates of each point in the point cloud?
Yes, it's unusual to specify multiple fields of view. Given a typical camera (squarish pixels, minimal distortion, view vector through the image center), usually only one field-of-view angle is given -- horizontal or vertical -- because the other can then be derived from the image aspect ratio.
Specifying a horizontal angle of 48.6 and a vertical angle of 62 is particularly surprising here, since the image is a landscape view, where I'd expect the horizontal angle to be greater than the vertical. I'm pretty sure it's a typo:
When swapped, the ratio tan(62 * pi / 360) / tan(48.6 * pi / 360) is the 640 / 480 aspect ratio you'd expect, given the image dimensions and square pixels.
At any rate, a horizontal angle of t is basically saying that the horizontal extent of the image, from left edge to right edge, covers an arc of t radians of the visual field, so the pixel at the center of the right edge lies along a ray rotated t / 2 radians to the right from the central view ray. This "righthand" ray runs from the eye at the origin through the point (tan(t / 2), 0, -1) (assuming a right-handed space with positive x pointing right and positive y pointing up, looking down the negative z axis). To get the point in space at distance d from the eye, you can just normalize a vector along this ray and multiply it by d. Assuming the samples are linearly distributed across a flat sensor, I'd expect that for a given pixel at (x, y) you could calculate its corresponding ray point with:
p = (dx * tan(hfov / 2), dy * tan(vfov / 2), -1)
where dx is 2 * (x - width / 2) / width, dy is 2 * (y - height / 2) / height, and hfov and vfov are the field-of-view angles in radians.
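Putting that together, here is a sketch of the per-pixel conversion in C++ (the names are mine; flip the sign of dy if your image origin is the top-left corner, and note this version assumes the image value is true distance from the eye):

#include <cmath>

struct Point3 { float x, y, z; };

// Convert pixel (x, y) with measured distance d from the eye into a
// 3D point, given horizontal/vertical fields of view in radians.
Point3 pixelToPoint(int x, int y, float d, int width, int height, float hfov, float vfov)
{
    float dx = 2.0f * (x - width / 2.0f) / width;   // -1 .. 1 across the image
    float dy = 2.0f * (y - height / 2.0f) / height; // -1 .. 1 up the image
    float px = dx * std::tan(hfov / 2.0f);
    float py = dy * std::tan(vfov / 2.0f);
    float pz = -1.0f;
    // Normalize the ray direction, then scale it by the measured distance.
    float len = std::sqrt(px * px + py * py + pz * pz);
    return Point3{ d * px / len, d * py / len, d * pz / len };
}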
Note that the documentation that accompanies your sample data links to a Matlab file that shows the recommended process for converting the depth images into a point cloud and distance field. In it, the fields of view are baked together with the image dimensions into a constant factor of 570.3, which can be used to recover the field-of-view angles the authors believed their recording device had:
2 * atan(320 / 570.3) * (180 / pi) = 58.6
which is indeed pretty close to the 62 degrees you were given.
From the Matlab code, it looks like the value in the image is not distance from a given point to the eye, but instead distance along the view vector to a perpendicular plane containing the given point ("depth", or basically "z"), so the authors can just multiply it directly with the vector (dx * tan(hfov / 2), dy * tan(vfov / 2), -1) to get the point in space, skipping the normalization step mentioned earlier.
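If your data is planar depth like that, the sketch above simplifies: skip the normalization and multiply the depth straight into the unnormalized ray (same assumed names as before):

Point3 depthPixelToPoint(int x, int y, float depth, int width, int height, float hfov, float vfov)
{
    float dx = 2.0f * (x - width / 2.0f) / width;
    float dy = 2.0f * (y - height / 2.0f) / height;
    // depth is distance along the view axis ("z"), so it scales the
    // unnormalized ray (dx * tan, dy * tan, -1) directly.
    return Point3{ depth * dx * std::tan(hfov / 2.0f),
                   depth * dy * std::tan(vfov / 2.0f),
                   -depth };
}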

Proper calculation for the first element of an OpenGL projection Matrix?

Almost all the theoretical material I've read about projection matrices has the first element as 2n/(r-l), but most of the open source implementations I've seen have it as 2n/((t-b)*a) -- which made sense to me at first, since (r-l) should equal ((t-b)*a), but when I actually run the numbers, something feels off.
If we have a vertical field of view of 65 degrees, a near plane of .1, and an aspect ratio of 4:3, then I seem to get:
2n/(r-l) = .2 / (tan(65*(4/3)*.5) * .2) = 1.0599
but
2n/((t-b)*a) = .2 / (tan(65*.5) * (4/3) * .2) = 1.1773
Why is there a difference between everything I read and everything I see implemented? I didn't notice until I started implementing the analytical inverse I've seen elsewhere, whose first element is (r-l)/2n, which isn't the inverse of these other implementations.
You can't multiply the aspect ratio into the angle. The tangent isn't a linear function. Having a 65-degree vertical field of view with a 4:3 aspect ratio does not give you an 86.67-degree horizontal FOV, but ~80.69 degrees.
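Here's a quick C++ check of both claims (a sketch; the variable names are mine):

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double n = 0.1, aspect = 4.0 / 3.0;
    const double vfov = 65.0 * pi / 180.0;

    // Correct horizontal FOV: scale the half-angle's tangent, not the angle itself.
    double hfov = 2.0 * std::atan(std::tan(vfov / 2.0) * aspect);
    std::printf("hfov = %.2f degrees\n", hfov * 180.0 / pi); // ~80.69, not 86.67

    // With hfov derived this way, the two forms of the first element agree.
    double r_minus_l = 2.0 * n * std::tan(hfov / 2.0);
    std::printf("2n/(r-l)          = %.4f\n", 2.0 * n / r_minus_l);                   // 1.1773
    std::printf("1/(tan(vfov/2)*a) = %.4f\n", 1.0 / (std::tan(vfov / 2.0) * aspect)); // 1.1773
    return 0;
}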

Converting quaternions to Euler angles. Problems with the range of Y angle

I'm trying to write a 3D simulation in C++, using Irrlicht as the graphics engine and ODE for physics. To convert the ODE quaternions into Irrlicht Euler angles, I'm using this code:
void QuaternionToEuler(const dQuaternion quaternion, vector3df &euler)
{
    dReal w, x, y, z;
    w = quaternion[0];
    x = quaternion[1];
    y = quaternion[2];
    z = quaternion[3];

    double sqw = w * w;
    double sqx = x * x;
    double sqy = y * y;
    double sqz = z * z;

    euler.Z = (irr::f32) (atan2(2.0 * (x*y + z*w), (sqx - sqy - sqz + sqw)) * (180.0f / irr::core::PI));
    euler.X = (irr::f32) (atan2(2.0 * (y*z + x*w), (-sqx - sqy + sqz + sqw)) * (180.0f / irr::core::PI));
    euler.Y = (irr::f32) (asin(-2.0 * (x*z - y*w)) * (180.0f / irr::core::PI));
}
It works fine for drawing in the correct position and rotation, but the problem comes with the asin instruction. It only returns values in the range -90..90, and I need to get a range of 0..360 degrees. At least I need to get a rotation in the range of 0..360 when I call node->getRotation().Y.
Euler angles (of any type) have a singularity. In the case of those particular Euler angles that you are using (which look like Tait-Bryan angles, or some variation thereof), the singularity is at plus-minus 90 degrees of pitch (Y). This is an inherent limitation with Euler angles and one of the prime reasons why they are rarely used in any serious context (except in aircraft dynamics because all aircraft have a very limited ability to pitch w.r.t. their velocity vector (which might not be horizontal), so they rarely come anywhere near that singularity).
This also means that your calculation is actually just one of two equivalent solutions. For a given quaternion, there are two solutions for Euler angles that represent that same rotation, one on one side of the singularity and another that mirrors the first. Since both solutions are equivalent, you just pick the one on the easiest side, i.e., where the pitch is between -90 and 90 degrees.
Also, your code needs to deal with approaching the singularity in order to avoid getting NaN. In other words, you must check whether you are getting close (within a small tolerance) to the singular points (-90 and 90 degrees of pitch), and if so, use an alternate formula (which can only compute the one angle that best approximates the rotation).
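For the convention your code uses (which matches the standard yaw-pitch-roll order), one common guard looks like the sketch below; the 0.9999 tolerance and the choice to zero the X angle at the singularity are arbitrary choices of mine:

void QuaternionToEulerSafe(const dQuaternion quaternion, vector3df &euler)
{
    dReal w = quaternion[0], x = quaternion[1];
    dReal y = quaternion[2], z = quaternion[3];
    const double toDeg = 180.0 / irr::core::PI;

    double test = -2.0 * (x*z - y*w); // the argument passed to asin
    const double tol = 0.9999;        // how close to +-90 degrees of pitch we allow

    if (test > tol || test < -tol) {
        // Near the singularity the yaw and roll axes align, so only their
        // combination is defined; fold it all into Z and zero X.
        euler.Y = (irr::f32)(test > 0 ? 90.0f : -90.0f);
        euler.Z = (irr::f32)(2.0 * atan2(z, w) * toDeg);
        euler.X = 0.0f;
        return;
    }
    QuaternionToEuler(quaternion, euler); // the regular three formulas otherwise
}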
If there is any way for you to avoid using Euler angles altogether, I highly suggest that you do so; pretty much any representation of rotations is preferable to Euler angles. Irrlicht uses matrices natively and also supports setting/getting rotations via an axis-angle representation, which is much nicer to work with (and much easier to obtain from a quaternion, and doesn't have singularities).
Think about the Earth's globe. Each point on it can be defined using only latitude (in the range [-90, 90]) and longitude (in the range [-180, 180]), so every point on a sphere can be specified by these two angles. Now, a point on a sphere specifies a vector, and all the points on a sphere specify all possible vectors. So, just as pointed out in this article, the formulas you use will generate all possible directions.
Hope this helps.

OpenGL + SDL rotation around local axis

I've been working on a semi flight simulator. What I am trying to do is use pitch, roll, and yaw to rotate an object. I have already looked online a lot, and although the sources explain what the problem is, I have no idea how to implement the solution. So, for example, I do:
glRotatef(yaw,0,1,0);
glRotatef(pitch,1,0,0);
The yaw doesn't act properly; the pitch works fine. From what I have been reading, it seems that the object's local axes have been changed, so I need to find the object's new local axis and rotate around that. So I tried something like:
newpitch = pitch / 57.29;
VectorA vec(0, cos(newpitch) - sin(newpitch), sin(newpitch) + cos(newpitch));
glRotatef(yaw, vec.getXAxis(), vec.getYAxis(), vec.getZAxis());
glRotatef(pitch, 1, 0, 0);
This seems to not work either.
I've also tried making a general rotation matrix and giving it both pitch and yaw and still the same problem. And I've tried using quaternions and the same problem still exists!
Here is my code for quaternions:
void Quat::eulerToQuat(float roll, float pitch, float yaw)
{
    float radiansY = yaw / 57.2;
    float radiansZ = roll / 57.2;
    float radiansX = pitch / 57.2;

    float sY = sinf(radiansY * 0.5);
    float cY = cosf(radiansY * 0.5);
    float sZ = sinf(radiansZ * 0.5);
    float cZ = cosf(radiansZ * 0.5);
    float sX = sinf(radiansX * 0.5);
    float cX = cosf(radiansX * 0.5);

    w = cY * cZ * cX - sY * sZ * sX;
    x = sY * sZ * cX + cY * cZ * sX;
    y = sY * cZ * cX + cY * sZ * sX;
    z = cY * sZ * cX - sY * cZ * sX;
}
Then I converted this into a matrix and used glMultMatrix(matrix) with the modelview matrix, and this has the same problem. So I'm confident it wouldn't be gimbal lock =).
So in my code I do:
float matrix[4][4];
Quat quat;
quat.eulerToQuat(roll, pitch, yaw);
quat.toMatrix(matrix);
glMultMatrixf(&matrix[0][0]);
I think you're referring to gimbal lock? You're right that each rotation modifies the axes around which subsequent local rotations will occur. In your case that affects the yaw because the OpenGL matrix stack works so that each thing you add to it occurs conceptually before whatever is already on the stack (ie, it's post multiplication in matrix terms).
Your solution, however, won't solve the problem even if implemented correctly. What you're trying to do is get the global y axis in local coordinate space so that you can rotate around the global y even after you've rotated around the global z, shifting the local axes. But that just buys you much the same problems as if you'd stuck with global axes throughout and applied the rotations in the other order. The second rotation will now interfere with the first rather than vice versa.
Another way to convince yourself that what you're doing is wrong is to look at how much information you have. You're trying to describe the orientation of an object with two numbers, and two numbers aren't enough to describe every possible rotation, so there's obviously some other rule in there to convert two numbers into a complete orientation. Whatever you do to modify that rule, you're going to end up limiting the orientations you can reach. But with an aeroplane you really want to be able to reach any orientation, so that's a fundamental contradiction.
The confusion comes because, if you have a suitable way of storing orientation, it's completely valid to work forward from that by saying 'what is the orientation if I modify that by rotating around local y by 5, then around local z by 10?', etc. The problem is trying to aggregate all those transformations into a single pair of rotations. It isn't possible.
The easiest solution if you're already generally up on OpenGL tends to be to store the orientation as a complete matrix. You accumulate pitch and yaw rotations by applying them as they occur to that matrix. You pass that matrix to OpenGL via glMultMatrix to perform your drawing.
It's not an optimal solution, but a quick-fix test solution would be to use glLoadMatrix and glGet to apply transformations: load your matrix onto the OpenGL matrix stack, apply the rotations, then retrieve the result, separately from your drawing. It's not really what the stack is for, so you'll probably get some performance problems, and over time rounding errors will cause odd behaviour, but you can fix those once you're persuaded by the approach. The OpenGL man pages give the formulas for all transformation matrices, and you should look up matrix normalisation (you'll probably be using an orthonormal matrix whether you realise it or not, which should help with Google) to deal with cumulative rounding.
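A sketch of that quick-fix (fixed-function OpenGL; orientation is my name for the persistently stored matrix, which must start out as the identity):

#include <GL/gl.h>

float orientation[16]; // column-major 4x4, initialized to the identity matrix

void accumulateRotation(float pitch, float yaw)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(orientation);         // start from the stored orientation
    glRotatef(yaw, 0.0f, 1.0f, 0.0f);   // incremental rotation about the local y axis
    glRotatef(pitch, 1.0f, 0.0f, 0.0f); // then about the local x axis
    glGetFloatv(GL_MODELVIEW_MATRIX, orientation); // read the product back out
    glPopMatrix();
}

// When drawing, apply the whole accumulated orientation at once:
// glMultMatrixf(orientation);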
EDIT: with respect to the code you've posted while I was rambling, quaternions are another valid way of representing orientation, and another thing you can apply incremental updates to safely. They're also compact and very easy to protect from rounding errors. However, I think your problem may be that you aren't using quaternions as the storage for orientation, merely as an intermediate container. So adding them to the chain doesn't fix any of your problems.
EDIT2: a further bit of hand-waving explanation to push the idea that directly storing pitch and yaw isn't good enough: imagine that, from the point of view of the pilot, you apply a yaw of 90 degrees, then a pitch of 30 degrees, then a yaw of -90 degrees. Then you end up exactly as if you'd applied a roll of 30 degrees. But if you're just storing pitch and yaw then you've no way of storing roll. Furthermore, if you just add up the total yaw and total pitch you end up thinking you've applied a pitch of 30 degrees rather than a roll. So it doesn't matter what order you apply pitch and yaw, or whether you use global or local axes, you get the wrong result.
You should yaw, pitch, and roll using one transformation, because when you don't, you're pushing yourself towards gimbal lock. Excerpt:
Gimbal lock is the loss of one degree of freedom in a three-dimensional space that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
Consider this example of a gimbal-locked airplane: when the pitch (green) and yaw (magenta) gimbals become aligned, changes to roll (blue) and yaw apply the same rotation to the airplane.

How to create a rotation based Impulse Vector (Cocos2d, Chipmunk, Spacemanager)

So I'm trying to create a character with two jetpacks, either of which can be fired independently of the other to create an impulse offset from the center of gravity (using Cocos2d, Chipmunk, and SpaceManager).
My problem is that, by default, the impulse function doesn't take into account the current rotation of the object (i.e. which way it's pointing), therefore the impulse offset and direction that I use end up being the same no matter what direction the character is pointing in.
I'm trying to create a more realistic model, where the impulse is based on the existing rotation of the object. I'm sure I could programmatically maintain a vector variable that holds the current direction the character is pointing and use that, but there has to be a simpler answer.
I've heard people write about world-space vs. body-relative coordinates, and how impulse is world-space by default, and body-relative would fix my problem. Is this true? If so, how do you convert between these two coordinate systems?
Any help you could give me would be greatly appreciated.
If you have the current heading of your character (the angle it has rotated through, going counter-clockwise) stored in theta, and your impulse vector is in ix and iy, then the world-space vector will be
ix_world = ix * cos(theta) - iy * sin(theta);
iy_world = ix * sin(theta) + iy * cos(theta);
The angle theta must be in radians for cos and sin to work correctly. If you need to convert from degrees to radians, multiply the angle by PI / 180.0.
If you want to see where this formula came from, read here.
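As a C++ sketch (the function and type names are mine; if you're calling Chipmunk directly, cpvrotate(impulse, cpvforangle(theta)) should compute the same rotation):

#include <cmath>

struct Vec2 { float x, y; };

// Rotate a body-relative vector into world space, given the body's
// heading theta in radians (counter-clockwise).
Vec2 bodyToWorld(Vec2 v, float theta)
{
    return Vec2{ v.x * std::cos(theta) - v.y * std::sin(theta),
                 v.x * std::sin(theta) + v.y * std::cos(theta) };
}

Rotate both the impulse and its offset from the center of gravity this way before applying them to the body.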