When applying rotations one after another, precision errors accumulate.
But I am surprised by how fast the error builds up.
In this example I am comparing two transformations that are theoretically equivalent.
In practice, I get a 0.02 degree error from doing just two rotations instead of one.
I was expecting the error to be lower.
Is there a way to make the results of these two transformations closer, other than using double-precision variables?
#include <cmath>   // M_PI, acos
#include <cstdio>  // printf
#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp>

double RadToDeg(double rad)
{
    return rad * 180.0 / M_PI;
}

const glm::vec3 UP(0, 0, 1);

void CompareRotations()
{
    glm::vec3 v0 = UP;
    glm::vec3 v1 = glm::normalize(glm::vec3(0.0491, 0.0057, 0.9987));
    glm::vec3 v2 = glm::normalize(glm::vec3(0.0493, 0.0057, 0.9987));

    glm::vec3 axis_0_to_1 = glm::cross(v0, v1);
    glm::vec3 axis_1_to_2 = glm::cross(v1, v2);
    glm::vec3 axis_global = glm::cross(v0, v2);

    // angles in degrees (pre-0.9.6 GLM; modern GLM expects radians)
    float angle_0_to_1 = RadToDeg(acos(glm::dot(v0, v1)));
    float angle_1_to_2 = RadToDeg(acos(glm::dot(v1, v2)));
    float angle_global = RadToDeg(acos(glm::dot(v0, v2)));

    // two incremental rotations...
    glm::vec3 v_step = UP;
    v_step = glm::rotate(v_step, angle_0_to_1, axis_0_to_1);
    v_step = glm::rotate(v_step, angle_1_to_2, axis_1_to_2);

    // ...versus one equivalent global rotation
    glm::vec3 v_glob = UP;
    v_glob = glm::rotate(v_glob, angle_global, axis_global);

    float angle = RadToDeg(acos(glm::dot(v_step, v_glob)));
    if (angle > 0.01)
    {
        printf("error");
    }
}
If you just want to keep rotating about the same axis, then it is probably best to just increment the rotation angle around that axis and recompute a new matrix from that angle every time. Note that you can directly compute a matrix for rotation around an arbitrary axis. Building rotations from Euler angles, for example, is generally neither necessary nor a great solution (singularities, numerically not ideal, behavior not very intuitive). There is an overload of glm::rotate() that takes an axis and an angle that you could use for that.
If you really have to concatenate many arbitrary rotations around arbitrary axes, then using quaternions to represent your rotations would potentially be numerically more stable. Since you're already using GLM, you could just use the quaternions in there. You might find this tutorial useful.
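For example, a minimal sketch along those lines (GLM; the helper name and parameters are just for illustration):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Accumulate two axis-angle rotations as quaternions instead of matrices.
glm::vec3 RotateStepwise(const glm::vec3& v,
                         const glm::vec3& axis0, float radians0,
                         const glm::vec3& axis1, float radians1)
{
    glm::quat q0 = glm::angleAxis(radians0, glm::normalize(axis0));
    glm::quat q1 = glm::angleAxis(radians1, glm::normalize(axis1));
    glm::quat q  = glm::normalize(q1 * q0); // renormalize after concatenation
    return q * v;                           // apply the combined rotation
}

Renormalizing the unit quaternion after each concatenation is cheap and keeps the drift bounded, which is hard to achieve with chained matrices.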
Floating-point arithmetic isn't exact: every multiplication introduces a small rounding error, and those errors compound each time you chain another transform -- quite rapidly, as you have discovered.
Generally you want to store your transforms not as the result matrix, but as the steps required to get that matrix; for example, if you are doing only a single-axis transform, you store your transform as the angle and recompute the matrix each time. However, if multiple axes are involved, this gets very complicated very quickly.
Another approach is to use an underlying representation of the transform that can itself be transformed precisely. Quaternions are very popular for this (per Michael Kenzel's answer), but another approach that can be easier to visualize is to use a pair of vectors that represent the transform in a way that you can reconstitute a normalized matrix. For example, you can think of your rotation as a pair of vectors, forward and up. From this you can compute your transformation matrix with e.g.:
z_axis = normalize(forward);
x_axis = normalize(cross(up, forward));
y_axis = normalize(cross(forward, x_axis));
and then you build your transform matrix from these vectors; given those axes and a pos for your position the (column-major) OpenGL matrix will be:
{ x_axis.x, x_axis.y, x_axis.z, 0,
  y_axis.x, y_axis.y, y_axis.z, 0,
  z_axis.x, z_axis.y, z_axis.z, 0,
  pos.x,    pos.y,    pos.z,    1 }
Similarly, you can renormalize a transform matrix by extracting the Z and Y vectors from your matrix as direction and up, respectively, and reconstructing a new matrix from them.
This does involve more computation than using quaternions, but I find it much easier to wrap my head around.
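As a concrete illustration, here is a minimal GLM sketch of that reconstruction (the function name is hypothetical):

#include <glm/glm.hpp>

// Rebuild an orthonormal (column-major) transform from forward/up + position.
glm::mat4 MatrixFromForwardUp(const glm::vec3& forward,
                              const glm::vec3& up,
                              const glm::vec3& pos)
{
    glm::vec3 z_axis = glm::normalize(forward);
    glm::vec3 x_axis = glm::normalize(glm::cross(up, forward));
    glm::vec3 y_axis = glm::normalize(glm::cross(forward, x_axis));
    return glm::mat4(glm::vec4(x_axis, 0.0f),
                     glm::vec4(y_axis, 0.0f),
                     glm::vec4(z_axis, 0.0f),
                     glm::vec4(pos,    1.0f));
}

To renormalize an existing transform, extract its third and second columns as forward and up and feed them back through this function.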
I'm working with quaternions and an LSM6DSO32 sensor (gyro + accelerometer). I fuse the data coming from the sensor, and after that I have a quaternion; everything works well.
Now I'd like to detect whether my quaternion has rotated more than 90° relative to an initial quaternion. Here is what I do: q1 is my initial quaternion and q2 is the quaternion coming from my sensor fusion. To detect whether q2 has rotated more than 90° from q1, I do:
q_conj = conjugateQuaternion(q2);
q_multiply = multiplyQuaternion(q1, q_conj);
float angle = (2 * acos(q_multiply.element.w)) * RAD_TO_DEG;
if (angle > 90.0f) {
    // do something
}
This works very well: I can detect whether q2 has rotated more than 90°. But my "problem" is that I also detect 90° rotations in yaw, and I don't want to include yaw in my test. Is it possible to nullify the yaw (the z component of my quaternion) without modifying the w, x and y components?
My final objective is to detect a rotation of more than 90°, but without caring about yaw, and I don't want to use Euler angles because I want to avoid gimbal lock.
Edit: I want to calculate the magnitude between q1 and q2 and not care about yaw.
The "yaw" of a quaternion generally means q_yaw in a quaternion formed by q_roll * q_pitch * q_yaw. So that quaternion without its yaw would be q_roll * q_pitch. If you have the pitch and roll values at hand, the easiest thing to do is just to reconstruct the quaternion while ignoring q_yaw.
However, if we are really dealing with a completely arbitrary quaternion, we'll have to get from q_roll * q_pitch * q_yaw to q_roll * q_pitch.
We can do it by appending the opposite transformation at the end of the equation: q_roll * q_pitch * q_yaw * conj(q_yaw). q_yaw * conj(q_yaw) is guaranteed to be the identity quaternion as long as we are only dealing with normalized quaternions. And since we are dealing with rotations, that's a safe-enough assumption.
In other words, removing the "Yaw" of a quaternion would involve:
Find the yaw of the quaternion
Multiply the quaternion by the conjugate of that.
So we need to find the yaw of the quaternion, which is how much the forward vector is rotated around the up axis by that quaternion.
The simplest way to do that is to just try it out, and measure the result:
Transform a reference forward vector (on the ground plane) by the quaternion
Take that and project it back on the ground plane.
Get the angle between this projection and the reference vector.
Form a "Yaw" quaternion with that angle around the Up axis.
Putting all this together, and assuming you are using a Y=up system of coordinates, it would look roughly like this:
quat remove_yaw(quat q) {
    vec3 forward{0, 0, -1};
    vec3 up{0, 1, 0};

    vec3 transformed = q.rotate(forward);
    vec3 projected = transformed.project_on_plane(up);

    if( length(projected) < epsilon ) {
        // TODO: unsolvable, what should happen here?
    }
    projected = normalize(projected);

    // atan2 keeps the sign of the yaw; acos alone would lose it
    float theta = atan2(dot(cross(forward, projected), up),
                        dot(forward, projected));
    quat yaw_quat = quat.from_axis_angle(up, theta);
    return multiply(q, conjugate(yaw_quat));
}
This can be simplified a bit, obviously. For example, the conjugate of an axis-angle quaternion is the same thing as a quaternion of the negative angle around the same axis, and I'm sure there are other possible simplifications here. However, I wanted to illustrate the principle as clearly as possible.
There's also a singularity when the pitch is exactly ±90°. In these cases the yaw is gimbal-locked into being indistinguishable from roll, so you'll have to figure out what you want to do when length(projected) < epsilon.
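For reference, a runnable version of the same sketch using GLM might look like this (still assuming Y-up and -Z-forward; returning q unchanged in the singular case is just one possible policy):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat remove_yaw(const glm::quat& q)
{
    const glm::vec3 forward(0.0f, 0.0f, -1.0f);
    const glm::vec3 up(0.0f, 1.0f, 0.0f);

    // rotate the reference forward vector, project it onto the ground plane
    glm::vec3 transformed = q * forward;
    glm::vec3 projected = transformed - glm::dot(transformed, up) * up;

    if (glm::length(projected) < 1e-6f)
        return q; // gimbal-locked: yaw is indistinguishable from roll here

    // signed yaw angle between forward and its projection, around up
    float theta = std::atan2(glm::dot(glm::cross(forward, projected), up),
                             glm::dot(forward, projected));
    glm::quat yaw_quat = glm::angleAxis(theta, up);
    return q * glm::conjugate(yaw_quat);
}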
Given a coordinate system where positive z goes towards the ceiling, I have a glm::vec3 called dir representing a (normalized) direction between two points A and B in 3D space.
The two points A and B happen to be on the same plane, so the z coordinate of dir is zero. Now, given an angle α, I would like to rotate dir towards the ceiling by the specified amount. As an example, if α is 45 degrees, I would like dir to point in the same x/y direction, but 45 degrees towards the ceiling.
My original idea was to calculate the "right" vector of dir, and use that as a rotation axis. I have attempted the following:
glm::vec3 rotateVectorUpwards(const glm::vec3& input, const float aRadians)
{
    const glm::vec3 up{0.0, 0.0, 1.0};
    const glm::vec3 right = glm::cross(input, glm::normalize(up));

    glm::mat4 rotationMatrix(1); // identity matrix
    rotationMatrix = glm::rotate(rotationMatrix, aRadians, right);
    return glm::vec3(rotationMatrix * glm::vec4(input, 1.0));
}
I would expect that invoking rotateVectorUpwards(dir, glm::radians(45.f)) would return a vector representing my desired new direction, but it always returns a vector with a zero z component.
I have also attempted to represent the same rotation with quaternions:
glm::quat q;
q = glm::rotate(q, aRadians, right);
return q * input;
But, again, the resulting vector always seems to have a zero z component.
What am I doing wrong?
Am I misunderstanding what the "axis of rotation" means?
Is my right calculation incorrect?
How can I achieve my desired result?
You don't need to normalize your up vector because you defined it to be a unit vector, but you do need to normalize your right vector.
However, while I am unfamiliar with glm, I suspect the problem is that you are rotating the matrix (or quaternion) around your axis rather than creating a matrix/quaternion that represents a rotation around your axis. Taking a quick look at the docs, it looks like you might want to use:
glm::mat4 rotationMatrix = glm::rotate(radians, right);
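Putting both fixes together, a corrected version of the function from the question might look like this (a sketch using the glm/gtx/transform.hpp overload):

#include <glm/glm.hpp>
#include <glm/gtx/transform.hpp>

glm::vec3 rotateVectorUpwards(const glm::vec3& input, const float aRadians)
{
    const glm::vec3 up{0.0f, 0.0f, 1.0f};
    const glm::vec3 right = glm::normalize(glm::cross(input, up));

    // build a rotation matrix about the (normalized) right axis
    const glm::mat4 rotationMatrix = glm::rotate(aRadians, right);

    // w = 0: rotate a direction, not a point
    return glm::vec3(rotationMatrix * glm::vec4(input, 0.0f));
}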
I have a simple example:
FVector TransformTest()
{
    FVector translation;
    {
        translation.X = 0.0f;
        translation.Y = 20.0f;
        translation.Z = 0.0f;
    }
    FRotator rotation;
    {
        rotation.Roll = 0.0f;
        rotation.Pitch = 0.0f;
        rotation.Yaw = 45.0f;
    }
    FVector scale;
    {
        scale.X = 2.0f;
        scale.Y = 1.0f;
        scale.Z = 1.0f;
    }
    FTransform test_transform = FTransform(rotation, translation, scale);
    return (test_transform.Inverse() * test_transform).GetTranslation();
}
And this function returns the vector:
[X = -10.0, Y = -10.0, Z = 0.0].
Expected:
[X = 0.0, Y = 0.0, Z = 0.0].
I am applying a transform and then its inverse, for example LocalToWorld followed by WorldToLocal, so I start in some space and should end up back in that same space
(up to small inaccuracies), but instead I end up in a strange space far away from the source.
I submitted what I believe is a bug to Epic, and got this response:
Hello,
Thank you for submitting a bug report, however at this time we believe that the issue you are describing is not actually a bug with the Unreal Engine, and so we are not able to take any further action on this.
Here is the UE4 forum thread.
Is this behavior correct?
Is it a bug or not?
Actually, it looks like this is impossible to solve (without major changes).
FTransform stores a decomposed transformation (translation, rotation, and scale). Applying it goes in the order scale, rotation, then translation.
But the inverted transformation must be applied in the reverse order: translation, then rotation, then scale.
However, it's the same class, and it carries no information about the application order.
This works fine with a uniform scale, but it's not correct for a non-uniform one. If you use non-uniform scale, you'll have to use FMatrix.
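For example, a sketch of that matrix-based workaround for the example from the question (assuming UE4's FTransform/FMatrix API; the function name is hypothetical):

FVector TransformTestFixed()
{
    const FTransform test_transform(FRotator(0.0f, 45.0f, 0.0f), // pitch, yaw, roll
                                    FVector(0.0f, 20.0f, 0.0f),
                                    FVector(2.0f, 1.0f, 1.0f));  // non-uniform scale

    // invert via the full 4x4 matrix instead of FTransform::Inverse()
    const FMatrix m = test_transform.ToMatrixWithScale();
    const FMatrix round_trip = m.Inverse() * m; // identity, up to rounding
    return round_trip.GetOrigin();              // ~(0, 0, 0) as expected
}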
An Unreal transform is basically a combination of a 3d scaling, rotation, and translation, in that order. In general, such a transformation can be inverted unless the scale is zero; that's what the math tells us. Such a 3d transformation is also an affine transformation, which can be represented by a 4x4 matrix and is invertible if the determinant is not zero. However, the inverse cannot in general be represented by a combination of scaling, rotation, and translation any longer, thus it cannot be represented by an Unreal transform. This is not a bug, this is math!
Here is why: for a uniform scale s the inverse is simple, and it is obvious that it can be represented either by a 4x4 matrix or by an Unreal transform, because (sRT)^-1 = T^-1 * R^-1 * 1/s = 1/s * R^-1 * (R * T^-1 * R^-1) = s' * R' * T', which is another Unreal transform.
For a non-uniform scale, which can be represented by a scaling matrix S, the inverse is of course still possible, but it cannot be represented by an Unreal transform, because (SRT)^-1 = T^-1 * R^-1 * S^-1 = R' * T' * S' != S'R'T'. It can still be represented by a 4x4 matrix, namely T^-1 * R^-1 * S^-1.
Speaking less mathematically: an Unreal transform is a means of describing a scaling operation followed by a rigid-body transformation. The inverse of a scaling followed by a rigid-body transformation is the inverse of the rigid-body transformation followed by a scaling. If the scaling is non-uniform, this can no longer be described by a scaling followed by a rigid-body transformation; it would require a general linear transformation, which can be represented by a 4x4 matrix, but not by an Unreal transform.
So to conclude: an Unreal transform is not a 4x4 matrix. A 4x4 matrix is more powerful.
This appears to be a design flaw, since the problem could easily be circumvented by letting an Unreal transform be (S1RTS2), so that (S1RTS2)^-1 = S2^-1 * R^-1 * (R * T^-1 * R^-1) * S1^-1 = S1' * R' * T' * S2'. If carefully implemented, this wouldn't even need to impact performance, since Unreal transforms are just a way of describing scaled rigid-body transformations, which have to be converted to 4x4 matrices at some point in the 3d graphics pipeline anyway.
This still does not solve the more general problem of concatenating two Unreal transforms with non-uniform scaling. But if we assume that only the left-most or right-most scaling is non-uniform, we end up with a usable solution, since the typical use case is indeed to scale a 3d model first and then place it in world space, followed by some additional rigid-body transformations or uniform scalings. Such a transformation would be entirely invertible if an Unreal transform were equal to (S1RTS2). Unfortunately it is not, so we need to use 4x4 matrices in that case.
I'm having a problem understanding matrices. If I rotate my matrix 90 degrees about the X axis it works fine, but then, if I rotate it 90 degrees about the Y axis, it actually rotates about the Z axis. I guess after each rotation the axes move. How do I rotate a second time (or more) using the original axes? Is this called local and global rotation?
You don't "rotate" matrices. You apply rotation transformation matrices by multiplication. And yes, each time you call an OpenGL matrix manipulation function, the outcome will be used as input for the next transformation multiplication.
A rotation by 90° about the X axis will map the Y axis to Z and the Z axis to -Y, which is what you observe. So whatever transformation comes next starts off from this.
Either build the whole transformation for each object anew, using glLoadIdentity to reset to an identity matrix, or use glPushMatrix / glPopMatrix to create a hierarchy of "transformation blocks". Or better yet, abandon the OpenGL built-in matrix stack altogether and replace it with a proper matrix math library like GLM, Eigen or similar.
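For instance, a minimal fixed-function sketch of the first option (drawObjectA/drawObjectB are hypothetical placeholders):

glMatrixMode(GL_MODELVIEW);

glLoadIdentity();                    // start from a known state
glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // rotate about the global X axis
drawObjectA();

glLoadIdentity();                    // reset again for the next object
glRotatef(90.0f, 0.0f, 1.0f, 0.0f); // rotate about the global Y axis
drawObjectB();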
Add 'glLoadIdentity' between the rotations.
In practice, the best way to overcome this problem is to use quaternions; it involves quite a bit of math. You are also right about the axes: if you rotate 90 degrees around Y, then when you want to rotate around Z you will actually be rotating around X.
Here is a nice source for converting Euler angles to quaternions: http://www.euclideanspace.com/maths/geometry/rotations/conversions/eulerToQuaternion/
And here is how to make a rotation matrix out of a quaternion:
http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToMatrix/
After you have filled the matrix, you can multiply it onto the current matrix by calling glMultMatrixf(qMatrix);.
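For reference, a sketch of that conversion for a unit quaternion (w, x, y, z), producing a column-major matrix suitable for glMultMatrixf (assumes the quaternion is already normalized):

// fill m (column-major, m[col*4 + row]) from a unit quaternion
void quatToMatrix(float w, float x, float y, float z, float m[16])
{
    m[0] = 1 - 2*(y*y + z*z); m[4] = 2*(x*y - z*w);     m[8]  = 2*(x*z + y*w);     m[12] = 0;
    m[1] = 2*(x*y + z*w);     m[5] = 1 - 2*(x*x + z*z); m[9]  = 2*(y*z - x*w);     m[13] = 0;
    m[2] = 2*(x*z - y*w);     m[6] = 2*(y*z + x*w);     m[10] = 1 - 2*(x*x + y*y); m[14] = 0;
    m[3] = 0;                 m[7] = 0;                 m[11] = 0;                 m[15] = 1;
}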
Thinking about it last night I found the answer (I always seem to do this...)
I have an object called GLMatrix that holds the matrix:
class GLMatrix {
    public float m[] = new float[16];
    // ...includes many methods to deal with the matrix...
}
And it has a function to add rotation:
public void addRotate2(float angle, float ax, float ay, float az) {
    GLMatrix tmp = new GLMatrix();
    tmp.setAA(angle, ax, ay, az);
    mult4x4(tmp);
}
As you can see, I use axis-angle (AA) rotations, which are applied to a temp matrix using setAA() and then multiplied into the current matrix.
Last night I thought: what if I rotate the input axis of the AA by the current matrix first, and then create the temp matrix and multiply?
So it would look like this:
public void addRotate4(float angle, float ax, float ay, float az) {
    GLMatrix tmp = new GLMatrix();
    GLVector3 vec = new GLVector3();
    vec.v[0] = ax;
    vec.v[1] = ay;
    vec.v[2] = az;
    mult(vec); // multiply the vector by the current matrix
    tmp.setAA(angle, vec.v[0], vec.v[1], vec.v[2]);
    mult4x4(tmp);
}
And it works as expected! The addRotate4() function now rotates about the original axes.
I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are untouched) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny), the coordinates are no longer accurate. I commented the line I think the fault is in, but I can't figure out what's wrong with it.
void rotate(float x_m, float y_m, float z_m,
            float x_p, float y_p, float z_p,
            float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;
    float length_ = sqrt((z_b * z_b) + (x_b * x_b) + (y_b * y_b));
    float w_bx = asin(z_b / sqrt((x_b * x_b) + (z_b * z_b))) + w_nx;
    float w_by = asin(x_b / sqrt((x_b * x_b) + (y_b * y_b))) + w_ny; // <- the fault must be here
    x_n = cos(w_bx) * sin(w_by) * length_ + x_m;
    z_n = sin(w_bx) * sin(w_by) * length_ + z_m;
    y_n = cos(w_by) * length_ + y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see the link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct: the computation you do is for two inclination angles (one measured against the x axis, the other against the y axis)
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore, in this case, using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(w_nx) * b
b = RotationMatrixAroundY(w_ny) * b
n = m + b
See: basic rotation matrices.
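A minimal self-contained sketch of that matrix approach in plain C++ (the Vec3 struct and the function signature are just for illustration):

#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate p around m: first about the X axis by w_nx, then about Y by w_ny.
Vec3 rotate(Vec3 m, Vec3 p, float w_nx, float w_ny)
{
    // difference vector
    float bx = p.x - m.x, by = p.y - m.y, bz = p.z - m.z;

    // rotation about X: y and z change, x stays
    float cx = std::cos(w_nx), sx = std::sin(w_nx);
    float y1 = by * cx - bz * sx;
    float z1 = by * sx + bz * cx;
    float x1 = bx;

    // rotation about Y: x and z change, y stays
    float cy = std::cos(w_ny), sy = std::sin(w_ny);
    float x2 =  x1 * cy + z1 * sy;
    float z2 = -x1 * sy + z1 * cy;

    // translate back to the pivot
    return { x2 + m.x, y1 + m.y, z2 + m.z };
}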
Try to use vector math. Decide in which order you rotate: first around x, then around y, perhaps.
If you rotate around the z-axis (z' = z):
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for the y-axis (y'' = y'):
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
And again, rotating around the x-axis (x''' = x''):
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny can't be measured relative to the fixed xyz coordinate system, but only relative to the coordinate system obtained by applying the angle w_nx.
As kakTuZ observed, your code converts the point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, that's fine with me.
The result of not rotating the next reference axis along with the first rotation is that two points which are 1 km apart at the equator move closer to each other towards the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia ...
But when you work with OpenGL, you really should use OpenGL functions like glRotatef. The resulting transform is then applied to vertices on the GPU, rather than point by point on the CPU as in your function. The doc is here.
Like many others have said, you should use glRotatef to rotate the object for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the ModelView matrix that is on top of the OpenGL stack at the point where it is rendered. Obtain that matrix with glGetFloatv, and then multiply with either your own vector-matrix multiplication function, or one of the many you can find online.
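A sketch of that readback (legacy OpenGL; strictly speaking the ModelView matrix maps into eye space, so this is the world-space position only if the view transform is identity):

GLfloat mv[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mv); // column-major 4x4

// the object-space origin (0,0,0,1) maps to the matrix's translation column
float posX = mv[12];
float posY = mv[13];
float posZ = mv[14];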
But that would be a pain! Instead, look into using the GL feedback buffer. This buffer simply stores the points where the primitive would have been drawn instead of actually drawing it, and you can then read them back from there.
This is a good starting point.