How to store and modify angles in 3D space - C++

This isn't about understanding angular physics, but more how to actually implement it.
For example, I'm using a simple linear interpolation per frame (with dT).
I'm having trouble with the angular units; I want to be able to rotate around arbitrary axes.
(with glm)
Using a vec3 for torque, inertia and angular velocity works well for a single axis.
Any more and you get gimbal lock (i.e. you can rotate around a local x, y or z, but superimposing them prevents proper results).
Using quaternions, I can't get it to function nicely with time, inertia, or over an extended period.
Is there any tried-and-true method for representing these features?

The usual solution is to use the matrix representation of rotation. Two rotations in sequence can be expressed by multiplying their respective matrices. And because matrix multiplication is not commutative, the order of the two rotations matters - as it should.

Related

Interchange the origin of a 3D plane

I am working on a fiducial marker system (like Aruco) to obtain a 3d pose of markers (3d coordinates (x, y, z) and the roll, pitch, yaw of the marker) with respect to the camera. The overall setup is as shown in the figure.
Marker-Camera
Right now, for some reason, I am getting the pose representation of the camera with respect to the marker (thus considering the marker as the origin). But for my purpose, I want the pose representation of the marker with respect to the camera. I cannot make changes in the way I am getting the pose, and I must use an external transformation. Currently, I am using the C++ Eigen library.
From what I have read so far, I have to do a rotation around the yaw (z) axis and then translate the obtained pose by the translation vector (x,y,z). But I am not sure how to represent this in Eigen. I tried to define my pose as Affine3f but I am not getting correct results.
Can anyone help me? Thanks!
If you are using ArUco, this might answer your questions: https://stackoverflow.com/a/59754199/8371691
However, if you are using some other marker system, the most robust way is to construct the attitude matrix and take its inverse.
It is not clear how you represent your pose, but whether you use Euler angles or a quaternion, it can easily be converted into an attitude matrix, R.
Then, the inverse transformation is simply the inverse of R.
But since R belongs to SO(3), the group of rotation matrices, its inverse is also its transpose, which is computationally cheaper.
In Eigen, it's simply R.transpose().
If you are using ArUco with OpenCV, you can simply use the built-in Rodrigues function.
But note that if you are using ArUco, rvec is actually already the rotation of the marker with respect to the camera frame.

Quaternion based camera

I am trying to implement an FPS camera based on quaternion math.
I store a rotation quaternion in a variable called _quat and multiply it by another quaternion when needed. Here's some code:
void Camera::SetOrientation(float rightAngle, float upAngle) // in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
    q *= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));
    _quat = q;
}
void Camera::OffsetOrientation(float rightAngle, float upAngle) // in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
    q *= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));
    _quat *= q;
}
The application can request the orientation matrix via GetOrientation, which simply casts the quaternion to a matrix.
glm::mat4 Camera::GetOrientation() const
{
    return glm::mat4_cast(_quat);
}
The application changes the orientation in the following way:
int diffX = ...; // some computation based on mouse movement
int diffY = ...;
camera.OffsetOrientation(g_mouseSensitivity * diffX, g_mouseSensitivity * diffY);
This results in bad, mixed rotations around pretty much all the axes. What am I doing wrong?
The problem is the way that you are accumulating rotations. This would be the same whether you use quaternions or matrices. Combining a rotation representing pitch and yaw with another will introduce roll.
By far the easiest way to implement an FPS camera is to simply accumulate changes to the heading and pitch, then convert to a quaternion (or matrix) when you need to. I would change the methods in your camera class to:
void Camera::SetOrientation(float rightAngle, float upAngle) // in degrees
{
    _rightAngle = rightAngle;
    _upAngle = upAngle;
}
void Camera::OffsetOrientation(float rightAngle, float upAngle) // in degrees
{
    _rightAngle += rightAngle;
    _upAngle += upAngle;
}
glm::mat4 Camera::GetOrientation() const
{
    glm::quat q = glm::angleAxis(glm::radians(-_upAngle), glm::vec3(1,0,0));
    q *= glm::angleAxis(glm::radians(_rightAngle), glm::vec3(0,1,0));
    return glm::mat4_cast(q);
}
The problem
As already pointed out by GuyRT, the way you do accumulation is not good. In theory, it would work that way. However, floating-point math is far from perfectly precise, and errors accumulate the more operations you do. Composing two quaternion rotations takes 28 operations, versus a single addition of a value to an angle (plus, each of the operations in a quaternion multiplication affects the resulting rotation in 3D space in a very non-obvious way).
Also, quaternions used for rotation are rather sensitive to staying normalized, and rotating them de-normalizes them slightly (rotating them many times de-normalizes them a lot, and rotating them by another, already de-normalized quaternion amplifies the effect).
Reflection
Why do we use quaternions in the first place?
Quaternions are commonly used for the following reasons:
Avoiding the dreaded gimbal lock (although a lot of people misunderstand the issue: replacing three angles with three quaternions does not magically remove the fact that you are still combining three rotations around the unit vectors -- quaternions must be used correctly to avoid the problem)
Efficient combination of many rotations, such as in skinning (28 ops versus 45 ops when using matrices), saving ALU.
Fewer values (and thus fewer degrees of freedom), fewer ops, so less opportunity for undesirable effects compared to using matrices when combining many transformations.
Fewer values to upload, for example when a skinned model has a couple of hundred bones or when drawing ten thousand instances of an object. Smaller vertex streams or uniform blocks.
Quaternions are cool, and people using them are cool.
None of these really makes a difference for your problem.
Solution
Accumulate the two rotations as angles (normally undesirable, but perfectly acceptable for this case), and create a rotation matrix when you need it. This can be done either by combining two quaternions and converting to a matrix as in GuyRT's answer, or by directly generating the rotation matrix (which is likely more efficient, and all that OpenGL wants to see is that one matrix anyway).
To my knowledge, glm::rotate only does rotate-around-arbitrary-axis. Which you could of course use (but then you'd rather combine two quaternions!). Luckily, the formula for a matrix combining rotations around x, then y, then z is well-known and straightforward, you find it for example in the second paragraph of (3) here.
You do not wish to rotate around z, so cos(gamma) = 1 and sin(gamma) = 0, which greatly simplifies the formula (write it out on a piece of paper).
Using rotation angles is something that will make many people shout at you (often not entirely undeserved).
A cleaner alternative is keeping track of the direction you look at either with a vector pointing from your eye in the direction where you wish to look, or by remembering the point in space that you look at (this is something that combines nicely with physics in a 3rd person game, too). That also needs an "up" vector if you want to allow arbitrary rotations -- since then "up" isn't always the world space "up" -- so you may need two vectors. This is much nicer and more flexible, but also more complex.
For what is desired in your example, an FPS where your only options are to look left-right and up-down, I find rotation angles -- for the camera only -- entirely acceptable.
I haven't used GLM, so maybe you won't like this answer. However, performing quaternion rotation is not bad.
Let's say your camera has an initial saved orientation 'vecOriginalDirection' (a normalized vec3). Let's say you want it to follow another 'vecDirection' (also normalized). This way we can adapt a Trackball-like approach, and treat vecDirection as a deflection from whatever is the default focus of the camera.
The usually preferred way to do quaternion rotation in the real world is using NLERP. Let's see if I can remember: in pseudocode (assuming floating-point) I think it's this:
quat = normalize([cross(vecDirection, vecOriginalDirection),
                  1. + dot(vecDirection, vecOriginalDirection)]);
(Don't forget the '1. +'; I forget why it's there, but it made sense at one time. I think I pulled my hair out for a few days until finding it. It's basically the unit quaternion, IIRC, which is getting averaged in, thereby making the double-angle act like the angle... maybe :))
Renormalizing, shown above as 'normalize()', is essential (it's the 'N' in NLERP). Of course, normalizing quat (x,y,z,w) is just:
quat /= sqrt(x*x+y*y+z*z+w*w);
Then, if you want to use your own function to make a 3x3 orientation matrix from quat:
xx = 2.*x*x;  yy = 2.*y*y;  zz = 2.*z*z;
xy = 2.*x*y;  xz = 2.*x*z;  yz = 2.*y*z;
wx = 2.*w*x;  wy = 2.*w*y;  wz = 2.*w*z;
m[0] = 1. - (yy + zz);  m[1] = xy + wz;         m[2] = xz - wy;
m[3] = xy - wz;         m[4] = 1. - (xx + zz);  m[5] = yz + wx;
m[6] = xz + wy;         m[7] = yz - wx;         m[8] = 1. - (xx + yy);
To actually implement a trackball, you'll need to calculate vecDirection when the finger is held down, and save it off to vecOriginalDirection when it is first pressed down (assuming touch interface).
You'll also probably want to calculate these values based on a piecewise half-sphere/hyperboloid function, if you aren't already. I think #minorlogic was trying to save some tinkering, since it sounds like you might be able to just use a drop-in virtual trackball.
The up angle rotation should be pre-multiplied; post-multiplying will rotate the world around the origin through (1,0,0), while pre-multiplying rotates the camera.
glm::quat q_up = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1,0,0));
glm::quat q_right = glm::angleAxis(glm::radians(rightAngle), glm::vec3(0,1,0));
_quat *= q_right;
_quat = q_up * _quat;

Quaternion 3 axis rotation

A little help here. I receive one rotation per axis from a hardware gyroscope, so three rotations for the three axes (x, y, z) in total. When I use a matrix-based rotation I get weird rotations, perhaps because of the multiplication order (RotX*RotY*RotZ != RotY*RotX*RotZ); I have also tried MatrixYawPitchRoll but the same effects appear. Thus I concluded that I should use quaternions, but as far as I can tell I must create 3 quaternions, one per rotation, and when I combine them with multiplication I get the same effects as a matrix-based rotation... So can someone please tell me how to properly use 3 rotations to create and combine quaternions without the previous multiplication effects appearing?
P.S. D3DXQuaternionRotationYawPitchRoll still suffers the same effects as matrix-based rotation.
Quaternions are not a magical salve that washes away rotational issues. A quaternion is just a cheap way to represent a specific orientation and to perform orientation transforms.
Your problem is that you are not representing your orientation as a quaternion; you're representing it as three angles. And it is that representation that causes your rotation problems.
You need to stop using angles. Represent an object's orientation as a quaternion. If you want to adjust your orientation, create a quaternion from your adjustment angle/axis, then multiply that into the object's orientation. Re-normalize the quaternion and you're done.
I see two main sources of problems:
Your conversion from Euler angles is broken.
You use an invalid Euler angle scheme. There exist 24 distinct Euler angle schemes:
http://en.wikipedia.org/wiki/Euler_angles
Put simply, an Euler angle scheme is the order of rotations around the axes: XYZ, ZYX, ZXZ, ...
All conversions to/from matrix/quaternion can be found in the source code accompanying the excellent article by Ken Shoemake, 1993:
http://tog.acm.org/resources/GraphicsGems/gemsiv/euler_angle/

how to extrapolate rotation matrix values?

I am using the chai3d API, which uses a 3x3 floating-point matrix for storing an object's orientation in my virtual world.
I want to predict these orientations on client side, after periodic updates from server, so that I have a consistent virtual graphical world.
I predict the objects (e.g. opengl cube) position by sending a position and velocity value.
Is angular velocity to orientation what velocity is to position?
If yes, how do I calculate the angular velocity from this 3x3 matrix and use it for extrapolation?
A transformation matrix is essentially a representation of a new coordinate system within another coordinate system. If you add a column, you can even put the translation into it. From calculus and physics you may recall:
r = 1/2 a t² + v0 t + r0
v = d/dt r = a t + v0
a = d/dt v
To get from velocity 'v' to position 'r' you have to integrate. In the scalar case you multiply v by time. But scalar multiplication with a matrix will just scale it, not rotate it. So you must do something else. The keyword, if you want to do this using matrices, is matrix powers, i.e. calculating the powers of a matrix.
Say you have a differential rotation, d/dt R. You would integrate this by multiplying the corresponding rotation matrix infinitesimally often with itself, i.e. by taking a power.
But there's also a mathematically much nicer way to do this, something very close to just multiplying by a factor: using quaternions instead of matrices to represent orientations. It turns out that raising a unit quaternion to a power scales the angle of the rotation it describes.
The keywords you should Google for (because StackOverflow is the wrong place for an introduction to the whole theory of quaternions) are:
quaternion
angular velocity
angular interpolation
SLERP http://en.wikipedia.org/wiki/Slerp

3d geometry: how to align an object to a vector

I have an object in 3D space that I want to align according to a vector.
I already got the Y-rotation by doing an atan2 on the x and z components of the vector, but I would also like an X-rotation to make the object look downwards or upwards.
Imagine a plane doing its pitch and yaw, just without the roll.
I am using OpenGL to set the rotations, so I will need a Y-angle and an X-angle.
I would not use Euler angles, but rather an Euler axis/angle. For that matter, this is what OpenGL's glRotate uses as input.
If all you want is to map a vector to another vector, there are an infinite number of rotations to do that. For the shortest one, (the one with the smallest angle of rotation), you can use the vector found by the cross product of your from and to unit vectors.
axis = from X to
from there, the angle of rotation can be found from from.to = cos(theta) (assuming unit vectors)
theta = arccos(from.to)
glRotatef(theta, axis.x, axis.y, axis.z) will then transform from into to (note that glRotate expects the angle in degrees, while arccos returns radians).
But as I said, this is only one of many rotations that can do the job. You need a full reference frame to define more precisely how you want the transform done.
You should use some form of quaternion interpolation (Spherical Linear Interpolation) to animate your object going from its current orientation to this new orientation.
If you store the orientations using Quaternions (vector space math), then you can get the shortest path between two orientations very easily. For a great article, please read Understanding Slerp, Then Not Using It.
If you use Euler angles, you will be subject to gimbal lock and some really weird edge cases.
Actually... take a look at this article. It describes Euler angles, which I believe are what you want here.