Quaternion calculation in "RosInertialUnit.cpp" of Webots ROS default controller - c++

Today I was taking a closer look at the quaternion calculation used in the "RosInertialUnit.cpp" file, which is part of the default ROS controller.
I wanted to try out the InertialUnit using the "keyboard_teleop.wbt" world and added the sensor to the Pioneer robot.
I then compared the robot's rotation values given in the scene tree (in axis-angle format) with the output of the sensor in ROS (the orientation converted to a quaternion). You can see both in the screenshots below:
In my mind, the quaternion output doesn't match the values given in the scene tree. Using MATLAB's function "quat = axang2quat(axang)", I would obtain the following for the example above:
quat = 0.7936 0.0131 -0.6082 0.0104 % w x y z
which, when compared with the ROS message, shows that y and z are switched. I'm not quite sure whether this is on purpose (maybe a different convention?). I didn't want to open a pull request right away but wanted to discuss the issue here first.
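For reference, here is a minimal C++ sketch of the axis-angle to quaternion conversion that axang2quat performs (axangToQuat is a hypothetical helper, not part of the controller; it assumes a non-zero rotation axis):
#include <cmath>

// Convert an axis-angle rotation [ax, ay, az, angle] into a quaternion
// [w, x, y, z]: w = cos(angle/2), vector part = sin(angle/2) * unit axis.
void axangToQuat(const double axang[4], double quat[4]) {
  const double angle = axang[3];
  const double norm = std::sqrt(axang[0] * axang[0] + axang[1] * axang[1] + axang[2] * axang[2]);
  const double s = std::sin(angle * 0.5) / norm;  // scales the normalized axis
  quat[0] = std::cos(angle * 0.5);  // w
  quat[1] = axang[0] * s;           // x
  quat[2] = axang[1] * s;           // y
  quat[3] = axang[2] * s;           // z
}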
I tested the following implementation in a modified version of "RosInertialUnit.cpp", which gives me the expected results (the same results as calculated in MATLAB):
double halfRoll = mInertialUnit->getRollPitchYaw()[0] * 0.5; // turning around x
double halfPitch = mInertialUnit->getRollPitchYaw()[2] * 0.5; // turning around y
double halfYaw = mInertialUnit->getRollPitchYaw()[1] * 0.5; // turning around z
double cosYaw = cos(halfYaw);
double sinYaw = sin(halfYaw);
double cosPitch = cos(halfPitch);
double sinPitch = sin(halfPitch);
double cosRoll = cos(halfRoll);
double sinRoll = sin(halfRoll);
value.orientation.x = cosYaw * cosPitch * sinRoll - sinYaw * sinPitch * cosRoll;
value.orientation.y = sinYaw * cosPitch * sinRoll + cosYaw * sinPitch * cosRoll;
value.orientation.z = sinYaw * cosPitch * cosRoll - cosYaw * sinPitch * sinRoll;
value.orientation.w = cosYaw * cosPitch * cosRoll + sinYaw * sinPitch * sinRoll;
This is the same implementation as the one used in this Wikipedia article.

This inversion is due to the fact that the Webots and ROS coordinate systems are not equivalent.
In Webots:
X: left
Y: up
Z: forward
Which leads to: (https://cyberbotics.com/doc/reference/inertialunit#field-summary)
roll: left (Webots X)
pitch: forward (Webots Z)
yaw: up (Webots Y)
In ROS: (https://www.ros.org/reps/rep-0103.html#axis-orientation)
X: forward
Y: left
Z: up
Which leads to: (https://www.ros.org/reps/rep-0103.html#rotation-representation)
roll: forward (ROS X)
pitch: left (ROS Y)
yaw: up (ROS Z)
As you can see, the roll and pitch axes are swapped, which is why they are swapped in the code too.
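To illustrate the remapping described above, here is a minimal sketch (webotsToRos is a hypothetical helper, not part of the controller) that converts a vector from the Webots frame to the ROS frame:
struct Vec3 { double x, y, z; };

// Remap a vector from the Webots frame (X: left, Y: up, Z: forward)
// into the ROS frame (X: forward, Y: left, Z: up).
Vec3 webotsToRos(const Vec3 &w) {
  Vec3 r;
  r.x = w.z;  // ROS forward = Webots Z
  r.y = w.x;  // ROS left    = Webots X
  r.z = w.y;  // ROS up      = Webots Y
  return r;
}
The same swap applied to roll/pitch/yaw is what produces the switched axes in the quaternion code above.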

Related

Need help expanding particle system spread / divergence from 2 to 3 dimensions

I need help. I've been struggling with this for a week now and getting nowhere. I am building a 3D particle system, mainly for learning, and I am currently working on particle spread / divergence. Specifically, introducing a random direction into the particle direction so as to create something that looks more like a fountain as opposed to a solid stream.
I have been successful in getting this to work on one axis, but no matter what I do, I cannot get it to work in 3 dimensions.
Here is what I am doing:
// Compute a random angle between -180 and +180 degrees for the velocity angle on x, y and z. spreadAmount is a float from 0.0 to 1.0 to control the degree of spread.
float velangrndx = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
float velangrndy = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
float velangrndz = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
// Compute Angles
float vsin_anglex_dir = -PF_SIN(velangrndx);
float vcos_anglex_dir = -PF_COS(velangrndx);
float vsin_angley_dir = -PF_SIN(velangrndy);
float vcos_angley_dir = -PF_COS(velangrndy);
float vsin_anglez_dir = -PF_SIN(velangrndz);
float vcos_anglez_dir = -PF_COS(velangrndz);
// Assign initial velocity to velocity x, y, z. vel is a float ranging from 0.0 - 0.1 specified by user. velx, vely, and velz are also floats.
velx = vel; vely = vel; velz = vel;
And finally, we get to the particle spread / divergence function below. If I use only the first X axis (and comment out the Y and Z) it works as it should (see images), but if I include the Y and Z axes, it behaves completely incorrectly. px0, py0, and pz0 are temporary float variables used to preserve the velocity variables.
// X Divergence
px0 = (velx * vsin_anglex_dir);
py0 = (velx * vcos_anglex_dir);
pz0 = velz;
velx = px0; vely = py0; velz = pz0;
// Y Divergence
py0 = (vely * vsin_angley_dir);
pz0 = (vely * vcos_angley_dir);
px0 = velx;
velx = px0; vely = py0; velz = pz0;
// Z Divergence
pz0 = (velz * vsin_anglez_dir);
px0 = (velz * vcos_anglez_dir);
py0 = vely;
velx = px0; vely = py0; velz = pz0;
The velx, vely, and velz are then used to calculate for particle screen position.
This is what the particle spread looks like at 25%, 75% and 100% for the X axis only (with the Y and Z code commented out). This works as it should and, in theory, if the rest of my code were working correctly, I should get this same result for the Y and Z axes. But I don't.
I could really use some help here. Any suggestions on what I am doing wrong and how to correctly expand the currently working spread function from 2 dimensions to 3?
Thanks,
-Richard
Likely it is because the values of velx, vely and velz are getting overwritten by the subsequent calculations. See whether the code below works the way you are expecting.
// X Divergence
float velxXD = (velx * vsin_anglex_dir);
float velyXD = (velx * vcos_anglex_dir);
float velzXD = velz;
// Y Divergence
float velxYD = velx;
float velyYD = (vely * vsin_angley_dir);
float velzYD = (vely * vcos_angley_dir);
// Z Divergence
float velxZD = (velz * vcos_anglez_dir);
float velyZD = vely;
float velzZD = (velz * vsin_anglez_dir);
velx=velxXD+velxYD+velxZD;
vely=velyXD+velyYD+velyZD;
velz=velzXD+velzYD+velzZD;

Finding Perpendicular Points Given An Angle With Piece-wise Hermite Splines

I am given a Hermite spline from which I want to create another spline with every point on that spline being exactly x distance away.
Here's an example of what I want to do:
I can find every derivative and point on the original spline. I also know all the coefficients of each polynomial.
Here's the code that I've come up with that does this for every control point of the original spline, where controlPath[i] is a vector of control points that make up the spline, and Point is a struct representing a 2D point with its facing angle.
double x, y, a;
a = controlPath[i].Angle + 90;
x = x * cosf(a * (PI / 180)) + controlPath[i].X;
y = x * sinf(a * (PI / 180)) + controlPath[i].Y;
Point l(x, y, a - 90);
a = controlPath[i].Angle - 90;
x = x * cosf(a * (PI / 180)) + controlPath[i].X;
y = x * sinf(a * (PI / 180)) + controlPath[i].Y;
Point r(x, y, a + 90);
This method works to an extent, but its results are subpar.
Result of this method using my input:
The inaccuracy is not good. How do I address this issue?
If you build normals of a given length at every point of a Hermite spline and connect the endpoints of these normals, the resulting curve (the so-called parallel or offset curve) is not a Hermite spline in the general case. The same is true for Bézier curves and most other curves (only a circular arc generates a self-similar offset, plus a few exotic curves).
So to generate a reliable result, it is worth subdividing the curve into small pieces, building normals at all intermediate points, and generating a smooth piecewise spline through these "parallel points".
Also note the doubtful use of x on the right-hand side of your formulas - it should be some offset distance, not x itself (which is uninitialized at that point).
Also, you don't need to calculate sin/cos twice:
double x, y, a, d, c, s;
// d is the desired offset distance
a = controlPath[i].Angle + 90;
c = d * cosf(a * (PI / 180));
s = d * sinf(a * (PI / 180));
x = c + controlPath[i].X;
y = s + controlPath[i].Y;
Point l(x, y, controlPath[i].Angle);
x = -c + controlPath[i].X;
y = -s + controlPath[i].Y;
Point r(x, y, controlPath[i].Angle);
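As a rough illustration of the subdivision approach, here is a minimal sketch (evalSpline, evalTangent and the fixed sample count are assumptions standing in for whatever your spline class provides): sample the curve densely, offset each sample along its unit normal by the desired distance, then fit a new spline through those points.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Hypothetical accessors for the original spline at parameter t in [0, 1].
Pt evalSpline(double t);
Pt evalTangent(double t);

// Sample the "parallel curve": offset each sampled point by distance d
// along the unit normal (the tangent rotated by +90 degrees).
std::vector<Pt> parallelCurveSamples(double d, int samples) {
  std::vector<Pt> result;
  for (int i = 0; i <= samples; ++i) {
    const double t = static_cast<double>(i) / samples;
    const Pt p = evalSpline(t);
    const Pt tan = evalTangent(t);
    const double len = std::sqrt(tan.x * tan.x + tan.y * tan.y);
    const Pt n = {-tan.y / len, tan.x / len};  // unit normal
    result.push_back({p.x + d * n.x, p.y + d * n.y});
  }
  return result;
}
A piecewise spline fitted through these samples will approximate the true parallel curve much more closely than offsetting only the control points.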

From Euler angles to Quaternions

I am working on a simulation of plane movement. So far I have used Euler angles to transform the "body frame" into the "world frame", and it works fine.
Recently I learned about quaternions and their advantages over rotation matrices (no gimbal lock), and I tried to implement them using the yaw/pitch/roll angles from the simulator.
Quaternion
If I understand correctly, a quaternion represents two things. It has x, y, and z components, which represent the axis about which a rotation will occur. It also has a w component, which represents the amount of rotation that will occur about this axis. In short, a vector and a float. A quaternion can be represented as a 4-element vector:
q=[w,x,y,z]
To calculate the result (after the full rotation), the following equation is used:
p'=qpq'
where:
p = [0, x, y, z] - the direction vector
q = [w, x, y, z] - the rotation
q' = [w, -x, -y, -z]
Algorithm
Create the quaternion:
Using Wikipedia, I create the quaternion q by rotating around the 3 axes:
Quaterniond toQuaternion(double yaw, double pitch, double roll) // yaw (Z), pitch (Y), roll (X)
{
// Degrees to radians:
yaw = yaw * M_PI / 180;
pitch = pitch * M_PI / 180;
roll = roll * M_PI / 180;
// Abbreviations for the various angular functions
double cy = cos(yaw * 0.5);
double sy = sin(yaw * 0.5);
double cp = cos(pitch * 0.5);
double sp = sin(pitch * 0.5);
double cr = cos(roll * 0.5);
double sr = sin(roll * 0.5);
Quaterniond q;
q.w = cy * cp * cr + sy * sp * sr;
q.x = cy * cp * sr - sy * sp * cr;
q.y = sy * cp * sr + cy * sp * cr;
q.z = sy * cp * cr - cy * sp * sr;
return q;
}
Define the plane's direction (heading) vector:
p = [0,1,0,0]
Calculate Hamilton product:
p'=qpq'
q'= [w, -qx, -qy, -qz]
p' = H(H(q, p), q')
Quaterniond HamiltonProduct(Quaterniond u, Quaterniond v)
{
Quaterniond result;
result.w = u.w*v.w - u.x*v.x - u.y*v.y - u.z*v.z;
result.x = u.w*v.x + u.x*v.w + u.y*v.z - u.z*v.y;
result.y = u.w*v.y - u.x*v.z + u.y*v.w + u.z*v.x;
result.z = u.w*v.z + u.x*v.y - u.y*v.x + u.z*v.w;
return result;
}
Result
My result will be a vector:
v=[p'x,p'y,p'z]
It works, but it suffers from the same problem as Euler angle rotation (gimbal lock). Is it because I also use Euler angles here? I don't really see how it could work without rotating around the 3 axes. Should I rotate around every axis separately?
I will be grateful for any advice and help with understanding this problem.
EDIT (how application works)
1. My application is based on data streaming, which means that every 1 ms it checks whether there is new data (a new orientation of the plane).
Example:
At the beginning pitch/roll/yaw = 0; after 1 ms the yaw has changed by 10 degrees, so the application reads pitch = 0, roll = 0, yaw = 10. After the next 1 ms the yaw has changed again by 20 degrees, so the input data will look like this: pitch = 0, roll = 0, yaw = 30.
2. Create direction quaternion - p
At the beginning, I define that the direction (heading) of my plane is along the X axis. So my local direction is
v=[1,0,0]
in quaternion (my p) is
p=[0,1,0,0]
Vector3 LocalDirection, GlobalDirection; //head
Quaterniond p,P,q, Q, pq; //P = p', Q=q'
LocalDirection.x = 1;
LocalDirection.y = 0;
LocalDirection.z = 0;
p.w = 0;
p.x = LocalDirection.x;
p.y = LocalDirection.y;
p.z = LocalDirection.z;
3. Create rotation
Every 1 ms I check the rotation angles (Euler) from the data stream and calculate q using toQuaternion:
q = toQuaternion(yaw, pitch, roll); // create quaternion after rotation
Q.w = q.w;
Q.x = -q.x;
Q.y = -q.y;
Q.z = -q.z;
4. Calculate "world direction"
Using Hamilton product I calculate quaternion after rotation which is my global direction:
pq = HamiltonProduct(q, p);
P = HamiltonProduct(pq, Q);
GlobalDirection.x = P.x;
GlobalDirection.y = P.y;
GlobalDirection.z = P.z;
5. Repeat 3-4 every 1ms
It seems that your simulation uses Euler angles for rotating objects each frame and only converts those angles to quaternions afterwards. That won't solve gimbal lock.
Gimbal lock can happen any time you add Euler angles to Euler angles. Converting to quaternions only when going from local space to world space is not enough; your simulation also needs to use quaternions between frames.
Basically, every time your object changes its rotation, convert the current rotation to a quaternion, multiply in the new rotation delta, and convert the result back to Euler angles or whatever you use to store rotations.
I'd recommend rewriting your application to use and store only quaternions. Whenever a user provides input or some other logic of your game wants to rotate something, immediately convert that input into a quaternion and feed it into the simulation.
With your toQuaternion and HamiltonProduct you have all the tools you need for that.
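For example, a minimal sketch of that idea reusing your toQuaternion and HamiltonProduct (applyRotationDelta is a hypothetical name, and it assumes the inputs are the per-frame changes in angle, not the accumulated absolute angles):
// Stored orientation of the plane, kept between frames.
// Initialize it once to the identity rotation: w = 1, x = y = z = 0.
Quaterniond orientation;

// Called every 1 ms with the change in yaw/pitch/roll since the last update
// (e.g. +10 degrees of yaw), in degrees, matching toQuaternion's input.
void applyRotationDelta(double deltaYaw, double deltaPitch, double deltaRoll)
{
  Quaterniond delta = toQuaternion(deltaYaw, deltaPitch, deltaRoll);
  // Multiplication order depends on whether the delta is expressed in the
  // world frame (pre-multiply, as here) or in the body frame (post-multiply).
  orientation = HamiltonProduct(delta, orientation);
  // Occasionally re-normalizing orientation counters numerical drift.
}
You would then rotate the heading vector with p' = q p q' exactly as in your step 4, but using the accumulated quaternion instead of one built from accumulated Euler angles.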
EDIT In response to your edit explaining how your application works.
At the beginning pitch/roll/yaw = 0; after 1 ms the yaw has changed by 10 degrees, so the application reads pitch = 0, roll = 0, yaw = 10. After the next 1 ms the yaw has changed again by 20 degrees, so the input data will look like this: pitch = 0, roll = 0, yaw = 30.
That is where gimbal lock happens. You convert to quaternions after you calculate the rotations. That is wrong. You need to use quaternions in this very first step. Don't do "after 1 ms the yaw is changed by 10 degrees, so the application reads pitch = 0, roll = 0, yaw = 10"; instead, do this:
Store the rotation as a quaternion, not as Euler angles;
Convert the 10 degree yaw turn into a quaternion;
Multiply the stored quaternion and the 10 degree yaw quaternion;
Store the result.
To clarify: Your steps 2, 3 and 4 are fine. The problem is in step 1.
On a side note, this:
It has an x, y, and z component, which represents the axis about which a rotation will occur. It also has a w component, which represents the amount of rotation which will occur about this axis
is not exactly correct. The components of a quaternion aren't directly the axis and angle; they are sin(angle/2) * axis and cos(angle/2) (which is what your toQuaternion method produces). This is important, as it gives you nice unit quaternions that form a 4D sphere, where every point on its surface represents a rotation in 3D space, beautifully allowing for smooth interpolation between any two rotations.
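For example, a 90 degree rotation about the Z axis is q = [cos(45°), 0, 0, sin(45°)] ≈ [0.707, 0, 0, 0.707], not [90, 0, 0, 1].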

Issues with rotation matrix

I'm currently working on the intermediates between a physics engine and a rendering engine. My physics engine takes in a series of forces and positions and returns a quaternion.
I am currently converting that quaternion into a rotation matrix using the answers to my previous question (which is working fine). My coordinate system is z - into the screen, y - up, and x - right.
Now after all that exposition, I have been testing by rotating about a single axis at a time. I can rotate about the y axis and z axis without any issues whatsoever. However, when I attempt to rotate around the x axis the system produces a bizarre result. The rotation is fine, but as it rotates the object flattens (i.e. negatively scales) in the z direction before "flipping" and returning to full scale. It does so every 90 degrees, at a 45 degree offset to the cardinal directions.
This is my code to convert my quaternion to a rotation matrix:
Matrix4f output = new Matrix4f();
output.setIdentity();
if(input.length()!=0){
input.normalise();
}
float xx = input.x * input.x;
float xy = input.x * input.y;
float xz = input.x * input.z;
float xw = input.x * input.w;
float yy = input.y * input.y ;
float yz = input.y * input.z;
float yw = input.y * input.w;
float zz = input.z * input.z;
float zw = input.z * input.w;
output.m00 = 1 -2*((yy+zz));
output.m01 = 2*(xy+zw);
output.m02 = 2*(xz-yw);
output.m10 = 2*(xy-zw);
output.m11 = 1 - (2*(xx+zz));
output.m12 = 2*(yz+xw);
output.m20 = 2*(xz+yw);
output.m21 = 2*(yz+xw);
output.m22 = 1-(2*(xx+yy));
Now I'm viewing this in real time as the object rotates, and I see nothing that shouldn't be there. Also, this passes untouched from this equation directly to OpenGL, so it is really beyond me why I should have this issue. Any ideas?
output.m21 = 2*(yz+xw); should be output.m21 = 2*(yz-xw);
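For reference, here is a minimal C++ sketch of the full conversion with that one term corrected, keeping the same element layout as the code in the question (Quat and quatToMatrix are hypothetical names; the input quaternion is assumed to be normalized):
struct Quat { float w, x, y, z; };

void quatToMatrix(const Quat &q, float m[3][3]) {
  const float xx = q.x * q.x, xy = q.x * q.y, xz = q.x * q.z, xw = q.x * q.w;
  const float yy = q.y * q.y, yz = q.y * q.z, yw = q.y * q.w;
  const float zz = q.z * q.z, zw = q.z * q.w;
  m[0][0] = 1 - 2 * (yy + zz);
  m[0][1] = 2 * (xy + zw);
  m[0][2] = 2 * (xz - yw);
  m[1][0] = 2 * (xy - zw);
  m[1][1] = 1 - 2 * (xx + zz);
  m[1][2] = 2 * (yz + xw);
  m[2][0] = 2 * (xz + yw);
  m[2][1] = 2 * (yz - xw);  // the corrected term: minus, not plus
  m[2][2] = 1 - 2 * (xx + yy);
}
With the plus sign, a pure rotation about x produces a symmetric matrix whose determinant oscillates with the angle, which is exactly the periodic flattening you observed.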

Lissajous figure in Direct3D

I made a cube in DirectX, and now I want the cube to move around in a Lissajous pattern. But for some reason, no matter what variables I enter, my cube just moves in circles instead of tracing the Lissajous figure.
I'm not familiar with this function and I've been searching for answers, but I can't seem to fix my problem. So maybe I made a mistake in the function, or maybe I'm doing everything completely wrong.
This is the code I use to calculate the position, where m_Angle changes every frame so the cube keeps moving.
float scale = 3.f;
float valueA = 1.0f;
float valueB = 2.0f;
float valueX = scale * valueA * sin(m_Angle + ((valueB - 1) / valueB)*(XM_PIDIV2));
float valueZ = scale * valueB * sin(m_Angle);
m_pColoredCube_1->SetPos(XMFLOAT3(valueX, 0.0f, valueZ));
Lissajous figures are just the interference of different oscillations. An oscillation can be described as:
y(t) = amplitude * sin(2 * PI * frequency * t + phase)
In your case, t is m_Angle.
You then set different oscillations for the x and z components (and possibly for the y component, too). If you set both frequencies equal (as you did), you get a circle or an ellipse, depending on the phase. What you want to do instead is:
float frequencyRatio = ...;
float phaseDifference = ...;
float valueX = scale * sin(m_Angle * frequencyRatio + phaseDifference);
float valueZ = scale * sin(m_Angle);
If you set frequencyRatio = 2.0f and phaseDifference = 0, you get the following figure:
Or for frequencyRatio = 5.0f / 4.0f and phaseDifference = 0:
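For completeness, here is a minimal sketch of the same idea wrapped in a helper (LissajousPoint is a hypothetical name; the usage lines reuse the members from the question):
#include <cmath>

// Returns the X/Z offsets of a Lissajous figure for parameter t.
// frequencyRatio and phaseDifference are the knobs that turn the circle
// into an actual Lissajous pattern.
void LissajousPoint(float t, float scale, float frequencyRatio,
                    float phaseDifference, float &outX, float &outZ) {
  outX = scale * std::sin(t * frequencyRatio + phaseDifference);
  outZ = scale * std::sin(t);
}

// Usage each frame, with m_Angle advancing as before:
//   float x, z;
//   LissajousPoint(m_Angle, 3.0f, 2.0f, 0.0f, x, z);
//   m_pColoredCube_1->SetPos(XMFLOAT3(x, 0.0f, z));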