Slerp issues, perspective warping [closed] - c++

I'm working on a function for slerping, and while it mostly works, it's producing a weird perspective-warping issue that I'm stuck trying to work out right now.
Quaternion sLerp(Quaternion start, Quaternion end, float s)
{
    float dot = qDot(start, end);
    float theta = std::acos(dot);
    float sTheta = std::sin(theta);
    float w1 = std::sin((1.0f - s) * theta) / sTheta;
    float w2 = std::sin(s * theta) / sTheta;
    Quaternion Temp(0, 0, 0, 0);
    Temp = start * w1 + end * w2;
    return Temp;
}
Essentially what it should be doing is slerping between two values to produce a rotation, and the result is then converted to a rotation matrix. What's going wrong is a horribly stretched view: during the rotation everything starts out too long/thin, reaches a midpoint where it's much shorter, and then goes back to being thin. Any help would be great.

Your slerp code seems fine, although one would normally make sure that dot >= 0, because otherwise you're rotating the long way around the circle. It's also important to make sure that dot != 1, because otherwise you'll run into divide-by-zero problems.
A proper quaternion should never stretch the view. Either you're passing in non-unit-length quaternions for start or end, or your quaternion-to-matrix code is suspect (or you're getting funky behavior because the angle between the two quaternions is very small and you're dividing by almost zero).
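For reference, here's a hedged sketch of those two guards (flip one endpoint when dot < 0, and fall back to plain linear interpolation when the inputs are nearly parallel). It assumes the question's Quaternion supports scalar multiplication and addition, plus a normalize() helper, none of which the question actually shows:

Quaternion sLerpSafe(Quaternion start, Quaternion end, float s)
{
    float dot = qDot(start, end);
    // Take the short way around: negate one endpoint if needed.
    if (dot < 0.0f) {
        end = end * -1.0f;
        dot = -dot;
    }
    // Nearly parallel: sin(theta) ~ 0, so skip the division and lerp instead.
    if (dot > 0.9995f) {
        return normalize(start * (1.0f - s) + end * s); // normalize() is an assumed helper
    }
    float theta  = std::acos(dot);
    float sTheta = std::sin(theta);
    float w1 = std::sin((1.0f - s) * theta) / sTheta;
    float w2 = std::sin(s * theta) / sTheta;
    return start * w1 + end * w2;
}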
My code for converting from quaternion to a matrix for use in OpenGL:
// First row (q[0..2] = x, y, z; q[3] = w, the scalar part)
glMat[ 0] = 1.0f - 2.0f * ( q[1] * q[1] + q[2] * q[2] );
glMat[ 1] = 2.0f * (q[0] * q[1] + q[2] * q[3]);
glMat[ 2] = 2.0f * (q[0] * q[2] - q[1] * q[3]);
glMat[ 3] = 0.0f;
// Second row
glMat[ 4] = 2.0f * ( q[0] * q[1] - q[2] * q[3] );
glMat[ 5] = 1.0f - 2.0f * ( q[0] * q[0] + q[2] * q[2] );
glMat[ 6] = 2.0f * (q[2] * q[1] + q[0] * q[3] );
glMat[ 7] = 0.0f;
// Third row
glMat[ 8] = 2.0f * ( q[0] * q[2] + q[1] * q[3] );
glMat[ 9] = 2.0f * ( q[1] * q[2] - q[0] * q[3] );
glMat[10] = 1.0f - 2.0f * ( q[0] * q[0] + q[1] * q[1] );
glMat[11] = 0.0f;
// Fourth row
glMat[12] = 0.0f;
glMat[13] = 0.0f;
glMat[14] = 0.0f;
glMat[15] = 1.0f;
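If you're targeting the legacy fixed-function pipeline, the resulting array can then be handed to OpenGL directly, e.g.:

glMatrixMode(GL_MODELVIEW);
glMultMatrixf(glMat); // multiplies the current matrix by the quaternion's rotation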

Do you need to normalise the quaternion?
I think the following:
float sTheta = std::sin(theta);
should be:
float sTheta = std::sqrt(1.0f - dot * dot);
since dot = cos(theta), the identity sin(theta) = sqrt(1 - cos^2(theta)) gives the same value for theta in [0, pi] and saves a trig call.
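And as the comment above hints, it's cheap to normalise both inputs before slerping. A minimal sketch, assuming the Quaternion type exposes w/x/y/z members (its definition isn't shown in the question):

Quaternion qNormalize(Quaternion q)
{
    float len = std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
    if (len > 0.0f) {
        q.w /= len; q.x /= len; q.y /= len; q.z /= len;
    }
    return q;
}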

Related

Quaternion rotation works fine with y/z rotation but gets messed up when I add x rotation

So I've been learning about quaternions recently and decided to make my own implementation. I tried to make it simple, but I still can't pinpoint my error. Rotation about the x, y, or z axis works fine on its own, and combined y/z rotation works as well, but the second I add the x axis to either of the others I get strange stretching output. I'll attach the important code for the rotations below. (Be warned, I'm quite new to C++.)
Here is how I describe a quaternion (as I understand it, since these are unit quaternions, explicit imaginary numbers aren't required):
struct Quaternion {
    float w, x, y, z;
};
The multiplication rules of quaternions:
Quaternion operator*(Quaternion n, Quaternion p) {
    Quaternion o;
    // implements quaternion multiplication rules (Hamilton product):
    o.w = n.w * p.w - n.x * p.x - n.y * p.y - n.z * p.z;
    o.x = n.w * p.x + n.x * p.w + n.y * p.z - n.z * p.y;
    o.y = n.w * p.y - n.x * p.z + n.y * p.w + n.z * p.x;
    o.z = n.w * p.z + n.x * p.y - n.y * p.x + n.z * p.w;
    return o;
}
Generating the rotation quaternion to multiply the total rotation by:
Quaternion rotate(float w, float x, float y, float z) {
    Quaternion n;
    n.w = cosf(w / 2);
    n.x = x * sinf(w / 2);
    n.y = y * sinf(w / 2);
    n.z = z * sinf(w / 2);
    return n;
}
And finally, the matrix calculations which turn the quaternion into an x/y/z position:
inline vector<float> quaternion_matrix(Quaternion total, vector<float> vec) {
    float x = vec[0], y = vec[1], z = vec[2];
    // implementation of 3x3 quaternion rotation matrix:
    vec[0] = (1 - 2 * pow(total.y, 2) - 2 * pow(total.z, 2)) * x
           + (2 * total.x * total.y - 2 * total.w * total.z) * y
           + (2 * total.x * total.z + 2 * total.w * total.y) * z;
    vec[1] = (2 * total.x * total.y + 2 * total.w * total.z) * x
           + (1 - 2 * pow(total.x, 2) - 2 * pow(total.z, 2)) * y
           + (2 * total.y * total.z + 2 * total.w * total.x) * z;
    vec[2] = (2 * total.x * total.z - 2 * total.w * total.y) * x
           + (2 * total.y * total.z - 2 * total.w * total.x) * y
           + (1 - 2 * pow(total.x, 2) - 2 * pow(total.y, 2)) * z;
    return vec;
}
That's pretty much it (I also have a normalize function to deal with floating-point errors). I initialize every object's quaternion to w = 1, x = 0, y = 0, z = 0. I rotate a quaternion using an expression like this:
obj.rotation = rotate(angle, x-axis, y-axis, z-axis) * obj.rotation
where obj.rotation is the object's total quaternion rotation value.
I'd appreciate any help I can get on this issue, whether you know what's wrong or have experienced it before. Thanks.
EDIT: multiplying total by these quaternions outputs the expected rotation:
rotate(angle,1,0,0)
rotate(angle,0,1,0)
rotate(angle,0,0,1)
rotate(angle,0,1,1)
However, any rotations such as these make the model stretch oddly:
rotate(angle,1,1,0)
rotate(angle,1,0,1)
EDIT2: here is the normalize function I use to normalize the quaternions:
Quaternion normalize(Quaternion n, double tolerance) {
    // adds all squares of quaternion values; if normalized, total will be 1:
    double total = pow(n.w, 2) + pow(n.x, 2) + pow(n.y, 2) + pow(n.z, 2);
    if (total > 1 + tolerance || total < 1 - tolerance) {
        // normalizes the quaternion if it exceeds a certain tolerance value:
        n.w /= (float) sqrt(total);
        n.x /= (float) sqrt(total);
        n.y /= (float) sqrt(total);
        n.z /= (float) sqrt(total);
    }
    return n;
}
To implement two rotations in sequence you need the quaternion product of the two elementary rotations. Each elementary rotation is specified by an axis and an angle. But in your code you did not make sure you have a unit vector (direction vector) for the axis.
Make the following modification:
Quaternion rotate(float w, float x, float y, float z) {
    Quaternion n;
    float f = 1 / sqrtf(x * x + y * y + z * z); // normalize the axis
    n.w = cosf(w / 2);
    n.x = f * x * sinf(w / 2);
    n.y = f * y * sinf(w / 2);
    n.z = f * z * sinf(w / 2);
    return n;
}
and then use it as follows
Quaternion n = rotate(angle1, 1, 0, 0) * rotate(angle2, 0, 1, 0);
for the combined rotation of angle1 about the x-axis, and angle2 about the y-axis.
As pointed out in comments, you are not initializing your quaternions correctly.
The following quaternions are not rotations:
rotate(angle,0,1,1)
rotate(angle,1,1,0)
rotate(angle,1,0,1)
The reason is that the axis is not normalized: e.g., the vector (0,1,1) has length sqrt(2), so the unit axis would be (0, 1/sqrt(2), 1/sqrt(2)). Also make sure your angles are in radians.

Getting Quaternions From Gyro Data. How do I get body coordinates?

I've got a gyro hooked up to an Arduino, and I'm getting angular rate out in rad/sec on all three axes.
I want to be able to get yaw, pitch, and roll out in body coordinates, so the three axes of rotation are fixed to the body. The problem I'm having now is that when I roll the sensor, the yaw and pitch I get out become swapped: as I roll the sensor 90 degrees, the yaw and pitch change places, and anywhere in between, the yaw and pitch are a mixture of the two.
Instead, I want to keep the pitch and yaw relative to the new body rotation rather than the initial position.
Here is my code:
void loop() {
    currentTime = millis();
    dt = (currentTime - prevTime) / 1000.0;
    // Puts gyro data into data[3], data[4], data[5]
    readBMI();
    if (firstPass == false) {
        omega[0] = (data[3]);
        omega[1] = (data[4]);
        omega[2] = (data[5]);
        wLength = sqrt(sq(omega[0]) + sq(omega[1]) + sq(omega[2]));
        theta = wLength * dt;
        q_new[0] = cos(theta / 2);
        q_new[1] = (omega[0] / wLength * sin(theta / 2));
        q_new[2] = (omega[1] / wLength * sin(theta / 2));
        q_new[3] = (omega[2] / wLength * sin(theta / 2));
        q[0] = q[0] * q_new[0] - q[1] * q_new[1] - q[2] * q_new[2] - q[3] * q_new[3];
        q[1] = q[0] * q_new[1] + q[1] * q_new[0] + q[2] * q_new[3] - q[3] * q_new[2];
        q[2] = q[0] * q_new[2] - q[1] * q_new[3] + q[2] * q_new[0] + q[3] * q_new[1];
        q[3] = q[0] * q_new[3] + q[1] * q_new[2] - q[2] * q_new[1] + q[3] * q_new[0];
        float sinr_cosp = 2 * (q[0] * q[1] + q[2] * q[3]);
        float cosr_cosp = 1 - 2 * (sq(q[1]) + sq(q[2]));
        roll = atan2(sinr_cosp, cosr_cosp) * 180 / PI;
        pitch = asin(2 * (q[0] * q[2] - q[3] * q[1])) * 180 / PI;
        double siny_cosp = 2 * (q[0] * q[3] + q[1] * q[2]);
        double cosy_cosp = 1 - 2 * (sq(q[2]) + sq(q[3]));
        yaw = atan2(siny_cosp, cosy_cosp) * 180 / PI;
    }
    Serial.print(roll);
    Serial.print(" ");
    Serial.print(pitch);
    Serial.print(" ");
    Serial.print(yaw);
    Serial.print(" ");
    Serial.println();
    delay(20);
    prevTime = currentTime;
}
I'm getting the angles out correctly; my only problem is that the yaw and pitch swap when it rolls. So I'm guessing I need a way to convert from world to body coordinates?
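One thing that stands out before the frame question: the q[0..3] update reads and writes q in place, so q[1] is computed from the already-overwritten q[0], and so on. A sketch of the same update with temporaries (keeping your multiplication order, q = q * q_new):

float t0 = q[0], t1 = q[1], t2 = q[2], t3 = q[3]; // snapshot before overwriting
q[0] = t0 * q_new[0] - t1 * q_new[1] - t2 * q_new[2] - t3 * q_new[3];
q[1] = t0 * q_new[1] + t1 * q_new[0] + t2 * q_new[3] - t3 * q_new[2];
q[2] = t0 * q_new[2] - t1 * q_new[3] + t2 * q_new[0] + t3 * q_new[1];
q[3] = t0 * q_new[3] + t1 * q_new[2] - t2 * q_new[1] + t3 * q_new[0];

If you still see mixed axes after that fix, the multiplication order is the lever: q * q_new composes the increment in the body frame, while q_new * q composes it in the world frame.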

Calculating position of a item relative to camera

I'm making a 3D game using C++ and Irrlicht. It is an FPS-style game, so the player should be able to carry weapons, but I've been struggling with calculating a position relative to the camera. If the camera didn't rotate, the calculation would be easy:
// node is camera's child
vector3df modifier = vector3df(2, 0, 2);
node->setPosition(node->getPosition() + modifier);
However, the camera isn't static but a rotating object, so things are a bit more complicated. Here's an image which hopefully clarifies what I'm trying to say:
This should work in all dimensions: X, Y and Z. I think there are only two trigonometric functions for this kind of purpose, sine and cosine, for calculating the X and Y coordinates. Am I on the wrong path, or can they be applied here? Or is there a solution in Irrlicht itself? Here's the code I've tried to use (found it on SO):
vector3df obj = vector3df(2, 0, 2);
vector3df n = vector3df(0, 0, 0);
n.X = obj.X * cos(60 * DEGTORAD) - obj.Z * sin(60 * DEGTORAD);
n.Z = obj.Z * cos(60 * DEGTORAD) + obj.X * sin(60 * DEGTORAD);
node->setPosition(node->getPosition() + n);
But the weapon just flies forward.
I would be glad for any kind of help or guidance.
P.S. Hopefully this question is clearer than the previous one
The problem with your code is that the rotation is performed around the origin, not around the camera.
What you want to do is to rotate the object (weapon) around the center of the camera by the angle that the camera rotates:
In order to do that you need to perform the following steps:
1 - Translate all the points so that the center of the camera is at the origin,
2 - Apply the rotation matrix (angle is alpha):
[cos (alpha) -sin (alpha)]
[sin (alpha) cos (alpha)]
3 - Undo step 1 on the rotated point.
Sample algorithm:
Position of the weapon: (xObject, yObject)
Position of the camera: (xCamera, yCamera)
Turning angle: alpha
//step 1:
xObject -= xCamera;
yObject -= yCamera;
// step 2
xRot = xObject * cos(alpha) - yObject * sin(alpha);
yRot = xObject * sin(alpha) + yObject * cos(alpha);
// step 3:
xObject = xRot + xCamera;
yObject = yRot + yCamera;
This algorithm is on the XY plane but can be modified for the XZ plane. Assuming that in your code obj represents the position of the weapon, your code can be something like:
...
// Step 1
obj.X-=cam.X;
obj.Z-=cam.Z;
//Step 2
n.X = obj.X * cos(60 * DEGTORAD) - obj.Z * sin(60 * DEGTORAD);
n.Z = obj.Z * cos(60 * DEGTORAD) + obj.X * sin(60 * DEGTORAD);
// Step 3
obj.X = n.X + cam.X;
obj.Z = n.Z + cam.Z;
...
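Putting the three steps together as a small helper (a sketch with a plain Vec3 standing in for Irrlicht's vector3df; alpha is assumed to be the camera's yaw in radians):

#include <cmath>

struct Vec3 { float X, Y, Z; };

// Rotates obj around cam by alpha radians in the XZ plane.
Vec3 rotateAroundCamera(Vec3 obj, const Vec3& cam, float alpha)
{
    // Step 1: translate so the camera sits at the origin.
    obj.X -= cam.X;
    obj.Z -= cam.Z;
    // Step 2: rotate around the origin.
    float rx = obj.X * std::cos(alpha) - obj.Z * std::sin(alpha);
    float rz = obj.Z * std::cos(alpha) + obj.X * std::sin(alpha);
    // Step 3: translate back.
    obj.X = rx + cam.X;
    obj.Z = rz + cam.Z;
    return obj;
}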
Hope that helps!

Calculating 3D Coordinate

I have recently been trying to calculate a 3D point out of a mouse position. So far I have this:
const D3DXMATRIX* pmatProj = g_Camera.GetProjMatrix();
POINT ptCursor;
GetCursorPos( &ptCursor );
ScreenToClient( DXUTGetHWND(), &ptCursor );
// Compute the vector of the pick ray in screen space
D3DXVECTOR3 v;
v.x = ( ( ( 2.0f * ptCursor.x ) / pd3dsdBackBuffer->Width ) - 1 ) / pmatProj->_11;
v.y = -( ( ( 2.0f * ptCursor.y ) / pd3dsdBackBuffer->Height ) - 1 ) / pmatProj->_22;
v.z = 1.0f;
// Get the inverse view matrix
const D3DXMATRIX matView = *g_Camera.GetViewMatrix();
const D3DXMATRIX matWorld = *g_Camera.GetWorldMatrix();
D3DXMATRIX mWorldView = matWorld * matView;
D3DXMATRIX m;
D3DXMatrixInverse( &m, NULL, &mWorldView );
// Transform the screen space pick ray into 3D space
vPickRayDir.x = v.x * m._11 + v.y * m._21 + v.z * m._31;
vPickRayDir.y = v.x * m._12 + v.y * m._22 + v.z * m._32;
vPickRayDir.z = v.x * m._13 + v.y * m._23 + v.z * m._33;
vPickRayOrig.x = m._41;
vPickRayOrig.y = m._42;
vPickRayOrig.z = m._43;
However, as my mathematical skills are lacklustre, I am unsure how to utilise the direction and origin to produce a position. What calculations/formulas do I need to perform to produce the desired results?
It's just like a * x + b, except three times.
For any distance d (positive or negative) from vPickRayOrig:
newPos.x = d * vPickRayDir.x + vPickRayOrig.x;
newPos.y = d * vPickRayDir.y + vPickRayOrig.y;
newPos.z = d * vPickRayDir.z + vPickRayOrig.z;
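To pick a concrete d, intersect the ray with something. For example, against the ground plane y = 0 (a sketch; solve 0 = vPickRayOrig.y + d * vPickRayDir.y for d, then substitute):

// Guard against a ray parallel to the plane.
if (fabsf(vPickRayDir.y) > 1e-6f)
{
    float d = -vPickRayOrig.y / vPickRayDir.y;
    D3DXVECTOR3 hit;
    hit.x = d * vPickRayDir.x + vPickRayOrig.x;
    hit.y = 0.0f;
    hit.z = d * vPickRayDir.z + vPickRayOrig.z;
}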

C++ rotation of point in float coordinates accuracy

I'm studying OpenGL ES 2.0, but I think this is more of a C++ question than an OpenGL one. I'm stuck on rotation. It is known that a rotation transformation can be applied using the following equations:
p'x = cos(theta) * (px-ox) - sin(theta) * (py-oy) + ox
p'y = sin(theta) * (px-ox) + cos(theta) * (py-oy) + oy
But it seems that when I perform this rotation operation several times, an accuracy problem occurs. I guess the core of this problem is the inexact results of the cos function and floating-point limitations. As a result, I see that my rotating object is getting smaller and smaller. So:
1.) Do you think this issue is really connected to a floating-point accuracy problem?
2.) If so, how can I handle it?
Suppose that float _points[] is an array containing the coordinates x1,y1,x2,y2...xn,yn. Then I recompute my coordinates after rotation in the following way:
/* For x */
float angle = .... ;
pair<float, float> _orig_coordinates(0, 0);
for (; coors_ctr < _n_points * 2; coors_ctr += 2)
    _points[coors_ctr] = cos(angle) * (_points[coors_ctr] - _orig_coordinates.first) -
                         sin(angle) * (_points[coors_ctr + 1] - _orig_coordinates.second) +
                         _orig_coordinates.first;
/* For y */
coors_ctr = 1;
for (; coors_ctr < _n_points * 2; coors_ctr += 2)
    _points[coors_ctr] = sin(angle) * (_points[coors_ctr - 1] - _orig_coordinates.first) +
                         cos(angle) * (_points[coors_ctr] - _orig_coordinates.second) +
                         _orig_coordinates.second;
I think the problem is that you're writing the rotated result back to the input array.
p'x = cos(theta) * (px-ox) - sin(theta) * (py-oy) + ox
p'y = sin(theta) * (p'x-ox) + cos(theta) * (py-oy) + oy
Note how the second line ends up using the already-rotated p'x where the original px should be.
Try doing the rotation out of place, or use temporary variables and do one point (x,y) at a time.
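For instance, with temporaries every input is read before anything is written, and both coordinates of a point are handled in one pass (a sketch in the shape of the original loop):

float c = cos(angle), s = sin(angle); // hoisted out of the loop
for (int i = 0; i < _n_points * 2; i += 2)
{
    float px = _points[i]     - _orig_coordinates.first;
    float py = _points[i + 1] - _orig_coordinates.second;
    _points[i]     = c * px - s * py + _orig_coordinates.first;
    _points[i + 1] = s * px + c * py + _orig_coordinates.second;
}

Even with this fix, rotating the stored points over and over still accumulates floating-point error slowly; accumulating a total angle and rotating the original, untouched points each frame sidesteps the drift entirely.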