rotation matrix to quaternion (and back) what is wrong? - c++

I copied code for converting a 3D rotation matrix to a quaternion and back. The same code is used in jMonkey (I just rewrote it into my C++ class). However, it does not work properly, at least not as I would expect. For example, I ran this test:
matrix (a,b,c):
a : 0.707107 0.000000 0.707107
b : 0.000000 -1.000000 0.000000
c : -0.707107 0.000000 0.707107
>>> orthonormality:
a.a b.b c.c 1.000000 1.000000 1.000000
a.b a.c b.c 0.000000 0.000000 0.000000
>>> matrix -> quat
quat: 0.000000 0.594604 0.000000 0.594604 norm(quat) 0.707107
>>> quat -> matrix
matrix (a,b,c):
a: 0.000000 0.000000 1.000000
b: 0.000000 1.000000 0.000000
c: -1.000000 0.000000 0.000000
I think the problem is in matrix -> quat, because I have used the quat -> matrix procedure before and it worked fine. It is also strange that a quaternion made from an orthonormal matrix is not unitary.
The matrix -> quat procedure:
inline void fromMatrix( TYPE m00, TYPE m01, TYPE m02,
                        TYPE m10, TYPE m11, TYPE m12,
                        TYPE m20, TYPE m21, TYPE m22 ) {
    // Use the Graphics Gems code, from
    // ftp://ftp.cis.upenn.edu/pub/graphics/shoemake/quatut.ps.Z
    TYPE t = m00 + m11 + m22;
    // we protect the division by s by ensuring that s >= 1
    if (t >= 0) { // by w
        TYPE s = sqrt(t + 1);
        w = 0.5 * s;
        s = 0.5 / s;
        x = (m21 - m12) * s;
        y = (m02 - m20) * s;
        z = (m10 - m01) * s;
    } else if ((m00 > m11) && (m00 > m22)) { // by x
        TYPE s = sqrt(1 + m00 - m11 - m22);
        x = s * 0.5;
        s = 0.5 / s;
        y = (m10 + m01) * s;
        z = (m02 + m20) * s;
        w = (m21 - m12) * s;
    } else if (m11 > m22) { // by y
        TYPE s = sqrt(1 + m11 - m00 - m22);
        y = s * 0.5;
        s = 0.5 / s;
        x = (m10 + m01) * s;
        z = (m21 + m12) * s;
        w = (m02 - m20) * s;
    } else { // by z
        TYPE s = sqrt(1 + m22 - m00 - m11);
        z = s * 0.5;
        s = 0.5 / s;
        x = (m02 + m20) * s;
        y = (m21 + m12) * s;
        w = (m10 - m01) * s;
    }
}
The quat -> matrix procedure:
inline void toMatrix( MAT& result ) const {
    TYPE r2 = w*w + x*x + y*y + z*z;
    //TYPE s = (r2 > 0) ? 2 / r2 : 0;
    TYPE s = 2 / r2;
    // compute xs/ys/zs first to save 6 multiplications, since xs/ys/zs
    // will be used 2-4 times each.
    TYPE xs = x * s;  TYPE ys = y * s;  TYPE zs = z * s;
    TYPE xx = x * xs; TYPE xy = x * ys; TYPE xz = x * zs;
    TYPE xw = w * xs; TYPE yy = y * ys; TYPE yz = y * zs;
    TYPE yw = w * ys; TYPE zz = z * zs; TYPE zw = w * zs;
    // using s = 2/norm (instead of 1/norm) saves 9 multiplications by 2 here
    result.xx = 1 - (yy + zz);
    result.xy = (xy - zw);
    result.xz = (xz + yw);
    result.yx = (xy + zw);
    result.yy = 1 - (xx + zz);
    result.yz = (yz - xw);
    result.zx = (xz - yw);
    result.zy = (yz + xw);
    result.zz = 1 - (xx + yy);
}
Sorry about TYPE, VEC, MAT, QUAT; they are part of class templates and should be read as double, Vec3d, Mat3d, Quat3d or float, Vec3f, Mat3f, Quat3f.
EDIT:
I also checked whether I get the same behaviour with jMonkey directly (in case I introduced a bug in the Java-to-C++ conversion), and I do, using this code:
Matrix3f Min = new Matrix3f( 0.707107f, 0.000000f, 0.707107f, 0.000000f, -1.000000f, 0.000000f, -0.707107f, 0.000000f, 0.707107f );
Matrix3f Mout = new Matrix3f( );
Quaternion q = new Quaternion();
q.fromRotationMatrix(Min);
System.out.println( q.getX()+" "+q.getY()+" "+q.getZ()+" "+q.getW() );
q.toRotationMatrix(Mout);
System.out.println( Mout.get(0,0) +" "+Mout.get(0,1)+" "+Mout.get(0,2) );
System.out.println( Mout.get(1,0) +" "+Mout.get(1,1)+" "+Mout.get(1,2) );
System.out.println( Mout.get(2,0) +" "+Mout.get(2,1)+" "+Mout.get(2,2) );

Your matrix:
matrix (a,b,c):
a : 0.707107 0.000000 0.707107
b : 0.000000 -1.000000 0.000000
c : -0.707107 0.000000 0.707107
is orthogonal but it is not a rotation matrix. A rotation matrix has determinant 1; your matrix has determinant -1 and is thus an improper rotation.
I think your code is likely correct and the issue is in your data; try it with a real rotation matrix. This also explains the non-unit quaternion you observed: the fromMatrix formulas assume a proper rotation, so feeding them a reflection yields a quaternion whose norm is not 1.
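As a quick sanity check (a minimal standalone sketch; the det3 helper and the plain double[3][3] layout are just for illustration, not part of your class), you can verify the determinant before converting:

#include <cstdio>

// Determinant of a 3x3 matrix stored as rows a, b, c.
static double det3(const double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

int main() {
    const double m[3][3] = {
        {  0.707107,  0.000000, 0.707107 },
        {  0.000000, -1.000000, 0.000000 },
        { -0.707107,  0.000000, 0.707107 }
    };
    // Prints det = -1 (within rounding): orthogonal, but an improper rotation.
    std::printf("det = %f\n", det3(m));
    return 0;
}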

Related

How do I get correct answers using my code with the barycentric formula?

My function getHeightOfTerrain() is calling a barycentric formula function that is not returning the correct height for the one test height set in heightMapFromArray[][].
I've tried watching OpenGL Java game tutorials 14, 21, and 22 by "ThinMatrix", and I am confused about how to use my array heightMapforBaryCentric in both of the supplied functions, and how to set the arguments passed to the baryCentric() function so that I can solve the problem.
int createTerrain(int height, int width)
{
    float holderY[6] = { 0.f, 0.f, 0.f, 0.f, 0.f, 0.f };
    float scaleit = 1.5f;
    float holder[6] = { 0.f, 0.f, 0.f, 0.f, 0.f, 0.f };
    for (int z = 0, z2 = 0; z < iterationofHeightMap; z2++)
    {
        //each loop is two iterations and creates one quad (two triangles)
        //however, because each iteration advances by two (i.e. x = x + 2) on the bottom,
        //the number of triangles is half the x value
        //
        //number of vertices: 80 x 80 x 6
        //column
        for (int x = 0, x2 = 0; x < iterationofHeightMap; x2++)
        {
            //relevant - A : first triangle - on left
            //[row][column]
            holder[0] = heightMapFromArray[z][x];
            //holder[0] = (float)imageData[(z / 2 * MAP_Z + (x / 2)) * 3];
            //holder[0] = holder[0] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x, holder[0], z));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2][z2] = holder[0];

            holder[1] = heightMapFromArray[z+2][x];
            //holder[1] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x) / 2))) * 3];
            //holder[1] = holder[1] / 255;// 6 * scaleit;
            vertices.push_back(glm::vec3(x, holder[1], z + 2));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2][z2+1] = holder[1];

            holder[2] = heightMapFromArray[z+2][x+2];
            //holder[2] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x + 2) / 2))) * 3];
            //holder[2] = holder[2] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[2], z + 2));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2+1][z2+1] = holder[2];

            //relevant - B - second triangle (on right side)
            holder[3] = heightMapFromArray[z][x];
            //holder[3] = (float)imageData[((z / 2)*MAP_Z + (x / 2)) * 3];
            //holder[3] = holder[3] / 255;// 256 * scaleit;
            vertices.push_back(glm::vec3(x, holder[3], z));

            holder[4] = heightMapFromArray[x+2][z+2];
            //holder[4] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x + 2) / 2))) * 3];
            //holder[4] = holder[4] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[4], z + 2));

            holder[5] = heightMapFromArray[x+2][z];
            //holder[5] = (float)imageData[((z / 2)*MAP_Z + ((x + 2) / 2)) * 3];
            //holder[5] = holder[5] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[5], z));

            x = x + 2;
        }
        z = z + 2;
    }
    return(1);
}
float getHeightOfTerrain(float worldX, float worldZ) {
    float terrainX = worldX;
    float terrainZ = worldZ;
    int gridSquareSize = 2.0f;
    gridX = (int)floor(terrainX / gridSquareSize);
    gridZ = (int)floor(terrainZ / gridSquareSize);
    xCoord = ((float)(fmod(terrainX, gridSquareSize)) / (float)gridSquareSize);
    zCoord = ((float)(fmod(terrainZ, gridSquareSize)) / (float)gridSquareSize);
    if (xCoord <= (1 - zCoord))
    {
        //left triangle
        answer = baryCentric(
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ], 0.0f),
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ+1], 1.0f),
            glm::vec3(1.0f, heightMapforBaryCentric[gridX+1][gridZ+1], 1.0f),
            glm::vec2(xCoord, zCoord));
        // if (answer != 1)
        // {
        //     fprintf(stderr, "Z:gridx: %d gridz: %d answer: %f\n", gridX, gridZ, answer);
        // }
    }
    else
    {
        //right triangle
        answer = baryCentric(glm::vec3(0, heightMapforBaryCentric[gridX][gridZ], 0),
                             glm::vec3(1, heightMapforBaryCentric[gridX+1][gridZ+1], 1),
                             glm::vec3(1, heightMapforBaryCentric[gridX+1][gridZ], 0),
                             glm::vec2(xCoord, zCoord));
    }
    if (answer == 1)
    {
        answer = 0;
    }
    //answer = abs(answer - 1);
    return(answer);
}
float baryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos) {
    float det = (p2.z - p3.z) * (p1.x - p3.x) + (p3.x - p2.x) * (p1.z - p3.z);
    float l1 = ((p2.z - p3.z) * (pos.x - p3.x) + (p3.x - p2.x) * (pos.y - p3.z)) / det;
    float l2 = ((p3.z - p1.z) * (pos.x - p3.x) + (p1.x - p3.x) * (pos.y - p3.z)) / det;
    float l3 = 1.0f - l1 - l2;
    return (l1 * p1.y + l2 * p2.y + l3 * p3.y);
}
My expected result was that the height at the center of the test grid would be the set value 0.5, gradually decreasing as the heights decline. Instead, the heights I got were all the same, varied, or increasing; usually they were under the value of one.
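As an aside, a minimal standalone check of the baryCentric() helper above can rule it out as the culprit (the triangle values here are hypothetical; interpolating at a corner must return exactly that corner's height):

// Triangle corners as (x, height, z); height 0.5 sits at local (0,0).
glm::vec3 a(0.0f, 0.5f, 0.0f);
glm::vec3 b(0.0f, 0.0f, 1.0f);
glm::vec3 c(1.0f, 0.0f, 1.0f);
// Querying corner (0,0) should return exactly 0.5.
float h = baryCentric(a, b, c, glm::vec2(0.0f, 0.0f));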

Wrong uv in ray tracer

I have a problem with my ray tracer program: the output image looks wrong. The barycentric coordinate and intersection calculation code is as follows:
bool CTriangle::Intersect(Calculus::CRay& ray, CIntersection* isect) const {
    // Möller–Trumbore intersection algorithm
    const Calculus::CPoint3<float>& p1 = v_points[0];
    const Calculus::CPoint3<float>& p2 = v_points[1];
    const Calculus::CPoint3<float>& p3 = v_points[2];

    Calculus::CVector3<float> e1 = p2 - p1;
    Calculus::CVector3<float> e2 = p3 - p1;
    Calculus::CVector3<float> s1 = Calculus::Math::Cross(ray.direction, e2);
    float determinant = Calculus::Math::Dot(s1, e1);
    if (determinant == 0.0f)
        return false;

    float inv_determinant = 1.0f / determinant;
    Calculus::CVector3<float> s = ray.origin - p1;
    float b1 = Calculus::Math::Dot(s, s1) * inv_determinant;
    if (b1 < 0.0f || b1 > 1.0f)
        return false;

    Calculus::CVector3<float> s2 = Calculus::Math::Cross(s, e1);
    float b2 = Calculus::Math::Dot(ray.direction, s2) * inv_determinant;
    if (b2 < 0.0f || b1 + b2 > 1.0f)
        return false;

    float b0 = 1 - b1 - b2;
    float thit = Calculus::Math::Dot(e2, s2) * inv_determinant;
    if (thit < ray.mint || thit > ray.maxt)
        return false;

    isect->p = ray(thit);
    isect->n = Calculus::Math::Normalize(
        Calculus::CVector3<float>(v_normals[0].x, v_normals[0].y, v_normals[0].z) * b0 +
        Calculus::CVector3<float>(v_normals[1].x, v_normals[1].y, v_normals[1].z) * b1 +
        Calculus::CVector3<float>(v_normals[2].x, v_normals[2].y, v_normals[2].z) * b2);
    isect->uv = v_uvs[0] * b0 + v_uvs[1] * b1 + v_uvs[2] * b2;
    isect->tHit = thit;
    isect->ray_epsilon = 1e-5f * thit;
    return true;
}
The texture I used in the ray tracing program is a .bmp file. My OBJ file is as follows; the background shape consists of two triangles, and texture mapping is applied only to the background shape:
v -24.1456 -11.1684 -26.2413
v 24.1455 -11.1684 -26.2413
v -24.1456 37.1227 -26.2413
v 24.1455 37.1227 -26.2413
# 4 vertices
vn 0.0000 0.0000 1.0000
vn 0.0000 0.0000 1.0000
vn 0.0000 0.0000 1.0000
vn 0.0000 0.0000 1.0000
vn 0.0000 0.0000 1.0000
vn 0.0000 0.0000 1.0000
# 6 vertex normals
vt 0.9995 0.0005 0.0000
vt 0.0005 0.0005 0.0000
vt 0.9995 0.9995 0.0000
vt 0.0005 0.9995 0.0000
# 4 texture coords
o back
g back
usemtl default
s 1
f 1/1/1 2/2/2 4/4/3
f 4/4/4 3/3/5 1/1/6
# 2 faces
Here is the interpolated UV draw call, and here is the indexing code (indices start from zero):
...
Calculus::CPoint3<unsigned short> p, t, n;
sscanf_s(token, "%hu/%hu/%hu %hu/%hu/%hu %hu/%hu/%hu",
&p.x, &t.x, &n.x, &p.y, &t.y, &n.y, &p.z, &t.z, &n.z);
pi.push_back(p);
ti.push_back(t);
ni.push_back(n);
…
index = ti[i].x - 1;
temp_t[0] = vt[index]; // first uv
index = ti[i].y - 1;
temp_t[1] = vt[index]; // second uv
index = ti[i].z - 1;
temp_t[2] = vt[index]; // third uv
I wonder where I'm making a mistake. Thank you.
isect->uv = v_uvs[0] * b1 + v_uvs[1] * b2;
This is not the correct parametric interpolation of vertex attributes:
The parameters b1 and b2 are being applied to the wrong vertices.
You are not taking the third vertex, v_uvs[2], into account.
Correct version:
isect->uv = v_uvs[0] * b0 + v_uvs[1] * b1 + v_uvs[2] * b2;
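More generally, every per-vertex attribute should be blended with all three barycentric weights, exactly as the normals already are in Intersect(). A minimal sketch (Interpolate is a hypothetical helper, not part of the Calculus library):

// Blend a per-vertex attribute with the full barycentric weights;
// b0 = 1 - b1 - b2 is the weight of the first vertex.
template <typename T>
T Interpolate(const T& a0, const T& a1, const T& a2, float b1, float b2) {
    float b0 = 1.0f - b1 - b2;
    return a0 * b0 + a1 * b1 + a2 * b2;
}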

What's different with my lookAt and perspective calls vs. gluPerspective and gluLookAt (cube stretched)

There seems to be something different with my implementation of both matrices [projection and model-view] (other than the parts I've commented out for debugging purposes). Below is a screenshot of the bug I see when drawing a cube. Keep in mind that I do keep the viewport and matrices up to date with the window size, and I calculate the screen ratio with float and not int; I've checked the usual suspects.
Screen Shot
Files (linux build, see readme in ./build)
Side note: while debugging, I've changed the cube's distance. To reproduce the screenshot, set mDistance to about 90 on line 76 of workspace.cpp and stretch the window frame to the dimensions noted at the lower right corner of the window. Please keep in mind the screenshot and the debug text output are separate events, as I'm constantly debugging this problem and getting new numbers.
The code:
#define _AP_MAA 0
#define _AP_MAB 1
#define _AP_MAC 2
#define _AP_MAD 3
#define _AP_MBA 4
#define _AP_MBB 5
#define _AP_MBC 6
#define _AP_MBD 7
#define _AP_MCA 8
#define _AP_MCB 9
#define _AP_MCC 10
#define _AP_MCD 11
#define _AP_MDA 12
#define _AP_MDB 13
#define _AP_MDC 14
#define _AP_MDD 15
Setting up the camera perspective:
void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
{
    GMFloat_t difZ = near - far;
    GMFloat_t *data;

    mProjection->clear(); // set to identity matrix
    data = mProjection->getData();

    GMFloat_t v = 1.0f / tan(fov / 2.0f);
    data[_AP_MAA] = v / aspect;
    data[_AP_MBB] = v;
    data[_AP_MCC] = (far + near) / (difZ);
    data[_AP_MCD] = -1.0f;
    data[_AP_MDD] = 0.0f;
    data[_AP_MDC] = (2.0f * far * near) / (difZ);

    mRatio = aspect;
    mInvProjOutdated = true;
    mIsPerspective = true;
}
Setting up the camera direction:
bool APCamera::lookTo(Coordinate &to, Coordinate &from, Coordinate &up)
{
    Coordinate f, unitUp, right;
    GMFloat_t *data;

    CoordinateOp::diff(&to, &from, &f);
    VectorOp::toUnit(&f, &f);
    VectorOp::toUnit(&up, &unitUp);
    VectorOp::cross(&f, &unitUp, &right);
    if((fabs(right.x) < FLOAT_THRESHOLD) && (fabs(right.y) < FLOAT_THRESHOLD) && (fabs(right.z) < FLOAT_THRESHOLD))
    {
        return false;
    }

    mCamPt = from;
    VectorOp::toUnit(&right, &mRight);
    mForward = f;
    VectorOp::cross(&mRight, &mForward, &mUp);

    mModelView->clear();
    data = mModelView->getData();
    data[_AP_MAA] = mRight.x;
    data[_AP_MBA] = mRight.y;
    data[_AP_MCA] = mRight.z;
    data[_AP_MAB] = mUp.x;
    data[_AP_MBB] = mUp.y;
    data[_AP_MCB] = mUp.z;
    data[_AP_MAC] = -mForward.x;
    data[_AP_MBC] = -mForward.y;
    data[_AP_MCC] = -mForward.z;
    //translation part is commented out to narrow bugs down; "camera" is kept at the center (0,0,0)
    //data[_AP_MDA] = (data[_AP_MAA] * -mCamPt.x) + (data[_AP_MBA] * -mCamPt.y) + (data[_AP_MCA] * -mCamPt.z);
    //data[_AP_MDB] = (data[_AP_MAB] * -mCamPt.x) + (data[_AP_MBB] * -mCamPt.y) + (data[_AP_MCB] * -mCamPt.z);
    //data[_AP_MDC] = (data[_AP_MAC] * -mCamPt.x) + (data[_AP_MBC] * -mCamPt.y) + (data[_AP_MCC] * -mCamPt.z);

    mInvViewOutdated = true;
    return true;
}
The debug output:
LookTo() From:<0,0,0> To:<-1,0,0>:
0.000000 0.000000 -1.000000 0.000000
0.000000 1.000000 0.000000 0.000000
1.000000 -0.000000 -0.000000 0.000000
0.000000 0.000000 0.000000 1.000000
setPerspective() fov:0.785398 ratio:1.185185 near:0.500000 far:100.000000:
2.036993 0.000000 0.000000 0.000000
0.000000 2.414213 0.000000 0.000000
0.000000 0.000000 -1.010050 -1.005025
0.000000 0.000000 -1.000000 0.000000
In the end, it looks like the troublemaker was just the FOV. So the quick answer is: no, I didn't do anything different from the documented perspective and lookAt functions. For anyone having a similar problem, 2.0f * atan(tan(DEFAULT_FOV_RAD / mRatio) * mRatio) did the job for me.
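A minimal sketch of how that correction might be applied (DEFAULT_FOV_RAD, windowWidth, windowHeight, and the camera call site are assumptions for illustration, not code from the project):

// Hypothetical call site: widen the vertical FOV to compensate for the
// aspect ratio before building the projection matrix.
const GMFloat_t DEFAULT_FOV_RAD = 0.785398f; // 45 degrees
GMFloat_t ratio = (GMFloat_t)windowWidth / (GMFloat_t)windowHeight;
GMFloat_t fov = 2.0f * atan(tan(DEFAULT_FOV_RAD / ratio) * ratio);
camera.setPerspective(fov, ratio, 0.5f, 100.0f); // near/far as in the debug output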

Bending a wire to form circle and ellipse

I am given N points on a straight line, say (x1,y1), (x2,y2), ..., (xn,yn); these points represent a wire. I want to bend this wire into the shape of a circle or an ellipse, so these points will map to points on the circle or ellipse. Can you suggest a mapping technique that maps points on a straight line onto points on a circle or an ellipse?
Reduce the line points to scalar parametric coordinates 0 <= t <= 1.
Multiply the t coordinates by 2*pi (giving theta) and plug them into the parametric circle equation:
x = cos( theta )
y = sin( theta )
For an ellipse with semi-axes a and b, use x = a * cos( theta ) and y = b * sin( theta ) instead.
Example:
Given 4 points (0,0), (1,1), (5,5), and (10,10) convert to parametric coordinates like so:
length = | (10,10) - (0,0) | = sqrt( 10^2 + 10^2 ) = sqrt( 200 )
t0 = 0.0 = | (0,0) - (0,0) | / length = 0
t1 = 0.1 = | (1,1) - (0,0) | / length = sqrt( 2 ) / length
t2 = 0.5 = | (5,5) - (0,0) | / length = sqrt( 50 ) / length
t3 = 1.0 = | (10,10) - (0,0) | / length = sqrt( 200 ) / length
p0.x = cos( t0 * 2 * pi ) = 1
p0.y = sin( t0 * 2 * pi ) = 0
p1.x = cos( t1 * 2 * pi ) = 0.80901699437
p1.y = sin( t1 * 2 * pi ) = 0.58778525229
...
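A minimal C++ sketch of this mapping (the point list and the semi-axes a, b are illustrative; a == b gives a circle):

#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Map points on a straight segment onto an ellipse with semi-axes a, b.
// With a == b == r the wire bends into a circle of radius r.
std::vector<Pt> bendToEllipse(const std::vector<Pt>& line, double a, double b) {
    const double PI = std::acos(-1.0);
    std::vector<Pt> out;
    const Pt& p0 = line.front();
    const Pt& pn = line.back();
    double length = std::hypot(pn.x - p0.x, pn.y - p0.y);
    for (const Pt& p : line) {
        double t = std::hypot(p.x - p0.x, p.y - p0.y) / length; // 0 <= t <= 1
        double theta = t * 2.0 * PI;
        out.push_back({ a * std::cos(theta), b * std::sin(theta) });
    }
    return out;
}

int main() {
    std::vector<Pt> wire = { {0, 0}, {1, 1}, {5, 5}, {10, 10} }; // the example above
    for (const Pt& p : bendToEllipse(wire, 1.0, 1.0))
        std::printf("(%f, %f)\n", p.x, p.y);
    return 0;
}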

2d rotation opengl

Here is the code I am using.
#define ANGLETORADIANS 0.017453292519943295769236907684886f // PI / 180
#define RADIANSTOANGLE 57.295779513082320876798154814105f // 180 / PI

rotation = rotation * ANGLETORADIANS;
cosRotation = cos(rotation);
sinRotation = sin(rotation);

for(int i = 0; i < 3; i++)
{
    px[i] = (vec[i].x + centerX) * (cosRotation - (vec[i].y + centerY)) * sinRotation;
    py[i] = (vec[i].x + centerX) * (sinRotation + (vec[i].y + centerY)) * cosRotation;
    printf("num: %i, px: %f, py: %f\n", i, px[i], py[i]);
}
So far it seems my Y value is being flipped: say I enter X = 1 and Y = 1 with a 45-degree rotation; you should see about x = 0 and y = 1.25 ish, but I get x = 0, y = -1.25. Also, my 90-degree rotation always returns x = 0 and y = 0.
P.S. I know I'm only centering my values and not putting them back where they came from. Putting them back is not needed, as all I need to know is the value I'm getting now.
Your bracket placement doesn't look right to me. I would expect:
px[i] = (vec[i].x + centerX) * cosRotation - (vec[i].y + centerY) * sinRotation;
py[i] = (vec[i].x + centerX) * sinRotation + (vec[i].y + centerY) * cosRotation;
Your brackets are wrong. It should be
px[i] = ((vec[i].x + centerX) * cosRotation) - ((vec[i].y + centerY) * sinRotation);
py[i] = ((vec[i].x + centerX) * sinRotation) + ((vec[i].y + centerY) * cosRotation);
instead.
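For completeness, a minimal self-contained sketch of the corrected rotation (centerX and centerY are taken as 0 here, matching the poster's note about not translating back):

#include <cmath>
#include <cstdio>

int main() {
    const float DEG_TO_RAD = 0.017453292519943295f; // PI / 180
    float x = 1.0f, y = 1.0f;
    float rotation = 45.0f * DEG_TO_RAD;
    float c = std::cos(rotation);
    float s = std::sin(rotation);
    // Standard 2D rotation: each product binds before the +/-.
    float px = x * c - y * s; // ~0.0
    float py = x * s + y * c; // ~1.414
    std::printf("px: %f, py: %f\n", px, py);
    return 0;
}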