I have recently been trying to calculate a 3D point from a mouse position. So far I have this:
const D3DXMATRIX* pmatProj = g_Camera.GetProjMatrix();
POINT ptCursor;
GetCursorPos( &ptCursor );
ScreenToClient( DXUTGetHWND(), &ptCursor );
// Compute the vector of the pick ray in screen space
D3DXVECTOR3 v;
v.x = ( ( ( 2.0f * ptCursor.x ) / pd3dsdBackBuffer->Width ) - 1 ) / pmatProj->_11;
v.y = -( ( ( 2.0f * ptCursor.y ) / pd3dsdBackBuffer->Height ) - 1 ) / pmatProj->_22;
v.z = 1.0f;
// Get the inverse view matrix
const D3DXMATRIX matView = *g_Camera.GetViewMatrix();
const D3DXMATRIX matWorld = *g_Camera.GetWorldMatrix();
D3DXMATRIX mWorldView = matWorld * matView;
D3DXMATRIX m;
D3DXMatrixInverse( &m, NULL, &mWorldView );
// Transform the screen space pick ray into 3D space
D3DXVECTOR3 vPickRayDir, vPickRayOrig;
vPickRayDir.x = v.x * m._11 + v.y * m._21 + v.z * m._31;
vPickRayDir.y = v.x * m._12 + v.y * m._22 + v.z * m._32;
vPickRayDir.z = v.x * m._13 + v.y * m._23 + v.z * m._33;
vPickRayOrig.x = m._41;
vPickRayOrig.y = m._42;
vPickRayOrig.z = m._43;
However, as my mathematical skills are lacklustre, I am unsure how to utilise the direction and origin to produce a position. What calculations/formulas do I need to perform to produce the desired results?
It's just like a * x + b, except you do it three times, once per component.
For any distance d (positive or negative) from vPickRayOrig:
newPos.x = d * vPickRayDir.x + vPickRayOrig.x;
newPos.y = d * vPickRayDir.y + vPickRayOrig.y;
newPos.z = d * vPickRayDir.z + vPickRayOrig.z;
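For example, a common next step is to intersect the pick ray with a horizontal plane to get a concrete world position. Here is a minimal sketch using the D3DX types from the question; the plane height and the near-parallel check are my own additions, and if you want d to be an actual distance in world units, normalize vPickRayDir first (e.g. with D3DXVec3Normalize):
// Sketch: intersect the pick ray with the plane y = planeY
const float planeY = 0.0f;
D3DXVECTOR3 hitPos;
if( fabsf( vPickRayDir.y ) > 1e-6f )                   // ray is not parallel to the plane
{
    float d = ( planeY - vPickRayOrig.y ) / vPickRayDir.y;
    if( d >= 0.0f )                                     // intersection lies in front of the origin
    {
        hitPos.x = vPickRayOrig.x + d * vPickRayDir.x;
        hitPos.y = vPickRayOrig.y + d * vPickRayDir.y;  // == planeY
        hitPos.z = vPickRayOrig.z + d * vPickRayDir.z;
    }
}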
So I've been learning about quaternions recently and decided to make my own implementation. I tried to keep it simple, but I still can't pinpoint my error. Rotation about the x, y or z axis works fine on its own, and combined y/z rotation works as well, but the second I add the x axis to any of the others I get strange stretching output. I'll attach the relevant rotation code below. (Be warned, I'm quite new to C++.)
Here is how I describe a quaternion (as I understand it, since these are unit quaternions an explicit imaginary-number type isn't required):
struct Quaternion {
float w, x, y, z;
};
The multiplication rules of quaternions:
Quaternion operator* (Quaternion n, Quaternion p) {
Quaternion o;
// implements quaternion multiplication rules:
o.w = n.w * p.w - n.x * p.x - n.y * p.y - n.z * p.z;
o.x = n.w * p.x + n.x * p.w + n.y * p.z - n.z * p.y;
o.y = n.w * p.y - n.x * p.z + n.y * p.w + n.z * p.x;
o.z = n.w * p.z + n.x * p.y - n.y * p.x + n.z * p.w;
return o;
}
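As a quick sanity check (this snippet is mine, not part of the original post), that operator reproduces the basic identity i · j = k:
Quaternion i{ 0, 1, 0, 0 };   // pure i
Quaternion j{ 0, 0, 1, 0 };   // pure j
Quaternion k = i * j;         // expected: w = 0, x = 0, y = 0, z = 1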
Generating the rotation quaternion to multiply the total rotation by:
Quaternion rotate(float w, float x, float y, float z) {
Quaternion n;
n.w = cosf(w/2);
n.x = x * sinf(w/2);
n.y = y * sinf(w/2);
n.z = z * sinf(w/2);
return n;
}
And finally, the matrix calculations which turn the quaternion into an x/y/z position:
inline vector<float> quaternion_matrix(Quaternion total, vector<float> vec) {
float x = vec[0], y = vec[1], z = vec[2];
// implementation of 3x3 quaternion rotation matrix:
vec[0] = (1 - 2 * pow(total.y, 2) - 2 * pow(total.z, 2))*x + (2 * total.x * total.y - 2 * total.w * total.z)*y + (2 * total.x * total.z + 2 * total.w * total.y)*z;
vec[1] = (2 * total.x * total.y + 2 * total.w * total.z)*x + (1 - 2 * pow(total.x, 2) - 2 * pow(total.z, 2))*y + (2 * total.y * total.z + 2 * total.w * total.x)*z;
vec[2] = (2 * total.x * total.z - 2 * total.w * total.y)*x + (2 * total.y * total.z - 2 * total.w * total.x)*y + (1 - 2 * pow(total.x, 2) - 2 * pow(total.y, 2))*z;
return vec;
}
That's pretty much it (I also have a normalize function to deal with floating-point errors). I initialize every object's quaternion to w = 1, x = 0, y = 0, z = 0, and I rotate a quaternion using an expression like this:
obj.rotation = rotate(angle, x-axis, y-axis, z-axis) * obj.rotation
where obj.rotation is the object's accumulated rotation quaternion.
I appreciate any help with this issue, whether you know what's wrong or have run into it before. Thanks!
EDIT: multiplying the total by these quaternions produces the expected rotation:
rotate(angle,1,0,0)
rotate(angle,0,1,0)
rotate(angle,0,0,1)
rotate(angle,0,1,1)
However, any rotations such as these make the model stretch oddly:
rotate(angle,1,1,0)
rotate(angle,1,0,1)
EDIT2: here is the normalize function I use to normalize the quaternions:
Quaternion normalize(Quaternion n, double tolerance) {
// adds all squares of quaternion values, if normalized, total will be 1:
double total = pow(n.w, 2) + pow(n.x, 2) + pow(n.y, 2) + pow(n.z, 2);
if (total > 1 + tolerance || total < 1 - tolerance) {
// normalizes value of quaternion if it exceeds a certain tolerance value:
n.w /= (float) sqrt(total);
n.x /= (float) sqrt(total);
n.y /= (float) sqrt(total);
n.z /= (float) sqrt(total);
}
return n;
}
To implement two rotations in sequence you need the quaternion product of the two elementary rotations. Each elementary rotation is specified by an axis and an angle. But in your code you did not make sure you have a unit vector (direction vector) for the axis.
Make the following modification:
Quaternion rotate(float w, float x, float y, float z) {
Quaternion n;
float f = 1/sqrtf(x*x+y*y+z*z); // normalize the axis first
n.w = cosf(w/2);
n.x = f * x * sinf(w/2);
n.y = f * y * sinf(w/2);
n.z = f * z * sinf(w/2);
return n;
}
and then use it as follows
Quaternion n = rotate(angle1,1,0,0) * rotate(angle2,0,1,0);
for the combined rotation of angle1 about the x-axis, and angle2 about the y-axis.
As pointed out in the comments, you are not constructing your rotation quaternions correctly.
The following quaternions are not rotations:
rotate(angle,0,1,1)
rotate(angle,1,1,0)
rotate(angle,1,0,1)
The reason is that the rotation axis is not normalized; e.g., the vector (0, 1, 1) does not have unit length. Also make sure your angles are in radians.
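For instance, using the fixed rotate() above (or by normalizing the axis by hand), a combined x/y rotation could look roughly like this; the tolerance value is just an example:
// axis (1, 1, 0) normalized manually; angle in radians
float len = sqrtf(1.0f*1.0f + 1.0f*1.0f);
obj.rotation = rotate(angle, 1.0f/len, 1.0f/len, 0.0f) * obj.rotation;
obj.rotation = normalize(obj.rotation, 1e-6);   // keep it a unit quaternion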
I use glm::decompose (https://glm.g-truc.net/0.9.6/api/a00204.html) in a way similar to the following:
glm::mat4 matrix;
// ...
glm::vec3 scale;
glm::quat rotation;
glm::vec3 translation;
glm::vec3 skew;
glm::vec4 perspective;
glm::decompose(matrix, scale, rotation, translation, skew, perspective);
Now I would like to compose the matrix back again using all of the above properties. This is simple if all I have in the matrix is scale, rotation and translation (glm::scale, glm::rotate, glm::translate), but what interests me most is the "skew" property. How can I apply all of the transformations to a new matrix so that after the computation I get the original "matrix" back?
As mentioned in the comments, the answer is in the source code: the recompose function at the end of the file linked there.
Here is that recompose function:
void TransformationMatrix::recompose(const DecomposedType& decomp)
{
makeIdentity();
// first apply perspective
m_matrix[0][3] = (float) decomp.perspectiveX;
m_matrix[1][3] = (float) decomp.perspectiveY;
m_matrix[2][3] = (float) decomp.perspectiveZ;
m_matrix[3][3] = (float) decomp.perspectiveW;
// now translate
translate3d((float) decomp.translateX,
(float) decomp.translateY,
(float) decomp.translateZ);
// apply rotation
double xx = decomp.quaternionX * decomp.quaternionX;
double xy = decomp.quaternionX * decomp.quaternionY;
double xz = decomp.quaternionX * decomp.quaternionZ;
double xw = decomp.quaternionX * decomp.quaternionW;
double yy = decomp.quaternionY * decomp.quaternionY;
double yz = decomp.quaternionY * decomp.quaternionZ;
double yw = decomp.quaternionY * decomp.quaternionW;
double zz = decomp.quaternionZ * decomp.quaternionZ;
double zw = decomp.quaternionZ * decomp.quaternionW;
// Construct a composite rotation matrix from the quaternion values
TransformationMatrix rotationMatrix(
1 - 2 * (yy + zz), 2 * (xy - zw) , 2 * (xz + yw) , 0,
2 * (xy + zw) , 1 - 2 * (xx + zz), 2 * (yz - xw) , 0,
2 * (xz - yw) , 2 * (yz + xw) , 1 - 2 * (xx + yy), 0,
0 , 0 , 0 , 1);
multLeft(rotationMatrix);
//////////////////////////////////////////
// THIS IS THE PART YOU ARE INTERESTED IN //
//////////////////////////////////////////
// now apply skew
if (decomp.skewYZ) {
TransformationMatrix tmp;
tmp.setM32((float) decomp.skewYZ);
multLeft(tmp);
}
if (decomp.skewXZ) {
TransformationMatrix tmp;
tmp.setM31((float) decomp.skewXZ);
multLeft(tmp);
}
if (decomp.skewXY) {
TransformationMatrix tmp;
tmp.setM21((float) decomp.skewXY);
multLeft(tmp);
}
// finally, apply scale
scale3d((float) decomp.scaleX,
(float) decomp.scaleY,
(float) decomp.scaleZ);
}
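Translated to glm, a recomposition along the same lines could look roughly like the sketch below. This is not an official glm function; it is based on reading glm's matrix_decompose.inl and assumes the skew vector stores the (YZ, XZ, XY) shear factors in x, y, z respectively, glm's column-major m[col][row] indexing, and that the perspective components came from the bottom row of the original matrix:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 recompose(const glm::vec3& scale, const glm::quat& rotation,
                    const glm::vec3& translation, const glm::vec3& skew,
                    const glm::vec4& perspective)
{
    glm::mat4 m(1.0f);
    // first apply perspective (the bottom row, where decompose read it from)
    m[0][3] = perspective.x;
    m[1][3] = perspective.y;
    m[2][3] = perspective.z;
    m[3][3] = perspective.w;
    // now translate
    m = glm::translate(m, translation);
    // apply rotation
    m *= glm::mat4_cast(rotation);
    // apply skew: an upper-triangular shear matrix, mirroring the
    // setM21 / setM31 / setM32 calls in the code above
    glm::mat4 shear(1.0f);
    shear[1][0] = skew.z; // XY
    shear[2][0] = skew.y; // XZ
    shear[2][1] = skew.x; // YZ
    m *= shear;
    // finally, apply scale
    m = glm::scale(m, scale);
    return m;
}
If that mapping is right, recompose(scale, rotation, translation, skew, perspective) should reproduce the matrix you passed to glm::decompose up to floating-point error.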
I'm building a renderer using rasterization and depth-buffering on the CPU, and now I've added normal maps. Everything works, as you can see in the attached image.
The issue is that, even though it works, I don't understand WHY! The implementation is the opposite of what I expected. This is the code to get the normal at each fragment:
const Vector3D TexturedMaterial::getNormal(const Triangle3D& triangle_world, const Vector2D& text_coords) const {
Vector3D tangent, bitangent;
calculateTangentSpace(tangent, bitangent, triangle_world);
// Gets the normal from a RGB Texture [0,1] and maps it to [-1, 1]
const Vector3D normal_tangent = (Vector3D) getTextureColor(m_texture_normal, m_texture_normal_width, m_texture_normal_height, text_coords);
const Vector3D normal_world = TangentToWorld(normal_tangent, tangent, bitangent, normal_tangent);
return normal_world;
}
void TexturedMaterial::calculateTangentSpace(Vector3D& tangent, Vector3D& bitangent, const Triangle3D& triangle_world) const {
const Vector3D q1 = triangle_world.v2.position - triangle_world.v1.position;
const Vector3D q2 = triangle_world.v3.position - triangle_world.v2.position;
const double s1 = triangle_world.v2.texture_coords.x - triangle_world.v1.texture_coords.x;
const double s2 = triangle_world.v3.texture_coords.x - triangle_world.v2.texture_coords.x;
const double t1 = triangle_world.v2.texture_coords.y - triangle_world.v1.texture_coords.y;
const double t2 = triangle_world.v3.texture_coords.y - triangle_world.v2.texture_coords.y;
tangent = t2 * q1 - t1 * q2;
bitangent = -s2 * q1 + s1 * q2;
tangent.normalize();
bitangent.normalize();
}
My confusion is here:
const Vector3D TexturedMaterial::TangentToWorld(const Vector3D& v, const Vector3D& tangent, const Vector3D& bitangent, const Vector3D& normal) const {
const int handness = -1; // left-handed coordinate system
// Vworld = Vtangent * TBN
Vector3D v_world = {
v.x * tangent.x + v.y * bitangent.x + v.z * normal.x,
v.x * tangent.y + v.y * bitangent.y + v.z * normal.y,
v.x * tangent.z + v.y * bitangent.z + v.z * normal.z,
};
// Vworld = Vtangent * TBN(-1) = V * TBN(T)
Vector3D v_world2 = {
v.x * tangent.x + v.y * tangent.y + v.z * tangent.z,
v.x * bitangent.x + v.y * bitangent.y + v.z * bitangent.z,
v.x * normal.x + v.y * normal.y + v.z * normal.z,
};
v_world2.normalize();
// return handness * v_world; --> DOES NOT WORK
return handness * v_world2; // --> WORKS
}
Assuming that I'm working with row vectors:
V = (Vx, Vy, Vz)
[Tx Ty Tz]
TBN = [Bx By Bz]
[Nx Ny Nz]
[Tx Bx Nx]
TBN(-1) = [Ty By Ny] // Assume basis are orthogonal TBN(-1) = TBN(T)
[Tz Bz Nz]
Then, if T, B and N are the basis vectors of the TBN expressed in the world coordinate system the transformations should be:
Vworld = Vtangent * TBN
Vtangent = Vworld * TBN(-1)
But, in my code I am doing exactly the opposite. To transform the normal in tangent space to world space I am multiplying by the inverse of the TBN.
What I am missing or misunderstanding? Is the assumption that T, B and N are expressed in the world coordinate system wrong?
Thank you!
Your reasoning is correct - the second version is wrong. A more intuitive way to see that is to analyze what happens when the tangent-space normal is (0, 0, 1): in that case you obviously want to end up with the surface normal, and plugging (0, 0, 1) into the first version yields exactly (normal.x, normal.y, normal.z).
However, you are feeding a wrong parameter:
const Vector3D normal_world = TangentToWorld(normal_tangent,
tangent, bitangent, normal_tangent);
The last parameter needs to be the triangle normal, not the normal you fetch from the texture.
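A hedged sketch of the fix: compute the geometric normal from the same triangle edges already used for the tangent basis and pass that as the last argument. The cross() helper is an assumption about your Vector3D type, not code from the question:
// Inside getNormal(), after calculateTangentSpace(...):
const Vector3D q1 = triangle_world.v2.position - triangle_world.v1.position;
const Vector3D q2 = triangle_world.v3.position - triangle_world.v2.position;
Vector3D normal_geom = cross(q1, q2);   // assumed helper; winding order controls the facing direction
normal_geom.normalize();
const Vector3D normal_world = TangentToWorld(normal_tangent, tangent, bitangent, normal_geom);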
I have a line in a vertex shader
gl_Position = gl_ModelViewProjectionMatrix * vertex;
I need to do the same computation without a shader, like:
float vertex[4];
float modelviewProjection[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelviewProjection);
glMatrixMode(GL_PROJECTION_MATRIX);
glMultMatrixf(modelviewProjection);
for ( counter = 0; counter < numPoints; counter++ )
{
vertex[0] = *vertexPointer + randomAdvance(timeAlive) + sin(ParticleTime);
vertex[1] = *( vertexPointer + 1 ) + randomAdvance(timeAlive) + timeAlive * 0.6f;
vertex[2] = *( vertexPointer + 2 );
glPushMatrix();
glMultMatrixf(vertex);
*vertexPointer = vertex[0];
*( vertexPointer + 1 ) = vertex[1];
*( vertexPointer + 2 ) = vertex[2];
vertexPointer += 3;
glPopMatrix();
}
If you have no suitable vector/matrix library, look into GLM (it can do that kind of thing without any fuss).
If you want to do it manually, the components of the transformed vector are the dot products of the respective rows in the matrix and the untransformed vector. That is because a vector can be seen as a matrix with one column (then just apply the rules of matrix multiplication).
Thus, assuming OpenGL memory layout, that would be:
x' = x*m[0] + y*m[4] + z*m[8] + w*m[12], y' = x*m[1] + y*m[5] + z*m[9] + w*m[13], etc.
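Putting it together, here is a minimal CPU-side sketch (column-major layout, fixed-function matrix stack assumed; the helper names are mine):
// out = m * in, with m a 4x4 column-major matrix and in/out homogeneous points
void transformPoint(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = in[0] * m[0*4 + row] + in[1] * m[1*4 + row]
                 + in[2] * m[2*4 + row] + in[3] * m[3*4 + row];
}
// c = a * b, all 4x4 column-major
void multiplyMatrices(const float a[16], const float b[16], float c[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
        {
            c[col*4 + row] = 0.0f;
            for (int k = 0; k < 4; ++k)
                c[col*4 + row] += a[k*4 + row] * b[col*4 + k];
        }
}
// usage (inside your drawing code):
float modelview[16], projection[16], mvp[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);
multiplyMatrices(projection, modelview, mvp);   // same order as gl_ModelViewProjectionMatrix
float vertex[4] = { x, y, z, 1.0f };            // x, y, z are your vertex coordinates
float clipPos[4];
transformPoint(mvp, vertex, clipPos);           // this is what gl_Position receives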
I think I understand why calling glRotate(#, 0, 0, 0) results in a divide-by-zero. The rotation vector, a, is normalized: a' = a/|a| = a/0
Is that the only situation glRotate could result in a divide-by-zero? Yes, I know glRotate is deprecated. Yes, I know the matrix is on the OpenGL manual. No, I don't know linear algebra enough to confidently answer the question from the matrix. Yes, I think it would help. Yes, I asked this already in #opengl (can you tell?). And no, I didn't get an answer.
I would say yes. And I would say that you are right about the normalization step as well. The matrix shown in the OpenGL manual only consists of multiplications, and multiplying by a vector gives the same. Of course, it would do strange things if you pass in a vector of (0, 0, 0). OpenGL states in the same manual that |x, y, z| = 1 (or OpenGL will normalize it).
So IF it didn't normalize, you would end up with a very empty matrix:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 1
Which will implode your object in the strangest ways. So DON'T call this function with a zero vector. If you really want to, tell me why.
I also recommend using a library like GLM for your matrix calculations if it gets too complicated for a few simple glRotates.
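For example, the GLM equivalent of a glRotate call is roughly the following (angle in radians; note that normalizing a zero-length axis would still produce NaNs, so the zero-vector case remains yours to avoid):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 axis(0.0f, 1.0f, 0.0f);
glm::mat4 rot = glm::rotate(glm::mat4(1.0f), angleRadians, glm::normalize(axis));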
Why should it divide by zero when the implementation can simply check for that? This is how Mesa's _math_matrix_rotate handles it:
/**
* Generate a 4x4 transformation matrix from glRotate parameters, and
* post-multiply the input matrix by it.
*
* \author
* This function was contributed by Erich Boleyn (erich@uruk.org).
* Optimizations contributed by Rudolf Opalla (rudi@khm.de).
*/
void
_math_matrix_rotate( GLmatrix *mat,
GLfloat angle, GLfloat x, GLfloat y, GLfloat z )
{
GLfloat xx, yy, zz, xy, yz, zx, xs, ys, zs, one_c, s, c;
GLfloat m[16];
GLboolean optimized;
s = (GLfloat) sin( angle * DEG2RAD );
c = (GLfloat) cos( angle * DEG2RAD );
memcpy(m, Identity, sizeof(GLfloat)*16);
optimized = GL_FALSE;
#define M(row,col) m[col*4+row]
if (x == 0.0F) {
if (y == 0.0F) {
if (z != 0.0F) {
optimized = GL_TRUE;
/* rotate only around z-axis */
M(0,0) = c;
M(1,1) = c;
if (z < 0.0F) {
M(0,1) = s;
M(1,0) = -s;
}
else {
M(0,1) = -s;
M(1,0) = s;
}
}
}
else if (z == 0.0F) {
optimized = GL_TRUE;
/* rotate only around y-axis */
M(0,0) = c;
M(2,2) = c;
if (y < 0.0F) {
M(0,2) = -s;
M(2,0) = s;
}
else {
M(0,2) = s;
M(2,0) = -s;
}
}
}
else if (y == 0.0F) {
if (z == 0.0F) {
optimized = GL_TRUE;
/* rotate only around x-axis */
M(1,1) = c;
M(2,2) = c;
if (x < 0.0F) {
M(1,2) = s;
M(2,1) = -s;
}
else {
M(1,2) = -s;
M(2,1) = s;
}
}
}
if (!optimized) {
const GLfloat mag = SQRTF(x * x + y * y + z * z);
if (mag <= 1.0e-4) {
/* no rotation, leave mat as-is */
return;
}
x /= mag;
y /= mag;
z /= mag;
/*
* Arbitrary axis rotation matrix.
*
* This is composed of 5 matrices, Rz, Ry, T, Ry', Rz', multiplied
* like so: Rz * Ry * T * Ry' * Rz'. T is the final rotation
* (which is about the X-axis), and the two composite transforms
* Ry' * Rz' and Rz * Ry are (respectively) the rotations necessary
* from the arbitrary axis to the X-axis then back. They are
* all elementary rotations.
*
* Rz' is a rotation about the Z-axis, to bring the axis vector
* into the x-z plane. Then Ry' is applied, rotating about the
* Y-axis to bring the axis vector parallel with the X-axis. The
* rotation about the X-axis is then performed. Ry and Rz are
* simply the respective inverse transforms to bring the arbitrary
* axis back to its original orientation. The first transforms
* Rz' and Ry' are considered inverses, since the data from the
* arbitrary axis gives you info on how to get to it, not how
* to get away from it, and an inverse must be applied.
*
* The basic calculation used is to recognize that the arbitrary
* axis vector (x, y, z), since it is of unit length, actually
* represents the sines and cosines of the angles to rotate the
* X-axis to the same orientation, with theta being the angle about
* Z and phi the angle about Y (in the order described above)
* as follows:
*
* cos ( theta ) = x / sqrt ( 1 - z^2 )
* sin ( theta ) = y / sqrt ( 1 - z^2 )
*
* cos ( phi ) = sqrt ( 1 - z^2 )
* sin ( phi ) = z
*
* Note that cos ( phi ) can further be inserted to the above
* formulas:
*
* cos ( theta ) = x / cos ( phi )
* sin ( theta ) = y / cos ( phi )
*
* ...etc. Because of those relations and the standard trigonometric
* relations, it is possible to reduce the transforms down to what
* is used below. It may be that any primary axis chosen will give the
* same results (modulo a sign convention) using this method.
*
* Particularly nice is to notice that all divisions that might
* have caused trouble when parallel to certain planes or
* axis go away with care paid to reducing the expressions.
* After checking, it does perform correctly under all cases, since
* in all the cases of division where the denominator would have
* been zero, the numerator would have been zero as well, giving
* the expected result.
*/
xx = x * x;
yy = y * y;
zz = z * z;
xy = x * y;
yz = y * z;
zx = z * x;
xs = x * s;
ys = y * s;
zs = z * s;
one_c = 1.0F - c;
/* We already hold the identity-matrix so we can skip some statements */
M(0,0) = (one_c * xx) + c;
M(0,1) = (one_c * xy) - zs;
M(0,2) = (one_c * zx) + ys;
/* M(0,3) = 0.0F; */
M(1,0) = (one_c * xy) + zs;
M(1,1) = (one_c * yy) + c;
M(1,2) = (one_c * yz) - xs;
/* M(1,3) = 0.0F; */
M(2,0) = (one_c * zx) - ys;
M(2,1) = (one_c * yz) + xs;
M(2,2) = (one_c * zz) + c;
/* M(2,3) = 0.0F; */
/*
M(3,0) = 0.0F;
M(3,1) = 0.0F;
M(3,2) = 0.0F;
M(3,3) = 1.0F;
*/
}
#undef M
matrix_multf( mat, m, MAT_FLAG_ROTATION );
}
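And if you have to keep using the deprecated API, the cheapest safeguard is to mirror Mesa's check in your own code before calling it:
if (x*x + y*y + z*z > 1.0e-8f)
    glRotatef(angle, x, y, z);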