Why is D3DXQuaternionToAxisAngle being called in the following code? - c++

I am attempting to convert some code that was originally using Direct3D over to glm/OpenGL, and have run into a block that does not make sense according to what I found in the documentation on Microsoft's website. The block in question is detailed in the comments below:
Gx::Quaternion Gx::Quaternion::rotationBetween(const Gx::Vec3 &a, const Gx::Vec3 &b)
{
    Quaternion q;

    Vec3 v0 = a.normalized();
    Vec3 v1 = b.normalized();

    float d = v0.dot(v1);

    if(d >= 1.0f)
    {
        return Quaternion{ 0, 0, 0, 0 };
    }

    if(d < (1e-6f - 1.0f))
    {
        Vec3 axis = Vec3(1, 0, 0).cross(a);
        if(axis.dot(axis) == 0)
        {
            axis = Vec3(0, 1, 0).cross(a);
        }

        axis = axis.normalized();
        float ang = static_cast<float>(M_PI);

        D3DXQuaternionToAxisAngle(&q, &axis, &ang);

        // This block does not appear to be doing anything:
        // according to Microsoft's documentation on D3DXQuaternionToAxisAngle,
        // the function "Computes a quaternion's axis and angle of rotation" and
        // does not modify the quaternion, which is passed as const.
        // Therefore I am confused as to why this block exists, as it does not
        // affect the returned quaternion, and the variables axis and ang are
        // scoped to this block and not taken into account anywhere else in this
        // function.
    }
    else
    {
        float s = std::sqrt((1 + d) * 2);
        float invs = 1 / s;

        Vec3 c = v0.cross(v1);

        q.x = c.x * invs;
        q.y = c.y * invs;
        q.z = c.z * invs;
        q.w = s * 0.5f;

        D3DXQuaternionNormalize(&q, &q);
    }

    return q;
}
Link to Microsoft's API documentation
Am I correct in my conclusion that this if block is superfluous? Or am I possibly missing something?

As you note, the code in the first if case is broken. They may have meant to use D3DXQuaternionRotationAxis, which has the same signature.
As a reminder, these are 'D3DXMath' functions which were part of the now-deprecated D3DX9/D3DX10 utility libraries. The modern solution is DirectXMath. There's a list of D3DXMath equivalents in DirectXMath here.
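For what it's worth, here is a hedged sketch of what that branch presumably intended, written against glm since that is the porting target. glm::angleAxis plays the role of D3DXQuaternionRotationAxis (build a quaternion from an axis and an angle); the identity return for already-aligned vectors and the function name are my own choices, not the original author's.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/constants.hpp>
#include <cmath>

glm::quat rotationBetween(const glm::vec3& a, const glm::vec3& b)
{
    glm::vec3 v0 = glm::normalize(a);
    glm::vec3 v1 = glm::normalize(b);
    float d = glm::dot(v0, v1);

    if (d >= 1.0f)
        return glm::quat(1.0f, 0.0f, 0.0f, 0.0f);      // already aligned: identity (w, x, y, z)

    if (d < 1e-6f - 1.0f)                              // (nearly) opposite directions
    {
        // Pick any axis perpendicular to v0 and rotate 180 degrees about it.
        glm::vec3 axis = glm::cross(glm::vec3(1, 0, 0), v0);
        if (glm::dot(axis, axis) < 1e-12f)             // v0 was parallel to the X axis
            axis = glm::cross(glm::vec3(0, 1, 0), v0);
        axis = glm::normalize(axis);
        return glm::angleAxis(glm::pi<float>(), axis); // ~ D3DXQuaternionRotationAxis
    }

    // General case, same construction as the original else-branch.
    float s = std::sqrt((1.0f + d) * 2.0f);
    glm::vec3 c = glm::cross(v0, v1);
    return glm::normalize(glm::quat(s * 0.5f, c / s));
}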

Related

Using The Dot Product to determine whether an object is on the left hand side or right hand side of the direction of the object

I am currently trying to create a method which, given a simulated vehicle's position and direction and an object's position, will determine whether the object lies on the right-hand or left-hand side of that vehicle's direction. This is what I have implemented so far (note I am in a 2D coordinate system):
This is the code block that uses the method
void Class::leftOrRight()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();

    for (int i = 0; i < _cones.size(); i++)
    {
        if (dotAngleFromYaw(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
This is the code block which computes the angle
double Class::dotAngleFromYaw(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);

    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;

    // ensure to normalise vector two
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;

    double theta = acos((iOne * iTwo) + (jOne * jTwo)); // in radians

    return theta;
}
My issue with this is that dotAngleFromYaw(0,0,0,1,0) = +pi/2 and dotAngleFromYaw(0,0,0,-1,0) = +pi/2 (acos only returns values in [0, pi], so the sign of the angle is lost), hence the if statements fail to sort the cones.
Any help would be great.
*Adjustments made from comment suggestions
I have changed the sorting method as follows:
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector mag = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);

    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;

    // Ensure to normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;

    // // Using the transformation matrix with theta = yaw (angle in radians), transform the axes to the augmented 2D space
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;

    // Take the cross product of <Ex, 0> x <x', y'> where x', y' have the same location in the simulation space.
    double result = iOne*jTwo - jOne*iTwo;

    return result;
}
However, I am still having issues defining left and right. Note that I have also become aware that objects behind the vehicle are still passed to the evaluation loop, so I have implemented a dot product check elsewhere that seems to work fine for now, which is why I have not included it here; I can make another adjustment to the post to include that code. I did try to implement the coordinate system transformation, but I did not see any improvement compared to leaving the added lines commented out.
Any further feedback is greatly appreciated.
If the angle does not matter and you only want to know whether it is "left or right", I'd go for another approach.
Set up a plane that has xCar and yCar on its surface. When setting it up, it's up to you how to define the plane's normal, i.e. the side it is facing.
After that you can apply the dot product to determine the sign indicating which side it's on.
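A minimal sketch of that idea in 2D, with my own naming (sideOfPlane is not from the original post): the "plane" reduces to the line through the car along its heading, and its normal is the heading rotated 90 degrees to the left; the sign of the dot product of that normal with the car-to-cone vector gives the side. Expanding the terms shows this is the same quantity as the cross product discussed in the next answer.
#include <cmath>

// > 0 means the cone is on the car's left, < 0 on its right, 0 exactly ahead/behind.
double sideOfPlane(double xCar, double yCar, double yawCar, double xCone, double yCone)
{
    // Left-pointing normal of the heading vector (cos yaw, sin yaw).
    double nx = -std::sin(yawCar);
    double ny =  std::cos(yawCar);

    // Signed distance of the cone from the plane (line) through the car.
    return nx * (xCone - xCar) + ny * (yCone - yCar);
}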
Note that the dot product does not provide information about left/right position.
The sign of the dot product says whether the position is ahead or behind.
To get the left/right side, you need to check the sign of the cross product
cross = iOne * jTwo - jOne * iTwo
(note the subtraction and the i/j alternation)
To see the difference between the dot and cross product information, here is a quick test.
A mathematical coordinate system (CCW) is used (left/right depends on CW/CCW).
BTW, in kinematics simulations it is worth storing the components of the direction vector rather than the angle.
#define _USE_MATH_DEFINES // for C++
#include <cmath>
#include <iostream>

void check_target(float carx, float cary, float dirx, float diry, float tx, float ty) {
    float cross = (tx - carx) * diry - (ty - cary) * dirx;
    float dot   = (tx - carx) * dirx + (ty - cary) * diry;
    if (cross >= 0) {
        if (dot >= 0)
            std::cout << "ahead right\n";
        else
            std::cout << "behind right\n";
    }
    else {
        if (dot >= 0)
            std::cout << "ahead left\n";
        else
            std::cout << "behind left\n";
    }
}

int main()
{
    float carx, cary, car_dir_angle, dirx, diry;

    carx = 1;
    cary = 1;
    car_dir_angle = M_PI / 4;
    dirx = cos(car_dir_angle);
    diry = sin(car_dir_angle);

    check_target(carx, cary, dirx, diry, 2, 3);
    check_target(carx, cary, dirx, diry, 2, 1);
    check_target(carx, cary, dirx, diry, 1, 0);
    check_target(carx, cary, dirx, diry, 0, 1);
}
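For these inputs the four calls print, in order: ahead left, ahead right, behind right, behind left (the car sits at (1, 1) facing at +45 degrees, so e.g. the target at (2, 3) lies ahead of it and to its left under the CCW convention).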

Proper sphere collision resolution in C++

I am implementing sphere-to-sphere collision resolution and I am a little confused about where to start. First question: is there a standard way that games/engines do sphere-to-sphere collision resolution? Are there only a couple of standard ways to do it, or does the resolution vary heavily based on what's needed?
I want to implement this in my engine, so I wrote a basic version where one sphere pushes the other (so the one interacting can push the other), but this was just a super simple concept. How exactly can I improve this to make it more accurate? (Mind you, the code isn't optimized since I am still testing.)
It seems like there is a lack of solid documentation on collision resolution in general, as it's a more niche topic; most resources I found only cover the detection part.
bool isSphereInsideSphere(glm::vec3 sphere, float sphereRadius, glm::vec3 otherSphere, float otherSphereRadius, Entity* e1, Entity* e2)
{
    float dist = glm::sqrt((sphere.x - otherSphere.x) * (sphere.x - otherSphere.x)
               + (sphere.y - otherSphere.y) * (sphere.y - otherSphere.y)
               + (sphere.z - otherSphere.z) * (sphere.z - otherSphere.z));

    if (dist <= (sphereRadius + otherSphereRadius))
    {
        // Push code
        e1->move(-e1->xVelocity / 2, 0, -e1->zVelocity / 2);
        e2->move(e1->xVelocity / 2, 0, e1->zVelocity / 2);
    }
    return dist <= (sphereRadius + otherSphereRadius);
}
Using std::sqrt is unnecessary, and it's probably a lot quicker to compare the squared length against (sphereRadius + otherSphereRadius)².
Example:
#include <glm/glm.hpp>
#include <iostream>
#include <cmath>   // std::abs(float)

auto squared_length(const glm::vec3& v) {
    return std::abs(v.x * v.x + v.y * v.y + v.z * v.z);
}

class Sphere {
public:
    Sphere(const glm::vec3& Position, float Radius) :
        position{Position}, radius(Radius) {}

    bool isSphereInsideSphere(const Sphere& other) const {
        auto dist = squared_length(position - other.position);
        // compare the squared values
        if(dist <= (radius + other.radius) * (radius + other.radius)) {
            // Push code ...
            return true;
        }
        return false;
    }

private:
    glm::vec3 position;
    float radius;
};

int main() {
    Sphere a({2, 3, 0}, 2.5);
    Sphere b({5, 7, 0}, 2.5);
    std::cout << std::boolalpha << a.isSphereInsideSphere(b) << '\n'; // prints true
}
Here is a simpler example (without involving new classes).
bool isSphereInsideSphere(glm::vec3 sphere, float sphereRadius, glm::vec3 otherSphere, float otherSphereRadius, Entity* e1, Entity* e2)
{
    auto delta = otherSphere - sphere;
    auto r2 = (sphereRadius + otherSphereRadius) * (sphereRadius + otherSphereRadius);

    if (glm::dot(delta, delta) <= r2)
    {
        // Push code
        return true;
    }
    return false;
}
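If you also want a rough starting point for the resolution itself, one common minimal approach (my sketch, not taken from the answers above) is to separate the two spheres along the line between their centers by half the penetration depth each. Entity::move and the parameter list mirror the question's own function; everything else here is illustrative.
void resolveSphereOverlap(glm::vec3 sphere, float sphereRadius, glm::vec3 otherSphere, float otherSphereRadius, Entity* e1, Entity* e2)
{
    glm::vec3 delta = otherSphere - sphere;
    float dist2 = glm::dot(delta, delta);
    float rSum  = sphereRadius + otherSphereRadius;

    if (dist2 > rSum * rSum || dist2 == 0.0f)
        return;                                   // no overlap (or coincident centers)

    float dist        = glm::sqrt(dist2);
    glm::vec3 normal  = delta / dist;             // unit vector from e1 towards e2
    float penetration = rSum - dist;
    glm::vec3 push    = normal * (penetration * 0.5f);

    // Move each sphere half the penetration depth apart along the contact normal.
    e1->move(-push.x, -push.y, -push.z);
    e2->move( push.x,  push.y,  push.z);
}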

Ray vs ellipsoid intersection

I am trying to implement ray vs ellipsoid intersection by "squishing" space and doing ray vs sphere:
1. create a mat3 S with the ellipsoid radii on the diagonal
2. squish the ray by multiplying its start and direction by the inverse of S
3. intersect the ray with a sphere of radius 1.0 in local space
4. multiply hitPoint by S to unsquish it.
Here is ray vs sphere:
float P = glm::dot(dir, sphereCenter - start);
float L = glm::distance(start, sphereCenter);
float d = sqrt(L*L - P*P);

if (d < radius) {
    float x0 = sqrt(1.f - d*d);
    hitPoint = start + dir*(P - x0);
    hitNormal = glm::normalize(hitPoint - sphereCenter);
}
else if (d == radius) {
    hitPoint = start + dir*P;
    hitNormal = glm::normalize(hitPoint - sphereCenter);
}
else {
    return false;
}

if (glm::distance(start, hitPoint) > dist) return false;
return true;
Here is the squishing part:
glm::vec3 S = start;
glm::vec3 Dir = dir;

auto sphereCenter = thisEntity()->transform()->getPosition();
auto scale = thisEntity()->transform()->getScale();

glm::mat3 q = glm::mat3(0);
float x = _radius.x * scale.x;
float y = _radius.y * scale.y;
float z = _radius.z * scale.z;
q[0][0] = x;
q[1][1] = y;
q[2][2] = z;

glm::mat3 qI = glm::inverse(q);

S = qI * S;
Dir = qI * Dir;

// calculate hit point in world space squished
glm::vec3 hitPoint, hitNormal;
if (!IntersectionsMath::instance()->segmentVsSphere(sphereCenter, S, Dir, dist, 1.f, hitPoint, hitNormal)) return;

hitPoint = q * hitPoint;
hit.pushHit(hitPoint, hitNormal, this);
The current ray-sphere code works with world positions; I'm trying to make it work at the origin, so that shouldn't matter. Ray vs regular sphere works fine; the ellipsoid is the problem.
I have spent a lot of time on this and something somewhere is wrong.
Problem:
The center of scaling matters.
Solution:
Perform the scaling about the center of the ellipsoid.
... and not the origin as you are doing right now. This is because, although the direction of the ray will be the same (it is just a directional vector), the relative displacement between the scaled source and center of the sphere will be different:
Scaling about origin (current code):
Source S' = qI * S, center C' = qI * C --- S' - C' = qI * (S - C)
Scaling about ellipsoid center (correct procedure):
Source S" = qI * (S - C), center C" = C --- S" - C" = qI * (S - C) - C
The two displacements differ by the position of the original ellipsoid; thus your current ray will likely miss / give false positives.
Corrected code:
// scale about the ellipsoid's position by subtracting before multiplying
// (a more appropriate name would be "ellipsoidCenter" to avoid confusion)
S_ = qI * (S - sphereCenter);

// this ::normalize should really be in the intersection function
Dir_ = glm::normalize(qI * Dir);

// calculate hit point in world space squished
// ... but around the origin in the squashed coordinate system
glm::vec3 hitPoint, hitNormal;
if (!IntersectionsMath::instance()->segmentVsSphere(
        glm::vec3(0.f), S_, Dir_,      // note: glm has no vec3::ZERO, so pass a zero vector
        dist, 1.f,
        hitPoint, hitNormal)) return;

// re-apply the offset
hitPoint = q * hitPoint + sphereCenter;

// problem: hitNormal will not be correct for the ellipsoid when scaled
// solution: divide each component by the square of the respective semi-axis,
// then renormalize (will provide proof upon request)
hitNormal.x /= (x * x); hitNormal.y /= (y * y); hitNormal.z /= (z * z);
hitNormal = glm::normalize(hitNormal);

Ray transformation in a Ray - OBB intersection test

I've implemented an algorithm that tests for a Ray - AABB intersection and it works fine. But when I try to transform the ray into the AABB's local space (making this a Ray - OBB test), I can't get correct results. I've studied several forums and other resources, but I'm still missing something. (Some sources suggest applying the inverse transformation to the ray origin and its end point, and only then calculating the direction; others suggest applying it to the origin and direction.) Can someone point me in the right direction (no pun intended)?
Here are the two functions responsible for the math:
1) Calculating inverses and other things to perform tests
bool Ray::intersectsMesh(const Mesh& mesh, const Transformation& transform) {
    float largestNearIntersection = std::numeric_limits<float>::min();
    float smallestFarIntersection = std::numeric_limits<float>::max();

    glm::mat4 modelTransformMatrix = transform.modelMatrix();
    Box boundingBox = mesh.boundingBox();

    glm::mat4 inverse = glm::inverse(transform.modelMatrix());
    glm::vec4 newOrigin = inverse * glm::vec4(mOrigin, 1.0);
    newOrigin /= newOrigin.w;
    mOrigin = newOrigin;
    mDirection = glm::normalize(inverse * glm::vec4(mDirection, 0.0));

    glm::vec3 xAxis = glm::vec3(glm::column(modelTransformMatrix, 0));
    glm::vec3 yAxis = glm::vec3(glm::column(modelTransformMatrix, 1));
    glm::vec3 zAxis = glm::vec3(glm::column(modelTransformMatrix, 2));
    glm::vec3 OBBTranslation = glm::vec3(glm::column(modelTransformMatrix, 3));

    printf("trans x %f y %f z %f\n", OBBTranslation.x, OBBTranslation.y, OBBTranslation.z);

    glm::vec3 delta = OBBTranslation - mOrigin;
    bool earlyFalseReturn = false;

    calculateIntersectionDistances(xAxis, delta, boundingBox.min.x, boundingBox.max.x, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(yAxis, delta, boundingBox.min.y, boundingBox.max.y, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(zAxis, delta, boundingBox.min.z, boundingBox.max.z, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    return true;
}
2) Helper function (probably not needed here as it relates only to AABB tests and works fine)
void Ray::calculateIntersectionDistances(const glm::vec3& axis,
                                         const glm::vec3& delta,
                                         float minPointOnAxis,
                                         float maxPointOnAxis,
                                         float *largestNearIntersection,
                                         float *smallestFarIntersection,
                                         bool *earlyFalseReturn)
{
    float divident = glm::dot(axis, delta);
    float denominator = glm::dot(mDirection, axis);

    if (fabs(denominator) > 0.001f) {
        float t1 = (divident + minPointOnAxis) / denominator;
        float t2 = (divident + maxPointOnAxis) / denominator;

        if (t1 > t2) { std::swap(t1, t2); }

        *smallestFarIntersection = std::min(t2, *smallestFarIntersection);
        *largestNearIntersection = std::max(t1, *largestNearIntersection);
    } else if (-divident + minPointOnAxis > 0.0 || -divident + maxPointOnAxis < 0.0) {
        *earlyFalseReturn = true;
    }
}
As it turned out, the ray's world -> model transformation was correct. The bug was in the intersection test. I had to completely replace the intersection code, because I wasn't able to identify the bug in the old code, unfortunately.
Ray transformation code:
glm::mat4 inverse = glm::inverse(transform.modelMatrix());
glm::vec4 start = inverse * glm::vec4(mOrigin, 1.0);
glm::vec4 direction = inverse * glm::vec4(mDirection, 0.0);
direction = glm::normalize(direction);
And the Ray - AABB test was stolen from here
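Since the link above may not be obvious, here is a hedged sketch of the kind of slab-based ray vs AABB test that is typically used once the ray is in the box's local space; the function and variable names (rayIntersectsAABB, boxMin, boxMax, tNear, tFar) are mine, not from the linked source.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>
#include <limits>

bool rayIntersectsAABB(const glm::vec3& localOrigin, const glm::vec3& localDir,
                       const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    float tNear = -std::numeric_limits<float>::max();
    float tFar  =  std::numeric_limits<float>::max();

    for (int i = 0; i < 3; ++i) {
        if (std::fabs(localDir[i]) < 1e-6f) {
            // Ray is parallel to this slab: miss if the origin lies outside it.
            if (localOrigin[i] < boxMin[i] || localOrigin[i] > boxMax[i])
                return false;
        } else {
            float t1 = (boxMin[i] - localOrigin[i]) / localDir[i];
            float t2 = (boxMax[i] - localOrigin[i]) / localDir[i];
            if (t1 > t2) std::swap(t1, t2);

            tNear = std::max(tNear, t1);
            tFar  = std::min(tFar, t2);
            if (tNear > tFar)
                return false;                     // the slab intervals no longer overlap
        }
    }
    return tFar >= 0.0f;                          // reject boxes entirely behind the ray
}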

Realtime object painting

I am trying to perform realtime painting onto an object's texture. I'm using Irrlicht for now, but that does not really matter.
So far, I've got the right UV coordinates using this algorithm:
1. find out which of the object's triangles the user selected (raycasting, nothing really difficult)
2. find out the UV (barycentric) coordinates of the intersection point on that triangle
3. find out the UV (texture) coordinates of each triangle vertex
4. find out the UV (texture) coordinates of the intersection point
5. calculate the texture image coordinates for the intersection point
But somehow, when I draw at the point I got in the 5th step on the texture image, I get totally wrong results. When drawing a rectangle at the cursor point, its X (or Z) coordinate is inverted:
Here's the code I am using to fetch the texture coordinates:
core::vector2df getPointUV(core::triangle3df tri, core::vector3df p)
{
    core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;

    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);

    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    scene::IMesh* m = Mesh->getMesh(((scene::IAnimatedMeshSceneNode*)Model)->getFrameNr());

    core::array<video::S3DVertex> VA, VB, VC;
    video::SMaterial Material;

    for (unsigned int i = 0; i < m->getMeshBufferCount(); i++)
    {
        scene::IMeshBuffer* mb = m->getMeshBuffer(i);
        video::S3DVertex* vertices = (video::S3DVertex*) mb->getVertices();

        for (unsigned long long v = 0; v < mb->getVertexCount(); v++)
        {
            if (vertices[v].Pos == tri.pointA)
                VA.push_back(vertices[v]); else
            if (vertices[v].Pos == tri.pointB)
                VB.push_back(vertices[v]); else
            if (vertices[v].Pos == tri.pointC)
                VC.push_back(vertices[v]);

            if (vertices[v].Pos == tri.pointA || vertices[v].Pos == tri.pointB || vertices[v].Pos == tri.pointC)
                Material = mb->getMaterial();

            if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
                break;
        }

        if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
            break;
    }

    core::vector2df
        A = VA[0].TCoords,
        B = VB[0].TCoords,
        C = VC[0].TCoords;

    core::vector2df P(A + (u * (C - A)) + (v * (B - A)));
    core::dimension2du Size = Material.getTexture(0)->getSize();

    CursorOnModel = core::vector2di(Size.Width * P.X, Size.Height * P.Y);
    int X = Size.Width * P.X, Y = Size.Height * P.Y;

    // DRAWING SOME RECTANGLE
    Material.getTexture(0)->lock(true);
    Device->getVideoDriver()->setRenderTarget(Material.getTexture(0), true, true, 0);
    Device->getVideoDriver()->draw2DRectangle(video::SColor(255, 0, 100, 75),
        core::rect<s32>((X - 10), (Y - 10), (X + 10), (Y + 10)));
    Device->getVideoDriver()->setRenderTarget(0, true, true, 0);
    Material.getTexture(0)->unlock();

    return core::vector2df(X, Y);
}
I just want to make my object paintable in realtime. My current problems are: wrong texture coordinate calculation, and non-unique vertex UV coordinates (so drawing something on one side of the dwarf's axe would draw the same on the other side of that axe).
How should I do this?
I was able to use your codebase and get it to work for me.
Re your second problem "non-unique vertex UV coordinates":
Well, you are absolutely right: you need unique vertex UVs to get this working, which means that you have to unwrap your models and not make use of shared UV space, e.g. for mirrored elements and the like (e.g. left/right boot: if they use the same UV space, you'll automatically paint on both, even where you want one to be red and the other to be green). You can check out "uvlayout" (tool) or the UV-unwrap modifier in 3ds Max.
Re the first and more important problem, "wrong texture coordinate calculation":
The calculation of your barycentric coordinates is correct, but I suppose your input data is wrong. I assume you get the triangle and the collision point by using Irrlicht's collision manager and triangle selector. The problem is that the positions of the triangle's vertices (which you get as a return value from the collision test) are in world coordinates, but you need them in model coordinates for the calculation. So here's what you need to do:
Pseudocode:
1. add the node which contains the mesh of the hit triangle as a parameter to getPointUV()
2. get the inverse absoluteTransformation matrix by calling node->getAbsoluteTransformation() [inverse]
3. transform the vertices of the triangle by this inverse matrix and use those values for the rest of the method.
Below you'll find my optimized method, which does it for a very simple mesh (one mesh, only one meshbuffer).
Code:
irr::core::vector2df getPointUV(irr::core::triangle3df tri, irr::core::vector3df p, irr::scene::IMeshSceneNode* pMeshNode, irr::video::IVideoDriver* pDriver)
{
    // transform the triangle's vertices from world space into the node's model space
    irr::core::matrix4 inverseTransform(
        pMeshNode->getAbsoluteTransformation(),
        irr::core::matrix4::EM4CONST_INVERSE);

    inverseTransform.transformVect(tri.pointA);
    inverseTransform.transformVect(tri.pointB);
    inverseTransform.transformVect(tri.pointC);

    irr::core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;

    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);

    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    // look up the texture coordinates of the three triangle vertices
    irr::video::S3DVertex A, B, C;
    irr::video::S3DVertex* vertices = static_cast<irr::video::S3DVertex*>(
        pMeshNode->getMesh()->getMeshBuffer(0)->getVertices());

    for(unsigned int i = 0; i < pMeshNode->getMesh()->getMeshBuffer(0)->getVertexCount(); ++i)
    {
        if( vertices[i].Pos == tri.pointA)
        {
            A = vertices[i];
        }
        else if( vertices[i].Pos == tri.pointB)
        {
            B = vertices[i];
        }
        else if( vertices[i].Pos == tri.pointC)
        {
            C = vertices[i];
        }
    }

    // interpolate the vertex UVs with the barycentric coordinates
    irr::core::vector2df t2 = B.TCoords - A.TCoords;
    irr::core::vector2df t1 = C.TCoords - A.TCoords;

    irr::core::vector2df uvCoords = A.TCoords + t1*u + t2*v;

    return uvCoords;
}
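A hypothetical usage of the returned value, mirroring step 5 of the question (the material and texture-size lookup follow the question's own code; tri, hitPoint, meshNode, driver and material are assumed to come from the surrounding raycast code):
irr::core::vector2df uv = getPointUV(tri, hitPoint, meshNode, driver);

// Convert the normalized UV into pixel coordinates on texture level 0,
// as in step 5 of the original post.
irr::core::dimension2du texSize = material.getTexture(0)->getSize();
int X = static_cast<int>(texSize.Width  * uv.X);
int Y = static_cast<int>(texSize.Height * uv.Y);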