Rotating one object to face another in C++

I am trying to rotate an object (a car) to face another object, but I am having quite a bit of difficulty with it.
I've had varying success with this code: it knows when the angle is correct and will stop rotating if it manages to face the right way, but most of the time it gets stuck rapidly twitching because the sign value flicks from 1 to -1 (or -1 to 1) every single frame. This stops all rotation, so the car just twitches uncontrollably and never corrects itself.
Honestly, I'm not sure whether my sign calculation is using the correct values, as the car moves on the X/Z plane but rotates around the Y axis to turn left/right.
I'm using C++ and DirectX 11, by the way.
Any advice/help would be appreciated. Thanks in advance!
bool ParticleModel::FaceTarget(XMFLOAT3 target)
{
    XMFLOAT3 toNormalize;
    toNormalize.x = transform->GetPosition().x - target.x;
    toNormalize.y = transform->GetPosition().y - target.y;
    toNormalize.z = transform->GetPosition().z - target.z;
    XMFLOAT3 toTarget = NormalizeVector(toNormalize);
    //Determine the angle between the heading vector and the target.
    XMFLOAT3 carHeading;
    carHeading.x = sin(transform->GetRotation().y);
    carHeading.y = sin(transform->GetRotation().x);
    carHeading.z = cos(transform->GetRotation().y);
    carHeading = NormalizeVector(carHeading);
    double angle = acos(DotProduct(carHeading, toTarget));
    //Return true if the player is facing the target.
    if (angle < 0.01000)
    {
        return true;
    }
    /*if (signTimer < signCooldown)
    {
        signTimer++;
    }
    else if (signTimer >= signCooldown)
    {
        signTimer = 0;
        sign = GetSign(carHeading, toTarget);
    }*/
    sign = GetSign(carHeading, toTarget);
    transform->SetYRot(transform->GetRotation().y + (0.030f * sign));
    //bool thing = false;
    //RotateHeadingByRadian(angle, mHeading.Sign(toTarget));
    return true;
}

float ParticleModel::DotProduct(XMFLOAT3 a, XMFLOAT3 b)
{
    float dot = (a.x * b.x) + (a.y * b.y) + (a.z * b.z);
    return dot;
}

int ParticleModel::GetSign(XMFLOAT3 v1, XMFLOAT3 v2)
{
    if (v1.y * v2.z > v1.z * v2.y)
    {
        return -1;
    }
    else
    {
        return 1;
    }
}
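A common way to avoid this kind of sign flicker is to steer with a signed angle computed directly from atan2 and clamp the per-frame turn, instead of deriving a separate sign from a dot/cross product. The sketch below uses a plain `Vec3` struct rather than the asker's `XMFLOAT3`/`transform` API, and assumes the same heading convention `(sin(yaw), 0, cos(yaw))`:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for XMFLOAT3 (illustrative only).
struct Vec3 { float x, y, z; };

// Signed yaw delta that would point the heading (sin(yaw), 0, cos(yaw))
// at the target, wrapped into [-pi, pi] so we always turn the short way.
float SignedYawToTarget(Vec3 position, float currentYaw, Vec3 target)
{
    float desiredYaw = std::atan2(target.x - position.x, target.z - position.z);
    float delta = desiredYaw - currentYaw;
    const float pi = 3.14159265358979f;
    while (delta >  pi) delta -= 2.0f * pi;
    while (delta < -pi) delta += 2.0f * pi;
    return delta;
}

// One frame of rotation: clamp the delta to a maximum turn rate so we can
// never rotate past the target and oscillate around it.
float StepYaw(Vec3 position, float currentYaw, Vec3 target, float maxTurn)
{
    float delta = SignedYawToTarget(position, currentYaw, target);
    if (delta >  maxTurn) delta =  maxTurn;
    if (delta < -maxTurn) delta = -maxTurn;
    return currentYaw + delta;
}
```

Because the last step applies the exact remaining delta once it is below the clamp, the yaw converges to the target angle instead of twitching around it.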

It's OK, I found out how to do it. Essentially, this is now the code:
XMFLOAT3 distance;
distance.x = transform->GetPosition().x - target.x;
distance.y = transform->GetPosition().y - target.y;
distance.z = transform->GetPosition().z - target.z;
XMFLOAT3 carHeading;
carHeading.x = sin(transform->GetRotation().y);
carHeading.y = sin(transform->GetRotation().x);
carHeading.z = cos(transform->GetRotation().y);
if (VectorLength(distance) < 0.5)
{
    return true;
}
//Don't actually need to call normalize for directionA - just doing it to indicate
//that this vector must be normalized.
XMFLOAT3 directionA = carHeading;
XMFLOAT3 directionB = NormalizeVector(distance);
float rotationAngle = (float)acos(DotProduct(directionA, directionB));
if (abs(rotationAngle) < 0.5)
{
    return true;
}
XMFLOAT3 rotationAxis = CrossProduct(directionA, directionB);
rotationAxis = NormalizeVector(rotationAxis);
transform->SetYRot(transform->GetRotation().y + (rotationAngle * rotationAxis.y));
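The key idea in that fix is that the cross product of the heading and the to-target direction yields a rotation axis whose Y component's sign tells you which way to turn around the Y axis. A minimal sketch of just that idea, with plain structs rather than DirectXMath types:

```cpp
#include <cassert>

// Minimal stand-in for XMFLOAT3 (illustrative only).
struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// With heading = (sin(yaw), 0, cos(yaw)), a positive Y component of
// cross(heading, toTarget) means increasing yaw turns toward the target.
int TurnDirection(Vec3 heading, Vec3 toTarget)
{
    return Cross(heading, toTarget).y >= 0.0f ? 1 : -1;
}
```

Multiplying the unsigned acos angle by that sign (here, by `rotationAxis.y` after normalization) is what turns it into a signed rotation.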

Related

Ray tracing: Ellipsoid Hit function

We are working with shape generation using ray tracing. We believe that the hit function is not working appropriately. Currently, the "front" side of the ellipsoid is lit correctly. As the shape rotates, it has problems reflecting light. Example of rotating the ellipsoid.
Below is the current version of the Hit function.
bool Ellipsoid::Hit(const Ray& ray,
                    const double minHitDistance,
                    const double maxHitDistance,
                    HitRecord& hitRecord,
                    const Vector3d& light) const {
    if (hitRecord.shapeHit == this) {
        return false;
    }
    // M is a 3x3 matrix
    const Vector3d& direction = (M.inverse() * ray.Direction());
    const Vector3d oc = M.inverse() * (ray.Origin() - this->center);
    const double a = direction.dot(direction);
    const double b = oc.dot(direction);
    const double c = oc.dot(oc) - 1;
    const double discriminant = b * b - a * c;
    if (discriminant > 0) {
        const double sqrtDiscriminant = sqrt(discriminant);
        const double aReciprocal = 1.0 / a;
        double temp = (-b - sqrtDiscriminant) * aReciprocal;
        if (temp < maxHitDistance && temp > minHitDistance) {
            hitRecord.distance = temp;
            hitRecord.closestIntersection = ray.PointAtParameter(hitRecord.distance);
            hitRecord.normal = M.inverse() * ((hitRecord.closestIntersection - center) / 1);
            hitRecord.shapeHit = this;
            return true;
        }
        temp = (-b + sqrtDiscriminant) * aReciprocal;
        if (temp < maxHitDistance && temp > minHitDistance) {
            hitRecord.distance = temp;
            hitRecord.closestIntersection = ray.PointAtParameter(hitRecord.distance);
            hitRecord.normal = M.inverse() * ((hitRecord.closestIntersection - center) / 1);
            hitRecord.shapeHit = this;
            return true;
        }
    }
    return false;
}
The first thing we tried was changing M.transpose() to M.inverse() on the lines where we set hitRecord.normal. This made the lighting look significantly better, but did not completely fix the light accuracy after rotating the object. We are unsure whether the Hit function is completely correct; we just need some guidance on whether we should look for other faults in the program. Any advice helps.
Edit: We have tried modifying the discriminant as well. This causes the function to break, because the "4" is accounted for throughout the function (as seen in the comment section).
After completely rewriting the function, we have determined that the issue does not lie within the hit function.
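For reference, the usual rule when a surface is deformed by a matrix M is that normals transform by the inverse-transpose of M, not by M.inverse() or M.transpose() alone. A hedged sketch with plain structs (not the poster's Vector3d/matrix classes), using a diagonal scale where the inverse-transpose is easy to compute by hand:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector (illustrative only).
struct Vec3 { double x, y, z; };

// Transform a normal under a diagonal scale M = diag(sx, sy, sz).
// The inverse-transpose of a diagonal matrix is diag(1/sx, 1/sy, 1/sz),
// so the normal components are divided by the scale factors, then
// re-normalized.
Vec3 TransformNormalDiagonal(Vec3 n, double sx, double sy, double sz)
{
    Vec3 out{ n.x / sx, n.y / sy, n.z / sz };
    double len = std::sqrt(out.x * out.x + out.y * out.y + out.z * out.z);
    return { out.x / len, out.y / len, out.z / len };
}
```

In the ellipsoid code above, this would correspond to applying M.inverse().transpose() to the object-space normal (and re-normalizing), rather than M.inverse() as written.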

Weird Raytracing Artifacts

I am trying to create a ray tracer using Qt, but I have some really weird artifacts going on.
Before I implemented shading, I just had 4 spheres, 3 triangles and 2 bounded planes in my scene. They all showed up as expected and with the expected colors; however, on my planes I would see dots the same color as the background. These dots stayed static from my view position, so if I moved the camera around, the dots moved around as well. They only affected the planes and triangles and never appeared on the spheres.
Once I implemented shading, the issue got worse. The dots now also appear on spheres lit by the light source, i.e. on any part affected by the diffuse term.
Also, my one plane of pure blue (RGB 0,0,255) has gone straight black. Since I have two planes, I switched their colors, and again the blue one went black, so it's a color issue and not a plane issue.
If anyone has any suggestions as to what the problem could be, or wants to see any particular code, let me know.
#include "plane.h"
#include "intersection.h"
#include <math.h>
#include <iostream>

Plane::Plane(QVector3D bottomLeftVertex, QVector3D topRightVertex, QVector3D normal, QVector3D point, Material *material)
{
    minCoords_.setX(qMin(bottomLeftVertex.x(), topRightVertex.x()));
    minCoords_.setY(qMin(bottomLeftVertex.y(), topRightVertex.y()));
    minCoords_.setZ(qMin(bottomLeftVertex.z(), topRightVertex.z()));
    maxCoords_.setX(qMax(bottomLeftVertex.x(), topRightVertex.x()));
    maxCoords_.setY(qMax(bottomLeftVertex.y(), topRightVertex.y()));
    maxCoords_.setZ(qMax(bottomLeftVertex.z(), topRightVertex.z()));
    normal_ = normal;
    normal_.normalize();
    point_ = point;
    material_ = material;
}

Plane::~Plane()
{
}

void Plane::intersect(QVector3D rayOrigin, QVector3D rayDirection, Intersection* result)
{
    if (normal_ == QVector3D(0,0,0)) //plane is degenerate
    {
        cout << "degenerate plane" << endl;
        return;
    }
    float minT;
    //t = -Normal*(Origin-Point) / Normal*direction
    float numerator = (-1) * QVector3D::dotProduct(normal_, (rayOrigin - point_));
    float denominator = QVector3D::dotProduct(normal_, rayDirection);
    if (fabs(denominator) < 0.0000001) //plane orthogonal to view
    {
        return;
    }
    minT = numerator / denominator;
    if (minT < 0.0)
    {
        return;
    }
    QVector3D intersectPoint = rayOrigin + (rayDirection * minT);
    //check inside plane dimensions
    if (intersectPoint.x() < minCoords_.x() || intersectPoint.x() > maxCoords_.x() ||
        intersectPoint.y() < minCoords_.y() || intersectPoint.y() > maxCoords_.y() ||
        intersectPoint.z() < minCoords_.z() || intersectPoint.z() > maxCoords_.z())
    {
        return;
    }
    //only update if closest object
    if (result->distance_ > minT)
    {
        result->hit_ = true;
        result->intersectPoint_ = intersectPoint;
        result->normalAtIntersect_ = normal_;
        result->distance_ = minT;
        result->material_ = material_;
    }
}
QVector3D MainWindow::traceRay(QVector3D rayOrigin, QVector3D rayDirection, int depth)
{
    if (depth > maxDepth)
    {
        return backgroundColour;
    }
    Intersection* rayResult = new Intersection();
    foreach (Shape* shape, shapeList)
    {
        shape->intersect(rayOrigin, rayDirection, rayResult);
    }
    if (rayResult->hit_ == false)
    {
        return backgroundColour;
    }
    else
    {
        QVector3D intensity = QVector3D(0,0,0);
        QVector3D shadowRay = pointLight - rayResult->intersectPoint_;
        shadowRay.normalize();
        Intersection* shadowResult = new Intersection();
        foreach (Shape* shape, shapeList)
        {
            shape->intersect(rayResult->intersectPoint_, shadowRay, shadowResult);
        }
        if (shadowResult->hit_ == true)
        {
            intensity += shadowResult->material_->diffuse_ * intensityAmbient;
        }
        else
        {
            intensity += rayResult->material_->ambient_ * intensityAmbient;
            // Diffuse
            intensity += rayResult->material_->diffuse_ * intensityLight * qMax(QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay), 0.0f);
            // Specular
            QVector3D R = ((2 * (QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay)) * rayResult->normalAtIntersect_) - shadowRay);
            R.normalize();
            QVector3D V = rayOrigin - rayResult->intersectPoint_;
            V.normalize();
            intensity += rayResult->material_->specular_ * intensityLight * pow(qMax(QVector3D::dotProduct(R, V), 0.0f), rayResult->material_->specularExponent_);
        }
        return intensity;
    }
}
So I figured out my issues. They were due to float's limited precision: any check for < 0.0 would intermittently fail because of floating-point error. I had to add an offset to all my checks so that I was checking for < 0.001 instead.
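This class of artifact is commonly called "shadow acne": a secondary ray re-hits the very surface it was cast from because of floating-point rounding. Besides comparing against an epsilon, a standard fix is to nudge the shadow-ray origin slightly off the surface along the normal. A minimal sketch with a plain struct (the Qt code above would use QVector3D instead):

```cpp
#include <cassert>

// Minimal 3-vector (illustrative only).
struct Vec3 { float x, y, z; };

// Offset a secondary-ray origin a small distance along the surface normal,
// so the ray cannot immediately re-intersect the surface it just left.
Vec3 OffsetOrigin(Vec3 point, Vec3 normal, float eps = 1e-3f)
{
    return { point.x + normal.x * eps,
             point.y + normal.y * eps,
             point.z + normal.z * eps };
}
```

In the traceRay code above, this would mean casting the shadow ray from `OffsetOrigin(intersectPoint, normalAtIntersect)` rather than from the intersection point itself.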

Dragging 3-Dimensional Objects with C++ and OpenGL

I have been developing a 3D chessboard, and I have been stuck trying to drag the pieces for a few days now.
Once I select an object using my ray-caster, I start my dragging function, which calculates the difference between the current location of the mouse (in world coordinates) and its previous location; I then translate my object by the difference of these coordinates.
I debug my ray-caster by drawing lines, so I am sure those coordinates are accurate.
Translating my object based on the ray-caster coordinates only moves the object a fraction of the distance it should actually go.
Am I missing a step?
-Calvin
I believe my issue is in this line of code:
glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
If I multiply the equation by 20, the object starts to move more like I would expect it to, but it is never completely accurate.
Below is some relevant code.
RAY-CASTING:
void CastRay(int mouse_x, int mouse_y) {
    int Object_Selected = -1;
    float Closest_Object = -1;
    //getWorldCoordinates calls glm::unproject
    nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight() - mouse_y, 0.0f));
    farPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight() - mouse_y, 1.0f));
    glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
    //getObjectStack() retrieves all objects in the current scene
    std::vector<LoadOBJ> objectList = Object_Input_Info.getObjectStack();
    for (int i = 0; i < objectList.size(); i++) {
        std::vector<glm::vec3> Vertices = objectList[i].getVertices();
        for (int j = 0; j < Vertices.size(); j++) {
            if ((j + 1) % 3 == 0) {
                glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], Vertices[j] - Vertices[j-2]));
                float nDotL = glm::dot(direction, face_normal);
                if (nDotL <= 0.0f) { //if nDotL == 0 { perpendicular } else if nDotL < 0 { same direction } else { opposite direction }
                    float distance = glm::dot(face_normal, (Vertices[j-2] - nearPoint)) / nDotL;
                    glm::vec3 p = nearPoint + distance * direction;
                    glm::vec3 n1 = Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], p - Vertices[j-2]);
                    glm::vec3 n2 = Input_Math.CrossProduct(Vertices[j] - Vertices[j-1], p - Vertices[j-1]);
                    glm::vec3 n3 = Input_Math.CrossProduct(Vertices[j-2] - Vertices[j], p - Vertices[j]);
                    if (glm::dot(face_normal, n1) >= 0.0f && glm::dot(face_normal, n2) >= 0.0f && glm::dot(face_normal, n3) >= 0.0f) {
                        if (p.z > Closest_Object) {
                            //I create this "drag plane" to be used by my dragging function.
                            Drag_Plane[0] = (glm::vec3(Vertices[j-2].x, Vertices[j-2].y, p.z));
                            Drag_Plane[1] = (glm::vec3(Vertices[j-1].x, Vertices[j-1].y, p.z));
                            Drag_Plane[2] = (glm::vec3(Vertices[j].x,   Vertices[j].y,   p.z));
                            //This is the object we selected in the scene
                            Object_Selected = i;
                            //These are the coordinates where the ray intersected the object
                            World_Start_Location = p;
                        }
                    }
                }
            }
        }
    }
    if (Object_Selected >= 0) { //If an object was intersected by the ray
        //selectObject -> simply sets the boolean "dragging" to true
        selectObject(Object_Selected, mouse_x, mouse_y);
    }
}
DRAGGING
void DragObject(int mouse_x, int mouse_y) {
    if (dragging) {
        //Find the coordinates where the ray intersects the "drag plane" set by the original object intersection
        farPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight() - mouse_y, 1.0f));
        nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight() - mouse_y, 0.0f));
        glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
        glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Drag_Plane[1] - Drag_Plane[0], Drag_Plane[2] - Drag_Plane[0]));
        float nDotL = glm::dot(direction, face_normal);
        float distance = glm::dot(face_normal, (Drag_Plane[0] - nearPoint)) / nDotL;
        glm::vec3 Current_World_Location = nearPoint + distance * direction;
        //Calculate the difference between the current mouse location and its previous location
        glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
        //Set the "start location" to the current location for the next loop
        World_Start_Location = Current_World_Location;
        //Get the current object
        Object_Input_Info = Object_Input_Info.getObject(currentObject);
        //Adds a translation matrix to the stack
        Object_Input_Info.TranslateVertices(World_Delta.x, World_Delta.y, World_Delta.z);
        //Calculates the new vertices
        Object_Input_Info.Load_Data();
        //Puts the new object back
        Object_Input_Info.Update_Object_Stack(currentObject);
    }
}
I have already faced problems similar to what you're reporting.
Instead of keeping track of the translation during mouse movement, you can do the following:
In your mouse button callback, store a 'Delta' vector from the mouse position in world coordinates (P_mouse) to your object position (P_object). It would be something like:
Delta = P_object - P_mouse;
On every call of your mouse motion callback, you just need to update the object position:
P_object = P_mouse + Delta;
Notice that Delta is constant during the whole dragging process.
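That suggestion can be sketched in a few lines. This is a hedged illustration with plain structs and made-up function names (`BeginDrag`/`UpdateDrag`), not the poster's glm-based code; the point is only that the offset is computed once and reapplied absolutely, so no error accumulates across motion events:

```cpp
#include <cassert>

// Minimal 3-vector (illustrative only).
struct Vec3 { float x, y, z; };

struct DragState { Vec3 delta; };

// Mouse-down: cache the object-minus-mouse offset (both in world space).
DragState BeginDrag(Vec3 objectPos, Vec3 mouseWorld)
{
    return { { objectPos.x - mouseWorld.x,
               objectPos.y - mouseWorld.y,
               objectPos.z - mouseWorld.z } };
}

// Mouse-move: the new object position is simply mouse + cached delta.
Vec3 UpdateDrag(const DragState& s, Vec3 mouseWorld)
{
    return { mouseWorld.x + s.delta.x,
             mouseWorld.y + s.delta.y,
             mouseWorld.z + s.delta.z };
}
```

Setting the position absolutely each frame also sidesteps the fraction-of-the-distance symptom, which typically comes from accumulating per-frame deltas measured in inconsistent spaces.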

Sphere-cube collision detection in Opengl?

I am trying to build a game in OpenGL. Before I start making better movement mechanics, I want to get collision working. I have cube-cube collision working and I have sphere-sphere collision working, but I can't figure out cube-sphere collision. Since I want it in 3D, I have the pivot at the center of the objects. Anyone have any suggestions?
EDIT: This is the code I currently have:
bool SphereRectCollision(Sphere& sphere, Rectangle& rect)
{
    //Closest point on collision box
    float cX, cY;
    //Find closest x offset
    if (sphere.getCenterX() < rect.GetCenterX()) //the center of the circle is to the left of the rectangle
        cX = rect.GetCenterX();
    else if (sphere.getCenterX() > rect.GetCenterX() + rect.GetWidth()) //the center of the circle is to the right of the rectangle
        cX = rect.GetCenterX() + rect.GetWidth();
    else //the circle is inside the rectangle
        cX = sphere.getCenterX();
    //Find closest y offset
    if (sphere.getCenterY() > rect.GetCenterY() + rect.GetHeight())
        cY = rect.GetCenterY();
    else if (sphere.getCenterY() < rect.GetCenterY() - rect.GetHeight())
        cY = rect.GetCenterY() + rect.GetHeight();
    else
        cY = sphere.getCenterY();
    //If the closest point is inside the circle
    if (distanceSquared(sphere.getCenterX(), sphere.getCenterY(), cX, cY) < sphere.getRadius() * sphere.getRadius())
    {
        //This box and the circle have collided
        return false;
    }
    //If the shapes have not collided
    return true;
}

float distanceSquared(float x1, float y1, float x2, float y2)
{
    float deltaX = x2 - x1;
    float deltaY = y2 - y1;
    return deltaX * deltaX + deltaY * deltaY;
}
I found the solution. I had the right idea, but didn't quite know how to execute it:
bool SphereRectCollision(Sphere& sphere, Rectangle& rect)
{
    float sphereXDistance = abs(sphere.X - rect.X);
    float sphereYDistance = abs(sphere.Y - rect.Y);
    float sphereZDistance = abs(sphere.Z - rect.Z);
    if (sphereXDistance >= (rect.Width + sphere.Radius))  { return false; }
    if (sphereYDistance >= (rect.Height + sphere.Radius)) { return false; }
    if (sphereZDistance >= (rect.Depth + sphere.Radius))  { return false; }
    if (sphereXDistance < (rect.Width))  { return true; }
    if (sphereYDistance < (rect.Height)) { return true; }
    if (sphereZDistance < (rect.Depth))  { return true; }
    float cornerDistance_sq = ((sphereXDistance - rect.Width) * (sphereXDistance - rect.Width)) +
                              ((sphereYDistance - rect.Height) * (sphereYDistance - rect.Height)) +
                              ((sphereZDistance - rect.Depth) * (sphereZDistance - rect.Depth));
    return (cornerDistance_sq < (sphere.Radius * sphere.Radius));
}
This algorithm doesn't work when the hit happens on an edge: the second set of if conditions triggers, but no collision is actually occurring.
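The edge case noted above is handled cleanly by the closest-point formulation: clamp the sphere center to the box, then compare the squared distance to the squared radius. This covers face, edge, and corner contacts with one test. A sketch with plain structs (the box given by center and half-extents is an assumption, not the poster's Rectangle API):

```cpp
#include <cassert>

// Illustrative types, not the poster's Sphere/Rectangle classes.
struct Sphere3 { float x, y, z, radius; };
struct Box3 { float x, y, z, halfW, halfH, halfD; }; // center + half-extents

static float Clamp(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

bool SphereBoxCollision(const Sphere3& s, const Box3& b)
{
    // Closest point on (or in) the box to the sphere center.
    float cx = Clamp(s.x, b.x - b.halfW, b.x + b.halfW);
    float cy = Clamp(s.y, b.y - b.halfH, b.y + b.halfH);
    float cz = Clamp(s.z, b.z - b.halfD, b.z + b.halfD);
    float dx = s.x - cx, dy = s.y - cy, dz = s.z - cz;
    // Collision iff the closest point lies within the sphere.
    return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
}
```

Note this is the 3D generalization of the 2D closest-point idea in the first code block above, without the per-axis early-out branches that misfire on edges.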

Optimizing / simplifying a path where many points are close together?

I have a path of points that represents the outline of a polygon. The path is constructed from pixels.
This means all the points are very close to each other, but I've ensured they are all unique.
Right now I'm checking whether 3 points are collinear, and if they are, I remove the middle one.
I check whether they are collinear using the dot product. I observed, however, that many of my dot products are 0.0f. What could be wrong?
void ImagePolygon::computeOptimized()
{
    m_optimized = m_hull;
    m_optimized.erase(
        std::unique(m_optimized.begin(),
                    m_optimized.end()),
        m_optimized.end());
    int first = 0;
    int second = 1;
    std::vector<int> removeList;
    for (int i = 2; i < m_optimized.size(); ++i)
    {
        second = i - 1;
        first = i - 2;
        if (isColinear(m_optimized[i - 2], m_optimized[i - 1], m_optimized[i]))
        {
            m_optimized.erase(m_optimized.begin() + i - 1);
            removeList.push_back(i - 1);
        }
    }
    std::sort(removeList.rbegin(), removeList.rend());
    for (int i = 0; i < removeList.size(); ++i)
    {
        m_optimized.erase(m_optimized.begin() + removeList[i]);
    }
}

bool ImagePolygon::isColinear(const b2Vec2& a, const b2Vec2& b, const b2Vec2& c) const
{
    b2Vec2 vec1 = b2Vec2(b.x - a.x, b.y - a.y);
    vec1.Normalize();
    b2Vec2 vec2 = b2Vec2(c.x - b.x, c.y - b.y);
    vec2.Normalize();
    float dotProduct = vec1.x * vec2.x + vec1.y * vec2.y;
    //test value
    return abs(dotProduct) > 0.00001f;
}
The major problem is that I'm getting a lot of zero dot products when I shouldn't, so no matter where I set the threshold, the path is not optimized as much as it should be.
Thanks
float32 Normalize()
{
    float32 length = Length();
    if (length < b2_epsilon)
    {
        return 0.0f;
    }
    float32 invLength = 1.0f / length;
    x *= invLength;
    y *= invLength;
    return length;
}
You want the 2x2 determinant vec1.x * vec2.y - vec1.y * vec2.x instead of the dot product. The determinant is zero iff the points are collinear, whereas the dot product is zero iff the points form a right angle.
This:
return abs(dotProduct) > 0.00001f;
is actually telling you whether your vectors are (not) perpendicular, not whether they are parallel. Check whether the dot product is close to 1, rather than close to 0, to test for parallel vectors.
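The determinant test mentioned above can be sketched as follows; this is a hedged illustration with a plain struct rather than Box2D's b2Vec2, and the epsilon is an arbitrary example value:

```cpp
#include <cassert>
#include <cmath>

// Minimal 2-vector (illustrative only, not b2Vec2).
struct Vec2 { float x, y; };

// Three points are collinear exactly when the 2x2 determinant (the 2D
// cross product) of the two edge vectors is zero; near-zero with an
// epsilon for pixel-derived data.
bool IsCollinear(Vec2 a, Vec2 b, Vec2 c, float eps = 1e-5f)
{
    float v1x = b.x - a.x, v1y = b.y - a.y;
    float v2x = c.x - b.x, v2y = c.y - b.y;
    return std::fabs(v1x * v2y - v1y * v2x) < eps;
}
```

Unlike the dot-product version, this needs no normalization, so it also avoids the degenerate Normalize() behavior shown below for very short edges.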
You should not increment the index when an element is deleted, or you will skip some values. Try the following:
for (int i = 2; i < m_optimized.size();) {
    second = i - 1;
    first = i - 2;
    if (isColinear(m_optimized[i - 2], m_optimized[i - 1], m_optimized[i])) {
        m_optimized.erase(m_optimized.begin() + i - 1);
        removeList.push_back(i - 1);
    } else i++;
}
Also, I cannot understand the purpose of removeList: you erase some points inside the main loop and then try to erase the same points in the subsidiary loop, which looks like an error. By the way, there is no reason to sort removeList, given the way it was constructed.