I have a triangle, each point of which is defined by a position (X,Y,Z) and a UV coordinate (U,V):
struct Vertex
{
Vector mPos;
Point mUV;
inline Vector& ToVector() {return mPos;}
inline Point& ToUV() {return mUV;}
};
With this function, I am able to get the UV coordinate at a specific XYZ position:
Point Math3D::TriangleXYZToUV(Vector thePos, Vertex* theTriangle)
{
Vector aTr1=theTriangle->ToVector()-(theTriangle+1)->ToVector();
Vector aTr2=theTriangle->ToVector()-(theTriangle+2)->ToVector();
Vector aF1 = theTriangle->ToVector()-thePos;
Vector aF2 = (theTriangle+1)->ToVector()-thePos;
Vector aF3 = (theTriangle+2)->ToVector()-thePos;
float aA=aTr1.Cross(aTr2).Length();
float aA1=aF2.Cross(aF3).Length()/aA;
float aA2=aF3.Cross(aF1).Length()/aA;
float aA3=aF1.Cross(aF2).Length()/aA;
Point aUV=(theTriangle->ToUV()*aA1)+((theTriangle+1)->ToUV()*aA2)+((theTriangle+2)->ToUV()*aA3);
return aUV;
}
I attempted to reverse-engineer this to make a function that gets the XYZ coordinate from a specific UV position:
Vector Math3D::TriangleUVToXYZ(Point theUV, Vertex* theTriangle)
{
Point aTr1=theTriangle->ToUV()-(theTriangle+1)->ToUV();
Point aTr2=theTriangle->ToUV()-(theTriangle+2)->ToUV();
Point aF1 = theTriangle->ToUV()-theUV;
Point aF2 = (theTriangle+1)->ToUV()-theUV;
Point aF3 = (theTriangle+2)->ToUV()-theUV;
float aA=gMath.Abs(aTr1.Cross(aTr2)); // NOTE: Point::Cross looks like this: const float Cross(const Point &thePoint) const {return mX*thePoint.mY-mY*thePoint.mX;}
float aA1=aF2.Cross(aF3)/aA;
float aA2=aF3.Cross(aF1)/aA;
float aA3=aF1.Cross(aF2)/aA;
Vector aXYZ=(theTriangle->ToVector()*aA1)+((theTriangle+1)->ToVector()*aA2)+((theTriangle+2)->ToVector()*aA3);
return aXYZ;
}
This works MOST of the time. However, the result seems to "approach" the right-angled corner of the triangle exponentially, or something like that; I'm not really sure what's going on, except that the result gets wildly inaccurate the closer it gets to the right angle.
What do I need to do to this TriangleUVToXYZ function to make it return accurate results?
I haven't tested your implementation, but you only need to compute two parametric coordinates - the third being redundant since they should sum to 1.
Vector Math3D::TriangleUVToXYZ(Point theUV, Vertex* theTriangle)
{
    // T2-T1, T3-T1, P-T1
    Point aTr12 = theTriangle[1].ToUV() - theTriangle[0].ToUV();
    Point aTr13 = theTriangle[2].ToUV() - theTriangle[0].ToUV();
    Point aP1 = theUV - theTriangle[0].ToUV();
    // don't need Abs() for the denominator
    float aA23 = aTr12.Cross(aTr13);
    // parametric coordinates [s,t]
    // s = (P-T1)x(T3-T1) / (T2-T1)x(T3-T1)   -- weight of the second vertex
    // t = (P-T1)x(T2-T1) / (T3-T1)x(T2-T1)   -- weight of the third vertex
    float aS = aP1.Cross(aTr13) / aA23;
    float aT = aP1.Cross(aTr12) / -aA23;
    // XYZ = V1 + s(V2-V1) + t(V3-V1)
    return theTriangle[0].ToVector()
        + aS * (theTriangle[1].ToVector() - theTriangle[0].ToVector())
        + aT * (theTriangle[2].ToVector() - theTriangle[0].ToVector());
}
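As a quick sanity check (a hypothetical helper, assuming the Vector/Point/Vertex types shown in the question and that gMath is an instance of Math3D, as the gMath.Abs() call above suggests), you can round-trip a known interior point through both functions:
void SanityCheckTriangle(Vertex* theTriangle)
{
    // pick a point well inside the triangle (its centroid)
    Vector aCentroid = (theTriangle[0].ToVector()
                      + theTriangle[1].ToVector()
                      + theTriangle[2].ToVector()) * (1.0f / 3.0f);
    Point aUV = gMath.TriangleXYZToUV(aCentroid, theTriangle);
    Vector aBack = gMath.TriangleUVToXYZ(aUV, theTriangle);
    // aBack should match aCentroid to within floating-point error;
    // points near the vertices are the interesting cases for the reported bug.
}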
Related
In a raycaster I am developing, I am trying to implement hemisphere random sampling, with the option to rotate the hemisphere toward a given direction and then take a random point on it.
The first version worked fine because the sampling was uniform, and changing the direction was just a matter of flipping to the other hemisphere, which was simple.
Vec3f UniformSampleSphere() {
const Vec2f& u = GetVec2f(); // get two random numbers
float z = 1 - 2 * u.x;
float r = std::sqrt(std::max((float)0, (float)1 - z * z));
float phi = 2 * PI_F * u.y;
return Vec3f(r * std::cos(phi), r * std::sin(phi), z);
}
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
auto toReturn = UniformSampleSphere();
if (Dot(toReturn - direction, toReturn) < 0)
toReturn = -toReturn;
return toReturn;
}
But with cosine-weighted hemisphere sampling I am having trouble rotating the hemisphere properly and finding a random direction in the correctly rotated hemisphere.
On the left of the picture you can see what is working now; on the right is what it should look like after applying the magic rotation, which is the big thing I want.
So the final function will be something like this:
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
auto toReturn = CosineSampleHemisphere();
/*
Some magic here that rotates to correct direction of hemisphere
*/
return toReturn;
}
I used the code from Cosine weighted hemisphere sampling.
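For reference, one common way to do the missing rotation (a sketch of the usual tangent-frame approach, not necessarily what the linked answer does) is to draw the cosine-weighted sample in a local frame where +Z is the surface normal and then express it in an orthonormal basis built around direction. It assumes Vec3f has x/y/z members and an (x, y, z) constructor, as in the question, that direction is normalized, and that <cmath> is available; everything else is written out by hand:
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    Vec3f n = direction;                       // assumed normalized
    // pick a tangent that is guaranteed not parallel to n, then normalize it
    Vec3f t = (std::fabs(n.x) > std::fabs(n.y)) ? Vec3f(-n.z, 0, n.x)
                                                : Vec3f(0, n.z, -n.y);
    float tLen = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = Vec3f(t.x / tLen, t.y / tLen, t.z / tLen);
    Vec3f b = Vec3f(n.y * t.z - n.z * t.y,     // b = n x t
                    n.z * t.x - n.x * t.z,
                    n.x * t.y - n.y * t.x);
    Vec3f local = CosineSampleHemisphere();    // sampled around +Z, as in the question
    // express the local sample in the (t, b, n) basis, so +Z maps onto direction
    return Vec3f(t.x * local.x + b.x * local.y + n.x * local.z,
                 t.y * local.x + b.y * local.y + n.y * local.z,
                 t.z * local.x + b.z * local.y + n.z * local.z);
}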
I have a problem with the syntax of the function std::transform. I have a structure AirportInfo that contains information about airports. Each structure is then stored in a dictionary, so that every airport has a unique ID. In the structure there is a vector of pairs m_routes, which contains the ID of the destination airport and also whether the flight is direct or not. (In this case only direct flights are to be considered, because all non-direct flights have already been deleted, so the second item of the pair will always be 0.) The function calculateDistanceBetween returns the distance between two airports from their coordinates, which are also stored in the structure, in pos.
Now I have to calculate the distance for every route, but I cannot get the syntax right :( Any help will be appreciated, thank you!
This piece of code works
// Calculates the distance between two points on earth specified by longitude/latitude.
// Function taken and adapted from http://www.codeproject.com/Articles/22488/Distance-using-Longitiude-and-latitude-using-c
float calculateDistanceBetween(float lat1, float long1, float lat2, float long2)
{
// main code inside the class
float dlat1 = lat1 * ((float)M_PI / 180.0f);
float dlong1 = long1 * ((float)M_PI / 180.0f);
float dlat2 = lat2 * ((float)M_PI / 180.0f);
float dlong2 = long2 * ((float)M_PI / 180.0f);
float dLong = dlong1 - dlong2;
float dLat = dlat1 - dlat2;
float aHarv = pow(sin(dLat / 2.0f), 2.0f) + cos(dlat1) * cos(dlat2) * pow(sin(dLong / 2), 2);
float cHarv = 2 * atan2(sqrt(aHarv), sqrt(1.0f - aHarv));
// earth's radius from wikipedia varies between 6,356.750 km and 6,378.135 km
// The IUGG value for the equatorial radius of the Earth is 6378.137 km
const float earth = 6378.137f;
return earth * cHarv;
}
struct AirportInfo
{
std::string m_name;
std::string m_city;
std::string m_country;
float pos[2]; // x: latitude, y: longitude
std::vector<std::pair<int, int>> m_routes; // dest_id + numStops
std::vector<float> m_routeLengths;
float m_averageRouteLength;
};
Here is what causes the trouble:
//- For each route in AirportInfo::m_routes, calculate the distance between start and destination. Store the results in AirportInfo::m_routeLengths. Use std::transform() and calculateDistanceBetween().
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{ //loop all structures
for(int i = 0; i < airportInfo.size(); i++ ){
// START END SAVE
std::transform(airportInfo[i].pos[0], airportInfo[i].pos[1], /*...*/ , airportInfo[i].m_routeLengths.begin(),
calculateDistanceBetween);
}
std::cout << "Calculate distance for each route" << std::endl;
}
Use std::back_inserter(airportInfo[i].m_routeLengths) (and if performance is important, reserve the vector sizes in advance) instead of airportInfo[i].m_routeLengths.begin(). Also, iterating by index when there is nothing "enforcing" that the keys in the map run from 0 to map.size()-1 is not safe; you should prefer using a vector for the shown use case.
I think this is something like what you want:
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{
for(int i = 0; i < airportInfo.size(); i++ )
{
float currentPosX = airportInfo.at(i).pos[0];
float currentPosY = airportInfo.at(i).pos[1];
std::transform(airportInfo.begin(), airportInfo.end(), std::back_inserter(airportInfo.at(i).m_routeLengths), [&] (const auto& otherAirport)
{
return calculateDistanceBetween(currentPosX, currentPosY, otherAirport.second.pos[0], otherAirport.second.pos[1]);
});
}
}
Example in Godbolt
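If the goal really is one length per entry in m_routes (as the comment in the question says), rather than one per airport, a variation along these lines may be closer to that; it is only a sketch and assumes every dest_id stored in m_routes is present as a key in airportInfo:
// requires <algorithm>, <iterator>, <map>, <utility>
void calculateDistancePerRoute(std::map<int, AirportInfo>& airportInfo)
{
    for (auto& entry : airportInfo)
    {
        AirportInfo& airport = entry.second;
        airport.m_routeLengths.clear();
        airport.m_routeLengths.reserve(airport.m_routes.size());
        std::transform(airport.m_routes.begin(), airport.m_routes.end(),
                       std::back_inserter(airport.m_routeLengths),
                       [&](const std::pair<int, int>& route)
                       {
                           const AirportInfo& dest = airportInfo.at(route.first); // dest_id
                           return calculateDistanceBetween(airport.pos[0], airport.pos[1],
                                                           dest.pos[0], dest.pos[1]);
                       });
    }
}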
Problem
I am writing a ray tracer as a use case for a specific machine learning approach in Computer Graphics.
My problem is that, when I try to find the intersection between a ray and a surface, the result is not exact.
Basically, if I am scattering a ray from point O towards a surface located at (x,y,z), where z = 81, I would expect the solution to be something like S = (x,y,81). The problem is: I get a solution like (x,y,81.000000005).
This is of course a problem, because following operations depend on that solution, and it needs to be the exact one.
Question
My question is: how do people in Computer Graphics deal with this problem? I tried to change my variables from float to double and it does not solve the problem.
Alternative solutions
I tried to use the function std::round(). This can only help in specific situations, but not when the exact solution has significant digits after the decimal point.
Same for std::ceil() and std::floor().
EDIT
This is how I calculate the intersection with a surface (rectangle) parallel to the xz plane.
First of all, I calculate the distance t between the origin of my Ray and the surface. If my Ray, in that specific direction, does not hit the surface, t is returned as 0.
class Rectangle_xy: public Hitable {
public:
    float x1, x2, z1, z2, y;   // extents in x and z; the rectangle lies in the plane at height y
    ...
    float intersect(const Ray &r) const { // returns distance, 0 if no hit
        float t = (y - r.o.y) / r.d.y;    // solve r.o.y + t * r.d.y = y
        const float x = r.o.x + r.d.x * t;
        const float z = r.o.z + r.d.z * t;
        if (x < x1 || x > x2 || z < z1 || z > z2 || t < 0) {
            t = 0;
            return 0;
        } else {
            return t;
        }
        ....
    }
Specifically, given a Ray and the id of an object in the list (that I want to hit):
inline Vec hittingPoint(const Ray &r, int &id) {
float t; // distance to intersection
if (!intersect(r, t, id))
return Vec();
const Vec& x = r.o + r.d * t;// ray intersection point (t calculated in intersect())
return x ;
}
The function intersect() in the previous snippet of code checks, for every Rectangle in the list rect, whether I intersect some object:
inline bool intersect(const Ray &r, float &t, int &id) {
const int n = NUMBER_OBJ; // number of objects in the scene
float d;
float inf = t = 1e20;
for (int i = 0; i < n; i++) {
if ((d = rect[i]->intersect(r)) && d < t) { // Distance of hit point
t = d;
id = i;
}
}
// Return the closest intersection, as a bool
return t < inf;
}
The coordinate is then obtained by evaluating the parametric equation of the line in 3D space:
Vec x = r.o + r.d * t;
where:
r.o: the ray origin, defined as Vec(float a, float b, float c).
r.d: the direction of the ray. As before: Vec(float d, float e, float f).
t: a float representing the distance between the object and the origin.
You could look into using std::numeric_limits<T>::epsilon() for your float/double comparison, and see if your result is within +-epsilon (suitably scaled) of the expected value.
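For example, a tolerant comparison helper along these lines is common (just a sketch; the absolute tolerance is scene-dependent, and mixing an absolute and a relative tolerance is usually more robust than epsilon alone):
#include <algorithm>
#include <cmath>
#include <limits>

// true when a and b are equal up to a mixed absolute/relative tolerance
bool nearlyEqual(float a, float b,
                 float absTol = 1e-5f,
                 float relTol = 8.0f * std::numeric_limits<float>::epsilon())
{
    float diff = std::fabs(a - b);
    float scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= std::max(absTol, relTol * scale);
}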
An alternative would be to not ray trace towards a point; maybe just place a relatively small box or sphere there instead.
I am trying to create a uvn quaternion-based camera in OpenGL, having used a variety of tutorials listed below, and having read up on quaternions and axis-angle rotation. I am left with a peculiar bug which I cannot seem to fix.
Basically the camera seems to work fine up until the camera is rotated approximately 45 degrees from +z; at this point, tilting the camera up or down seems to tilt the camera around its target axis, turning the up vector.
By the time the camera faces along -z tilting up or down gives the illusion of the opposite, up tilts down and down tilts up.
I have seen other implementations suggesting the use of a non uvn system where quaternions are accumulated into one which describes the current orientation as a delta from some arbitrary start angle. This sounds great however I can't seem to work out exactly how I would implement this, specifically the conversion from this to a view matrix.
Elsewhere on SO I read about splitting the rotation into two quaternions that represent yaw and pitch separately, but I'm not convinced that this is the cause of the problem, since in this context (correct me if I am wrong) my understanding is that the order in which you apply the two rotations does not matter.
Relevant Source Code Snippets:
Quaternion Operations
Quaternion<TValue> conjugate() const{
return Quaternion({ { -m_values[X], -m_values[Y], -m_values[Z], m_values[W] } });
};
Quaternion<TValue>& operator*=(const Quaternion<TValue>& rhs) {
TValue x, y, z, w;
w = rhs[W] * m_values[W] - rhs[X] * m_values[X] - rhs[Y] * m_values[Y] - rhs[Z] * m_values[Z];
x = rhs[W] * m_values[X] + rhs[X] * m_values[W] - rhs[Y] * m_values[Z] + rhs[Z] * m_values[Y];
y = rhs[W] * m_values[Y] + rhs[X] * m_values[Z] + rhs[Y] * m_values[W] - rhs[Z] * m_values[X];
z = rhs[W] * m_values[Z] - rhs[X] * m_values[Y] + rhs[Y] * m_values[X] + rhs[Z] * m_values[W];
m_values[X] = x;
m_values[Y] = y;
m_values[Z] = z;
m_values[W] = w;
return *this;
};
static Quaternion<TValue> rotation(Vector<3, TValue> axis, TValue angle){
float x, y, z, w;
TValue halfTheta = angle / 2.0f;
TValue sinHalfTheta = sin(halfTheta);
return Quaternion<TValue>({ { axis[X] * sinHalfTheta, axis[Y] * sinHalfTheta, axis[Z] * sinHalfTheta, cos(halfTheta) } });
};
Vector Rotation Operation
Vector<dimensions, TValue> rotate(const Vector<3, TValue> axis, float angle){
Quaternion<TValue> R = Quaternion<TValue>::rotation(axis, angle);
Quaternion<TValue> V = (*this);
Vector<dimensions, TValue> result = R * V * R.conjugate();
return result;
}
Camera Methods
Camera::Camera(Vector<2, int> windowSize, float fov, float near, float far):
m_uvn(Matrix<4, float>::identity()),
m_translation(Matrix<4, float>::identity()),
m_ar(windowSize[Dimensions::X] / (float)windowSize[Dimensions::Y]),
m_fov(fov),
m_near(near),
m_far(far),
m_position(),
m_forward({ { 0, 0, 1 } }),
m_up({ { 0, 1, 0 } })
{
setViewMatrix(Matrix<4, float>::identity());
setProjectionMatrix(Matrix<4, float>::perspective(m_ar, m_near, m_far, m_fov));
};
Matrix<4, float> Camera::getVPMatrix() const{
return m_vp;
};
const Vector<3, float> Camera::globalY = Vector<3, float>({ { 0, 1, 0 } });
void Camera::setProjectionMatrix(const Matrix<4, float> p){
m_projection = p;
m_vp = m_projection * m_view;
};
void Camera::setViewMatrix(const Matrix<4, float> v){
m_view = v;
m_vp = m_projection * m_view;
};
void Camera::setTranslationMatrix(const Matrix<4, float> t){
m_translation = t;
setViewMatrix(m_uvn * m_translation);
}
void Camera::setPosition(Vector<3, float> position){
if (position != m_position){
m_position = position;
setTranslationMatrix(Matrix<4, float>::translation(-position));
}
};
void Camera::moveForward(float ammount){
setPosition(m_position + (m_forward * ammount));
}
void Camera::moveRight(float ammount){
setPosition(m_position + (getRight() * ammount));
}
void Camera::moveUp(float ammount){
setPosition(m_position + (m_up * ammount));
}
void Camera::setLookAt(Vector<3, float> target, Vector<3, float> up){
Vector<3, float> newUp = up.normalize();
Vector<3, float> newForward = target.normalize();
if (newUp != m_up || newForward != m_forward){
m_up = newUp;
m_forward = newForward;
Vector<3, float> newLeft = getLeft();
m_up = newLeft * m_forward;
m_uvn = generateUVN();
setViewMatrix(m_uvn * m_translation);
}
};
void Camera::rotateX(float angle){
Vector<3, float> hAxis = (globalY * m_forward).normalize();
m_forward = m_forward.rotate(hAxis, angle).normalize();
m_up = (m_forward * hAxis).normalize();
m_uvn = generateUVN();
setViewMatrix(m_translation * m_uvn);
}
void Camera::rotateY(float angle){
Vector<3, float> hAxis = (globalY * m_forward).normalize();
m_forward = m_forward.rotate(globalY, angle).normalize();
m_up = (m_forward * hAxis).normalize();
m_uvn = generateUVN();
setViewMatrix(m_translation * m_uvn);
}
Vector<3, float> Camera::getRight(){
return (m_forward * m_up).normalize();
}
Vector <3, float> Camera::getLeft(){
return (m_up * m_forward).normalize();
}
I am guessing that the problem is either in my implementation of the quaternion or in the way I am using it, but due to the complex nature of the system I cannot seem to pin the problem down any further than that. Given the weird bugs being experienced, is there simply something wrong with the way I am trying to implement the camera?
Tutorials
https://www.youtube.com/watch?v=1Aw1PDu33PI
http://www.gamedev.net/page/resources/_/technical/math-and-physics/a-simple-quaternion-based-camera-r1997
Quaternion/Vector Math
http://mathworld.wolfram.com/Quaternion.html
https://en.wikipedia.org/wiki/Cross_product
http://ogldev.atspace.co.uk/www/tutorial13/tutorial13.html
http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/quaternions/index.htm
Old question but still a relevant topic, so I'll give some pointers. The thing to remember is that a quaternion encodes a 3D rotation. Unit quaternions (and, to a lesser degree, pure quaternions) provide a consistent way to say "this rotation is this value". The benefit over Euler angles is that Euler angles attempt to reconstruct an orientation from rotations about separate axes, whereas the quaternion correlates directly with the orientation and can avoid gimbal lock.
Specifically, for a quaternion camera, let Q = {1,0,0,0} where w = 1; the matrix corresponding to this quaternion is the identity matrix. Therefore any valid unit quaternion, when decomposed into a 3x3 matrix, gives you the (usually world-space) rotation of the camera. However, you don't even need that, because you can define your camera space so that, say, X = {1,0,0}, Y = {0,1,0}, Z = {0,0,-1}; then rotate these unit axes by your camera's orientation quaternion, and each resulting vector is the transformed unit axis. These give you your right, up and front vectors, which can be used to build the 3x3 rotation of the view transform.
Moving the camera should be relatively straightforward at that point. The linear movement vectors can easily be reconstructed and applied to the camera position, and the angular movement can be achieved by multiplying the camera quaternion with a rotation about the normal of the plane in which the rotation happens. For example, turning left and right is a rotation that happens in the XZ plane, therefore a rotation about the Y axis (as a unit quaternion, not a pure quaternion) would be multiplied with the camera quaternion, producing the desired rotational effect.
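As a concrete illustration of the above (written against GLM rather than the question's own Vector/Matrix/Quaternion classes, so all names here are assumptions, not the original code): store a single orientation quaternion, rotate it incrementally, and derive the basis vectors and view matrix from it.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct QuatCamera {
    glm::vec3 position{0.0f, 0.0f, 0.0f};
    glm::quat orientation{1.0f, 0.0f, 0.0f, 0.0f}; // (w, x, y, z) = identity

    // yaw about the world up axis, pitch about the camera's local right axis
    void rotate(float yawRadians, float pitchRadians) {
        glm::quat yaw   = glm::angleAxis(yawRadians,   glm::vec3(0.0f, 1.0f, 0.0f));
        glm::quat pitch = glm::angleAxis(pitchRadians, glm::vec3(1.0f, 0.0f, 0.0f));
        // pre-multiply for a world-space rotation, post-multiply for a local-space one
        orientation = glm::normalize(yaw * orientation * pitch);
    }

    glm::mat4 viewMatrix() const {
        glm::vec3 forward = orientation * glm::vec3(0.0f, 0.0f, -1.0f);
        glm::vec3 up      = orientation * glm::vec3(0.0f, 1.0f,  0.0f);
        return glm::lookAt(position, position + forward, up);
    }
};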
I'm making a application for school in which I have to click a particular object.
EDIT: This is being made in 2D
I have a rectangle, I rotate this rectangle by X.
The rotation of the rectangle means that my rectangle's (x, y, width, height) now describe a new axis-aligned rectangle around the rotated rectangle.
http://i.stack.imgur.com/MejMA.png
(excuse me for my terrible paint skills)
The Black lines describe the rotated rectangle, the red lines are my new rectangle.
I need to find out if my mouse is within the black rectangle or not. Whatever rotation I do I already have a function for getting the (X,Y) for each corner of the black rectangle.
Now I'm trying to implement this Check if point is within triangle (The same side technique).
So I can either check whether my mouse is within each triangle, or, if there's a way to check whether my mouse is in the rotated rectangle directly, that would be even better.
I practically understand everything written in the triangle document, but I simply don't have the math skills to calculate the cross product and the dot product of the 2 cross products.
This is supposed to be the cross product:
a × b = |a| |b| sin(θ) n
|a| is the magnitude (length) of vector a
|b| is the magnitude (length) of vector b
θ is the angle between a and b
n is the unit vector at right angles to both a and b
But how do I calculate the unit vector that is at right angles to both a and b?
And how do I get the magnitude of a vector?
EDIT:
I forgot to ask about the calculation of the dot product between the 2 cross products.
function SameSide(p1,p2, a,b)
cp1 = CrossProduct(b-a, p1-a)
cp2 = CrossProduct(b-a, p2-a)
if DotProduct(cp1, cp2) >= 0 then return true
else return false
Thank you everyone for your help, I think I got the hang of it now; I wish I could accept multiple answers.
If you are having to carry out loads of checks, I would shy away from using square root functions: they are computationally expensive. For comparison purposes, just multiply everything by itself and you can bypass the square rooting:
magnitude of vector = length of vector
If vector is defined as float[3] length can be calculated as follows:
double magnitude = sqrt( a[0]*a[0] + a[1]*a[1] + a[2]*a[2] );
However that is expensive computationally so I would use
double magnitudeSquared = a[0]*a[0] + a[1]*a[1] + a[2]*a[2];
Then modify any comparative calculations to use the squared version of the distance or magnitude and it will be more performant.
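For instance, when deciding which of two points is closer to the origin, comparing squared lengths gives the same ordering without any sqrt (a small illustrative sketch in the double[3] representation used above):
// squared length of a vector stored as double[3]
double magnitudeSquared(const double a[3])
{
    return a[0]*a[0] + a[1]*a[1] + a[2]*a[2];
}

// true if a is closer to the origin than b; no sqrt needed, because sqrt is
// monotonic and so comparing squared lengths preserves the ordering
bool closerToOrigin(const double a[3], const double b[3])
{
    return magnitudeSquared(a) < magnitudeSquared(b);
}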
For the cross product, please forgive me if this maths is shaky, it has been a couple of years since I wrote functions for this (code re-use is great but terrible for remembering things):
double c[3];
c[0] = ( a[1]*b[2] - a[2]*b[1] );
c[1] = ( a[2]*b[0] - a[0]*b[2] );
c[2] = ( a[0]*b[1] - a[1]*b[0] );
To simplify it all I would put a vec3d in a class of its own, with a very simple representation being:
class vec3d
{
public:
    float x, y, z;

    vec3d crossProduct(const vec3d& secondVector) const
    {
        vec3d retval;
        retval.x = (y * secondVector.z) - (secondVector.y * z);
        retval.y = -(x * secondVector.z) + (secondVector.x * z);
        retval.z = (x * secondVector.y) - (y * secondVector.x);
        return retval;
    }

    // to get the unit vector divide by the vector's length...
    void normalise() // this will make the vector into a 1 unit long variant of itself, or a unit vector
    {
        // compute the length once; dividing component by component while
        // recomputing the magnitude each time would skew the result
        double mag = magnitude();
        if (mag > 0.0001) {
            x = x / mag;
            y = y / mag;
            z = z / mag;
        }
    }

    double magnitude() const
    {
        return sqrt((x*x) + (y*y) + (z*z));
    }

    double magnitudeSquared() const
    {
        return ((x*x) + (y*y) + (z*z));
    }
};
A fuller implementation of a vec3d class can be had from one of my old 2nd year coding excercises: .h file and .cpp file.
And here is a minimalist 2d implementation (doing this off the top of my head so forgive the terse code please, and let me know if there are errors):
vec2d.h
#ifndef VEC2D_H
#define VEC2D_H
#include <iostream>
using namespace std;
class Vec2D {
private:
double x, y;
public:
Vec2D(); // default, takes no args
Vec2D(double, double); // user can specify init values
void setX(double);
void setY(double);
double getX() const;
double getY() const;
double getMagnitude() const;
double getMagnitudeSquared() const;
double getMagnitude2() const;
Vec2D normalize() const;
double crossProduct(Vec2D secondVector);
Vec2D perpendicular() const; // renamed: C++ cannot overload on the return type alone
friend Vec2D operator+(const Vec2D&, const Vec2D&);
friend ostream &operator<<(ostream&, const Vec2D&);
};
double dotProduct(const Vec2D, const Vec2D);
#endif
vec2d.cpp
#include <iostream>
#include <cmath>
using namespace std;
#include "Vec2D.h"
// Constructors
Vec2D::Vec2D() { x = y = 0.0; }
Vec2D::Vec2D(double a, double b) { x = a; y = b; }
// Mutators
void Vec2D::setX(double a) { x = a; }
void Vec2D::setY(double a) { y = a; }
// Accessors
double Vec2D::getX() const { return x; }
double Vec2D::getY() const { return y; }
double Vec2D::getMagnitude() const { return sqrt((x*x) + (y*y)); }
double Vec2D::getMagnitudeSquared() const { return ((x*x) + (y*y)); }
double Vec2D::getMagnitude2() const { return getMagnitudeSquared(); }
double Vec2D::crossProduct(Vec2D secondVector) { return ((x * secondVector.getY()) - (y * secondVector.getX())); }
Vec2D Vec2D::perpendicular() const { return Vec2D(y, -x); }
Vec2D Vec2D::normalize() const { return Vec2D(x/getMagnitude(), y/getMagnitude());}
Vec2D operator+(const Vec2D& a, const Vec2D& b) { return Vec2D(a.x + b.x, a.y + b.y);}
ostream& operator<<(ostream& output, const Vec2D& a) { output << "(" << a.x << ", " << a.y << ")" << endl; return output;}
double dotProduct(const Vec2D a, const Vec2D b) { return a.getX() * b.getX() + a.getY() * b.getY();}
Check if a point is inside a triangle described by three vectors:
float calculateSign(Vec2D v1, Vec2D v2, Vec2D v3)
{
return (v1.getX() - v3.getX()) * (v2.getY() - v3.getY()) - (v2.getX() - v3.getX()) * (v1.getY() - v3.getY());
}
bool isPointInsideTriangle(Vec2D point2d, Vec2D v1, Vec2D v2, Vec2D v3)
{
bool b1, b2, b3;
// the < 0.0f is arbitrary, could have just as easily been > (would have flipped the results but would compare the same)
b1 = calculateSign(point2d, v1, v2) < 0.0f;
b2 = calculateSign(point2d, v2, v3) < 0.0f;
b3 = calculateSign(point2d, v3, v1) < 0.0f;
return ((b1 == b2) && (b2 == b3));
}
In the code above, if the point is inside the triangle you will get true returned :)
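To apply this to the original rotated-rectangle question, you can split the rectangle into two triangles and test both. A sketch, assuming the four corners a, b, d, c are given in order around the rectangle, so that (a, b, d) and (a, d, c) are its two halves:
bool isPointInsideRectangle(Vec2D p, Vec2D a, Vec2D b, Vec2D d, Vec2D c)
{
    // corners in perimeter order a -> b -> d -> c; the diagonal a-d splits
    // the rectangle into two triangles, so the point must be in one of them
    return isPointInsideTriangle(p, a, b, d) || isPointInsideTriangle(p, a, d, c);
}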
Hope this helps, let me know if you need more info or a fuller vec3d or 2d class and I can post:)
Addendum
I have added in a small 2d-vector class, to show the differences in the 2d and 3d ones.
The magnitude of a vector is its length. In C++, if you have a vector represented as a double[3], you would calculate the length via
#include <math.h>
double a_length = sqrt( a[0]*a[0] + a[1]*a[1] + a[2]*a[2] );
However, I understand what you actually want is the cross product? In that case, you may want to calculate it directly. The result is a vector, i.e. c = a x b.
You code it like this for example:
double c[3];
c[0] = ( a[1]*b[2] - a[2]*b[1] );
c[1] = ( a[2]*b[0] - a[0]*b[2] );
c[2] = ( a[0]*b[1] - a[1]*b[0] );
You can calculate the magnitude of a 2D vector by sqrt(x*x + y*y). You can also calculate the 2D cross product more simply: a x b = a.x * b.y - a.y * b.x. Checking that a point is inside a triangle can be done by comparing areas of 4 triangles. For example, let a be the area of the source triangle and b, c, d be the areas of the triangles formed by the point and each pair of the source triangle's vertices. If b + c + d = a then the point is inside. Computing the area of a triangle is simple: if a and b are the two edge vectors leaving one vertex of the triangle, the area of the triangle is |a x b| / 2.
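A sketch of that area test, reusing the Vec2D class from the earlier answer (the tolerance is needed because, in floating point, the three sub-areas rarely sum to the whole area exactly; relies on <cmath>):
// unsigned area of the triangle (a, b, c), via the 2D cross product of two edge vectors
double triangleArea(const Vec2D& a, const Vec2D& b, const Vec2D& c)
{
    Vec2D ab(b.getX() - a.getX(), b.getY() - a.getY());
    Vec2D ac(c.getX() - a.getX(), c.getY() - a.getY());
    return std::fabs(ab.crossProduct(ac)) / 2.0;
}

bool isInsideByArea(const Vec2D& p, const Vec2D& v1, const Vec2D& v2, const Vec2D& v3)
{
    double whole = triangleArea(v1, v2, v3);
    double parts = triangleArea(p, v1, v2)
                 + triangleArea(p, v2, v3)
                 + triangleArea(p, v3, v1);
    return std::fabs(parts - whole) < 1e-9 * (whole + 1.0);
}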
One simple way without getting into vectors is to check for area.
For example, let's say you have a rectangle with corners A, B, C, D and a point P.
First calculate the area of the rectangle: simply find the height and width of the rectangle and multiply them.
B D
| /
| /
|/____ C
A
For calculating the height and width, take one point, let's say A, and find its distance from the other three points, i.e. AB, AC, AD. The 1st and 2nd minimum will be the width and height; the maximum will be the diagonal length.
Now store the points from which you get the height and width; let's say those points are B and C.
So now you know how the rectangle looks, i.e.
B _____ D
| |
|_____|
A C
Then calculate the sum of the areas of triangles ACP, ABP, BDP, CDP (use Heron's formula to compute each triangle's area). If it equals the area of the rectangle, point P is inside; otherwise it is outside the rectangle.
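A rough sketch of that check (all names made up for illustration, corners assumed to be known in perimeter order around the rectangle as in the diagram above, and a small tolerance used for the floating-point comparison):
#include <algorithm>
#include <cmath>

struct Pt { double x, y; };

double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Heron's formula: triangle area from its three side lengths
double heronArea(Pt a, Pt b, Pt c)
{
    double ab = dist(a, b), bc = dist(b, c), ca = dist(c, a);
    double s = (ab + bc + ca) / 2.0;
    return std::sqrt(std::max(0.0, s * (s - ab) * (s - bc) * (s - ca)));
}

// corners in perimeter order A -> B -> D -> C (A adjacent to both B and C)
bool pointInRectangleByArea(Pt p, Pt A, Pt B, Pt D, Pt C)
{
    double rectArea = dist(A, B) * dist(A, C); // height * width
    double sum = heronArea(A, B, p) + heronArea(B, D, p)
               + heronArea(D, C, p) + heronArea(C, A, p);
    return std::fabs(sum - rectArea) < 1e-9 * (rectArea + 1.0);
}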