Ray tracing sphere reflection bug - C++

I am trying to implement the ray tracing algorithm, and I have some trouble computing the reflected rays of spherical objects. It seems that for some particular rays, the reflected ray just passes through and is collinear with the traced ray.
Below is how I record the ray-sphere intersection:
bool Sphere::intersectLocal(const ray& r, isect& i) const {
    Vec3d P = r.getPosition();
    Vec3d D = r.getDirection();
    //D.normalize();
    double a = dot(D, D);
    double b = 2 * dot(P, D);
    double c = dot(P, P) - 1;
    double delta = b * b - 4 * a * c;
    if (delta < 0)
        return false;
    if (delta == 0) {
        // parenthesized: -b / 2 * a would evaluate as (-b / 2) * a
        double t = -b / (2 * a);
        Vec3d Q = P + t * D;
        Vec3d N = Q;
        N.normalize();
        i.setT(t);
        i.setN(N);
        i.setObject(this);
        return true;
    }
    if (delta > 0) {
        double t1 = (-b - sqrt(delta)) / (2 * a);
        double t2 = (-b + sqrt(delta)) / (2 * a);
        double t;
        if (t1 > 0) t = t1;
        else if (t2 > 0) t = t2;
        else return false;
        Vec3d N = P + t * D;
        N.normalize();
        i.setT(t);
        i.setN(N);
        i.setObject(this);
        return true;
    }
    return false;
}
And this is how I compute the reflected ray for each intersection:
isect i;
if (scene->intersect(r, i)) {
    // An intersection occurred!
    const Material& m = i.getMaterial();
    double t = i.t;
    Vec3d N = i.N;
    Vec3d I = m.shade(scene, r, i); // local illumination
    if (!m.kr(i).iszero() && depth >= 0) {
        // compute reflection direction
        Vec3d raydir = r.getDirection();
        Vec3d refldir = 2 * dot(-raydir, i.N) * i.N + raydir;
        refldir.normalize();
        ray reflectionRay = ray(r.at(i.t), refldir, ray::RayType::REFLECTION);
        Vec3d reflection = traceRay(reflectionRay, thresh, depth - 1);
        Vec3d R = reflection;
        I += R;
    }
    return I;
} else {
    // No intersection. This ray travels to infinity, so we color
    // it according to the background color, which in this (simple)
    // case is just black.
    return Vec3d(0.0, 0.0, 0.0);
}
The code above seems to work fine for most of the points on the sphere where the rays intersect, but for some rays it does not reflect as I expected.

If I see it right, this makes the normal face the same direction as the ray. So with ray == normal == reflected_ray, nothing gets reflected.
Vec3d Q = P + t * D;
Vec3d N = Q;
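
One way to guard against that (a minimal sketch, reusing the question's Vec3d/dot helpers; the flip condition is my assumption, not code from the question): make the shading normal always oppose the incoming ray before computing the reflection.

// Sketch: ensure the normal opposes the ray before reflecting.
Vec3d N = Q;            // outward normal of the unit sphere at the hit point
N.normalize();
if (dot(N, D) > 0)      // normal points along the ray (e.g. hit from inside)
    N = -N;             // flip it so 2 * dot(-D, N) * N + D reflects properly

Offsetting the reflection-ray origin slightly along the normal (e.g. r.at(i.t) + 1e-4 * N) is a common companion fix, so the reflected ray does not immediately re-hit the same surface.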

About errors in floating-point arithmetic and how to deal with them:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Here you can find how to compare floating-point numbers; searching for "relative absolute compare floating" will turn up more information.
https://floating-point-gui.de/errors/comparison/
This is an excerpt from my code in C#. Almost never use an absolute compare.
public static bool IsAlmostRelativeEquals(this double d1, double d2, double epsilon)
{
    double absDiff = Math.Abs(d1 - d2);
    if (double.IsPositiveInfinity(absDiff))
        return false;
    if (absDiff < epsilon)
        return true;
    double absMax = Math.Max(Math.Abs(d1), Math.Abs(d2));
    return Math.Abs(d1 - d2) <= epsilon * absMax;
}

public static bool IsAlmostZero(this double d, double epsilon)
{
    double abs = Math.Abs(d);
    if (double.IsPositiveInfinity(abs))
        return false;
    return abs < epsilon;
}
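
Since the question itself is C++, a rough C++ port of the same idea (my sketch, not part of the original excerpt) could replace the exact delta == 0 test in the sphere code:

#include <algorithm>
#include <cmath>

// Sketch: relative comparison with an absolute fallback near zero.
bool isAlmostEqual(double d1, double d2, double epsilon) {
    double absDiff = std::fabs(d1 - d2);
    if (std::isinf(absDiff))
        return false;
    if (absDiff < epsilon)                   // absolute fallback near zero
        return true;
    double absMax = std::max(std::fabs(d1), std::fabs(d2));
    return absDiff <= epsilon * absMax;      // relative comparison otherwise
}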

Related

GLSL Sphere - ray intersection geometric solution

I'm trying to implement sphere-ray intersection in GLSL, both the geometric and the analytical solution. I'm having trouble solving the geometric one; it should have something to do with how I return true or false:
bool hitSphere(Ray ray, Sphere sphere, float t_min, float t_max, out float t_out) {
    // Geometric solution
    float R2 = sphere.radius * sphere.radius;
    vec3 L = sphere.position - ray.origin;
    float tca = dot(L, normalize(ray.direction));
    // if(tca < 0) return false;
    float D2 = dot(L, L) - tca * tca;
    if (D2 > R2) return false;
    float thc = sqrt(R2 - D2);
    float t0 = tca - thc;
    float t1 = tca + thc;
    if (t0 < t_max && t0 > t_min) {
        t_out = t0;
        return true;
    }
    if (t1 < t_max && t1 > t_min) {
        t_out = t1;
        return true;
    }
    return false;
}
I think the problem is with how I deal with t0 and t1 for the none, one, or both intersection cases.
Edit: the analytic version that does work:
vec3 oc = ray.origin - sphere.position;
float a = dot(ray.direction, ray.direction);
float b = dot(oc, ray.direction);
float c = dot(oc, oc) - sphere.radius * sphere.radius;
float discriminant = b * b - a * c;
if (discriminant > 0.0f) {
    if (b > 0)
        t_out = (-b + sqrt(discriminant)) / a;
    else
        t_out = (-b - sqrt(discriminant)) / a;
    if (t_out < t_max && t_out > t_min) {
        return true;
    }
}
return false;
The issue is caused by t_out. The algorithm has to compute t_out such that X is the point where the ray intersects the surface of the sphere, for:
X = ray.origin + ray.direction * t_out;
In the working algorithm, t_out depends on the length of ray.direction: t_out becomes smaller as the magnitude of ray.direction grows.
In the algorithm which doesn't work, ray.direction is normalized:
float tca = dot(L, normalize(ray.direction));
Hence t_out is computed for a ray direction of length 1. Actually you compute a t_out' where t_out' = t_out * length(ray.direction), because ray.origin + normalize(ray.direction) * t_out' equals ray.origin + ray.direction * (t_out' / length(ray.direction)).
Divide t0 (respectively t1) by the length of ray.direction:
bool hitSphere_2(Ray ray, Sphere sphere, float t_min, float t_max, out float t_out)
{
    float R2 = sphere.radius * sphere.radius;
    vec3 L = sphere.position - ray.origin;
    float tca = dot(L, normalize(ray.direction));
    // if(tca < 0) return false;
    float D2 = dot(L, L) - tca * tca;
    if (D2 > R2) return false;
    float thc = sqrt(R2 - D2);
    float t0 = tca - thc;
    float t1 = tca + thc;
    if (t0 < t_max && t0 > t_min) {
        t_out = t0 / length(ray.direction); // <---
        return true;
    }
    if (t1 < t_max && t1 > t_min) {
        t_out = t1 / length(ray.direction); // <---
        return true;
    }
    return false;
}

Problem with triangle-triangle collision detection [closed]

I'm developing a game engine for a university project and I can't get the collision detection system to work. I've found this paper that explains an algorithm for triangle-triangle collision detection, created by Chaman-Leopoldj, but somehow I can't implement it. I know it is a bit long, but the algorithm can be found on pages 8 and 22-24.
Here is the code I wrote.
This is the wrapper function:
bool Octree::triangleTriangleIntersection(glm::vec3 A, glm::vec3 B, glm::vec3 C, glm::vec3 P, glm::vec3 Q, glm::vec3 R) {
    glm::vec3 U = B - A;
    glm::vec3 V = C - A;
    glm::vec3 S = Q - P;
    glm::vec3 T = R - P;
    glm::vec3 AP = P - A;
    // NB: glm's operator* on vec3 is component-wise, not a cross product;
    // if the paper's products here are cross products, these should be
    // glm::cross(U, V), etc.
    float sigma = dot(U * V, U * V);
    glm::vec3 alpha = (S * (U * V)) / sigma;
    glm::vec3 beta = (T * (U * V)) / sigma;
    glm::vec3 gamma = (AP * (U * V)) / sigma;
    float alphau = dot(alpha, U);
    float alphav = dot(alpha, V);
    float alphauv = dot(alpha, U - V);
    float gammau = dot(gamma, U);
    float gammav = dot(gamma, V);
    float gammauv = dot(gamma, U - V);
    float betau = dot(beta, U);
    float betav = dot(beta, V);
    float betauv = dot(beta, U - V);
    float Xm, XM, Sm = 0, SM = 1;
    float Ym, YM, Tm = 0, TM = 1;
    if (findSolution_x(-gammau, alphau, betau, 1 - gammau, -1 - gammav, alphav, betav, -gammav, Xm, XM)) {
        if (Xm > Sm) Sm = Xm;
        if (XM < SM) SM = XM;
    }
    else {
        return false;
    }
    if (findSolution_x(-gammau, alphau, betau, 1 - gammau, -gammauv, alphauv, betauv, 1 - gammauv, Xm, XM)) {
        if (Xm > Sm) Sm = Xm;
        if (XM < SM) SM = XM;
    }
    else {
        return false;
    }
    if (findSolution_x(-1 - gammav, alphav, betav, -gammav, -gammauv, alphauv, betauv, 1 - gammauv, Xm, XM)) {
        if (Xm > Sm) Sm = Xm;
        if (XM < SM) SM = XM;
    }
    else {
        return false;
    }
    if (Sm > SM)
        return false;
    else {
        float delta = (SM - Sm) / 20;
        for (float s = Sm; s <= SM; s += delta) {
            if (findSolution_y(-gammau, alphau, betau, 1 - gammau, -1 - gammav, alphav, betav, -gammav, s, Ym, YM)) {
                if (Ym > Tm) Tm = Ym;
                if (YM < TM) TM = YM;
            }
            else {
                return false;
            }
            if (findSolution_y(-gammau, alphau, betau, 1 - gammau, -gammauv, alphauv, betauv, 1 - gammauv, s, Ym, YM)) {
                if (Ym > Tm) Tm = Ym;
                if (YM < TM) TM = YM;
            }
            else {
                return false;
            }
            if (findSolution_y(-1 - gammav, alphav, betav, -gammav, -gammauv, alphauv, betauv, 1 - gammauv, s, Ym, YM)) {
                if (Ym > Tm) Tm = Ym;
                if (YM < TM) TM = YM;
            }
            else {
                return false;
            }
            // NB: this if/else returns on the first loop iteration either way.
            if (Tm > TM)
                return false;
            else
                return true;
        }
    }
    return false;
}
solve_x
bool Octree::findSolution_x(float m, float a, float b, float n, float M, float A, float B, float N, float& Xm, float& XM) {
    const float epsilon = 0.00001f;
    float denom = (a * B - A * b);
    float Sm1, SM1;
    Sm1 = (m * B - N * b);
    SM1 = (n * B - M * b);
    if (b < 0 || B < 0) {
        Sm1 *= -1;
        SM1 *= -1;
    }
    Sm1 /= denom;
    SM1 /= denom;
    float Sm1Rounded = round(Sm1);
    float SM1Rounded = round(SM1);
    // parentheses fixed: compare abs(difference) against epsilon,
    // not abs(difference <= epsilon)
    if (abs(Sm1Rounded - Sm1) <= epsilon) Sm1 = Sm1Rounded;
    if (abs(SM1Rounded - SM1) <= epsilon) SM1 = SM1Rounded;
    Xm = Sm1;
    XM = SM1;
    if (denom == 0) { // NB: checked only after dividing by denom above
        Xm *= -1;
    }
    return true;
}
solve_y
bool Octree::findSolution_y(float m, float a, float b, float n, float M, float A, float B, float N, float x, float& Ym, float& YM) {
    const float epsilon = 0.00001f;
    float Sm1, SM1, Sm2, SM2;
    Sm1 = m - (a * x);
    Sm2 = M - (A * x);
    SM1 = n - (a * x);
    SM2 = N - (A * x);
    if (b < 0 || B < 0) {
        Sm1 *= -1;
        SM1 *= -1;
        Sm2 *= -1;
        SM2 *= -1;
    }
    if (Sm1 > SM1 || Sm2 > SM2) return false;
    Sm1 /= b;
    SM1 /= b;
    Sm2 /= B;
    SM2 /= B;
    float Sm1Rounded = round(Sm1);
    float SM1Rounded = round(SM1);
    float Sm2Rounded = round(Sm2);
    float SM2Rounded = round(SM2);
    // parentheses fixed: compare abs(difference) against epsilon
    if (abs(Sm1Rounded - Sm1) <= epsilon) Sm1 = Sm1Rounded;
    if (abs(SM1Rounded - SM1) <= epsilon) SM1 = SM1Rounded;
    if (abs(Sm2Rounded - Sm2) <= epsilon) Sm2 = Sm2Rounded;
    if (abs(SM2Rounded - SM2) <= epsilon) SM2 = SM2Rounded;
    // NB: param2 and param6 are not declared anywhere in this function;
    // they look like leftover placeholders for two of the parameters above.
    if (param2 > 0 && param6 > 0) {
        Ym = Sm1 >= Sm2 ? Sm1 : Sm2;
        YM = SM1 >= SM2 ? SM2 : SM1;
    }
    else if (param2 > 0) {
        Ym = Sm1;
        YM = SM1;
    }
    else if (param6 > 0) {
        Ym = Sm2;
        YM = SM2;
    }
    return true;
}
I suspect I've put the wrong conditions in one of my ifs, but I just followed the guidelines of the paper, so I really don't know. I hope you guys can help me.
EDIT: the epsilon is needed to round off values below a certain error. This is a problem deriving from Assimp not reading OBJ values properly, turning a 1.000000 into 1.0000045, for example.
I'm not going to try to debug your code for you, and someone is going to downvote me for an incomplete answer, but I'm going to offer some basic advice.
This is basic advice on debugging something this big. In my opinion, you need to set up a simple test. Write a tiny program that links with your code. Create your two triangles manually that you know collide, and then see if your code detects it.
No? Figure out HOW they collide and HOW you should have detected it, and then add print statements to your code where it should be colliding, and see why it isn't catching it.
What you might want to do is use some paper. Lay out a couple of triangles and then manually (no computer involved) step through the code you're using and see if it makes sense.
And if it doesn't, come up with your own code. I think you could define triangle collision as:
If any segment of T1 intersects with any segment of T2. You should be able to figure out a way of testing if two line segments intersect, and then just run all segments of T1 against T2.
OR if one triangle is entirely encapsulated inside the other. A big triangle and a little triangle.
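For the segment-based definition, here is a minimal sketch of my own (not the paper's algorithm; it uses glm like the question's code and a standard Möller-Trumbore test restricted to a segment): each edge of one triangle is tested as a segment against the other triangle, in both directions. It catches piercing contacts; the containment case above still needs its own check.

#include <cmath>
#include <glm/glm.hpp>

// Does segment p->q cross triangle (a, b, c)?
// Möller-Trumbore with the hit parameter t limited to [0, 1].
static bool segmentHitsTriangle(const glm::vec3& p, const glm::vec3& q,
                                const glm::vec3& a, const glm::vec3& b,
                                const glm::vec3& c) {
    const float EPS = 1e-6f;
    glm::vec3 d = q - p;                       // segment direction (unnormalized)
    glm::vec3 e1 = b - a, e2 = c - a;
    glm::vec3 pv = glm::cross(d, e2);
    float det = glm::dot(e1, pv);
    if (std::fabs(det) < EPS) return false;    // segment parallel to plane
    float inv = 1.0f / det;
    glm::vec3 tv = p - a;
    float u = glm::dot(tv, pv) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 qv = glm::cross(tv, e1);
    float v = glm::dot(d, qv) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = glm::dot(e2, qv) * inv;
    return t >= 0.0f && t <= 1.0f;             // hit must lie on the segment
}

bool trianglesCollide(const glm::vec3& A, const glm::vec3& B, const glm::vec3& C,
                      const glm::vec3& P, const glm::vec3& Q, const glm::vec3& R) {
    return segmentHitsTriangle(A, B, P, Q, R) || segmentHitsTriangle(B, C, P, Q, R)
        || segmentHitsTriangle(C, A, P, Q, R) || segmentHitsTriangle(P, Q, A, B, C)
        || segmentHitsTriangle(Q, R, A, B, C) || segmentHitsTriangle(R, P, A, B, C);
}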
This is part of the joy and frustration of coding -- learning to understand the algorithms you're using.

Raytracing - Ray/Triangle Intersection

I am having an issue with my algorithm to check if my ray intersects a 3D triangle. It seems to still be drawing the circle behind it (top left-hand corner). I can't seem to find where in my code this slight error comes from.
bool Mesh::intersectTriangle(Ray const &ray,
                             Triangle const &tri,
                             Intersection &hit) const {
    // Extract vertex positions from the mesh data.
    Vector const &p0 = positions[tri[0].pi];
    Vector const &p1 = positions[tri[1].pi];
    Vector const &p2 = positions[tri[2].pi];
    Vector e1 = p1 - p0;
    Vector e2 = p2 - p0;
    Vector e1e2 = e1.cross(e2);
    Vector p = ray.direction.cross(e2);
    e1e2.normalized(); // NB: if normalized() returns a copy, its result is discarded here
    float a = e1.dot(p);
    if (a < 0.000001)
        return false;
    float f = 1 / a;
    Vector s = ray.origin - p0;
    float u = f * (s.dot(p));
    if (u < 0.0 || u > 1.0)
        return false;
    Vector q = s.cross(e1);
    float v = f * (ray.direction.dot(q));
    if (v < 0.0 || u + v > 1.0)
        return false;
    float t = f * (e2.dot(q));
    hit.depth = t;
    hit.normal = e1e2;
    hit.position = hit.position * t; // NB: suspicious - the hit point is usually ray.origin + ray.direction * t
    return true;
}

ray tracing triangular mesh objects

I'm trying to write a ray tracer for any objects formed of triangular meshes. I'm using an external library to load a cube in .ply format and then trace it. So far I've implemented most of the tracer, and now I'm trying to test it with a single cube, but for some reason all I get on the screen is a red line. I've tried several ways to fix it, but I simply can't figure it out anymore. For this primary test I'm only creating primary rays, and if they hit my cube, I color that pixel with the cube's diffuse color and return. For checking ray-object intersections, I go through all the triangles that form the object and return the distance to the closest one. It would be great if you could have a look at the code and tell me what could have gone wrong and where. I would greatly appreciate it.
Ray-Triangle intersection:
bool intersectTri(const Vec3D& ray_origin, const Vec3D& ray_direction, const Vec3D& v0, const Vec3D& v1, const Vec3D& v2, double &t, double &u, double &v) const
{
    Vec3D edge1 = v1 - v0;
    Vec3D edge2 = v2 - v0;
    Vec3D pvec = ray_direction.cross(edge2);
    double det = edge1.dot(pvec);
    if (det > -THRESHOLD && det < THRESHOLD)
        return false;
    double invDet = 1 / det;
    Vec3D tvec = ray_origin - v0;
    u = tvec.dot(pvec) * invDet;
    if (u < 0 || u > 1)
        return false;
    Vec3D qvec = tvec.cross(edge1);
    v = ray_direction.dot(qvec) * invDet;
    if (v < 0 || u + v > 1)
        return false;
    t = edge2.dot(qvec) * invDet;
    if (t < 0)
        return false;
    return true;
}
//Object intersection
bool intersect(const Vec3D& ray_origin, const Vec3D& ray_direction, IntersectionData& idata, bool enforce_max) const
{
    double tClosest;
    if (enforce_max)
    {
        tClosest = idata.t;
    }
    else
    {
        tClosest = TMAX;
    }
    for (int i = 0; i < indices.size(); i++)
    {
        const Vec3D v0 = vertices[indices[i][0]];
        const Vec3D v1 = vertices[indices[i][1]];
        const Vec3D v2 = vertices[indices[i][2]];
        double t, u, v;
        if (intersectTri(ray_origin, ray_direction, v0, v1, v2, t, u, v))
        {
            if (t < tClosest)
            {
                idata.t = t;
                tClosest = t;
                idata.u = u;
                idata.v = v;
                idata.index = i;
            }
        }
    }
    return (tClosest < TMAX && tClosest > 0);
}
Vec3D trace(World world, Vec3D &ray_origin, Vec3D &ray_direction)
{
    Vec3D objColor = world.background_color;
    IntersectionData idata;
    double coeff = 1.0;
    int depth = 0;
    double tClosest = TMAX;
    Object *hitObject = NULL;
    for (unsigned int i = 0; i < world.objs.size(); i++)
    {
        IntersectionData idata_curr;
        if (world.objs[i].intersect(ray_origin, ray_direction, idata_curr, false))
        {
            if (idata_curr.t < tClosest && idata_curr.t > 0)
            {
                idata.t = idata_curr.t;
                idata.u = idata_curr.u;
                idata.v = idata_curr.v;
                idata.index = idata_curr.index;
                tClosest = idata_curr.t;
                hitObject = &(world.objs[i]);
            }
        }
    }
    if (hitObject == NULL)
    {
        return world.background_color;
    }
    else
    {
        return hitObject->getDiffuse();
    }
}
int main(int argc, char** argv)
{
    parse("cube.ply");
    Vec3D diffusion1(1, 0, 0);
    Vec3D specular1(1, 1, 1);
    Object cube1(coordinates, connected_vertices, diffusion1, specular1, 0, 0);
    World wrld;
    // Add objects to the world
    wrld.objs.push_back(cube1);
    Vec3D background(0, 0, 0);
    wrld.background_color = background;
    // Set light color
    Vec3D light_clr(1, 1, 1);
    wrld.light_colors.push_back(light_clr);
    // Set light position
    Vec3D light(0, 64, -10);
    wrld.light_positions.push_back(light);
    int width = 128;
    int height = 128;
    Vec3D *image = new Vec3D[width*height];
    Vec3D *pixel = image;
    // Trace rays
    for (int y = -height/2; y < height/2; ++y)
    {
        for (int x = -width/2; x < width/2; ++x, ++pixel)
        {
            Vec3D ray_dir(x + 0.5, y + 0.5, -1.0);
            ray_dir.normalize();
            Vec3D ray_orig(0.5*width, 0.5*height, 0.0);
            *pixel = trace(wrld, ray_orig, ray_dir);
        }
    }
    savePPM("./test.ppm", image, width, height);
    return 0;
}
I've just run a test case and I got this:
for a unit cube centered at (0, 0, -1.5) and scaled by 100 on the X and Y axes. It seems that there is something wrong with the projection, but I can't really tell exactly what from the result. Also, since the cube is centered at (0, 0), shouldn't the final object appear in the middle of the picture?
FIX: I fixed the centering problem by doing ray_dir = ray_dir - ray_orig before normalizing and calling the trace function. Still, the perspective seems plain wrong.
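For comparison, this is how primary rays are often generated (a sketch under my own camera assumptions -- eye at the origin looking down -Z, pixels mapped to [-1, 1] on an image plane at z = -1 -- not the original code):

// Hypothetical primary-ray loop; Vec3D and trace() as in the question.
Vec3D ray_orig(0.0, 0.0, 0.0);                       // camera at the origin
for (int py = 0; py < height; ++py)
{
    for (int px = 0; px < width; ++px, ++pixel)
    {
        double ndc_x = (px + 0.5) / width * 2.0 - 1.0;   // [-1, 1] across
        double ndc_y = 1.0 - (py + 0.5) / height * 2.0;  // [-1, 1], Y up
        Vec3D ray_dir(ndc_x, ndc_y, -1.0);               // through image plane
        ray_dir.normalize();
        *pixel = trace(wrld, ray_orig, ray_dir);
    }
}

With the eye at the origin, a point on the image plane minus the origin is already the ray direction, which is why subtracting ray_orig fixed the centering.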
I continued the work and now I started implementing the diffuse reflection according to Phong.
Vec3D trace(World world, Vec3D &ray_origin, Vec3D &ray_direction)
{
    Vec3D objColor = Vec3D(0);
    IntersectionData idata;
    double coeff = 1.0;
    int depth = 0;
    do
    {
        double tClosest = TMAX;
        Object *hitObject = NULL;
        for (unsigned int i = 0; i < world.objs.size(); i++)
        {
            IntersectionData idata_curr;
            if (world.objs[i].intersect(ray_origin, ray_direction, idata_curr, false))
            {
                if (idata_curr.t < tClosest && idata_curr.t > 0)
                {
                    idata.t = idata_curr.t;
                    idata.u = idata_curr.u;
                    idata.v = idata_curr.v;
                    idata.index = idata_curr.index;
                    tClosest = idata_curr.t;
                    hitObject = &(world.objs[i]);
                }
            }
        }
        if (hitObject == NULL)
        {
            return world.background_color;
        }
        Vec3D newStart = ray_origin + ray_direction*idata.t;
        // Compute normal at intersection by interpolating vertex normals (PHONG idea)
        Vec3D v0 = hitObject->getVertices()[hitObject->getIndices()[idata.index][0]];
        Vec3D v1 = hitObject->getVertices()[hitObject->getIndices()[idata.index][1]];
        Vec3D v2 = hitObject->getVertices()[hitObject->getIndices()[idata.index][2]];
        Vec3D n1 = hitObject->getNormals()[hitObject->getIndices()[idata.index][0]];
        Vec3D n2 = hitObject->getNormals()[hitObject->getIndices()[idata.index][1]];
        Vec3D n3 = hitObject->getNormals()[hitObject->getIndices()[idata.index][2]];
        // Vec3D N = n1 + (n2 - n1)*idata.u + (n3 - n1)*idata.v;
        Vec3D N = v0.computeFaceNrm(v1, v2);
        if (ray_direction.dot(N) > 0)
        {
            N = N*(-1);
        }
        N.normalize();
        Vec3D lightray_origin = newStart;
        for (unsigned int itr = 0; itr < world.light_positions.size(); itr++)
        {
            // NB: uses light_positions[0] instead of [itr], and the return
            // below exits after the first light is processed.
            Vec3D lightray_dir = world.light_positions[0] - newStart;
            lightray_dir.normalize();
            double cos_theta = max(N.dot(lightray_dir), 0.0);
            objColor.setX(objColor.getX() + hitObject->getDiffuse().getX()*hitObject->getDiffuseReflection()*cos_theta);
            objColor.setY(objColor.getY() + hitObject->getDiffuse().getY()*hitObject->getDiffuseReflection()*cos_theta);
            objColor.setZ(objColor.getZ() + hitObject->getDiffuse().getZ()*hitObject->getDiffuseReflection()*cos_theta);
            return objColor;
        }
        depth++;
    } while (coeff > 0 && depth < MAX_RAY_DEPTH);
    return objColor;
}
When I reach an object with the primary ray, I send another ray to the light source positioned at (0,0,0) and return the color according to the Phong illumination model for diffuse reflection, but the result is really not the expected one: http://s15.postimage.org/vc6uyyssr/test.png. The cube is a unit cube centered at (0,0,0) and then translated by (1.5, -1.5, -1.5). From my point of view, the left side of the cube should get more light and it actually does. What do you think of it?

Finding the centroid of a polygon?

To get the center, I have tried, for each vertex, adding to a running total and then dividing by the number of vertices.
I've also tried finding the topmost and bottommost points and taking their midpoint, then the leftmost and rightmost points and taking that midpoint.
Neither of these returns the true center, and that matters because I'm relying on the center to scale the polygon.
I want to scale my polygons, so I may put a border around them.
What is the best way to find the centroid of a polygon given that the polygon may be concave, convex and have many many sides of various lengths?
The formula is given here for vertices sorted by their occurrence along the polygon's perimeter.
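For reference, my transcription of that standard result (indices taken mod n, one term per edge of the polygon):

A   = (1/2) * sum_{i=0..n-1} (x_i * y_{i+1} - x_{i+1} * y_i)
C_x = 1/(6A) * sum_{i=0..n-1} (x_i + x_{i+1}) * (x_i * y_{i+1} - x_{i+1} * y_i)
C_y = 1/(6A) * sum_{i=0..n-1} (y_i + y_{i+1}) * (x_i * y_{i+1} - x_{i+1} * y_i)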
For those having difficulty understanding the sigma notation in those formulas, here is some C++ code showing how to do the computation:
#include <iostream>

struct Point2D
{
    double x;
    double y;
};

Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    // For all vertices except last
    int i = 0;
    for (i = 0; i < vertexCount - 1; ++i)
    {
        x0 = vertices[i].x;
        y0 = vertices[i].y;
        x1 = vertices[i+1].x;
        y1 = vertices[i+1].y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
    }

    // Do last vertex separately to avoid performing an expensive
    // modulus operation in each iteration.
    x0 = vertices[i].x;
    y0 = vertices[i].y;
    x1 = vertices[0].x;
    y1 = vertices[0].y;
    a = x0*y1 - x1*y0;
    signedArea += a;
    centroid.x += (x0 + x1)*a;
    centroid.y += (y0 + y1)*a;

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}

int main()
{
    Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
    size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
    Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
    std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
I've only tested this for a square polygon in the upper-right x/y quadrant.
If you don't mind performing two (potentially expensive) extra modulus operations in each iteration, then you can simplify the previous compute2DPolygonCentroid function to the following:
Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    // For all vertices
    int i = 0;
    for (i = 0; i < vertexCount; ++i)
    {
        x0 = vertices[i].x;
        y0 = vertices[i].y;
        x1 = vertices[(i+1) % vertexCount].x;
        y1 = vertices[(i+1) % vertexCount].y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
    }

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}
The centroid can be calculated as the weighted sum of the centroids of the triangles into which the polygon can be partitioned.
Here is the C source code for such an algorithm:
/*
Written by Joseph O'Rourke
orourke@cs.smith.edu
October 27, 1995

Computes the centroid (center of gravity) of an arbitrary
simple polygon via a weighted sum of signed triangle areas,
weighted by the centroid of each triangle.
Reads x,y coordinates from stdin.
NB: Assumes points are entered in ccw order!
E.g., input for square:
    0  0
    10 0
    10 10
    0  10
This solves Exercise 12, p.47, of my text,
Computational Geometry in C. See the book for an explanation
of why this works. Follow links from
http://cs.smith.edu/~orourke/
*/
#include <stdio.h>
#define DIM 2                    /* Dimension of points */
typedef int tPointi[DIM];        /* type integer point */
typedef double tPointd[DIM];     /* type double point */
#define PMAX 1000                /* Max # of pts in polygon */
typedef tPointi tPolygoni[PMAX]; /* type integer polygon */

int Area2( tPointi a, tPointi b, tPointi c );
void FindCG( int n, tPolygoni P, tPointd CG );
int ReadPoints( tPolygoni P );
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c );
void PrintPoint( tPointd p );

int main()
{
    int n;
    tPolygoni P;
    tPointd CG;

    n = ReadPoints( P );
    FindCG( n, P, CG );
    printf("The cg is ");
    PrintPoint( CG );
}

/*
Returns twice the signed area of the triangle determined by a,b,c,
positive if a,b,c are oriented ccw, and negative if cw.
*/
int Area2( tPointi a, tPointi b, tPointi c )
{
    return
        (b[0] - a[0]) * (c[1] - a[1]) -
        (c[0] - a[0]) * (b[1] - a[1]);
}

/*
Returns the cg in CG. Computes the weighted sum of
each triangle's area times its centroid. Twice area
and three times centroid is used to avoid division
until the last moment.
*/
void FindCG( int n, tPolygoni P, tPointd CG )
{
    int i;
    double A2, Areasum2 = 0; /* Partial area sum */
    tPointi Cent3;

    CG[0] = 0;
    CG[1] = 0;
    for (i = 1; i < n-1; i++) {
        Centroid3( P[0], P[i], P[i+1], Cent3 );
        A2 = Area2( P[0], P[i], P[i+1] );
        CG[0] += A2 * Cent3[0];
        CG[1] += A2 * Cent3[1];
        Areasum2 += A2;
    }
    CG[0] /= 3 * Areasum2;
    CG[1] /= 3 * Areasum2;
    return;
}

/*
Returns three times the centroid. The factor of 3 is
left in to permit division to be avoided until later.
*/
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c )
{
    c[0] = p1[0] + p2[0] + p3[0];
    c[1] = p1[1] + p2[1] + p3[1];
    return;
}

void PrintPoint( tPointd p )
{
    int i;
    putchar('(');
    for ( i = 0; i < DIM; i++) {
        printf("%f", p[i]);
        if (i != DIM - 1) putchar(',');
    }
    putchar(')');
    putchar('\n');
}

/*
Reads in the coordinates of the vertices of a polygon from stdin,
puts them into P, and returns n, the number of vertices.
The input is assumed to be pairs of whitespace-separated coordinates,
one pair per line. The number of points is not part of the input.
*/
int ReadPoints( tPolygoni P )
{
    int n = 0;

    printf("Polygon:\n");
    printf("  i   x   y\n");
    while ( (n < PMAX) && (scanf("%d %d", &P[n][0], &P[n][1]) != EOF) ) {
        printf("%3d%4d%4d\n", n, P[n][0], P[n][1]);
        ++n;
    }
    if (n < PMAX)
        printf("n = %3d vertices read\n", n);
    else
        printf("Error in ReadPoints:\ttoo many points; max is %d\n", PMAX);
    putchar('\n');
    return n;
}
There's a polygon centroid article on the CGAFaq (comp.graphics.algorithms FAQ) wiki that explains it.
With Boost.Geometry it is a one-liner:
boost::geometry::centroid(your_polygon, p);
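A minimal usage sketch (my assumptions: Boost.Geometry's WKT reader and its stock point/polygon models; not from the original answer):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;

int main()
{
    typedef bg::model::d2::point_xy<double> point;
    bg::model::polygon<point> poly;
    // Closed clockwise ring, matching Boost.Geometry's default polygon concept.
    bg::read_wkt("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))", poly);
    point p;
    bg::centroid(poly, p);  // p is now (5, 5) for this square
}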
Here is Emile Cormier's algorithm without duplicated code or expensive modulus operations, best of both worlds:
#include <iostream>
using namespace std;

struct Point2D
{
    double x;
    double y;
};

Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    int lastdex = vertexCount - 1;
    const Point2D* prev = &(vertices[lastdex]);
    const Point2D* next;

    // For all vertices in a loop
    for (int i = 0; i < vertexCount; ++i)
    {
        next = &(vertices[i]);
        x0 = prev->x;
        y0 = prev->y;
        x1 = next->x;
        y1 = next->y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
        prev = next;
    }

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}

int main()
{
    Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
    size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
    Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
    std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
Break the polygon into triangles, find the area and centroid of each, then calculate the average of all the partial centroids, using the partial areas as weights. With concavity, some of the areas could be negative.
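
A compact sketch of this approach (my illustration, reusing the Point2D style of the earlier answers): fan-triangulate from vertex 0; the signed areas make concave parts subtract correctly.

#include <vector>

struct Pt { double x, y; };

Pt centroidByTriangles(const std::vector<Pt>& v) {
    double areaSum2 = 0, cx = 0, cy = 0;   // twice-areas throughout; factors cancel
    for (size_t i = 1; i + 1 < v.size(); ++i) {
        // twice the signed area of triangle (v[0], v[i], v[i+1])
        double a2 = (v[i].x - v[0].x) * (v[i+1].y - v[0].y)
                  - (v[i+1].x - v[0].x) * (v[i].y - v[0].y);
        areaSum2 += a2;
        // triangle centroid = average of its vertices; weight by signed area
        cx += a2 * (v[0].x + v[i].x + v[i+1].x) / 3.0;
        cy += a2 * (v[0].y + v[i].y + v[i+1].y) / 3.0;
    }
    return { cx / areaSum2, cy / areaSum2 };
}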