Triangle normal vs. vertex normal - OpenGL

I can calculate the triangle normal N from the three vertex positions v0, v1, v2 using the cross product: N = (v1 - v0) × (v2 - v0).
The problem is that a 3D mesh data structure needs a normal for each vertex, i.e. n1, n2 and n3, and I don't know the correct way to calculate them.
Tried
I tried using the same N value for n1, n2 and n3, but I'm not sure this is the correct approach:
n1 = n2 = n3 = N
This gives flat shading, since all three vertices of a triangle share the face normal; for smooth shading, each vertex normal has to combine the normals of every face that shares that vertex.

I implemented smooth vertex normals like this:
std::vector<Vec3d> points = ...
std::vector<Vec3i> facets = ...

// Count how many faces/triangles share each vertex
std::vector<int> counters;
counters.resize(points.size());

// Compute normals
norms.clear();
norms.resize(points.size());
for (Vec3i f : facets) {
    int i0 = f.x();
    int i1 = f.y();
    int i2 = f.z();
    Vec3d pos0 = points.at(i0);
    Vec3d pos1 = points.at(i1);
    Vec3d pos2 = points.at(i2);
    Vec3d N = triangleNormal(pos0, pos1, pos2);
    // Must be normalized
    // https://stackoverflow.com/a/21930058/3405291
    N.normalize();
    norms[i0] += N;
    norms[i1] += N;
    norms[i2] += N;
    counters[i0]++;
    counters[i1]++;
    counters[i2]++;
}
for (int i = 0; i < static_cast<int>(norms.size()); ++i) {
    if (counters[i] > 0)
        norms[i] /= counters[i]; // average of the accumulated unit normals
    norms[i].normalize();        // the average is generally shorter than unit length
}
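A variant worth knowing about is area-weighted averaging: skip the per-face normalization so that larger triangles contribute more to the shared vertex normal, and normalize once at the end. A minimal sketch, assuming the same points/facets/norms as above and that triangleNormal() returns the raw (unnormalized) cross product:

// Area-weighted vertex normals: the cross product's length is twice the
// triangle's area, so accumulating unnormalized face normals weights each
// face by its area automatically. Assumes norms is already zeroed/resized.
for (Vec3i f : facets) {
    Vec3d N = triangleNormal(points.at(f.x()), points.at(f.y()), points.at(f.z()));
    norms[f.x()] += N; // note: no N.normalize() before accumulating
    norms[f.y()] += N;
    norms[f.z()] += N;
}
for (Vec3d &n : norms)
    n.normalize(); // single normalization at the very end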

Related

Calculating the diffuse r,g,b value of a pixel on a raytracer using Blinn-Phong

I am trying to calculate the RGB value of a pixel using the Blinn-Phong formula. For that I use this function:
Material getPixelColor(Ray ray, double min, int index, std::vector<Object*> Objects, std::vector<Object*> lightSources) {
    Vector intersectionPoint = ray.getOrigin() + ray.getDirection() * min;
    Vector n = Objects.at(index)->getNormalAt(intersectionPoint);
    Vector reflectiondirection = ray.getDirection() - n * Vector::dot(ray.getDirection(), n) * 2;
    Ray reflectionRay(intersectionPoint, reflectiondirection);
    // check if ray intersects any other object
    double minimum = INFINITY;
    int count = 0, indx = -1;
    for (auto const& obj : Objects) {
        double distance = obj->Intersect(reflectionRay);
        if (minimum > distance) {
            minimum = distance;
            indx = count;
        }
        count++;
    }
    Material result(0,0,0);
    if (recurseDepth >= 5 || indx == -1) {
        recurseDepth = 0;
        // Check if object is lit for each light source
        for (auto const& light : lightSources) {
            // Blinn-Phong
            Vector lightDirection = (light->getPosition() - intersectionPoint).normalize();
            double nl = Vector::dot(n, lightDirection);
            nl = nl > 0 ? nl : 0.0;
            result = result + (Objects.at(index)->getMaterial() * light->getMaterial() * nl);
        }
    }
    else {
        recurseDepth++;
        result = result + getPixelColor(reflectionRay, minimum, indx, Objects, lightSources);
    }
    return result;
}
The result that I get is this:
This is how it was without shading:
I have been trying to find a solution for hours and can't. Am I using the wrong formula?
After a lot of research, I removed the part that takes color from other objects:
Material getPixelColor(Ray ray, double min, int index, std::vector<Object*> Objects, std::vector<Object*> lightSources) {
    Vector intersectionPoint = ray.getOrigin() + ray.getDirection() * min;
    Vector n = Objects.at(index)->getNormalAt(intersectionPoint);
    Material result(0,0,0);
    // Check if object is lit for each light source
    for (auto const& light : lightSources) {
        // create a ray to the light and check if there is an object between the two
        Vector lightDirection = (light->getPosition() - intersectionPoint).normalize();
        Ray lightRay(intersectionPoint, lightDirection);
        bool hit = false;
        for (auto const& obj : Objects) {
            double distance = obj->Intersect(lightRay);
            if (INFINITY > distance && distance > 0.0001) {
                hit = true;
                break;
            }
        }
        if (!hit) {
            // Blinn-Phong
            double nl = Vector::dot(n, lightDirection);
            // clamp nl between 0 and 1
            if (nl > 1.0) {
                nl = 1.0;
            }
            else if (nl < 0.0) {
                nl = 0.0;
            }
            result = result + (Objects.at(index)->getMaterial() * nl);
        }
    }
    return result;
}
And so I got the desired result:
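Note that what both versions compute is only the Lambertian diffuse term (n · l); Blinn-Phong proper adds a specular term built from the half vector between the light and view directions. A minimal sketch of that term, reusing the names above (shininess is a hypothetical material parameter, and std::pow needs <cmath>):

// Blinn-Phong specular term (sketch). viewDirection points from the
// intersection back toward the camera.
Vector viewDirection = (ray.getOrigin() - intersectionPoint).normalize();
Vector halfVector = (lightDirection + viewDirection).normalize();
double nh = Vector::dot(n, halfVector);
nh = nh > 0.0 ? nh : 0.0;
double specular = std::pow(nh, shininess); // e.g. shininess = 32 (assumed)
// result = result + light->getMaterial() * specular;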

Connect two lines by zigzag lines

I have two points on the XZ plane: the larger/taller point is L = (X_L, Z_L) and the smaller/shorter point is S = (X_S, Z_S).
By connecting the L and S points to the Z = 0 line, I have two lines.
I intend to connect my two lines by zigzag diagonal cross lines.
I need to find the points L_0, L_1, ..., L_k, ..., L_{N-1} and also S_0, S_1, ..., S_k, ..., S_{N-1}, S_N.
I already know two of the points:
S_0 = S = (X_S, Z_S)
S_N = (X_S, 0)
So far, I have implemented this algorithm:
float lX, lZ = ... // "L" (larger/taller) point coordinates on the (X, Z) plane
float sX, sZ = ... // "S" (smaller/shorter) point coordinates on the (X, Z) plane

size_t N = 5; // N sections below S
float sZsectionLength = sZ / N; // length of each section below S

std::vector<float> sZ_dots(N+1, 0.0); // N+1 points/dots below S
for (size_t k = 0; k < N+1; ++k) {
    sZ_dots[k] = sZ - k * sZsectionLength;
}

std::vector<float> lZ_dots(N, 0.0); // N points/dots below L
for (size_t k = 0; k < N; ++k) {
    // Each point below L is the average of two points below S
    lZ_dots[k] = ( sZ_dots[k] + sZ_dots[k+1] ) / 2.0f;
}

for (size_t k = 0; k < N; ++k) {
    Line *zig = new Line();
    zig->setStartDot(sX, sZ_dots[k]);
    zig->setCloseDot(lX, lZ_dots[k]);
    linesContainer.append(zig);
    Line *zag = new Line();
    zag->setStartDot(lX, lZ_dots[k]);
    zag->setCloseDot(sX, sZ_dots[k+1]);
    linesContainer.append(zag);
}
The above algorithm generates the zigzags just fine. However, I wonder if there is a faster algorithm to generate the zigzag cross lines. Is there anything I'm missing?
I would implement it like this:
#include <cassert>
#include <vector>

struct Line
{
    Line(float x1, float z1, float x2, float z2)
        :
        m_x1(x1),
        m_z1(z1),
        m_x2(x2),
        m_z2(z2)
    {}

    float m_x1;
    float m_z1;
    float m_x2;
    float m_z2;
};

using LineContainer = std::vector<Line>;

LineContainer getZigZag(float lx, float sx, float sz, size_t sectionCount)
{
    assert(lx < sx && sz > 0.0f);
    LineContainer lines;
    auto sectionHeight = sz / sectionCount;
    for (size_t i = 0; i < sectionCount; ++i)
    {
        auto sz1 = sz - sectionHeight * i;
        auto sz2 = sz - sectionHeight * (i + 1);
        auto lz = sz1 - (sz1 - sz2) / 2.0f;
        // A section.
        //
        // From S to L
        lines.emplace_back(sx, sz1, lx, lz);
        // From L to S
        lines.emplace_back(lx, lz, sx, sz2);
    }
    return lines;
}
and use the function like this:
int main()
{
    auto zigzag = getZigZag(1.0f, 2.0f, 4.0f, 2);
    [..]
As you probably noticed, I replaced three loops with a single one that creates two lines (a single section) on each iteration.

Normal averaging of heightmap

I have the following code for calculating heightmap normals:
void CalcMapNormals(HeightMap * map, Vec3f normals[])
{
    int dst, i, j, right, bottom;
    Vec3f p0, p1, p2;
    Vec3f n0;

    /* Avoid writing map->rows|cols - 1 all the time */
    right = map->cols - 1;
    bottom = map->rows - 1;

    dst = 0;
    for (i = 0; i < map->rows; i++) {
        for (j = 0; j < map->cols; j++) {
            Vec3Set(normals[dst], 0, 0, 0);
            /* Vertex can have 2, 3, or 4 neighbours horizontally and vertically */
            if (i < bottom && j < right) {
                /* Right and below */
                GetHeightPoint(map, i, j, p0);
                GetHeightPoint(map, i + 1, j, p1);
                GetHeightPoint(map, i + 1, j + 1, p2);
                CalcTriNormal(n0, p0, p1, p2);
                VecAdd(normals[dst], normals[dst], n0);
            }
            /* TODO: the other three possibilities */
            VecNormalize(normals[dst]);
            dst += 1;
        }
    }
    /* Sanity check */
    if (dst != map->rows * map->cols)
        Fail("Internal error in CalcMapNormals: normals count mismatch");
}
I understand that the code gets the three vertices of a triangle, calculates the triangle's normal, and then adds and normalizes the accumulated normals to get the averaged normal. But I don't know how to get the other three possibilities. I've been doing something like the following:
void CalcMapNormals(HeightMap * map, Vec3f normals[])
{
    int dst, i, j, right, bottom;
    Vec3f p0, p1, p2;
    Vec3f n0;
    Vec3f p3, p4, p5;
    Vec3f n1;
    Vec3f p6, p7, p8;
    Vec3f n2;
    Vec3f p9, p10, p11;
    Vec3f n3;

    /* Avoid writing map->rows|cols - 1 all the time */
    right = map->cols - 1;
    bottom = map->rows - 1;

    dst = 0;
    for (i = 0; i < map->rows; i++) {
        for (j = 0; j < map->cols; j++) {
            Vec3Set(normals[dst], 0, 0, 0);
            /* Vertex can have 2, 3, or 4 neighbours horizontally and vertically */
            if (i < bottom && j < right) {
                /* Right and below */
                GetHeightPoint(map, i, j, p0);
                GetHeightPoint(map, i + 1, j, p1);
                GetHeightPoint(map, i + 1, j + 1, p2);
                CalcTriNormal(n0, p0, p1, p2);
                VecAdd(normals[dst], normals[dst], n0);
            }
            if ( i > bottom && j > 0)
            {
                GetHeightPoint(map, i, j, p3);
                GetHeightPoint(map, i + 1, j, p4);
                GetHeightPoint(map, i, j - 1, p5);
                CalcTriNormal(n1, p3, p4, p5);
                VecAdd(normals[dst], normals[dst], n1);
            }
            if ( i > 0 && j > 0)
            {
                GetHeightPoint(map, i, j, p6);
                GetHeightPoint(map, i, j - 1, p7);
                GetHeightPoint(map, i - 1, j, p8);
                CalcTriNormal(n2, p6, p7, p8);
                VecAdd(normals[dst], normals[dst], n2);
            }
            if ( i > bottom && j < right)
            {
                GetHeightPoint(map, i, j, p9);
                GetHeightPoint(map, i - 1, j, p10);
                GetHeightPoint(map, i, j + 1, p11);
                CalcTriNormal(n3, p9, p10, p11);
                VecAdd(normals[dst], normals[dst], n3);
            }
            /* TODO: the other three possibilities */
            VecNormalize(normals[dst]);
            dst += 1;
        }
    }
    /* Sanity check */
    if (dst != map->rows * map->cols)
        Fail("Internal error in CalcMapNormals: normals count mismatch");
}
But I don't think it's giving me the result I wanted. I get the concept of normal averaging, but I can't figure out the code.
Hi Yzwboy, here is one way I would try to make "smoothed" normals (averaged based on adjacent triangles):
In order to compute "smoothed" normals you will need to assign to each vertex a normal which is averaged across the normals of the triangles adjacent to the vertex.
I would calculate a weighted average, weighting each face's contribution by the angle between the two edges adjacent to the vertex in question (the angle is easy to recover from the cross product):
Pseudocode:
Vec3f faceNormal(int face_id, int vertex_id) // assumes C-->B-->A is clockwise
{
    Vec3f A = triangleMesh.face[face_id].vertex[vertex_id];        // A
    Vec3f B = triangleMesh.face[face_id].vertex[(vertex_id+1)%3];  // B
    Vec3f C = triangleMesh.face[face_id].vertex[(vertex_id+2)%3];  // C

    Vec3f BA = B-A;
    Vec3f CA = C-A;

    Vec3f Normal = BA.cross(CA);
    float sin_alpha = length(Normal) / (BA.len() * CA.len()); // depending on your implementation of Vec3f it could be .magnitude() or .length() instead of .len()

    return Normal.normalize() * asin(sin_alpha);
}
And then to compute the per-vertex normal:
void computeNormals() {
    for (vertex v in triangleMesh)
    {
        Vec3f Normal(0,0,0);
        for (int i = 0; i < TriangleCount; i++)
            if (triangleMesh.face[i].contains(v))
            {
                int vertID = vertexPositionInTriangle(i, v); // Can be 0, 1 or 2. Use an enum to make A = 0, B = 1, C = 2 if that is easier to read :)
                Normal = Normal + faceNormal(i, vertID);
            }
        addNormalToVertexV(Normal.normalize(), v); // sets the normal for vertex v; the vertex class must have a normal member
    }
}
You could also compute the area of each triangle to use as the weighting, though I find that using the angles usually looks best.
I have tried to use names that match the Vec3f spec, as well as built-in functions to save work, but you will need to do some coding to get the pseudocode working (I don't have access to a GL test environment here).
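As a rough illustration of the same idea on an indexed mesh, here is a self-contained C++ sketch (the V3 type and mesh layout are assumptions, not Yzwboy's actual classes) that does the angle weighting in a single pass over the faces instead of scanning every face for every vertex:

#include <array>
#include <cmath>
#include <vector>

struct V3 { float x = 0, y = 0, z = 0; };   // assumed minimal vector type

static V3 sub(V3 a, V3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b)  { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(V3 a)       { return std::sqrt(dot(a, a)); }
static V3 unit(V3 a)         { float l = len(a); return l > 0 ? V3{a.x/l, a.y/l, a.z/l} : a; }

// Angle-weighted vertex normals, one pass over the faces.
std::vector<V3> computeNormals(const std::vector<V3>& verts,
                               const std::vector<std::array<int,3>>& faces)
{
    std::vector<V3> normals(verts.size());
    for (const auto& f : faces) {
        for (int c = 0; c < 3; ++c) {               // visit each corner of the face
            V3 a  = verts[f[c]];
            V3 e1 = sub(verts[f[(c + 1) % 3]], a);  // edge to the next vertex
            V3 e2 = sub(verts[f[(c + 2) % 3]], a);  // edge to the previous vertex
            V3 n  = cross(e1, e2);                  // face normal, length = |e1||e2|sin(angle)
            float denom = len(e1) * len(e2);
            if (denom <= 0) continue;               // degenerate corner, skip
            float s = len(n) / denom;               // sin of the corner angle
            float angle = std::asin(s > 1 ? 1.0f : s);
            V3 w = unit(n);
            normals[f[c]].x += w.x * angle;         // accumulate angle-weighted normal
            normals[f[c]].y += w.y * angle;
            normals[f[c]].z += w.z * angle;
        }
    }
    for (auto& n : normals) n = unit(n);            // final per-vertex normalization
    return normals;
}

Like the pseudocode above, asin() only recovers angles up to 90 degrees; for meshes with very obtuse corners an atan2-based angle is more robust.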
Hope this helps :)

ray tracing triangular mesh objects

I'm trying to write a ray tracer for objects formed of triangular meshes. I'm using an external library to load a cube from .ply format and then trace it. So far I've implemented most of the tracer, and now I'm trying to test it with a single cube, but for some reason all I get on the screen is a red line. I've tried several ways to fix it but I simply can't figure it out anymore. For this first test I'm only creating primary rays; if one hits my cube, I color that pixel with the cube's diffuse color and return. For checking ray-object intersections, I go through all the triangles that form the object and return the distance to the closest one. It would be great if you could have a look at the code and tell me what could have gone wrong and where. I would greatly appreciate it.
Ray-Triangle intersection:
// Möller-Trumbore ray/triangle intersection
bool intersectTri(const Vec3D& ray_origin, const Vec3D& ray_direction, const Vec3D& v0, const Vec3D& v1, const Vec3D& v2, double &t, double &u, double &v) const
{
    Vec3D edge1 = v1 - v0;
    Vec3D edge2 = v2 - v0;
    Vec3D pvec = ray_direction.cross(edge2);
    double det = edge1.dot(pvec);
    if (det > -THRESHOLD && det < THRESHOLD)
        return false;
    double invDet = 1/det;
    Vec3D tvec = ray_origin - v0;
    u = tvec.dot(pvec)*invDet;
    if (u < 0 || u > 1)
        return false;
    Vec3D qvec = tvec.cross(edge1);
    v = ray_direction.dot(qvec)*invDet;
    if (v < 0 || u + v > 1)
        return false;
    t = edge2.dot(qvec)*invDet;
    if (t < 0)
        return false;
    return true;
}
// Object intersection
bool intersect(const Vec3D& ray_origin, const Vec3D& ray_direction, IntersectionData& idata, bool enforce_max) const
{
    double tClosest;
    if (enforce_max)
    {
        tClosest = idata.t;
    }
    else
    {
        tClosest = TMAX;
    }
    for (int i = 0; i < indices.size(); i++)
    {
        const Vec3D v0 = vertices[indices[i][0]];
        const Vec3D v1 = vertices[indices[i][1]];
        const Vec3D v2 = vertices[indices[i][2]];
        double t, u, v;
        if (intersectTri(ray_origin, ray_direction, v0, v1, v2, t, u, v))
        {
            if (t < tClosest)
            {
                idata.t = t;
                tClosest = t;
                idata.u = u;
                idata.v = v;
                idata.index = i;
            }
        }
    }
    return tClosest < TMAX && tClosest > 0;
}
Vec3D trace(World world, Vec3D &ray_origin, Vec3D &ray_direction)
{
    Vec3D objColor = world.background_color;
    IntersectionData idata;
    double coeff = 1.0;
    int depth = 0;
    double tClosest = TMAX;
    Object *hitObject = NULL;
    for (unsigned int i = 0; i < world.objs.size(); i++)
    {
        IntersectionData idata_curr;
        if (world.objs[i].intersect(ray_origin, ray_direction, idata_curr, false))
        {
            if (idata_curr.t < tClosest && idata_curr.t > 0)
            {
                idata.t = idata_curr.t;
                idata.u = idata_curr.u;
                idata.v = idata_curr.v;
                idata.index = idata_curr.index;
                tClosest = idata_curr.t;
                hitObject = &(world.objs[i]);
            }
        }
    }
    if (hitObject == NULL)
    {
        return world.background_color;
    }
    else
    {
        return hitObject->getDiffuse();
    }
}
int main(int argc, char** argv)
{
    parse("cube.ply");
    Vec3D diffusion1(1, 0, 0);
    Vec3D specular1(1, 1, 1);
    Object cube1(coordinates, connected_vertices, diffusion1, specular1, 0, 0);
    World wrld;
    // Add objects to the world
    wrld.objs.push_back(cube1);
    Vec3D background(0, 0, 0);
    wrld.background_color = background;
    // Set light color
    Vec3D light_clr(1, 1, 1);
    wrld.light_colors.push_back(light_clr);
    // Set light position
    Vec3D light(0, 64, -10);
    wrld.light_positions.push_back(light);

    int width = 128;
    int height = 128;
    Vec3D *image = new Vec3D[width*height];
    Vec3D *pixel = image;

    // Trace rays
    for (int y = -height/2; y < height/2; ++y)
    {
        for (int x = -width/2; x < width/2; ++x, ++pixel)
        {
            Vec3D ray_dir(x+0.5, y+0.5, -1.0);
            ray_dir.normalize();
            Vec3D ray_orig(0.5*width, 0.5*height, 0.0);
            *pixel = trace(wrld, ray_orig, ray_dir);
        }
    }
    savePPM("./test.ppm", image, width, height);
    return 0;
}
I've just run a test case and I got this:
for a unit cube centered at (0, 0, -1.5) and scaled by 100 on the X and Y axes. It seems that there is something wrong with the projection, but I can't tell exactly what from the result. Also, since the cube is centered at (0, 0), shouldn't the final object appear in the middle of the picture?
FIX: I fixed the centering problem by doing ray_dir = ray_dir - ray_orig before normalizing and calling the trace function. Still, the perspective seems plain wrong.
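For reference, a common pinhole-camera convention (just a sketch, not the original code: the fov value is an assumption, and M_PI/tan need <cmath>) keeps the eye at the origin looking down -Z and maps each pixel through normalized device coordinates, which yields a symmetric view frustum with a controllable field of view:

// Sketch: primary rays for a pinhole camera at the origin looking down -Z.
double fov = 60.0 * M_PI / 180.0;        // vertical field of view (assumed value)
double scale = tan(fov / 2.0);
double aspect = double(width) / double(height);
Vec3D ray_orig(0.0, 0.0, 0.0);           // eye at the origin
for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x, ++pixel)
    {
        // Map the pixel center to [-1, 1] normalized device coordinates.
        double px = (2.0 * (x + 0.5) / width - 1.0) * aspect * scale;
        double py = (1.0 - 2.0 * (y + 0.5) / height) * scale; // flip so +y is up
        Vec3D ray_dir(px, py, -1.0);
        ray_dir.normalize();
        *pixel = trace(wrld, ray_orig, ray_dir);
    }
}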
I continued the work, and I have now started implementing diffuse reflection according to Phong.
Vec3D trace(World world, Vec3D &ray_origin, Vec3D &ray_direction)
{
    Vec3D objColor = Vec3D(0);
    IntersectionData idata;
    double coeff = 1.0;
    int depth = 0;
    do
    {
        double tClosest = TMAX;
        Object *hitObject = NULL;
        for (unsigned int i = 0; i < world.objs.size(); i++)
        {
            IntersectionData idata_curr;
            if (world.objs[i].intersect(ray_origin, ray_direction, idata_curr, false))
            {
                if (idata_curr.t < tClosest && idata_curr.t > 0)
                {
                    idata.t = idata_curr.t;
                    idata.u = idata_curr.u;
                    idata.v = idata_curr.v;
                    idata.index = idata_curr.index;
                    tClosest = idata_curr.t;
                    hitObject = &(world.objs[i]);
                }
            }
        }
        if (hitObject == NULL)
        {
            return world.background_color;
        }
        Vec3D newStart = ray_origin + ray_direction*idata.t;
        // Compute normal at intersection by interpolating vertex normals (PHONG idea)
        Vec3D v0 = hitObject->getVertices()[hitObject->getIndices()[idata.index][0]];
        Vec3D v1 = hitObject->getVertices()[hitObject->getIndices()[idata.index][1]];
        Vec3D v2 = hitObject->getVertices()[hitObject->getIndices()[idata.index][2]];
        Vec3D n1 = hitObject->getNormals()[hitObject->getIndices()[idata.index][0]];
        Vec3D n2 = hitObject->getNormals()[hitObject->getIndices()[idata.index][1]];
        Vec3D n3 = hitObject->getNormals()[hitObject->getIndices()[idata.index][2]];
        // Vec3D N = n1 + (n2 - n1)*idata.u + (n3 - n1)*idata.v;
        Vec3D N = v0.computeFaceNrm(v1, v2);
        if (ray_direction.dot(N) > 0)
        {
            N = N*(-1);
        }
        N.normalize();
        Vec3D lightray_origin = newStart;
        for (unsigned int itr = 0; itr < world.light_positions.size(); itr++)
        {
            Vec3D lightray_dir = world.light_positions[0] - newStart;
            lightray_dir.normalize();
            double cos_theta = max(N.dot(lightray_dir), 0.0);
            objColor.setX(objColor.getX() + hitObject->getDiffuse().getX()*hitObject->getDiffuseReflection()*cos_theta);
            objColor.setY(objColor.getY() + hitObject->getDiffuse().getY()*hitObject->getDiffuseReflection()*cos_theta);
            objColor.setZ(objColor.getZ() + hitObject->getDiffuse().getZ()*hitObject->getDiffuseReflection()*cos_theta);
            return objColor;
        }
        depth++;
    } while (coeff > 0 && depth < MAX_RAY_DEPTH);
    return objColor;
}
When the primary ray hits an object, I send another ray to the light source positioned at (0,0,0) and return the color according to the Phong illumination model for diffuse reflection, but the result is really not the expected one: http://s15.postimage.org/vc6uyyssr/test.png. The cube is a unit cube centered at (0,0,0) and then translated by (1.5, -1.5, -1.5). From my point of view, the left side of the cube should get more light, and it actually does. What do you think of it?

Finding the centroid of a polygon?

To get the center, I have tried averaging the vertices: adding them all up and dividing by the number of vertices.
I've also tried finding the topmost and bottommost points and taking their midpoint, and likewise for the leftmost and rightmost points.
Neither of these gives the true center, and I'm relying on the center to scale the polygon.
I want to scale my polygons so I can put a border around them.
What is the best way to find the centroid of a polygon, given that the polygon may be concave or convex and have many sides of various lengths?
The formula is given here for vertices sorted by their occurrence along the polygon's perimeter.
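For reference, with vertices (x_i, y_i) for i = 0..n-1 ordered along the perimeter and indices taken modulo n, the signed area and centroid implemented by the code below are:

A = \frac{1}{2}\sum_{i=0}^{n-1}\left(x_i\,y_{i+1} - x_{i+1}\,y_i\right)

C_x = \frac{1}{6A}\sum_{i=0}^{n-1}(x_i + x_{i+1})\left(x_i\,y_{i+1} - x_{i+1}\,y_i\right), \qquad
C_y = \frac{1}{6A}\sum_{i=0}^{n-1}(y_i + y_{i+1})\left(x_i\,y_{i+1} - x_{i+1}\,y_i\right)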
For those having difficulty understanding the sigma notation in those formulas, here is some C++ code showing how to do the computation:
#include <iostream>

struct Point2D
{
    double x;
    double y;
};

Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    // For all vertices except last
    int i = 0;
    for (i = 0; i < vertexCount - 1; ++i)
    {
        x0 = vertices[i].x;
        y0 = vertices[i].y;
        x1 = vertices[i+1].x;
        y1 = vertices[i+1].y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
    }

    // Do last vertex separately to avoid performing an expensive
    // modulus operation in each iteration.
    x0 = vertices[i].x;
    y0 = vertices[i].y;
    x1 = vertices[0].x;
    y1 = vertices[0].y;
    a = x0*y1 - x1*y0;
    signedArea += a;
    centroid.x += (x0 + x1)*a;
    centroid.y += (y0 + y1)*a;

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}

int main()
{
    Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
    size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
    Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
    std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
I've only tested this for a square polygon in the upper-right x/y quadrant.
If you don't mind performing two (potentially expensive) extra modulus operations in each iteration, then you can simplify the previous compute2DPolygonCentroid function to the following:
Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    // For all vertices
    int i = 0;
    for (i = 0; i < vertexCount; ++i)
    {
        x0 = vertices[i].x;
        y0 = vertices[i].y;
        x1 = vertices[(i+1) % vertexCount].x;
        y1 = vertices[(i+1) % vertexCount].y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
    }

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}
The centroid can be calculated as the weighted sum of the centroids of the triangles into which the polygon can be partitioned, using each triangle's signed area as its weight.
Here is the C source code for such an algorithm:
/*
Written by Joseph O'Rourke
orourke#cs.smith.edu
October 27, 1995
Computes the centroid (center of gravity) of an arbitrary
simple polygon via a weighted sum of signed triangle areas,
weighted by the centroid of each triangle.
Reads x,y coordinates from stdin.
NB: Assumes points are entered in ccw order!
E.g., input for square:
0 0
10 0
10 10
0 10
This solves Exercise 12, p.47, of my text,
Computational Geometry in C. See the book for an explanation
of why this works. Follow links from
http://cs.smith.edu/~orourke/
*/
#include <stdio.h>
#define DIM 2 /* Dimension of points */
typedef int tPointi[DIM]; /* type integer point */
typedef double tPointd[DIM]; /* type double point */
#define PMAX 1000 /* Max # of pts in polygon */
typedef tPointi tPolygoni[PMAX];/* type integer polygon */
int Area2( tPointi a, tPointi b, tPointi c );
void FindCG( int n, tPolygoni P, tPointd CG );
int ReadPoints( tPolygoni P );
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c );
void PrintPoint( tPointd p );
int main()
{
int n;
tPolygoni P;
tPointd CG;
n = ReadPoints( P );
FindCG( n, P ,CG);
printf("The cg is ");
PrintPoint( CG );
}
/*
Returns twice the signed area of the triangle determined by a,b,c,
positive if a,b,c are oriented ccw, and negative if cw.
*/
int Area2( tPointi a, tPointi b, tPointi c )
{
return
(b[0] - a[0]) * (c[1] - a[1]) -
(c[0] - a[0]) * (b[1] - a[1]);
}
/*
Returns the cg in CG. Computes the weighted sum of
each triangle's area times its centroid. Twice area
and three times centroid is used to avoid division
until the last moment.
*/
void FindCG( int n, tPolygoni P, tPointd CG )
{
int i;
double A2, Areasum2 = 0; /* Partial area sum */
tPointi Cent3;
CG[0] = 0;
CG[1] = 0;
for (i = 1; i < n-1; i++) {
Centroid3( P[0], P[i], P[i+1], Cent3 );
A2 = Area2( P[0], P[i], P[i+1]);
CG[0] += A2 * Cent3[0];
CG[1] += A2 * Cent3[1];
Areasum2 += A2;
}
CG[0] /= 3 * Areasum2;
CG[1] /= 3 * Areasum2;
return;
}
/*
Returns three times the centroid. The factor of 3 is
left in to permit division to be avoided until later.
*/
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c )
{
c[0] = p1[0] + p2[0] + p3[0];
c[1] = p1[1] + p2[1] + p3[1];
return;
}
void PrintPoint( tPointd p )
{
int i;
putchar('(');
for ( i=0; i<DIM; i++) {
printf("%f",p[i]);
if (i != DIM - 1) putchar(',');
}
putchar(')');
putchar('\n');
}
/*
Reads in the coordinates of the vertices of a polygon from stdin,
puts them into P, and returns n, the number of vertices.
The input is assumed to be pairs of whitespace-separated coordinates,
one pair per line. The number of points is not part of the input.
*/
int ReadPoints( tPolygoni P )
{
int n = 0;
printf("Polygon:\n");
printf(" i x y\n");
while ( (n < PMAX) && (scanf("%d %d",&P[n][0],&P[n][1]) != EOF) ) {
printf("%3d%4d%4d\n", n, P[n][0], P[n][1]);
++n;
}
if (n < PMAX)
printf("n = %3d vertices read\n",n);
else
printf("Error in ReadPoints:\too many points; max is %d\n", PMAX);
putchar('\n');
return n;
}
There's a polygon centroid article on the CGAFaq (comp.graphics.algorithms FAQ) wiki that explains it.
boost::geometry::centroid(your_polygon, p);
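That one-liner is Boost.Geometry's built-in centroid algorithm. A minimal usage sketch (the WKT string and the point/polygon types are just an example; Boost.Geometry's default polygon expects a closed, clockwise outer ring):

#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;

int main()
{
    typedef bg::model::d2::point_xy<double> point;
    bg::model::polygon<point> your_polygon;
    // Build the polygon from well-known text.
    bg::read_wkt("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))", your_polygon);

    point p;
    bg::centroid(your_polygon, p);
    std::cout << "Centroid: " << bg::get<0>(p) << ", " << bg::get<1>(p) << "\n";
}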
Here is Emile Cormier's algorithm without duplicated code or expensive modulus operations, best of both worlds:
#include <iostream>
using namespace std;

struct Point2D
{
    double x;
    double y;
};

Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
    Point2D centroid = {0, 0};
    double signedArea = 0.0;
    double x0 = 0.0; // Current vertex X
    double y0 = 0.0; // Current vertex Y
    double x1 = 0.0; // Next vertex X
    double y1 = 0.0; // Next vertex Y
    double a = 0.0;  // Partial signed area

    int lastdex = vertexCount - 1;
    const Point2D* prev = &(vertices[lastdex]);
    const Point2D* next;

    // For all vertices in a loop
    for (int i = 0; i < vertexCount; ++i)
    {
        next = &(vertices[i]);
        x0 = prev->x;
        y0 = prev->y;
        x1 = next->x;
        y1 = next->y;
        a = x0*y1 - x1*y0;
        signedArea += a;
        centroid.x += (x0 + x1)*a;
        centroid.y += (y0 + y1)*a;
        prev = next;
    }

    signedArea *= 0.5;
    centroid.x /= (6.0*signedArea);
    centroid.y /= (6.0*signedArea);

    return centroid;
}

int main()
{
    Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
    size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
    Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
    std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
Break it into triangles, find the area and centroid of each, then calculate the average of all the partial centroids using the partial areas as weights. With concavity some of the areas could be negative.