I know there are many similar questions for this issue, such as this one, but I can't seem to figure out what is going wrong in my program.
I am attempting to create a unit sphere using the naive longitude/latitude method, and then wrap a texture around it using UV coordinates.
I am seeing the classic vertical seam issue, but I'm also seeing some strangeness at both poles.
North Pole...
South Pole...
Seam...
The images are from a sphere with 180 stacks and 360 slices.
I create it as follows.
First, here are a couple of convenience structures I'm using...
struct Point {
    float x;
    float y;
    float z;
    float u;
    float v;
};

struct Quad {
    Point lower_left;   // Lower left corner of quad
    Point lower_right;  // Lower right corner of quad
    Point upper_left;   // Upper left corner of quad
    Point upper_right;  // Upper right corner of quad
};
I first specify a sphere which is '_stacks' high and '_slices' wide.
float* Sphere::generate_glTriangle_array(int& num_elements) const
{
int elements_per_point = 5; //xyzuv
int points_per_triangle = 3;
int triangles_per_mesh = _stacks * _slices * 2; // 2 triangles makes a quad
num_elements = triangles_per_mesh * points_per_triangle * elements_per_point;
float *buff = new float[num_elements];
int i = 0;
Quad q;
for (int stack=0; stack<_stacks; ++stack)
{
for (int slice=0; slice<_slices; ++slice)
{
q = generate_sphere_quad(stack, slice);
load_quad_into_array(q, buff, i);
}
}
return buff;
}
Quad Sphere::generate_sphere_quad(int stack, int slice) const
{
Quad q;
std::cout << "Stack " << stack << ", Slice: " << slice << std::endl;
std::cout << " Lower left...";
q.lower_left = generate_sphere_coord(stack, slice);
std::cout << " Lower right...";
q.lower_right = generate_sphere_coord(stack, slice+1);
std::cout << " Upper left...";
q.upper_left = generate_sphere_coord(stack+1, slice);
std::cout << " Upper right...";
q.upper_right = generate_sphere_coord(stack+1, slice+1);
std::cout << std::endl;
return q;
}
Point Sphere::generate_sphere_coord(int stack, int slice) const
{
Point p;
p.y = 2.0 * stack / _stacks - 1.0;
float r = sqrt(1 - p.y * p.y);
float angle = 2.0 * M_PI * slice / _slices;
p.x = r * sin(angle);
p.z = r * cos(angle);
p.u = (0.5 + ( (atan2(p.z, p.x)) / (2 * M_PI) ));
p.v = (0.5 + ( (asin(p.y)) / M_PI ));
std::cout << " Point: (x: " << p.x << ", y: " << p.y << ", z: " << p.z << ") [u: " << p.u << ", v: " << p.v << "]" << std::endl;
return p;
}
I then load my array, specifying vertices of two CCW triangles for each Quad...
void Sphere::load_quad_into_array(const Quad& q, float* buff, int& buff_idx, bool counter_clockwise=true)
{
if (counter_clockwise)
{
// First triangle
load_point_into_array(q.lower_left, buff, buff_idx);
load_point_into_array(q.upper_right, buff, buff_idx);
load_point_into_array(q.upper_left, buff, buff_idx);
// Second triangle
load_point_into_array(q.lower_left, buff, buff_idx);
load_point_into_array(q.lower_right, buff, buff_idx);
load_point_into_array(q.upper_right, buff, buff_idx);
}
else
{
// First triangle
load_point_into_array(q.lower_left, buff, buff_idx);
load_point_into_array(q.upper_left, buff, buff_idx);
load_point_into_array(q.upper_right, buff, buff_idx);
// Second triangle
load_point_into_array(q.lower_left, buff, buff_idx);
load_point_into_array(q.upper_right, buff, buff_idx);
load_point_into_array(q.lower_right, buff, buff_idx);
}
}
void Sphere::load_point_into_array(const Point& p, float* buff, int& buff_idx)
{
buff[buff_idx++] = p.x;
buff[buff_idx++] = p.y;
buff[buff_idx++] = p.z;
buff[buff_idx++] = p.u;
buff[buff_idx++] = p.v;
}
My vertex and fragment shaders are simple...
// Vertex shader
#version 450 core
in vec3 vert;
in vec2 texcoord;
uniform mat4 matrix;
out FS_INPUTS {
vec2 i_texcoord;
} tex_data;
void main(void) {
tex_data.i_texcoord = texcoord;
gl_Position = matrix * vec4(vert, 1.0);
}
// Fragment shader
#version 450 core
in FS_INPUTS {
vec2 i_texcoord;
};
layout (binding=1) uniform sampler2D tex_id;
out vec4 color;
void main(void) {
color = texture(tex_id, texcoord);
}
My draw command is:
glDrawArrays(GL_TRIANGLES, 0, num_elements/5);
Thanks!
First of all, this code does some funny extra work:
Point Sphere::generate_sphere_coord(int stack, int slice) const
{
Point p;
p.y = 2.0 * stack / _stacks - 1.0;
float r = sqrt(1 - p.y * p.y);
float angle = 2.0 * M_PI * slice / _slices;
p.x = r * sin(angle);
p.z = r * cos(angle);
p.u = (0.5 + ( (atan2(p.z, p.x)) / (2 * M_PI) ));
p.v = (0.5 + ( (asin(p.y)) / M_PI ));
return p;
}
Calling cos and sin just to call atan2 on the result is extra work in the best case, and in the worst case you might get the wrong branch cuts. You can calculate p.u directly from slice and _slices instead.
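A minimal sketch of that suggestion (my own wording, assuming the question's Point struct and the _stacks/_slices members): the position math and v are unchanged; only u now comes straight from the loop index:
Point Sphere::generate_sphere_coord(int stack, int slice) const
{
    Point p;
    p.u = static_cast<float>(slice) / _slices;            // 0..1 around the equator, no atan2 needed
    p.y = 2.0f * stack / static_cast<float>(_stacks) - 1.0f;
    float r = std::sqrt(1.0f - p.y * p.y);
    float angle = 2.0f * static_cast<float>(M_PI) * p.u;
    p.x = r * std::sin(angle);
    p.z = r * std::cos(angle);
    p.v = 0.5f + std::asin(p.y) / static_cast<float>(M_PI);
    return p;
}
A side benefit: when generate_sphere_quad asks for slice + 1 on the last slice, u becomes exactly 1.0 instead of wrapping back to 0.0, which is the seam fix discussed next.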
The Seam
You are going to have a seam in your sphere. This is normal, most models will have a seam (or many seams) in their UV maps somewhere. The problem is that the UV coordinates should still increase linearly next to the seam. For example, think about a loop of vertices that go around the globe's equator. At some point, the UV coordinates will wrap around, something like this:
0.8, 0.9, 0.0, 0.1, 0.2
The problem is that you'll get four quads, but one of them will be wrong:
quad 1: u = 0.8 ... 0.9
quad 2: u = 0.9 ... 0.0 <<----
quad 3: u = 0.0 ... 0.1
quad 4: u = 0.1 ... 0.2
Look at how messed up quad 2 is. You will instead have to generate the following data:
quad 1: u = 0.8 ... 0.9
quad 2: u = 0.9 ... 1.0
quad 3: u = 0.0 ... 0.1
quad 4: u = 0.1 ... 0.2
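In code terms (a hypothetical fragment, assuming u is derived from the slice index as in the sketch above), the right-hand edge of each quad should use slice + 1, so the last quad's u runs up to exactly 1.0:
// Per-quad texture coordinates from the slice index (illustrative only).
float u0 = static_cast<float>(slice)     / _slices;
float u1 = static_cast<float>(slice + 1) / _slices;  // == 1.0f on the last slice, not 0.0f
The vertices along the seam are duplicated: identical positions, but one copy carries u = 1.0 and the other u = 0.0.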
A Fixed Version
Here is a sketch of a fixed version.
namespace {

const float pi = std::atan(1.0f) * 4.0f;

// Generate point from the u, v coordinates in (0..1, 0..1)
Point sphere_point(float u, float v) {
    float r = std::sin(pi * v);
    return Point{
        r * std::cos(2.0f * pi * u),
        r * std::sin(2.0f * pi * u),
        std::cos(pi * v),
        u,
        v
    };
}

}

// Create array of points with quads that make a unit sphere.
std::vector<Point> sphere(int hSize, int vSize) {
    std::vector<Point> pt;
    for (int i = 0; i < hSize; i++) {
        for (int j = 0; j < vSize; j++) {
            float u0 = (float)i / (float)hSize;
            float u1 = (float)(i + 1) / (float)hSize;
            float v0 = (float)j / (float)vSize;
            float v1 = (float)(j + 1) / (float)vSize;
            // Create quad as two triangles.
            pt.push_back(sphere_point(u0, v0));
            pt.push_back(sphere_point(u1, v0));
            pt.push_back(sphere_point(u0, v1));
            pt.push_back(sphere_point(u0, v1));
            pt.push_back(sphere_point(u1, v0));
            pt.push_back(sphere_point(u1, v1));
        }
    }
    return pt;
}
Note that there is some easy optimization you could do, and also note that due to rounding errors, the seam might not line up quite correctly. These are left as an exercise for the reader.
More Problems
Even with the fixed version, you will likely see artifacts at the poles. This is because the screen space texture coordinate derivatives have a singularity at the poles.
The recommended way to fix this is to use a cube map texture instead. This will also greatly simplify the sphere geometry data, since you can completely eliminate the UV coordinates and you won't have a seam.
As a kludge, you can enable anisotropic filtering instead.
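For reference, a hedged sketch of that kludge, assuming the EXT_texture_filter_anisotropic extension (promoted to core in OpenGL 4.6) is available and that tex is the sphere texture's object name:
// Enable anisotropic filtering on the sphere texture (sketch only; the
// constants come from EXT_texture_filter_anisotropic / GL 4.6).
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);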
Related
I am converting this depth image to a pcl::PointCloud.
using the following:
PointCloud::Ptr PointcloudUtils::RGBDtoPCL(cv::Mat depth_image, Eigen::Matrix3f& _intrinsics)
{
PointCloud::Ptr pointcloud(new PointCloud);
float fx = _intrinsics(0, 0);
float fy = _intrinsics(1, 1);
float cx = _intrinsics(0, 2);
float cy = _intrinsics(1, 2);
float factor = 1;
depth_image.convertTo(depth_image, CV_32F); // convert the image data to float type
if (!depth_image.data) {
std::cerr << "No depth data!!!" << std::endl;
exit(EXIT_FAILURE);
}
pointcloud->width = depth_image.cols; //Dimensions must be initialized to use 2-D indexing
pointcloud->height = depth_image.rows;
pointcloud->resize(pointcloud->width*pointcloud->height);
#pragma omp parallel for
for (int v = 0; v < depth_image.rows; v += 4)
{
for (int u = 0; u < depth_image.cols; u += 4)
{
float Z = depth_image.at<float>(v, u) / factor;
PointT p;
p.z = Z;
p.x = (u - cx) * Z / fx;
p.y = (v - cy) * Z / fy;
p.z = p.z / 1000;
p.x = p.x / 1000;
p.y = p.y / 1000;
pointcloud->points.push_back(p);
}
}
return pointcloud;
}
This works great. I have run some processing on the cloud, and now I need to convert the point cloud back to a cv::Mat depth image. I cannot find an example for this, and am having trouble getting my head around it. What is the opposite of the above function?
How can I convert a pcl::PointCloud back to a cv::Mat?
Thank you.
This is untested code, since I don't have the Point Cloud Library on my machine.
From your own conversion code I am assuming your image is a single-channel image.
void PCL2Mat(PointCloud::Ptr pointcloud, cv::Mat& depth_image, int original_width, int original_height)
{
if (!depth_image.empty())
depth_image.release();
depth_image.create(original_height, original_width, CV_32F);
int count = 0;
#pragma omp parallel for
for (int v = 0; v < depth_image.rows; ++v)
{
for (int u = 0; u < depth_image.cols; ++u)
{
depth_image.at<float>(v, u) = pointcloud->points.at(count++).z * 1000;
}
}
depth_image.convertTo(depth_image,CV_8U);
}
I don't know about OpenCV methods, but in case you do something that makes your point cloud unstructured your process could be something like this
% rescale the points by 1000
p.x = p.x * 1000; p.y = p.y * 1000; p.z = p.z * 1000;
% project points onto the image plane and correct for the center point + factor
image_p.x = ( p.x * fx / p.z + cx ) * factor;
image_p.y = ( p.y * fy / p.z + cy ) * factor;
Now depending on what you have done with the point cloud, the points might not map exactly to image matrix pixel center points (or the top left corner in some applications), or you might be missing points -> NaN/0 value pixels. How you process that is up to you, but the simplest way would be to cast image_p.x and image_p.y to integers, make sure they are within the image boundaries, and set
depth_image.at<float>(image_p.y, image_p.x) = p.z;
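Pulling that together, here is an untested C++ sketch of the unstructured case (the names are my own assumptions; it reuses the fx/fy/cx/cy intrinsics from the question, assumes the cloud points are in metres, and keeps the nearest point when several land on the same pixel):
#include <cmath>
#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

cv::Mat cloudToDepthImage(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                          int width, int height,
                          float fx, float fy, float cx, float cy)
{
    cv::Mat depth = cv::Mat::zeros(height, width, CV_32F);   // pixels nothing maps to stay 0
    for (const auto& p : cloud.points)
    {
        if (!(p.z > 0.0f)) continue;                          // skip NaN / behind-camera points
        int u = static_cast<int>(std::round(p.x * fx / p.z + cx));
        int v = static_cast<int>(std::round(p.y * fy / p.z + cy));
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        float z_mm = p.z * 1000.0f;                           // metres back to millimetres
        float& dst = depth.at<float>(v, u);
        if (dst == 0.0f || z_mm < dst)                        // keep the nearest point per pixel
            dst = z_mm;
    }
    return depth;
}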
I found an example online that shows how to draw a cone in OpenGL, which is located here: It was written in C++, and so I translated it to C#. Here is the new code:
public void RenderCone(Vector3 d, Vector3 a, float h, float rd, int n)
{
Vector3 c = new Vector3(a + (-d * h));
Vector3 e0 = Perp(d);
Vector3 e1 = Vector3.Cross(e0, d);
float angInc = (float)(360.0 / n * GrimoireMath.Pi / 180);
// calculate points around directrix
List<Vector3> pts = new List<Vector3>();
for (int i = 0; i < n; ++i)
{
float rad = angInc * i;
Vector3 p = c + (((e0 * (float)Math.Cos((rad)) + (e1 * (float)Math.Sin(rad))) * rd));
pts.Add(p);
}
// draw cone top
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(a);
for (int i = 0; i < n; ++i)
{
GL.Vertex3(pts[i]);
}
GL.End();
// draw cone bottom
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(c);
for (int i = n - 1; i >= 0; --i)
{
GL.Vertex3(pts[i]);
}
GL.End();
}
public Vector3 Perp(Vector3 v)
{
float min = Math.Abs(v.X);
Vector3 cardinalAxis = new Vector3(1, 0, 0);
if (Math.Abs(v.Y) < min)
{
min = Math.Abs(v.Y);
cardinalAxis = new Vector3(0, 1, 0);
}
if (Math.Abs(v.Z) < min)
{
cardinalAxis = new Vector3(0, 0, 1);
}
return Vector3.Cross(v, cardinalAxis);
}
I think I am using the parameters correctly (the page isn't exactly coherent in terms of actual function usage). Here is the legend that the original creator supplied:
But when I enter the following as parameters:
RenderCone(new Vector3(0.0f, 1.0f, 0.0f), new Vector3(1.0f, 1.0f, 1.0f), 20.0f, 10.0f, 8);
I receive this (wireframe enabled):
As you can see, I'm missing a slice, either at the very beginning, or the very end. Does anyone know what's wrong with this method? Or what I could be doing wrong that would cause an incomplete cone?
// draw cone bottom
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(c);
for (int i = n - 1; i >= 0; --i)
{
GL.Vertex3(pts[i]);
}
GL.End();
That connects all the vertices to each other and to the center, but one connection is missing: nothing specifies the connection from the first to the last vertex. Adding GL.Vertex3(pts[n-1]); after the loop would add the missing connection.
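The same fix as a sketch in plain C++ with the legacy OpenGL API (the question's code is C# / OpenTK, but the calls map one-to-one; pts and c are assumed to be simple xyz vectors):
// Draw the cone bottom, repeating the first perimeter vertex to close the fan.
glBegin(GL_TRIANGLE_FAN);
glVertex3f(c.x, c.y, c.z);                             // centre of the base
for (int i = n - 1; i >= 0; --i)
    glVertex3f(pts[i].x, pts[i].y, pts[i].z);
glVertex3f(pts[n - 1].x, pts[n - 1].y, pts[n - 1].z); // the missing connection
glEnd();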
The solution was actually extremely simple: I needed to increase the number of slices by 1. Pretty special if you ask me.
public void RenderCone(Vector3 baseToApexLength, Vector3 apexLocation, float height, float radius, int slices)
{
Vector3 c = new Vector3(apexLocation + (-baseToApexLength * height));
Vector3 e0 = Perpendicular(baseToApexLength);
Vector3 e1 = Vector3.Cross(e0, baseToApexLength);
float angInc = (float)(360.0 / slices * GrimoireMath.Pi / 180);
slices++; // this was the fix for my problem.
/**
* Compute the Vertices around the Directrix
*/
Vector3[] vertices = new Vector3[slices];
for (int i = 0; i < vertices.Length; ++i)
{
float rad = angInc * i;
Vector3 p = c + (((e0 * (float)Math.Cos((rad)) + (e1 * (float)Math.Sin(rad))) * radius));
vertices[i] = p;
}
/**
* Draw the Top of the Cone.
*/
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(apexLocation);
for (int i = 0; i < slices; ++i)
{
GL.Vertex3(vertices[i]);
}
GL.End();
/**
* Draw the Base of the Cone.
*/
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(c);
for (int i = slices - 1; i >= 0; --i)
{
GL.Vertex3(vertices[i]);
}
GL.End();
}
I am trying to implement an omni-directional light source (a.k.a., point light source) in my raytracing program in C++. I am not getting the expected results, but I can't figure out the problem. Maybe someone can see what I am doing wrong.
I have included the two functions that are responsible for the raytracing and the lighting. The ClosestIntersection function finds the closest intersection between a ray and a triangle; that is used later in the DirectLight function.
I would really appreciate any help.
#include <iostream>
#include <glm/glm.hpp>
#include <SDL.h>
#include "SDLauxiliary.h"
#include "TestModel.h"
#include "math.h"
using namespace std;
using glm::vec3;
using glm::mat3;
// ----------------------------------------------------------------------------
// GLOBAL VARIABLES
const int SCREEN_WIDTH = 500;
const int SCREEN_HEIGHT = 500;
SDL_Surface* screen;
int t;
vector<Triangle> triangles;
float focalLength = 900;
vec3 cameraPos(0, 0, -4.5);
vec3 lightPos(0.5, 0.5, 0);
vec3 lightColor = 14.f * vec3(1,1,1);
// Translate camera
float translation = 0.1; // use this to set translation increment
// Rotate camera
float yaw;
vec3 trueCameraPos;
const float PI = 3.1415927;
// ----------------------------------------------------------------------------
// CLASSES
class Intersection;
// ----------------------------------------------------------------------------
// FUNCTIONS
void Update();
void Draw();
bool ClosestIntersection(vec3 start, vec3 dir, const vector<Triangle>& triangles,
Intersection& closestIntersection);
vec3 DirectLight(const Intersection& i);
// ----------------------------------------------------------------------------
// STRUCTURES
struct Intersection
{
vec3 position;
float distance;
int triangleIndex;
};
float m = std::numeric_limits<float>::max();
int main(int argc, char* argv[])
{
LoadTestModel(triangles);
screen = InitializeSDL(SCREEN_WIDTH, SCREEN_HEIGHT);
t = SDL_GetTicks(); // Set start value for timer.
while (NoQuitMessageSDL())
{
Update();
Draw();
}
SDL_SaveBMP(screen, "screenshot.bmp");
return 0;
}
void Update()
{
// Compute frame time:
int t2 = SDL_GetTicks();
float dt = float(t2 - t);
t = t2;
cout << "Render time: " << dt << " ms." << endl;
}
void Draw()
{
if (SDL_MUSTLOCK(screen))
SDL_LockSurface(screen);
for (int y = 0; y<SCREEN_HEIGHT; ++y)
{
for (int x = 0; x < SCREEN_WIDTH; ++x)
{
vec3 start = cameraPos;
vec3 dir(x - SCREEN_WIDTH / 2, y - SCREEN_HEIGHT / 2, focalLength);
Intersection intersection;
if (ClosestIntersection(start, dir, triangles, intersection))
{
//vec3 theColor = triangles[intersection.triangleIndex].color;
vec3 theColor = DirectLight(intersection);
PutPixelSDL(screen, x, y, theColor);
}
else
{
vec3 color(0, 0, 0);
PutPixelSDL(screen, x, y, color);
}
}
}
if (SDL_MUSTLOCK(screen))
SDL_UnlockSurface(screen);
SDL_UpdateRect(screen, 0, 0, 0, 0);
}
bool ClosestIntersection(vec3 s, vec3 d,
const vector<Triangle>& triangles, Intersection& closestIntersection)
{
closestIntersection.distance = m;
for (size_t i = 0; i < triangles.size(); i++)
{
vec3 v0 = triangles[i].v0;
vec3 v1 = triangles[i].v1;
vec3 v2 = triangles[i].v2;
vec3 u = v1 - v0;
vec3 v = v2 - v0;
vec3 b = s - v0;
vec3 x;
// Determinant of A = [-d u v]
float det = -d.x * ((u.y * v.z) - (v.y * u.z)) -
u.x * ((-d.y * v.z) - (v.y * -d.z)) +
v.x * ((-d.y * u.z) - (u.y * -d.z));
// Cramer's Rule for t = x.x
x.x = (b.x * ((u.y * v.z) - (v.y * u.z)) -
u.x * ((b.y * v.z) - (v.y * b.z)) +
v.x * ((b.y * u.z) - (u.y * b.z))) / det;
if (x.x >= 0)
{
// Cramer's Rule for u = x.y
x.y = (-d.x * ((b.y * v.z) - (v.y * b.z)) -
b.x * ((-d.y * v.z) - (v.y * -d.z)) +
v.x * ((-d.y * b.z) - (b.y * -d.z))) / det;
// Cramer's Rule for v = x.z
x.z = (-d.x * ((u.y * b.z) - (b.y * u.z)) -
u.x * ((-d.y * b.z) - (b.y * -d.z)) +
b.x * ((-d.y * u.z) - (u.y * -d.z))) / det;
if (x.y >= 0 && x.z >= 0 && x.y + x.z <= 1 && x.x < closestIntersection.distance)
{
closestIntersection.position = x;
closestIntersection.distance = x.x;
closestIntersection.triangleIndex = i;
}
}
}
//end of for loop
if (closestIntersection.distance != m)
{
return true;
}
else
{
return false;
}
}
vec3 DirectLight(const Intersection& i)
{
vec3 n = triangles[i.triangleIndex].normal;
vec3 r = lightPos - i.position;
float R2 = r.x * r.x + r.y * r.y + r.z * r.z;
vec3 D = (lightColor * fmaxf((glm::dot(glm::normalize(r), n)), 0)) / (4 * PI * R2);
return D;
}
If I'm understanding the code in ClosestIntersection correctly, here's what it's doing for each triangle:
Let u,v be the vectors from one vertex of the triangle to the other two vertices. Let d be (the reverse of) the direction of the ray we're considering.
And let b be the vector from that vertex of the triangle to the camera.
Find p,q,r so that b = pd+qu+rv (p,q,r are what your code calls x.x, x.y, x.z).
Now the ray meets the triangle if p>0, q>=0, r>=0, q+r<=1 and the distance to the intersection point is p.
So, the conditions on q,r make sense; the idea is that b-qu-rv is the vector from the camera to the relevant point in the triangle and it's in direction d. Your distances aren't really distances, but along a single ray they're the same multiple of the actual distance, which means that this works fine for determining which triangle you've hit, and that's all you use them for. So far, so good.
But then you say closestIntersection.position = x; and surely that's all wrong, because this x isn't in the same coordinate system as your camera location, triangle vertices, etc. It's in this funny "how much of d, how much of u, how much of v" coordinate system which isn't even the same from one triangle to the next. (Which is why you are getting discontinuities at triangle boundaries even within a single face, I think.)
Try setting it to v0+x.y*(v1-v0)+x.z*(v2-v0) instead (I think this is right; it's meant to be the actual point where the ray crosses the triangle, in the same coordinates as all your other points) and see what it does.
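A minimal sketch of that change inside the acceptance branch of ClosestIntersection, using the question's own variable names (s, d, v0, v1, v2, x):
if (x.y >= 0 && x.z >= 0 && x.y + x.z <= 1 && x.x < closestIntersection.distance)
{
    // Store the world-space intersection point, not the (t, u, v) solution vector.
    closestIntersection.position = v0 + x.y * (v1 - v0) + x.z * (v2 - v0);
    // Equivalently: closestIntersection.position = s + x.x * d;
    closestIntersection.distance = x.x;
    closestIntersection.triangleIndex = i;
}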
This isn't a super-great answer, but I managed to make your code work without the strange shading discontinuities. The problem happens in ClosestIntersection and maybe Gareth's answer covers it. I need to stop looking at this now, but I wanted to show you what I have before I leave, and I need an Answer to post some code.
// This starts with some vec3 helper functions which make things
// easier to look at
float Dot(const vec3& a, const vec3& b) {
return a.x * b.x + a.y * b.y + a.z * b.z;
}
vec3 Cross(const vec3& a, const vec3& b) {
return vec3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
}
float L2(const vec3& v) { return v.x*v.x + v.y*v.y + v.z*v.z; }
float Abs(const vec3& v) { return std::sqrt(L2(v)); }
// Here is the replacement version of ClosestIntersection
bool ClosestIntersection(vec3 cam, vec3 dir,
const vector<Triangle>& triangles, Intersection& closestIntersection)
{
closestIntersection.distance = m;
vec3 P0 = cam;
vec3 P1 = cam + dir;
for (size_t i = 0; i < triangles.size(); ++i) {
vec3 v0 = triangles[i].v0;
vec3 v1 = triangles[i].v1;
vec3 v2 = triangles[i].v2;
// Dan Sunday
// http://geomalgorithms.com/a06-_intersect-2.html
vec3 u = v1 - v0;
vec3 v = v2 - v0;
// w = P-v0, solve w = su +tv (s, t are parametric scalars)
vec3 n = Cross(u, v);
float ri = Dot(n, (v0 - P0)) / Dot(n, (P1 - P0));
vec3 Pi = P0 + ri * (P1- P0);
vec3 w = Pi - v0;
// s = w . (n x v) / (u . (n x v))
// t = w . (n x u) / (v . (n x u))
float s = Dot(w, Cross(n, v)) / Dot(u, Cross(n, v));
float t = Dot(w, Cross(n, u)) / Dot(v, Cross(n, u));
if(s >= 0 && t >= 0 && s+t <= 1) {
float dist = Abs(cam - Pi);
if(dist < closestIntersection.distance) {
closestIntersection.position = Pi;
closestIntersection.distance = dist;
closestIntersection.triangleIndex = int(i);
}
}
}
return closestIntersection.distance != m;
}
Good luck.
I have a function called getWorldPosition that is supposed to return a vec3 representing the current position of any VisualObject (a super class I defined).
glm::vec3 VisualObject::getWorldPosition()
{
glm::mat4 totalTransformation = getParentModelMatrix() * modelMatrix;
return totalTransformation[3].xyz;
} // end getWorldPosition
I am trying to use the getWorldPosition function to calculate the distance between two objects in the world.
for (int i = enemiesOnBoard.size() - 1; i >= 0; i--){
EnemySphere* s = (EnemySphere*)enemiesOnBoard.at(i);
glm::vec3 sPosition = s->getWorldPosition();
cout << sPosition[0] << endl;
for (int j = cannonBallsOnBoard.size() - 1; j >= 0; j--){
CannonBall* cb = (CannonBall*)cannonBallsOnBoard.at(i);
glm::vec3 cbPosition = cb->getWorldPosition();
GLfloat radiiSum = s->sRadius + cb->sRadius;
GLfloat distance = calcDistance(sPosition, cbPosition);
//cout << distance << endl;
if (distance < radiiSum){
//cout << "COLLISION BABY!" << endl;
}
}
}
The problem is that every call to getWorldPosition is returning a vec3 with 0 for the x, y, z coordinates.
One of the spheres is defined as such,
EnemySphere* s = new EnemySphere();
enemiesOnBoard.push_back(s);
s->setShader(glutObjectShaderProgram);
s->addController(new EnemySphereController(rand() % 8 - 3.5, 1.0));
s->initialize();
addChild(s);
The relevant controller is this:
EnemySphereController::EnemySphereController(GLfloat x, GLfloat r, GLfloat t)
: Controller(), startX(x), rate(r), translation(t)
{ }
void EnemySphereController::update(float elapsedTimeSec){
if (translation < 3.9f){
translation += elapsedTimeSec * rate;
}
else {
target->getParent()->removeChild(target->getObjectSerialNumber());
}
glm::mat4 t4;
t4 = glm::translate(glm::mat4(1.0f), glm::vec3(startX, -2.50f, translation)); //add 0.5 because the sphere is calculated from the center
target->fixedTransformation = t4;
}
I know this is a complicated problem, but do you guys have any ideas on where I can start?
As far as I know, the world position resides in glm::vec3(mat[3], mat[7], mat[11]), so you should change your function to:
glm::vec3 VisualObject::getWorldPosition()
{
glm::mat4 totalTransformation = getParentModelMatrix() * modelMatrix;
return glm::vec3(totalTransformation[3], totalTransformation[7], totalTransformation[11]);
} // end getWorldPosition
The part which you were using as translation is used for projection.
To get the center, I have tried adding each vertex to a running total and dividing by the number of vertices.
I've also tried finding the topmost and bottommost points and taking their midpoint, then doing the same with the leftmost and rightmost points.
Neither of these returned the perfect center, which matters because I'm relying on the center to scale the polygon. I want to scale my polygons so I can put a border around them.
What is the best way to find the centroid of a polygon, given that the polygon may be concave or convex and have many sides of various lengths?
The formula is given here for vertices sorted by their occurrence along the polygon's perimeter.
For those having difficulty understanding the sigma notation in those formulas, here is some C++ code showing how to do the computation:
#include <iostream>
struct Point2D
{
double x;
double y;
};
Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
Point2D centroid = {0, 0};
double signedArea = 0.0;
double x0 = 0.0; // Current vertex X
double y0 = 0.0; // Current vertex Y
double x1 = 0.0; // Next vertex X
double y1 = 0.0; // Next vertex Y
double a = 0.0; // Partial signed area
// For all vertices except last
int i=0;
for (i=0; i<vertexCount-1; ++i)
{
x0 = vertices[i].x;
y0 = vertices[i].y;
x1 = vertices[i+1].x;
y1 = vertices[i+1].y;
a = x0*y1 - x1*y0;
signedArea += a;
centroid.x += (x0 + x1)*a;
centroid.y += (y0 + y1)*a;
}
// Do last vertex separately to avoid performing an expensive
// modulus operation in each iteration.
x0 = vertices[i].x;
y0 = vertices[i].y;
x1 = vertices[0].x;
y1 = vertices[0].y;
a = x0*y1 - x1*y0;
signedArea += a;
centroid.x += (x0 + x1)*a;
centroid.y += (y0 + y1)*a;
signedArea *= 0.5;
centroid.x /= (6.0*signedArea);
centroid.y /= (6.0*signedArea);
return centroid;
}
int main()
{
Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
I've only tested this for a square polygon in the upper-right x/y quadrant.
If you don't mind performing two (potentially expensive) extra modulus operations in each iteration, then you can simplify the previous compute2DPolygonCentroid function to the following:
Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
Point2D centroid = {0, 0};
double signedArea = 0.0;
double x0 = 0.0; // Current vertex X
double y0 = 0.0; // Current vertex Y
double x1 = 0.0; // Next vertex X
double y1 = 0.0; // Next vertex Y
double a = 0.0; // Partial signed area
// For all vertices
int i=0;
for (i=0; i<vertexCount; ++i)
{
x0 = vertices[i].x;
y0 = vertices[i].y;
x1 = vertices[(i+1) % vertexCount].x;
y1 = vertices[(i+1) % vertexCount].y;
a = x0*y1 - x1*y0;
signedArea += a;
centroid.x += (x0 + x1)*a;
centroid.y += (y0 + y1)*a;
}
signedArea *= 0.5;
centroid.x /= (6.0*signedArea);
centroid.y /= (6.0*signedArea);
return centroid;
}
The centroid can be calculated as the weighted sum of the centroids of the triangles it can be partitioned into.
Here is the C source code for such an algorithm:
/*
Written by Joseph O'Rourke
orourke#cs.smith.edu
October 27, 1995
Computes the centroid (center of gravity) of an arbitrary
simple polygon via a weighted sum of signed triangle areas,
weighted by the centroid of each triangle.
Reads x,y coordinates from stdin.
NB: Assumes points are entered in ccw order!
E.g., input for square:
0 0
10 0
10 10
0 10
This solves Exercise 12, p.47, of my text,
Computational Geometry in C. See the book for an explanation
of why this works. Follow links from
http://cs.smith.edu/~orourke/
*/
#include <stdio.h>
#define DIM 2 /* Dimension of points */
typedef int tPointi[DIM]; /* type integer point */
typedef double tPointd[DIM]; /* type double point */
#define PMAX 1000 /* Max # of pts in polygon */
typedef tPointi tPolygoni[PMAX];/* type integer polygon */
int Area2( tPointi a, tPointi b, tPointi c );
void FindCG( int n, tPolygoni P, tPointd CG );
int ReadPoints( tPolygoni P );
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c );
void PrintPoint( tPointd p );
int main()
{
int n;
tPolygoni P;
tPointd CG;
n = ReadPoints( P );
FindCG( n, P ,CG);
printf("The cg is ");
PrintPoint( CG );
}
/*
Returns twice the signed area of the triangle determined by a,b,c,
positive if a,b,c are oriented ccw, and negative if cw.
*/
int Area2( tPointi a, tPointi b, tPointi c )
{
return
(b[0] - a[0]) * (c[1] - a[1]) -
(c[0] - a[0]) * (b[1] - a[1]);
}
/*
Returns the cg in CG. Computes the weighted sum of
each triangle's area times its centroid. Twice area
and three times centroid is used to avoid division
until the last moment.
*/
void FindCG( int n, tPolygoni P, tPointd CG )
{
int i;
double A2, Areasum2 = 0; /* Partial area sum */
tPointi Cent3;
CG[0] = 0;
CG[1] = 0;
for (i = 1; i < n-1; i++) {
Centroid3( P[0], P[i], P[i+1], Cent3 );
A2 = Area2( P[0], P[i], P[i+1]);
CG[0] += A2 * Cent3[0];
CG[1] += A2 * Cent3[1];
Areasum2 += A2;
}
CG[0] /= 3 * Areasum2;
CG[1] /= 3 * Areasum2;
return;
}
/*
Returns three times the centroid. The factor of 3 is
left in to permit division to be avoided until later.
*/
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c )
{
c[0] = p1[0] + p2[0] + p3[0];
c[1] = p1[1] + p2[1] + p3[1];
return;
}
void PrintPoint( tPointd p )
{
int i;
putchar('(');
for ( i=0; i<DIM; i++) {
printf("%f",p[i]);
if (i != DIM - 1) putchar(',');
}
putchar(')');
putchar('\n');
}
/*
Reads in the coordinates of the vertices of a polygon from stdin,
puts them into P, and returns n, the number of vertices.
The input is assumed to be pairs of whitespace-separated coordinates,
one pair per line. The number of points is not part of the input.
*/
int ReadPoints( tPolygoni P )
{
int n = 0;
printf("Polygon:\n");
printf(" i x y\n");
while ( (n < PMAX) && (scanf("%d %d",&P[n][0],&P[n][1]) != EOF) ) {
printf("%3d%4d%4d\n", n, P[n][0], P[n][1]);
++n;
}
if (n < PMAX)
printf("n = %3d vertices read\n",n);
else
printf("Error in ReadPoints:\too many points; max is %d\n", PMAX);
putchar('\n');
return n;
}
There's a polygon centroid article on the CGAFaq (comp.graphics.algorithms FAQ) wiki that explains it.
boost::geometry::centroid(your_polygon, p);
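A hedged usage sketch of that one-liner (Boost.Geometry is header-only; the 10x10 square matches the other answers):
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

int main()
{
    namespace bg = boost::geometry;
    typedef bg::model::d2::point_xy<double> point;
    typedef bg::model::polygon<point> polygon;

    polygon poly;
    // Boost.Geometry expects a closed ring (first point repeated at the end).
    bg::read_wkt("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))", poly);
    bg::correct(poly);   // fix orientation/closure if the input doesn't match the defaults

    point p;
    bg::centroid(poly, p);
    std::cout << "Centroid is (" << bg::get<0>(p) << ", " << bg::get<1>(p) << ")\n";
}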
Here is Emile Cormier's algorithm without duplicated code or expensive modulus operations, best of both worlds:
#include <iostream>
using namespace std;
struct Point2D
{
double x;
double y;
};
Point2D compute2DPolygonCentroid(const Point2D* vertices, int vertexCount)
{
Point2D centroid = {0, 0};
double signedArea = 0.0;
double x0 = 0.0; // Current vertex X
double y0 = 0.0; // Current vertex Y
double x1 = 0.0; // Next vertex X
double y1 = 0.0; // Next vertex Y
double a = 0.0; // Partial signed area
int lastdex = vertexCount-1;
const Point2D* prev = &(vertices[lastdex]);
const Point2D* next;
// For all vertices in a loop
for (int i=0; i<vertexCount; ++i)
{
next = &(vertices[i]);
x0 = prev->x;
y0 = prev->y;
x1 = next->x;
y1 = next->y;
a = x0*y1 - x1*y0;
signedArea += a;
centroid.x += (x0 + x1)*a;
centroid.y += (y0 + y1)*a;
prev = next;
}
signedArea *= 0.5;
centroid.x /= (6.0*signedArea);
centroid.y /= (6.0*signedArea);
return centroid;
}
int main()
{
Point2D polygon[] = {{0.0,0.0}, {0.0,10.0}, {10.0,10.0}, {10.0,0.0}};
size_t vertexCount = sizeof(polygon) / sizeof(polygon[0]);
Point2D centroid = compute2DPolygonCentroid(polygon, vertexCount);
std::cout << "Centroid is (" << centroid.x << ", " << centroid.y << ")\n";
}
Break it into triangles, find the area and centroid of each, then calculate the average of all the partial centroids using the partial areas as weights. With concavity some of the areas could be negative.
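A short sketch of that approach, reusing the Point2D struct from the answers above (an assumption on my part); the polygon is fanned from its first vertex and the signed triangle areas make concave parts come out correctly:
Point2D centroidByTriangulation(const Point2D* v, int n)
{
    double areaSum = 0.0, cx = 0.0, cy = 0.0;
    for (int i = 1; i + 1 < n; ++i)
    {
        // Twice the signed area of triangle (v[0], v[i], v[i+1]); negative if clockwise.
        double a2 = (v[i].x - v[0].x) * (v[i+1].y - v[0].y)
                  - (v[i+1].x - v[0].x) * (v[i].y - v[0].y);
        // Accumulate area-weighted triangle centroids (the factor of 3 is divided out at the end).
        cx += a2 * (v[0].x + v[i].x + v[i+1].x);
        cy += a2 * (v[0].y + v[i].y + v[i+1].y);
        areaSum += a2;
    }
    return Point2D{ cx / (3.0 * areaSum), cy / (3.0 * areaSum) };
}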