Convert a convex path to a triangle list - C++

What is the best way to convert a convex path (described as a set of points) to a list of triangles for OpenGL rendering? Sample code or a demo would be ideal :) Thanks!

It sounds like you are looking for one of the many "convert a polygon to a series of triangles" solutions:
Maybe something in one of these will help:
Ear Clipping
poly2tri (with source code)
If you are trying to understand the concepts, Ear Clipping is a good place to start.
If you need an implementation, start with poly2tri.

If your polygon is really convex and not concave, you can just draw it as a triangle fan. That is guaranteed to work.
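For the fan approach, a minimal sketch (assuming legacy immediate-mode OpenGL and a plain array of 2D points; the names here are placeholders, not part of your path type):

#include <GL/gl.h>

// Draws a convex polygon as a single triangle fan anchored at the first vertex.
void DrawConvexPolygonAsFan(const float (*points)[2], int numPoints)
{
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < numPoints; ++i)
        glVertex2f(points[i][0], points[i][1]);
    glEnd();
}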
Here is an alternative recursive algorithm that I wrote a few years ago. It also triangulates concave polygons and on average generates a much nicer triangulation (e.g. fewer sliver triangles):
void ConcaveTesselator (unsigned a_NumVertices)
{
    unsigned left[32];   // enough space for 2^32 recursions:
    unsigned right[32];
    unsigned stacktop = 0;

    // prepare stack:
    left[0]  = 0;
    right[0] = a_NumVertices - 1;
    stacktop = 1;

    while (stacktop)
    {
        unsigned l, r, m;

        // pop current interval from the stack and subdivide:
        stacktop--;
        l = left[stacktop];
        r = right[stacktop];
        m = (l + r) >> 1;

        // replace this with your triangle drawing function
        // or store the indices l,m,r and draw the triangles
        // as a triangle list later:
        DrawTriangleWithIndices (l, m, r);

        // recursive subdivide:
        if (m - l > 1)
        {
            left[stacktop]  = l;
            right[stacktop] = m;
            stacktop++;
        }
        if (r - m > 1)
        {
            left[stacktop]  = m;
            right[stacktop] = r;
            stacktop++;
        }
    }
}
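One minimal way to hook this up, assuming you would rather collect the indices and draw them later as an indexed triangle list (the global buffer here is only for illustration):

#include <vector>

// Hypothetical index buffer; draw it later with glDrawElements(GL_TRIANGLES, ...).
static std::vector<unsigned> g_triangleIndices;

void DrawTriangleWithIndices(unsigned l, unsigned m, unsigned r)
{
    g_triangleIndices.push_back(l);
    g_triangleIndices.push_back(m);
    g_triangleIndices.push_back(r);
}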

Related

Issues turning loaded meshes into cloth simulation

I'm having a bit of an issue trying to get the meshes I import into my program to have cloth-simulation physics using a particle/spring system. I'm kind of a beginner at graphics programming, so sorry if this is super obvious and I'm just missing something. I'm using C++ with OpenGL, as well as Assimp to import the models. I'm fairly sure my code to calculate the constraints/springs and step each particle is correct, as I tested it with generated meshes (with quads instead of triangles) and it looked fine, but I'm not certain.
I've been using this link to study up on how to actually do this: https://nccastaff.bournemouth.ac.uk/jmacey/MastersProjects/MSc2010/07LuisPereira/Thesis/LuisPereira_Thesis.pdf
What it looks like in-engine: https://www.youtube.com/watch?v=RyAan27wryU
I'm pretty sure it's an issue with the connections/springs, as the imported model that's just a flat plane seems to work fine, for the most part. The other model, though, seems to just fall apart. I keep looking at papers on this, and from what I understand everything should be working right, as I connect the edge/bend springs seemingly correctly, and the physics side seems to work for the flat planes. I really can't figure it out for the life of me! Any tips/help would be GREATLY appreciated! :)
Code for processing Mesh into Cloth:
// Container to temporarily hold faces while we process springs
std::vector<Face> faces;

// Go through the indices and take the ones making up a triangle.
// Indices come from Assimp, so I think this is the right thing to do to get each face?
for (int i = 0; i < this->indices.size(); i += 3)
{
    std::vector<unsigned int> faceIds = { this->indices.at(i), this->indices.at(i + 1), this->indices.at(i + 2) };
    Face face;
    face.vertexIDs = faceIds;
    faces.push_back(face);
}

// Iterate through faces and add constraints when needed.
for (int l = 0; l < faces.size(); l++)
{
    // Adding edge springs.
    Face temp = faces[l];
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[1]));
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[2]));
    makeConstraint(particles.at(temp.vertexIDs[1]), particles.at(temp.vertexIDs[2]));

    // We need to get the bending springs as well, and I've just written a function to do that.
    for (int x = 0; x < faces.size(); x++)
    {
        Face temp2 = faces[x];
        if (l != x)
        {
            verticesShared(temp, temp2);
        }
    }
}
And here's the code where I process the bending springs:
// Container for any indices the two faces have in common.
std::vector<glm::vec2> traversed;

// Loop through both faces' indices to see if they match each other.
for (int i = 0; i < a.vertexIDs.size(); i++)
{
    for (int k = 0; k < b.vertexIDs.size(); k++)
    {
        // If we do get a match, we push a vector into the container holding the two
        // positions within the faces, so we know which ones are equal.
        if (a.vertexIDs.at(i) == b.vertexIDs.at(k))
        {
            traversed.push_back(glm::vec2(i, k));
        }
    }

    // If we're here, it means we have an edge in common, i.e. two vertices shared between the two faces.
    if (traversed.size() == 2)
    {
        // Get the adjacent vertices.
        int face_a_adj_ind = 3 - ((traversed[0].x) + (traversed[1].x));
        int face_b_adj_ind = 3 - ((traversed[0].y) + (traversed[1].y));

        // Turn the stored ones from earlier into the ACTUAL indices from the face. Indices of indices, eh.
        unsigned int adj_1 = a.vertexIDs[face_a_adj_ind];
        unsigned int adj_2 = b.vertexIDs[face_b_adj_ind];

        // And finally, make a bending spring between the two adjacent particles.
        makeConstraint(particles.at(adj_1), particles.at(adj_2));
    }
}
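For reference, a sketch of an alternative way to find the shared edges (this is not the code from the question): key a std::map on each edge's sorted vertex-ID pair, so the second face touching an edge immediately yields the two opposite vertices that the bending spring should join. Face is the struct used above; the function returns the vertex-ID pairs instead of calling makeConstraint directly.

#include <algorithm>
#include <map>
#include <utility>
#include <vector>

std::vector<std::pair<unsigned int, unsigned int>> bendSpringPairs(const std::vector<Face>& faces)
{
    std::vector<std::pair<unsigned int, unsigned int>> pairs;

    // Edge (smaller ID, larger ID) -> vertex ID opposite that edge in the first face seen.
    std::map<std::pair<unsigned int, unsigned int>, unsigned int> firstOpposite;

    for (const Face& f : faces)
    {
        for (int e = 0; e < 3; ++e)
        {
            unsigned int v0  = f.vertexIDs[e];
            unsigned int v1  = f.vertexIDs[(e + 1) % 3];
            unsigned int opp = f.vertexIDs[(e + 2) % 3];
            std::pair<unsigned int, unsigned int> key(std::min(v0, v1), std::max(v0, v1));

            auto it = firstOpposite.find(key);
            if (it == firstOpposite.end())
                firstOpposite[key] = opp;                         // first face on this edge
            else
                pairs.push_back(std::make_pair(it->second, opp)); // second face: bend-spring pair
        }
    }
    return pairs;
}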

Marching Cubes Issues

I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result. I'm looking for guidance or advice about what could be going wrong. I suspect one of the problems may be with the voxel conception, specifically which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to have the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires Boost, but it looks like it should work.
His example, in marchingcubes.cpp, has a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
In the comments it says vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one, vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
    GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
    GLfloat fOffset;
    GLvector sColor;
    GLfloat afCubeValue[8];
    GLvector asEdgeVertex[12];
    GLvector asEdgeNorm[12];

    //Make a local copy of the values at the cube's corners
    for(iVertex = 0; iVertex < 8; iVertex++)
    {
        afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0]*fScale,
                                                fY + a2fVertexOffset[iVertex][1]*fScale,
                                                fZ + a2fVertexOffset[iVertex][2]*fScale);
    }

    //Find which vertices are inside of the surface and which are outside
    iFlagIndex = 0;
    for(iVertexTest = 0; iVertexTest < 8; iVertexTest++)
    {
        if(afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1<<iVertexTest;
    }

    //Find which edges are intersected by the surface
    iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];

    //If the cube is entirely inside or outside of the surface, then there will be no intersections
    if(iEdgeFlags == 0)
    {
        return;
    }

    //Find the point of intersection of the surface with each edge
    //Then find the normal to the surface at those points
    for(iEdge = 0; iEdge < 12; iEdge++)
    {
        //if there is an intersection on this edge
        if(iEdgeFlags & (1<<iEdge))
        {
            fOffset = fGetOffset(afCubeValue[ a2iEdgeConnection[iEdge][0] ],
                                 afCubeValue[ a2iEdgeConnection[iEdge][1] ], fTv);

            asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
            asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
            asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;

            vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
        }
    }

    //Draw the triangles that were found. There can be up to five per cube
    for(iTriangle = 0; iTriangle < 5; iTriangle++)
    {
        if(a2iTriangleConnectionTable[iFlagIndex][3*iTriangle] < 0) break;

        for(iCorner = 0; iCorner < 3; iCorner++)
        {
            iVertex = a2iTriangleConnectionTable[iFlagIndex][3*iTriangle+iCorner];

            vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
            glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
            glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
            glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
        }
    }
}
UPDATE: Github working example, tested
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make the render faster. This class is implemented with a std::map to find and discard already-existing vertices, using a tuple of <vec3, unsigned short>. As you may imagine, a marching cubes algorithm generates structures with thousands if not millions of vertices. An unsigned short can only hold 2^16 = 65536 distinct values (0 to 65535). So, when the output geometry had more vertices than that, the indices started to overflow and the result was a mess, since new vertices overwrote earlier ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
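A sketch of the kind of change involved (the actual indexer class isn't shown here, and glm::vec3 is only an assumption for the key type): widen the stored index to a 32-bit unsigned integer and draw with GL_UNSIGNED_INT instead of GL_UNSIGNED_SHORT.

#include <cstdint>
#include <map>
#include <vector>
#include <glm/glm.hpp>

// Strict weak ordering so glm::vec3 can be used as a std::map key.
struct Vec3Less {
    bool operator()(const glm::vec3& a, const glm::vec3& b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

// Returns the index of v, reusing an existing vertex when possible.
// A 32-bit index does not wrap around at 65536 the way unsigned short does.
std::uint32_t indexOfVertex(std::map<glm::vec3, std::uint32_t, Vec3Less>& seen,
                            std::vector<glm::vec3>& uniqueVertices,
                            const glm::vec3& v)
{
    auto it = seen.find(v);
    if (it != seen.end())
        return it->second;

    std::uint32_t newIndex = static_cast<std::uint32_t>(uniqueVertices.size());
    seen[v] = newIndex;
    uniqueVertices.push_back(v);
    return newIndex;
}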
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg

Optimizing Dijkstra's algorithm

I need a graph-search algorithm that is good enough for our robot-navigation application, and I chose Dijkstra's algorithm.
We are given a grid map which contains free, occupied and unknown cells, where the robot is only permitted to pass through the free cells. The user inputs the starting position and the goal position. In return, I retrieve the sequence of free cells leading the robot from the starting position to the goal, which corresponds to the path.
Since running Dijkstra's algorithm from start to goal would give me a path traced backwards, from goal to start, I decided to run it backwards (starting at the goal) so that I retrieve the path from start to goal directly.
Starting from the goal cell, I have 8 neighbors whose cost is 1 horizontally and vertically, and sqrt(2) diagonally, but only if the cells are reachable (i.e. not out of bounds and free).
Here are the rules observed when updating the neighboring cells; the current cell can only consider a neighboring cell reachable (at distance 1 or sqrt(2)) under the following conditions:
The neighboring cell is not out of bounds
The neighboring cell is unvisited.
The neighboring cell is a free cell which can be checked via the 2-D grid map.
Here is my implementation:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include "Timer.h"

/// CONSTANTS
static const int UNKNOWN_CELL  = 197;
static const int FREE_CELL     = 255;
static const int OCCUPIED_CELL = 0;

/// STRUCTURES for easier management.
struct vertex {
    cv::Point2i id_;
    cv::Point2i from_;

    vertex(cv::Point2i id, cv::Point2i from)
    {
        id_   = id;
        from_ = from;
    }
};

/// To be used for finding an element in std::multimap STL.
struct CompareID
{
    CompareID(cv::Point2i val) : val_(val) {}
    bool operator()(const std::pair<double, vertex> & elem) const {
        return val_ == elem.second.id_;
    }
private:
    cv::Point2i val_;
};

/// Some helper functions for Dijkstra's algorithm.
uint8_t get_cell_at(const cv::Mat & image, int x, int y)
{
    assert(x < image.rows);
    assert(y < image.cols);
    return image.data[x * image.cols + y];
}

/// Some helper functions for Dijkstra's algorithm.
bool checkIfNotOutOfBounds(cv::Point2i current, int rows, int cols)
{
    return (current.x >= 0 && current.y >= 0 &&
            current.x < cols && current.y < rows);
}

/// Brief: Finds the shortest possible path from starting position to the goal position
/// Param gridMap: The stage where the tracing of the shortest possible path will be performed.
/// Param start: The starting position in the gridMap. It is assumed that start cell is a free cell.
/// Param goal: The goal position in the gridMap. It is assumed that the goal cell is a free cell.
/// Param path: Returns the sequence of free cells leading to the goal starting from the starting cell.
bool findPathViaDijkstra(const cv::Mat& gridMap, cv::Point2i start, cv::Point2i goal, std::vector<cv::Point2i>& path)
{
    // Clear the path just in case
    path.clear();

    // Create working and visited set.
    std::multimap<double,vertex> working, visited;

    // Initialize working set. We are going to perform Dijkstra's
    // backwards in order to get the actual path without reversing the path.
    working.insert(std::make_pair(0, vertex(goal, goal)));

    // Conditions for continuing:
    // 1.) Working is empty implies all nodes are visited.
    // 2.) The start is still not found in the visited set.
    // The Dijkstra's algorithm
    while(!working.empty() && std::find_if(visited.begin(), visited.end(), CompareID(start)) == visited.end())
    {
        // Get the top of the STL.
        // It is already given that the top of the multimap has the lowest cost.
        std::pair<double, vertex> currentPair = *working.begin();
        cv::Point2i current = currentPair.second.id_;
        visited.insert(currentPair);
        working.erase(working.begin());

        // Check all arcs
        // Only insert the cells into working under these 3 conditions:
        // 1. The cell is not in the visited set
        // 2. The cell is not out of bounds
        // 3. The cell is free
        for (int x = current.x-1; x <= current.x+1; x++)
            for (int y = current.y-1; y <= current.y+1; y++)
            {
                if (checkIfNotOutOfBounds(cv::Point2i(x, y), gridMap.rows, gridMap.cols) &&
                    get_cell_at(gridMap, x, y) == FREE_CELL &&
                    std::find_if(visited.begin(), visited.end(), CompareID(cv::Point2i(x, y))) == visited.end())
                {
                    vertex newVertex = vertex(cv::Point2i(x,y), current);
                    double cost = currentPair.first + sqrt(2);

                    // Cost is 1
                    if (x == current.x || y == current.y)
                        cost = currentPair.first + 1;

                    std::multimap<double, vertex>::iterator it =
                        std::find_if(working.begin(), working.end(), CompareID(cv::Point2i(x, y)));

                    if (it == working.end())
                        working.insert(std::make_pair(cost, newVertex));
                    else if(cost < (*it).first)
                    {
                        working.erase(it);
                        working.insert(std::make_pair(cost, newVertex));
                    }
                }
            }
    }

    // Now, recover the path.
    // Path is valid!
    if (std::find_if(visited.begin(), visited.end(), CompareID(start)) != visited.end())
    {
        std::pair <double, vertex> currentPair = *std::find_if(visited.begin(), visited.end(), CompareID(start));
        path.push_back(currentPair.second.id_);
        do
        {
            currentPair = *std::find_if(visited.begin(), visited.end(), CompareID(currentPair.second.from_));
            path.push_back(currentPair.second.id_);
        } while(currentPair.second.id_.x != goal.x || currentPair.second.id_.y != goal.y);
        return true;
    }
    // Path is invalid!
    else
        return false;
}
int main()
{
    // cv::Mat image = cv::imread("filteredmap1.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat image = cv::Mat(100,100,CV_8UC1);
    std::vector<cv::Point2i> path;

    for (int i = 0; i < image.rows; i++)
        for(int j = 0; j < image.cols; j++)
        {
            image.data[i*image.cols+j] = FREE_CELL;

            if (j == image.cols/2 && (i > 3 && i < image.rows - 3))
                image.data[i*image.cols+j] = OCCUPIED_CELL;

            // if (image.data[i*image.cols+j] > 215)
            //     image.data[i*image.cols+j] = FREE_CELL;
            // else if(image.data[i*image.cols+j] < 100)
            //     image.data[i*image.cols+j] = OCCUPIED_CELL;
            // else
            //     image.data[i*image.cols+j] = UNKNOWN_CELL;
        }

    // Start top right
    cv::Point2i goal(image.cols-1, 0);
    // Goal bottom left
    cv::Point2i start(0, image.rows-1);

    // Time the algorithm.
    Timer timer;
    timer.start();
    findPathViaDijkstra(image, start, goal, path);
    std::cerr << "Time elapsed: " << timer.getElapsedTimeInMilliSec() << " ms";

    // Add the path in the image for visualization purpose.
    cv::cvtColor(image, image, CV_GRAY2BGRA);
    int cn = image.channels();
    for (int i = 0; i < path.size(); i++)
    {
        image.data[path[i].x*cn*image.cols+path[i].y*cn+0] = 0;
        image.data[path[i].x*cn*image.cols+path[i].y*cn+1] = 255;
        image.data[path[i].x*cn*image.cols+path[i].y*cn+2] = 0;
    }

    cv::imshow("Map with path", image);
    cv::waitKey();
    return 0;
}
For the algorithm implementation, I decided to have two sets, namely the visited set and the working set, each of whose elements contains:
The location of the cell in the 2D grid map.
The accumulated cost.
The cell through which it got its accumulated cost (for path recovery).
And here is the result:
The black pixels represent obstacles, the white pixels represent free space and the green line represents the path computed.
In this implementation, I only search within the current working set for the minimum value and do NOT need to scan through the whole cost matrix (where, initially, the cost of every cell is set to infinity and the starting point to 0). Maintaining a separate container for the working set should, I think, give better performance, because cells whose cost is still infinity are never in the working set; it holds only the cells that have been touched.
I also took advantage of the STL that C++ provides. I decided to use std::multimap since it can store duplicate keys (the cost) and keeps its entries sorted automatically. However, I was forced to use std::find_if() to look up the id (the row, col of the current cell) in the visited set to check whether the current cell is in it, which is linear in complexity. I really think this is the bottleneck of my Dijkstra's implementation.
I am well aware that the A* algorithm is much faster than Dijkstra's, but what I want to ask is: is my implementation of Dijkstra's algorithm optimal? If I implemented A* on top of my current Dijkstra's implementation, which I believe is suboptimal, the A* implementation would also be suboptimal.
What improvements can I make? Which STL container is the most appropriate for this algorithm? In particular, how do I improve the bottleneck?
You're using a std::multimap for 'working' and 'visited'. That's not great.
The first thing you should do is change visited into a per-vertex flag so you can do your find_if in constant time instead of linear time, and also so that operations on the list of visited vertices take constant instead of logarithmic time. You know what all the vertices are and you can map them to small integers trivially, so you can use either a std::vector or a std::bitset.
The second thing you should do is turn working into a priority queue, rather than a balanced binary tree structure, so that operations are a (largish) constant factor faster. std::priority_queue is a barebones binary heap. A higher-radix heap---say quaternary for concreteness---will probably be faster on modern computers due to its reduced depth. Andrew Goldberg suggests some bucket-based data structures; I can dig up references for you if you get to that stage. (They're not too complicated.)
Once you've taken care of these two things, you might look at A* or meet-in-the-middle tricks to speed things up even more.
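A minimal sketch of those two changes (not a drop-in replacement for the code above), assuming the same cv::Mat grid, FREE_CELL constant and 8-connected moves as the question: the visited flags and best-known costs become flat per-cell arrays, and the working set becomes a std::priority_queue with lazy deletion (stale entries are simply skipped when popped).

#include <opencv2/opencv.hpp>
#include <cmath>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

bool dijkstraGrid(const cv::Mat& grid, cv::Point2i start, cv::Point2i goal,
                  std::vector<cv::Point2i>& path)
{
    const int rows = grid.rows, cols = grid.cols;
    auto id = [cols](int x, int y) { return x * cols + y; };   // cell -> flat index

    std::vector<double> dist(rows * cols, std::numeric_limits<double>::infinity());
    std::vector<int>    from(rows * cols, -1);
    std::vector<char>   done(rows * cols, 0);

    typedef std::pair<double, int> Entry;                      // (cost, flat index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

    dist[id(goal.x, goal.y)] = 0.0;                            // search goal -> start
    pq.push(Entry(0.0, id(goal.x, goal.y)));

    while (!pq.empty())
    {
        double d = pq.top().first;
        int cur  = pq.top().second;
        pq.pop();
        if (done[cur]) continue;                               // stale entry, skip
        done[cur] = 1;

        int cx = cur / cols, cy = cur % cols;
        if (cx == start.x && cy == start.y) break;             // reached the start cell

        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
            {
                if (dx == 0 && dy == 0) continue;
                int nx = cx + dx, ny = cy + dy;
                if (nx < 0 || ny < 0 || nx >= rows || ny >= cols) continue;
                if (grid.data[nx * cols + ny] != FREE_CELL) continue;

                double nd = d + ((dx == 0 || dy == 0) ? 1.0 : std::sqrt(2.0));
                int n = id(nx, ny);
                if (nd < dist[n]) { dist[n] = nd; from[n] = cur; pq.push(Entry(nd, n)); }
            }
    }

    int s = id(start.x, start.y);
    if (from[s] == -1 && s != id(goal.x, goal.y)) return false; // start unreachable

    for (int v = s; v != -1; v = from[v])                       // walk back to the goal
        path.push_back(cv::Point2i(v / cols, v % cols));
    return true;
}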
Your performance is several orders of magnitude worse than it could be because you're using graph search algorithms for what looks like geometry. This geometry is much simpler and less general than the problems that graph search algorithms can solve. Also, with a vertex for every pixel your graph is huge even though it contains basically no information.
I heard you asking "how can I make this better without changing what I'm thinking" but nevertheless I'll tell you a completely different and better approach.
It looks like your robot can only go horizontally, vertically or diagonally. Is that for real or just a side effect of you choosing graph search algorithms? I'll assume the latter and let it go in any direction.
The algorithm goes like this:
(0) Represent your obstacles as polygons by listing the corners. Work in real numbers so you can make them as thin as you like.
(1) Try for a straight line between the end points.
(2) Check whether that line goes through an obstacle or not. To do that for any line, show that all corners of any particular obstacle lie on the same side of the line. To do that, translate all points by (-X,-Y) of one end of the line so that that point is at the origin, then rotate until the other point is on the X axis. Now all corners should have the same sign of Y if there's no obstruction. There might be a quicker way just using gradients; a cross-product version of this test is sketched after this list.
(3) If there's an obstruction, propose N two-segment paths going via the N corners of the obstacle.
(4) Recurse for all segments, culling any paths with segments that go out of bounds. That won't be a problem unless you have obstacles that go out of bounds.
(5) When it stops recursing, you should have a list of locally optimised paths from which you can choose the shortest.
(6) If you really want to restrict bearings to multiples of 45 degrees, then you can do this algorithm first and then replace each segment by any 45-only wiggly version that avoids obstacles. We know that such a version exists because you can stay extremely close to the original line by wiggling very often. We also know that all such wiggly paths have the same length.
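A minimal sketch of the side test in step (2) using cross products (Point here is a placeholder struct, and as in the description above the test is against the infinite line through the endpoints, not the segment):

#include <vector>

struct Point { double x, y; };

// Sign of the z-component of (b - a) x (p - a): > 0 means p is left of ab, < 0 means right.
double side(const Point& a, const Point& b, const Point& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if every corner of the obstacle lies on one side of the line through a and b,
// i.e. the line does not cut through the obstacle. Corners exactly on the line count
// as neither side; real code also needs to handle the segment's endpoints.
bool lineMissesObstacle(const Point& a, const Point& b, const std::vector<Point>& obstacle)
{
    bool anyLeft = false, anyRight = false;
    for (const Point& corner : obstacle)
    {
        double s = side(a, b, corner);
        if (s > 0) anyLeft = true;
        if (s < 0) anyRight = true;
    }
    return !(anyLeft && anyRight);
}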

How to draw a polygon in C++ such that the lines do not intersect?

I need to draw a polygon in C++. I put random points in a vector and then connect them with lines, but sometimes those lines intersect and I get something like this.
Is there any formula or something like that so that the lines won't cross?
Here is part of the code:
void draw_picture(Canvas & canvas) {
    PairXY a,b,c,d,e;
    int k;
    vector <PairXY> vertex;
    vertex.push_back(PairXY(drandom(k),drandom(k)));
    vertex.push_back(PairXY(drandom(k),drandom(k)));
    vertex.push_back(PairXY(drandom(k),drandom(k)));
    vertex.push_back(PairXY(drandom(k),drandom(k)));
    vertex.push_back(PairXY(drandom(k),drandom(k)));

    vector <PairXY>::const_iterator iter;
    iter = vertex.begin();
    a=*iter;
    iter = vertex.begin()+1;
    b=*iter;
    iter = vertex.begin()+2;
    c=*iter;
    iter = vertex.begin()+3;
    d=*iter;
    iter = vertex.begin()+4;
    e=*iter;

    Line l1(a,b);
    draw_line(l1,canvas);
    Line l2(b,c);
    draw_line(l2,canvas);
    Line l3(c,d);
    draw_line(l3,canvas);
    Line l4(d,e);
    draw_line(l4,canvas);
    Line l5(e,a);
    draw_line(l5,canvas);
}
Sounds like you want a convex hull.
As far as calculating them goes, you have several options.
I've had good luck with the monotone chain algorithm.
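For reference, a minimal sketch of the monotone chain hull (assuming a simple x/y point struct; adapt it to your PairXY). The hull comes back in counter-clockwise order, so drawing consecutive hull points gives a polygon whose edges don't cross, though any interior points are dropped:

#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Cross product of (a - o) and (b - o): > 0 means a counter-clockwise turn.
double cross(const Pt& o, const Pt& a, const Pt& b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

std::vector<Pt> convexHull(std::vector<Pt> pts)
{
    std::sort(pts.begin(), pts.end(),
              [](const Pt& a, const Pt& b) { return a.x < b.x || (a.x == b.x && a.y < b.y); });
    const int n = static_cast<int>(pts.size());
    if (n < 3) return pts;

    std::vector<Pt> hull(2 * n);
    int k = 0;
    for (int i = 0; i < n; ++i)                      // build the lower hull
    {
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (int i = n - 2, lower = k + 1; i >= 0; --i)  // build the upper hull
    {
        while (k >= lower && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);                              // last point repeats the first
    return hull;
}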
It sounds like what you are probably looking for is a "Simple" (as opposed to "Complex") Polygon:
http://en.wikipedia.org/wiki/Simple_polygon
There's not necessarily a unique solution to that:
Sort point list into polygon
This is why the ordering of points or path segments typically matters in polygon drawing engines. If you are so inclined, however, you can find at least one non-complex polygon for a set of points:
http://www.computational-geometry.org/mailing-lists/compgeom-announce/2003-March/000727.html
http://www.computational-geometry.org/mailing-lists/compgeom-announce/2003-March/000732.html
Others have pointed out your code is repetitive as written. You also don't define k in the excerpt you shared, and it's better to use a plural term for a vector of objects ("vertices") rather than one suggesting it is singular ("vertex"). Here's one fairly simple-to-understand set of changes that should generalize to any number of vertices:
void draw_picture(Canvas & canvas, int k, int numVertices = 5) {
    vector<PairXY> vertices;
    for (int index = 0; index < numVertices; index++) {
        vertices.push_back(PairXY(drandom(k),drandom(k)));
    }

    vector<PairXY>::const_iterator iter = vertices.begin();
    while (iter != vertices.end()) {
        PairXY startPoint = *iter;
        iter++;
        if (iter == vertices.end()) {
            Line edgeLine (startPoint, vertices[0]);
            draw_line(edgeLine, canvas);
        } else {
            Line edgeLine (startPoint, *iter);
            draw_line(edgeLine, canvas);
        }
    }
}
There are a lot of ways to manage iterations in C++, although many of them are more verbose than their counterparts in other languages. Recently a nice range-based for loop was added in C++11, but your build environment may not support it yet.
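For example, with C++11 the edge loop above could be written with a range-based for, keeping a running index only for the wrap-around edge (names reused from the snippet above):

size_t index = 0;
for (const PairXY& startPoint : vertices) {
    const PairXY& endPoint = vertices[(index + 1) % vertices.size()];
    Line edgeLine(startPoint, endPoint);
    draw_line(edgeLine, canvas);
    ++index;
}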
Sort the array before drawing it:
Find the left-most point, then go CCW from there.
I.e. take the left-most point whose y is less than the first point's y until none are found, then the right-most point until none are found.

Polygon to Polygon Collision Detection Issue

I have been having a few issues implementing my narrow-phase collision detection. Broad phase is working perfectly.
I have a group of polygons that each have an std::vector of points for their vertices, in clockwise order. Every cycle, I check whether they're colliding.
I have borrowed the following point-in-polygon test from here and adapted it to my Point data structure:
int InsidePolygon(std::vector <Point> poly, Point p) {
    int i, j, c = 0;
    int nvert = poly.size();
    for (i = 0, j = nvert-1; i < nvert; j = i++) {
        if ( ((poly[i].y > p.y) != (poly[j].y > p.y)) &&
             (p.x < (poly[j].x-poly[i].x) * (p.y-poly[i].y) / (poly[j].y-poly[i].y) + poly[i].x) )
            c = !c;
    }
    return c;
}
I have extended that with a PolygonPolygon function, which checks all the points of one polygon against the other and then reverses it to check the other way around.
int PolygonPolygon(std::vector <Point> polygon1, std::vector <Point> polygon2) {
    for(int i=0; i<polygon1.size(); i++) {
        if(InsidePolygon(polygon2, polygon1[i])) {
            return 1;
        }
    }
    for(int j=0; j<polygon2.size(); j++) {
        if(InsidePolygon(polygon1, polygon2[j])) {
            return 1;
        }
    }
    return 0;
}
The strange thing is that my PolygonPolygon function is always returning 1. So I have a few questions:
Have I screwed up the logic somewhere? Should I write my PolygonPolygon function differently?
Are there any better methods for a polygon-polygon test? The polygons themselves are not guaranteed to be convex, which is why I went for the point-in-polygon method. I also hope to eventually determine which point is colliding, if I can get past this bit.
Should I be presenting my points in a particular order for the InsidePolygon test?
You may want to consider trying to draw a line between polygons as an alternative collision detection method.
[edit] Oops, I missed the fact that you have non-convex polys in there too. Maybe "Determining if a point lies on the interior of a polygon" would be better? Either that or you could break your non-convex polygons up into convex polygons first.
Also, there's at least one similar question here on StackOverflow.
Thanks for your help, guys! But I've managed to sort it out on my own.
The importance of translating your vertices to world space and rotating them should not be overlooked, especially if you're colliding them.
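A minimal sketch of that step, assuming each polygon stores local-space vertices plus a world position and a rotation angle (the names are placeholders, and Point mirrors the x/y structure used above):

#include <cmath>
#include <vector>

struct Point { float x, y; };

std::vector<Point> toWorldSpace(const std::vector<Point>& localVerts,
                                Point position, float angleRadians)
{
    std::vector<Point> world;
    world.reserve(localVerts.size());
    float c = std::cos(angleRadians), s = std::sin(angleRadians);
    for (const Point& v : localVerts)
    {
        // Rotate about the local origin, then translate to the world position.
        world.push_back({ v.x * c - v.y * s + position.x,
                          v.x * s + v.y * c + position.y });
    }
    return world;
}

The transformed lists are what then get passed to PolygonPolygon, rather than the raw local-space vertices.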