Box2D - How can I know when 2 Fixtures are touching? - c++

I'm just curious, in the case that I have a body that has 2 or more fixtures in it that are not "joined together", how can I determine this definitively in code? Here is an example of what I mean:
I marked the vertices for each distinctive fixture just to make it completely clear that these are separate shapes which do not share vertices with each other. They are however combined into a single body. As you can see, two fixtures are within very close proximity to each other or "touching", and one is set apart by itself. I'm wondering how I can query the fixtures of a body in Box2D to be able to find this out at runtime.
In order to put this all into perspective, it's for creating and processing destructible bodies. The image is a rough representation of what will happen after a hole has been punched through a set of fixtures in a body. I need to query to see which fixtures touch each other so that I can split the one body into two, as naturally they should be at that point.

If you already have the new fixtures prepared you could use the b2TestOverlap function to check if they are overlapping. The function is found in b2Collision.h:
/// Determine if two generic shapes overlap.
bool b2TestOverlap( const b2Shape* shapeA, int32 indexA,
                    const b2Shape* shapeB, int32 indexB,
                    const b2Transform& xfA, const b2Transform& xfB );
The shapes can be found by fixture->GetShape(). The index parameters are only used to specify which segment of a chain shape to test against, so for any other shape type you can just pass in 0. As the fixtures are on the same body, the last two parameters can both be body->GetTransform().
I have never used this myself so I couldn't tell you for sure, but noting that the name of this function is 'overlap', it may not return a positive for the case of your A and B fixtures in the diagram since technically I don't think they are overlapping. If this is the case, I think you are on your own because Box2D doesn't offer any 'touching' tests.
btw the contact list mentioned by Constantinius is only valid for fixtures on different bodies colliding - fixtures on the same body don't collide.

In the Box2D manual under "9.3 Accessing Contacts" it is written that you can access all contacts of a body using the function GetContactList:
for (b2ContactEdge* ce = myBody->GetContactList(); ce; ce = ce->next)
{
    b2Contact* c = ce->contact;
    ...
}
In the b2ContactEdge and the b2Contact you can find the actual shapes and a flag indicating whether they are actually touching (b2Contact::IsTouching).

I would suggest doing something like this:
To get an accurate result you really have to test for intersections. In a first pass you could compare only the centers of gravity (COGs) of the fixtures, and then compute the intersections of all edges only where the COGs lie within a small distance of each other.
Another approach might be to rasterize the scene. You could then paint the rasterized pixels using something like a stencil buffer.
The stencil would be an array of size w*h, where w and h are the width and height of your rasterized image, holding one integer per pixel.
Initialize the stencil buffer with 0 for each pixel.
For each rasterized fixture that you 'paint', increment the buffer value at all of its pixel positions.
Finally, the stencil buffer contains 0 at positions not occupied by any fixture, 1 where exactly one fixture covers the position, and any greater value means an intersection.
The complexity of this approach mainly depends on the chosen rasterization resolution.

Another way to do this (as an alternative to using b2TestOverlap as suggested previously by someone else), would be to use the b2CollidePolygons function (from b2Collision.h). You would use this on the polygon shapes of the fixtures and then see whether the b2Manifold's pointCount field is greater than zero.
Here's some example pre-C++11 code for you (which I have not compiled and tested so hopefully isn't too foobar'ed):
bool isTouching(const b2PolygonShape& polygonA, const b2Transform& xfA,
                const b2PolygonShape& polygonB, const b2Transform& xfB)
{
    b2Manifold manifold;
    b2CollidePolygons(&manifold, &polygonA, xfA, &polygonB, xfB);
    return manifold.pointCount > 0;
}
bool isPolygonFixturesTouching(const b2Fixture& fixtureA, const b2Fixture& fixtureB)
{
    const b2Shape* shapeA = fixtureA.GetShape();
    const b2Shape* shapeB = fixtureB.GetShape();
    if (shapeA->GetType() == b2Shape::e_polygon && shapeB->GetType() == b2Shape::e_polygon)
    {
        const b2Body* bodyA = fixtureA.GetBody();
        const b2Transform& xfA = bodyA->GetTransform();
        const b2Body* bodyB = fixtureB.GetBody();
        const b2Transform& xfB = bodyB->GetTransform();
        return isTouching(*(static_cast<const b2PolygonShape*>(shapeA)), xfA,
                          *(static_cast<const b2PolygonShape*>(shapeB)), xfB);
    }
    return false;
}
int main()
{
    ...
    if (isPolygonFixturesTouching(fixtureA, fixtureB))
    {
        std::cout << "hooray!" << std::endl;
    }
}
Note: Some crude timing tests I've just tried comparing b2TestOverlap vs. b2CollidePolygons show b2TestOverlap to be 4 or 5 times faster than b2CollidePolygons. Given that the former only looks for any overlap while the latter calculates a collision manifold, the speed difference isn't surprising. Anyway, the moral of this story seems to be that b2TestOverlap is likely the preferable function if you just want to know whether 2 fixtures are touching, while b2CollidePolygons is the more useful one if you also want to know additional things, like how the 2 fixtures are touching.

Related

Problem moving vertex on Arrangement using Arrangement_accessor

I'm trying to move some vertex in an arrangement using an Arrangement_accessor. What I did so far is:
ArrangementAccessor acc(*(self.arr)); // I'm using Objective-C++
Point_2 newPoint = toPoint_2(mouse);
// Change the vertex position
acc.modify_vertex_ex(v, newPoint);
// adjust all incident curves
Arrangement_2::Halfedge_around_vertex_circulator curr, first;
curr = first = v->incident_halfedges();
do {
    Point_2 sourcePoint = curr->source()->point();
    Segment_2 newSegment = Segment_2(sourcePoint, newPoint);
    acc.modify_edge_ex(curr, newSegment);
} while (++curr != first);
This is actually partially working (I ensure no overlapping is produced). But as soon as I change the lexicographical order of some halfedges' endpoints, Arrangement_2::is_valid() returns false.
So my questions are:
Why does changing xy-order destroy the arrangement? I know this is also documented, but I don't really get why this is important.
And: Is there any way to fix this in my implementation? I already tried the easier remove vertex / insert vertex method, but that is not really what I want. (I want to keep the faces and their reference / index mapping alive)
Would be really glad if you could help me understand this.
Yours, Salabasti
The direction of every halfedge is cached to expedite frequent operations. The concept ArrangementDcelHalfedge (https://doc.cgal.org/latest/Arrangement_on_surface_2/classArrangementDcelHalfedge.html#a2bcd73c9eb8383be066161612de98033) requires supporting the methods direction() and set_direction(). So, either you also surgically fix the direction of the affected halfedges using the method set_direction() or you remove these halfedges and then re-insert them. You can retain low complexity by using the specialized insertion functions; see, e.g.,
https://doc.cgal.org/latest/Arrangement_on_surface_2/classCGAL_1_1Arrangement__2.html#a88fed7cf475e474d187e65beb894dde2, and
https://doc.cgal.org/latest/Arrangement_on_surface_2/classCGAL_1_1Arrangement__2.html#a7b90245d8a42ed90ea13b9d911adac73

Does Fixed_alpha_shape_3() destroy or modify original triangulation?

Does Fixed_alpha_shape_3() destroy or modify the underlying triangulation? The doc says "this operation destroys the triangulation", but does it replace it with the alpha shape triangulation? The source code suggests it modifies the underlying Delaunay triangulation, since it removes vertices. Further, the fact that the triangulation object is passed by reference also makes me think the alpha shape modifies the underlying triangulation. If that is true, it means we can continue using the original triangulation object naturally throughout the remainder of our code. If the triangulation is not modified, but instead truly destroyed and no longer exists, can we simply use the Fixed_alpha_shape_3 object as a triangulation object, since it inherits from the triangulation class?
The ultimate goal is to make sure cell neighbors are updated in a new triangulation object after the alpha shape removes cells on the boundary. Most importantly, the new triangulation object needs to report the correct is_infinite status for cell->neighbor(i) at the boundaries.
For example, construction of the original triangulation follows as usual:
RTriangulation T(points.begin(),points.end());
followed by the creation of the Fixed_alpha_shape_3:
Fixed_alpha_shape_3 as(T);
I know we can access the alphaShape INTERIOR and EXTERIOR cells with various methods including as.get_alpha_shape_cells(), but if Fixed_alpha_shape_3 is simply modifying the original triangulation, T, then we should be able to continue using T as such:
const RTriangulation::Finite_cells_iterator cellEnd = T.finite_cells_end();
for (RTriangulation::Finite_cells_iterator cell = T.finite_cells_begin(); cell != cellEnd; cell++) {
    for (int i = 0; i < 4; i++) {
        if (T.is_infinite(cell->neighbor(i))) cell->info().p = 1;
    }
}
Or can I simply start using the alpha shape object instead:
const RTriangulation::Finite_cells_iterator cellEnd = as.finite_cells_end();
for (RTriangulation::Finite_cells_iterator cell = as.finite_cells_begin(); cell != cellEnd; cell++) {
    for (int i = 0; i < 4; i++) {
        if (as.is_infinite(cell->neighbor(i))) cell->info().p = 1;
    }
}
The least ideal solution would be to create new lists of cells with as.get_alpha_shape_cells(), because that would mean a major rehaul of our code with many logical splits. I suspect that is not necessary, which is why I am clarifying the action of Fixed_alpha_shape_3().
Thank you for your assistance.
The word "destroys" is an unlucky choice. As you noticed, the class Fixed_alpha_shape_3 is derived from the triangulation class. The constructor that takes a reference to a triangulation dt as input swaps its own internal triangulation with dt. So afterwards, in dt you find a default-constructed, that is empty, triangulation.
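The effect of that swap can be demonstrated with a plain std::vector standing in for the triangulation (an analogy, not CGAL code; TakesOwnership is a made-up name):

```cpp
#include <vector>

// Mimics a constructor that takes its input by reference and swaps it into
// its own storage, the way Fixed_alpha_shape_3 takes over the triangulation
// it is given: cheap to construct, but the source object is emptied.
struct TakesOwnership {
    std::vector<int> data;
    explicit TakesOwnership(std::vector<int>& input) {
        data.swap(input); // input is left default-constructed, i.e. empty
    }
};
```

So after `Fixed_alpha_shape_3 as(T);`, you should iterate over `as` (which is-a triangulation) rather than over the now-empty `T`.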

How to properly manage a vector of void pointers

First, some background:
I'm working on a project which requires me to simulate interactions between objects that can be thought of as polygons (usually triangles or quadrilaterals, almost certainly fewer than seven sides), each side of which is composed of the radius of two circles with a variable (and possibly zero) number of 'rivers' of various constant widths passing between them, and out of the polygon through some other side. As these rivers and circles and their widths (and the positions of the circles) are specified at runtime, one of these polygons with N sides and M rivers running through it can be completely described by an array of N+2M pointers, each referring to the relevant rivers/circles, starting from an arbitrary corner of the polygon and passing around (in principle, since rivers can't overlap, they should be specifiable with less data, but in practice I'm not sure how to implement that).
I was originally programming this in Python, but quickly found that for more complex arrangements performance was unacceptably slow. In porting this over to C++ (chosen because of its portability and compatibility with SDL, which I'm using to render the result once optimization is complete) I am at somewhat of a loss as to how to deal with the polygon structure.
The obvious thing to do is to make a class for them, but as C++ lacks even runtime-sized arrays or multi-type arrays, the only way to do this would be with a ludicrously cumbersome set of vectors describing the list of circles, rivers, and their relative placement, or else an even more cumbersome 'edge' class of some kind. Rather than this, it seems like the better option is to use a much simpler, though still annoying, vector of void pointers, each pointing to the rivers/circles as described above.
Now, the question:
If I am correct, the proper way to handle the relevant memory allocations here with the minimum amount of confusion (not saying much...) is something like this:
int doStuffWithPolygons(){
    std::vector<std::vector<void *>> polygons;
    while(/*some circles aren't assigned a polygon*/){
        std::vector<void *> polygon;
        void *start = &/*next circle that has not yet been assigned a polygon*/;
        void *lastcircle = start;
        void *nextcircle;
        nextcircle = &/*next circle to put into the polygon*/;
        while(nextcircle != start){
            polygon.push_back(lastcircle);
            std::vector<River *> rivers = /*list of rivers between last circle and next circle*/;
            for(unsigned i = 0; i < rivers.size(); i++){
                polygon.push_back(rivers[i]);
            }
            lastcircle = nextcircle;
            nextcircle = &/*next circle to put into the polygon*/;
        }
        polygons.push_back(polygon);
    }
    int score = 0;
    //do whatever you're going to do to evaluate the polygons here
    return score;
}
int main(){
    int bestscore = 0;
    std::vector<int> bestarrangement; //contains position of each circle
    std::vector<int> currentarrangement = /*whatever arbitrary starting arrangement is appropriate*/;
    while(/*not done evaluating polygon configurations*/){
        //fiddle with current arrangement a bit
        int currentscore = doStuffWithPolygons();
        if(currentscore > bestscore){
            bestscore = currentscore;
            bestarrangement = currentarrangement;
        }
    }
    //somehow report what the best arrangement is
    return 0;
}
If I properly understand how this stuff is handled, I shouldn't need any delete or .clear() calls because everything goes out of scope after the function call. Am I correct about this? Also, is there any part of the above that is needlessly complex, or else is insufficiently complex? Am I right in thinking that this is as simple as C++ will let me make it, or is there some way to avoid some of the roundabout construction?
And if your response is going to be something like 'don't use void pointers' or 'just make a polygon class', unless you can explain how it will make the problem simpler, save yourself the trouble. I am the only one who will ever see this code, so I don't care about adhering to best practices. If I forget how/why I did something and it causes me problems later, that's my own fault for insufficiently documenting it, not a reason to have written it differently.
edit
Since at least one person asked, here's my original python, handling the polygon creation/evaluation part of the process:
#lots of setup stuff, such as the Circle and River classes
def evaluateArrangement(circles, rivers, tree, arrangement):
    #circles, rivers contain all the circles, rivers to be placed. tree is a class
    #describing which rivers go between which circles, unrelated to the problem at hand.
    #arrangement contains the (x,y) position of the circles in the current arrangement.
    polygons = []
    unassignedCircles = range(len(circles))
    while unassignedCircles:
        polygon = []
        start = unassignedCircles[0]
        lastcircle = start
        lastlastcircle = start
        nextcircle = getNearest(start, arrangement)
        unassignedCircles.pop(start)
        unassignedCircles.pop(nextcircle)
        while nextcircle != start:
            polygon += [lastcircle]
            polygon += getRiversBetween(tree, lastcircle, nextcircle)
            lastlastcircle = lastcircle
            lastcircle = nextcircle
            nextcircle = getNearest(lastcircle, arrangement, lastlastcircle) #the last argument guarantees that the new nextcircle is not the same as the last lastcircle, which it otherwise would have been guaranteed to be.
            unassignedCircles.pop(nextcircle)
        polygons += [polygon]
    return EvaluatePolygons(polygons, circles, rivers) #defined outside.
void as a template argument must be lower case. Other than that it should work, but I recommend using a base class instead. With smart pointers you can let the system handle all the memory management.
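A sketch of what that base-class-plus-smart-pointer version might look like (C++11; Feature, Circle and River are illustrative names, not taken from the question's actual code):

```cpp
#include <memory>
#include <vector>

// Common base for anything that can appear on a polygon's boundary.
struct Feature {
    virtual ~Feature() {} // virtual destructor so deletion via base pointer works
};

struct Circle : Feature {
    double x, y, radius;
    Circle(double x, double y, double r) : x(x), y(y), radius(r) {}
};

struct River : Feature {
    double width;
    explicit River(double w) : width(w) {}
};

// A polygon is just the ordered boundary features. unique_ptr frees each
// feature automatically when the vector goes out of scope -- no delete calls
// needed, answering the "do I need to clean up?" part of the question.
typedef std::vector<std::unique_ptr<Feature>> Polygon;
```

Compared to a vector of void pointers, this keeps type information: you can walk the boundary and recover whether each entry is a circle or a river with dynamic_cast, instead of having to remember the layout out-of-band.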

C++ Collision using Obj Models

I'm experimenting with OpenGL 3.2+ and have starting loading Obj files/models in 3D and trying to interact with them.
(following tutorials from sources like this site)
I was wondering what the easiest way (if it's possible) to set up collision detection between two existing(loaded) Obj objects/Models without using third party physics engines etc?
The easiest possible algorithm that meets your criteria detects collisions between spheres that enclose your meshes. Here you can see an implementation example.
The simplest collision model is to use bounding boxes. The principle is simple: you surround your object with a box defined by two points, minimum and maximum. You then use these points to determine whether two boxes intersect.
In my engine the structure of bounding box and collision-detection method are set as this:
typedef struct BoundingBox
{
    Vector3 min; //Contains lowest corner of the box
    Vector3 max; //Contains highest corner of the box
} AABB;

//True if collision is detected, false otherwise
//(the Vector3 comparisons are component-wise: they must hold on every axis)
bool detectCollision( BoundingBox a, BoundingBox b )
{
    return (a.min <= b.max && b.min <= a.max);
}
Another simple method is to use spheres. This method is useful for objects that are of similar size in all dimensions, but it creates lots of false collisions if they are not. In this method, you surround your object with a sphere of radius radius and center position, and when it comes to collision you simply check whether the distance between the centers is smaller than the sum of the radii; in that case the two spheres intersect.
Again, code snippet from my engine:
struct Sphere
{
    Vector3 position; //Center of the sphere
    float radius;     //Radius of the sphere
};

bool inf::physics::detectCollision( Sphere a, Sphere b )
{
    Vector3 tmp = a.position - b.position; //Vector between centers
    return (Dot(tmp, tmp) <= pow((a.radius + b.radius), 2));
}
In the code above, Dot() computes the dot product of two vectors; dotting a vector with itself gives (by definition) the squared magnitude of the vector. Notice that I never take a square root to get the actual distance; I compare the squares instead to avoid the extra computation.
You should also be aware that neither of these methods is perfect and will give you false collision detections (unless the objects are perfect boxes or spheres) from time to time, but that is the trade-off between simple implementation and computational complexity. Nevertheless, it is a good way to start detecting collisions.
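For completeness, here is a self-contained version of both tests with a minimal Vector3, where the per-axis meaning of the AABB comparison is spelled out explicitly (the snippets above rely on the engine's overloaded Vector3 operators, which are not shown):

```cpp
struct Vector3 { float x, y, z; };

struct AABB {
    Vector3 min; // lowest corner of the box
    Vector3 max; // highest corner of the box
};

// Boxes intersect iff their intervals overlap on every axis; this is what
// a.min <= b.max && b.min <= a.max means once expanded component-wise.
bool aabbCollide(const AABB& a, const AABB& b) {
    return a.min.x <= b.max.x && b.min.x <= a.max.x &&
           a.min.y <= b.max.y && b.min.y <= a.max.y &&
           a.min.z <= b.max.z && b.min.z <= a.max.z;
}

struct Sphere {
    Vector3 position; // center of the sphere
    float radius;     // radius of the sphere
};

// Compare squared distance against the squared radius sum: no sqrt needed.
bool sphereCollide(const Sphere& a, const Sphere& b) {
    float dx = a.position.x - b.position.x;
    float dy = a.position.y - b.position.y;
    float dz = a.position.z - b.position.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```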

pass a pointer as a memory address and make it permanent

I've just begun to approach C++, so maybe this is a simple problem, or maybe it is a structural problem and I have to change my design.
I have 3 facts and 1 problem.
fact 1:
I have a Gesture class with a vector of Point inside:
vector<Point> Gesture::getPoints();
Gesture instances receive the vector in the constructor, so I think it can be a normal vector (no pointer). Vectors are not shared between gestures, nor does a gesture change its own points (aside from normalization).
fact 2:
The Gesture class has a static method that normalizes all points into [0:w]. normalize takes a memory address and normalizes in place, which I think is a nice thing. This normalization method is used by widgets to visualize the path, normalizing points between 0 and the width of the widget:
static void Gesture::normalize(vector<Point> &pts, int w);
fact 3:
I have a widget that visualizes points:
void MyWidget::setGestures(vector<Gesture *> gs)
Because the gestures are produced dynamically by another widget, I thought it would be handy to work with a vector of pointers and make some new Gesture calls.
problem:
I have several widgets that visualize gestures, each one with a different width (== a different normalization).
The first time I use:
Gesture::rescale(this->w, this->points);
Gesture * g = new Gesture(getPoints(), centroids);
and everything is OK.
The second time I have:
vector<Gesture* > gs = foo();
int num_gesture = gs.size();
for (int i = 0; i < num_gesture; ++i) {
    vector<Point> pts = gs.at(i)->getPoints();
    Gesture::rescale( widget->getWidth(), pts );
}
widget->setGestures(gs);
and here there is a problem, because this widget is drawing non-normalized points.
I have tried some craziness with pointers, but even when the program does not crash, it is still not normalized, and I get errors like: pointer to a temporary.
I don't know what to think now.
Although your question is a little confusing, the problem seems to be that Gesture::getPoints() returns a vector and not a reference to a vector, so it actually returns a copy of the internal vector, and changing that copy does not modify the gesture object itself.