How to get back b2PolygonShape from fixture? [BOX2d] - cocos2d-iphone

Hi
In Box2D I have a fixture, and I know it's a polygon because:
b2Shape *sh = fixture->GetShape();
NSLog(@"Type %d", sh->GetType());
// this returns 1, which means it's a polygon
Now, how do I get the vertices so I can tell exactly what shape it is (i.e. rectangle, square, etc.)?

If you know it's a polygon, cast it to a b2PolygonShape and call GetVertices():
if (sh->GetType() == 1) // 1 is the polygon shape type
{
    b2PolygonShape* poly = (b2PolygonShape*)sh;
    b2Vec2* verts = poly->GetVertices();
}
Docs: http://www.linuxuser.at/elements/doc/box2d/classb2_polygon_shape.htm#0e8a0b73c98b3e57f26772d14f252e8b
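The pattern here is a type tag plus a downcast. As a minimal self-contained sketch (plain C++, no Box2D dependency -- these structs are hypothetical stand-ins for b2Shape / b2PolygonShape, purely to show the idea):

```cpp
#include <cassert>

// Hypothetical stand-ins mirroring Box2D's shape hierarchy; the real
// classes live in Box2D itself.
struct Shape {
    enum Type { e_circle = 0, e_polygon = 1 };
    virtual ~Shape() {}
    virtual Type GetType() const = 0;
};

struct Vec2 { float x, y; };

struct PolygonShape : Shape {
    Vec2 verts[8];
    int count;
    Type GetType() const override { return e_polygon; }
};

// Check the type tag first, then cast to the concrete type to reach the
// vertices (with real Box2D you would then read GetVertices()).
int CountVertices(const Shape* sh) {
    if (sh->GetType() == Shape::e_polygon) {
        const PolygonShape* poly = static_cast<const PolygonShape*>(sh);
        return poly->count;
    }
    return 0;  // not a polygon: no vertices to report
}
```

The cast is safe only because the type tag was checked first; casting a circle shape this way would be undefined behaviour.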

Related

Object orientation within an opengl scene

I'm having difficulty getting the right orientation from my objects within a scene. The objects are defined in standard Cartesian coordinates in the same units as I define the scene.
I then define my scenes matrix with the following code:
void SVIS_SetLookAt (double eyePos[3], double center[3], double up[3])
{
// Determine the new n
double vN[3] = {eyePos[0] - center[0], eyePos[1] - center[1], eyePos[2] - center[2]};
// Don't I need to normalize the above?
// Determine the new up by crossing with the Up vector.
double vU[3];
MATH_crossproduct(up, vN, vU);
MATH_NormalizeVector(vN);
MATH_NormalizeVector(vU);
// Determine V by crossing n and u...
double vV[3];
MATH_crossproduct(vN, vU, vV);
MATH_NormalizeVector(vV);
// Create the model view matrix.
double modelView[16] = {
vU[0], vV[0], vN[0], 0,
vU[1], vV[1], vN[1], 0,
vU[2], vV[2], vN[2], 0,
// -MATH_Dotd(eyePos, vU), -MATH_Dotd(eyePos, vV), -MATH_Dotd(eyePos, vN), 1
0, 0, 0, 1
};
// Load the modelview matrix. The modelview matrix should already be active.
glLoadMatrixd(modelView);
}
I am attempting to display n-1 objects such that each object is facing the object in front of it, excluding the first object which is not displayed. So for each object, I define the up, right, and forward vectors as such:
lal_to_ecef(curcen, pHits->u); // up vector is our position normalized
MATH_subtractVec3D((SVN_VEC3D*) prevcenter, (SVN_VEC3D*) curcen, (SVN_VEC3D*) pHits->f);
MATH_NormalizeVector(pHits->u);
MATH_NormalizeVector(pHits->f);
MATH_crossproduct(pHits->u, pHits->f, pHits->r);
MATH_NormalizeVector(pHits->r);
MATH_crossproduct(pHits->f, pHits->r, pHits->u);
MATH_NormalizeVector(pHits->u);
I then go on to display each object with the following code:
double p[3] = {pHits->cen[0] - position[0],
pHits->cen[1] - position[1],
pHits->cen[2] - position[2]};
glPushMatrix();
SVIS_LookAt(pHits->u, pHits->f, pHits->r, p);
glCallList(G_svisHitsListId);
glPopMatrix();
void SVIS_LookAt (double u[3], double f[3], double l[3], double pos[3])
{
double model[16] = {
l[0], u[0], f[0], 0,
l[1], u[1], f[1], 0,
l[2], u[2], f[2], 0,
pos[0], pos[1], pos[2], 1
};
glMultMatrixd(model);
}
I would expect this to work for any object: whatever was defined in the Cartesian coordinate system would appear at the given point, oriented so that it points at the preceding object, with the object's (0,1,0) and (0,-1,0) directions aligned vertically on the screen. What I am seeing instead (using a simple rectangle as the test object) is that the objects are consistently rotated about the forward axis.
Can anyone point out what I am doing wrong here?
[Edit]
I've displayed an axis grid without translating by taking the three vectors multiplying a scalar and adding/subtracting it to the centre point. Doing this, the axis align up as I would expect. Overlaying the object described above shows the object to not be aligned the same way. Is there a relationship between the object space forward, right, and up vectors and the desired world-space vectors that I am missing? Am I simply completely off the mark with regards to my rotation translation matrix?
You are conflicted here; part of that matrix is transposed and part of
it is correct... you have the 4th column correct but your top-left 3x3
matrix is transposed. Each column of the 3x3 matrix (row in that array
of 16 double) is supposed to be one of your axes. It should be:
l[0],l[1],l[2],0, u[0],u[1],u[2],0, f[0],f[1],f[2],0,
pos[0],pos[1],pos[2],1. – Andon M. Coleman
This was dead on. Thanks Andon.
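To make the fix concrete, here is a small self-contained sketch (plain C++, no OpenGL; the helper names are my own) that builds the corrected matrix in OpenGL's column-major array order and multiplies it the way OpenGL would:

```cpp
#include <cassert>
#include <cmath>

// Build the corrected model matrix in OpenGL's column-major array order:
// each axis (right l, up u, forward f) fills one COLUMN, i.e. four
// consecutive array entries, and the position goes in entries 12..14.
void BuildModelMatrix(const double l[3], const double u[3],
                      const double f[3], const double pos[3],
                      double m[16]) {
    m[0]  = l[0];   m[1]  = l[1];   m[2]  = l[2];   m[3]  = 0.0;
    m[4]  = u[0];   m[5]  = u[1];   m[6]  = u[2];   m[7]  = 0.0;
    m[8]  = f[0];   m[9]  = f[1];   m[10] = f[2];   m[11] = 0.0;
    m[12] = pos[0]; m[13] = pos[1]; m[14] = pos[2]; m[15] = 1.0;
}

// Multiply the column-major matrix by a 4-component vector, as OpenGL does.
void MulMatVec(const double m[16], const double v[4], double out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[r]*v[0] + m[4+r]*v[1] + m[8+r]*v[2] + m[12+r]*v[3];
}
```

With the identity axes, transforming the local origin (w = 1) should land exactly on `pos`; with the transposed layout from the question it would not.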
Building an entirely new modelview matrix from scratch using a 'lookat' implementation for each object is, frankly, crazy (or will at least drive you crazy). What you're doing is tantamount to trying to build a scene by having a set of objects which are always in a fixed location, and constantly repositioning a camera to catch them from different angles.
A lookat style function should be called once to set up the camera (the view portion of the modelview matrix) position, and subsequently you should be using the matrix stack to position objects within the scene (the model portion of the modelview matrix). That's why it's called the modelview matrix, and not just the view matrix.
In code terms, it would look something like this:
// Set up camera position
SVIS_LookAt(....);
for (int i = 0; i < n; ++i) {
glPushMatrix();
// move the object to its location relative to the world coordinate system
glTranslate(...);
// rotate the object to have the correct orientation
glRotate(...);
// render the geometry
glCallList(...);
glPopMatrix();
}
Of course this assumes that everything has its position defined in world coordinates. If you have a hierarchy of objects, then you would need to descend into an object's children between the glCallList and glPopMatrix in order to have their locations applied relative to their parent object.
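As a sketch of why the push/pop pattern works, here is a toy matrix stack in plain C++ that tracks only translation (real OpenGL stacks full 4x4 matrices; the class and names are mine, for illustration):

```cpp
#include <cassert>
#include <array>
#include <stack>

// A toy stand-in for OpenGL's matrix stack, tracking only translation,
// to show how glPushMatrix / glTranslate / glPopMatrix isolate each
// object's placement from the next one's.
using Vec3 = std::array<double, 3>;

struct MatrixStack {
    std::stack<Vec3> saved;
    Vec3 current{{0.0, 0.0, 0.0}};

    void push() { saved.push(current); }                 // like glPushMatrix
    void pop()  { current = saved.top(); saved.pop(); }  // like glPopMatrix
    void translate(double x, double y, double z) {       // like glTranslated
        current = {{current[0] + x, current[1] + y, current[2] + z}};
    }
};
```

Setting the camera transform once, then push/translate/draw/pop per object, means each object's translate composes with the camera transform, and the pop restores the camera-only state for the next object.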

Changing the polygon vertices in a dynamic polygon in cocos2d

I am new to cocos2d. First I learned how to make a circle, then a square, and now I have learned how to create a polygon with however many vertices I pick, using a b2PolygonShape. Here is my code for that:
-(void) createDynamicPoly:(CGPoint)p;
{
b2BodyDef bodyDefPoly;
bodyDefPoly.type = b2_dynamicBody;
bodyDefPoly.position.Set(p.x/PTM_RATIO, p.y/PTM_RATIO);
b2Body *polyBody = world->CreateBody(&bodyDefPoly);
int count = 8;
b2Vec2 vertices[8];
vertices[0].Set(10.0f/PTM_RATIO,0.0/PTM_RATIO);
vertices[1].Set(20.0f/PTM_RATIO,0.0f/PTM_RATIO);
vertices[2].Set(30.0f/PTM_RATIO,10.0f/PTM_RATIO);
vertices[3].Set(30.0f/PTM_RATIO,20.0f/PTM_RATIO);
vertices[4].Set(20.0f/PTM_RATIO,30.0f/PTM_RATIO);
vertices[5].Set(10.0f/PTM_RATIO,30.0f/PTM_RATIO);
vertices[6].Set(00.0f/PTM_RATIO,20.0f/PTM_RATIO);
vertices[7].Set(0.0f/PTM_RATIO,10.0f/PTM_RATIO);
b2PolygonShape polygon;
polygon.Set(vertices, count);
b2FixtureDef fixtureDefPoly;
fixtureDefPoly.shape = &polygon;
fixtureDefPoly.density = 1.0f;
fixtureDefPoly.friction = 0.3f;
polyBody->CreateFixture(&fixtureDefPoly);
}
My question is: how can I actively change the vertices of this polygon so the shape changes on screen, without drawing a new shape? My overall goal is to create a free-flowing blob.
Thank you
Modify your last statement to keep the pointer to the resulting b2Fixture object that CreateFixture returns. This can be stored as a class variable (i.e. b2Fixture* fixture in your class interface).
fixture = polyBody->CreateFixture(&fixtureDefPoly);
Then, wherever you want to change the vertices of the polygon shape, grab a pointer to the shape object associated with your fixture:
b2PolygonShape* shape = (b2PolygonShape*) fixture->GetShape();
And modify the vertices as appropriate:
shape->m_vertices[0].Set(new_x0/PTM_RATIO,new_y0/PTM_RATIO);
shape->m_vertices[1].Set(new_x1/PTM_RATIO,new_y1/PTM_RATIO);
shape->m_vertices[2].Set(new_x2/PTM_RATIO,new_y2/PTM_RATIO);
...
shape->m_vertices[7].Set(new_x7/PTM_RATIO,new_y7/PTM_RATIO);
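One caution: writing m_vertices directly does not update the polygon's cached normals (m_normals) or centroid, so collision behaviour can go wrong. It is safer to rebuild the vertex array and call polygon.Set(vertices, count) again (or recreate the fixture). As a sketch of generating blob vertices, here is a plain C++ helper; the names and the wobble formula are my own, and the PTM_RATIO division from the question is left to the caller:

```cpp
#include <cassert>
#include <cmath>

// Generate `count` vertices of a wobbling circle ("blob") in
// counter-clockwise order, which is what Box2D expects for polygons.
// base is the mean radius, amp the wobble amplitude, t an animation phase.
void MakeBlobVertices(float* xs, float* ys, int count,
                      float base, float amp, float t) {
    const float kPi = 3.14159265358979f;
    for (int i = 0; i < count; ++i) {
        float angle = 2.0f * kPi * i / count;
        // Radius wobbles around `base`; amp = 0 gives a regular polygon.
        float r = base + amp * std::sin(3.0f * angle + t);
        xs[i] = r * std::cos(angle);
        ys[i] = r * std::sin(angle);
    }
}
```

Keep amp small relative to base: Box2D polygons must stay convex, and a large wobble makes the outline concave.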
Good luck!

Cocos2d Box2d multiple b2circle shape fixture for one body with positioning

Does anybody know how to add two circle fixtures to one b2Body with the desired positioning?
I know how to add two polygon fixtures to one body by using m_centroid, but how can I do the same for circle fixtures?
Any answer will be appreciated. I want to stick some objects together. I tried joints, but they are all elastic; I want the distance to stay fixed.
Thanks everyone!
You should create two fixtures for your body, and the shapes of those fixtures should be b2CircleShape:
//Create a body. You'll need a b2BodyDef, but I've assumed you know how to use these since you say you've created bodies successfully before.
b2Body* body = world->CreateBody(&bodyDef);
//Create the first circle shape. It's offset from the center of the body by -2, 0.
b2CircleShape circleShape1;
circleShape1.m_radius = 0.5f;
circleShape1.m_p.Set(-2.0f, 0.0f);
b2FixtureDef circle1FixtureDef;
circle1FixtureDef.shape = &circleShape1;
circle1FixtureDef.density = 1.0f;
//Create the second circle shape. It's offset from the center of the body by 2, 0.
b2CircleShape circleShape2;
circleShape2.m_radius = 0.5f;
circleShape2.m_p.Set(2.0f, 0.0f);
b2FixtureDef circle2FixtureDef;
circle2FixtureDef.shape = &circleShape2;
circle2FixtureDef.density = 1.0f;
//Attach both of these fixtures to the body.
body->CreateFixture(&circle1FixtureDef);
body->CreateFixture(&circle2FixtureDef);
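As a quick sanity check on where such a compound body balances, the combined centre of mass is the area-weighted average of the circle centres (for uniform density). Box2D computes this internally (b2Body::GetLocalCenter()); this little helper, with my own naming, just reproduces the arithmetic:

```cpp
#include <cassert>
#include <cmath>

// Area-weighted centre of mass of two uniform-density circles, mirroring
// the two-fixture body above. cx/cy receive the combined local centre.
void TwoCircleCenter(float r1, float x1, float y1,
                     float r2, float x2, float y2,
                     float* cx, float* cy) {
    const float kPi = 3.14159265358979f;
    float a1 = kPi * r1 * r1;  // area of circle 1
    float a2 = kPi * r2 * r2;  // area of circle 2
    *cx = (a1 * x1 + a2 * x2) / (a1 + a2);
    *cy = (a1 * y1 + a2 * y2) / (a1 + a2);
}
```

For the two equal circles above at (-2, 0) and (2, 0), the centre of mass is the body origin, so the dumbbell rotates about its middle.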

Box2d multiple fixtures and positioning

I'm attempting to create a "U" shape in Box2D (in Cocos2d) by joining 3 rectangles like so: |_|
It sounds like joints are not the correct solution here since I don't want any movement, so I have created a main body (the middle bit) and two fixtures for the sides. I've added the two sides to the middle bit like this:
mainBody->CreateFixture(&leftFixtureDef);
mainBody->CreateFixture(&rightFixtureDef);
This works, however both side fixtures get added to the center of the mainBody. I can't seem to work out how to position the fixtures relative to the main body. Attaching a sprite/node to the fixture and changing the position doesn't seem to make a difference.
Any ideas?
Many thanks.
It's a property of the shape. For b2PolygonShape it is the m_centroid parameter: the shape's center coordinates relative to the body. Set it to position the shape. (For b2CircleShape, the equivalent is the m_p member, as used in the previous answer.)
For b2PolygonShape there is a method SetAsBox(hx, hy), but there is also a more complete overload (note that hx and hy are half-extents):
SetAsBox(float32 hx, float32 hy, const b2Vec2 &center, float32 angle)
Use this method, or specify the centroid manually.
Here is the code for the U shape
b2BodyDef bDef;
bDef.type = b2_dynamicBody;
bDef.position = b2Vec2(0, 0);
b2Body *body = world_->CreateBody(&bDef);
b2PolygonShape shape;
const float32 density = 10;
shape.SetAsBox(1, 0.1);
body->CreateFixture(&shape, density);
shape.SetAsBox(0.1, 1, b2Vec2(-1 + 0.1, 1), 0);
body->CreateFixture(&shape, density);
shape.SetAsBox(0.1, 1, b2Vec2(1 - 0.1, 1), 0);
body->CreateFixture(&shape, density);
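Remember that SetAsBox takes half extents, so the base above spans x in [-1, 1], and each wall (0.2 wide) is centred at x = ±0.9 so its outer edge lines up with the end of the base. A tiny helper (my own naming) makes that check explicit:

```cpp
#include <cassert>
#include <cmath>

// Axis-aligned bounds of a SetAsBox-style box: SetAsBox takes HALF extents,
// so a box (hw, hh) centred at (cx, cy) spans [cx-hw, cx+hw] x [cy-hh, cy+hh].
struct Box { float hw, hh, cx, cy; };

float LeftEdge(const Box& b)   { return b.cx - b.hw; }
float RightEdge(const Box& b)  { return b.cx + b.hw; }
float BottomEdge(const Box& b) { return b.cy - b.hh; }
```

Plugging in the U-shape numbers from the answer confirms the left wall's outer edge sits at x = -1, flush with the end of the base, and its bottom sits on top of the base.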

Some faces are transparent, other are opaque

I have created a regular dodecahedron with OpenGL. I wanted to make the faces transparent (as in the image on Wikipedia) but this doesn't always work. After some digging in the OpenGL documentation, it appears that I "need to sort the transparent faces from back to front". Hm. How do I do that?
I mean I call glRotatef() to rotate the coordinate system but the reference coordinates of the faces stay the same; the rotation effect is applied "outside" of my rendering code.
If I apply the transformation to the coordinates, then everything else will stop moving.
How can I sort the faces in this case?
[EDIT] I know why this happens. I have no idea what the solution could look like. Can someone please direct me to the correct OpenGL calls or a piece of sample code? I know when the coordinate transform is finished and I have the coordinates of the vertices of the faces. I know how to calculate the center coordinates of the faces. I understand that I need to sort them by Z value. How do I transform a Vector3f by the current view matrix (or whatever this thing is called that rotates my coordinate system)?
Code to rotate the view:
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
When the OpenGL documentation says "sort the transparent faces" it means "change the order in which you draw them". You don't transform the geometry of the faces themselves, instead you make sure that you draw the faces in the right order: farthest from the camera first, nearest to the camera last, so that the colour is blended correctly in the frame buffer.
One way to do this is to compute for each transparent face a representative distance from the camera (for example, the distance of its centre from the centre of the camera), and then sort the list of transparent faces on this representative distance.
You need to do this because OpenGL uses the Z-buffering technique.
(I should add that the technique of "sorting by the distance of the centre of the face" is a bit naive, and leads to the wrong result in cases where faces are large or close to the camera. But it's simple and will get you started; there'll be plenty of time later to worry about more sophisticated approaches to Z-sorting.)
Update: Aaron, you clarified the post to indicate that you understand the above, but don't know how to calculate a suitable Z value for each face. Is that right? I would usually do this by measuring the distance from the camera to the face in question. So I guess this means you don't know where the camera is?
If that's a correct statement of the problem you're having, see OpenGL FAQ 8.010:
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.).
Update: Maybe the problem is that you don't know how to transform a point by the modelview matrix? If that's the problem, see OpenGL FAQ 9.130:
Transform the point into eye-coordinate space by multiplying it by the ModelView matrix. Then simply calculate its distance from the origin.
Use glGetFloatv(GL_MODELVIEW_MATRIX, dst) to get the modelview matrix as a list of 16 floats. I think you'll have to do the multiplication yourself: as far as I know OpenGL doesn't provide an API for this.
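Here is what that multiplication looks like in plain C++ (the function name is mine). glGetFloatv returns the matrix in column-major order, and since the camera sits at the eye-space origin, depth is just the transformed point's distance from (0, 0, 0):

```cpp
#include <cassert>
#include <cmath>

// Transform a point by a column-major 4x4 modelview matrix (the layout
// glGetFloatv(GL_MODELVIEW_MATRIX, ...) hands back) and return its
// distance from the eye, which in eye space is at the origin.
float EyeDistance(const float m[16], float x, float y, float z) {
    // Implicit w = 1: a face centre is a point, not a direction,
    // so the translation column (m[12..14]) must be applied.
    float ex = m[0]*x + m[4]*y + m[8]*z  + m[12];
    float ey = m[1]*x + m[5]*y + m[9]*z  + m[13];
    float ez = m[2]*x + m[6]*y + m[10]*z + m[14];
    return std::sqrt(ex*ex + ey*ey + ez*ez);
}
```

If you only need a sort key rather than a true distance, the eye-space z component (ez) alone is enough and skips the square root.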
For reference, here is the code (using lwjgl 2.0.1). I define my model by using an array of float arrays for the coordinates:
float one = 1f * scale;
// Cube of size 2*scale
float[][] coords = new float[][] {
{ one, one, one }, // 0
{ -one, one, one },
{ one, -one, one },
{ -one, -one, one },
{ one, one, -one },
{ -one, one, -one },
{ one, -one, -one },
{ -one, -one, -one }, // 7
};
Faces are defined in an array of int arrays. The items in the inner array are indices of vertices:
int[][] faces = new int[][] {
{ 0, 2, 3, 1, },
{ 0, 4, 6, 2, },
{ 0, 1, 5, 4, },
{ 4, 5, 7, 6, },
{ 5, 1, 3, 7, },
{ 4, 5, 1, 0, },
};
These lines load the Model/View matrix:
Matrix4f matrix = new Matrix4f ();
FloatBuffer params = BufferUtils.createFloatBuffer (16); // LWJGL requires a direct buffer here
GL11.glGetFloat (GL11.GL_MODELVIEW_MATRIX, params );
matrix.load (params);
I store some information of each face in a Face class:
public static class Face
{
public int id;
public Vector3f center;
@Override
public String toString ()
{
return String.format ("%d %.2f", id, center.z);
}
}
This comparator is then used to sort the faces by Z depth:
public static final Comparator<Face> FACE_DEPTH_COMPARATOR = new Comparator<Face> ()
{
@Override
public int compare (Face o1, Face o2)
{
float d = o1.center.z - o2.center.z;
return d < 0f ? -1 : (d == 0 ? 0 : 1);
}
};
getCenter() returns the center of a face:
public static Vector3f getCenter (float[][] coords, int[] face)
{
Vector3f center = new Vector3f ();
for (int vertice = 0; vertice < face.length; vertice ++)
{
float[] c = coords[face[vertice]];
center.x += c[0];
center.y += c[1];
center.z += c[2];
}
float N = face.length;
center.x /= N;
center.y /= N;
center.z /= N;
return center;
}
Now I need to set up the face array:
Face[] faceArray = new Face[faces.length];
Vector4f v = new Vector4f ();
for (int f = 0; f < faces.length; f ++)
{
Face face = faceArray[f] = new Face ();
face.id = f;
face.center = getCenter (coords, faces[f]);
v.x = face.center.x;
v.y = face.center.y;
v.z = face.center.z;
v.w = 1f; // w must be 1: the centre is a point, so translation applies
Matrix4f.transform (matrix, v, v);
face.center.x = v.x;
face.center.y = v.y;
face.center.z = v.z;
}
After this loop, I have the transformed center vectors in faceArray and I can sort them by Z value:
Arrays.sort (faceArray, FACE_DEPTH_COMPARATOR);
//System.out.println (Arrays.toString (faceArray));
Rendering happens in another nested loop:
float[] faceColor = new float[] { .3f, .7f, .9f, .3f };
for (Face f: faceArray)
{
int[] face = faces[f.id];
GL11.glColor4f (faceColor[0], faceColor[1], faceColor[2], faceColor[3]);
GL11.glBegin (GL11.GL_TRIANGLE_FAN);
for (int vertice = 0; vertice < face.length; vertice ++)
{
float[] c = coords[face[vertice]];
GL11.glVertex3f (c[0], c[1], c[2]);
}
GL11.glEnd ();
}
Have you tried just drawing each face in regular world coordinates, from back to front? The wording in some of the OpenGL docs can seem odd. I think if you get the drawing order right without worrying about rotation, it might automatically work when you add rotation; OpenGL might take care of the reordering of faces when rotating the matrix.
Alternatively you can grab the current matrix as you draw (glGetFloatv(GL_MODELVIEW_MATRIX, ...)) and reorder your drawing algorithm depending on which faces are going to be rotated to the back/front.
That quote says it all - you need to sort the faces.
When drawing such a simple object you can just render the back faces first and the front faces second using the z-buffer (by rendering twice with different z-buffer comparison functions).
But usually, you just want to transform the object, then sort the faces. You transform just your representation of the object in memory, then determine the drawing order by sorting, then draw in that order with the original coordinates, using transformations as needed (need to be consistent with the sorting you've done). In a real application, you would probably do the transformation implicitly, eg. by storing the scene as a BSP- or Quad- or R- or whatever-tree and simply traversing the tree from various directions.
Note that the sorting part can be tricky, because the relation "is-obscured-by", which is how you need to compare the faces (obscured faces must be drawn first), is not an ordering: there can be cycles (face A obscures face B and face B obscures face A). In such a case you would probably split one of the faces to break the cycle.
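For the common case where a per-face representative depth suffices, the sort itself is short. A sketch in plain C++ (types and names mine): in OpenGL eye coordinates, -z points into the screen, so more-negative depths are farther from the camera and must be drawn first.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Painter's-algorithm ordering: sort transparent faces back to front by a
// representative eye-space depth. Smaller (more negative) eyeZ means
// farther from the camera, so ascending order is back-to-front.
struct Face { int id; float eyeZ; };

void SortBackToFront(std::vector<Face>& faces) {
    std::sort(faces.begin(), faces.end(),
              [](const Face& a, const Face& b) { return a.eyeZ < b.eyeZ; });
}
```

This is the naive per-face-depth approach described above; it breaks down for large or mutually obscuring faces, which is exactly when splitting becomes necessary.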
EDIT:
You get the z-coordinate of a vertex by taking the coordinates you pass to glVertex3f(), make it 4D (homogenous coordinates) by appending 1, transform it with the modelview matrix, then transform it with the projection matrix, then do the perspective division. The details are in the OpenGL specs in Chapter 2, section Coordinate transformations.
However, there isn't any API for you to actually do the transformation. The only thing OpenGL lets you do is draw the primitives and tell the renderer how to draw them (e.g. how to transform them). It doesn't let you easily transform coordinates yourself (although, IIUC, there are ways to tell OpenGL to write transformed coordinates to a buffer, this is not that easy). If you want a library to help you manipulate actual objects, coordinates, etc., consider using some sort of scenegraph library (OpenInventor or something similar).