Box2D collision detection failure - C++

I have recently begun using Box2D version 2.1 in combination with Allegro 5. For a test, I built a scene with a ground and four boxes: three boxes are stacked, and the fourth smashes into them, causing them to flip. During this demonstration, I noticed two glitches.
One is that creating a box in Box2D with SetAsBox( width, height ) only gives half the size of a box drawn to the screen with Allegro. Example: in Box2D, I create a box of size (15, 15). When I come to draw the shape with Allegro, I must apply an offset of -15 on the y axis and scale the shape to twice its size.
The other issue occurs during collision detection while my boxes rotate due to impact. Most squares come to rest on the ground, but some of them stop at an offset from the ground equal to their own height, leaving them floating.
Here is the code for making my boxes:
cBox2D::cBox2D( int width, int height ) {
    // Note: In Box2D, 30 pixels = 1 meter
    velocityIterations = 10;
    positionIterations = 10;
    worldGravity = 9.81f;
    timeStep = ( 1.0f / 60.0f );
    isBodySleep = false;
    gravity.Set( 0.0f, worldGravity );
    world = new b2World( gravity, isBodySleep );
    groundBodyDef.position.Set( 0.0f, height ); // ground location
    groundBody = world->CreateBody( &groundBodyDef );
    groundBox.SetAsBox( width, 0.0f ); // Ground size
    groundBody->CreateFixture( &groundBox, 0.0f );
}
cBox2D::~cBox2D( void ) {}
void cBox2D::makeSquare( int width, int height, int locX, int locY, float xVelocity, float yVelocity, float angle, float angleVelocity ) {
    sSquare square;
    square.bodyDef.type = b2_dynamicBody;
    square.bodyDef.position.Set( locX, locY ); // Box location
    square.bodyDef.angle = angle; // Box angle
    square.bodyDef.angularVelocity = angleVelocity;
    square.bodyDef.linearVelocity.Set( xVelocity, yVelocity ); // Box velocity
    square.body = world->CreateBody( &square.bodyDef );
    square.dynamicBox.SetAsBox( width, height ); // Box size
    square.fixtureDef.shape = &square.dynamicBox;
    square.fixtureDef.density = 1.0f;
    square.fixtureDef.friction = 0.3f;
    square.fixtureDef.restitution = 0.0f; // Bounciness
    square.body->CreateFixture( &square.fixtureDef );
    squareVec.push_back( square );
}
int cBox2D::getVecSize( void ) {
    return squareVec.size();
}
b2Body* cBox2D::getSquareAt( int loc ) {
    return squareVec.at( loc ).body;
}
void cBox2D::update( void ) {
    world->Step( timeStep, velocityIterations, positionIterations );
    world->ClearForces();
}
Edit:
Thank you Chris Burt-Brown for explaining the first issue to me. As for the second issue, it was a good idea, but it did not solve it. I tried both rounding methods you showed me.
Edit:
I think I found the answer to my second issue. It turns out that Allegro has a different coordinate system than OpenGL. As a result, instead of using -gravity I had to use +gravity, which caused Box2D to become unstable and behave strangely.
Edit:
My bad, I thought that was the issue, but it turns out it did not change a thing.

It's actually SetAsBox(halfWidth, halfHeight). I know it sounds weird, but take a look inside SetAsBox. Passing in the parameters 15 and 15 will give a box with corners (-15,-15) and (15,15), i.e. a box of size 30x30.
I think it's intended as an optimisation, but it's a pretty silly one.
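To illustrate (a minimal sketch; the body pointer, colour variable and the Allegro primitives call are only assumed example usage, not taken from your code):
b2PolygonShape box;
box.SetAsBox( 15.0f, 15.0f ); // half-extents: corners (-15,-15) and (15,15), i.e. a 30x30 box

// When drawing, treat the arguments as half-extents instead of offsetting and rescaling afterwards
// (al_draw_filled_rectangle comes from the allegro_primitives addon; colour is assumed):
b2Vec2 pos = body->GetPosition();
al_draw_filled_rectangle( pos.x - 15, pos.y - 15, pos.x + 15, pos.y + 15, colour );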
I don't know what's causing your other problem, but when you draw the boxes with Allegro, try seeing if it's fixed when you round the coordinates. (If that doesn't work, try ceil.)

Related

Light shader moved while resizing window

I've been working on making a little light shader.
It works perfectly: the light fades as it's supposed to, and it forms a circle around my character that moves with it.
It would be perfect if it weren't for the resize event.
When SFML resizes the window, it enlarges everything, but in a strange way: it enlarges everything except the shader.
I do want to resize my window (I love resizable pixel-graphics games, I find them the most beautiful, so I don't want to block the resize event).
Here's my shader:
uniform vec3 light;

void main(void) {
    float distance = sqrt(pow(gl_FragCoord.x - light.x, 2) + pow(gl_FragCoord.y - light.y, 2));
    float alpha = 1.;
    if (distance <= light.z) {
        alpha = (1.0 / light.z) * distance;
    }
    gl_FragColor = vec4(0., 0., 0., alpha);
}
So, the problem is, my window is shown at 1280 x 736 (to fit with 32x32 textures), and I have a 1920 x 1080 monitor. When I enlarge the window to fit 1920 x 1080 (title bar included), the whole thing resizes correctly and everything's fine, but the shader is now working in 1920 x 1080 (minus the title bar). So the shader needs different coordinates (what's supposed to be at x = 32, y = 0 is, for the shader, at x = 48, y = 0).
So I was wondering, is it possible to enlarge the shader along with the whole window? Should I use events or something like that?
Thanks for your answers ^^
EDIT: Here are some pics:
This is the light shader before the resize (it's dark everywhere except around the player, as it's supposed to be).
Then I resize the window: the player doesn't move and the textures fill the entire window, but the light has moved.
So, to explain it properly: when I resize the window, I want everything to fit the window, so it's full of textures. But when I do that, the coordinates given to my shader are still the ones from before the resize, and if I move, the light moves as if I hadn't resized the window, so it is never over my player again.
I'm not sure that's clearer, but I tried my best.
EDIT2: Here's the code which calls the shader:
void Graphics::UpdateLight() {
    short radius = 65; // 265 on the pictures
    // Centre on the middle of the player sprite (CASE_LEN is a constant holding the tile size, here 32)
    int x = m_game->GetPlayer()->GetSprite()->getPosition().x + CASE_LEN / 2;
    // The "HEIGHT -" part is there because y = 0 seems to be at the bottom of the texture for GLSL
    int y = HEIGHT - (m_game->GetPlayer()->GetSprite()->getPosition().y + CASE_LEN / 2);
    sf::Vector3f shaderLight;
    shaderLight.x = x;
    shaderLight.y = y;
    shaderLight.z = radius;
    m_lightShader.setParameter("light", shaderLight);
}
The code snippet you're showing really only updates the shader coordinates (and from a quick glance it looks fine). The bug most likely happens somewhere where you're actually drawing things.
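If you do want to keep the single-shader approach, one robustness tweak (a sketch only; m_window is an assumed sf::RenderWindow member, the other names come from your snippet) is to map the player's world position into framebuffer pixels with mapCoordsToPixel(), so the shader coordinates stay valid whatever the current window size and view:
void Graphics::UpdateLight() {
    short radius = 65; // note: the radius may also need scaling when the window is resized
    // World position of the player's centre (CASE_LEN as in your code)
    sf::Vector2f worldPos = m_game->GetPlayer()->GetSprite()->getPosition()
                          + sf::Vector2f(CASE_LEN / 2.f, CASE_LEN / 2.f);
    // Convert world coordinates to actual pixels using the window's current view
    sf::Vector2i pixel = m_window.mapCoordsToPixel(worldPos); // m_window: assumed sf::RenderWindow
    sf::Vector3f shaderLight;
    shaderLight.x = static_cast<float>(pixel.x);
    // gl_FragCoord has y = 0 at the bottom, so flip against the real window height
    shaderLight.y = static_cast<float>(m_window.getSize().y) - pixel.y;
    shaderLight.z = radius;
    m_lightShader.setParameter("light", shaderLight);
}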
I'd use a completely different approach, because your shader approach might get rather tedious once you're rendering multiple things, other light sources, etc.
As such I'd suggest you render a light map to a render texture (which would essentially be like "black = no light, color = light of that color").
Rather than trying to explain everything in text, I've written a quick commented example program which will draw a window on screen and move some light sources over a background image (I've used the one that comes with SFML's shader example):
There are no requirements other than having a file called "background.jpg" in your startup path.
Feel free to copy this code or use it for inspiration. Just keep in mind this isn't optimized and really just a quick edit to show the general idea.
#include <SFML/Graphics.hpp>
#include <vector>
#include <cmath>

const float PI = 3.1415f;

struct Light
{
    sf::Vector2f position;
    sf::Color color;
    float radius;
};

int main()
{
    // Let's setup a window
    sf::RenderWindow window(sf::VideoMode(640, 480), "SFML Lights");
    window.setVerticalSyncEnabled(false);
    window.setFramerateLimit(60);

    // Create something simple to draw
    sf::Texture texture;
    texture.loadFromFile("background.jpg");
    sf::Sprite background(texture);

    // Setup everything for the lightmap
    sf::RenderTexture lightmapTex;

    // We're using a 512x512 render texture for max. compatibility
    // On modern hardware it could match the window resolution of course
    lightmapTex.create(512, 512);
    sf::Sprite lightmap(lightmapTex.getTexture());

    // Scale the sprite to fill the window
    lightmap.setScale(640 / 512.f, 480 / 512.f);

    // Set the lightmap's view to the same as the window
    lightmapTex.setView(window.getDefaultView());

    // Drawable helper to draw lights
    // We'll just have to adjust the first vertex's color to tint it
    sf::VertexArray light(sf::PrimitiveType::TriangleFan);
    light.append({sf::Vector2f(0, 0), sf::Color::White});

    // This is inaccurate, but for demo purposes…
    // This could be more elaborate to allow better graduation etc.
    for (float i = 0; i <= 2 * PI; i += PI * .125f)
        light.append({sf::Vector2f(std::sin(i), std::cos(i)), sf::Color::Transparent});

    // Setup some lights
    std::vector<Light> lights;
    lights.push_back({sf::Vector2f(50.f, 50.f),   sf::Color::White,  100.f});
    lights.push_back({sf::Vector2f(350.f, 150.f), sf::Color::Red,    150.f});
    lights.push_back({sf::Vector2f(150.f, 250.f), sf::Color::Yellow, 200.f});
    lights.push_back({sf::Vector2f(250.f, 450.f), sf::Color::Cyan,   100.f});

    // RenderStates helper to transform and draw lights
    sf::RenderStates rs(sf::BlendAdd);

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            switch (event.type) {
            case sf::Event::Closed:
                window.close();
                break;
            }
        }

        bool flip = false; // simple toggle to animate differently

        // Draw the light map
        lightmapTex.clear(sf::Color::Black);
        for (Light &l : lights)
        {
            // Apply all light attributes and render it
            // Reset the transformation
            rs.transform = sf::Transform::Identity;
            // Move the light
            rs.transform.translate(l.position);
            // And scale it (this could be animated to create flicker)
            rs.transform.scale(l.radius, l.radius);
            // Adjust the light color (first vertex)
            light[0].color = l.color;
            // Draw the light
            lightmapTex.draw(light, rs);

            // To make things a bit more interesting
            // We're moving the lights
            l.position.x += flip ? 2 : -2;
            flip = !flip;
            if (l.position.x > 640)
                l.position.x -= 640;
            else if (l.position.x < 0)
                l.position.x += 640;
        }
        lightmapTex.display();

        window.clear(sf::Color::White);

        // Draw the background / game
        window.draw(background);

        // Draw the lightmap
        window.draw(lightmap, sf::BlendMultiply);

        window.display();
    }
}

How to draw a segment of a circle in Cocos2d-x?

Context
I'm trying to draw a pie chart for statistics in my game. I'm using Cocos2d-x ver. 3.8.1. The size of the game is important, so I don't want to use third-party frameworks to create pie charts.
Problem
I could not find any suitable method in Cocos2d-x for drawing part of a circle.
What I tried
I tried to find a solution to this problem on the Internet, but without success.
As is known, a sector of a circle = triangle + segment. So I also tried to use the drawSegment() method from DrawNode.
Although it has a radius parameter ("The segment radius", as the API reference puts it), the radius affects only the thickness of the line.
The drawSegment() method just draws a simple line whose thickness is set by the method call.
Question
Please advise me: how can I draw a segment or a sector of a circle in Cocos2d-x?
Any advice will be appreciated, thanks.
I think one of the ways to draw a sector of a circle in Cocos2d-x is to use drawPolygon on a DrawNode. I wrote a little sample.
void drawSector(cocos2d::DrawNode* node, cocos2d::Vec2 origin, float radius, float angle_degree,
                cocos2d::Color4F fillColor, float borderWidth, cocos2d::Color4F bordercolor,
                unsigned int num_of_points = 100)
{
    if (!node)
    {
        return;
    }
    // Angle covered by each step along the arc, in radians
    const auto angle_step = 2 * M_PI * angle_degree / 360.f / num_of_points;
    std::vector<cocos2d::Point> circle;
    circle.emplace_back(origin);
    for (int i = 0; i <= num_of_points; i++)
    {
        auto rads = angle_step * i;
        auto x = origin.x + radius * cosf(rads);
        auto y = origin.y + radius * sinf(rads);
        circle.emplace_back(x, y);
    }
    node->drawPolygon(circle.data(), circle.size(), fillColor, borderWidth, bordercolor);
}
This function calculates the positions of the points along the circle's edge and draws the polygon. If you want to use it, you need to call it like the following:
auto canvas = DrawNode::create();
drawSector(canvas, cocos2d::Vec2(400, 400), 100, 60, cocos2d::Color4F::GREEN, 2, cocos2d::Color4F::BLUE, 100);
this->addChild(canvas);
The result looks like this. I think the code will help with your problem.
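To compose a full pie chart from this helper, one option (a rough sketch with made-up percentages and colours; drawSector() is the function above) is to give each slice its own DrawNode and rotate it to its cumulative start angle:
// Hypothetical data for the chart
std::vector<float> percentages = { 50.f, 30.f, 20.f };
std::vector<cocos2d::Color4F> colors = { cocos2d::Color4F::GREEN,
                                         cocos2d::Color4F::BLUE,
                                         cocos2d::Color4F::RED };
float startAngle = 0.f;
for (size_t i = 0; i < percentages.size(); ++i)
{
    float sweep = 360.f * percentages[i] / 100.f;
    auto slice = cocos2d::DrawNode::create();
    // Draw the slice around the node's local origin, then place and rotate the node itself
    drawSector(slice, cocos2d::Vec2::ZERO, 100, sweep, colors[i], 2, cocos2d::Color4F::WHITE);
    slice->setPosition(cocos2d::Vec2(400, 400));
    slice->setRotation(-startAngle); // setRotation() uses clockwise degrees, hence the minus sign
    this->addChild(slice);
    startAngle += sweep;
}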

Maintaining Aspect Ratio and Scale Independent of Window Size with freeglut

I've been wanting to experiment with platforming physics using freeglut, but before I would allow myself to start, I had an old problem to take care of.
You see, I want to write a reshape handler that not only maintains the scale and eliminates any distortion of the view, but also allows all of the onscreen shapes to maintain their size even while the window is too small to contain them (i.e. let them be clipped).
I've almost got all three parts solved, but when I scale my window, the circle I have drawn onto it still scales slightly. Otherwise, I have the clipping working and I have eliminated the distortion. Update: What I want to achieve is a program that maintains scale and aspect ratio independent of window size.
Here's my code:
void reshape(int nwidth, int nheight)
{
    glViewport(0, 0, nwidth, nheight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //here begins the code
    double bound = 1.5;
    double aspect = double(nwidth) / nheight;
    //so far, I get the best results by normalizing the dimensions
    double norm = sqrt(bound*bound + aspect*aspect);
    double invnorm = sqrt(bound*bound + (1/aspect)*(1/aspect));
    if (nwidth <= nheight)
        glOrtho(-bound/invnorm, bound/invnorm, -bound/aspect/invnorm, bound/aspect/invnorm, -1, 1);
    else
        glOrtho(-bound*aspect/norm, bound*aspect/norm, -bound/norm, bound/norm, -1, 1);
    //without setting the modelview matrix to the identity form,
    //the circle becomes an oval, and does not clip when nheight > nwidth
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
Update: As per Mr. Coleman's suggestion, I've tried switching out single precision for double. The scaling issue has improved along the vertical axis, but whenever I drag the horizontal edge in either direction, the shape still scales by a noticeable amount. It's still the same shape throughout, but a visual inspection tells me the shape is not the same size when the window is 150x300 as it is when the window is 600x800, regardless of which glOrtho call is being executed.
I've got it. Here's how I changed my code:
//at the top of the source file, in global scope:
int init_width;  //the initial width
int init_height; //the initial height

void reshape(int new_width, int new_height)
{
    //moved the glViewport call further down (it was part of an earlier idea that didn't work out)
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); //these two lines are unchanged
    double bound = 1.0; //I reduced the edge distance to make the shape larger in the window
    double scaleX = double(new_width) / init_width;
    double scaleY = double(new_height) / init_height;
    glOrtho( -bound*scaleX/2, bound*scaleX/2, //these are halved in order to un-squash the shape
             -bound*scaleY, bound*scaleY, -1, 1 );
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, new_width, new_height);
}
That is what my code looks like now. It maintains the scale and shape of what I have on screen, and allows it to go offscreen when the window is too small to contain the entire shape.
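For completeness, here is a minimal sketch of where init_width and init_height would be set: once, at startup, before the first reshape callback fires (the window size and title here are just assumptions, not from the original code):
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    init_width = 600;   // assumed initial size
    init_height = 600;
    glutInitWindowSize(init_width, init_height);
    glutCreateWindow("scale-independent view");
    glutReshapeFunc(reshape);
    // ...register the display callback here, then enter glutMainLoop();
}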

CPU Ray Casting

I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check whether that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check whether the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is, what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results: there are gaps in the renderings, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray casting code as I'm not sure it's needed, but I will post it if it'll help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
    // Calculate the right vector
    Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));

    // Set up the screen plane starting X & Y positions
    float screenPlaneX, screenPlaneY;
    screenPlaneX = cameraPosition.x() - ( ( WINDOWWIDTH / 2) * rightVector.x());
    screenPlaneY = cameraPosition.y() + ( (float)WINDOWHEIGHT / 2);

    float deltaX, deltaY;
    deltaX = 1;
    deltaY = 1;

    int currentX, currentY, index = 0;
    Vector origin, direction;
    origin = cameraPosition;
    vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);

    currentY = screenPlaneY;
    Vector4<int> colour;

    for (int y = 0; y < WINDOWHEIGHT; y++)
    {
        // Set the current pixel along x to be the left most pixel
        // on the image plane
        currentX = screenPlaneX;

        for (int x = 0; x < WINDOWWIDTH; x++)
        {
            // default colour is black
            colour = Vector4<int>(0, 0, 0, 0);

            // Cast the ray into the current pixel. Set the length of the ray to be 200
            direction = Vector(currentX, currentY, cameraPosition.z() + ( cameraLookAt.z() * 200 ) ) - origin;
            direction.normalize();

            // Cast the ray against the octree and store the resultant colour in the array
            colours[index] = RayCast(origin, direction, rootNode, colour);

            // Move to next pixel in the plane
            currentX += deltaX;

            // increase colour array index position
            index++;
        }
        // Move to next row in the image plane
        currentY -= deltaY;
    }

    // Set the colours for the array
    SetFinalImage(colours);

    // Fill the array with 0 0 0 to set the raster position to (0, 0, 0)
    GLfloat *v = new GLfloat[3];
    v[0] = 0.0f;
    v[1] = 0.0f;
    v[2] = 0.0f;

    // Set the raster position and pass the array of colours to drawPixels
    glRasterPos3fv(v);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}

void SetFinalImage(vector<Vector4<int>> colours)
{
    // The array is a 2D array, with the first dimension
    // set to the size of the window (WINDOW_WIDTH * WINDOW_HEIGHT)
    // Second dimension stores the rgba values for each pixel
    for (int i = 0; i < colours.size(); i++)
    {
        finalImage[i][0] = (float)colours[i].r;
        finalImage[i][1] = (float)colours[i].g;
        finalImage[i][2] = (float)colours[i].b;
        finalImage[i][3] = (float)colours[i].a;
    }
}
Your pixel drawing code looks okay, but I'm not sure that your ray casting routines are correct. When I wrote my ray tracer, I had a bug that caused horizontal artifacts on the screen, and it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colours are all red, and render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
Here's a question though: why are you using Vector4<int> when later on you write the image as GL_FLOAT? I'm not seeing any int-to-float conversion here...
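Something like this (a sketch only, reusing the names and types from your code; the value 1 becomes 1.0f after SetFinalImage's cast, which is full intensity for GL_FLOAT pixel data):
// Fill the buffer with solid red and push it through the same drawing path.
// If the window turns solid red, the OpenGL side is fine and the bug is in the ray caster.
vector<Vector4<int>> testColours(WINDOWWIDTH * WINDOWHEIGHT, Vector4<int>(1, 0, 0, 1));
SetFinalImage(testColours);
glRasterPos2i(0, 0);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);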
Your problem may be in your 3D DDA (octree ray caster), and specifically in its adaptive termination. The quantisation of rays into grid-cell form can cause certain octree nodes that lie slightly behind foreground nodes (i.e. at a greater z depth), and which should therefore be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem: comment out the adaptive termination line(s) in your 3D DDA and see if you still get the same gap artifacts.

Rendering sprites from spritesheet with OpenGL?

Imagine the following scenario: you have a set of RPG character spritesheets in PNG format and you want to use them in an OpenGL application.
The separate characters are (usually) 16 by 24 pixels in size (that is, 24 pixels tall), and the sheet may be of any width and height without leaving padding. Kinda like this:
(source: kafuka.org)
I already have the code to determine an integer-based clipping rectangle given a frame index and size:
int framesPerRow = sheet.Width / cellWidth;
int framesPerColumn = sheet.Height / cellHeight;
framesTotal = framesPerRow * framesPerColumn;
int left = frameIndex % framesPerRow;
int top = frameIndex / framesPerRow;
//Clipping rect's width and height are obviously cellWidth and cellHeight.
Running this code with frameIndex = 11, cellWidth = 16, cellHeight = 24 would return a cliprect of (32, 24)-(48, 48), assuming it's Right/Bottom as opposed to Width/Height.
The actual question
Now, given a clipping rectangle and an X/Y coordinate to place the sprite on, how do I draw this in OpenGL? Having the zero coordinate in the top left is preferred.
You have to start thinking in "texture space" where the coordinates are in the range [0, 1].
So if you have a sprite sheet:
class SpriteSheet {
    int spriteWidth, spriteHeight;
    int texWidth, texHeight;
    int tex;
public:
    SpriteSheet(int t, int tW, int tH, int sW, int sH)
        : tex(t), texWidth(tW), texHeight(tH), spriteWidth(sW), spriteHeight(sH)
    {}

    void drawSprite(float posX, float posY, int frameIndex);
};
All you have to do is submit both vertices and texture vertices to OpenGL:
void SpriteSheet::drawSprite(float posX, float posY, int frameIndex) {
    const float verts[] = {
        posX, posY,
        posX + spriteWidth, posY,
        posX + spriteWidth, posY + spriteHeight,
        posX, posY + spriteHeight
    };

    // Size of one frame and its offset within the sheet, in texture space
    const float tw = float(spriteWidth) / texWidth;
    const float th = float(spriteHeight) / texHeight;
    const int numPerRow = texWidth / spriteWidth;
    const float tx = (frameIndex % numPerRow) * tw;
    const float ty = (frameIndex / numPerRow + 1) * th;

    const float texVerts[] = {
        tx, ty,
        tx + tw, ty,
        tx + tw, ty - th,
        tx, ty - th
    };

    // ... Bind the texture, enable the proper arrays

    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texVerts);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4); // the four corners in order around the quad
}
Frank's solution is already very good.
Just a (very important) side note, since some of the comments suggested otherwise.
Please don't ever use glBegin/glEnd.
Don't ever tell someone to use it.
The only time it is OK to use glBegin/glEnd is in your very first OpenGL program.
Arrays are not much harder to handle, but...
... they are faster.
... they will still work with newer OpenGL versions.
... they will work with GLES.
... loading them from files is much easier.
I'm assuming you're learning OpenGL and only need to get this to work somehow. If you need raw speed, there are shaders and vertex buffers and all sorts of both neat and complicated things.
The simplest way is to load the PNG into a texture (assuming you have the ability to load images into memory; you do need that), then draw it as a quad with appropriate texture coordinates (they go from 0 to 1 as floating point coordinates, so you need to divide by the texture width or height accordingly).
Use glBegin(GL_QUADS), glTexCoord2f(), glVertex2f(), glEnd() for the simplest (but not fastest) way to draw this.
For making zero the top left, either use gluOrtho2D() to set up the view matrix differently from normal GL (look up the docs for that function; set top to 0 and bottom to 1, or to screen_height if you want integer coords), or just change your drawing loop and do glVertex2f(x/screen_width, 1 - y/screen_height).
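Putting those two pieces together, a minimal immediate-mode sketch might look like this (the texture handle, sprite position/size and the u/v values computed from the clipping rect are placeholders, not names from your code):
// Top-left origin projection, as described above
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, screen_width, screen_height, 0); // y grows downward
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Draw one sprite frame as a textured quad
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sheetTexture); // assumed texture handle
glBegin(GL_QUADS);
    glTexCoord2f(u0, v0); glVertex2f(x,     y);      // u0..v1: cliprect divided by sheet size
    glTexCoord2f(u1, v0); glVertex2f(x + w, y);
    glTexCoord2f(u1, v1); glVertex2f(x + w, y + h);
    glTexCoord2f(u0, v1); glVertex2f(x,     y + h);
glEnd();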
There are better and faster ways to do this, but this is probably one of the easiest if you're learning raw OpenGL from scratch.
A suggestion, if I may. I use SDL to load my textures, so what I did is:
1. I loaded the texture.
2. I determined how to separate the spritesheet into separate sprites.
3. I split them into separate surfaces.
4. I made a texture for each one (I have a sprite class to manage them).
5. I freed the surfaces.
This takes more time (obviously) on loading, but it pays off later.
This way it's a lot easier (and faster), as you only have to calculate the index of the texture you want to display and then display it. Then you can scale/translate it as you like and call a display list to render it onto whatever you want. Or you could do it in immediate mode; either works :)