Axis Aligned Bounding Box Collision Detection Issue - c++

I have attempted to produce an algorithm that uses world coordinates and a bounding box structure to
detect collision between two bounding boxes. I really don't know what I'm doing, but I thought the code below would work. My issue is that it only detects a collision if the bounding boxes are at the exact same x,y,z position.
BOOL AABB::isCollidedWith(AABB* bb)
{
    if (bb == NULL) return FALSE;

    float radX1, radX2;
    float radY1, radY2;
    float radZ1, radZ2;
    float arr[13]; // indices 1..12 are used below, so 13 slots are needed

    //please note that all the mins are set to 0
    //and all the maxes are set to 1
    radX1 = (bb->maxX - bb->minX) / 2;
    radX2 = (this->maxX - this->minX) / 2;
    radY1 = (bb->maxY - bb->minY) / 2;
    radY2 = (this->maxY - this->minY) / 2;
    radZ1 = (bb->maxZ - bb->minZ) / 2;
    radZ2 = (this->maxZ - this->minZ) / 2;

    //bb coords
    arr[1] = bb->bbX - radX1;
    arr[2] = bb->bbX + radX1;
    arr[3] = bb->bbY - radY1;
    arr[4] = bb->bbY + radY1;
    arr[5] = bb->bbZ - radZ1;
    arr[6] = bb->bbZ + radZ1;
    //this coords
    arr[7]  = this->bbX - radX2;
    arr[8]  = this->bbX + radX2;
    arr[9]  = this->bbY - radY2;
    arr[10] = this->bbY + radY2;
    arr[11] = this->bbZ - radZ2;
    arr[12] = this->bbZ + radZ2;

    if (arr[2] >= arr[7] && arr[1] <= arr[8])
    {
        if (arr[4] >= arr[9] && arr[3] <= arr[10])
        {
            if (arr[6] >= arr[11] && arr[5] <= arr[12])
            {
                this->collided = TRUE;
                OutputDebugStringA("Collided!\n");
                return TRUE;
            }
        }
    }
    return FALSE; // no overlap on at least one axis
}
Structures I am comparing:
AABB* aabb1 = new AABB(0.0f,0.0f,0.0f,1.0f,1.0f,1.0f,0.0f,0.0f,0.0f);
AABB* aabb2 = new AABB(0.0f,0.0f,0.0f,1.0f,1.0f,1.0f,0.0f,0.0f,0.0f);
aabb2->isCollidedWith(aabb1);
Constructor snippet:
Also note that the last three parameters dictate the x,y,z coordinates of the bounding box.
AABB::AABB(float minx, float maxx, float miny, float maxy, float minz, float maxz, float x, float y, float z)
{
    this->minX = minx;
    this->maxX = maxx;
    this->minY = miny;
    this->maxY = maxy;
    this->minZ = minz;
    this->maxZ = maxz;
    this->bbX = x; // the last three parameters set the box position, as noted above
    this->bbY = y;
    this->bbZ = z;
}
Any help, criticism, or advice would help.

As you are creating the boxes with minX=0.0 and maxX=0.0, the bbX coordinate must be the same for the boxes to collide (because radX = 0). The same goes for minZ=maxZ=1.0.
Note the order of parameters in your constructor: it's minX, maxX, minY, maxY, minZ, maxZ and not minX, minY, minZ, maxX, maxY, maxZ (I guess you supposed the second order and wanted to define a box of 1.0 x 1.0 x 1.0 dimensions).
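With the parameter order the constructor actually uses, the intended 1.0 x 1.0 x 1.0 box would be created like this:
AABB* aabb1 = new AABB(0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f);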

Simple error! I ignored the way the parameters were listed, causing the issue.
Also, I had to subtract 0.5 from each member of the array "arr" to find the centre of the AABB.
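For reference, the overlap test itself can be written without the intermediate array. A minimal sketch, assuming each box stores its world-space min/max extents (the Box type and member names here are illustrative, not the question's AABB):

bool overlaps(const Box& a, const Box& b)
{
    // Boxes intersect only if their intervals overlap on all three axes
    return a.maxX >= b.minX && a.minX <= b.maxX
        && a.maxY >= b.minY && a.minY <= b.maxY
        && a.maxZ >= b.minZ && a.minZ <= b.maxZ;
}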

How do I get a calculation result from a shader?

I load a 3D model from a file and need to see all of it on my screen: all vertices should be on the screen, within the main window. Then I rotate and zoom the model, and at some point I would like to fit the model to the window again. So I have written the function OptimiseView (see below), which multiplies each vertex position by the view matrix and then calculates the minimum and maximum coordinates in the screen plane.
This multiplication takes a lot of time. My shader does the same multiplication, but I can't manage to store the minimum and maximum coordinates in the shader (GPU) and return these values to the program (CPU) after the last vertex has been processed.
Is this possible at all? How do CAD systems implement this Fit View (Optimise View) feature? How does a space mouse (e.g. 3Dconnexion) work in this regard?
I am currently using C++ and OpenGL.
void OptimiseView(glm::mat4& view, glm::mat4* proj, int trianglesNumber, float* positions)
{
    if ((rotAngleX != 0) || (rotAngleY != 0) || (mouseScroll != 0))
    {
        float minX, maxX, minY, maxY, minZ, maxZ;
        float centreX, centreY;
        float largestDimension;
        int triangleCount{ 0 };
        glm::vec4 vertexPosition;

        if (trianglesNumber < 1)
        {
            minX = maxX = minY = maxY = minZ = maxZ = 0;
        }
        else
        {
            // Seed the bounds with the first transformed vertex
            vertexPosition = view * glm::vec4(
                positions[0],
                positions[1],
                positions[2],
                1.0f);
            minX = maxX = vertexPosition.x;
            minY = maxY = vertexPosition.y;
            minZ = maxZ = vertexPosition.z;
        }

        while (triangleCount < trianglesNumber)
        {
            for (int initialPosition : {0, 3, 9})
            {
                vertexPosition = view * glm::vec4(
                    positions[triangleCount * 9 * 2 + initialPosition],
                    positions[triangleCount * 9 * 2 + initialPosition + 1],
                    positions[triangleCount * 9 * 2 + initialPosition + 2],
                    1.0f);
                if (vertexPosition.x < minX) minX = vertexPosition.x;
                if (vertexPosition.x > maxX) maxX = vertexPosition.x;
                if (vertexPosition.y < minY) minY = vertexPosition.y;
                if (vertexPosition.y > maxY) maxY = vertexPosition.y;
                if (vertexPosition.z < minZ) minZ = vertexPosition.z;
                if (vertexPosition.z > maxZ) maxZ = vertexPosition.z;
            }
            triangleCount++;
        }

        // Centre the bounds and make them square so the aspect ratio is preserved
        centreX = minX + (maxX - minX) / 2.0f;
        centreY = minY + (maxY - minY) / 2.0f;
        largestDimension = ((maxX - minX) >= (maxY - minY)) ? (maxX - minX) : (maxY - minY);
        minX = centreX - largestDimension / 2.0f;
        maxX = centreX + largestDimension / 2.0f;
        minY = centreY - largestDimension / 2.0f;
        maxY = centreY + largestDimension / 2.0f;
        *proj = glm::ortho(minX, maxX, minY, maxY, -minZ, -maxZ);
    }
}
I have also tried using global variables in the shader, but they only seem to retain values while a single vertex is being processed.
I know the Fit View feature already works in different CAD systems and am just wondering what approach would be best.
Thanks.
After deeper research, I found the OpenGL feature called 'Transform Feedback'. The best practical guide, and the answer to my question, is here.
More information is here.
It is also worth reading Sam Buss's book '3D Computer Graphics: A Mathematical Approach with OpenGL', which contains many examples of using different OpenGL functions.
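For readers who want the shape of that approach, here is a minimal sketch of capturing view-space positions with transform feedback, assuming a vertex shader that writes the transformed position to an out variable called vsPosition (the names program, vertexCount and vsPosition are illustrative):

// Before linking: tell GL which vertex-shader output to capture.
const char* varyings[] = { "vsPosition" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// A buffer large enough for one vec4 per vertex.
GLuint tfbo;
glGenBuffers(1, &tfbo);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfbo);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * sizeof(float), nullptr, GL_DYNAMIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo);

// Run the vertex shader over the mesh without rasterizing anything.
glUseProgram(program);
glEnable(GL_RASTERIZER_DISCARD);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the captured positions back and reduce min/max on the CPU.
std::vector<float> captured(vertexCount * 4);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, captured.size() * sizeof(float), captured.data());

The per-vertex matrix multiply then runs on the GPU; only the readback and the final min/max reduction remain on the CPU.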

weird inaccuracy in line rotation - c++

I have programmed a simple dragon curve fractal. It seems to work for the most part, but there is an odd logical error that shifts the rotation of certain lines by one pixel. This wouldn't normally be an issue, but after a few generations, at the right size, the fractal begins to look wonky.
I am using OpenCV in C++ to generate it, but I'm pretty sure it's a logical error rather than a display error. I have printed the values to the console multiple times and seen for myself that there is a one-unit difference between values that are intended to be exactly the same - meaning a line may have a y of 200 at one end and 201 at the other.
Here is the full code:
#include <iostream>
#include <cmath>
#include <opencv2/opencv.hpp>

const int width = 500;
const int height = 500;
const double PI = std::atan(1) * 4.0;

struct point {
    double x;
    double y;
    point(double x_, double y_) {
        x = x_;
        y = y_;
    }
};

cv::Mat img(width, height, CV_8UC3, cv::Scalar(255, 255, 255));

double deg_to_rad(double degrees) { return degrees * PI / 180; }

point rotate(int degree, int centx, int centy, int ll) {
    double radians = deg_to_rad(degree);
    return point(centx + (ll * std::cos(radians)), centy + (ll * std::sin(radians)));
}

void generate(point& r, std::vector<point>& verticies, int rotation = 90) {
    int curRotation = 90;
    bool start = true;
    point center = r;
    point rot(0, 0);
    std::vector<point> verticiesc(verticies);
    for (point i : verticiesc) {
        double dx = center.x - i.x;
        double dy = center.y - i.y;
        //distance from centre
        int ll = std::sqrt(dx * dx + dy * dy);
        //angle from centre
        curRotation = std::atan2(dy, dx) * 180 / PI;
        //add 90 degrees of rotation
        rot = rotate(curRotation + rotation, center.x, center.y, ll);
        verticies.push_back(rot);
        //endpoint, where the next centre will be
        if (start) {
            r = rot;
            start = false;
        }
    }
}

void gen(int gens, int bwidth = 1) {
    int ll = 7;
    std::vector<point> verticies = {
        point(width / 2, height / 2 - ll),
        point(width / 2, height / 2)
    };
    point rot(width / 2, height / 2);
    for (int i = 0; i < gens; i++) {
        generate(rot, verticies);
    }
    //draw lines
    for (int i = 0; i < verticies.size(); i += 2) {
        cv::line(img, cv::Point(verticies[i].x, verticies[i].y),
                 cv::Point(verticies[i + 1].x, verticies[i + 1].y),
                 cv::Scalar(0, 0, 0), 1, 8);
    }
}

int main() {
    gen(10);
    cv::imshow("", img);
    cv::waitKey(0);
    return 0;
}
First, you use int to store point coordinates - that's a bad idea: you lose all the accuracy of the point position. Use double or float.
Second, your method for drawing fractals is not very stable numerically. You'd better store the original shape plus the rotations/translations/scales that describe where and how to draw scaled copies of it.
Also, I believe this is a bug:
for (point i : vertices)
{
    ...
    vertices.push_back(rot);
    ...
}
Changing the size of vertices while inside such a for-loop may invalidate its iterators and cause a crash or undefined behaviour.
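A minimal sketch of the safe pattern (which generate() actually gets right by iterating its verticiesc copy): snapshot first, then grow the original.

std::vector<point> snapshot(vertices);   // iterate over this copy...
for (const point& v : snapshot) {
    point rot = rotate(90, v.x, v.y, 7); // illustrative transform
    vertices.push_back(rot);             // ...so growing vertices is safe here
}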
Turns out it was to do with floating-point precision. I changed
x=x_;
y=y_;
to
x=std::round(x_);
y=std::round(y_);
and it works.
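Rounding in the constructor does fix the stair-stepping, but it also discards the sub-pixel precision the answer above recommends keeping. An alternative sketch: leave point as double everywhere and round only at draw time.

cv::line(img,
         cv::Point(std::lround(verticies[i].x), std::lround(verticies[i].y)),
         cv::Point(std::lround(verticies[i + 1].x), std::lround(verticies[i + 1].y)),
         cv::Scalar(0, 0, 0), 1, 8);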

How could I fill in my circle with a solid color using the distance formula?

I am a beginner in C++ and have coded a for loop that shows a hollow circle when I run the code. However, I was wondering how I could achieve a filled-in circle using the distance formula (d = sqrt((ax-bx)^2 + (ay-by)^2)). Here's what I have so far! Any help would be appreciated!
int MAX = 728;
for (float t = 0; t < 2 * 3.14; t += 0.01)
    SetPixel(MAX / 4 + MAX / 6 * sin(t), MAX / 4 + MAX / 6 * cos(t), 255, 255, 0);
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    HWND consoleWindow = GetConsoleWindow(); // Get a console handle
    HDC consoleDC = GetDC(consoleWindow);    // Get a handle to device context
    int max = 628;
    float i = 0;
    float t;
    float doublePi = 6.29;
    for (i = 0.0; i < max; i += 2.0) {
        for (t = 0.0; t < doublePi; t += 0.01) {
            SetPixel(consoleDC, max / 4 + (max - i) / 6 * sin(t), max / 4 + (max - i) / 6 * cos(t), RGB(255, 255, 0));
        }
    }
    ReleaseDC(consoleWindow, consoleDC);
    cin.ignore();
    return 0;
}
Working almost well. Draws and fills in! A little slow...
Pffff... do not use sin and cos! Instead, use the sqrt(1-x^2) approach. You can view the formula rendering a circle in Google, for example: https://www.google.com/search?q=sqrt(1-x^2)
I am editing this answer because it seems it was not clear:
float radius = 50.0f;
for (int x = -radius; x <= radius; ++x) {
    int d = round(sqrt(1.0f - (x * x / radius / radius)) * radius);
    for (int y = -d; y <= d; ++y) {
        SetPixel(x, y, 255, 255, 0);
    }
}
Note: every graphics library is different, so I assumed you were using the "SetPixel" function correctly.
Now, for most people the sqrt(1-x^2) approach should be enough, but it seems some downvoters do not think the same XD.
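As a sketch of the same idea centred at an arbitrary point, using the WinGDI SetPixel and console DC from the question (the function name and parameters are illustrative):

// Fill a circle at (cx, cy): for each column x, sqrt(r^2 - x^2) gives the
// half-height of the circle, so fill that whole vertical span.
void fillCircleScanline(HDC dc, int cx, int cy, int radius, COLORREF c)
{
    for (int x = -radius; x <= radius; ++x) {
        int d = (int)std::lround(std::sqrt((double)(radius * radius - x * x)));
        for (int y = -d; y <= d; ++y)
            SetPixel(dc, cx + x, cy + y, c);
    }
}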
Inefficient as can be, and probably the last way you really want to draw a circle ... but ...
Over the entire square encompassing your circle, calculate each pixel's distance from the center and set if under or equal the radius.
// Draw a circle centered at (XCenter, YCenter) with given radius using the distance formula
void drawCircle(HDC dc, int XCenter, int YCenter, int radius, COLORREF c) {
    double fRad = radius * 1.0; // Just a shortcut to avoid thrashing data types
    for (int x = XCenter - radius; x < XCenter + radius; x++) {
        for (int y = YCenter - radius; y < YCenter + radius; y++) {
            double d = sqrt(((x - XCenter) * (x - XCenter)) + ((y - YCenter) * (y - YCenter)));
            if (d <= fRad) SetPixel(dc, x, y, c);
        }
    }
}
Caveat: No more caveat, used a C++ environment and tested it this time. :-)
Call thusly:
int main()
{
    HWND consoleWindow = GetConsoleWindow();
    HDC consoleDC = GetDC(consoleWindow);
    drawCircle(consoleDC, 50, 50, 20, RGB(255, 0, 255));
    ReleaseDC(consoleWindow, consoleDC);
    return 0;
}
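One easy speed-up for the distance-formula approach: compare squared distances inside the loop above, which drops the per-pixel sqrt without changing which pixels are set.

// Equivalent to d <= fRad, but without computing sqrt for every pixel.
if ((x - XCenter) * (x - XCenter) + (y - YCenter) * (y - YCenter) <= radius * radius)
    SetPixel(dc, x, y, c);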

Perlin Noise getting wrong values in Y axis (C++)

Issue
I'm trying to implement the Perlin Noise algorithm in 2D with a single octave with a size of 16x16. I'm using this as heightmap data for a terrain, however it only seems to work in one axis. Whenever the sample point moves to a new Y section in the Perlin Noise grid, the gradient is very different from what I expect (for example, it often flips from 0.98 to -0.97, which is a very sudden change).
This image shows the staggered terrain in the z direction (which is the y axis in the 2D Perlin Noise grid)
Code
I've put the code that calculates which sample point to use at the end since it's quite long and I believe it's not where the issue is, but essentially I scale down the terrain to match the Perlin Noise grid (16x16) and then sample through all the points.
Gradient At Point
So the code that calculates out the gradient at a sample point is the following:
// Find the gradient at a certain sample point
float PerlinNoise::gradientAt(Vector2 point)
{
    // Decimal part of float
    float relativeX = point.x - (int)point.x;
    float relativeY = point.y - (int)point.y;
    Vector2 relativePoint = Vector2(relativeX, relativeY);

    // Find the weights of the 4 surrounding points
    vector<float> weights = surroundingWeights(point);

    float fadeX = fadeFunction(relativePoint.x);
    float fadeY = fadeFunction(relativePoint.y);
    float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
    float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
    float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
    return lerpC;
}
Surrounding Weights of Point
I believe the issue is somewhere here, in the function that calculates the weights for the 4 surrounding points of a sample point, but I can't seem to figure out what is wrong since all the values seem sensible in the function when stepping through it.
// Find the surrounding weights of a point
vector<float> PerlinNoise::surroundingWeights(Vector2 point) {
    // Produces correct values
    vector<Vector2> surroundingPoints = surroundingPointsOf(point);
    vector<float> weights;
    for (unsigned i = 0; i < surroundingPoints.size(); ++i) {
        // The corner to the sample point
        Vector2 cornerToPoint = surroundingPoints[i].toVector(point);
        // Getting the seeded vector from the grid
        float x = surroundingPoints[i].x;
        float y = surroundingPoints[i].y;
        Vector2 seededVector = baseGrid[x][y];
        // Dot product between the seeded vector and the corner-to-sample-point vector
        float dotProduct = cornerToPoint.dot(seededVector);
        weights.push_back(dotProduct);
    }
    return weights;
}
OpenGL Setup and Sample Point
Setting up the heightmap and getting the sample points. The variables 'wrongA' and 'wrongB' are an example of where the gradient flips and changes suddenly.
void HeightMap::GenerateRandomTerrain() {
    int perlinGridSize = 16;
    PerlinNoise perlin_noise = PerlinNoise(perlinGridSize, perlinGridSize);

    numVertices = RAW_WIDTH * RAW_HEIGHT;
    numIndices = (RAW_WIDTH - 1) * (RAW_HEIGHT - 1) * 6;
    vertices = new Vector3[numVertices];
    textureCoords = new Vector2[numVertices];
    indices = new GLuint[numIndices];

    float perlinScale = RAW_HEIGHT / (float)(perlinGridSize - 1);
    float height = 50;
    float wrongA = perlin_noise.gradientAt(Vector2(0, 68.0f / perlinScale));
    float wrongB = perlin_noise.gradientAt(Vector2(0, 69.0f / perlinScale));

    for (int x = 0; x < RAW_WIDTH; ++x) {
        for (int z = 0; z < RAW_HEIGHT; ++z) {
            int offset = (x * RAW_WIDTH) + z;
            float xVal = (float)x / perlinScale;
            float yVal = (float)z / perlinScale;
            float noise = perlin_noise.gradientAt(Vector2(xVal, yVal));
            vertices[offset] = Vector3(x * HEIGHTMAP_X, noise * height, z * HEIGHTMAP_Z);
            textureCoords[offset] = Vector2(x * HEIGHTMAP_TEX_X, z * HEIGHTMAP_TEX_Z);
        }
    }

    numIndices = 0;
    for (int x = 0; x < RAW_WIDTH - 1; ++x) {
        for (int z = 0; z < RAW_HEIGHT - 1; ++z) {
            int a = (x * RAW_WIDTH) + z;
            int b = ((x + 1) * RAW_WIDTH) + z;
            int c = ((x + 1) * RAW_WIDTH) + (z + 1);
            int d = (x * RAW_WIDTH) + (z + 1);
            indices[numIndices++] = c;
            indices[numIndices++] = b;
            indices[numIndices++] = a;
            indices[numIndices++] = a;
            indices[numIndices++] = d;
            indices[numIndices++] = c;
        }
    }
    BufferData();
}
Turned out the issue was in the interpolation stage:
float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
I had the interpolation in the y axis the wrong way around, so it should have been:
lerp(lerpB, lerpA, fadeY)
Instead of:
lerp(lerpA, lerpB, fadeY)
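For clarity, the tail of gradientAt() with the corrected order looks like this (whether A and B swap depends on how surroundingPointsOf() enumerates the four corners):

float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX); // x-blend along one edge
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX); // x-blend along the opposite edge
return MathUtils::lerp(lerpB, lerpA, fadeY);                  // y-blend between the two edges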

Separating Axis Theorem is driving me nuts!

I am working on an implementation of the Separating Axis Theorem for use in 2D games. It kind of works, but only kind of.
I use it like this:
bool penetration = sat(c1, c2) && sat(c2, c1);
Where c1 and c2 are of type Convex, defined as:
class Convex
{
public:
    float tx, ty;
public:
    std::vector<Point> p;

    void translate(float x, float y) {
        tx = x;
        ty = y;
    }
};
(Point is a structure holding float x, float y.)
The points are entered in clockwise order.
My current code (ignore Qt debug):
bool sat(Convex c1, Convex c2, QPainter *debug)
{
    //Debug
    QColor col[] = {QColor(255, 0, 0), QColor(0, 255, 0), QColor(0, 0, 255), QColor(0, 0, 0)};
    bool ret = true;
    int c1_faces = c1.p.size();
    int c2_faces = c2.p.size();

    //For every face in c1
    for (int i = 0; i < c1_faces; i++)
    {
        //Grab a face (face x, face y)
        float fx = c1.p[i].x - c1.p[(i + 1) % c1_faces].x;
        float fy = c1.p[i].y - c1.p[(i + 1) % c1_faces].y;

        //Create a perpendicular axis to project on (axis x, axis y)
        float ax = -fy, ay = fx;

        //Normalize the axis
        float len_v = sqrt(ax * ax + ay * ay);
        ax /= len_v;
        ay /= len_v;

        //Debug graphics (ignore)
        debug->setPen(col[i]);
        //Draw the face
        debug->drawLine(QLineF(c1.tx + c1.p[i].x, c1.ty + c1.p[i].y, c1.p[(i + 1) % c1_faces].x + c1.tx, c1.p[(i + 1) % c1_faces].y + c1.ty));
        //Draw the axis
        debug->save();
        debug->translate(c1.p[i].x, c1.p[i].y);
        debug->drawLine(QLineF(c1.tx, c1.ty, ax * 100 + c1.tx, ay * 100 + c1.ty));
        debug->drawEllipse(QPointF(ax * 100 + c1.tx, ay * 100 + c1.ty), 10, 10);
        debug->restore();

        //Carve out the min and max values
        float c1_min = FLT_MAX, c1_max = FLT_MIN;
        float c2_min = FLT_MAX, c2_max = FLT_MIN;

        //Project every point in c1 on the axis and store min and max
        for (int j = 0; j < c1_faces; j++)
        {
            float c1_proj = (ax * (c1.p[j].x + c1.tx) + ay * (c1.p[j].y + c1.ty)) / (ax * ax + ay * ay);
            c1_min = min(c1_proj, c1_min);
            c1_max = max(c1_proj, c1_max);
        }

        //Project every point in c2 on the axis and store min and max
        for (int j = 0; j < c2_faces; j++)
        {
            float c2_proj = (ax * (c2.p[j].x + c2.tx) + ay * (c2.p[j].y + c2.ty)) / (ax * ax + ay * ay);
            c2_min = min(c2_proj, c2_min);
            c2_max = max(c2_proj, c2_max);
        }

        //Return if the projections do not overlap
        if (!(c1_max >= c2_min && c1_min <= c2_max))
            ret = false; //return false;
    }
    return ret; //return true;
}
What am I doing wrong? It registers collisions, but is over-sensitive on one edge (in my test using a triangle and a diamond):
//Triangle
push_back(Point(0, -150));
push_back(Point(0, 50));
push_back(Point(-100, 100));
//Diamond
push_back(Point(0, -100));
push_back(Point(100, 0));
push_back(Point(0, 100));
push_back(Point(-100, 0));
I am getting this mega-adhd over this, please help me out :)
http://u8999827.fsdata.se/sat.png
OK, I was wrong the first time. Looking at your picture of a failure case, it is obvious that a separating axis exists and that it is one of the normals (the normal to the long edge of the triangle). The projection is correct; however, your bounds are not.
I think the error is here:
float c1_min = FLT_MAX, c1_max = FLT_MIN;
float c2_min = FLT_MAX, c2_max = FLT_MIN;
FLT_MIN is the smallest normal positive number representable by a float, not the most negative number. In fact you need:
float c1_min = FLT_MAX, c1_max = -FLT_MAX;
float c2_min = FLT_MAX, c2_max = -FLT_MAX;
or even better for C++
float c1_min = std::numeric_limits<float>::max(), c1_max = -c1_min;
float c2_min = std::numeric_limits<float>::max(), c2_max = -c2_min;
because you're probably seeing negative projections onto the axis.
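For what it's worth, C++11 also added std::numeric_limits<float>::lowest(), which states the intent directly:

#include <limits>

float c1_min = std::numeric_limits<float>::max();
float c1_max = std::numeric_limits<float>::lowest(); // the most negative finite float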