Calculating AABB from Box (center, halfSize, rotation) - c++

I want to calculate AABB (axis aligned bounding box) from my Box class.
The box class:
struct Box {
    Point3D center;   // x, y, z
    Point3D halfSize; // x, y, z
    Point3D rotation; // x, y, z rotation
};
The AABB class (Box, but without rotation):
struct BoundingBox {
    Point3D center;   // x, y, z
    Point3D halfSize; // x, y, z
};
Of course, when rotation = (0,0,0), BoundingBox = Box. But how do I calculate the minimum BoundingBox that contains everything in Box when rotation = (rx,ry,rz)?
In case somebody asks: the rotation is in radians, and I use it to build the DirectX rotation matrices:
XMMATRIX rotX = XMMatrixRotationX( rotation.getX() );
XMMATRIX rotY = XMMatrixRotationY( rotation.getY() );
XMMATRIX rotZ = XMMatrixRotationZ( rotation.getZ() );
XMMATRIX scale = XMMatrixScaling( 1.0f, 1.0f, 1.0f );
XMMATRIX translate = XMMatrixTranslation( center.getX(), center.getY(), center.getZ() );
XMMATRIX worldM = scale * rotX * rotY * rotZ * translate;

You can use matrix rotations in Cartesian coordinates. A rotation of an angle A around the x axis is defined by the matrix:
        1     0        0
Rx(A) = 0   cos(A)  -sin(A)
        0   sin(A)   cos(A)
If you do the same for an angle B around y and C around z you have:
         cos(B)  0  sin(B)
Ry(B) =    0     1    0
        -sin(B)  0  cos(B)
and
        cos(C)  -sin(C)  0
Rz(C) = sin(C)   cos(C)  0
          0        0     1
With this you can calculate (even analytically) the final rotation matrix. Let's say that you rotate (in that order) around z, then around y, then around x (note that the axes x, y, z are fixed in space; they do not rotate with each rotation). The final matrix is the product:
R = Rx(A) Ry(B) Rz(C)
Now you can construct vectors with the positions of the eight corners and apply the full rotation matrix to these vectors. This will give the positions of the eight corners in the rotated version. Then just take the component-wise distance between opposing corners (keeping the largest extent per axis) and you have the new bounding box dimensions.

Well, you should apply the rotation to the vertices of the original bounding box (for the purposes of the calculation), then iterate over all of them to find the min and max x, y and z of all the vertices. That would define your axis-aligned bounding box. That's it at its most basic form; you should try and figure out the details. I hope that's a good start. :)
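To make the procedure both answers describe concrete, here is a minimal sketch in C++ (assuming plain structs with public members instead of the question's Point3D getters; rotateXYZ is an illustrative helper that applies the same X-then-Y-then-Z rotation order as the question's worldM):

#include <algorithm>
#include <cmath>

struct Point3D { float x, y, z; };
struct Box         { Point3D center, halfSize, rotation; };
struct BoundingBox { Point3D center, halfSize; };

// Illustrative helper: rotates a local-space point around X, then Y,
// then Z, matching the order used by worldM in the question.
static Point3D rotateXYZ(Point3D p, Point3D r)
{
    float cx = std::cos(r.x), sx = std::sin(r.x);
    float cy = std::cos(r.y), sy = std::sin(r.y);
    float cz = std::cos(r.z), sz = std::sin(r.z);
    Point3D a = { p.x, cx*p.y - sx*p.z, sx*p.y + cx*p.z };  // around X
    Point3D b = { cy*a.x + sy*a.z, a.y, -sy*a.x + cy*a.z }; // around Y
    return { cz*b.x - sz*b.y, sz*b.x + cz*b.y, b.z };       // around Z
}

BoundingBox aabbFromBox(const Box& box)
{
    Point3D mx = { 0.0f, 0.0f, 0.0f };
    // Rotate all 8 corners and keep the largest extent per axis; the box
    // is symmetric about its center, so the max absolute value is enough.
    for (int i = 0; i < 8; ++i)
    {
        Point3D c = { (i & 1) ? box.halfSize.x : -box.halfSize.x,
                      (i & 2) ? box.halfSize.y : -box.halfSize.y,
                      (i & 4) ? box.halfSize.z : -box.halfSize.z };
        Point3D w = rotateXYZ(c, box.rotation);
        mx.x = std::max(mx.x, std::fabs(w.x));
        mx.y = std::max(mx.y, std::fabs(w.y));
        mx.z = std::max(mx.z, std::fabs(w.z));
    }
    BoundingBox out;
    out.center = box.center; // rotation about the center leaves it in place
    out.halfSize = mx;
    return out;
}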

Project Points on near plane using NDC space

I have several pairs of points in world space; each pair has a different depth. I want to project those points onto the near plane of the view frustum, then recompute their new world positions.
note: I want to keep the perspective effect
To do so, I convert each point's location to NDC space. I think that points in NDC space with the same z value lie on the same plane, perpendicular to the view direction. So if I set their z value to -1, they should lie on the near plane.
Now that I have those new NDC locations I need their world positions. I lost the w component by changing the depth, so I need to recompute it.
I found this link: unproject ndc
which said that:
wclip * (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w = 1.0f
wclip = 1.0f / (inverse(mvp) * vec4(ndc.xyz, 1.0f)).w
my full code:
glm::vec4 homogeneousClipSpaceLeft = mvp * leftAnchor;
glm::vec4 homogeneousClipSpaceRight = mvp * rightAnchor;
glm::vec3 ndc_left = homogeneousClipSpaceLeft.xyz() / homogeneousClipSpaceLeft.w;
glm::vec3 ndc_right = homogeneousClipSpaceRight.xyz() / homogeneousClipSpaceRight.w;
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
float clipWLeft = (1.0f / (inverseMVP * glm::vec4(ndc_left, 1.0f)).w);
float clipWRight = (1.0f / (inverseMVP * glm::vec4(ndc_right, 1.0f)).w);
glm::vec3 worldPositionLeft = clipWLeft * inverseMVP * (glm::vec4(ndc_left, 1.0f));
glm::vec3 worldPositionRight = clipWRight * inverseMVP * (glm::vec4(ndc_right, 1.0f));
It should work, but in practice I get weird results. I start with 2 points in world space:
left world position: -116.463 15.6386 -167.327
right world position: 271.014 15.6386 -167.327
left NDC position: -0.59719 0.0790622 -1
right NDC position: 0.722784 0.0790622 -1
final left position: 31.4092 -9.22973 1251.16
final right position: 31.6823 -9.22981 1251.17
mvp
4.83644 0 0 0
0 4.51071 0 0
0 0 -1.0002 -1
-284.584 41.706 1250.66 1252.41
Am I doing something wrong?
Would you recommend this way of projecting pairs of points onto the near plane, with perspective?
If glm::vec3 ndc_left and glm::vec3 ndc_right are normalized device coordinates, then the following projects the coordinates onto the near plane in normalized device space:
ndc_left.z = -1.0f;
ndc_right.z = -1.0f;
If you want to get the model position of a point in normalized device space, in Cartesian coordinates, then you have to transform the point by the inverse model view projection matrix and divide the x, y and z components by the w component of the result. Note that the transformation by inverseMVP gives a homogeneous coordinate:
glm::vec4 wlh = inverseMVP * glm::vec4(ndc_left, 1.0f);
glm::vec4 wrh = inverseMVP * glm::vec4(ndc_right, 1.0f);
glm::vec3 worldPositionLeft = glm::vec3( wlh.x, wlh.y, wlh.z ) / wlh.w;
glm::vec3 worldPositionRight = glm::vec3( wrh.x, wrh.y, wrh.z ) / wrh.w;
Note that the OpenGL Mathematics (GLM) library provides an operation for "unproject". See glm::unProject.
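For example, a small sketch using glm::unProject (note it expects window coordinates, i.e. pixels plus a depth in [0, 1], rather than NDC, so the near plane is depth 0; the function and parameter names here are illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Unprojects a window-space position on the near plane back to world space.
glm::vec3 unprojectToNearPlane(glm::vec2 windowPos,
                               const glm::mat4& view,
                               const glm::mat4& projection,
                               const glm::vec4& viewport) // x, y, width, height
{
    glm::vec3 win(windowPos.x, windowPos.y, 0.0f); // depth 0 = near plane
    return glm::unProject(win, view, projection, viewport);
}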

gluLookAt specification

I have some problems understanding the specification for gluLookAt.
For example the z-axis is defined as:
F = ( centerX - eyeX, centerY - eyeY, centerZ - eyeZ )
with center being the point the camera looks at and eye being the position the camera is at.
f = F / |F|
and the View-Matrix M is defined as:
(  x[0]  x[1]  x[2]  0 )
(  y[0]  y[1]  y[2]  0 )
( -f[0] -f[1] -f[2]  0 )
(  0     0     0     1 )
with x and y being the x and y axes and f being the z-axis.
If my camera is positioned at (0, 0, 5) and looks at the center, then F points along the negative z-axis: because of the first equation (center - eye), the F-vector is (0,0,0) - (0,0,5) = (0,0,-5).
So far everything makes sense to me, but then the f-vector is multiplied by -1 in the M matrix above.
That way the negated f-vector points along the positive z-axis, away from the center.
I found that the projection matrix from gluPerspective will also multiply the camera's z-axis by -1, which flips the z-axis again and makes it point toward the world's negative z-axis.
So what is the point of multiplying it by -1?
Because gluLookAt builds a view matrix for a right-handed system. In this space, the Z coordinate increases coming out of the screen, i.e. behind the camera. So all objects that the camera can see have negative Z in view space.
EDIT
You should review your maths. The matrix you showed lacks the translation to the camera position.
Following this notation, let's do:
Obtain f normalized, up normalized, s normalized, and u = sn x f (where sn is the normalized s). Notice that s must be normalized because f and up may not be perpendicular, in which case their cross product is not a vector of length 1. This is not mentioned in the link above.
Form the matrix and pre-multiply it by the translation to the camera position: L = M · T
The resulting lookAt matrix is:
 s.x   s.y   s.z  -dot(s, eye)
 u.x   u.y   u.z  -dot(u, eye)
-f.x  -f.y  -f.z   dot(f, eye)
 0     0     0     1
With your data: camera=(0,0,5), target=(0,0,0), and up=(0,1,0), the matrix is:
1  0  0   0
0  1  0   0
0  0  1  -5
0  0  0   1
Let's apply this transformation to the point A = (0,0,4). We get A' = (0,0,-1).
Again, for B = (0,0,20), B' = (0,0,15).
A' has a negative Z, so the camera sees it. B' has a positive Z, so the camera cannot see it.
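As a quick cross-check (a sketch, not part of the original answer), GLM's glm::lookAt builds the same matrix and reproduces this mapping:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt
#include <cstdio>

int main()
{
    glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5),  // eye
                                 glm::vec3(0, 0, 0),  // target
                                 glm::vec3(0, 1, 0)); // up
    glm::vec4 A = view * glm::vec4(0, 0, 4, 1);  // -> (0, 0, -1): visible
    glm::vec4 B = view * glm::vec4(0, 0, 20, 1); // -> (0, 0, 15): behind
    std::printf("A' = (%g, %g, %g)\n", A.x, A.y, A.z);
    std::printf("B' = (%g, %g, %g)\n", B.x, B.y, B.z);
}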
I know this isn't a direct answer to the question but it might help someone who is looking for an equivalent function without using GLU, for example, if they are porting old OpenGL2 code to modern OpenGL.
Here is an equivalent function to gluLookAt(...):
void gluLookAt(float eyeX, float eyeY, float eyeZ,
               float centreX, float centreY, float centreZ,
               float upX, float upY, float upZ)
{
    glm::vec3 forward = glm::normalize(
        glm::vec3(centreX - eyeX, centreY - eyeY, centreZ - eyeZ));
    glm::vec3 right = glm::normalize(
        glm::cross(forward, glm::vec3(upX, upY, upZ)));
    // Recompute up so the basis is orthonormal even when the supplied
    // up vector is not perpendicular to the view direction.
    glm::vec3 up = glm::cross(right, forward);

    // OpenGL is column-major: each line below is one column, so the
    // rotation's rows come out as right, up, -forward, as the spec requires.
    GLfloat mat[16] = {
        right.x, up.x, -forward.x, 0.0f,
        right.y, up.y, -forward.y, 0.0f,
        right.z, up.z, -forward.z, 0.0f,
        0.0f,    0.0f,  0.0f,      1.0f
    };
    glMultMatrixf(mat);
    glTranslatef(-eyeX, -eyeY, -eyeZ);
}

Rotating a Group of Vectors

I am trying to rotate a group of sampled vectors so they align with the normal of a triangle.
If this were correct, the randomly sampled hemisphere would line up with the triangle.
Currently I generate the samples around the Z-axis and am attempting to rotate all of them to the normal of the triangle,
but the result seems to be "just off".
glm::quat getQuat(glm::vec3 v1, glm::vec3 v2)
{
    glm::quat myQuat;
    float dot = glm::dot(v1, v2);
    if (dot != 1) // parallel vectors need no rotation (anti-parallel is not handled here)
    {
        // rotation axis: perpendicular to both vectors
        glm::vec3 aa = glm::normalize(glm::cross(v1, v2));
        // w term from the lolengine article; the caller normalizes the quaternion
        float w = sqrt(glm::length(v1)*glm::length(v1) * glm::length(v2)*glm::length(v2)) + dot;
        myQuat.x = aa.x;
        myQuat.y = aa.y;
        myQuat.z = aa.z;
        myQuat.w = w;
    }
    return myQuat;
}
I pulled this from the bottom of this page: http://lolengine.net/blog/2013/09/18/beautiful-maths-quaternion-from-vectors
Then I do:
glm::vec3 zaxis = glm::normalize( glm::vec3(0, 0, 1) ); // hardcoded, but testing the original axis
glm::vec3 n1 = glm::normalize( glm::cross((p2 - p1), (p3 - p1)) ); //normal
glm::quat myQuat = glm::normalize(getQuat(zaxis, n1));
glm::mat4 rotmat = glm::toMat4(myQuat); //make a rotation matrix
glm::vec4 n3 = rotmat * glm::vec4(n2,1); // current vector I am trying to rotate
Construct a 4x4 transform matrix instead of quaternions.
Do not forget that OpenGL uses column-major matrices, so for double m[16]:
the X axis vector is in m[0], m[1], m[2]
the Y axis vector is in m[4], m[5], m[6]
the Z axis vector is in m[8], m[9], m[10]
and the position is in m[12], m[13], m[14]
LCS means local coordinate system (your triangle, object, or whatever) and GCS means global coordinate system (the world, or whatever).
All the X, Y, Z vectors should be normalized to unit vectors, otherwise scaling will occur.
Construction:
set the Z-axis vector to your triangle normal
set the position (LCS origin) to the midpoint of your triangle (or the average of its vertices)
now you just need the X and Y axes, which is easy:
let X = any triangle vertex - triangle midpoint
or X = the subtraction of any 2 vertices of the triangle
The only condition that must be met for X is that it must lie in the triangle plane.
Now let Y = X x Z; the cross product will create a vector perpendicular to X and Z (which also lies in the triangle plane).
Now put all this inside the matrix and load it to OpenGL as the ModelView matrix, or whatever.
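A minimal sketch of that construction with GLM (triangleFrame is an illustrative name; note it uses Y = Z x X rather than X x Z so the resulting basis is right-handed):

#include <glm/glm.hpp>

// Builds the LCS-to-GCS matrix for triangle (p1, p2, p3):
// Z = triangle normal, X = an in-plane edge direction, Y completes the basis.
glm::mat4 triangleFrame(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3)
{
    glm::vec3 z = glm::normalize(glm::cross(p2 - p1, p3 - p1)); // normal
    glm::vec3 x = glm::normalize(p2 - p1); // lies in the triangle plane
    glm::vec3 y = glm::cross(z, x);        // unit, perpendicular to both
    glm::vec3 o = (p1 + p2 + p3) / 3.0f;   // LCS origin: triangle midpoint

    glm::mat4 m(1.0f); // GLM matrices are column-major, like OpenGL
    m[0] = glm::vec4(x, 0.0f); // X axis -> m[0], m[1], m[2] in flat storage
    m[1] = glm::vec4(y, 0.0f); // Y axis -> m[4], m[5], m[6]
    m[2] = glm::vec4(z, 0.0f); // Z axis -> m[8], m[9], m[10]
    m[3] = glm::vec4(o, 1.0f); // position -> m[12], m[13], m[14]
    return m;
}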

How to rotate a vector to a place that is aligned with Z axis?

I want to rotate a vector onto the Z axis so that it points in the negative Z direction. So if the vector is (1,1,1), my result should be (0,0,-sqrt(3)).
My idea is two steps. The first step is to rotate my vector around the X axis into the XZ plane. The second step is to rotate the vector in the XZ plane around the Y axis onto the Z axis.
Here is my code:
GLfloat p[4] = {1,1,1,0}; //my vector, in homogeneous coordinates
GLfloat r[4]; //result vector to test
float theta1 = ((double)180/PI)*asin(p[1]/sqrt(p[0]*p[0]+p[1]*p[1]+p[2]*p[2]));
//angle theta1 between the vector and XZ plane, is this right ??? I doubt it !!!
float theta2 = ((double)180/PI)*atan(p[0]/p[2]);
//angle theta2 between the vector's projection in XZ plane and Z axis
GLfloat m[16];
glMatrixMode(GL_MODELVIEW); // get the rotation matrix in model-view matrix
glPushMatrix();
glLoadIdentity();
glRotatef(theta1, 1,0,0); //rotate to the XZ plane
glRotatef(180-theta2,0,1,0); //rotate to the Z axis
glGetFloatv(GL_MODELVIEW_MATRIX, m); // m is column-major.
glPopMatrix();
// use the matrix multiply my vector and get the result vector r[4]
//my expectation is (0,0,-sqrt(3))
r[0] = p[0]*m[0]+p[1]*m[4]+p[2]*m[8]+p[3]*m[12];
r[1] = p[0]*m[1]+p[1]*m[5]+p[2]*m[9]+p[3]*m[13];
r[2] = p[0]*m[2]+p[1]*m[6]+p[2]*m[10]+p[3]*m[14];
r[3] = p[0]*m[3]+p[1]*m[7]+p[2]*m[11]+p[3]*m[15];
However, the result r is not what I expect. So I think I made a mistake somewhere above. Could anyone give me a hint about that?
To rotate one vector so it faces another:
1. normalise both vectors
2. take their dot product to get the cosine of the rotation angle
3. take their cross product to find an orthogonal rotation axis
4. rotate around that new axis by the angle found in step 2
Step 2 can be omitted if you remember that |A x B| = sin(theta) when A and B are both normalised.
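Put together, a hedged sketch of those steps with GLM (rotateOnto is an illustrative name; glm::rotate is the matrix overload from <glm/gtc/matrix_transform.hpp>):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate
#include <cmath>

// Rotates vector v so it points in the same direction as target.
glm::vec3 rotateOnto(glm::vec3 v, glm::vec3 target)
{
    glm::vec3 a = glm::normalize(v);     // step 1
    glm::vec3 b = glm::normalize(target);
    float cosTheta = glm::dot(a, b);     // step 2
    glm::vec3 axis = glm::cross(a, b);   // step 3
    if (glm::length(axis) < 1e-6f)       // parallel or anti-parallel case
        return cosTheta > 0.0f ? v : -v;
    float angle = std::acos(glm::clamp(cosTheta, -1.0f, 1.0f));
    glm::mat4 rot = glm::rotate(glm::mat4(1.0f), angle, glm::normalize(axis));
    return glm::vec3(rot * glm::vec4(v, 0.0f)); // step 4
}

// e.g. rotateOnto(glm::vec3(1,1,1), glm::vec3(0,0,-1)) gives approximately (0, 0, -sqrt(3))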

c++ graphical programming

I'm new to c++ 3D, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D?
You map 3D to 2D via projection. You map 2D to 3D by inserting the appropriate value in the Z element of the vector.
It is a matter of casting a ray from the screen onto a plane which is parallel to x-y and is at the required z location. You then need to find out where on the plane the ray is colliding.
Here's one example, considering that screen_x and screen_y range over [0, 1], where 0 is the left-most or top-most coordinate and 1 is the right-most or bottom-most, respectively:
Vector3 point_of_contact(-1.0f, -1.0f, -1.0f);
Matrix4 view_matrix = camera->getViewMatrix();
Matrix4 proj_matrix = camera->getProjectionMatrix();
Matrix4 inv_view_proj_matrix = (proj_matrix * view_matrix).inverse();
float nx = (2.0f * screen_x) - 1.0f;
float ny = 1.0f - (2.0f * screen_y);
Vector3 near_point(nx, ny, -1.0f);
Vector3 mid_point(nx, ny, 0.0f);
// Get ray origin and ray target on near plane in world space
Vector3 ray_origin, ray_target;
ray_origin = inv_view_proj_matrix * near_point;
ray_target = inv_view_proj_matrix * mid_point;
Vector3 ray_direction = ray_target - ray_origin;
ray_direction.normalise();
// Check for collision with the plane
Vector3 plane_normal(0.0f, 0.0f, 1.0f);
float denom = plane_normal.dotProduct(ray_direction);
if (fabs(denom) >= std::numeric_limits<float>::epsilon())
{
    // Signed distance of the ray origin from the plane z = z_pos
    float num = plane_normal.dotProduct(ray_origin) - z_pos;
    float distance = -(num / denom);
    if (distance > 0)
    {
        point_of_contact = ray_origin + (ray_direction * distance);
    }
}
return point_of_contact;
Disclaimer Notice: This solution was taken from bits and pieces of the Ogre3D graphics library.
The simplest way is to do a divide by z. Therefore ...
screenX = projectionX / projectionZ;
screenY = projectionY / projectionZ;
That does perspective projection based on distance. The thing is, it is often better to use homogeneous coordinates, as this simplifies the matrix transformation (everything becomes a multiply). Equally, this is what D3D and OpenGL use. Understanding how to use non-homogeneous coordinates (i.e. an (x,y,z) coordinate triple) will be very helpful for things like shader optimisations, however.
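For instance, a short sketch of that homogeneous form with GLM (viewProj and toScreen are illustrative names): one matrix multiply followed by a divide by w replaces the ad-hoc divide by z:

#include <glm/glm.hpp>

// Projects a world-space point to normalized device coordinates.
glm::vec2 toScreen(glm::vec3 worldPos, const glm::mat4& viewProj)
{
    glm::vec4 clip = viewProj * glm::vec4(worldPos, 1.0f); // one multiply
    return glm::vec2(clip.x, clip.y) / clip.w;             // perspective divide
}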
One lame solution:
^ y
|
|
| /z
| /
+/--------->x
Angle is the angle between the Ox and Oz axes:
#include <cmath>

typedef struct {
    double x, y, z;
} Point3D;

typedef struct {
    double x, y;
} Point2D;

const double angle = M_PI/4; // can be changed

// Simple oblique projection: the z axis is drawn at `angle` within the xy plane.
Point2D projection(const Point3D& point) {
    Point2D p;
    p.x = point.x + point.z * sin(angle);
    p.y = point.y + point.z * cos(angle);
    return p;
}
However, there are lots of tutorials on this on the net... Have you googled for it?