Say I have a cube with vertex coordinates like this (each edge is 1 unit long):
GLfloat vertA[3] = { 0.5, 0.5, 0.5};
GLfloat vertB[3] = {-0.5, 0.5, 0.5};
GLfloat vertC[3] = {-0.5,-0.5, 0.5};
GLfloat vertD[3] = { 0.5,-0.5, 0.5};
GLfloat vertE[3] = { 0.5, 0.5,-0.5};
GLfloat vertF[3] = {-0.5, 0.5,-0.5};
GLfloat vertG[3] = {-0.5,-0.5,-0.5};
GLfloat vertH[3] = { 0.5,-0.5,-0.5};
If I translate it like this:
glTranslatef(1,2,3);
then 1, 2 and 3 are added to the x, y and z coordinates respectively, and those are the new coordinates of the translated cube. But if I rotate it by some angle (with or without a translation), e.g.
glRotatef(25,0,0,1);
what are the coordinates of the rotated cube then?
I am new to OpenGL. I am using C++ on Windows.
You should make yourself familiar with linear algebra and transformation matrices.
What glRotate does is generate a rotation matrix and post-multiply it onto the current matrix. Be aware of a few things here: glTranslate does not directly add anything to the vertex coordinates, and glRotate does not change the coordinates either. All these calls do is modify a single matrix. That matrix accumulates the composition of all the transformations, and it is applied to all the vertices once, during the draw call.
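For example, with the legacy matrix stack the calls only build up the current model-view matrix; the vertices themselves are transformed when they are submitted. A minimal immediate-mode sketch of that flow:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(1.0f, 2.0f, 3.0f);      // current = current * T
glRotatef(25.0f, 0.0f, 0.0f, 1.0f);  // current = current * R
glBegin(GL_QUADS);                   // vertA etc. are never modified;
glVertex3fv(vertA);                  // each vertex is multiplied by the
glVertex3fv(vertB);                  // accumulated matrix while drawing
glVertex3fv(vertC);
glVertex3fv(vertD);
glEnd();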
In your case, a rotation of 25 degrees around the z axis is desired, so the z coordinates will not be changed. The rotation matrix will look like this
| cos(25°)  -sin(25°)  0  0 |
| sin(25°)   cos(25°)  0  0 |
|    0          0      1  0 |
|    0          0      0  1 |
To apply this matrix to a vector (x,y,z,w)^T, we just multiply the matrix by the vector.
Following the rules of matrix multiplication, we get a new vector with
x' = cos(25°)*x -sin(25°)*y
y' = sin(25°)*x +cos(25°)*y
z' = z
w' = w
This is just the rotation alone, not considering the translation. But you can put in the values of your vertices and you will get the transformed result back.
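For instance, you can apply those formulas to the cube's vertices on the CPU just to see the numbers (a standalone sketch; normally the GPU does this for you during the draw call):
#include <cmath>
#include <cstdio>

int main()
{
    const float rad = 25.0f * 3.14159265f / 180.0f;   // 25° in radians
    const float c = std::cos(rad), s = std::sin(rad);

    float verts[8][3] = {
        { 0.5f, 0.5f, 0.5f}, {-0.5f, 0.5f, 0.5f}, {-0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f, 0.5f},
        { 0.5f, 0.5f,-0.5f}, {-0.5f, 0.5f,-0.5f}, {-0.5f,-0.5f,-0.5f}, { 0.5f,-0.5f,-0.5f}
    };

    for (const auto &v : verts) {
        float xr = c * v[0] - s * v[1];   // x' = cos(25°)*x - sin(25°)*y
        float yr = s * v[0] + c * v[1];   // y' = sin(25°)*x + cos(25°)*y
        std::printf("(%.3f, %.3f, %.3f)\n", xr, yr, v[2]);   // z' = z
    }
    return 0;
}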
Here you are rotating the current matrix by 25 degrees around the z axis. This is the code for glm::rotate, which does the same thing:
template <typename T, precision P>
GLM_FUNC_QUALIFIER detail::tmat4x4<T, P> rotate
(
    detail::tmat4x4<T, P> const & m,
    T const & angle,
    detail::tvec3<T, P> const & v
)
{
    T const a = angle;
    T const c = cos(a);
    T const s = sin(a);

    detail::tvec3<T, P> axis(normalize(v));
    detail::tvec3<T, P> temp((T(1) - c) * axis);

    detail::tmat4x4<T, P> Rotate(detail::tmat4x4<T, P>::_null);
    Rotate[0][0] = c + temp[0] * axis[0];
    Rotate[0][1] = 0 + temp[0] * axis[1] + s * axis[2];
    Rotate[0][2] = 0 + temp[0] * axis[2] - s * axis[1];

    Rotate[1][0] = 0 + temp[1] * axis[0] - s * axis[2];
    Rotate[1][1] = c + temp[1] * axis[1];
    Rotate[1][2] = 0 + temp[1] * axis[2] + s * axis[0];

    Rotate[2][0] = 0 + temp[2] * axis[0] + s * axis[1];
    Rotate[2][1] = 0 + temp[2] * axis[1] - s * axis[0];
    Rotate[2][2] = c + temp[2] * axis[2];

    detail::tmat4x4<T, P> Result(detail::tmat4x4<T, P>::_null);
    Result[0] = m[0] * Rotate[0][0] + m[1] * Rotate[0][1] + m[2] * Rotate[0][2];
    Result[1] = m[0] * Rotate[1][0] + m[1] * Rotate[1][1] + m[2] * Rotate[1][2];
    Result[2] = m[0] * Rotate[2][0] + m[1] * Rotate[2][1] + m[2] * Rotate[2][2];
    Result[3] = m[3];
    return Result;
}
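Note that current GLM versions take the angle in radians. For completeness, reproducing the glTranslatef/glRotatef calls from the question with GLM would look roughly like this (makeTransform is just an illustrative helper name):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeTransform()
{
    // equivalent of glTranslatef(1,2,3) followed by glRotatef(25,0,0,1)
    glm::mat4 model(1.0f);
    model = glm::translate(model, glm::vec3(1.0f, 2.0f, 3.0f));
    model = glm::rotate(model, glm::radians(25.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    return model;
}

// a vertex is only transformed when you multiply it yourself (or in the vertex shader):
// glm::vec4 a = makeTransform() * glm::vec4(0.5f, 0.5f, 0.5f, 1.0f);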
I cannot understand the math behind this problem. I am trying to create an FPS camera where I can look around freely with my mouse input.
I am trying to rotate and position my lookat point with 180 degrees of freedom. I understand the easier solution is to glRotate the world to fit my perspective, but I do not want this approach. I am fairly unfamiliar with the trigonometry involved here and cannot figure out how to solve this problem the way I want to...
Here is my attempt to do this so far...
Code to get the mouse coordinates relative to the center of the window, then process them in my camera object:
#define DEG2RAD(a) (a * (M_PI / 180.0f)) // convert to radians

static void glutPassiveMotionHandler(int x, int y) {
    glf centerX = WinWidth / 2; glf centerY = WinHeight / 2; // get the window's center point
    f speed = 0.2f;
    f oldX = mouseX; f oldY = mouseY;
    mouseX = DEG2RAD(-((x - centerX))); // get distance from 0 and convert to radians
    mouseY = DEG2RAD(-((y - centerY))); // get distance from 0 and convert to radians
    f diffX = mouseX - oldX; f diffY = mouseY - oldY; // get difference from last frame to this frame

    if (mouseX != 0 || mouseY != 0) {
        mainCamera->Rotate(diffX, diffY);
    }
}
Code to rotate the camera
void Camera::Rotate(f angleX, f angleY) {
Camera::refrence = Vector3D::NormalizeVector(Camera::refrence * cos(angleX)) + (Camera::upVector * sin(angleY));//rot up
Camera::refrence = Vector3D::NormalizeVector((Camera::refrence * cos(angleY)) - (Camera::rightVector * sin(angleX)));//rot side to side
};
Camera::refrence is our lookat point, processing the lookat point is handled as follows
void Camera::LookAt(void) {
gluLookAt(
Camera::position.x, Camera::position.y, Camera::position.z,
Camera::refrence.x, Camera::refrence.y, Camera::refrence.z,
Camera::upVector.x, Camera::upVector.y, Camera::upVector.z
);
};
The camera is defined by a position point (position), a target point (refrence) and an up vector (upVector). If you want to change the orientation of the camera, then you have to rotate the direction vector from the position (position) to the target (refrence) by a rotation matrix, rather than rotating the target point itself.
Note, since the two angles are meant to change an already rotated view, you have to use a rotation matrix which can rotate vectors that point in an arbitrary direction.
Write a function which sets up a 3x3 rotation matrix around an arbitrary axis:
void RotateMat(float m[], float angle_radians, float x, float y, float z)
{
    float c = cos(angle_radians);
    float s = sin(angle_radians);
    m[0] = x*x*(1.0f-c)+c;   m[1] = x*y*(1.0f-c)-z*s; m[2] = x*z*(1.0f-c)+y*s;
    m[3] = y*x*(1.0f-c)+z*s; m[4] = y*y*(1.0f-c)+c;   m[5] = y*z*(1.0f-c)-x*s;
    m[6] = z*x*(1.0f-c)-y*s; m[7] = z*y*(1.0f-c)+x*s; m[8] = z*z*(1.0f-c)+c;
}
Write a function which rotates a 3 dimensional vector by the matrix:
Vector3D Rotate(float m[], const Vector3D &v)
{
Vector3D rv;
rv.x = m[0] * v.x + m[3] * v.y + m[6] * v.z;
rv.y = m[1] * v.x + m[4] * v.y + m[7] * v.z;
rv.z = m[2] * v.x + m[5] * v.y + m[8] * v.z;
return rv;
}
Calculate the vector from the position to the target:
Vector3D los = Vector3D(refrence.x - position.x, refrence.y - position.y, refrence.z - position.z);
Rotate all the vectors around the z axis of the world by angleX:
float rotX[9];
RotateMat(rotX, angleX, 0.0f, 0.0f, 1.0f);
los = Rotate(rotX, los);
upVector = Rotate(rotX, upVector);
Rotate all the vectors around the current y axis of the view by angleY:
float rotY[9];
RotateMat(rotY, angleY, los.x, los.y, 0.0f);
los = Rotate(rotY, los);
upVector = Rotate(rotY, upVector);
Calculate the new target point:
refrence = Vector3D(position.x + los.x, position.y + los.y, position.z + los.z);
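Putting those steps together, Camera::Rotate could look roughly like this (a sketch using the member names from the question and the two helpers above; note that RotateMat assumes a unit-length axis, so you may want to normalize (los.x, los.y, 0)):
void Camera::Rotate(f angleX, f angleY) {
    // vector from the camera position to the current target
    Vector3D los(refrence.x - position.x, refrence.y - position.y, refrence.z - position.z);

    // rotate the line of sight and the up vector around the world z axis by angleX
    float rotX[9];
    RotateMat(rotX, angleX, 0.0f, 0.0f, 1.0f);
    los      = Rotate(rotX, los);
    upVector = Rotate(rotX, upVector);

    // rotate them around the current axis of the view by angleY
    float rotY[9];
    RotateMat(rotY, angleY, los.x, los.y, 0.0f);
    los      = Rotate(rotY, los);
    upVector = Rotate(rotY, upVector);

    // move the target point back out along the rotated line of sight
    refrence = Vector3D(position.x + los.x, position.y + los.y, position.z + los.z);
}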
U_Cam_X_angle is the left/right rotation; U_Cam_Y_angle is the up/down rotation.
view_radius is the view distance (zoom) to U_look_point_x, U_look_point_y and U_look_point_z.
This is ALWAYS a negative number, because you are always looking in the positive direction; deeper into the screen is more positive.
This is all in radians.
The last three, eyeX, eyeY and eyeZ, are where the camera is in 3D space.
This code is in VB.net. Find a converter online for VB to C++ or do it manually.
Public Sub set_eyes()
Dim sin_x, sin_y, cos_x, cos_y As Single
sin_x = Sin(U_Cam_X_angle + angle_offset)
cos_x = Cos(U_Cam_X_angle + angle_offset)
cos_y = Cos(U_Cam_Y_angle)
sin_y = Sin(U_Cam_Y_angle)
cam_y = Sin(U_Cam_Y_angle) * view_radius
cam_x = (sin_x - (1 - cos_y) * sin_x) * view_radius
cam_z = (cos_x - (1 - cos_y) * cos_x) * view_radius
Glu.gluLookAt(cam_x + U_look_point_x, cam_y + U_look_point_y, cam_z + U_look_point_z, _
U_look_point_x, U_look_point_y, U_look_point_z, 0.0F, 1.0F, 0.0F)
eyeX = cam_x + U_look_point_x
eyeY = cam_y + U_look_point_y
eyeZ = cam_z + U_look_point_z
End Sub
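For reference, a rough and untested C++ translation of that routine might look like this (it assumes the same global float variables exist on the C++ side):
#include <cmath>
#include <GL/glu.h>

// globals assumed to mirror the VB version
extern float U_Cam_X_angle, U_Cam_Y_angle, angle_offset, view_radius;
extern float U_look_point_x, U_look_point_y, U_look_point_z;
extern float cam_x, cam_y, cam_z, eyeX, eyeY, eyeZ;

void set_eyes()
{
    float sin_x = std::sin(U_Cam_X_angle + angle_offset);
    float cos_x = std::cos(U_Cam_X_angle + angle_offset);
    float cos_y = std::cos(U_Cam_Y_angle);
    float sin_y = std::sin(U_Cam_Y_angle);

    cam_y = sin_y * view_radius;
    cam_x = (sin_x - (1.0f - cos_y) * sin_x) * view_radius;
    cam_z = (cos_x - (1.0f - cos_y) * cos_x) * view_radius;

    gluLookAt(cam_x + U_look_point_x, cam_y + U_look_point_y, cam_z + U_look_point_z,
              U_look_point_x, U_look_point_y, U_look_point_z,
              0.0f, 1.0f, 0.0f);

    eyeX = cam_x + U_look_point_x;
    eyeY = cam_y + U_look_point_y;
    eyeZ = cam_z + U_look_point_z;
}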
I am playing around with OpenGL and one thing I decided to do is create my own Matrix class, instead of using glm's matrices.
The Matrix class has methods for translating, rotating and scaling the object, which are written below:
Matrix4 Matrix4::translate(Matrix4& matrix, Vector3& translation)
{
Vector4 result(translation, 1.0f);
result.multiply(matrix);
matrix.mElements[3 * 4 + 0] = result.x;
matrix.mElements[3 * 4 + 1] = result.y;
matrix.mElements[3 * 4 + 2] = result.z;
return matrix;
}
Matrix4 Matrix4::rotate(Matrix4& matrix, float angle, Vector3& axis)
{
if (axis.x == 0 && axis.y == 0 && axis.z == 0)
return matrix;
float r = angle;
float s = sin(r);
float c = cos(r);
float omc = 1.0f - cos(r);
float x = axis.x;
float y = axis.y;
float z = axis.z;
matrix.mElements[0 + 0 * 4] = c + x * x * omc;
matrix.mElements[1 + 0 * 4] = x * y * omc - z * s;
matrix.mElements[2 + 0 * 4] = z * x * omc + y * s;
matrix.mElements[0 + 1 * 4] = x * y * omc + z * s;
matrix.mElements[1 + 1 * 4] = c + y * y * omc;
matrix.mElements[2 + 1 * 4] = z * y * omc - x * s;
matrix.mElements[0 + 2 * 4] = x * z * omc - y * s;
matrix.mElements[1 + 2 * 4] = y * z * omc + x * s;
matrix.mElements[2 + 2 * 4] = c + z * z * omc;
return matrix;
}
Matrix4 Matrix4::scale(Matrix4& matrix, Vector3& scaler)
{
matrix.mElements[0 + 0 * 4] *= scaler.x;
matrix.mElements[1 + 0 * 4] *= scaler.x;
matrix.mElements[2 + 0 * 4] *= scaler.x;
matrix.mElements[0 + 1 * 4] *= scaler.y;
matrix.mElements[1 + 1 * 4] *= scaler.y;
matrix.mElements[2 + 1 * 4] *= scaler.y;
matrix.mElements[0 + 2 * 4] *= scaler.z;
matrix.mElements[1 + 2 * 4] *= scaler.z;
matrix.mElements[2 + 2 * 4] *= scaler.z;
matrix.mElements[3 + 3 * 4] = 1;
return matrix;
}
When I call the translate, rotate and scale methods in the while loop (in this particular order), it does what I want: it translates the object, then rotates it around its local origin and scales it. However, when I switch the order so that I call rotation first and then translation, I want it to do this:
But my code doesn't do that. Instead, it's doing this:
What can I do so that my object only rotates around the center of the screen and not around its local origin as well?
My only guess is that I am doing something wrong when applying the rotation calculation to the already transformed matrix, but I still can't tell what it is.
EDIT: One thing I need to point out is that if I leave out the rotation method and only use translation and scaling, they do what I expect them to do, in either order.
EDIT 2: Here is how I call these functions in the while loop.
Matrix4 trans = Matrix4(1.0f);
trans = Matrix4::rotate(trans, (float)glfwGetTime(), Vector3(0.0f, 0.0f, 1.0f));
trans = Matrix4::translate(trans, Vector3(0.5f, -0.5f, 0.0f));
trans = Matrix4::scale(trans, Vector3(0.5f, 0.5f, 1.0f));
shader.setUniformMatrix4f("uTransform", trans);
You have to concatenate the matrices by a matrix multiplication.
A matrix multiplication C = A * B works like this:
Matrix4x4 A, B, C;
// C = A * B
for ( int k = 0; k < 4; ++ k )
for ( int j = 0; j < 4; ++ j )
C[k][j] = A[0][j] * B[k][0] + A[1][j] * B[k][1] + A[2][j] * B[k][2] + A[3][j] * B[k][3];
I recommend defining the matrix class somehow like this:
#include <array>
class Matrix4
{
public:
std::array<float, 16> mElements{
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1 };
const float * dataPtr( void ) const { return mElements.data(); }
Matrix4 & multiply( const Matrix4 &mat );
Matrix4 & translate( const Vector3 &translation );
Matrix4 & scale( const Vector3 &scaler );
Matrix4 & rotate( float angle, const Vector3 &axis );
};
Implement the matrix multiplication. Note, you have to store the result in a temporary buffer:
if you wrote the result back to the matrix member directly, you would overwrite elements that are read again later in the nested loop, and the result wouldn't be correct:
Matrix4& Matrix4::multiply( const Matrix4 &mat )
{
// multiply the existing matrix by the new and store the result in a buffer
const float *A = dataPtr();
const float *B = mat.dataPtr();
std::array<float, 16> C;
for ( int k = 0; k < 4; ++ k ) {
for ( int j = 0; j < 4; ++ j ) {
C[k*4+j] =
A[0*4+j] * B[k*4+0] +
A[1*4+j] * B[k*4+1] +
A[2*4+j] * B[k*4+2] +
A[3*4+j] * B[k*4+3];
}
}
// copy the buffer to the attribute
mElements = C;
return *this;
}
Adapt the methods for translation, rotation and scaling like this:
Matrix4 & Matrix4::translate( const Vector3 &translation )
{
float x = translation.x;
float y = translation.y;
float z = translation.z;
Matrix4 transMat;
transMat.mElements = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
x, y, z, 1.0f };
return multiply(transMat);
}
Matrix4 & Matrix4::rotate( float angle, const Vector3 &axis )
{
float x = axis.x;
float y = axis.y;
float z = axis.z;
float c = cos(angle);
float s = sin(angle);
Matrix4 rotationMat;
rotationMat.mElements = {
x*x*(1.0f-c)+c, x*y*(1.0f-c)-z*s, x*z*(1.0f-c)+y*s, 0.0f,
y*x*(1.0f-c)+z*s, y*y*(1.0f-c)+c, y*z*(1.0f-c)-x*s, 0.0f,
z*x*(1.0f-c)-y*s, z*y*(1.0f-c)+x*s, z*z*(1.0f-c)+c, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
return multiply(rotationMat);
}
Matrix4 & Matrix4::scale( const Vector3 &scaler )
{
float x = scaler.x;
float y = scaler.y;
float z = scaler.z;
Matrix4 scaleMat;
scaleMat.mElements = {
x, 0.0f, 0.0f, 0.0f,
0.0f, y, 0.0f, 0.0f,
0.0f, 0.0f, z, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
return multiply(scaleMat);
}
If you use the matrix class like this,
float angle_radians = ....;
Vector3 scaleVec{ 0.2f, 0.2f, 0.2f };
Vector3 transVec{ 0.3f, 0.3f, 0.0f };
Vector3 rotateVec{ 0.0f, 0.0f, 1.0f };
Matrix4 model;
model.rotate( angle_radians, rotateVec );
model.translate( transVec );
model.scale( scaleVec );
then the result would look like this:
The function rotate() isn't performing an actual rotation. It only generates a partial rotation matrix and overwrites the original matrix with it.
You need to construct a complete rotation matrix and multiply it with the original matrix.
Matrix4 Matrix4::rotate(const Matrix4& matrix, float angle, const Vector3& axis)
{
    if (axis.x == 0 && axis.y == 0 && axis.z == 0)
        return matrix;

    float s = sin(angle);
    float c = cos(angle);
    float omc = 1.0f - c;

    float x = axis.x;
    float y = axis.y;
    float z = axis.z;

    Matrix4 r;
    r.mElements[0 + 0 * 4] = c + x * x * omc;
    r.mElements[1 + 0 * 4] = x * y * omc - z * s;
    r.mElements[2 + 0 * 4] = z * x * omc + y * s;
    r.mElements[3 + 0 * 4] = 0;
    r.mElements[0 + 1 * 4] = x * y * omc + z * s;
    r.mElements[1 + 1 * 4] = c + y * y * omc;
    r.mElements[2 + 1 * 4] = z * y * omc - x * s;
    r.mElements[3 + 1 * 4] = 0;
    r.mElements[0 + 2 * 4] = x * z * omc - y * s;
    r.mElements[1 + 2 * 4] = y * z * omc + x * s;
    r.mElements[2 + 2 * 4] = c + z * z * omc;
    r.mElements[3 + 2 * 4] = 0;
    r.mElements[0 + 3 * 4] = 0;
    r.mElements[1 + 3 * 4] = 0;
    r.mElements[2 + 3 * 4] = 0;
    r.mElements[3 + 3 * 4] = 1;

    return r * matrix;
}
I understand that both Euler and Quaternion rotation types have their own distinctive quirks; however, the problem I'm having is that (for example) when performing the following rotations on an object:
rotateX = 90.0
rotateY = 90.0
... Oh, hang on a minute... now the X and Z axes are basically the same!
See, what I want is to rotate a cube, say, 90 degrees around X and 90 degrees around Y and still have all axes pointing in their original directions, as opposed to rotating locally.
Any code examples would be ideal - Here is the code I'm currently using:
_model = scale(_scale) *
translate(_position) *
( rotate(_rotation.data[0], 1.0f, 0.0f, 0.0f) *
rotate(_rotation.data[1], 0.0f, 1.0f, 0.0f) *
rotate(_rotation.data[2], 0.0f, 0.0f, 1.0f) );
I have a Math.h that calculates the rotations like so:
template <typename T>
static inline Tmat4<T> rotate(T angle, T x, T y, T z)
{
Tmat4<T> result;
const T x2 = x * x;
const T y2 = y * y;
const T z2 = z * z;
float rads = float(angle) * 0.0174532925f;
const float c = cosf(rads);
const float s = sinf(rads);
const float omc = 1.0f - c;
result[0] = Tvec4<T>(T(x2 * omc + c), T(y * x * omc + z * s), T(x * z * omc - y * s), T(0));
result[1] = Tvec4<T>(T(x * y * omc - z * s), T(y2 * omc + c), T(y * z * omc + x * s), T(0));
result[2] = Tvec4<T>(T(x * z * omc + y * s), T(y * z * omc - x * s), T(z2 * omc + c), T(0));
result[3] = Tvec4<T>(T(0), T(0), T(0), T(1));
return result;
}
I can already rotate a point sprite by 0, 90, 180 and 270 degrees.
Fragment shader
precision lowp float;
uniform sampler2D us_tex;
uniform mat3 um_tex;
void main ()
{
vec2 tex_coords = (um_tex * vec3(gl_PointCoord, 1.0)).xy;
gl_FragColor = texture2D(us_tex, tex_coords);
}
3*3 matrix operations (I know about GLM - it's great; handling the matrices myself is for academic purposes)
typedef GLfloat m3[9]; // 3*3 matrix
#define DEG_TO_RAD(x) (x * M_PI/180.0f)
void ident_m3(m3 res)
{
memset(res, 0, sizeof(m3));
res[0] = res[4] = res[8] = 1.0f;
}
void trans_m3(m3 res, const p2* pos)
{
    ident_m3(res);
    // column-major 3x3: the translation belongs in the third column (elements 6 and 7)
    res[6] = pos->x;
    res[7] = pos->y;
}
void mult_m3(m3 res, const m3 m1, const m3 m2)
{
res[0] = m1[0] * m2[0] + m1[3] * m2[1] + m1[6] * m2[2];
res[1] = m1[1] * m2[0] + m1[4] * m2[1] + m1[7] * m2[2];
res[2] = m1[2] * m2[0] + m1[5] * m2[1] + m1[8] * m2[2];
res[3] = m1[0] * m2[3] + m1[3] * m2[4] + m1[6] * m2[5];
res[4] = m1[1] * m2[3] + m1[4] * m2[4] + m1[7] * m2[5];
res[5] = m1[2] * m2[3] + m1[5] * m2[4] + m1[8] * m2[5];
res[6] = m1[0] * m2[6] + m1[3] * m2[7] + m1[6] * m2[8];
res[7] = m1[1] * m2[6] + m1[4] * m2[7] + m1[7] * m2[8];
res[8] = m1[2] * m2[6] + m1[5] * m2[7] + m1[8] * m2[8];
}
in ParticlesDraw()
m3 r;
rot_m3(r, 90.0f);
...
glUniformMatrix3fv(/*um_tex uniform*/, 1, GL_FALSE, res);
glDrawArrays(GL_POINTS, 0, /*particles count*/);
...
Also, I know how to rotate an ordinary sprite around pos(x,y,z):
Translate to pos(-x,-y,-z)
Rotate
Translate to pos(x,y,z)
Result Matrix = (Rot Matrix * Translate Matrix) * Anti-Translate Matrix.
I want to rotate the point sprite by 45, 32, 64, 72, i.e. any number of degrees (right now it does not rotate correctly; the last frame is at 45 degrees).
But in this case I can translate to the center of the texture (0.5, 0.5), but what would the anti-translate be - (0.0, 0.0)?
I tried something like this, but it does not work, for example for 30 or 45 degree rotations. Also, if my texture is 64*64, do I need to set gl_PointSize to 64.0 for the rotation?
This:
Translate to pos(-x,-y,-z)
Rotate
Translate to pos(x,y,z)
Is not the same thing as this:
Result Matrix = (Rot Matrix * Translate Matrix) * Anti-Traslate Matrix.
If you wish to rotate around the point (x,y,z), then you need to do this:
Matrix T1 = Translate(x, y, z);
Matrix R1 = Rotate();
Matrix T2 = Translate(-x, -y, -z);
Which is the same thing as:
Result Matrix = T1 * R1 * T2
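Applied to the texture-coordinate case, with the m3 helpers from the question (this assumes p2 is a plain struct with x and y members, and that rot_m3 fills in a 2D rotation; texMat is just an illustrative name), the composite around the texture center (0.5, 0.5) could be built like this:
m3 t1, r1, t2, tmp, texMat;
p2 centre     = {  0.5f,  0.5f };
p2 antiCentre = { -0.5f, -0.5f };

trans_m3(t1, &centre);        // T1: move the pivot back after rotating
rot_m3(r1, 45.0f);            // R1: rotate by the desired angle
trans_m3(t2, &antiCentre);    // T2: move the pivot (0.5, 0.5) to the origin

mult_m3(tmp, t1, r1);         // T1 * R1
mult_m3(texMat, tmp, t2);     // Result = (T1 * R1) * T2, as above

glUniformMatrix3fv(/* um_tex location */, 1, GL_FALSE, texMat);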
I have implemented frustum culling and am checking the bounding box for its intersection with the frustum planes. I added the ability to pause frustum updates, which lets me see whether the frustum culling has been working correctly. When I turn around after I have paused it, nothing renders behind me, and to the left and right sides things taper off as well, just as you would expect. Beyond the clip distance (far plane), however, things still render, and I am not sure whether the problem is in my frustum updating code, my bounding box checking code, the matrix I am using, or something else. Even though I set the far distance in the projection matrix to 3000.0f, it still reports that bounding boxes well past that are in the frustum, which isn't the case.
Here is where I create my model-view-projection matrix:
projectionMatrix = glm::perspective(newFOV, 4.0f / 3.0f, 0.1f, 3000.0f);
viewMatrix = glm::mat4(1.0);
viewMatrix = glm::scale(viewMatrix, glm::vec3(1.0, 1.0, -1.0));
viewMatrix = glm::rotate(viewMatrix, anglePitch, glm::vec3(1.0, 0.0, 0.0));
viewMatrix = glm::rotate(viewMatrix, angleYaw, glm::vec3(0.0, 1.0, 0.0));
viewMatrix = glm::translate(viewMatrix, glm::vec3(-x, -y, -z));
modelViewProjectionMatrix = projectionMatrix * viewMatrix;
The reason I scale by -1 in the Z direction is that the levels were designed to be rendered with DirectX, so I reverse the Z direction.
Here is where I update my frustum:
void CFrustum::calculateFrustum()
{
glm::mat4 mat = camera.getModelViewProjectionMatrix();
// Calculate the LEFT side
m_Frustum[LEFT][A] = (mat[0][3]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[1][3]) + (mat[1][0]);
m_Frustum[LEFT][C] = (mat[2][3]) + (mat[2][0]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[3][0]);
// Calculate the RIGHT side
m_Frustum[RIGHT][A] = (mat[0][3]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[1][3]) - (mat[1][0]);
m_Frustum[RIGHT][C] = (mat[2][3]) - (mat[2][0]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[3][0]);
// Calculate the TOP side
m_Frustum[TOP][A] = (mat[0][3]) - (mat[0][1]);
m_Frustum[TOP][B] = (mat[1][3]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[2][3]) - (mat[2][1]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[3][1]);
// Calculate the BOTTOM side
m_Frustum[BOTTOM][A] = (mat[0][3]) + (mat[0][1]);
m_Frustum[BOTTOM][B] = (mat[1][3]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[2][3]) + (mat[2][1]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[3][1]);
// Calculate the FRONT side
m_Frustum[FRONT][A] = (mat[0][3]) + (mat[0][2]);
m_Frustum[FRONT][B] = (mat[1][3]) + (mat[1][2]);
m_Frustum[FRONT][C] = (mat[2][3]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[3][2]);
// Calculate the BACK side
m_Frustum[BACK][A] = (mat[0][3]) - (mat[0][2]);
m_Frustum[BACK][B] = (mat[1][3]) - (mat[1][2]);
m_Frustum[BACK][C] = (mat[2][3]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[3][2]);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
}
And finally, where I check the bounding box:
bool CFrustum::BoxInFrustum( float x, float y, float z, float x2, float y2, float z2)
{
// Go through all of the corners of the box and check them against each plane
// in the frustum. If all of them are behind one of the planes, then it most
// likely is not in the frustum.
for(int i = 0; i < 6; i++ )
{
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
// If we get here, it isn't in the frustum
return false;
}
// Return a true for the box being inside of the frustum
return true;
}
I've noticed a few things, particularly with how you set up the projection matrix. For starters, gluPerspective doesn't return a value, unless you're using some kind of wrapper or weird API. gluLookAt is used more often.
Next, assuming the scale, rotate, and translate functions are intended to change the modelview matrix, you need to reverse their order. OpenGL doesn't actually move objects around; instead it effectively moves the origin around, and renders each object using the new definition of <0,0,0>. Thus you 'move' to where you want it to render, then you rotate the axes as needed, then you stretch out the grid.
As for the clipping problem, you may want to give glClipPlane() a good look over. If everything else mostly works, but there seems to be some rounding error, try changing the near clipping plane in your perspective(,,,) function from 0.1 to 1.0 (smaller values tend to mess with the z-buffer).
I see a lot of unfamiliar syntax, so I think you're using some kind of wrapper; but here are some (Qt) code fragments from my own GL project that I use. Might help, dunno:
//This gets called during resize, as well as once during initialization
void GLWidget::resizeGL(int width, int height) {
int side = qMin(width, height);
padX = (width-side)/2.0;
padY = (height-side)/2.0;
glViewport(padX, padY, side, side);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 1.0, 1.0, 2400.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
//This fragment gets called at the top of every paint event:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLightfv(GL_LIGHT0, GL_POSITION, FV0001);
camMain.stepVars();
gluLookAt(camMain.Pos[0],camMain.Pos[1],camMain.Pos[2],
camMain.Aim[0],camMain.Aim[1],camMain.Aim[2],
0.0,1.0,0.0);
glPolygonMode(GL_FRONT_AND_BACK, drawMode);
//And this fragment represents a typical draw event
void GLWidget::drawFleet(tFleet* tIn) {
if (tIn->firstShip != 0){
glPushMatrix();
glTranslatef(tIn->Pos[0], tIn->Pos[1], tIn->Pos[2]);
glRotatef(tIn->Yaw, 0.0, 1.0, 0.0);
glRotatef(tIn->Pitch,0,0,1);
drawShip(tIn->firstShip);
glPopMatrix();
}
}
I'm working on the assumption that you're newish to GL, so my apologies if I come off as a little pedantic.
I had the same problem.
Given Vinny Rose's answer, I checked the function that creates a normalized plane, and found an error.
This is the corrected version, with the incorrect calculation commented out:
plane plane_normalized(float A, float B, float C, float D) {
// Wrong, this is not a 4D vector
// float nf = 1.0f / sqrtf(A * A + B * B + C * C + D * D);
// Correct
float nf = 1.0f / sqrtf(A * A + B * B + C * C);
return (plane) {{
nf * A,
nf * B,
nf * C,
nf * D
}};
}
My guess is that your NormalizePlane function does something similar.
The point of normalizing is to have a plane in Hessian normal form so that we can do easy half-space tests. If you normalize the plane as you would a four-dimensional vector, the normal direction [A, B, C] is still correct but the offset D is not.
I think you'd get correct results when testing points against the top, bottom, left and right planes because they pass through the origin, and the near plane might be close enough to not notice. (Bounding sphere tests would fail.)
The frustum cull worked as expected for me when I restored the correct normalization.
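For the array layout used in the question, a corrected NormalizePlane could look something like this (a sketch; the normalizing factor comes from A, B and C only, and D is divided by the same factor):
void NormalizePlane(float frustum[6][4], int side)
{
    // length of the plane normal (A, B, C); D is NOT part of the length
    float mag = sqrtf(frustum[side][A] * frustum[side][A] +
                      frustum[side][B] * frustum[side][B] +
                      frustum[side][C] * frustum[side][C]);
    frustum[side][A] /= mag;
    frustum[side][B] /= mag;
    frustum[side][C] /= mag;
    frustum[side][D] /= mag;
}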
Here's what I think is happening: The far plane is getting defined correctly but in my testing the D value of that plane is coming out much too small. So objects are getting accepted as being on the correct side of the far plane because the math is forcing the far plane to actually be much farther away than you want.
Try a different approach: (http://www.lighthouse3d.com/tutorials/view-frustum-culling/geometric-approach-extracting-the-planes/)
float tang = tanf(fov * PI / 360.0f);
float nh = near * tang; // near height
float nw = nh * aspect; // near width
float fh = far * tang; // far height
float fw = fh * aspect; // far width
glm::vec3 p, nc, fc, Xnw, Ynh, Xfw, Yfh;
glm::vec3 ntl, ntr, nbl, nbr, ftl, ftr, fbl, fbr;
// camera position
p = glm::vec3(viewMatrix[3][0], viewMatrix[3][1], viewMatrix[3][2]);
// the left vector
glm::vec3 X = glm::vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
// the up vector
glm::vec3 Y = glm::vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// the look vector
glm::vec3 Z = glm::vec3(viewMatrix[0][2], viewMatrix[1][2], viewMatrix[2][2]);
nc = p - Z * near; // center of the near plane
fc = p - Z * far; // center of the far plane
// the distance to get to the left or right edge of the near plane from nc
Xnw = X * nw;
// the distance to get to top or bottom of the near plane from nc
Ynh = Y * nh;
// the distance to get to the left or right edge of the far plane from fc
Xfw = X * fw;
// the distance to get to top or bottom of the far plane from fc
Yfh = Y * fh;
ntl = nc + Ynh - Xnw; // "near top left"
ntr = nc + Ynh + Xnw; // "near top right" and so on
nbl = nc - Ynh - Xnw;
nbr = nc - Ynh + Xnw;
ftl = fc + Yfh - Xfw;
ftr = fc + Yfh + Xfw;
fbl = fc - Yfh - Xfw;
fbr = fc - Yfh + Xfw;
m_Frustum[TOP] = planeWithPoints(ntr,ntl,ftl);
m_Frustum[BOTTOM] = planeWithPoints(nbl,nbr,fbr);
m_Frustum[LEFT] = planeWithPoints(ntl,nbl,fbl);
m_Frustum[RIGHT] = planeWithPoints(nbr,ntr,fbr);
m_Frustum[FRONT] = planeWithPoints(ntl,ntr,nbr);
m_Frustum[BACK] = planeWithPoints(ftr,ftl,fbl);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
Then planeWithPoints would be something like:
glm::vec4 planeWithPoints(glm::vec3 a, glm::vec3 b, glm::vec3 c){
    double A = a.y * (b.z - c.z) + b.y * (c.z - a.z) + c.y * (a.z - b.z);
    double B = a.z * (b.x - c.x) + b.z * (c.x - a.x) + c.z * (a.x - b.x);
    double C = a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y);
    double D = -(a.x * (b.y * c.z - c.y * b.z) + b.x * (c.y * a.z - a.y * c.z) + c.x * (a.y * b.z - b.y * a.z));
    return glm::vec4(A, B, C, D);
}
I didn't test any of the above. But the original reference is there if you need it.
Previous Answer:
OpenGL and GLSL matrices are stored and accessed in column-major order when the matrix is represented by a 2D array. This is also true with GLM as they follow the GLSL standards.
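For example, with GLM the first index selects a column, so the translation part of a matrix built with glm::translate lives in mat[3]:
glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
// m[col][row]: m[3][0] == 1, m[3][1] == 2, m[3][2] == 3  (the translation column)
// so what reads as "row 4" in math notation is mat[0][3], mat[1][3], mat[2][3], mat[3][3]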
You need to change your frustum creation to the following.
// Calculate the LEFT side (column1 + column4)
m_Frustum[LEFT][A] = (mat[3][0]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[3][1]) + (mat[0][1]);
m_Frustum[LEFT][C] = (mat[3][2]) + (mat[0][2]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[0][3]);
// Calculate the RIGHT side (-column1 + column4)
m_Frustum[RIGHT][A] = (mat[3][0]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[3][1]) - (mat[0][1]);
m_Frustum[RIGHT][C] = (mat[3][2]) - (mat[0][2]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[0][3]);
// Calculate the TOP side (-column2 + column4)
m_Frustum[TOP][A] = (mat[3][0]) - (mat[1][0]);
m_Frustum[TOP][B] = (mat[3][1]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[3][2]) - (mat[1][2]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[1][3]);
// Calculate the BOTTOM side (column2 + column4)
m_Frustum[BOTTOM][A] = (mat[3][0]) + (mat[1][0]);
m_Frustum[BOTTOM][B] = (mat[3][1]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[3][2]) + (mat[1][2]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[1][3]);
// Calculate the FRONT side (column3 + column4)
m_Frustum[FRONT][A] = (mat[3][0]) + (mat[2][0]);
m_Frustum[FRONT][B] = (mat[3][1]) + (mat[2][1]);
m_Frustum[FRONT][C] = (mat[3][2]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[2][3]);
// Calculate the BACK side (-column3 + column4)
m_Frustum[BACK][A] = (mat[3][0]) - (mat[2][0]);
m_Frustum[BACK][B] = (mat[3][1]) - (mat[2][1]);
m_Frustum[BACK][C] = (mat[3][2]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[2][3]);