Related
According to the thread on this page, the equation given for calculating the depth buffer,
F_depth = (1/z - 1/n) / (1/f - 1/n)
is non-linear only because of the perspective divide. (Note that this maps the view-space z coordinate directly to the window coordinate.)
So, as per my understanding:
to convert it to a linear depth buffer, the only thing we would need to do is remove the perspective divide(?) and then apply the glDepthRange(a, b) transformation given here.
In that case, the equation would be like this:
z_linear = z_NDC * W_clip = -(f+n)/(f-n)*z_eye + ( 2fn/(f-n) )
and, with depth range transformation:
z_[0,1] = ( z_linear + 1 ) /2
= ( (f+n)*z_eye - 2fn + f - n )/ ( 2(f-n) )
but on the LearnOpenGL site, for depth testing, this is done:
First we transform the depth value to NDC which is not too difficult:
float ndc = depth * 2.0 - 1.0;
We then take the resulting ndc value and apply the inverse
transformation to retrieve its linear depth value:
float linearDepth = (2.0 * near * far) / (far + near - ndc * (far - near));
How is the non-linear to linear depth-buffer conversion computed (i.e. how is that equation formed)?
Using glm in a right-handed system, I found the following solutions for converting a depth-buffer value in [0, 1] to an eye depth in [near, far].
perspective projection:
float screen_to_eye_depth_perspective(float d, float zNear, float zFar)
{
// [0; 1] -> [-1; 1]
float depth = d * 2.0 - 1.0;
return (2.0 * zNear * zFar) / (zFar + zNear - depth * (zFar - zNear));
}
orthographic projection:
float screen_to_eye_depth_ortho(float d, float zNear, float zFar)
{
// [0; 1] -> [-1; 1]
float depth = d * 2.0 - 1.0;
return (depth * (zFar - zNear) + (zFar + zNear)) / 2.0;
}
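For example, one possible way to use the perspective version on the CPU side (a sketch; the glReadPixels readback and the mouseX/mouseY/windowHeight variables are assumptions, and zNear/zFar must match the projection matrix):
// read the depth-buffer value under the cursor and convert it to an eye-space distance
GLfloat d = 0.0f;
glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);
float eyeDepth = screen_to_eye_depth_perspective(d, zNear, zFar);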
I advise you to test with your own near and far values to check the final result.
n = 10;
f = 100;
// -------------
z = 2 * f * n / (f + n - ndc * (f - n));
z = 2000 / (110 - ndc * 90);
// -------------
ndc = -1;
z = 2000 / (110 + 90) = 10;
ndc = 1;
z = 2000 / (110 - 90) = 100;
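For reference, a sketch of where that perspective formula comes from, assuming the standard right-handed OpenGL projection matrix (z_eye is negative in front of the camera):
// after the projection matrix: z_clip = -(f + n)/(f - n) * z_eye - 2*f*n/(f - n), and w_clip = -z_eye
// after the perspective divide:
ndc = (f + n)/(f - n) + 2*f*n / ((f - n) * z_eye);
// solving for the positive eye-space distance -z_eye:
-z_eye = 2*f*n / (f + n - ndc * (f - n));
// which is exactly what screen_to_eye_depth_perspective computes after mapping d from [0, 1] to [-1, 1]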
I cannot understand the math behind this problem. I am trying to create an FPS camera where I can look around freely with my mouse input.
I am trying to rotate and position my lookat point with 180 degrees of freedom. I understand the easier solution is to glRotate the world to fit my perspective, but I do not want this approach. I am fairly unfamiliar with the trigonometry involved here and cannot figure out how to solve this problem the way I want to...
Here is my attempt so far.
Code to get the mouse coordinates relative to the center of the window, then process them in my camera object:
#define DEG2RAD(a) (a * (M_PI / 180.0f))//convert to radians
static void glutPassiveMotionHandler(int x, int y) {
glf centerX = WinWidth / 2; glf centerY = WinHeight / 2;//get windows origin point
f speed = 0.2f;
f oldX = mouseX; f oldY = mouseY;
mouseX = DEG2RAD(-((x - centerX)));//get distance from 0 and convert to radians
mouseY = DEG2RAD(-((y - centerY)));//get distance from 0 and convert to radians
f diffX = mouseX - oldX; f diffY = mouseY - oldY;//get difference from last frame to this frame
if (mouseX != 0 || mouseY != 0) {
mainCamera->Rotate(diffX, diffY);
}
}
Code to rotate the camera
void Camera::Rotate(f angleX, f angleY) {
Camera::refrence = Vector3D::NormalizeVector(Camera::refrence * cos(angleX)) + (Camera::upVector * sin(angleY));//rot up
Camera::refrence = Vector3D::NormalizeVector((Camera::refrence * cos(angleY)) - (Camera::rightVector * sin(angleX)));//rot side to side
};
Camera::refrence is our lookat point; processing the lookat point is handled as follows:
void Camera::LookAt(void) {
gluLookAt(
Camera::position.x, Camera::position.y, Camera::position.z,
Camera::refrence.x, Camera::refrence.y, Camera::refrence.z,
Camera::upVector.x, Camera::upVector.y, Camera::upVector.z
);
};
The camera is defined by a position point (position), a target point (refrence) and an up-vector (upVector). If you want to change the orientation of the camera, then you have to rotate the direction vector from the position (position) to the target (refrence), rather than the target point itself, by a rotation matrix.
Note: since the two angles have to change an already rotated view, you need a rotation matrix that can rotate vectors pointing in an arbitrary direction.
Write a function which sets up a 3x3 rotation matrix around an arbitrary axis:
void RotateMat(float m[], float angle_radians, float x, float y, float z)
{
float c = cos(angle_radians);
float s = sin(angle_radians);
m[0] = x*x*(1.0f-c)+c; m[1] = x*y*(1.0f-c)-z*s; m[2] = x*z*(1.0f-c)+y*s;
m[3] = y*x*(1.0f-c)+z*s; m[4] = y*y*(1.0f-c)+c; m[5] = y*z*(1.0f-c)-x*s;
m[6] = z*x*(1.0f-c)-y*s; m[7] = z*y*(1.0f-c)+x*s; m[8] = z*z*(1.0f-c)+c;
}
Write a function which rotates a 3 dimensional vector by the matrix:
Vector3D Rotate(float m[], const Vector3D &v)
{
Vector3D rv;
rv.x = m[0] * v.x + m[3] * v.y + m[6] * v.z;
rv.y = m[1] * v.x + m[4] * v.y + m[7] * v.z;
rv.z = m[2] * v.x + m[5] * v.y + m[8] * v.z;
return rv;
}
Calculate the vector from the position to the target:
Vector3D los = Vector3D(refrence.x - position.x, refrence.y - position.y, refrence.z - position.z);
Rotate all the vectors around the z axis of the world by angleX:
float rotX[9];
RotateMat(rotX, angleX, 0.0f, 0.0f, 1.0f);
los = Rotate(rotX, los);
upVector = Rotate(rotX, upVector);
Rotate all the vectors around the current y axis of the view by angleY:
float rotY[9];
Vector3D axisY = Vector3D::NormalizeVector(Vector3D(los.x, los.y, 0.0f)); // RotateMat expects a unit-length axis
RotateMat(rotY, angleY, axisY.x, axisY.y, axisY.z);
los = Rotate(rotY, los);
upVector = Rotate(rotY, upVector);
Calculate the new target point:
refrence = Vector3D(position.x + los.x, position.y + los.y, position.z + los.z);
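Putting these steps together, a complete Camera::Rotate might look like this (a sketch, assuming the f typedef and the Vector3D helpers used in the question):
void Camera::Rotate(f angleX, f angleY) {
    // line of sight from the camera position to the target
    Vector3D los = Vector3D(refrence.x - position.x, refrence.y - position.y, refrence.z - position.z);

    // rotate around the world z axis by angleX
    float rotX[9];
    RotateMat(rotX, angleX, 0.0f, 0.0f, 1.0f);
    los      = ::Rotate(rotX, los);       // the free Rotate(float[], Vector3D) helper from above
    upVector = ::Rotate(rotX, upVector);

    // rotate around the (normalized) horizontal axis of the view by angleY
    Vector3D axisY = Vector3D::NormalizeVector(Vector3D(los.x, los.y, 0.0f));
    float rotY[9];
    RotateMat(rotY, angleY, axisY.x, axisY.y, axisY.z);
    los      = ::Rotate(rotY, los);
    upVector = ::Rotate(rotY, upVector);

    // the new target point
    refrence = Vector3D(position.x + los.x, position.y + los.y, position.z + los.z);
}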
U_Cam_X_angle is the left/right rotation; U_Cam_Y_angle is the up/down rotation.
view_radius is the view distance (zoom) to U_look_point_x, U_look_point_y and U_look_point_z.
It is ALWAYS a negative number, because you are always looking in the positive direction; deeper into the screen is more positive.
This is all in radians.
The last three, eyeX, eyeY and eyeZ, are where the camera is in 3D space.
This code is in VB.net. Find a converter online for VB to C++ or do it manually.
Public Sub set_eyes()
Dim sin_x, sin_y, cos_x, cos_y As Single
sin_x = Sin(U_Cam_X_angle + angle_offset)
cos_x = Cos(U_Cam_X_angle + angle_offset)
cos_y = Cos(U_Cam_Y_angle)
sin_y = Sin(U_Cam_Y_angle)
cam_y = Sin(U_Cam_Y_angle) * view_radius
cam_x = (sin_x - (1 - cos_y) * sin_x) * view_radius
cam_z = (cos_x - (1 - cos_y) * cos_x) * view_radius
Glu.gluLookAt(cam_x + U_look_point_x, cam_y + U_look_point_y, cam_z + U_look_point_z, _
U_look_point_x, U_look_point_y, U_look_point_z, 0.0F, 1.0F, 0.0F)
eyeX = cam_x + U_look_point_x
eyeY = cam_y + U_look_point_y
eyeZ = cam_z + U_look_point_z
End Sub
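A rough C++ translation of the routine above might look like this (a sketch; the variables are assumed to be globals with the same meaning as in the VB.net version):
#include <cmath>
#include <GL/glu.h>

// rough, untested translation of set_eyes() from the VB.net code above
void set_eyes()
{
    float sin_x = std::sin(U_Cam_X_angle + angle_offset);
    float cos_x = std::cos(U_Cam_X_angle + angle_offset);
    float sin_y = std::sin(U_Cam_Y_angle);
    float cos_y = std::cos(U_Cam_Y_angle);

    cam_y = sin_y * view_radius;
    cam_x = (sin_x - (1.0f - cos_y) * sin_x) * view_radius;
    cam_z = (cos_x - (1.0f - cos_y) * cos_x) * view_radius;

    gluLookAt(cam_x + U_look_point_x, cam_y + U_look_point_y, cam_z + U_look_point_z,
              U_look_point_x, U_look_point_y, U_look_point_z,
              0.0f, 1.0f, 0.0f);

    eyeX = cam_x + U_look_point_x;
    eyeY = cam_y + U_look_point_y;
    eyeZ = cam_z + U_look_point_z;
}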
I'm trying to create a 3D viewer for a parallax barrier display, but I'm stuck with camera movements. You can see a parallax barrier display at: displayblocks.org
Multiple views are needed for this effect; this tutorial provides code for calculating the interViewpointDistance depending on the display properties and thus for selecting the head position.
Here are the parts of the code involved in the matrix creation:
for (y = 0; y < viewsCountY; y++) {
for (x = 0; x <= viewsCountX; x++) {
viewMatrix = glm::mat4(1.0f);
// selection of the head Position
float cameraX = (float(x - int(viewsCountX / 2))) * interViewpointDistance;
float cameraY = (float(y - int(viewsCountY / 2))) * interViewpointDistance;
camera.Position = glm::vec3(camera.Position.x + cameraX, camera.Position.y + cameraY, camera.Position.z);
// Move the apex of the frustum to the origin.
viewMatrix = glm::translate(viewMatrix, -camera.Position);
projectionMatrix = get_off_Axis_Projection_Matrix();
// render's stuff
// (...)
// glfwSwapBuffers();
}
}
The following code is the projection matrix function. I use Robert Kooima's paper on generalized perspective projection.
glm::mat4 get_off_Axis_Projection_Matrix() {
glm::vec3 Pe = camera.Position;
// screen corner coordinates (world-space points): lower-left, lower-right, upper-left
glm::vec3 Pa = glm::vec3(-screenSizeX, -screenSizeY, 0.0);
glm::vec3 Pb = glm::vec3( screenSizeX, -screenSizeY, 0.0);
glm::vec3 Pc = glm::vec3(-screenSizeX,  screenSizeY, 0.0);
// Compute an orthonormal basis for the screen.
glm::vec3 Vr = Pb - Pa;
Vr = glm::normalize(Vr);
glm::vec3 Vu = Pc - Pa;
Vu = glm::normalize(Vu);
glm::vec3 Vn = glm::cross(Vr, Vu);
Vn = glm::normalize(Vn);
// Compute the screen corner vectors.
glm::vec3 Va = Pa - Pe;
glm::vec3 Vb = Pb - Pe;
glm::vec3 Vc = Pc - Pe;
//-- Find the distance from the eye to screen plane.
float d = -glm::dot(Va, Vn);
// Find the extent of the perpendicular projection.
float left = glm::dot(Va, Vr) * const_near / d;
float right = glm::dot(Vr, Vb) * const_near / d;
float bottom = glm::dot(Vu, Va) * const_near / d;
float top = glm::dot(Vu, Vc) * const_near / d;
// Load the perpendicular projection.
return glm::frustum(left, right, bottom, top, const_near, const_far + d);
}
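As a quick sanity check (assuming the corner coordinates above and an eye centered in front of the screen at Pe = (0, 0, E)), the off-axis terms vanish and the extents reduce to an ordinary symmetric frustum:
Vr = (1, 0, 0); Vu = (0, 1, 0); Vn = (0, 0, 1); d = E;
left   = -screenSizeX * const_near / E;   right = screenSizeX * const_near / E;
bottom = -screenSizeY * const_near / E;   top   = screenSizeY * const_near / E;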
These two methods work, and I can see that my multiple views are projected correctly.
But I can't manage to make a camera that works normally, like in an FPS, with tilt and pan.
This code, for example, gives me the "head tracking" effect (but with the mouse); it was handy for testing the projections, but it is not what I'm looking for.
float cameraX = (mouseX - windowWidth / 2) / (windowWidth * headDisplacementFactor);
float cameraY = (mouseY - windowHeight / 2) / (windowHeight * headDisplacementFactor);
camera.Position = glm::vec3(cameraX, cameraY, 60.0f);
viewMatrix = glm::translate(viewMatrix, -camera.Position);
My camera class works if the view matrix is created with lookAt. But with the off-axis projection, using lookAt rotates the scene, and the correspondence between the near plane and the screen plane is lost.
I may need to translate/rotate the screen corner coordinates Pa, Pb, Pc used to create the frustum, but I don't know how.
My Question
Can someone please link a good article/tutorial/anything or maybe even explain how to correctly cast a ray from the mouse coordinates to pick objects in 3D?
I already have the Ray and intersection works, now I only need to create the ray from the mouse click.
I would just like to have something that I know should actually work, not something where I am unsure whether it is even correct in the first place; that's why I am asking the professionals here.
State right now
I have a ray class, which actually works and detects intersection if I set the origin and direction to be the same as the camera, so when I move the camera it actually selects the right thing.
Now I would like to actually have 3D picking with the mouse, not camera movement.
I have read so many other questions about this, 2 tutorials, and especially so much different math stuff, since I am really not good at it.
But that didn't help me much, because the people there often use some "unproject" functions, which seem to actually be deprecated and which I have no idea how to use and also don't have access to.
Right now I set the ray origin to the camera position and then try to get the direction of the ray from the calculations in this tutorial.
And it works a little bit, meaning the selection works when the camera is pointed at the object, and also sometimes along the whole y-axis; I have no idea what is happening.
If someone wants to take a look at my code right now:
public Ray2(Camera cam, float mouseX, float mouseY) {
origin = cam.getEye();
float height = 600;
float width = 600;
float aspect = (float) width / (float) height;
float x = (2.0f * mouseX) / width - 1.0f;
float y = 1.0f - (2.0f * mouseX) / height;
float z = 1.0f;
Vector ray_nds = vecmath.vector(x, y, z);
Vector4f clip = new Vector4f(ray_nds.x(), ray_nds.y(), -1.0f, 1.0f);
Matrix proj = vecmath.perspectiveMatrix(60f, aspect, 0.1f, 100f);
proj = proj.invertRigid();
float tempX = proj.get(0, 0) * clip.x + proj.get(1, 0) * clip.y
+ proj.get(2, 0) * clip.z + proj.get(3, 0) * clip.w;
float tempY = proj.get(0, 1) * clip.x + proj.get(1, 1) * clip.y
+ proj.get(2, 1) * clip.z + proj.get(3, 1) * clip.w;
float tempZ = proj.get(0, 2) * clip.x + proj.get(1, 2) * clip.y
+ proj.get(2, 2) * clip.z + proj.get(3, 2) * clip.w;
float tempW = proj.get(0, 3) * clip.x + proj.get(1, 3) * clip.y
+ proj.get(2, 3) * clip.z + proj.get(3, 3) * clip.w;
Vector4f ray_eye = new Vector4f(tempX, tempY, tempZ, tempW);
ray_eye = new Vector4f(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
Matrix view = cam.getTransformation();
view = view.invertRigid();
tempX = view.get(0, 0) * ray_eye.x + view.get(1, 0) * ray_eye.y
+ view.get(2, 0) * ray_eye.z + view.get(3, 0) * ray_eye.w;
tempY = view.get(0, 1) * ray_eye.x + view.get(1, 1) * ray_eye.y
+ view.get(2, 1) * ray_eye.z + view.get(3, 1) * ray_eye.w;
tempZ = view.get(0, 2) * ray_eye.x + view.get(1, 2) * ray_eye.y
+ view.get(2, 2) * ray_eye.z + view.get(3, 2) * ray_eye.w;
tempW = view.get(0, 3) * ray_eye.x + view.get(1, 3) * ray_eye.y
+ view.get(2, 3) * ray_eye.z + view.get(3, 3) * ray_eye.w;
Vector ray_wor = vecmath.vector(tempX, tempY, tempZ);
// don't forget to normalise the vector at some point
ray_wor = ray_wor.normalize();
direction = ray_wor;
}
First, the unProject() method is the way to go. It is not deprecated at all; you can find it implemented in the GLM math library, for example. Here is my implementation of ray-based 3D picking:
// let's check if this renderable's AABB is clicked:
const glm::ivec2& mCoords = _inputManager->GetMouseCoords();
int mouseY = _viewportHeight - mCoords.y;
// unproject twice to build a ray from the near to the far plane
glm::vec3 v0 = glm::unProject(glm::vec3(float(mCoords.x), float(mouseY), 0.0f),_camera->Transform().GetView(),_camera->Transform().GetProjection(), _viewport);
glm::vec3 v1 = glm::unProject(glm::vec3(float(mCoords.x), float(mouseY), 1.0f),_camera->Transform().GetView(),_camera->Transform().GetProjection(), _viewport);
glm::vec3 dir = (v1 - v0);
Ray r(_camera->Transform().GetPosition(),dir);
float ishit;
//construct AABB:
glm::mat4 aabbMatr = glm::translate(glm::mat4(1.0),renderable->Transform().GetPosition());
aabbMatr = glm::scale(aabbMatr,renderable->Transform().GetScale());
// transform the AABB vertices (needed if the original bbox is not axis-aligned, as in this case)
renderable->GetBoundBox()->RecalcVertices(aabbMatr);
//this method makes typical Ray-AABB intersection test:
if(r.CheckIntersectAABB(*renderable->GetBoundBox().get(),&ishit)){
printf("HIT!\n");
}
But I would also suggest you take a look at color-based 3D picking, which is pixel-perfect and even easier to implement.
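For comparison, the color-based picking mentioned here boils down to rendering every pickable object with a unique flat color into an off-screen pass and reading back the pixel under the cursor (a sketch; the renderables container and the DrawFlatColor call are hypothetical placeholders):
// render each pickable object with a flat color that encodes its index, then
// read back the pixel under the mouse; this pass must not be presented on screen
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (unsigned int id = 0; id < renderables.size(); ++id) {
    unsigned int code = id + 1;                      // keep 0 as "nothing hit"
    float r = ((code >>  0) & 0xFF) / 255.0f;
    float g = ((code >>  8) & 0xFF) / 255.0f;
    float b = ((code >> 16) & 0xFF) / 255.0f;
    renderables[id]->DrawFlatColor(r, g, b);         // hypothetical flat-color draw call
}
unsigned char pixel[3];
glReadPixels(mCoords.x, _viewportHeight - mCoords.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
unsigned int picked = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
if (picked != 0)
    printf("picked object %u\n", picked - 1);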
I have implemented frustum culling and am checking the bounding box for its intersection with the frustum planes. I added the ability to pause frustum updates, which lets me see whether the frustum culling has been working correctly. When I turn around after pausing, nothing renders behind me, and on the left and right sides things taper off as well, just as you would expect. Beyond the clip distance (far plane), however, objects still render, and I am not sure whether it is a problem with my frustum updating code, my bounding box checking code, the matrix I am using, or something else. Although I set the far distance in the projection matrix to 3000.0f, it still reports that bounding boxes well past that are inside the frustum, which isn't the case.
Here is where I create my modelview matrix:
projectionMatrix = glm::perspective(newFOV, 4.0f / 3.0f, 0.1f, 3000.0f);
viewMatrix = glm::mat4(1.0);
viewMatrix = glm::scale(viewMatrix, glm::vec3(1.0, 1.0, -1.0));
viewMatrix = glm::rotate(viewMatrix, anglePitch, glm::vec3(1.0, 0.0, 0.0));
viewMatrix = glm::rotate(viewMatrix, angleYaw, glm::vec3(0.0, 1.0, 0.0));
viewMatrix = glm::translate(viewMatrix, glm::vec3(-x, -y, -z));
modelViewProjectionMatrix = projectionMatrix * viewMatrix;
The reason I scale it by -1 in the Z direction is because the levels were designed to be rendered with DirectX so I reverse the Z direction.
Here is where I update my frustum:
void CFrustum::calculateFrustum()
{
glm::mat4 mat = camera.getModelViewProjectionMatrix();
// Calculate the LEFT side
m_Frustum[LEFT][A] = (mat[0][3]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[1][3]) + (mat[1][0]);
m_Frustum[LEFT][C] = (mat[2][3]) + (mat[2][0]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[3][0]);
// Calculate the RIGHT side
m_Frustum[RIGHT][A] = (mat[0][3]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[1][3]) - (mat[1][0]);
m_Frustum[RIGHT][C] = (mat[2][3]) - (mat[2][0]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[3][0]);
// Calculate the TOP side
m_Frustum[TOP][A] = (mat[0][3]) - (mat[0][1]);
m_Frustum[TOP][B] = (mat[1][3]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[2][3]) - (mat[2][1]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[3][1]);
// Calculate the BOTTOM side
m_Frustum[BOTTOM][A] = (mat[0][3]) + (mat[0][1]);
m_Frustum[BOTTOM][B] = (mat[1][3]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[2][3]) + (mat[2][1]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[3][1]);
// Calculate the FRONT side
m_Frustum[FRONT][A] = (mat[0][3]) + (mat[0][2]);
m_Frustum[FRONT][B] = (mat[1][3]) + (mat[1][2]);
m_Frustum[FRONT][C] = (mat[2][3]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[3][2]);
// Calculate the BACK side
m_Frustum[BACK][A] = (mat[0][3]) - (mat[0][2]);
m_Frustum[BACK][B] = (mat[1][3]) - (mat[1][2]);
m_Frustum[BACK][C] = (mat[2][3]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[3][2]);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
}
And finally, where I check the bounding box:
bool CFrustum::BoxInFrustum( float x, float y, float z, float x2, float y2, float z2)
{
// Go through all of the corners of the box and check them against each plane
// in the frustum. If all of them are behind one of the planes, then it most
// likely is not in the frustum.
for(int i = 0; i < 6; i++ )
{
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
// If we get here, it isn't in the frustum
return false;
}
// Return a true for the box being inside of the frustum
return true;
}
I've noticed a few things, particularly with how you set up the projection matrix. For starters, gluProject doesn't return a value, unless you're using some kind of wrapper or weird api. gluLookAt is used more often.
Next, assuming the scale, rotate, and translate functions are intended to change the modelview matrix, you need to reverse their order. OpenGL doesn't actually move objects around; instead it effectively moves the origin around, and renders each object using the new definition of <0,0,0>. Thus you 'move' to where you want it to render, then you rotate the axes as needed, then you stretch out the grid.
As for the clipping problem, you may want to give glClipPlane() a good look over. If everything else mostly works, but there seems to be some rounding error, try changing the near clipping plane in your perspective(,,,) function from 0.1 to 1.0 (smaller values tend to mess with the z-buffer).
I see a lot of unfamiliar syntax, so I think you're using some kind of wrapper; but here are some (Qt) code fragments from my own GL project that I use. Might help, dunno:
//This gets called during resize, as well as once during initialization
void GLWidget::resizeGL(int width, int height) {
int side = qMin(width, height);
padX = (width-side)/2.0;
padY = (height-side)/2.0;
glViewport(padX, padY, side, side);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 1.0, 1.0, 2400.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
//This fragment gets called at the top of every paint event:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLightfv(GL_LIGHT0, GL_POSITION, FV0001);
camMain.stepVars();
gluLookAt(camMain.Pos[0],camMain.Pos[1],camMain.Pos[2],
camMain.Aim[0],camMain.Aim[1],camMain.Aim[2],
0.0,1.0,0.0);
glPolygonMode(GL_FRONT_AND_BACK, drawMode);
//And this fragment represents a typical draw event
void GLWidget::drawFleet(tFleet* tIn) {
if (tIn->firstShip != 0){
glPushMatrix();
glTranslatef(tIn->Pos[0], tIn->Pos[1], tIn->Pos[2]);
glRotatef(tIn->Yaw, 0.0, 1.0, 0.0);
glRotatef(tIn->Pitch,0,0,1);
drawShip(tIn->firstShip);
glPopMatrix();
}
}
I'm working on the assumption that you're newish to GL, so my apologies if I come off as a little pedantic.
I had the same problem.
Given Vinny Rose's answer, I checked the function that creates a normalized plane, and found an error.
This is the corrected version, with the incorrect calculation commented out:
plane plane_normalized(float A, float B, float C, float D) {
// Wrong, this is not a 4D vector
// float nf = 1.0f / sqrtf(A * A + B * B + C * C + D * D);
// Correct
float nf = 1.0f / sqrtf(A * A + B * B + C * C);
return (plane) {{
nf * A,
nf * B,
nf * C,
nf * D
}};
}
My guess is that your NormalizePlane function does something similar.
The point of normalizing is to have a plane in Hessian normal form so that we can do easy half-space tests. If you normalize the plane as you would a four-dimensional vector, the normal direction [A, B, C] is still correct but the offset D is not.
I think you'd get correct results when testing points against the top, bottom, left and right planes because they pass through the origin, and the near plane might be close enough to not notice. (Bounding sphere tests would fail.)
The frustum cull worked as expected for me when I restored the correct normalization.
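For reference, a NormalizePlane that matches the m_Frustum[side][A..D] layout from the question might look like this (a sketch; it assumes A, B, C, D are the indices 0 to 3):
void NormalizePlane(float frustum[6][4], int side)
{
    // divide by the length of the normal (A, B, C) only, not the 4D length
    float magnitude = sqrtf(frustum[side][A] * frustum[side][A] +
                            frustum[side][B] * frustum[side][B] +
                            frustum[side][C] * frustum[side][C]);
    frustum[side][A] /= magnitude;
    frustum[side][B] /= magnitude;
    frustum[side][C] /= magnitude;
    frustum[side][D] /= magnitude;
}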
Here's what I think is happening: The far plane is getting defined correctly but in my testing the D value of that plane is coming out much too small. So objects are getting accepted as being on the correct side of the far plane because the math is forcing the far plane to actually be much farther away than you want.
Try a different approach: (http://www.lighthouse3d.com/tutorials/view-frustum-culling/geometric-approach-extracting-the-planes/)
float tang = tanf(fov * PI / 360.0f);
float nh = near * tang; // near height
float nw = nh * aspect; // near width
float fh = far * tang; // far height
float fw = fh * aspect; // far width
glm::vec3 p, nc, fc, X, Y, Z, Xnw, Ynh, Xfw, Yfh;
glm::vec3 ntl, ntr, nbl, nbr, ftl, ftr, fbl, fbr;
// camera position
p = glm::vec3(viewMatrix[3][0], viewMatrix[3][1], viewMatrix[3][2]);
// the left vector
X = glm::vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
// the up vector
Y = glm::vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// the look vector
Z = glm::vec3(viewMatrix[0][2], viewMatrix[1][2], viewMatrix[2][2]);
nc = p - Z * near; // center of the near plane
fc = p - Z * far; // center of the far plane
// the distance to get to the left or right edge of the near plane from nc
Xnw = X * nw;
// the distance to get to top or bottom of the near plane from nc
Ynh = Y * nh;
// the distance to get to the left or right edge of the far plane from fc
Xfw = X * fw;
// the distance to get to top or bottom of the far plane from fc
Yfh = Y * fh;
ntl = nc + Ynh - Xnw; // "near top left"
ntr = nc + Ynh + Xnw; // "near top right" and so on
nbl = nc - Ynh - Xnw;
nbr = nc - Ynh + Xnw;
ftl = fc + Yfh - Xfw;
ftr = fc + Yfh + Xfw;
fbl = fc - Yfh - Xfw;
fbr = fc - Yfh + Xfw;
m_Frustum[TOP] = planeWithPoints(ntr,ntl,ftl);
m_Frustum[BOTTOM] = planeWithPoints(nbl,nbr,fbr);
m_Frustum[LEFT] = planeWithPoints(ntl,nbl,fbl);
m_Frustum[RIGHT] = planeWithPoints(nbr,ntr,fbr);
m_Frustum[FRONT] = planeWithPoints(ntl,ntr,nbr);
m_Frustum[BACK] = planeWithPoints(ftr,ftl,fbl);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
Then planeWithPoints would be something like:
glm::vec4 planeWithPoints(glm::vec3 a, glm::vec3 b, glm::vec3 c){
double A = a.y * (b.z - c.z) + b.y * (c.z - a.z) + c.y * (a.z - b.z);
double B = a.z * (b.x - c.x) + b.z * (c.x - a.x) + c.z * (a.x - b.x);
double C = a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y);
double D = -(a.x * (b.y * c.z - c.y * b.z) + b.x * (c.y * a.z - a.y * c.z) + c.x * (a.y * b.z - b.y * a.z));
return glm::vec4(A,B,C,D);
}
I didn't test any of the above. But the original reference is there if you need it.
Previous Answer:
OpenGL and GLSL matrices are stored and accessed in column-major order when the matrix is represented by a 2D array. This is also true with GLM as they follow the GLSL standards.
You need to change your frustum creation to the following.
// Calculate the LEFT side (column1 + column4)
m_Frustum[LEFT][A] = (mat[3][0]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[3][1]) + (mat[0][1]);
m_Frustum[LEFT][C] = (mat[3][2]) + (mat[0][2]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[0][3]);
// Calculate the RIGHT side (-column1 + column4)
m_Frustum[RIGHT][A] = (mat[3][0]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[3][1]) - (mat[0][1]);
m_Frustum[RIGHT][C] = (mat[3][2]) - (mat[0][2]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[0][3]);
// Calculate the TOP side (-column2 + column4)
m_Frustum[TOP][A] = (mat[3][0]) - (mat[1][0]);
m_Frustum[TOP][B] = (mat[3][1]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[3][2]) - (mat[1][2]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[1][3]);
// Calculate the BOTTOM side (column2 + column4)
m_Frustum[BOTTOM][A] = (mat[3][0]) + (mat[1][0]);
m_Frustum[BOTTOM][B] = (mat[3][1]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[3][2]) + (mat[1][2]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[1][3]);
// Calculate the FRONT side (column3 + column4)
m_Frustum[FRONT][A] = (mat[3][0]) + (mat[2][0]);
m_Frustum[FRONT][B] = (mat[3][1]) + (mat[2][1]);
m_Frustum[FRONT][C] = (mat[3][2]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[2][3]);
// Calculate the BACK side (-column3 + column4)
m_Frustum[BACK][A] = (mat[3][0]) - (mat[2][0]);
m_Frustum[BACK][B] = (mat[3][1]) - (mat[2][1]);
m_Frustum[BACK][C] = (mat[3][2]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[2][3]);
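Equivalently, since glm is indexed as mat[column][row], each of these planes is just the fourth column plus or minus one of the first three columns, componentwise; a compact sketch of the same assignments:
glm::mat4 mat = projectionMatrix * viewMatrix; // combined clip matrix, as in the question
glm::vec4 left_p   = mat[3] + mat[0];
glm::vec4 right_p  = mat[3] - mat[0];
glm::vec4 bottom_p = mat[3] + mat[1];
glm::vec4 top_p    = mat[3] - mat[1];
glm::vec4 front_p  = mat[3] + mat[2];  // near plane ("FRONT")
glm::vec4 back_p   = mat[3] - mat[2];  // far plane ("BACK")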