OpenGL - converting a non-linear depth buffer to a linear depth buffer - C++

According to the thread on this page, the equation given for calculating the depth buffer value:
F_depth = (1/z - 1/n) / (1/f - 1/n)
is non-linear only because of the perspective divide. (Note that this maps the view-space z coordinate directly to the window-space depth.)
So, as per my understanding, to convert it to a linear depth buffer, the only thing we would do is remove the perspective divide(?) and then apply the glDepthRange(a,b) transformation given here.
In that case, the equation would be:
z_linear = z_NDC * w_clip = -(f+n)/(f-n) * z_eye - 2fn/(f-n)
and, with the depth range transformation:
z_[0,1] = ( z_linear + 1 ) / 2
= ( -(f+n)*z_eye - 2fn + f - n ) / ( 2(f-n) )
but, on the LearnOpenGL site, this is done for depth testing:
First we transform the depth value to NDC, which is not too difficult:
float ndc = depth * 2.0 - 1.0;
We then take the resulting ndc value and apply the inverse
transformation to retrieve its linear depth value:
float linearDepth = (2.0 * near * far) / (far + near - ndc * (far - near));
How is the non-linear to linear depth-buffer conversion being computed (i.e., how is that equation formed)?

Using glm in a right-handed system, I found the following solutions for converting from window-space depth [0, 1] to eye-space depth [near, far].
perspective projection:
float screen_to_eye_depth_perspective(float d, float zNear, float zFar)
{
// [0; 1] -> [-1; 1]
float depth = d * 2.0 - 1.0;
return (2.0 * zNear * zFar) / (zFar + zNear - depth * (zFar - zNear));
}
orthographic projection:
float screen_to_eye_depth_ortho(float d, float zNear, float zFar)
{
// [0; 1] -> [-1; 1]
float depth = d * 2.0 - 1.0;
return (depth * (zFar - zNear) + (zFar + zNear)) / 2.0;
}
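Both functions simply invert the projection's z mapping (assuming the standard OpenGL projection matrices). For the orthographic case the mapping is already linear: z_ndc = -2*z_eye/(f-n) - (f+n)/(f-n), so solving for the positive eye depth gives -z_eye = ( z_ndc*(f-n) + (f+n) ) / 2, which is exactly the expression above. (The perspective case is derived after the worked example below.)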
I advise you to test with your own near and far values to check the final result.
n = 10;
f = 100;
// -------------
z = 2 * f * n / ((f + n) - ndc * (f - n));
z = 2000 / (110 - ndc * 90);
// -------------
ndc = -1;
z = 2000 / (110 + 90) = 10;
ndc = 1;
z = 2000 / (110 - 90) = 100;
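As for how the inverse equation is formed: with the standard OpenGL perspective matrix, z_clip = -(f+n)/(f-n) * z_eye - 2fn/(f-n) and w_clip = -z_eye, so
z_ndc = z_clip / w_clip = ( (f+n)*z_eye + 2fn ) / ( (f-n)*z_eye )
Solving for the positive eye depth -z_eye gives
-z_eye = 2fn / ( (f+n) - z_ndc*(f-n) )
which is exactly the linearDepth expression above. A quick round-trip check in C++ (a sketch; n, f and the test depth d are arbitrary positive values, with the point at z_eye = -d in front of the camera):
#include <cassert>
#include <cmath>
int main()
{
    const float n = 10.0f, f = 100.0f, d = 42.0f; // eye-space depth of a test point
    // Forward mapping: eye depth -> NDC depth, per the projection above.
    float ndc = (-(f + n) * d + 2.0f * f * n) / (-(f - n) * d);
    // Inverse mapping, as in screen_to_eye_depth_perspective (minus the [0,1] remap).
    float eye = (2.0f * n * f) / (f + n - ndc * (f - n));
    assert(std::fabs(eye - d) < 1e-3f); // round-trips back to the original depth
}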

Related

Culling frustum becoming narrow when looking up or down

I have a problem implementing frustum culling in OpenGL.
I implemented plane extraction in world space, using this Lighthouse3d tutorial (the rest of the engine comes from learnopengl.com).
It works well, but only when the camera points toward the horizon. If the camera points up or down, the frustum becomes progressively narrower.
Indeed, the far frustum plane is 1023 * 828 when pointing toward the horizon, but only 17 * 14 when pointing up or down. The same problem occurs for the near frustum plane.
Here is the code that updates the frustum each frame (C++):
tang = (float) tan(glm::radians(45.0f) * 0.5f);
ratio = SCR_WIDTH * 1.0 / SCR_HEIGHT;
// Direction vectors
cameraZ = cameraFront * -1.0f;
cameraRight = glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), cameraZ);
cameraUp = glm::cross(cameraZ, cameraRight);
// Dimensions of near and far frustum planes
Hnear = nearDist * tang; // Hnear = near plane height
Wnear = Hnear * ratio; // Wnear = near plane width
Hfar = farDist * tang; // Hfar = far plane height
Wfar = Hfar * ratio; // Wfar = far plane width
// Distance of near and far planes
fc = cameraPos - cameraZ * farDist; // fc = far center
nc = cameraPos - cameraZ * nearDist; // nc = near center
// Extraction of frustum points
ntl = nc + (cameraUp * Hnear) - (cameraRight * Wnear);
ntr = nc + (cameraUp * Hnear) + (cameraRight * Wnear);
nbl = nc - (cameraUp * Hnear) - (cameraRight * Wnear);
nbr = nc - (cameraUp * Hnear) + (cameraRight * Wnear);
ftl = fc + (cameraUp * Hfar) - (cameraRight * Wfar);
ftr = fc + (cameraUp * Hfar) + (cameraRight * Wfar);
fbl = fc - (cameraUp * Hfar) - (cameraRight * Wfar);
fbr = fc - (cameraUp * Hfar) + (cameraRight * Wfar);
// Computation of the 6 frustum planes
topPlane = {ntr,ntl,ftl};
bottomPlane = {nbl,nbr,fbr};
leftPlane = {ntl,nbl,fbl};
rightPlane = {nbr,ntr,fbr};
nearPlane = {ntl,ntr,nbr};
farPlane = {ftr,ftl,fbl};
The problem was due to cameraUp and cameraRight values not being normalized.
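A minimal sketch of the fix, using the same variables as above: the cross product of two unit vectors has length sin(theta), so as the camera pitches toward vertical, cameraRight and cameraUp shrink and the frustum plane extents shrink with them. Normalizing restores unit length:
cameraRight = glm::normalize(glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), cameraZ));
cameraUp = glm::normalize(glm::cross(cameraZ, cameraRight));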

Arcball camera behaving oddly

I am a bit confused and need help: I tried implementing an arcball camera. The theory I followed is here:
https://www.khronos.org/opengl/wiki/Object_Mouse_Trackball
It "works" except it doesn;t behave like the arcball camera in Renderdoc:
Mine:
Renderdoc
So in mine when you try to rotate too far away from the screen center the rotation seems to be on the opposite direction of where it should be
vec3 ScreenToArcSurface(vec2 pos)
{
const float radius = 0.9f; // Controls the speed
if(pos.x * pos.x + pos.y * pos.y >= (radius * radius) / 2.f - 0.00001)
{
// This is equal to (r^2 / 2) / (sqrt(x^2 + y^2)) since the magnitude of the
// vector is sqrt(x^2 + y^2)
return {pos, (radius * radius / 2.f) / (length(pos))};
}
return {pos.x, pos.y, sqrt(radius * radius - (pos.x * pos.x + pos.y * pos.y))};
}
void ArcballCamera::UpdateCameraAngles(void* ptr, glm::vec2 position, glm::vec2 offset)
{
auto camera = reinterpret_cast<ArcballCamera*>(ptr);
vec3 vb = ScreenToArcSurface(position);
vec3 va = ScreenToArcSurface(position - offset);
float angle = acos(glm::min(1.f, dot(vb, va)));
vec3 axis = cross(va, vb);
camera->rotation *= quat(cos(angle) / 2.f, sin(angle) * axis);
camera->rotation = normalize(camera->rotation);
}
glm::mat4 ArcballCamera::GetViewMatrix()
{
return glm::lookAt(
look_at_point + rotation * (position - look_at_point),
look_at_point,
rotation * up);
}
I don't understand what difference there is between what I implemented and what the khronos link is describing.
I fixed it by multiplying the position by -1.
I don't understand why this fixes the math. The input coordinates are what I expect: the position is normalized from -1 to 1, and top left is (-,-), top right is (+,-), bottom left is (-,+), and bottom right is (+,+).
So I don't know why I need to work in a negated coordinate system for this to work.
The problem is that the website I took this from defines the formula for the OpenGL coordinate system.
However, I am working with Vulkan, where the Y coordinate is flipped, which changes the handedness of the system. Due to this, using the formula as-is picks the wrong half of the sphere.
The correct implementation for Vulkan just needs to negate the z component, i.e.:
vec3 ScreenToArcSurface(vec2 pos)
{
const float radius = 0.9f; // Controls the speed
if(pos.x * pos.x + pos.y * pos.y >= (radius * radius) / 2.f - 0.00001)
{
// This is equal to (r^2 / 2) / (sqrt(x^2 + y^2)) since the magnitude of the
// vector is sqrt(x^2 + y^2)
return {pos, -(radius * radius / 2.f) / (length(pos))};
}
return {pos.x, pos.y, -sqrt(radius * radius - (pos.x * pos.x + pos.y * pos.y))};
}

Terrain Collision issues

I'm trying to implement terrain collision for my height map terrain, and I'm following this. The tutorial is for Java but I'm using C++; the principles are the same, so it shouldn't be a problem.
To start off, we need a function to get the height of the terrain based on the camera's position. worldX and worldZ are the camera's (x, z) position, and heights is a 2D array containing the heights of all the vertices.
float HeightMap::getHeightOfTerrain(float worldX, float worldZ, float heights[][256])
{
//Image is (256 x 256)
float gridLength = 256;
float terrainLength = 256;
float terrainX = worldX;
float terrainZ = worldZ;
float gridSquareLength = terrainLength / ((float)gridLength - 1);
int gridX = (int)std::floor(terrainX / gridSquareLength);
int gridZ = (int)std::floor(terrainZ / gridSquareLength);
//Check if position is on the terrain
if (gridX >= gridLength - 1 || gridZ >= gridLength - 1 || gridX < 0 || gridZ < 0)
{
return 0;
}
//Find out where the player is on the grid square
float xCoord = std::fmod(terrainX, gridSquareLength) / gridSquareLength;
float zCoord = std::fmod(terrainZ, gridSquareLength) / gridSquareLength;
float answer = 0.0;
//Top triangle of a square else the bottom
if (xCoord <= (1 - zCoord))
{
answer = barryCentric(glm::vec3(0, heights[gridX][gridZ], 0),
glm::vec3(1, heights[gridX + 1][gridZ], 0), glm::vec3(0, heights[gridX][gridZ + 1], 1),
glm::vec2(xCoord, zCoord));
}
else
{
answer = barryCentric(glm::vec3(1, heights[gridX + 1][gridZ], 0),
glm::vec3(1, heights[gridX + 1][gridZ + 1], 1), glm::vec3(0, heights[gridX][gridZ + 1], 1),
glm::vec2(xCoord, zCoord));
}
return answer;
}
To find the height of the triangle the camera is currently standing on, we use the barycentric interpolation function barryCentric.
float HeightMap::barryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos)
{
float det = (p2.z - p3.z) * (p1.x - p3.x) + (p3.x - p2.x) * (p1.z - p3.z);
float l1 = ((p2.z - p3.z) * (pos.x - p3.x) + (p3.x - p2.x) * (pos.y - p3.z)) / det;
float l2 = ((p3.z - p1.z) * (pos.x - p3.x) + (p1.x - p3.x) * (pos.y - p3.z)) / det;
float l3 = 1.0f - l1 - l2;
return l1 * p1.y + l2 * p2.y + l3 * p3.y;
}
Then we just have to use the height we have calculated to check for collision during the game:
float terrainHeight = heightMap.getHeightOfTerrain(camera.Position.x, camera.Position.z, heights);
if (camera.Position.y < terrainHeight)
{
camera.Position.y = terrainHeight;
}
Now, according to the tutorial, this should work perfectly fine, but the height is rather off, and in some places it doesn't work at all. I figured it might have something to do with the translation and scaling of the terrain:
glm::mat4 model;
model = glm::translate(model, glm::vec3(0.0f, -0.3f, -15.0f));
model = glm::scale(model, glm::vec3(0.1f, 0.1f, 0.1f));
and that I should multiply the values of the heights array by 0.1, since the scaling does that part for the terrain on the GPU side, but that didn't do the trick.
Note
In the tutorial, the first lines of the getHeightOfTerrain function say
float terrainX = worldX - x;
float terrainZ = worldZ - z;
where x and z are the world position of the terrain. This is done to get the player position relative to the terrain's position. I tried with the values from the translation part, but it doesn't work either. I removed these lines because they didn't seem necessary.
float terrainX = worldX - x;
float terrainZ = worldZ - z;
Those lines are, in fact, very necessary, unless your terrain is always at the origin.
Your code resource (tutorial) assumes that you haven't scaled or rotated the terrain in any way. The x and z variables are the XZ position of the terrain, which takes care of cases where the terrain is translated.
Ideally, you should transform the world position vector from world space to object space (using the inverse of the model matrix you use for the terrain), something like
glm::vec3 localPosition = glm::vec3(glm::inverse(model) * glm::vec4(worldPosition, 1.0f));
And then use localPosition.x and localPosition.z instead of terrainX and terrainZ.
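A sketch of how that might look with the code from the question (assuming the same model matrix used to render the terrain; the sampled height is transformed back to world space, which also accounts for the 0.1 scale):
// Bring the camera position into the terrain's local space.
glm::vec3 localPos = glm::vec3(glm::inverse(model) * glm::vec4(camera.Position, 1.0f));
float localHeight = heightMap.getHeightOfTerrain(localPos.x, localPos.z, heights);
// Transform the sampled surface point back into world space before comparing.
glm::vec3 surface = glm::vec3(model * glm::vec4(localPos.x, localHeight, localPos.z, 1.0f));
if (camera.Position.y < surface.y)
    camera.Position.y = surface.y;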

OpenGL - Frustum not culling polygons beyond far plane

I have implemented frustum culling and am checking the bounding box for its intersection with the frustum planes. I added the ability to pause frustum updates, which lets me verify whether the culling has been working correctly. When I turn around after pausing, nothing renders behind me, and on the left and right sides things taper off just as you would expect. Beyond the clip distance (far plane), however, objects still render, and I am not sure whether the problem is in my frustum updating code, my bounding box checking code, or whether I am using the wrong matrix. Even though the far distance in the projection matrix is 3000.0f, bounding boxes well past that are still reported as inside the frustum, which isn't the case.
Here is where I create my model-view-projection matrix:
projectionMatrix = glm::perspective(newFOV, 4.0f / 3.0f, 0.1f, 3000.0f);
viewMatrix = glm::mat4(1.0);
viewMatrix = glm::scale(viewMatrix, glm::vec3(1.0, 1.0, -1.0));
viewMatrix = glm::rotate(viewMatrix, anglePitch, glm::vec3(1.0, 0.0, 0.0));
viewMatrix = glm::rotate(viewMatrix, angleYaw, glm::vec3(0.0, 1.0, 0.0));
viewMatrix = glm::translate(viewMatrix, glm::vec3(-x, -y, -z));
modelViewProjectionMatrix = projectionMatrix * viewMatrix;
The reason I scale it by -1 in the Z direction is because the levels were designed to be rendered with DirectX so I reverse the Z direction.
Here is where I update my frustum:
void CFrustum::calculateFrustum()
{
glm::mat4 mat = camera.getModelViewProjectionMatrix();
// Calculate the LEFT side
m_Frustum[LEFT][A] = (mat[0][3]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[1][3]) + (mat[1][0]);
m_Frustum[LEFT][C] = (mat[2][3]) + (mat[2][0]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[3][0]);
// Calculate the RIGHT side
m_Frustum[RIGHT][A] = (mat[0][3]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[1][3]) - (mat[1][0]);
m_Frustum[RIGHT][C] = (mat[2][3]) - (mat[2][0]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[3][0]);
// Calculate the TOP side
m_Frustum[TOP][A] = (mat[0][3]) - (mat[0][1]);
m_Frustum[TOP][B] = (mat[1][3]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[2][3]) - (mat[2][1]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[3][1]);
// Calculate the BOTTOM side
m_Frustum[BOTTOM][A] = (mat[0][3]) + (mat[0][1]);
m_Frustum[BOTTOM][B] = (mat[1][3]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[2][3]) + (mat[2][1]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[3][1]);
// Calculate the FRONT side
m_Frustum[FRONT][A] = (mat[0][3]) + (mat[0][2]);
m_Frustum[FRONT][B] = (mat[1][3]) + (mat[1][2]);
m_Frustum[FRONT][C] = (mat[2][3]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[3][2]);
// Calculate the BACK side
m_Frustum[BACK][A] = (mat[0][3]) - (mat[0][2]);
m_Frustum[BACK][B] = (mat[1][3]) - (mat[1][2]);
m_Frustum[BACK][C] = (mat[2][3]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[3][2]);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
}
And finally, where I check the bounding box:
bool CFrustum::BoxInFrustum( float x, float y, float z, float x2, float y2, float z2)
{
// Go through all of the corners of the box and check them against each plane
// in the frustum. If all of them are behind one of the planes, then it most
// likely is not in the frustum.
for(int i = 0; i < 6; i++ )
{
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
if(m_Frustum[i][A] * x2 + m_Frustum[i][B] * y2 + m_Frustum[i][C] * z2 + m_Frustum[i][D] > 0) continue;
// If we get here, it isn't in the frustum
return false;
}
// Return a true for the box being inside of the frustum
return true;
}
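(Each line above is a half-space test: a point is on the inside of a plane when A*x + B*y + C*z + D > 0, provided the plane is normalized as discussed in the answers below. As a hypothetical helper, the per-corner test is just:)
// Signed distance from a point to a normalized plane (positive = inside).
float DistanceToPlane(const float plane[4], float x, float y, float z)
{
    return plane[A] * x + plane[B] * y + plane[C] * z + plane[D];
}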
I've noticed a few things, particularly with how you set up the projection matrix. For starters, gluProject doesn't return a value, unless you're using some kind of wrapper or weird API. gluLookAt is used more often.
Next, assuming the scale, rotate, and translate functions are intended to change the modelview matrix, you need to reverse their order. OpenGL doesn't actually move objects around; instead it effectively moves the origin around, and renders each object using the new definition of <0,0,0>. Thus you 'move' to where you want it to render, then you rotate the axes as needed, then you stretch out the grid.
As for the clipping problem, you may want to give glClipPlane() a good look over. If everything else mostly works, but there seems to be some rounding error, try changing the near clipping plane in your perspective(,,,) function from 0.1 to 1.0 (smaller values tend to mess with the z-buffer).
I see a lot of unfamiliar syntax, so I think you're using some kind of wrapper; but here are some (Qt) code fragments from my own GL project that I use. Might help, dunno:
//This gets called during resize, as well as once during initialization
void GLWidget::resizeGL(int width, int height) {
int side = qMin(width, height);
padX = (width-side)/2.0;
padY = (height-side)/2.0;
glViewport(padX, padY, side, side);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 1.0, 1.0, 2400.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
//This fragment gets called at the top of every paint event:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLightfv(GL_LIGHT0, GL_POSITION, FV0001);
camMain.stepVars();
gluLookAt(camMain.Pos[0],camMain.Pos[1],camMain.Pos[2],
camMain.Aim[0],camMain.Aim[1],camMain.Aim[2],
0.0,1.0,0.0);
glPolygonMode(GL_FRONT_AND_BACK, drawMode);
//And this fragment represents a typical draw event
void GLWidget::drawFleet(tFleet* tIn) {
if (tIn->firstShip != 0){
glPushMatrix();
glTranslatef(tIn->Pos[0], tIn->Pos[1], tIn->Pos[2]);
glRotatef(tIn->Yaw, 0.0, 1.0, 0.0);
glRotatef(tIn->Pitch,0,0,1);
drawShip(tIn->firstShip);
glPopMatrix();
}
}
I'm working on the assumption that you're newish to GL, so my apologies if I come off as a little pedantic.
I had the same problem.
Given Vinny Rose's answer, I checked the function that creates a normalized plane, and found an error.
This is the corrected version, with the incorrect calculation commented out:
plane plane_normalized(float A, float B, float C, float D) {
// Wrong, this is not a 4D vector
// float nf = 1.0f / sqrtf(A * A + B * B + C * C + D * D);
// Correct
float nf = 1.0f / sqrtf(A * A + B * B + C * C);
return (plane) {{
nf * A,
nf * B,
nf * C,
nf * D
}};
}
My guess is that your NormalizePlane function does something similar.
The point of normalizing is to have a plane in Hessian normal form so that we can do easy half-space tests. If you normalize the plane as you would a four-dimensional vector, the normal direction [A, B, C] is still correct but the offset D is not.
I think you'd get correct results when testing points against the top, bottom, left and right planes because they pass through the origin, and the near plane might be close enough to not notice. (Bounding sphere tests would fail.)
The frustum cull worked as expected for me when I restored the correct normalization.
Here's what I think is happening: The far plane is getting defined correctly but in my testing the D value of that plane is coming out much too small. So objects are getting accepted as being on the correct side of the far plane because the math is forcing the far plane to actually be much farther away than you want.
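For reference, a NormalizePlane consistent with that fix might look like this (a sketch against the m_Frustum[side][A..D] layout from the question; the key point is that the magnitude comes from (A, B, C) only, yet all four coefficients are divided by it):
void NormalizePlane(float frustum[6][4], int side)
{
    // Length of the plane normal (A, B, C); D is not part of this magnitude.
    float magnitude = sqrtf(frustum[side][A] * frustum[side][A] +
                            frustum[side][B] * frustum[side][B] +
                            frustum[side][C] * frustum[side][C]);
    frustum[side][A] /= magnitude;
    frustum[side][B] /= magnitude;
    frustum[side][C] /= magnitude;
    frustum[side][D] /= magnitude;
}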
Try a different approach: (http://www.lighthouse3d.com/tutorials/view-frustum-culling/geometric-approach-extracting-the-planes/)
float tang = tanf(fov * PI / 360.0f);
float nh = near * tang; // near height
float nw = nh * aspect; // near width
float fh = far * tang; // far height
float fw = fh * aspect; // far width
glm::vec3 p, nc, fc, X, Y, Z, Xnw, Ynh, Xfw, Yfh;
// camera position
p = glm::vec3(viewMatrix[3][0], viewMatrix[3][1], viewMatrix[3][2]);
// the left vector
X = glm::vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
// the up vector
Y = glm::vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// the look vector
Z = glm::vec3(viewMatrix[0][2], viewMatrix[1][2], viewMatrix[2][2]);
nc = p - Z * near; // center of the near plane
fc = p - Z * far; // center of the far plane
// the distance to get to the left or right edge of the near plane from nc
Xnw = X * nw;
// the distance to get to top or bottom of the near plane from nc
Ynh = Y * nh;
// the distance to get to the left or right edge of the far plane from fc
Xfw = X * fw;
// the distance to get to top or bottom of the far plane from fc
Yfh = Y * fh;
ntl = nc + Ynh - Xnw; // "near top left"
ntr = nc + Ynh + Xnw; // "near top right" and so on
nbl = nc - Ynh - Xnw;
nbr = nc - Ynh + Xnw;
ftl = fc + Yfh - Xfw;
ftr = fc + Yfh + Xfw;
fbl = fc - Yfh - Xfw;
fbr = fc - Yfh + Xfw;
m_Frustum[TOP] = planeWithPoints(ntr,ntl,ftl);
m_Frustum[BOTTOM] = planeWithPoints(nbl,nbr,fbr);
m_Frustum[LEFT] = planeWithPoints(ntl,nbl,fbl);
m_Frustum[RIGHT] = planeWithPoints(nbr,ntr,fbr);
m_Frustum[FRONT] = planeWithPoints(ntl,ntr,nbr);
m_Frustum[BACK] = planeWithPoints(ftr,ftl,fbl);
// Normalize all the sides
NormalizePlane(m_Frustum, LEFT);
NormalizePlane(m_Frustum, RIGHT);
NormalizePlane(m_Frustum, TOP);
NormalizePlane(m_Frustum, BOTTOM);
NormalizePlane(m_Frustum, FRONT);
NormalizePlane(m_Frustum, BACK);
Then planeWithPoints would be something like:
glm::vec4 planeWithPoints(glm::vec3 a, glm::vec3 b, glm::vec3 c){
double A = a.y * (b.z - c.z) + b.y * (c.z - a.z) + c.y * (a.z - b.z);
double B = a.z * (b.x - c.x) + b.z * (c.x - a.x) + c.z * (a.x - b.x);
double C = a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y);
double D = -(a.x * (b.y * c.z - c.y * b.z) + b.x * (c.y * a.z - a.y * c.z) + c.x * (a.y * b.z - b.y * a.z));
return glm::vec4(A,B,C,D);
}
I didn't test any of the above. But the original reference is there if you need it.
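For what it's worth, the same plane can be built more compactly with a cross product; a sketch (planeWithPointsCross is a hypothetical name, and the winding must match the original so the normals point inward):
glm::vec4 planeWithPointsCross(glm::vec3 a, glm::vec3 b, glm::vec3 c){
    glm::vec3 n = glm::cross(b - a, c - a); // plane normal from two edge vectors
    return glm::vec4(n, -glm::dot(n, a));   // Ax + By + Cz + D = 0, with D = -dot(n, a)
}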
Previous Answer:
OpenGL and GLSL matrices are stored and accessed in column-major order when the matrix is represented by a 2D array. This is also true with GLM as they follow the GLSL standards.
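A quick way to convince yourself of the indexing (a standalone sketch): in GLM, mat[c][r] addresses column c, row r, so for example the translation part of a transform lives in mat[3]:
glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
// m[3] is the fourth COLUMN: (1, 2, 3, 1), i.e. the translation,
// and m[3][1] is 2.0f (column 3, row 1), not row 3, column 1.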
You need to change your frustum creation to the following.
// Calculate the LEFT side (column1 + column4)
m_Frustum[LEFT][A] = (mat[3][0]) + (mat[0][0]);
m_Frustum[LEFT][B] = (mat[3][1]) + (mat[0][1]);
m_Frustum[LEFT][C] = (mat[3][2]) + (mat[0][2]);
m_Frustum[LEFT][D] = (mat[3][3]) + (mat[0][3]);
// Calculate the RIGHT side (-column1 + column4)
m_Frustum[RIGHT][A] = (mat[3][0]) - (mat[0][0]);
m_Frustum[RIGHT][B] = (mat[3][1]) - (mat[0][1]);
m_Frustum[RIGHT][C] = (mat[3][2]) - (mat[0][2]);
m_Frustum[RIGHT][D] = (mat[3][3]) - (mat[0][3]);
// Calculate the TOP side (-column2 + column4)
m_Frustum[TOP][A] = (mat[3][0]) - (mat[1][0]);
m_Frustum[TOP][B] = (mat[3][1]) - (mat[1][1]);
m_Frustum[TOP][C] = (mat[3][2]) - (mat[1][2]);
m_Frustum[TOP][D] = (mat[3][3]) - (mat[1][3]);
// Calculate the BOTTOM side (column2 + column4)
m_Frustum[BOTTOM][A] = (mat[3][0]) + (mat[1][0]);
m_Frustum[BOTTOM][B] = (mat[3][1]) + (mat[1][1]);
m_Frustum[BOTTOM][C] = (mat[3][2]) + (mat[1][2]);
m_Frustum[BOTTOM][D] = (mat[3][3]) + (mat[1][3]);
// Calculate the FRONT side (column3 + column4)
m_Frustum[FRONT][A] = (mat[3][0]) + (mat[2][0]);
m_Frustum[FRONT][B] = (mat[3][1]) + (mat[2][1]);
m_Frustum[FRONT][C] = (mat[3][2]) + (mat[2][2]);
m_Frustum[FRONT][D] = (mat[3][3]) + (mat[2][3]);
// Calculate the BACK side (-column3 + column4)
m_Frustum[BACK][A] = (mat[3][0]) - (mat[2][0]);
m_Frustum[BACK][B] = (mat[3][1]) - (mat[2][1]);
m_Frustum[BACK][C] = (mat[3][2]) - (mat[2][2]);
m_Frustum[BACK][D] = (mat[3][3]) - (mat[2][3]);

glm::perspective explanation

I am trying to understand what the following code does:
glm::mat4 Projection = glm::perspective(35.0f, 1.0f, 0.1f, 100.0f);
Does it create a projection matrix? Clips off anything that is not in the user's view?
I wasn't able to find anything on the API page, and the only thing I could find in the pdf on their website was this:
gluPerspective:
glm::mat4 perspective(float fovy, float aspect, float zNear,
float zFar);
glm::dmat4 perspective(
double fovy, double aspect, double zNear,
double zFar);
From GLM_GTC_matrix_transform extension: <glm/gtc/matrix_transform.hpp>
But it doesn't explain the parameters. Maybe I missed something.
It creates a projection matrix, i.e. the matrix that describes the set of linear equations that transforms vectors from eye space into clip space. Matrices really are not black magic. In the case of OpenGL they happen to be a 4-by-4 arrangement of numbers:
X_x Y_x Z_x T_x
X_y Y_y Z_y T_y
X_z Y_z Z_z T_z
X_w Y_w Z_w W_w
You can multiply a 4-vector by a 4×4 matrix:
v' = M * v
v'_x = M_xx * v_x + M_yx * v_y + M_zx * v_z + M_tx * v_w
v'_y = M_xy * v_x + M_yy * v_y + M_zy * v_z + M_ty * v_w
v'_z = M_xz * v_x + M_yz * v_y + M_zz * v_z + M_tz * v_w
v'_w = M_xw * v_x + M_yw * v_y + M_zw * v_z + M_tw * v_w
After reaching clip space (i.e. after the projection step), the primitives are clipped. The vertices resulting from the clipping then undergo the perspective divide, i.e.
v'_x = v_x / v_w
v'_y = v_y / v_w
v'_z = v_z / v_w
( v_w = 1 = v_w / v_w )
And that's it. There's really nothing more going on in all those transformation steps than ordinary matrix-vector multiplication.
Now the cool thing about this is that matrices can be used to describe the relative alignment of a coordinate system within another coordinate system. What the perspective transform does is let the vertices' z-values "slip" into their projected w-values as well. And through the perspective divide, a non-unity w will cause "distortion" of the vertex coordinates: vertices with small z are divided by a small w, so their coordinates "blow up", whereas vertices with large z are "squeezed", which is what causes the perspective effect.
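A small sketch of those steps with GLM (note that modern GLM expects fovy in radians, so the question's 35.0f is converted here):
glm::mat4 Projection = glm::perspective(glm::radians(35.0f), 1.0f, 0.1f, 100.0f);
glm::vec4 eye(0.5f, 0.5f, -10.0f, 1.0f);   // a point 10 units in front of the camera
glm::vec4 clip = Projection * eye;         // clip space; clip.w == -eye.z == 10
glm::vec3 ndc = glm::vec3(clip) / clip.w;  // the perspective divide, into [-1, 1]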
This is a standalone C version of the same function, roughly a copy-paste of the original.
# include <math.h>
# include <stdlib.h>
# include <string.h>
typedef struct s_mat {
float *array;
int width;
int height;
} t_mat;
t_mat *mat_new(int width, int height)
{
t_mat *to_return;
to_return = (t_mat*)malloc(sizeof(t_mat));
to_return->array = malloc(width * height * sizeof(float));
to_return->width = width;
to_return->height = height;
return (to_return);
}
void mat_zero(t_mat *dest)
{
memset(dest->array, 0, dest->width * dest->height * sizeof(float));
}
void mat_set(t_mat *m, int x, int y, float val)
{
if (m == NULL || x > m->width || y > m->height)
return ;
m->array[m->width * (y - 1) + (x - 1)] = val;
}
t_mat *mat_perspective(float angle, float ratio,
float near, float far)
{
t_mat *to_return;
float tan_half_angle;
to_return = mat_new(4, 4);
mat_zero(to_return);
tan_half_angle = tan(angle / 2);
mat_set(to_return, 1, 1, 1 / (ratio * tan_half_angle));
mat_set(to_return, 2, 2, 1 / (tan_half_angle));
mat_set(to_return, 3, 3, -(far + near) / (far - near));
mat_set(to_return, 4, 3, -1);
mat_set(to_return, 3, 4, -(2 * far * near) / (far - near));
return (to_return);
}
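A brief usage sketch (hypothetical; the angle is assumed to be in radians, matching the tan(angle / 2) above, and the caller owns both allocations):
t_mat *proj = mat_perspective(35.0f * 3.1415927f / 180.0f, 1.0f, 0.1f, 100.0f);
/* proj->array now holds the 16 floats, laid out column-major like GLM. */
free(proj->array);
free(proj);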