I needed to implement 'choosing an object' (picking) in a 3D environment. Instead of going with a robust, accurate approach such as raycasting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1);
This is followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. But that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added to the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that once you multiplied your vertex by the accumulated modelViewProjection matrix, you had your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for(uint i = 0; i < objects.size(); i++) {
    gameObject* o = objects[i];
    o->selected = false;
    glm::vec4 trans = accum * glm::vec4(o->location, 1);
    trans.x /= trans.z;
    trans.y /= trans.z;
    int clipX = (int)((trans.x+1.f)*640.f);
    int clipY = (int)((-trans.y+1.f)*360.f);
    float length = glm::distance(glm::vec2(clipX, clipY), glm::vec2(mouseX, mouseY));
    if(length < 50) {
        nearestDistance = trans.z;
        nearest = o;
    }
}
if(nearest) {
    nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question work fine. The 'objects' vector contains only one element in my tests, so the loop doesn't get in the way at all.
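(For what it's worth: since nearestDistance is never actually compared in the loop above, a completed nearest-object test would presumably look like the following, keeping my variable names. This is a sketch of the missing piece, not what currently runs.)

//hypothetical completion: keep only the hit closest to the camera
if(length < 50 && trans.z < nearestDistance) {
    nearestDistance = trans.z;
    nearest = o;
}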
I've figured it out. As Mr David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statement:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that the division by w is done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later in the pipeline'. I was just getting lucky, in a sense: because my .z value was so close to my .w value, it created the illusion that it was working.
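For reference, here is a minimal sketch of the whole CPU-side conversion with the w-divide in the right place (assuming the same 1280x720 viewport as in my code above; worldToScreen is just an illustrative name):

#include <glm/glm.hpp>

//illustrative helper, assuming a 1280x720 viewport
glm::vec2 worldToScreen(const glm::vec3& worldPos,
                        const glm::mat4& projection,
                        const glm::mat4& modelView)
{
    glm::vec4 clip = projection * modelView * glm::vec4(worldPos, 1.f);
    //the perspective divide that the pipeline performs after the vertex
    //stage, which is why shaders never do it by hand
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    //NDC [-1,1] to pixel coordinates, with Y flipped for window space
    return glm::vec2((ndc.x + 1.f) * 640.f, (-ndc.y + 1.f) * 360.f);
}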
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an isometric view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide-by-Z step, both end up at the same screen coordinates, (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So things farther away from the view-space origin (the camera) are smaller in screen space than things that are closer, i.e., perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, as the projection matrix will contain 1/Z scaling components which do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from three dimensions to two dimensions: you are projecting the 3D world onto 2D coordinates.
A point P = (X, Y, Z) in 3D becomes q = (x, y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.
Related
The situation is as follows. I am trying to implement a linear voxel search in a GLSL shader for efficient voxel ray tracing. In other words, I have a 3D texture and I am ray tracing through it, but I want to traverse it such that I only ever check each voxel intersected by the ray once.
To this effect I have written a program with the following results:
Not efficient but correct:
The above image was obtained by advancing the ray by a small epsilon multiple times and sampling the texture on each iteration. This produces the correct results, but it's very inefficient.
That would look like:
loop {
    start += direction*0.01;
    sample(start);
}
To make it efficient I decided to instead implement the following lookup function:
float bound(float val)
{
    if(val >= 0)
        return voxel_size;
    return 0;
}

float planeIntersection(vec3 ray, vec3 origin, vec3 n, vec3 q)
{
    n = normalize(n);
    if(dot(ray,n) != 0)
        return (dot(q,n) - dot(n,origin)) / dot(ray,n);
    return -1;
}

vec3 get_voxel(vec3 start, vec3 direction)
{
    direction = normalize(direction);

    vec3 discretized_pos = ivec3((start*1.f/(voxel_size))) * voxel_size;

    vec3 n_x = vec3(sign(direction.x), 0, 0);
    vec3 n_y = vec3(0, sign(direction.y), 0);
    vec3 n_z = vec3(0, 0, sign(direction.z));

    float bound_x, bound_y, bound_z;
    bound_x = bound(direction.x);
    bound_y = bound(direction.y);
    bound_z = bound(direction.z);

    float t_x, t_y, t_z;
    t_x = planeIntersection(direction, start, n_x,
        discretized_pos + vec3(bound_x, 0, 0));
    t_y = planeIntersection(direction, start, n_y,
        discretized_pos + vec3(0, bound_y, 0));
    t_z = planeIntersection(direction, start, n_z,
        discretized_pos + vec3(0, 0, bound_z));

    if(t_x < 0)
        t_x = 1.f/0.f;
    if(t_y < 0)
        t_y = 1.f/0.f;
    if(t_z < 0)
        t_z = 1.f/0.f;

    float t = min(t_x, t_y);
    t = min(t, t_z);

    return start + direction*t;
}
Which produces the following result:
Notice the triangle aliasing on the left side of some surfaces.
It seems this aliasing occurs because some coordinates are not being set to their correct voxel.
For example modifying the truncation part as follows:
vec3 discretized_pos = ivec3((start*1.f/(voxel_size)) - vec3(0.1)) * voxel_size;
Creates:
So it has fixed the issue for some surfaces and caused it for others.
I wanted to know if there is a way in which I can correct this truncation so that this error does not happen.
Update:
I have narrowed down the issue a bit. Observe the following image:
The numbers represent the order in which I expect the boxes to be visited.
As you can see, for some of the points the sampling of the fifth box seems to be omitted.
The following is the sampling code:
vec4 grabVoxel(vec3 pos)
{
    pos *= 1.f/base_voxel_size;
    pos.x /= (width-1);
    pos.y /= (depth-1);
    pos.z /= (height-1);
    vec4 voxelVal = texture(voxel_map, pos);
    return voxelVal;
}
Yep, that is the +/- rounding I was talking about in my comments somewhere in your previous questions related to this. What you need to do is make the step equal to the grid size along one of the axes (and test 3 times: once for |dx| = 1, then |dy| = 1 and lastly |dz| = 1).
You should also create a debug draw of a 2D slice through your map so you can actually see where the hits for a single specific test ray occurred. Then, based on the direction of the ray along each axis, you set the rounding rules separately. Without this you are just blindly patching one case and corrupting the other two...
Now actually look at this (I linked it for you before, but you clearly did not read it):
Wolf and Doom ray casting techniques
especially pay attention to:
On the right it shows you how to compute the ray step (your epsilon). You simply scale the ray direction so that one of its coordinates is +/-1. For simplicity, start with a 2D slice through your map. The red dot is the ray start position; green is the ray step vector for vertical grid line hits and red is for horizontal grid line hits (z will be analogous).
Now you should add a 2D overview of your map through some visible height slice (like the image on the left), and add a dot or marker for each detected intersection, distinguishing between x, y and z hits by color. Do this for a single ray only (I use the center-of-view ray). First handle the view when you look in the X+ direction, then X-, and when that works move on to Y and Z...
In my GLSL volumetric 3D back raytracer (which I also linked for you before), look at these lines:
if (dir.x<0.0) { p+=dir*(((floor(p.x*n)-_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(+1.0,0.0,0.0); }
if (dir.x>0.0) { p+=dir*((( ceil(p.x*n)+_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(-1.0,0.0,0.0); }
if (dir.y<0.0) { p+=dir*(((floor(p.y*n)-_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,+1.0,0.0); }
if (dir.y>0.0) { p+=dir*((( ceil(p.y*n)+_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,-1.0,0.0); }
if (dir.z<0.0) { p+=dir*(((floor(p.z*n)-_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,+1.0); }
if (dir.z>0.0) { p+=dir*((( ceil(p.z*n)+_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,-1.0); }
That is how I did it. As you can see, I use a different rounding/flooring rule for each of the 6 cases. This way you handle each case without corrupting the other ones. The rounding rule depends on a lot of things, like how your coordinate system is offset relative to (0,0,0) and more, so it might be different in your code, but the if conditions should be the same. Also, as you can see, I handle this by offsetting the ray start position a bit instead of having these conditions inside the ray traversal loop castray.
That macro casts a ray and looks for intersections with the grid, and on top of that it actually z-sorts the intersections and uses the first valid one (that is what l, ll are for; no other conditions or combinations of ray results are needed). So my way of dealing with this is to cast a ray for each type of intersection (x, y, z), each starting at the first intersection with the grid along the same axis. You need to take the starting offset into account so that l, ll represent the intersection distance to the real start of the ray, not to the offset one...
It is also a good idea to get this working on the CPU side first and only port it to GLSL once it is 100% correct, as things like this are very hard to debug in GLSL.
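To make that concrete, here is a rough CPU-side sketch (C++ rather than your GLSL; voxel_size and the Vec3 struct are just illustrative names) of the sign-dependent rounding for a single traversal step:

#include <cmath>

struct Vec3 { float x, y, z; };

//distance along the ray to the next grid plane on a single axis
//p = position component, d = direction component
float nextBoundary(float p, float d, float voxel_size)
{
    if (d > 0.0f) return (std::ceil (p/voxel_size)*voxel_size - p)/d; //+axis: ceil
    if (d < 0.0f) return (std::floor(p/voxel_size)*voxel_size - p)/d; //-axis: floor
    return INFINITY; //parallel to this axis, never crosses its planes
}

Vec3 stepToNextVoxel(Vec3 p, Vec3 d, float voxel_size)
{
    float tx = nextBoundary(p.x, d.x, voxel_size);
    float ty = nextBoundary(p.y, d.y, voxel_size);
    float tz = nextBoundary(p.z, d.z, voxel_size);
    float t  = std::fmin(tx, std::fmin(ty, tz));
    //note: t is 0 when p sits exactly on a grid plane; that is the case
    //the _zero offset in my GLSL above works around
    return { p.x + d.x*t, p.y + d.y*t, p.z + d.z*t };
}

Once this visits your numbered boxes in the right order on the CPU, porting the same rounding rules to GLSL is straightforward.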
I'm currently working on an OpenGL framework/engine and, as far as the OpenGL part goes, I'm quite satisfied with my results.
On the other hand, I have a serious problem getting a camera to work.
Moving along the Z axis works well, but as soon as I start to strafe (moving along the X axis), the whole scene gets screwed up.
You can see the result of strafing in the image below.
The left part shows the actual scene, the right part shows the scene resulting from a strafe movement.
My code is the following.
In Constructor:
//Info is a struct with default values
m_projectionMatrix = glm::perspective(
    info.fieldOfView, width / height,  //info.fov = 90
    info.nearPlane, info.farPlane      //info.near = 0.1f, info.far = 1000
);

//m_pos = glm::vec3(0.0f,0.0f,0.0f), info.target = glm::vec3(0.0f, 0.0f, -1.0f)
m_viewMatrix = glm::lookAt(m_pos, m_pos + info.target, Camera::UP);

//combine projection and view
m_vpMatrix = m_projectionMatrix * m_viewMatrix;
In the "Update"-Method I'm currently doing the following:
glm::mat4x4 Camera::GetVPMatrix()
{
    m_vpMatrix = glm::translate(m_vpMatrix, m_deltaPos);
    return m_vpMatrix;
}
As far as I know:
The projection matrix produces the actual perspective view. The view matrix initially translates and rotates the whole scene so that it is centered on the camera?
So why does translating the VP matrix by any Z value work just fine, while translating by an X value doesn't?
I would like to achieve a camera behaviour like this:
The initial cam position is (0,0,0) and the 'center' is e.g. (0,0,-1).
Then, after a translation by X = 5: cam position is (5,0,0) and center is (5,0,-1).
Edit: Additional Question.
Why is the Z coordinate affected by the VP transformation?
Thanks for any help!
Best regards, Christoph.
Okay, I finally got the solution... As you can see, I am using GLM for my matrix math. GLM stores its matrix values in column-major order, and OpenGL wants column-major matrices too. The native C/C++ 2D array layout is row-major, which is why most of the available OpenGL/C++ tutorials state that one should use
glUniformMatrix4fv(location, 1, GL_TRUE, &mat[0][0]);
with GL_TRUE meaning that the matrix should be transposed from row-major to column-major order. Because my matrices are already in column-major format, that makes absolutely no sense...
Changing the above to
glUniformMatrix4fv(location, 1, GL_FALSE, &mat[0][0]);
fixed my problem...
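For reference, the idiomatic way to upload a GLM matrix uses glm::value_ptr, which for a glm::mat4 yields the same pointer as &mat[0][0]:

#include <glm/gtc/type_ptr.hpp>

//GLM matrices are already column-major, which is what OpenGL expects,
//so no transpose is needed
glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(mat));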
Matrix math is not my strong point, so I can't explain why your current approach doesn't work, though I have my suspicions (translating the projection matrix doesn't seem right). The following should allow you to move the camera, though:
// update your camera position
m_pos = new_pos;
// update the view matrix
m_viewMatrix = glm::lookAt(m_pos, m_pos + info.target, Camera::UP);
// update the view-projection matrix, projection matrix never changes
m_vpMatrix = m_projectionMatrix * m_viewMatrix;
I am trying to get the 3D coordinates of my OpenGL model. I found this code in the forum, but I don't understand how the collision is detected.
-(void)receivePoint:(CGPoint)loke
{
    GLfloat projectionF[16];
    GLfloat modelViewF[16];
    GLint viewportI[4];
    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
    glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
    glGetIntegerv(GL_VIEWPORT, viewportI);
    loke.y = (float) viewportI[3] - loke.y;
    float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
    gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
    gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);
    float rayx = farPlanex - nearPlanex;
    float rayy = farPlaney - nearPlaney;
    float rayz = farPlanez - nearPlanez;
    float rayLength = sqrtf((rayx*rayx)+(rayy*rayy)+(rayz*rayz));
    //normalizing rayVector
    rayx /= rayLength;
    rayy /= rayLength;
    rayz /= rayLength;
    float collisionPointx, collisionPointy, collisionPointz;
    for (int i = 0; i < 50; i++)
    {
        collisionPointx = rayx * rayLength/i*50;
        collisionPointy = rayy * rayLength/i*50;
        collisionPointz = rayz * rayLength/i*50;
    }
}
In my opinion there is a break condition missing. When do I find the collision point?
Another question is:
How do I manipulate the texture at this collision point? I think I need the corresponding vertex!?
best regards
That code takes the ray from your near clipping plane to your far clipping plane at the position of your loke, then partitions it into 50 steps and computes all the possible locations of your point in 3D along this ray. At the exit of the loop, in the original code you posted, collisionPointx, y and z hold the value of the farthest point. There is no "collision" test in that code; you actually need to test your 3D coordinates against a 3D object you want to collide with.
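For example, a minimal ray-sphere test could look like this (a sketch using glm, with the sphere's center and radius up to you):

#include <cmath>
#include <glm/glm.hpp>

//minimal ray-sphere intersection sketch; dir must be normalized, as the
//ray in the code above is. Returns the distance to the nearest hit along
//the ray, or a negative value if the ray misses the sphere.
float raySphere(glm::vec3 origin, glm::vec3 dir, glm::vec3 center, float radius)
{
    glm::vec3 L = center - origin;
    float tca = glm::dot(L, dir);            //closest approach along the ray
    float d2  = glm::dot(L, L) - tca*tca;    //squared distance from center to ray
    if (d2 > radius*radius) return -1.0f;    //ray passes outside the sphere
    float thc = std::sqrt(radius*radius - d2);
    float t = tca - thc;                     //nearer of the two hits
    return (t >= 0.0f) ? t : tca + thc;      //origin inside sphere: take far hit
}

The break condition you are missing would then be the first sampled point at or beyond this distance, or you can drop the sampling loop and use the returned t directly.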
I have a camera class for controlling the camera, with the main function:
void PNDCAMERA::renderMatrix()
{
    float dttime = getElapsedSeconds();
    GetCursorPos(&cmc.p_cursorPos);
    ScreenToClient(hWnd, &cmc.p_cursorPos);
    double d_horangle = ((double)cmc.p_cursorPos.x - (double)cmc.p_origin.x)/(double)screenWidth*PI;
    double d_verangle = ((double)cmc.p_cursorPos.y - (double)cmc.p_origin.y)/(double)screenHeight*PI;
    cmc.horizontalAngle = d_horangle + cmc.d_horangle_prev;
    cmc.verticalAngle = d_verangle + cmc.d_verangle_prev;
    if(cmc.verticalAngle > PI/2) cmc.verticalAngle = PI/2;
    if(cmc.verticalAngle < -PI/2) cmc.verticalAngle = -PI/2;
    changevAngle(cmc.verticalAngle);
    changehAngle(cmc.horizontalAngle);
    rightVector = glm::vec3(sin(horizontalAngle - PI/2.0f), 0, cos(horizontalAngle - PI/2.0f));
    directionVector = glm::vec3(cos(verticalAngle) * sin(horizontalAngle), sin(verticalAngle), cos(verticalAngle) * cos(horizontalAngle));
    upVector = glm::vec3(glm::cross(rightVector, directionVector));
    //note: glm::normalize returns the normalized vector, it does not
    //modify its argument, so the results must be assigned back
    upVector = glm::normalize(upVector);
    directionVector = glm::normalize(directionVector);
    rightVector = glm::normalize(rightVector);
    if(moveForw == true)
    {
        cameraPosition = cameraPosition + directionVector*(float)C_SPEED*dttime;
    }
    if(moveBack == true)
    {
        cameraPosition = cameraPosition - directionVector*(float)C_SPEED*dttime;
    }
    if(moveRight == true)
    {
        cameraPosition = cameraPosition + rightVector*(float)C_SPEED*dttime;
    }
    if(moveLeft == true)
    {
        cameraPosition = cameraPosition - rightVector*(float)C_SPEED*dttime;
    }
    glViewport(0, 0, screenWidth, screenHeight);
    glScissor(0, 0, screenWidth, screenHeight);
    projection_matrix = glm::perspective(60.0f, float(screenWidth) / float(screenHeight), 1.0f, 40000.0f);
    view_matrix = glm::lookAt(
        cameraPosition,
        cameraPosition + directionVector,
        upVector);
    gShader->bindShader();
    gShader->sendUniform4x4("model_matrix", glm::value_ptr(model_matrix));
    gShader->sendUniform4x4("view_matrix", glm::value_ptr(view_matrix));
    gShader->sendUniform4x4("projection_matrix", glm::value_ptr(projection_matrix));
    gShader->sendUniform("camera_position", cameraPosition.x, cameraPosition.y, cameraPosition.z);
    gShader->sendUniform("screen_size", (GLfloat)screenWidth, (GLfloat)screenHeight);
};
It runs smoothly; I can control the angle with my mouse in the X and Y directions, but not around the Z axis (Y is "up" in world space).
In my rendering method I render the terrain grid with one VAO call. The grid itself is a quad at the center (highest LOD), and the others are L-shaped grids scaled by powers of 2. It is always repositioned in front of the camera, scaled into world space, and displaced by a heightmap.
rcampos.x = round((camera_position.x)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
rcampos.y = 0;
rcampos.z = round((camera_position.z)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
vPos = vec3(uv.x,0,uv.y)*pow(2,LOD)*gridscale + rcampos;
vPos.y = texture(hmap,vPos.xz/horizontal_scale).r*vertical_scale;
The problem:
The camera starts at the origin, at (0,0,0). When I move it far away from that point, the rotation around the X axis becomes discontinuous. It feels as if the mouse cursor were snapped to a grid in screen space, and only positions at grid points were registered as cursor movement.
I've also recorded the camera position when the effect gets quite noticeable: it's at about 1,000,000 from the origin in the X or Z direction. I've noticed that this 'lag' increases linearly with distance from the origin.
There is also a little Z-fighting at this point (or a similar effect), even if I use a single plane with no displacement, and no planes can overlap. (I use tessellation shaders and render patches.) Black spots appear on the patches. It may be caused by the fog:
float fc = (view_matrix*vec4(Pos,1)).z/(view_matrix*vec4(Pos,1)).w;
float fResult = exp(-pow(0.00005f*fc, 2.0));
fResult = clamp(fResult, 0.0, 1.0);
gl_FragColor = vec4(mix(vec4(0.0,0.0,0.0,0),vec4(n,1),fResult));
Another strange behavior is a slight rotation around the Z axis; this also increases with distance, even though I don't use this kind of rotation.
Variable formats:
The vertices are in unsigned short format, the indices in unsigned int format.
The cmc struct is the camera/cursor struct with double variables.
PI and C_SPEED are #define constants.
Additional information:
The grid is created from the above-mentioned ushort array, with a spacing of 1. In the shader I scale it with a constant, then use tessellation to achieve the best performance and the largest view distance.
The final position of a vertex is calculated in the tessellation evaluation shader.
mat4 MVP = projection_matrix*view_matrix*model_matrix;
As you can see, I send my matrices to the shader with the GLM library.
+Q:
How could the precision of a float (or any other format) cause this kind of 'precision loss', or whatever is causing the problem? The view_matrix could be a cause of this, but I still cannot output it on the screen at runtime.
PS: I don't know if this helps, but the view matrix at about the 'lag start location' is
-0.49662 -0.49662 0.863129 0
0.00514956 0.994097 0.108373 0
-0.867953 0.0582648 -0.493217 0
1.62681e+006 16383.3 -290126 1
EDIT
Comparing the camera position and view matrix:
view matrix = 0.967928 0.967928 0.248814 0
-0.00387854 0.988207 0.153079 0
-0.251198 -0.149134 0.956378 0
-2.88212e+006 89517.1 -694945 1
position = 2.9657e+006, 6741.52, -46002
It's a long post, so I might not answer everything.
I think it is most likely a precision issue. Let's start with the camera rotation problem. I think the main problem is here:
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
As you said, the position is quite a big number, like 2.9657e+006 - and look at what glm does in glm::lookAt:
GLM_FUNC_QUALIFIER detail::tmat4x4<T> lookAt
(
    detail::tvec3<T> const & eye,
    detail::tvec3<T> const & center,
    detail::tvec3<T> const & up
)
{
    detail::tvec3<T> f = normalize(center - eye);
    detail::tvec3<T> u = normalize(up);
    detail::tvec3<T> s = normalize(cross(f, u));
    u = cross(s, f);
In your case, eye and center are these big (very similar) numbers, and glm subtracts them to compute f. This is bad, because if you subtract two almost equal floats, the most significant digits cancel out, leaving you with only the insignificant (most erroneous) digits. You then use this result for further computations, which only amplifies the error. Check this link for some details.
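You can see this with numbers in exactly the range reported in the question (a tiny standalone check):

#include <cstdio>

int main()
{
    //at ~3e6 (the camera distance from the question), adjacent representable
    //floats are 0.25 apart, so small camera/target deltas are rounded away
    float pos   = 2965700.0f;
    float moved = pos + 0.1f;         //rounds back to 2965700.0f
    std::printf("%g\n", moved - pos); //prints 0, not 0.1
    return 0;
}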
The z-fighting is a similar issue. The z-buffer is not linear; it has the best resolution near the camera because of the perspective divide. The z-buffer range is set according to your near and far clipping plane values, and you always want the smallest possible ratio between the far and near values (generally, far/near should not be greater than 30000). There is a very good explanation of this on the OpenGL wiki, I suggest you read it :)
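For reference (assuming the standard OpenGL projection, with n and f the near/far plane distances and z the view-space depth, negative in front of the camera), the depth that lands in the buffer after the perspective divide is

z_ndc = (f + n)/(f - n) + 2fn/((f - n) * z)

which is hyperbolic in z: roughly half of the buffer's precision is spent on the range between n and 2n, which is why reducing the far/near ratio helps so much.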
Back to the camera issue - first, I would consider whether you really need such a huge scene. I don't think so, but if you do, you could try computing your view matrix differently: compute the rotation and translation separately, which could help your case. The way I usually handle the camera:
glm::vec3 cameraPos;
glm::vec3 cameraRot;
glm::vec3 cameraPosLag;
glm::vec3 cameraRotLag;
int ox, oy;
const float inertia = 0.08f; //mouse inertia
const float rotateSpeed = 0.2f; //mouse rotate speed (sensitivity)
const float walkSpeed = 0.25f; //walking speed (wasd)

void updateCameraViewMatrix() {
    //camera inertia
    cameraPosLag += (cameraPos - cameraPosLag) * inertia;
    cameraRotLag += (cameraRot - cameraRotLag) * inertia;
    //view transform
    g_CameraViewMatrix = glm::rotate(glm::mat4(1.0f), cameraRotLag[0], glm::vec3(1.0, 0.0, 0.0));
    g_CameraViewMatrix = glm::rotate(g_CameraViewMatrix, cameraRotLag[1], glm::vec3(0.0, 1.0, 0.0));
    g_CameraViewMatrix = glm::translate(g_CameraViewMatrix, cameraPosLag);
}

void mousePositionChanged(int x, int y) {
    float dx, dy;
    dx = (float)(x - ox);
    dy = (float)(y - oy);
    ox = x;
    oy = y;
    if (mouseRotationEnabled) {
        cameraRot[0] += dy * rotateSpeed;
        cameraRot[1] += dx * rotateSpeed;
    }
}

void keyboardAction(int key, int action) {
    switch (key) {
    case 'S': //backwards
        cameraPos[0] -= g_CameraViewMatrix[0][2] * walkSpeed;
        cameraPos[1] -= g_CameraViewMatrix[1][2] * walkSpeed;
        cameraPos[2] -= g_CameraViewMatrix[2][2] * walkSpeed;
        break;
    ...
    }
}
This way, the position does not affect your rotation. I should add that I adapted this code from the NVIDIA CUDA samples v5.0 (Smoke Particles); I really like it :)
Hope at least some of this helps.
I am writing a program to test sphere-frustum intersection and determine the sphere's visibility. I extract the frustum's clipping planes in camera space and check for intersection. It works perfectly for all planes except the far plane and I cannot figure out why. I keep pulling the camera back, but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough it eventually determines that it is not visible, but only some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
    static bool visLastTime = true;
    bool visThisTime;
    const glm::vec4 modelCenter_worldSpace = glm::vec4(0, 0, 0, 1); //at origin
    const int negRadius = -1; //unit sphere

    //Get cam space model center
    glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;

    //---------Get Frustum Planes--------
    //extract projection matrix row vectors
    //NOTE: since glm stores its mats in column-major order, we extract columns
    glm::vec4 rowVec[4];
    for(int i = 0; i < 4; i++) {
        rowVec[i] = glm::vec4( mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i] );
    }

    //determine frustum clipping planes (in camera space)
    glm::vec4 plane[6];
    //NOTE: recall that indices start at zero. So M4 + M3 will be rowVec[3] + rowVec[2]
    plane[0] = rowVec[3] + rowVec[2]; //near
    plane[1] = rowVec[3] - rowVec[2]; //far
    plane[2] = rowVec[3] + rowVec[0]; //left
    plane[3] = rowVec[3] - rowVec[0]; //right
    plane[4] = rowVec[3] + rowVec[1]; //bottom
    plane[5] = rowVec[3] - rowVec[1]; //top

    //extend view frustum by 1 in all directions; near/far along local z, left/right along local x, bottom/top along local y
    // -Ax' -By' -Cz' + D = D'
    plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
    plane[1][3] += plane[1][2]; // <0,0,-1>
    plane[2][3] += plane[2][0]; // <-1,0,0>
    plane[3][3] -= plane[3][0]; // <1,0,0>
    plane[4][3] += plane[4][1]; // <0,-1,0>
    plane[5][3] -= plane[5][1]; // <0,1,0>

    //----------Determine Frustum-Sphere intersection--------
    //if any of the dot products between the model center and a frustum plane
    //is less than -r, then the object falls outside the view frustum
    visThisTime = true;
    for(int i = 0; i < 6; i++) {
        if( glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius) ) {
            visThisTime = false;
        }
    }

    if(visThisTime != visLastTime) {
        printf("Sphere is %s visible\n", (visThisTime) ? "" : "NOT " );
        visLastTime = visThisTime;
    }
}
The polygons appear to be clipped by the far plane properly, so it seems that the projection matrix is set up correctly, but the calculations make it seem as if the plane were much farther out. Perhaps I am not calculating something correctly, or have a fundamental misunderstanding of the required calculations?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane equal to the 4th row (or in this case, column) of the projection matrix minus the 3rd row of the projection matrix. Then I'm extending the far plane one unit further out (due to the sphere's radius of one; D' = D - C(-1)).
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (which didn't make it any less wrong). After normalization, everything works properly.
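In code, the fix is one line per plane, applied right after the planes are extracted and before the dot-product test:

//divide the whole plane equation by the length of its normal so that
//dot(plane, point) yields a true signed distance comparable to the radius
for(int i = 0; i < 6; i++) {
    plane[i] /= glm::length(glm::vec3(plane[i]));
}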