Get 3D model coordinates from 2D screen coordinates with gluUnProject - C++

I am trying to get the 3D coordinates of my OpenGL model. I found this code in a forum, but I don't understand how the collision is detected.
-(void)receivePoint:(CGPoint)loke
{
GLfloat projectionF[16];
GLfloat modelViewF[16];
GLint viewportI[4];
glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
glGetIntegerv(GL_VIEWPORT, viewportI);
loke.y = (float) viewportI[3] - loke.y;
float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);
float rayx = farPlanex - nearPlanex;
float rayy = farPlaney - nearPlaney;
float rayz = farPlanez - nearPlanez;
float rayLength = sqrtf((rayx*rayx)+(rayy*rayy)+(rayz*rayz));
//normalizing rayVector
rayx /= rayLength;
rayy /= rayLength;
rayz /= rayLength;
float collisionPointx, collisionPointy, collisionPointz;
for (int i = 0; i < 50; i++)
{
collisionPointx = rayx * rayLength/i*50;
collisionPointy = rayy * rayLength/i*50;
collisionPointz = rayz * rayLength/i*50;
}
}
In my opinion, a break condition is missing. When do I find the collisionPoint?
Another question is:
How do I manipulate the texture at this collision point? I think I need the corresponding vertex!?
best regards

That code takes the ray from your near clipping plane to your far clipping plane at the position of your loke, then partitions it into 50 steps and interpolates the possible locations of your point in 3D along this ray. At the exit of the loop, in the original code you posted, collisionPointx, y and z hold the value of the farthest point. There is no "collision" test in that code; you actually need to test your 3D coordinates against the 3D object you want to collide with.
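To make that concrete, here is a minimal sketch (plain C++, not the original Objective-C) of how the loop could be given a break condition by testing each sample against a hypothetical sphere; the sphere's center and radius are made-up parameters standing in for whatever object you actually want to hit:
#include <cmath>
struct Vec3 { float x, y, z; };
// Minimal sketch: march from the near-plane point towards the far-plane point
// in 50 steps and stop at the first sample that falls inside a sphere.
// sphereCenter/sphereRadius are hypothetical stand-ins for your own geometry.
bool findCollisionPoint(Vec3 nearPlane, Vec3 rayDir, float rayLength,
                        Vec3 sphereCenter, float sphereRadius, Vec3 &hit)
{
    const int steps = 50;
    for (int i = 0; i <= steps; ++i)
    {
        float t = rayLength * (float)i / (float)steps; // distance along the ray
        Vec3 p = { nearPlane.x + rayDir.x * t,
                   nearPlane.y + rayDir.y * t,
                   nearPlane.z + rayDir.z * t };
        float dx = p.x - sphereCenter.x;
        float dy = p.y - sphereCenter.y;
        float dz = p.z - sphereCenter.z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) <= sphereRadius)
        {
            hit = p;      // first sample inside the object: the break condition
            return true;
        }
    }
    return false;         // the ray never entered the sphere
}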

Related

calling glm::unproject() correctly, confused

I'm trying to use glm::unproject() to convert my SDL mouse coordinates into a world position vector, on the x/z-plane. Basically I want to figure out which "x/z" coordinate the user clicked on with a mouse.
From other Stack Overflow answers I gathered that I need to call glm::unproject(). I think I'm passing it the wrong arguments, because the values I'm getting back for the world position (printed via std::cerr) aren't the world position values I would expect.
Am I constructing the arguments to glm::unproject() correctly below? Specifically, should I be combining the camera's world position and the view matrix (computed using glm::lookAt) to compute the modelview matrix passed into glm::unproject?
struct Dimensions {
int x, y, w, h;
};
glm::mat4
Camera::view_matrix() const
{
// VIEW matrix is created by looking at some target member
auto const& target = target_->translation;
auto const position_xyz = world_position();
glm::vec3 const UP{0, 1, 0};
return glm::lookAt(position_xyz, target, UP);
}
glm::mat4
Camera::projection_matrix() const
{
auto const fov = glm::radians(90.0f);
return glm::perspective(fov, 4.0f/3.0f, 0.1f, 200.0f);
}
glm::vec3
calculate_worldpos(Camera const& camera, int const mouse_x, int const mouse_y)
{
float const width = 1024.0f, height = 768.0f;
glm::vec4 const viewport = glm::vec4(0.0f, 0.0f, width, height);
glm::mat4 const modelview = camera.view_matrix();
glm::mat4 const projection = camera.projection_matrix();
float z = 0.0;
glm::vec3 screenPos = glm::vec3(mouse_x, height - mouse_y - 1, z);
std::cerr << "screenpos: xyz: '" << glm::to_string(screenPos) << "'\n";
glm::vec3 worldPos = glm::unProject(screenPos, modelview, projection, viewport);
std::cerr << "worldpos: xyz: '" << glm::to_string(worldPos) << "'\n";
return worldPos;
}
In the image below, I have the following setup.
camera lookAt target = (0, 0, 0)
camera world position = (-0.009, 5.107, -0.368)
(mouse_x, mouse_y, mouse_z) = (286, 393, 0)
If you look at the image below, you can see that my mouse is hovering over the world position (3, 0, 0) as shown by the grid. I would expect that calculating the world position of my mouse (as shown in the picture) would return the vector (3, 0, 0). It does not; instead I get the vector (0.049, 5.007, -0.360).
Does anyone see where I might be going wrong? I'm assuming I'm making some kind of incorrect assumption somewhere.
Your assumption is wrong: glm::unproject returns the worldspace coordinates of the input given by an xy-position in pixel coordinates and a z-coordinate storing the depth value. For every pixel on the screen, there is an infinite number of points in worldspace that project to this pixel (all that lie on the ray going from the projection center through this pixel). Which one you want is identified by choosing the depth coordinate, which then results in one specific point on this ray. Choosing z = 0 means that the result will always be a point on the near plane of the camera.
What you are actually looking for is the intersection of this ray (going through the camera position and the calculated point) and the xz-plane (where y=0).
The ray is given by the two points on it (camera position C, near plane point P) as follows:
C + l * (P - C) = (-0.009, 5.107, -0.368) + l * (0.058, -0.100, 0.008), where l is a free variable.
As already said, we are looking for the intersection point (a, 0, b) with the y = 0 plane, so we can formulate the following equation:
(-0.009, 5.107, -0.368) + l * (0.058, -0.100, 0.008) = (a, 0, b)
Solving the y-equation (5.107 + l * -0.1 = 0) for l results in l = 51.07. Plugging this back into the equations for x and z yields:
a = -0.009 + 51.07 * 0.058 = 2.95306
b = -0.368 + 51.07 * 0.008 = 0.04056
Which is close to the expected worldspace position. The difference is most probably due to the fact that you only showed rounded numbers in the question. For accuracy reasons, I would also not calculate a point on the near plane but one on the far plane (z = 1), since the near-plane distance is usually quite small and could lead to numerical issues.
Conclusion: All values supplied are correct, but you were just not calculating what you expected.
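For illustration, here is a small sketch of how the ray / xz-plane intersection could be done with glm; the function name calculate_mouse_worldpos is made up, and it assumes the same Camera class, window size and viewport as in the question:
// Sketch: unproject the cursor at two depths to build a ray, then intersect
// that ray with the y = 0 plane.
glm::vec3
calculate_mouse_worldpos(Camera const& camera, int const mouse_x, int const mouse_y)
{
    float const width = 1024.0f, height = 768.0f;
    glm::vec4 const viewport = glm::vec4(0.0f, 0.0f, width, height);
    glm::mat4 const view = camera.view_matrix();
    glm::mat4 const projection = camera.projection_matrix();
    // Points under the cursor on the near plane (z = 0) and far plane (z = 1).
    glm::vec3 const nearPoint = glm::unProject(
        glm::vec3(mouse_x, height - mouse_y - 1, 0.0f), view, projection, viewport);
    glm::vec3 const farPoint = glm::unProject(
        glm::vec3(mouse_x, height - mouse_y - 1, 1.0f), view, projection, viewport);
    // Ray P(l) = nearPoint + l * dir; solve P(l).y == 0 for l.
    glm::vec3 const dir = farPoint - nearPoint;
    float const l = -nearPoint.y / dir.y; // assumes the ray is not parallel to the plane
    return nearPoint + l * dir;           // the clicked point on the x/z-plane
}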

Why do I have to divide by Z?

I needed to implement 'choosing an object' in a 3D environment, so instead of going with a robust, accurate approach such as raycasting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1);
This is followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. BUT that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added to the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that, once you multiplied your vertex by the accumulated modelViewProjection matrix, you had your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for(uint i = 0; i < objects.size(); i++) {
gameObject* o = objects[i];
o->selected = false;
glm::vec4 trans = accum * glm::vec4(o->location,1);
trans.x /= trans.z;
trans.y /= trans.z;
int clipX = (int)((trans.x+1.f)*640.f);
int clipY = (int)((-trans.y+1.f)*360.f);
float length = glm::distance(glm::vec2(clipX,clipY), glm::vec2(mouseX, mouseY));
if(length<50) {
nearestDistance = trans.z;
nearest = o;
}
}
if(nearest) {
nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question work fine. The 'objects' vector contains only one element for my tests, so the loop doesn't get in the way at all.
I've figured it out. As Mr David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statement:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that the division by w is done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later pipeline'. I was just getting lucky in a sense: because my .z value was so close to my .w value, there was the illusion that it was working.
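For reference, a minimal sketch of the whole world-to-window transform done by hand, including the divide by w (the worldToScreen name and the 1280x720 window size from the earlier snippet are assumptions for illustration):
// Sketch: project a world-space point to pixel coordinates "by hand",
// mirroring what the fixed-function pipeline does after the vertex stage.
glm::vec2 worldToScreen(const glm::vec3 &worldPos,
                        const glm::mat4 &projection, const glm::mat4 &modelView)
{
    glm::vec4 clip = projection * modelView * glm::vec4(worldPos, 1.0f); // clip space
    glm::vec3 ndc = glm::vec3(clip) / clip.w;                            // perspective divide by w
    // Viewport transform: NDC [-1,1] -> pixels, with y flipped so 0 is at the top.
    float screenX = ( ndc.x + 1.0f) * 0.5f * 1280.0f;
    float screenY = (-ndc.y + 1.0f) * 0.5f * 720.0f;
    return glm::vec2(screenX, screenY);
}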
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an iso view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide by Z step, the screen coordinates are equal (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So, things farther away from the view-space origin (camera) are smaller in screen space than things that are closer. I.e., perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, as the projection matrix will contain 1/Z scaling components which do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from 3 dimensions to 2 dimensions, so you are projecting the 3-dimensional world onto 2-dimensional coordinates.
P = (X, Y, Z) in 3D will be q = (x, y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.

Precision issue - viewpoint far from origin - OpenGL C++

I have a camera class for controlling the camera, with the main function:
void PNDCAMERA::renderMatrix()
{
float dttime=getElapsedSeconds();
GetCursorPos(&cmc.p_cursorPos);
ScreenToClient(hWnd, &cmc.p_cursorPos);
double d_horangle=((double)cmc.p_cursorPos.x-(double)cmc.p_origin.x)/(double)screenWidth*PI;
double d_verangle=((double)cmc.p_cursorPos.y-(double)cmc.p_origin.y)/(double)screenHeight*PI;
cmc.horizontalAngle=d_horangle+cmc.d_horangle_prev;
cmc.verticalAngle=d_verangle+cmc.d_verangle_prev;
if(cmc.verticalAngle>PI/2) cmc.verticalAngle=PI/2;
if(cmc.verticalAngle<-PI/2) cmc.verticalAngle=-PI/2;
changevAngle(cmc.verticalAngle);
changehAngle(cmc.horizontalAngle);
rightVector=glm::vec3(sin(horizontalAngle - PI/2.0f),0,cos(horizontalAngle - PI/2.0f));
directionVector=glm::vec3(cos(verticalAngle) * sin(horizontalAngle), sin(verticalAngle), cos(verticalAngle) * cos(horizontalAngle));
upVector=glm::vec3(glm::cross(rightVector,directionVector));
// glm::normalize returns a new vector, so the result must be assigned back
upVector=glm::normalize(upVector);
directionVector=glm::normalize(directionVector);
rightVector=glm::normalize(rightVector);
if(moveForw==true)
{
cameraPosition=cameraPosition+directionVector*(float)C_SPEED*dttime;
}
if(moveBack==true)
{
cameraPosition=cameraPosition-directionVector*(float)C_SPEED*dttime;
}
if(moveRight==true)
{
cameraPosition=cameraPosition+rightVector*(float)C_SPEED*dttime;
}
if(moveLeft==true)
{
cameraPosition=cameraPosition-rightVector*(float)C_SPEED*dttime;
}
glViewport(0,0,screenWidth,screenHeight);
glScissor(0,0,screenWidth,screenHeight);
projection_matrix=glm::perspective(60.0f, float(screenWidth) / float(screenHeight), 1.0f, 40000.0f);
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
gShader->sendUniform("camera_position",cameraPosition.x,cameraPosition.y,cameraPosition.z);
gShader->sendUniform("screen_size",(GLfloat)screenWidth,(GLfloat)screenHeight);
};
It runs smoothly; I can control the angle with my mouse in the X and Y directions, but not around the Z axis (Y is "up" in world space).
In my rendering method I render the terrain grid with one VAO call. The grid itself is a quad at the center (highest LOD), and the others are L-shaped grids scaled by powers of 2. It is always repositioned before the camera, scaled into world space, and displaced by a heightmap.
rcampos.x = round((camera_position.x)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
rcampos.y = 0;
rcampos.z = round((camera_position.z)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
vPos = vec3(uv.x,0,uv.y)*pow(2,LOD)*gridscale + rcampos;
vPos.y = texture(hmap,vPos.xz/horizontal_scale).r*vertical_scale;
The problem:
The camera starts at the origin, at (0,0,0). When I move it far away from that point, the rotation around the X axis becomes discontinuous. It feels as if the mouse cursor were aligned to a grid in screen space, and only the positions at grid points were recorded as cursor movement.
I've also recorded the camera position where it gets pretty noticeable: it's at about 1,000,000 from the origin in the X or Z direction. I've noticed that this 'lag' increases linearly with distance from the origin.
There is also a little Z-fighting at this point (or a similar effect), even if I use a single plane with no displacement, and no planes can overlap. (I use tessellation shaders and render patches.) Black spots appear on the patches. This may be caused by the fog:
float fc = (view_matrix*vec4(Pos,1)).z/(view_matrix*vec4(Pos,1)).w;
float fResult = exp(-pow(0.00005f*fc, 2.0));
fResult = clamp(fResult, 0.0, 1.0);
gl_FragColor = vec4(mix(vec4(0.0,0.0,0.0,0),vec4(n,1),fResult));
Another strange behavior is a little rotation around the Z axis; this increases with distance too, but I don't use this kind of rotation.
Variable formats:
The vertices are in unsigned short format; the indices are in unsigned int format.
The cmc struct is the camera/cursor struct with double variables.
PI and C_SPEED are #define constants.
Additional information:
The grid is created with the above-mentioned ushort array, with a spacing of 1. In the shader I scale it with a constant, then use tessellation to achieve the best performance and the largest view distance.
The final position of a vertex is calculated in the tessellation evaluation shader.
mat4 MVP = projection_matrix*view_matrix*model_matrix;
As you can see, I send my matrices to the shader with the glm library.
+Q:
How could the length of a float (or any other format) cause this kind of 'precision loss', or whatever causes the problem? The view_matrix could be a cause of this, but I still cannot output it on the screen at runtime.
PS: I don't know if this helps, but the view matrix at about the 'lag start location' is
-0.49662 -0.49662 0.863129 0
0.00514956 0.994097 0.108373 0
-0.867953 0.0582648 -0.493217 0
1.62681e+006 16383.3 -290126 1
EDIT
Comparing the camera position and view matrix:
view matrix = 0.967928 0.967928 0.248814 0
-0.00387854 0.988207 0.153079 0
-0.251198 -0.149134 0.956378 0
-2.88212e+006 89517.1 -694945 1
position = 2.9657e+006, 6741.52, -46002
It's a long post so I might not answer everything.
I think it is most likely a precision issue. Let's start with the camera rotation problem. I think the main problem is here:
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
As you said, the position is quite a big number, like 2.9657e+006 - and look what glm does in glm::lookAt:
GLM_FUNC_QUALIFIER detail::tmat4x4<T> lookAt
(
detail::tvec3<T> const & eye,
detail::tvec3<T> const & center,
detail::tvec3<T> const & up
)
{
detail::tvec3<T> f = normalize(center - eye);
detail::tvec3<T> u = normalize(up);
detail::tvec3<T> s = normalize(cross(f, u));
u = cross(s, f);
In your case, eye and center are these big (very similar) numbers, and glm then subtracts them to compute f. This is bad, because if you subtract two almost equal floats, the most significant digits cancel to zero, which leaves you with only the insignificant (most erroneous) digits. And you use this result for further computations, which only amplifies the error. Check this link for some details.
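A tiny standalone illustration of that effect (the numbers are made up, but of the same magnitude as your camera position):
#include <cstdio>
int main()
{
    // eye and center are both huge, like the camera position above, and the
    // true difference between them is only 0.1 units.
    float eye    = 2965700.0f;
    float center = 2965700.1f;   // not representable at this magnitude: rounds to 2965700.0
    float f      = center - eye; // what lookAt computes before normalizing
    printf("f = %f\n", f);       // prints 0.000000 instead of 0.100000 -
                                 // the float's ~7 significant digits were all
                                 // spent on the 2.9657e+006 part
    return 0;
}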
The z-fighting is a similar issue. The Z-buffer is not linear; it has the best resolution near the camera because of the perspective divide. The z-buffer range is set according to your near and far clipping plane values. You always want to have the smallest possible ratio between far and near values (generally far/near should not be greater than 30000). There is a very good explanation of this on the OpenGL wiki, I suggest you read it :)
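To see how quickly the depth resolution is used up, here is a small sketch that evaluates the standard window-space depth formula for a [0,1] depth range, using the near/far values from your glm::perspective call (1 and 40000):
#include <cstdio>
// Sketch: window-space depth written to the z-buffer for a point at eye-space
// distance d, with a standard perspective projection and a [0,1] depth range.
float windowDepth(float d, float n, float f)
{
    return f * (d - n) / (d * (f - n)); // hyperbolic in d
}
int main()
{
    const float n = 1.0f, f = 40000.0f;  // near/far from the question
    const float dists[] = {2.0f, 10.0f, 100.0f, 1000.0f, 20000.0f, 40000.0f};
    for (float d : dists)
        printf("d = %8.1f -> depth = %f\n", d, windowDepth(d, n, f));
    // About half of the [0,1] range is already spent by d = 2 (twice the near
    // plane); almost no precision is left for distant geometry, hence z-fighting.
    return 0;
}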
Back to the camera issue - first, I would consider whether you really need such a huge scene. I don't think so, but if yes, you could try computing your view matrix differently and compute rotation and translation separately, which could help your case. The way I usually handle the camera:
glm::vec3 cameraPos;
glm::vec3 cameraRot;
glm::vec3 cameraPosLag;
glm::vec3 cameraRotLag;
int ox, oy;
const float inertia = 0.08f; //mouse inertia
const float rotateSpeed = 0.2f; //mouse rotate speed (sensitivity)
const float walkSpeed = 0.25f; //walking speed (wasd)
void updateCameraViewMatrix() {
//camera inertia
cameraPosLag += (cameraPos - cameraPosLag) * inertia;
cameraRotLag += (cameraRot - cameraRotLag) * inertia;
// view transform
g_CameraViewMatrix = glm::rotate(glm::mat4(1.0f), cameraRotLag[0], glm::vec3(1.0, 0.0, 0.0));
g_CameraViewMatrix = glm::rotate(g_CameraViewMatrix, cameraRotLag[1], glm::vec3(0.0, 1.0, 0.0));
g_CameraViewMatrix = glm::translate(g_CameraViewMatrix, cameraPosLag);
}
void mousePositionChanged(int x, int y) {
float dx, dy;
dx = (float) (x - ox);
dy = (float) (y - oy);
ox = x;
oy = y;
if (mouseRotationEnabled) {
cameraRot[0] += dy * rotateSpeed;
cameraRot[1] += dx * rotateSpeed;
}
}
void keyboardAction(int key, int action) {
switch (key) {
case 'S':// backwards
cameraPos[0] -= g_CameraViewMatrix[0][2] * walkSpeed;
cameraPos[1] -= g_CameraViewMatrix[1][2] * walkSpeed;
cameraPos[2] -= g_CameraViewMatrix[2][2] * walkSpeed;
break;
...
}
}
This way, the position does not affect your rotation. I should add that I adapted this code from the NVIDIA CUDA samples v5.0 (Smoke Particles); I really like it :)
Hope at least some of this helps.

OpenGL Frustum visibility test with sphere : Far plane not working

I am writing a program to test sphere-frustum intersection and determine the sphere's visibility. I am extracting the frustum's clipping planes in camera space and checking for intersection. It works perfectly for all planes except the far plane, and I cannot figure out why. I keep pulling the camera back, but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough, it eventually determines that it is not visible, but that happens some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
static bool visLastTime = true;
bool visThisTime;
const glm::vec4 modelCenter_worldSpace = glm::vec4(0,0,0,1); //at origin
const int negRadius = -1; //unit sphere
//Get cam space model center
glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;
//---------Get Frustum Planes--------
//extract projection matrix row vectors
//NOTE: since glm stores their mats in column-major order, we extract columns
glm::vec4 rowVec[4];
for(int i = 0; i < 4; i++) {
rowVec[i] = glm::vec4( mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i] );
}
//determine frustum clipping planes (in camera space)
glm::vec4 plane[6];
//NOTE: recall that indices start at zero. So M4 + M3 will be rowVec[3] + rowVec[2]
plane[0] = rowVec[3] + rowVec[2]; //near
plane[1] = rowVec[3] - rowVec[2]; //far
plane[2] = rowVec[3] + rowVec[0]; //left
plane[3] = rowVec[3] - rowVec[0]; //right
plane[4] = rowVec[3] + rowVec[1]; //bottom
plane[5] = rowVec[3] - rowVec[1]; //top
//extend view frustum by 1 all directions; near/far along local z, left/right among local x, bottom/top along local y
// -Ax' -By' -Cz' + D = D'
plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
plane[1][3] += plane[1][2]; // <0,0,-1>
plane[2][3] += plane[2][0]; // <-1,0,0>
plane[3][3] -= plane[3][0]; // <1,0,0>
plane[4][3] += plane[4][1]; // <0,-1,0>
plane[5][3] -= plane[5][1]; // <0,1,0>
//----------Determine Frustum-Sphere intersection--------
//if any of the dot products between model center and frustum plane is less than -r, then the object falls outside the view frustum
visThisTime = true;
for(int i = 0; i < 6; i++) {
if( glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius) ) {
visThisTime = false;
}
}
if(visThisTime != visLastTime) {
printf("Sphere is %s visible\n", (visThisTime) ? "" : "NOT " );
visLastTime = visThisTime;
}
}
The polygons appear to be clipped by the far plane properly, so it seems that the projection matrix is set up correctly, but the calculations make it seem like the plane is much farther out. Perhaps I am not calculating something correctly, or have a fundamental misunderstanding of the calculations that are required?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane equal to the 4th row (or in this case column) of the projection matrix minus the 3rd row of the projection matrix. Then I'm extending the far plane one unit further out (due to the sphere's radius of one; D' = D - C(-1)).
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (but that didn't make it any less wrong). After normalization, everything works properly.
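For reference, a sketch of that missing normalization step, which would slot in right after the plane extraction in the code above (dividing each plane by the length of its normal so that the dot product is a true signed distance):
// Normalize each plane right after extraction, before the radius extension,
// so that dot(plane, point) is a true signed distance in camera space.
for (int i = 0; i < 6; i++) {
    float len = glm::length(glm::vec3(plane[i])); // length of the normal (A,B,C)
    plane[i] /= len;
}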

CPU Ray Casting

I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check whether the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is, what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results; there are gaps in the renderings, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray casting code as I'm not sure it's needed, but I will post it if it'll help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
// Calculate the right Vector
Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));
// Set up the screen plane starting X & Y positions
float screenPlaneX, screenPlaneY;
screenPlaneX = cameraPosition.x() - ( ( WINDOWWIDTH / 2) * rightVector.x());
screenPlaneY = cameraPosition.y() + ( (float)WINDOWHEIGHT / 2);
float deltaX, deltaY;
deltaX = 1;
deltaY = 1;
int currentX, currentY, index = 0;
Vector origin, direction;
origin = cameraPosition;
vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);
currentY = screenPlaneY;
Vector4<int> colour;
for (int y = 0; y < WINDOWHEIGHT; y++)
{
// Set the current pixel along x to be the left most pixel
// on the image plane
currentX = screenPlaneX;
for (int x = 0; x < WINDOWWIDTH; x++)
{
// default colour is black
colour = Vector4<int>(0, 0, 0, 0);
// Cast the ray into the current pixel. Set the length of the ray to be 200
direction = Vector(currentX, currentY, cameraPosition.z() + ( cameraLookAt.z() * 200 ) ) - origin;
direction.normalize();
// Cast the ray against the octree and store the resultant colour in the array
colours[index] = RayCast(origin, direction, rootNode, colour);
// Move to next pixel in the plane
currentX += deltaX;
// increase colour array index position
index++;
}
// Move to next row in the image plane
currentY -= deltaY;
}
// Set the colours for the array
SetFinalImage(colours);
// Load array to 0 0 0 to set the raster position to (0, 0, 0)
GLfloat *v = new GLfloat[3];
v[0] = 0.0f;
v[1] = 0.0f;
v[2] = 0.0f;
// Set the raster position and pass the array of colours to drawPixels
glRasterPos3fv(v);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}
void SetFinalImage(vector<Vector4<int>> colours)
{
// The array is a 2D array, with the first dimension
// set to the size of the window (WINDOW_WIDTH * WINDOW_HEIGHT)
// Second dimension stores the rgba values for each pixel
for (int i = 0; i < colours.size(); i++)
{
finalImage[i][0] = (float)colours[i].r;
finalImage[i][1] = (float)colours[i].g;
finalImage[i][2] = (float)colours[i].b;
finalImage[i][3] = (float)colours[i].a;
}
}
Your pixel drawing code looks okay, but I'm not sure that your RayCasting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, but it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colors are all red, then render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method. A minimal sketch of that test is shown below.
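The sketch (a fragment assuming the same WINDOWWIDTH/WINDOWHEIGHT constants and immediate-mode setup as your Draw function, plus <vector> included):
// Debug sketch: draw a solid red frame through the same glRasterPos/glDrawPixels
// path. If this shows a clean red window, the drawing code is fine and the bug
// is in RayCast / the ray setup.
std::vector<GLfloat> testImage(WINDOWWIDTH * WINDOWHEIGHT * 4);
for (size_t i = 0; i < testImage.size(); i += 4)
{
    testImage[i + 0] = 1.0f; // R
    testImage[i + 1] = 0.0f; // G
    testImage[i + 2] = 0.0f; // B
    testImage[i + 3] = 1.0f; // A
}
glRasterPos3f(0.0f, 0.0f, 0.0f);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, testImage.data());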
Here's a question though... why are you using Vector4<int> when later on you write the image as GL_FLOAT? I'm not seeing any int -> float conversion here...
Your problem may be in your 3DDDA (octree raycaster), specifically with adaptive termination. It results from the quantisation of rays into gridcell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. at a higher z depth), and which thus should be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem -- comment out the adaptive termination line(s) in your 3DDDA and see if you still get the same gap artifacts.