Blank screen after compiling OpenGL - C++

I am really new to OpenGL and I am trying to just make a surface from two triangles. I don't know where I am going wrong with this code. I know that all the positions and colors are reaching the Triangle class and that the triangles are being constructed, but nothing is being displayed. Can someone help?
I tried to get just the output from the Triangle class, but that doesn't seem to be working either. I don't think there's anything wrong with the way I am calling the Display function.
Code:
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <iostream>
#include <vector>

using namespace std;

class Triangle
{
public:
    float position[9], color[3];

    Triangle() {}

    Triangle(float position_t[], float color_t[])
    {
        for (int i = 0; i < 9; i++)
            position[i] = position_t[i];
        for (int i = 0; i < 3; i++)
            color[i] = color_t[i];
    }

    void makeTriangle()
    {
        glBegin(GL_TRIANGLES);
        glColor3f(color[0], color[1], color[2]); glVertex3f(position[0], position[1], position[2]);
        glColor3f(color[0], color[1], color[2]); glVertex3f(position[3], position[4], position[5]);
        glColor3f(color[0], color[1], color[2]); glVertex3f(position[6], position[7], position[8]);
        glEnd();
    }
};

class Mesh
{
public:
    /*float center[3],position[9],color[3];
    float size;*/
    vector<Triangle> elements;
    float center[3], position[9], color[3];
    float size;

    Mesh() {}

    Mesh(float center_in[3], float color_in[3])
    {
        for (int i = 0; i < 3; i++)
        {
            color[i] = color_in[i];
            center[i] = center_in[i];
        }
    }

    void getPositions()
    {
        position[0] = 1;  position[1] = 1;  position[2] = 1;
        position[3] = -1; position[4] = -1; position[5] = 1;
        position[6] = 1;  position[7] = -1; position[8] = 1;
    }

    void getColor()
    {
        color[0] = 1; color[1] = 0; color[2] = 0;
    }

    static Mesh makeMesh()
    {
        Mesh a;
        a.elements.resize(2);
        a.getPositions();
        a.getColor();
        Triangle T(a.position, a.color);
        a.elements[0] = T;
        //Triangle O(2);
        //a.elements[1] = 0;
        return a;
    }
};

void render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    Mesh a;
    a.elements.resize(2);
    a.getPositions();
    a.getColor();
    Triangle T(a.position, a.color);
    //vector<Mesh> m;
    //m.push_back(Mesh::makeMesh());
    glPushMatrix();
    T.makeTriangle();
    glPopMatrix();
    glFlush();
    glutSwapBuffers();
    glutPostRedisplay();
}
Full Code: http://pastebin.com/xa3B7166

As I suggested in the comments, you are not calling gluLookAt(). Everything is being drawn, but you are just not looking at it!
Docs: https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
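For example, a minimal sketch of setting up the camera before drawing (the projection values and eye position here are illustrative, not taken from your code):

    // Sketch: pull the "camera" back along +Z so geometry near the origin is in front of it.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 0.1, 100.0);  // fovy, aspect, near, far

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,   // eye position
              0.0, 0.0, 0.0,   // look-at point
              0.0, 1.0, 0.0);  // up vector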

Your code does not specify any transformations. Therefore, your coordinates need to be within the default view volume, which is [-1, 1] in all coordinate directions.
Or more technically, the model/view/projection transformations (or all the transformations applied in your vertex shader if you use the programmable pipeline) transform the coordinates into the clip coordinate space, and after perspective division into the normalized device coordinate (aka NDC) space. The range of the NDC space is [-1, 1] for all coordinates.
If you don't apply any transformations, like is the case in your code, your original coordinates already have to be in NDC space.
With your current coordinates:
position[0] = 1;position[1] = 1; position[2] = 1;
position[3] = -1;position[4] = -1; position[5] = 1;
position[6] = 1;position[7] = -1; position[8] = 1;
all the z-coordinates have values of 1, which means that the whole triangle is right on the boundary of the clip volume. To make it visible, you can simply set the z-coordinates to 0:
position[0] = 1;position[1] = 1; position[2] = 0;
position[3] = -1;position[4] = -1; position[5] = 0;
position[6] = 1;position[7] = -1; position[8] = 0;
This centers it within the NDC space in z-direction, with the vertices being on 3 of the corners in the xy-plane. You will therefore see half of your window covered by the triangle, cutting it in half along the diagonal.
It's of course common in OpenGL to have the original coordinates in a different coordinate space, and then apply transformations to place them within the view volume.
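For instance, a fixed-function sketch of that idea (the ranges are illustrative): if your model coordinates spanned [-10, 10], an orthographic projection could map them into the NDC cube:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, -10.0, 10.0);  // left, right, bottom, top, near, far

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();  // model/view transformations would go here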
You're probably already aware of this, but I thought I'd mention it anyway: If you're just starting to learn OpenGL, I would suggest that you learn what people often call "modern OpenGL". This includes the OpenGL Core Profile, or OpenGL ES 2.0 or later. The calls you are using now are mostly deprecated in newer versions of OpenGL, and not available anymore in the Core Profile and ES. The initial hurdle is somewhat higher for "modern OpenGL", particularly since you have to write your own shaders, but you will get on the path to acquiring knowledge that is still current.

Related

How can I draw a triangle on the osgEarth with the OSG API

I want to draw a triangle on the earth.
If I draw the triangle with the class osgEarth::Features::Feature, there is no problem.
For example:
void DrawGeometryByFeature(ListVec3d& vecList, std::vector<unsigned int>& lstIndices)
{
    osgEarth::Symbology::Style shapeStyle;
    shapeStyle.getOrCreate<osgEarth::Symbology::PolygonSymbol>()->fill()->color() = osgEarth::Symbology::Color::Green;

    _polyFeature = new osgEarth::Features::Feature(new osgEarth::Symbology::MultiGeometry, s_mapNode->getMapSRS(), shapeStyle);
    _polyNode = new osgEarth::Annotation::FeatureNode(s_mapNode, _polyFeature);

    osgEarth::Symbology::MultiGeometry* pGeometry = (MultiGeometry*)_polyNode->getFeature()->getGeometry();
    pGeometry->clear();
    _polyNode->setStyle(shapeStyle);

    int index = 0;
    for (std::vector<unsigned int>::iterator iit = lstIndices.begin();
         iit != lstIndices.end(); iit++) {
        index++;
        if ((index + 1) % 3 == 0) {
            osgEarth::Symbology::Geometry* polygen = new osgEarth::Symbology::Geometry();
            polygen->push_back(vecList[lstIndices[index - 2]]);
            polygen->push_back(vecList[lstIndices[index - 1]]);
            polygen->push_back(vecList[lstIndices[index]]);
            pGeometry->add(polygen);
        }
    }

    _polyNode->init();
    BBoxNodes.push_back(_polyNode);
    s_mapNode->addChild(_polyNode);
}
but I want to draw it more efficiently, so I tried to draw it with the OSG API.
For example:
void DrawGeometryByOsg(std::vector<osg::Vec3d> vecList, std::vector<unsigned int>& lstIndices, int color, long type)
{
    // create Geometry object to store all the vertices and lines primitive.
    osg::Geometry* polyGeom = new osg::Geometry();

    // note, first coord at top, second at bottom, reverse to that buggy OpenGL image..
    const size_t numCoords = lstIndices.size();
    osg::Vec3* myCoords = new osg::Vec3[numCoords];
    unsigned int index = 0;
    osg::Vec3Array* normals = new osg::Vec3Array(/*numCoords/3*/);

    for (std::vector<unsigned int>::iterator it = lstIndices.begin(); it != lstIndices.end(); it++) {
        myCoords[index++] = vecList[*it];
        if (index % 3 == 2) {
            osg::Vec3d kEdge1 = myCoords[index - 1] - myCoords[index - 2];
            osg::Vec3d kEdge2 = myCoords[index] - myCoords[index - 2];
            osg::Vec3d normal = kEdge1 ^ kEdge2;
            //normal.normalize();
            normals->push_back(normal);
        }
    }

    osg::Vec3Array* vertices = new osg::Vec3Array(numCoords, myCoords);
    polyGeom->setVertexArray(vertices);

    osg::Vec4Array* colors = new osg::Vec4Array;
    colors->push_back(osg::Vec4(0.0f, 0.8f, 0.0f, 1.0f));
    polyGeom->setColorArray(colors, osg::Array::BIND_OVERALL);

    polyGeom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::TRIANGLES, 0, numCoords));

    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(polyGeom);
    s_mapNode->addChild(geode);
}
but the geometry I draw with the OSG API is always shaking... ( ̄﹏ ̄;)
Could you tell me where the mistake in my code is?
Any time you have "shaking" geometry, you are likely running into a floating-point precision problem. OpenGL deals in 32-bit floating-point coordinates. So if your geometry uses large coordinate values (as it does in a geocentric map like osgEarth), the values get truncated when they are sent to the GPU, and you get shaking/jittering when the camera moves.
To solve this problem, express your data relative to a local origin. Pick a double-precision point somewhere (the centroid of the geometry is usually a good place) and make that your local origin. Then translate all your double-precision coordinates so they are relative to that origin. Finally, parent the geometry with a MatrixTransform that translates the localized data to the actual double-precision location.
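A minimal sketch of that pattern (names are illustrative; vecList stands for your double-precision points and geode for the node holding the localized geometry):

    // Pick a double-precision local origin, e.g. the centroid of the geometry.
    osg::Vec3d origin(0.0, 0.0, 0.0);
    for (size_t i = 0; i < vecList.size(); ++i)
        origin += vecList[i];
    origin /= static_cast<double>(vecList.size());

    // Store the vertices as floats *relative* to that origin; the values stay small.
    osg::Vec3Array* vertices = new osg::Vec3Array;
    for (size_t i = 0; i < vecList.size(); ++i)
    {
        osg::Vec3d local = vecList[i] - origin;
        vertices->push_back(osg::Vec3(float(local.x()), float(local.y()), float(local.z())));
    }

    // Parent the geometry with a transform that moves it back to the real location.
    osg::MatrixTransform* xform = new osg::MatrixTransform;
    xform->setMatrix(osg::Matrixd::translate(origin));
    xform->addChild(geode);
    s_mapNode->addChild(xform);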
Hope this helps!

How to update Geometry properly

I am trying to display a point cloud, consisting of vertices and colors, with OSG. Displaying a static point cloud is rather easy with this guide.
But I have not been able to update such a point cloud. My intention is to create a geometry and attach it to my viewer class once.
This is the mentioned method, which is called once at the beginning.
The OSGWidget strongly depends on this OpenGLWidget-based approach.
void OSGWidget::attachGeometry(osg::ref_ptr<osg::Geometry> geom)
{
    osg::Geode* geode = new osg::Geode;
    geom->setDataVariance(osg::Object::DYNAMIC);
    geom->setUseDisplayList(false);
    geom->setUseVertexBufferObjects(true);

    bool addDrawSuccess = geode->addDrawable(geom.get()); // Adding Drawable Shape to the geometry node
    if (!addDrawSuccess)
    {
        throw "Adding Drawable failed!";
    }

    {
        osg::StateSet* stateSet = geode->getOrCreateStateSet();
        stateSet->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
    }

    float aspectRatio = static_cast<float>(this->width()) / static_cast<float>(this->height());

    // Setting up the camera
    osg::Camera* camera = new osg::Camera;
    camera->setViewport(0, 0, this->width(), this->height());
    camera->setClearColor(osg::Vec4(0.f, 0.f, 0.f, 1.f)); // Kind of background color, clears the buffer and sets the default color (RGBA)
    camera->setProjectionMatrixAsPerspective(30.f, aspectRatio, 1.f, 1000.f); // Create perspective projection
    camera->setGraphicsContext(graphicsWindow_); // embed

    osgViewer::View* view = new osgViewer::View;
    view->setCamera(camera); // Set the defined camera
    view->setSceneData(geode); // Set the geometry
    view->addEventHandler(new osgViewer::StatsHandler);

    osgGA::TrackballManipulator* manipulator = new osgGA::TrackballManipulator;
    manipulator->setAllowThrow(false);
    view->setCameraManipulator(manipulator);

    ///////////////////////////////////////////////////
    // Set the viewer
    //////////////////////////////////////////////////
    viewer_->addView(view);
    viewer_->setThreadingModel(osgViewer::CompositeViewer::SingleThreaded);
    viewer_->realize();

    this->setFocusPolicy(Qt::StrongFocus);
    this->setMinimumSize(100, 100);
    this->setMouseTracking(true);
}
After I have 'attached' the geometry, I try to update it like this:
void PointCloudViewOSG::processData(DepthDataSet depthData)
{
    if (depthData.points()->empty())
    {
        return; // empty cloud, cannot do anything
    }

    const DepthDataSet::IndexPtr::element_type& index = *depthData.index();
    const size_t nPixel = depthData.points().get()->points.size();

    if (depthData.intensity().isValid() && !index.empty())
    {
        for (int i = 0; i < nPixel; i++)
        {
            float x = depthData.points().get()->points[i].x;
            float y = depthData.points().get()->points[i].y;
            float z = depthData.points().get()->points[i].z;
            m_vertices->push_back(osg::Vec3(x, y, z));

            // 32 bit integer variable containing the rgb (8 bit per channel) value
            uint32_t rgb_val_;
            memcpy(&rgb_val_, &(depthData.points().get()->points[i].rgb), sizeof(uint32_t));

            uint32_t red, green, blue;
            blue = rgb_val_ & 0x000000ff;
            rgb_val_ = rgb_val_ >> 8;
            green = rgb_val_ & 0x000000ff;
            rgb_val_ = rgb_val_ >> 8;
            red = rgb_val_ & 0x000000ff;

            m_colors->push_back(
                osg::Vec4f((float)red / 255.0f,
                           (float)green / 255.0f,
                           (float)blue / 255.0f,
                           1.0f));
        }

        m_geometry->setVertexArray(m_vertices.get());
        m_geometry->setColorArray(m_colors.get());
        m_geometry->setColorBinding(osg::Geometry::BIND_PER_VERTEX);
        m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
    }
}
My guess is that
addPrimitiveSet(...)
should not be called every time I update the geometry.
Or could it be the attachment of the geometry, so that I have to reattach it every time?
The Point Cloud Library (PCL) is unfortunately not an alternative, because of some incompatibilities with my application.
Update: When I reattach the geometry to the OSGWidget class, calling
this->attachGeometry(m_geometry)
after
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
my point cloud becomes visible, but this procedure is definitely wrong, since I lose far too much performance and the display driver crashes.
You need to set the array and add the primitive set only once; after that, you can update the vertices like this:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
for (int i = 0; i < nPixel; i++)
{
    float x, y, z;
    // fill with your data...
    (*vx)[i].set(x, y, z);
}
m_vertices->dirty();

The same goes for colors and other arrays.
As you're using VBOs, you don't need to call dirtyDisplayList().
If you instead need to recompute the bounding box of the geometry, call
m_geometry->dirtyBound()
In case the number of points changes between updates, you can push new vertices into the array if its size is too small, and update the PrimitiveSet count like this:
osg::DrawArrays* drawArrays = static_cast<osg::DrawArrays*>(m_geometry->getPrimitiveSet(0));
drawArrays->setCount(nPixel);
drawArrays->dirty();
rickvikings' solution works. I only had one issue (OSG 3.6.1 on OSX):
I had to modify the m_vertices array directly; OSG would crash if I used the static_cast method above to modify the vertices array:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
For some reason OSG would not create a buffer object in the vertices array class when using the static_cast approach.
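That is, writing through the ref_ptr itself worked fine (a sketch, assuming m_vertices is the osg::ref_ptr<osg::Vec3Array> member from above):

    for (int i = 0; i < nPixel; i++)
    {
        float x, y, z;
        // fill x, y, z with your data...
        (*m_vertices)[i].set(x, y, z);
    }
    m_vertices->dirty();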

Marching Cubes Issues

I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result. I'm looking for guidance or advice about what might be going wrong. I suspect one of the problems may be with the voxel layout, specifically which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube in the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to have the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires Boost, but it looks like it should work.
In his example, marchingcubes.cpp has a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
According to the comments, vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
    GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
    GLfloat fOffset;
    GLvector sColor;
    GLfloat afCubeValue[8];
    GLvector asEdgeVertex[12];
    GLvector asEdgeNorm[12];

    //Make a local copy of the values at the cube's corners
    for (iVertex = 0; iVertex < 8; iVertex++)
    {
        afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0] * fScale,
                                                fY + a2fVertexOffset[iVertex][1] * fScale,
                                                fZ + a2fVertexOffset[iVertex][2] * fScale);
    }

    //Find which vertices are inside of the surface and which are outside
    iFlagIndex = 0;
    for (iVertexTest = 0; iVertexTest < 8; iVertexTest++)
    {
        if (afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1 << iVertexTest;
    }

    //Find which edges are intersected by the surface
    iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];

    //If the cube is entirely inside or outside of the surface, then there will be no intersections
    if (iEdgeFlags == 0)
    {
        return;
    }

    //Find the point of intersection of the surface with each edge
    //Then find the normal to the surface at those points
    for (iEdge = 0; iEdge < 12; iEdge++)
    {
        //if there is an intersection on this edge
        if (iEdgeFlags & (1 << iEdge))
        {
            fOffset = fGetOffset(afCubeValue[a2iEdgeConnection[iEdge][0]],
                                 afCubeValue[a2iEdgeConnection[iEdge][1]], fTv);
            asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[a2iEdgeConnection[iEdge][0]][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
            asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[a2iEdgeConnection[iEdge][0]][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
            asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[a2iEdgeConnection[iEdge][0]][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;
            vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
        }
    }

    //Draw the triangles that were found. There can be up to five per cube
    for (iTriangle = 0; iTriangle < 5; iTriangle++)
    {
        if (a2iTriangleConnectionTable[iFlagIndex][3 * iTriangle] < 0) break;

        for (iCorner = 0; iCorner < 3; iCorner++)
        {
            iVertex = a2iTriangleConnectionTable[iFlagIndex][3 * iTriangle + iCorner];
            vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
            glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
            glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
            glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
        }
    }
}
UPDATE: Github working example, tested
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make the rendering faster. This class is implemented with a std::map to find and discard already existing vertices, using a tuple of <vec3, unsigned short>. As you may imagine, a marching cubes algorithm generates structures with thousands, if not millions, of vertices. The highest number a common unsigned short can hold is 65535 (2^16 - 1). So when the output geometry had more vertices than that, the map index started to overflow and the result was a mess, since it started to overwrite old vertices with new ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
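For reference, a minimal sketch of the widened indexer (hypothetical names; the essential change is the index type going from unsigned short to a 32-bit integer, and drawing with GL_UNSIGNED_INT instead of GL_UNSIGNED_SHORT in glDrawElements):

    #include <array>
    #include <cstdint>
    #include <map>
    #include <vector>

    std::map<std::array<float, 3>, std::uint32_t> vertexToIndex; // position -> buffer index
    std::vector<std::array<float, 3>> vertices;                  // deduplicated vertex buffer
    std::vector<std::uint32_t> indices;                          // was std::vector<unsigned short>

    // Returns the index of v, adding it to the buffer if it has not been seen before.
    std::uint32_t getOrAddVertex(const std::array<float, 3>& v)
    {
        std::map<std::array<float, 3>, std::uint32_t>::iterator it = vertexToIndex.find(v);
        if (it != vertexToIndex.end())
            return it->second; // reuse the existing vertex
        std::uint32_t index = static_cast<std::uint32_t>(vertices.size());
        vertices.push_back(v);
        vertexToIndex[v] = index;
        return index;
    }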
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg

OpenGL Frustum visibility test with sphere : Far plane not working

I am writing a program to test sphere-frustum intersection and determine the sphere's visibility. I am extracting the frustum's clipping planes into camera space and checking for intersection. It works perfectly for all planes except the far plane, and I cannot figure out why. I keep pulling the camera back, but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough, it eventually determines that it is not visible, but this is some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
    static bool visLastTime = true;
    bool visThisTime;

    const glm::vec4 modelCenter_worldSpace = glm::vec4(0, 0, 0, 1); //at origin
    const int negRadius = -1; //unit sphere

    //Get cam space model center
    glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;

    //---------Get Frustum Planes--------
    //extract projection matrix row vectors
    //NOTE: since glm stores their mats in column-major order, we extract columns
    glm::vec4 rowVec[4];
    for (int i = 0; i < 4; i++) {
        rowVec[i] = glm::vec4(mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i]);
    }

    //determine frustum clipping planes (in camera space)
    glm::vec4 plane[6];
    //NOTE: recall that indices start at zero. So M4 + M3 will be rowVec[3] + rowVec[2]
    plane[0] = rowVec[3] + rowVec[2]; //near
    plane[1] = rowVec[3] - rowVec[2]; //far
    plane[2] = rowVec[3] + rowVec[0]; //left
    plane[3] = rowVec[3] - rowVec[0]; //right
    plane[4] = rowVec[3] + rowVec[1]; //bottom
    plane[5] = rowVec[3] - rowVec[1]; //top

    //extend view frustum by 1 in all directions; near/far along local z, left/right along local x, bottom/top along local y
    // -Ax' -By' -Cz' + D = D'
    plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
    plane[1][3] += plane[1][2]; // <0,0,-1>
    plane[2][3] += plane[2][0]; // <-1,0,0>
    plane[3][3] -= plane[3][0]; // <1,0,0>
    plane[4][3] += plane[4][1]; // <0,-1,0>
    plane[5][3] -= plane[5][1]; // <0,1,0>

    //----------Determine Frustum-Sphere intersection--------
    //if any of the dot products between the model center and a frustum plane is less than -r,
    //then the object falls outside the view frustum
    visThisTime = true;
    for (int i = 0; i < 6; i++) {
        if (glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius)) {
            visThisTime = false;
        }
    }

    if (visThisTime != visLastTime) {
        printf("Sphere is %s visible\n", (visThisTime) ? "" : "NOT ");
        visLastTime = visThisTime;
    }
}
The polygons appear to be clipped by the far plane properly, so it seems that the projection matrix is set up correctly, but the calculations behave as if the plane were much farther out. Perhaps I am not calculating something correctly, or I have a fundamental misunderstanding of the required calculations?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane equal to the 4th row (or, in this case, column) of the projection matrix minus the 3rd row. Then I'm extending the far plane one unit further out (due to the sphere's radius of one; D' = D - C(-1)).
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (but that didn't make it any less wrong). After normalization everything works properly.
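For completeness, the fix is a few lines against the code above (a sketch; each plane's <A,B,C,D> is divided by the length of its normal <A,B,C>, right after the planes are formed and before the one-unit extension, since that extension assumes unit-length normals):

    for (int i = 0; i < 6; i++) {
        float mag = glm::length(glm::vec3(plane[i])); // length of the normal <A,B,C>
        plane[i] /= mag;                              // scales A, B, C and D consistently
    }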

CPU Ray Casting

I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check whether the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is: what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results; there are gaps in the renderings, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray casting code as I'm not sure it's needed, but I will post it if it will help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
    // Calculate the right vector
    Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));

    // Set up the screen plane starting X & Y positions
    float screenPlaneX, screenPlaneY;
    screenPlaneX = cameraPosition.x() - ((WINDOWWIDTH / 2) * rightVector.x());
    screenPlaneY = cameraPosition.y() + ((float)WINDOWHEIGHT / 2);

    float deltaX, deltaY;
    deltaX = 1;
    deltaY = 1;

    int currentX, currentY, index = 0;
    Vector origin, direction;
    origin = cameraPosition;

    vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);
    currentY = screenPlaneY;
    Vector4<int> colour;

    for (int y = 0; y < WINDOWHEIGHT; y++)
    {
        // Set the current pixel along x to be the left-most pixel on the image plane
        currentX = screenPlaneX;
        for (int x = 0; x < WINDOWWIDTH; x++)
        {
            // Default colour is black
            colour = Vector4<int>(0, 0, 0, 0);

            // Cast the ray into the current pixel. Set the length of the ray to be 200
            direction = Vector(currentX, currentY, cameraPosition.z() + (cameraLookAt.z() * 200)) - origin;
            direction.normalize();

            // Cast the ray against the octree and store the resultant colour in the array
            colours[index] = RayCast(origin, direction, rootNode, colour);

            // Move to the next pixel in the plane
            currentX += deltaX;
            // Increase colour array index position
            index++;
        }
        // Move to the next row in the image plane
        currentY -= deltaY;
    }

    // Set the colours for the array
    SetFinalImage(colours);

    // Load the array with 0 0 0 to set the raster position to (0, 0, 0)
    GLfloat *v = new GLfloat[3];
    v[0] = 0.0f;
    v[1] = 0.0f;
    v[2] = 0.0f;

    // Set the raster position and pass the array of colours to drawPixels
    glRasterPos3fv(v);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}

void SetFinalImage(vector<Vector4<int>> colours)
{
    // The array is a 2D array, with the first dimension set to the size of the
    // window (WINDOW_WIDTH * WINDOW_HEIGHT); the second dimension stores the
    // rgba values for each pixel
    for (int i = 0; i < colours.size(); i++)
    {
        finalImage[i][0] = (float)colours[i].r;
        finalImage[i][1] = (float)colours[i].g;
        finalImage[i][2] = (float)colours[i].b;
        finalImage[i][3] = (float)colours[i].a;
    }
}
Your pixel drawing code looks okay, but I'm not sure that your RayCasting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, but it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colors are all red, and render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
Here's a question, though: why are you using Vector4 when later on you write the image as GL_FLOAT? I'm not seeing any int->float conversion here...
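For what it's worth, that test could look something like this (a sketch reusing the question's own Vector4 type and SetFinalImage/finalImage helpers):

    // Fill the whole frame with opaque red and push it through the same drawing path.
    vector<Vector4<int>> testColours(WINDOWWIDTH * WINDOWHEIGHT, Vector4<int>(1, 0, 0, 1));
    SetFinalImage(testColours);
    glRasterPos3f(0.0f, 0.0f, 0.0f);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);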
Your problem may be in your 3DDDA (octree raycaster), specifically with adaptive termination. It results from the quantisation of rays into gridcell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. at a higher z depth), and which should thus be partly visible and partly occluded, to not be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem: comment out the adaptive termination line(s) in your 3DDDA and see if you still get the same gap artifacts.