I have been trying to get an object to be lit properly all day long, without results, so I'm going to try here. In essence, I am trying to get the object to look like this:
While in my program it looks like this:
Here's my context:
glClearDepth(1.0);
glShadeModel(GL_SMOOTH);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLight(GL_LIGHT0, GL_POSITION, {0, 5, 0, 1});
glDisable(GL_TEXTURE_2D);
And my material/lighting settings:
glMaterialf(GL_FRONT, GL_SHININESS, 10);
glLight(GL_LIGHT0, GL_AMBIENT, {0.1, 0.1, 0.1, 1.0});
glLight(GL_LIGHT0, GL_DIFFUSE, {0.2, 0.2, 0.2, 1.0});
glLight(GL_LIGHT0, GL_SPECULAR, {0.6, 0.6, 0.6, 1.0});
glMaterial(GL_FRONT, GL_AMBIENT, {1, 0.8078, 0});
glMaterial(GL_FRONT, GL_DIFFUSE, {1, 0.8078, 0});
glMaterial(GL_FRONT, GL_SPECULAR, {0.5, 0.5, 0.5});
glMaterial(GL_FRONT, GL_EMISSION, {0, 0, 0});
[I used {r, g, b, a} to denote an array for simplicity. I looked up the actual values that were used to draw the model and wrote them into the arrays]
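For reference, a sketch of what that shorthand expands to in the plain C API (values copied from above; glLightfv/glMaterialfv take pointers to four-component float arrays, so the missing alpha components are assumed to be 1.0):

glMaterialf(GL_FRONT, GL_SHININESS, 10.0f);

const GLfloat lightPos[] = {0.0f, 5.0f, 0.0f, 1.0f};
const GLfloat lightAmb[] = {0.1f, 0.1f, 0.1f, 1.0f};
const GLfloat lightDif[] = {0.2f, 0.2f, 0.2f, 1.0f};
const GLfloat lightSpc[] = {0.6f, 0.6f, 0.6f, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_AMBIENT,  lightAmb);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDif);
glLightfv(GL_LIGHT0, GL_SPECULAR, lightSpc);

const GLfloat matAmbDif[] = {1.0f, 0.8078f, 0.0f, 1.0f}; // alpha assumed 1.0
const GLfloat matSpec[]   = {0.5f, 0.5f, 0.5f, 1.0f};
const GLfloat matEmis[]   = {0.0f, 0.0f, 0.0f, 1.0f};
glMaterialfv(GL_FRONT, GL_AMBIENT,  matAmbDif);
glMaterialfv(GL_FRONT, GL_DIFFUSE,  matAmbDif);
glMaterialfv(GL_FRONT, GL_SPECULAR, matSpec);
glMaterialfv(GL_FRONT, GL_EMISSION, matEmis);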
The main problem is that whenever my objects get fully lit, everything "clutters" together into the ambient colour. There are no lighter and darker parts depending on the orientation of the fragment anymore, just one chunk of solid colour.
I have searched the whole project for OpenGL settings I may have missed, but the only thing I found was what you see above (omitting a few calls that reset the projection and modelview matrices, clear the screen, and apply a translation/rotation).
I have also tried to alter the values of the lights and materials, without much success. Changing the ambient colour just causes the whole model to become brighter. I also tried moving the light.
EDIT: By request, here's how I store and draw the model:
I load the model from an OBJ file. I looked at the file itself, which lists normals. The loader also recognizes that.
Here's how it draws the VBO:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferID);
glVertexPointer(3, GL_FLOAT, stride, 0 * 4);
glNormalPointer(GL_FLOAT, stride, 3 * 4);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexBufferID);
glDrawElements(GL_TRIANGLES, this.numberOfVertices, GL_UNSIGNED_INT, 0);
The VBO is interleaved, formatted like [vertex, normal, vertex, normal, ...]. I dropped some print calls into the drawing code, and it does set the vertex and normal pointers. Hence I am pretty sure that the VBO itself is loaded and drawn correctly.
It's crashing because you leave the GL_TEXTURE_COORD_ARRAY client state enabled, so it tries to read whatever is at the texcoord pointer, which is null.
As with many problems, the answer turned out to be unexpected. The first part was the switch statement I used to determine how to set the data pointers of the VBO:
private void setDataPointers() {
    int stride = this.dataFormat.elementsPerVertex * 4;
    glVertexPointer(3, GL_FLOAT, stride, 0 * 4);
    switch (this.dataFormat)
    {
        case VERTICES_AND_TEXTURES:
            glTexCoordPointer(2, GL_FLOAT, stride, (3) * 4);
        case VERTICES_AND_NORMALS:
            glNormalPointer(GL_FLOAT, stride, 3 * 4);
        case VERTICES_TEXTURES_NORMALS:
            glTexCoordPointer(2, GL_FLOAT, stride, 3 * 4);
            glNormalPointer(GL_FLOAT, stride, (3 + 3) * 4);
    }
}
As you can see, the break; statements are missing, so the vertex, normal AND texture pointers would all end up being set.
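For reference, a minimal sketch of the corrected dispatch with the break; statements in place (recast here in C-style OpenGL since the original is Java/LWJGL; the DataFormat enum and the stride parameter are stand-ins for the original's fields, and offsets are in bytes, four per float):

#include <GL/gl.h>

enum DataFormat { VERTICES_AND_TEXTURES, VERTICES_AND_NORMALS, VERTICES_TEXTURES_NORMALS };

void setDataPointers(DataFormat dataFormat, int stride) {
    glVertexPointer(3, GL_FLOAT, stride, (void*)(0 * 4));
    switch (dataFormat) {
    case VERTICES_AND_TEXTURES:
        glTexCoordPointer(2, GL_FLOAT, stride, (void*)(3 * 4));
        break; // previously fell through
    case VERTICES_AND_NORMALS:
        glNormalPointer(GL_FLOAT, stride, (void*)(3 * 4));
        break; // previously fell through
    case VERTICES_TEXTURES_NORMALS:
        glTexCoordPointer(2, GL_FLOAT, stride, (void*)(3 * 4));
        glNormalPointer(GL_FLOAT, stride, (void*)((3 + 3) * 4));
        break;
    }
}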
Second, when drawing the VBO, all three client side modes were enabled:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
So the renderer was looking for texture coordinates that didn't exist, and somehow spat out the strange unlit geometry.
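The matching fix on the drawing side, as a sketch (continuing with the hypothetical DataFormat enum from above): enable only the client states that the current format actually provides, and keep the rest disabled.

glEnableClientState(GL_VERTEX_ARRAY);
if (dataFormat == VERTICES_AND_NORMALS || dataFormat == VERTICES_TEXTURES_NORMALS)
    glEnableClientState(GL_NORMAL_ARRAY);
else
    glDisableClientState(GL_NORMAL_ARRAY);
if (dataFormat == VERTICES_AND_TEXTURES || dataFormat == VERTICES_TEXTURES_NORMALS)
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
else
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);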
Related
I have a set of vertices and normals stored in a buffer. I want to display most of the vertices as points, but I want to draw lines for the remaining few. They are all stored inside one vector, with the point data at the front, and I know the offset in the buffer up to which the vertices should be displayed as points. I also know the element count for each drawing task. I am using only one VAO and one buffer object for this task.
I initialize the GLWidget with the following code:
void GLWidget::initializeGL()
{
    connect(context(), &QOpenGLContext::aboutToBeDestroyed, this, &GLWidget::cleanup);
    initializeOpenGLFunctions();
    glClearColor(0, 0, 0, m_transparent ? 0 : 1);

    m_program = new QOpenGLShaderProgram;
    m_program->addShaderFromSourceCode(QOpenGLShader::Vertex, m_core ? vertexShaderSourceCore : vertexShaderSource);
    m_program->addShaderFromSourceCode(QOpenGLShader::Fragment, m_core ? fragmentShaderSourceCore : fragmentShaderSource);
    m_program->bindAttributeLocation("vertex", 0);
    m_program->bindAttributeLocation("normal", 1);
    m_program->link();
    m_program->bind();

    m_projMatrixLoc = m_program->uniformLocation("projMatrix");
    m_mvMatrixLoc = m_program->uniformLocation("mvMatrix");
    m_normalMatrixLoc = m_program->uniformLocation("normalMatrix");
    m_lightPosLoc = m_program->uniformLocation("lightPos");

    m_vao.create();
    QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);

    m_obj.create();
    setupBuffer();
    setupVertexAttribs();

    m_camera.setToIdentity();
    QVector3D camPos = QVector3D(0.0, 0.0, 15.0);
    m_camera.translate(-camPos);
    QVector3D camTarget = QVector3D(0.0, 0.0, 0.0);
    QVector3D camDirection = QVector3D(camPos - camTarget).normalized();
    QVector3D worldUp = QVector3D(0.0, 1.0, 0.0);
    QVector3D camRight = QVector3D::crossProduct(worldUp, camDirection).normalized();
    QVector3D camUp = QVector3D::crossProduct(camDirection, camRight);
    m_camera.lookAt(camPos, camTarget, camUp);

    // Light position is fixed.
    m_program->setUniformValue(m_lightPosLoc, QVector3D(0, 0, 200));

    m_program->release();
}
Where the functions setupBuffer() and setupVertexAttribs() do what their names imply. The vertices are laid out in the buffer as the xyz position of each vertex followed by the xyz of its associated normal. They are implemented as follows:
void GLWidget::setupBuffer()
{
    m_obj.bind();
    m_obj.allocate(vertices.constData(), vertices.size() * sizeof(GLfloat));
}

void GLWidget::setupVertexAttribs()
{
    m_obj.bind();
    QOpenGLFunctions *f = QOpenGLContext::currentContext()->functions();
    f->glEnableVertexAttribArray(0);
    f->glEnableVertexAttribArray(1);
    f->glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), reinterpret_cast<void *>(0));
    f->glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), reinterpret_cast<void *>(3 * sizeof(GLfloat)));
    m_obj.release();
}
Now, the QVector vertices is the buffer that is being passed to OpenGL. The last few entries in this vector are the vertices that need to be drawn using GL_LINES.
My paintGL() function looks something like this:
void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glEnable(GL_POINT_SIZE);
    glEnable(GL_LINE_WIDTH);
    glPointSize(2);
    glLineWidth(10);

    m_world.setToIdentity();
    m_world.rotate(180.0f - (m_xRot / 16.0f), 1, 0, 0);
    m_world.rotate(m_yRot / 16.0f, 0, 1, 0);
    m_world.rotate(m_zRot / 16.0f, 0, 0, 1);
    m_world.scale(m_dispScale);

    QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
    m_program->bind();
    m_program->setUniformValue(m_projMatrixLoc, m_proj);
    m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
    QMatrix3x3 normalMatrix = m_world.normalMatrix();
    m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);

    glDrawArrays(GL_POINTS, 0, vertices.size() - camVertices.size());
    // Draw camera frustums
    glDrawArrays(GL_LINES, vertices.size() - camVertices.size(), camVertices.size());
    //glDrawElements(GL_POINTS, vecIndices.size(), GL_UNSIGNED_INT, 0);
    m_program->release();
}
QVector camVertices is another vector that contains the points that need to be drawn using lines. The data in camVertices is appended to the end of the vector vertices before rendering. As seen in the code above, I call glDrawArrays twice: first starting from index 0 of the buffer, then starting from where the previous call ended, to display the remaining points.
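As an aside on units: glDrawArrays takes its first and count arguments in vertices, not in floats. If both QVectors store raw GLfloat values in the interleaved 6-floats-per-vertex layout above, a sketch of the intended split would look like this (the names below are illustrative, not from the original code):

const int floatsPerVertex = 6; // xyz position + xyz normal
int totalVerts = vertices.size() / floatsPerVertex;
int lineVerts  = camVertices.size() / floatsPerVertex;
int pointVerts = totalVerts - lineVerts;

glDrawArrays(GL_POINTS, 0, pointVerts);
glDrawArrays(GL_LINES, pointVerts, lineVerts);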
The problem is that the points are displayed fine; however, the second call only displays points and does not draw any lines.
Here's a link to a screenshot of the displayed output - https://drive.google.com/open?id=1i7CjO1qkBALw78KKYGvBteydhfAWh3wh
The picture shows an example of the displayed output, where the bright green points seen at the top, outlying from the rest (the box of many points), are the ones that should be drawn with lines. However, I only see points, not lines.
I did a simple test and I'm able to draw points and lines using the following code:
glDrawArrays(GL_POINTS, 0, verticesCount() - 10);
glDrawArrays(GL_LINES, 10, 10);
Which is not very different from yours, except for the variables. I'm also using one VAO, so it's definitely possible to draw lines after points as we would expect.
Could you try the same (using integer literals instead of your variables)?
Can you show the debug information on your vertices?
Can you upload a minimal compilable example?
I'm trying to draw some basic triangles using OpenGL, but nothing renders on screen. These are the relevant functions:
glewInit();
glClearColor(0.0, 0.0, 0.0, 1.0);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
Vertex vertices[] = {Vertex(Vector3f(0.0, 1.0, 0.0)),
Vertex(Vector3f(-1.0, -1.0, 0.0)),
Vertex(Vector3f(1.0, -1.0, 0.0))};
mesh.addVertices(vertices, 3);
Pastebin links to Vertex.hpp and Vector3f.hpp:
Vertex.hpp
Vector3f.hpp
/*
 * Mesh.cpp:
 */
Mesh::Mesh()
{
    glGenBuffers(1, &m_vbo); // unsigned int Mesh::m_vbo
}

void Mesh::addVertices(Vertex vertices[4], int indexSize)
{
    m_size = indexSize * 3;
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, m_size, vertices, GL_STATIC_DRAW);
}

void Mesh::draw()
{
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 4 * sizeof(Vertex), 0);
    glDrawArrays(GL_TRIANGLES, 0, m_size);
    glDisableVertexAttribArray(0);
}
It's just black if I call glClear; otherwise it's just the random noise of a default window. I can make it draw a triangle by using the most primitive method:
glBegin(GL_TRIANGLES);
glColor3f(0.4, 0.0, 0.0);
glVertex2d(0.0, 0.5);
glVertex2d(-0.5, -0.5);
glVertex2d(0.5, -0.5);
glEnd();
That works and displays what it should correctly, so I guess that at least says my application is not 100% busted. The tutorial I'm following is in Java, and I'm translating it to C++ and SFML as I go along, so I guess it's possible that something got lost in translation, so to speak, unless I'm just missing something really basic (more likely).
How do we fix this so it uses the Vertex list to draw the triangle like it's supposed to?
So many mistakes. There truly are a lot of examples, in any language, so why?
const float pi = 3.141592653589793; is a member field of Vector3f. Do you realise this is a non-static member, included in each and every Vector3f you use, so your vectors actually have four elements: x, y, z, and pi? Did you inform GL about that, so it could skip this garbage data? I don't think so.
You are using glVertexAttribPointer, but you don't have an active shader. There is no guarantee that position is in slot 0. Either use glVertexPointer, or use a shader with the position attribute bound to 0.
void Mesh::addVertices(Vertex vertices[4], int indexSize): what is the [4] supposed to mean here? While it is not an error, it is at least misleading.
glBufferData(GL_ARRAY_BUFFER, m_size, vertices, GL_STATIC_DRAW); m_size is 3*3 in your example, while the documentation says it should be the array size in bytes, which is sizeof(Vertex) * indexSize.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 4 * sizeof(Vertex), 0); why is the stride parameter 4 * sizeof(Vertex)? Either set it to 0 or write the correct stride, which is sizeof(Vertex).
glDrawArrays(GL_TRIANGLES, 0, m_size); m_size is already [incorrectly] set as the "vertex buffer size", while DrawArrays expects the number of vertices to draw, which is m_size / sizeof(Vertex) (given that m_size is calculated correctly). A corrected sketch follows below.
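Putting those fixes together, a minimal corrected sketch (assuming Vertex ends up holding exactly three floats once the stray pi member is removed or made static, and that a shader with position bound to attribute 0 is active):

void Mesh::addVertices(Vertex *vertices, int count)
{
    m_size = count; // store the vertex count, not a byte size
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), vertices, GL_STATIC_DRAW);
}

void Mesh::draw()
{
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    glDrawArrays(GL_TRIANGLES, 0, m_size); // count is in vertices
    glDisableVertexAttribArray(0);
}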
I hope this doesn't seem like too much of a code dump, but I really have no clue as to why this doesn't work. I've tried getting errors from glGetError() and it seems to always return 0.
I've tried to only include the code I think is affecting the problem, as the other code has been working fine in most other situations I've used it in.
Anyways, code first:
This is my main render loop:
float rotate = 0.0f;

void Render(SDL_Window *window)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(-2, -2, -10, 0, 0, 0, 0, 1, 0);
    glRotatef(rotate, 0, 1, 0);

    // Start drawing
    glUseProgramObjectARB( *shader->GetShaderProgram() );
    texture->EnableTexture( *shader->GetShaderProgram(), "tex" );
    cube->Render();
    SDL_GL_SwapWindow(window);
    rotate = rotate + 1;
    glUseProgramObjectARB(0);
}
This is my object's render (cube->Render()):
void Object3DVBO::Render()
{
    // Bind the buffers and tell OpenGL to use the Vertex Buffer Objects (VBOs), which we already prepared earlier
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, texcoordsID);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalID);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indiceID);

    // Enable states, and render (as if using vertex arrays directly)
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);
    glNormalPointer(GL_FLOAT, 0, 0);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    if (textureid > 0) {
        glEnable(GL_TEXTURE_2D); // Turn on texturing
        //glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glBindTexture(GL_TEXTURE_2D, textureid);
    }

    // Draw the thing!
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_BYTE, 0);

    // Restore the GL state
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    if (textureid > 0) {
        glDisable(GL_TEXTURE_2D); // Turn off texturing
        glBindTexture(GL_TEXTURE_2D, textureid);
    }

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // Restore non-VBO mode
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
}
This is the code for initializing the VBOs:
void Object3DVBO::SetVBO()
{
    // Vertices:
    glGenBuffersARB(1, &vboID);
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, vboID);
    glBufferDataARB( GL_ARRAY_BUFFER_ARB, sizeof(GLfloat)*vertices.size(), &vertices, GL_STATIC_DRAW_ARB );

    // Indices:
    glGenBuffersARB(1, &indiceID);
    glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, indiceID);
    glBufferDataARB( GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(GLubyte)*indices.size(), &indices, GL_STATIC_DRAW_ARB );

    // Normals:
    glGenBuffersARB(1, &normalID);
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, normalID);
    glBufferDataARB( GL_ARRAY_BUFFER_ARB, sizeof(GLfloat)*normals.size(), &normals, GL_STATIC_DRAW_ARB );

    // Texture coordinates:
    glGenBuffersARB(1, &texcoordsID);
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, texcoordsID);
    glBufferDataARB( GL_ARRAY_BUFFER_ARB, sizeof(GLfloat)*texCoords.size(), &texCoords, GL_STATIC_DRAW_ARB );

    vertices.clear();
    normals.clear();
    texCoords.clear();
    indices.clear();
}
and then finally the simplest model I've made, in a text file:
mesh:
-0.5 -0.5 0.0
0.5 -0.5 0.0
0.5 0.5 0.0
-0.5 0.5 0.0
normals:
0 0 -1
0 0 -1
0 0 -1
0 0 -1
texcoords:
0.0 0
1.0 0
1.0 1.0
0.0 1.0
indices:
0 0 0 1 1 1 2 2 2 0 0 0 2 2 2 3 3 3
end:
What's outside this is the code that reads the text file and stores the data into the vectors.
Here is the header file for the object:
class Object3DVBO
{
public:
    Object3DVBO(const char* objectfilename, GLuint textureid);
    ~Object3DVBO();
    void Render();

private:
    GLuint vboID,
           texcoordsID,
           normalID,
           textureid,
           indiceID;
    int vertCount,
        indexCount;
    std::vector<GLfloat> vertices;
    std::vector<GLfloat> normals;
    std::vector<GLfloat> texCoords;
    std::vector<GLubyte> indices;

    void ConvertToReadable( std::vector<GLfloat> v, std::vector<GLfloat> n, std::vector<GLfloat> tc, std::vector<GLint> i);
    void ReadObjectData( const char* objectfilename);
    void SetVBO();
};
The other parts basically do the same for a shader pair and a texture; these seem to work fine (the shader is only version 120 for now, so it's using ftransform(), etc.).
The problem that I have is that when I use this code with glDrawElements and indices, I get a black screen (no errors anywhere), and if I then switch to glDrawArrays it works (or at least I'm seeing SOMETHING). I've read a lot of tutorials, examples, and other SO posts trying to find what I'm doing wrong, but none of the solutions/tutorials have made any difference so far.
I'm doing this for educational purposes, so I really need to use glDrawElements and indices.
Any help would be greatly appreciated!
PS: If anyone is wondering about the SDL version, it's SDL2.
Here is the crux of your problem:
//Bind the buffers and tell OpenGL to use the Vertex Buffer Objects (VBO's), which we already prepared earlier
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID); // 1
glBindBufferARB(GL_ARRAY_BUFFER_ARB, texcoordsID); // 2
glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalID); // 3
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indiceID);
//Enable states, and render (as if using vertex arrays directly)
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, 0); // 2
glNormalPointer(GL_FLOAT, 0, 0); // 3
glVertexPointer(3, GL_FLOAT, 0, 0); // 1
The lines labeled 1, 2 and 3 need to come in pairs. That is to say, since you can only have one VBO bound at a time and it provides the context for a call to glTexCoordPointer (...) for instance, you need to set the pointers while the appropriate VBO is bound.
You can fix it like this:
//Bind the buffers and tell OpenGL to use the Vertex Buffer Objects (VBO's), which we already prepared earlier
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboID); // 1
glVertexPointer(3, GL_FLOAT, 0, 0); // 1
glBindBufferARB(GL_ARRAY_BUFFER_ARB, texcoordsID); // 2
glTexCoordPointer(2, GL_FLOAT, 0, 0); // 2
glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalID); // 3
glNormalPointer(GL_FLOAT, 0, 0); // 3
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indiceID);
//Enable states, and render (as if using vertex arrays directly)
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
You have one additional issue, which is not really an error so much as a performance concern. GL_UNSIGNED_BYTE is not a hardware-supported vertex index type. The driver must convert your index array to a 16-bit type for the hardware to use it, so there is no actual benefit to using 8-bit indices when you are going to store them in a VBO. Most OpenGL profilers, and drivers with debug output enabled, will generate a performance warning if you try to do this.
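A sketch of that change, assuming the index values fit in 16 bits (the names mirror the code above):

// In the header: store indices as a natively supported 16-bit type.
std::vector<GLushort> indices; // instead of std::vector<GLubyte>

// In SetVBO():
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indiceID);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(GLushort) * indices.size(),
                indices.data(), GL_STATIC_DRAW_ARB); // note: the data pointer, not &indices

// In Render(), draw with the matching type:
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);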
Umm.. I wrote this code to draw a mesh (the variable m), and it runs fine:
glBegin(GL_TRIANGLES);
for (unsigned i : m.vtIndex)
{
    const aiVector3D *pv = &m.pMesh->mVertices[i];
    const aiVector3D *pvn = &m.pMesh->mNormals[i];
    glNormal3fv((const GLfloat *)pvn);
    glVertex3fv((const GLfloat *)pv);
}
glEnd();
And here is the other one:
glVertexPointer(3, GL_FLOAT, 0, m.pMesh->mVertices);
glNormalPointer(GL_FLOAT, 0, m.pMesh->mNormals);
glDrawElements(GL_TRIANGLES, m.vtIndex.size(), GL_UNSIGNED_INT, &m.vtIndex[0]);
But the second one causes an access violation.
Could you give me an opinion?
You are using deprecated OpenGL in this example and using client memory to store vertex data. In deprecated OpenGL, you need to enable the appropriate client states:
glEnableClientState (GL_VERTEX_ARRAY);
glEnableClientState (GL_NORMAL_ARRAY);
If you add this to your code before your draw call, it ought to fix your crash.
You can safely leave GL_VERTEX_ARRAY enabled for the entire time your program runs, but you may need to enable/disable other arrays such as GL_NORMAL_ARRAY depending on which vertex pointers your meshes actually use.
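Put together, a minimal sketch of the fixed draw path (mirroring the variable names above; the Assimp arrays are used directly from client memory, so they must stay valid while drawing):

// Enable the client-side arrays this mesh actually uses.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);

// Point OpenGL at the client-memory arrays, then draw the indexed triangles.
glVertexPointer(3, GL_FLOAT, 0, m.pMesh->mVertices);
glNormalPointer(GL_FLOAT, 0, m.pMesh->mNormals);
glDrawElements(GL_TRIANGLES, (GLsizei)m.vtIndex.size(), GL_UNSIGNED_INT, m.vtIndex.data());

glDisableClientState(GL_NORMAL_ARRAY);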
I figured it out.
glEnableClientState(GL_INDEX_ARRAY); // Oops!!
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
haha....
I'm trying to emulate exactly how a game sets up a VBO and draws it to the screen. I've never set one up before and the tutorials all show how to do it with glDrawArrays but I want to use glDrawElements.
I came up with the following:
glViewport(0, 0, 765, 553);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 765, 553, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
xCast(ptr_glActiveTextureARB, ptr_wglGetProcAddress("glActiveTextureARB"));
xCast(ptr_glMultiTexCoord2fARB, ptr_wglGetProcAddress("glMultiTexCoord2fARB"));
xCast(ptr_glGenBuffersARB, ptr_wglGetProcAddress("glGenBuffersARB"));
xCast(ptr_glBindBufferARB, ptr_wglGetProcAddress("glBindBufferARB"));
xCast(ptr_glBufferDataARB, ptr_wglGetProcAddress("glBufferDataARB"));
struct PointInfo
{
    float Pos[3];
    float Colour[3];
};
const int NumVerts = 3, NumInds = 3;
std::vector<PointInfo> Vertices;
Vertices.push_back({{0.0f, 1.0f, 0.0f}, {1, 1, 1}}); ///top left;
Vertices.push_back({{0.5f, 0.0f, 0.0f}, {1, 1, 1}}); ///bottom middle;
Vertices.push_back({{1.0f, 1.0f, 0.0f}, {1, 1, 1}}); ///top right;
std::vector<std::uint32_t> Indices = {0, 1, 2};
std::uint32_t VBO = 0, IBO = 0;
ptr_glGenBuffersARB(1, &VBO);
ptr_glGenBuffersARB(1, &IBO);
///Put Vertices In.
ptr_glBindBufferARB(GL_ARRAY_BUFFER, VBO);
ptr_glBufferDataARB(GL_ARRAY_BUFFER, sizeof(PointInfo) * NumVerts, &Vertices[0], GL_STATIC_DRAW);
Log(glGetError());
///Put Indices In.
ptr_glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IBO);
ptr_glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER, sizeof(int) * NumInds, &Indices[0], GL_STATIC_DRAW);
Log(glGetError());
I run the above only once at the start of my program. Then in my while loop, I run:
glPushMatrix();
glClearColor(0.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Log(glGetError());
ptr_glBindBufferARB(GL_ARRAY_BUFFER, VBO);
Log(glGetError());
glVertexPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Pos));
Log(glGetError());
glColorPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Colour));
Log(glGetError());
ptr_glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IBO);
Log(glGetError());
glDrawElements(GL_TRIANGLES, NumInds, GL_UNSIGNED_INT, 0);
Log(glGetError());
glPopMatrix();
SwapBuffers(DC);
Sleep(1);
But the only thing that happens is my screen clearing. I never see my triangle at all :S I think it might be my view setup via glOrtho, but I'm not sure. Is there anything wrong with what I did? glGetError just prints 0: no errors :S
The triangle coordinates you specified are very small. The triangle occupies only half of a pixel at the top left corner of the screen. Try scaling it by 100.
Also I think you're missing calls to glEnableClientState with GL_VERTEX_ARRAY and GL_COLOR_ARRAY.
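A sketch of both suggestions applied (the scaled-up coordinates are illustrative, simply the originals multiplied by 100; the client-state calls go before the pointer setup in the draw loop):

// Scale the triangle up so it is visible in the 765x553 ortho projection.
Vertices.push_back({{  0.0f, 100.0f, 0.0f}, {1, 1, 1}}); ///top left
Vertices.push_back({{ 50.0f,   0.0f, 0.0f}, {1, 1, 1}}); ///bottom middle
Vertices.push_back({{100.0f, 100.0f, 0.0f}, {1, 1, 1}}); ///top right

// Enable the fixed-function arrays before setting the pointers and drawing.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Pos));
glColorPointer(3, GL_FLOAT, sizeof(PointInfo), (void*) offsetof(PointInfo, Colour));
glDrawElements(GL_TRIANGLES, NumInds, GL_UNSIGNED_INT, 0);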
As a general approach I would suggest to take things one step at a time. Start with immediate mode glVertex to make sure you got the coordinates and camera setup right. Then add shaders. Then convert to a position-only VBO with DrawArrays. Then add vertex colors. Then convert to DrawElements. That way you have a better sense of where problems might lie.
You might also be interested in the glload library here to get rid of these ptr_ prefixes.
You should use glVertexAttribPointer. The functions you are using are deprecated. Perhaps you could get this code to work, but if you aren't forced to use such ancient OpenGL, chances are you'd save yourself a lot of trouble.
Oh, also, manually loading function pointers is extremely cumbersome. I suggest you look at libraries such as GLload.
A specialized debugger such as CodeXL or gDebugger can be very helpful in solving issues like that.
As for the problems in this code, your triangle is simply too small.