During every call to glDrawElements, my graphics driver crashes/freezes and recovers after a few seconds (Windows 7 Timeout Detection/Recovery). glGetError() always returns GL_NO_ERROR.
Edit: Just to be clear about what exactly happens: the first time glDrawElements is called, my computer freezes for 5-10 seconds, then the screen goes black for a few more seconds, then it recovers and Windows gives me a message: "Display driver stopped responding and has recovered". My program keeps running, but it's stuck in glDrawElements.
Update: Adding a call to glFlush() just before glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) makes it work. I don't understand why.
Here's my code, somewhat simplified and without error checking. Am I doing anything wrong?
struct Vertex
{
float pos[3];
float color[3];
};
struct Tri
{
unsigned int idxs[3];
};
// ...
GLuint m_vbo;
GLuint m_ibo;
std::vector<Vertex> m_verts;
std::vector<Tri> m_faces;
// ...
glGenBuffers(1, &m_vbo);
Vertex* vBuf = &(m_verts[0]);
unsigned int vboSize = sizeof(vBuf[0]) * m_verts.size();
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vboSize, vBuf, GL_STATIC_DRAW);
glGenBuffers(1, &m_ibo);
unsigned int* iBuf = (unsigned int*) (&(m_faces[0]));
unsigned int iboSize = sizeof(iBuf[0]) * (m_faces.size() * 3);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, iboSize, iBuf, GL_STATIC_DRAW);
// ...
// this line fixes it
// glFlush();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(someShaderProgram);
// ...
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
// the attribute locations are queried using glGetAttribLocation in the real code
glEnableVertexAttribArray(1);
glVertexAttribPointer
(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(0));
glEnableVertexAttribArray(0);
glVertexAttribPointer
(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3*sizeof(float)));
// doesn't get past this line
glDrawElements(GL_TRIANGLES, m_faces.size()*3, GL_UNSIGNED_INT, 0);
// ...
glfwSwapBuffers();
The last argument to glVertexAttribPointer is "the offset of the first component" - in bytes(!). Are you sure you want the value 3 there?
I'm pretty sure the vertex array object isn't capturing the vertex buffer binding: if I comment out the line where I unbind the vertex buffer, everything works perfectly fine.
Here is the code (there is some abstraction around the program and window but it isn't relevant to the question):
GLuint va;
glGenVertexArrays(1, &va);
glBindVertexArray(va);
GLuint vb;
glGenBuffers(1, &vb);
glBindBuffer(GL_ARRAY_BUFFER, vb);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, (2 + 3) * sizeof(float), nullptr);
glBindAttribLocation(program.id(), 0, "i_position");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, (2 + 3) * sizeof(float), (const void*)(2 * sizeof(float)));
glBindAttribLocation(program.id(), 1, "i_color");
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0); //< if this line is commented out it works perfectly fine
program.bind();
while(window->isOpen())
{
glfwPollEvents();
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(va);
glBufferData(GL_ARRAY_BUFFER, 3 * (2 + 3) * sizeof(float), vertexes, GL_DYNAMIC_DRAW);
glDrawArrays(GL_TRIANGLES, 0, 3);
window->update();
}
Does someone know what I am doing wrong?
A VAO doesn't store the buffer binding. It only stores which buffer is bound to which attribute. If you need the buffer binding itself (for example, for glBufferData), you have to bind the buffer yourself.
Also note that glBindAttribLocation has to be called before the program object gets linked.
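A minimal sketch of the corrected order (assuming `program`, `va`, `vb`, and `vertexes` are the names from the code above, and that the shaders are already attached to `program`): set attribute locations before linking, and re-bind the VBO before each glBufferData, since the VAO does not restore the GL_ARRAY_BUFFER binding.

```c
/* Attribute locations must be assigned before the link step. */
glBindAttribLocation(program, 0, "i_position");
glBindAttribLocation(program, 1, "i_color");
glLinkProgram(program);

/* Per frame: the VAO remembers the attribute->buffer associations made by
   glVertexAttribPointer, but NOT the GL_ARRAY_BUFFER binding itself, so
   glBufferData needs an explicit glBindBuffer each time. */
glBindVertexArray(va);
glBindBuffer(GL_ARRAY_BUFFER, vb);
glBufferData(GL_ARRAY_BUFFER, 3 * (2 + 3) * sizeof(float), vertexes, GL_DYNAMIC_DRAW);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

This requires a current OpenGL context, so it is a fragment rather than a runnable program.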
I'm writing a program which uses OpenGL for 3D rendering, and so far I've been programming on Ubuntu and all was good. However, when I had to switch to Windows, the code that used to render a cube, a quad, or whatever really - it was working perfectly fine - suddenly rendered absolutely nothing.
To test that the code was indeed the same, I took the copy I had on windows, did a complete rebuild on Ubuntu again and it was working as intended there.
So, in my attempt to see where I could be going wrong, I fired up the debugger a few times to check for strange memory problems or anything like that, and I confirmed that the memory was fine.
The OpenGL context is being created fine because the clear color is showing, so I really don't have a clue of what might be going wrong.
The render code is as follows:
void render(Handle<Model> _model)
{
if(!_model->created && _model->loaded)
{
glGenVertexArrays(1, &_model->VAO);
glGenBuffers(1, &_model->VBO);
glGenBuffers(1, &_model->EBO);
glBindVertexArray(_model->VAO);
glBindBuffer(GL_ARRAY_BUFFER, _model->VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * _model->vertices.size(), &_model->vertices[0], GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _model->EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * _model->indices.size(), &_model->indices[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3*sizeof(float)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(6*sizeof(float)));
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
_model->created = true;
}
if(_model->loaded && _model->created)
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
_model->shader.useProgram();
_model->shader.setMat4("transform", _model->transform);
_model->shader.setMat4("view", this->view);
_model->shader.setMat4("projection", this->projection);
glBindVertexArray(_model->VAO);
glDrawElements(GL_TRIANGLES, _model->indices.size(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
if(glGetError() != GL_NO_ERROR)
{
std::cout << "erro" << std::endl;
}
}
}
The Handle is more or less a pointer wrapper and the Model struct is just a bunch of data:
struct Model
{
unsigned int VAO;
unsigned int VBO;
unsigned int EBO;
std::vector<unsigned int> indices;
Shader shader;
glm::mat4 transform;
std::vector<Vertex> vertices;
bool created;
bool loaded;
};
The fact that this code works on Ubuntu and not Windows is driving me insane.
I'm using shaders and modern OpenGL. I tried glGetError() checks, but no error is returned. I also tried debugging with apitrace, but couldn't find anything. I'm not even sure whether the problem is in the initialization or the drawing code.
Sprite init:
void Sprite::init(float _x, float _y, float _width, float _height, const char* texture_path) {
x = _x;
y = _y;
width = _width;
height = _height;
texture.init(texture_path);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ebo);
// This array will hold our vertex data
// We need 4 vertices, and each vertex has 2 floats for X and Y
Vertex vertexData[4];
// Top right
vertexData[0].set_position(x + width, y + height);
vertexData[0].set_uv(1.0f, 1.0f);
// Bottom right
vertexData[1].set_position(x + width, y);
vertexData[1].set_uv(1.0f, 0.0f);
// Bottom left
vertexData[2].set_position(x, y);
vertexData[2].set_uv(0.0f, 0.0f);
// Top left
vertexData[3].set_position(x, y + height);
vertexData[3].set_uv(0.0f, 1.0f);
for (int i = 0; i < 4; i++) {
vertexData[i].set_color(255, 255, 255, 255);
}
GLuint indices[] = { // Note that we start from 0!
0, 1, 3, // First Triangle
1, 2, 3 // Second Triangle
};
// Bind the vertex buffer object (active buffer)
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Upload the buffer data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// Unbind the buffer
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Sprite draw:
void Sprite::draw() {
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture.id);
// Bind the buffer object
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
// Tell OpenGL that we want to use the first attribute array
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
// This is the position attribute pointer
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
// This is the color attribute pointer
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, color));
// This is the UV attribute pointer
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));
// Draw the 4 vertices to the screen
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
// Disable the vertex attrib array
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
// Unbind the VBO and EBO
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Rendering code:
Sprite sprite;
sprite.init(0, 0, 500, 500, "assets/textures/awesomeface.png");
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Enable shader
shader_program.enable();
sprite.draw();
// Disable shader
shader_program.disable();
// Swap buffers
window.swap_window();
You need to call glEnable(GL_TEXTURE_2D); to enable the use of textures. It would also be preferable to disable it as soon as you are done, simply by calling glDisable(GL_TEXTURE_2D); once you have finished drawing, or whenever you are done working with textures. Hope this helps! I had this problem as well, and it took me a good three days of staring at a blinking cursor to figure out.
The title sums up my issue: no matter what I set the first vertex to, OpenGL always draws it at the origin. I've tried this on a school computer, where it wasn't a problem, but I'm not at school now and it's possible something I've changed is causing the issue. Regardless, I see no reason why this should happen. In case the syntax seems weird: this code is written in D, but it should be an almost seamless port from C.
My code is:
class Mesh
{
this(vec3[] vertices, uint[] indices)
{
draw_count = indices.length;
glGenVertexArrays(1, &vertex_array_object);
glBindVertexArray(vertex_array_object);
glGenBuffers(NUM_BUFFERS, vertex_array_buffers.ptr);
glBindBuffer(GL_ARRAY_BUFFER, vertex_array_buffers[POSITION_VB]);
glBufferData(GL_ARRAY_BUFFER, vertices.length * vertices.sizeof, vertices.ptr, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(cast(GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, cast(void*)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vertex_array_buffers[INDEX_VB]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.length * indices.sizeof, indices.ptr, GL_STATIC_DRAW);
glBindVertexArray(0);
}
void draw()
{
glBindVertexArray(vertex_array_object);
glDrawElements(GL_TRIANGLES, draw_count, GL_UNSIGNED_INT, cast(const(void)*)0);
glBindVertexArray(0);
}
private:
enum
{
POSITION_VB,
INDEX_VB,
NORMAL_VB,
NUM_BUFFERS
};
GLuint vertex_array_object;
GLuint vertex_array_buffers[NUM_BUFFERS];
vec3 normals;
int draw_count;
}
I have a mesh class as follows
Mesh_PTI::Mesh_PTI(bool dynamic) : m_dynamic(dynamic), m_drawCount(0)
{
glGenVertexArrays(1, m_vertexArrays);
glBindVertexArray(m_vertexArrays[0]);
glGenBuffers(NUM_BUFFERS, m_buffers);
glBindVertexArray(0);
}
Mesh_PTI::Mesh_PTI(glm::vec3 positions[], glm::vec2 texCoords[], unsigned short indices[], unsigned short numVertices, unsigned int numIndices, bool dynamic) :
m_dynamic(dynamic)
{
glGenVertexArrays(1, m_vertexArrays);
glBindVertexArray(m_vertexArrays[0]);
glGenBuffers(NUM_BUFFERS, m_buffers);
createBuffers(positions, texCoords, indices, numVertices, numIndices, false);
glBindVertexArray(0);
m_drawCount = numIndices;
}
Mesh_PTI::~Mesh_PTI()
{
glDeleteBuffers(NUM_BUFFERS, m_buffers);
glDeleteVertexArrays(1, m_vertexArrays);
}
void Mesh_PTI::draw()
{
if(m_drawCount > 0)
{
glBindVertexArray(m_vertexArrays[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_buffers[INDEX_VB]);
glDrawElements(GL_TRIANGLES, m_drawCount, GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
}
void Mesh_PTI::setData(glm::vec3 positions[], glm::vec2 texCoords[], unsigned short indices[], unsigned short numVertices, unsigned int numIndices)
{
glBindVertexArray(m_vertexArrays[0]);
createBuffers(positions, texCoords, indices, numVertices, numIndices, false);
glBindVertexArray(0);
m_drawCount = numIndices;
}
void Mesh_PTI::createBuffers(glm::vec3 positions[], glm::vec2 texCoords[], unsigned short indices[], unsigned short numVertices, unsigned int numIndices, bool dynamic)
{
glBindBuffer(GL_ARRAY_BUFFER, m_buffers[POSITION_VB]);
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(positions[0]), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, m_buffers[TEXCOORD_VB]);
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(texCoords[0]), texCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_buffers[INDEX_VB]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(indices[0]), indices, GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_SHORT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
I need to update the vertex data. If I delete the mesh and load the vertex data using the constructor, everything works fine.
If I initialize it with the first constructor and use the setData function to load vertex data, multiple instances of this class only render the last one that had setData called.
What am I doing wrong?
glGenBuffers only returns buffer names that are available at the time of the call; it does not reserve those names. So if you call glGenBuffers again without having bound any of the buffers from the first call, you can get the same names back, since they haven't been used yet. When you later call glBindBuffer, all your instances end up using the same names for their VBOs, so they overwrite each other.
You are also trying to bind an element array buffer as a vertex attribute, which doesn't make any sense, because indices are consumed by glDrawElements itself (unless you are using them in your shader for some reason).
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_SHORT, GL_FALSE, 0, NULL);
// ^~~~~ but your indices are GL_UNSIGNED_SHORT
On a related note: you don't need to bind your index buffer every time before you draw with a VAO, since the GL_ELEMENT_ARRAY_BUFFER binding is part of the VAO's state. The vertex-attribute-to-VBO associations established by glVertexAttribPointer are part of that state as well, but the GL_ARRAY_BUFFER binding itself isn't.
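To make the captured state explicit, here is a sketch (assuming a current core-profile context and hypothetical names `vao`, `positionVbo`, `indexVbo`, `indexCount`) of which bindings the VAO records:

```c
/* Setup: recorded into the VAO */
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);        /* NOT stored in the VAO... */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); /* ...but this call
    captures the currently bound GL_ARRAY_BUFFER for attribute 0 */
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);   /* stored in the VAO */
glBindVertexArray(0);

/* Draw: re-binding the VAO restores the index buffer and attribute state,
   so no glBindBuffer calls are needed here. */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, NULL);
glBindVertexArray(0);
```

This requires a live OpenGL context, so it is a fragment rather than a runnable program.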