I have an application that uses OpenGL within Qt Widgets. I developed it on macOS, where everything worked fine, before switching over to Linux. Now when I call glBindVertexArray(mesh->getVao()), OpenGL spits out INVALID_OPERATION.
Using AMD's CodeXL, I determined that mesh->getVao() returned 2. I also used it to get a list of all OpenGL calls. Examining that list, the app does generate a VAO with ID 2, and there is no glDeleteVertexArrays call anywhere in it. I even commented out the code that would delete the vertex array.
The OpenGL docs state that the only reason glBindVertexArray can fail is if it is given neither zero nor a previously generated VAO name.
Are there any other possible reasons why glBindVertexArray could spit out INVALID_OPERATION, despite the VAO existing, and why it could work on macOS but not on Linux?
A few code samples, in case it helps.
Mesh rendering
void renderMesh(const Resource::ResourceMesh *mesh) {
    //Set up textures
    for (unsigned int i = 0; i < mesh->getTextures().size(); i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        mesh->getTextures().at(i)->getTexture()->bind();
    }
    static const int texIDs[32] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31};
    glUniform1iv(shaderTexID, MAX_SHADER_TEXTURES, texIDs);
    //Draw the mesh
    glBindVertexArray(mesh->getVao());
    qDebug() << "mesh == nullptr:" << (mesh == nullptr);
    glDrawElements(GL_TRIANGLES, mesh->getIndices().size(), GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}
VAO generation
//In ResourceMesh.hpp
protected:
    QVector<Model::Vertex> vertices;
    QVector<unsigned int> indices;
    QVector<Resource::ResourceTexture*> textures;
    GLuint vao;
    GLuint vbo;
    GLuint ebo;
//In ResourceMesh.cpp
void ResourceMesh::generateGlBuffers() {
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glGenBuffers(1, &ebo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Model::Vertex), &vertices[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), &indices[0], GL_STATIC_DRAW);
    //Vertex positions
    glEnableVertexAttribArray(GLManager::VertexAttribs::VERTEX_POSITION);
    glVertexAttribPointer(GLManager::VertexAttribs::VERTEX_POSITION, 3, GL_FLOAT, GL_FALSE, sizeof(Model::Vertex), (void*) offsetof(Model::Vertex, position));
    //Vertex normals
    glEnableVertexAttribArray(GLManager::VertexAttribs::VERTEX_NORMAL);
    glVertexAttribPointer(GLManager::VertexAttribs::VERTEX_NORMAL, 3, GL_FLOAT, GL_FALSE, sizeof(Model::Vertex), (void*) offsetof(Model::Vertex, normal));
    //Vertex texture coordinates
    glEnableVertexAttribArray(GLManager::VertexAttribs::VERTEX_TEX_COORD);
    glVertexAttribPointer(GLManager::VertexAttribs::VERTEX_TEX_COORD, 2, GL_FLOAT, GL_FALSE, sizeof(Model::Vertex), (void*) offsetof(Model::Vertex, texCoord));
    glBindVertexArray(0);
}
And if it helps, here's a screenshot of CodeXL, breaking at glBindVertexArray
Solved in the comments: it turns out the VAO was being created on the wrong context. I guess the macOS implementation of Qt shares one context for everything, whereas on Linux they're separate.
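An editorial note for anyone hitting the same symptom: Qt::AA_ShareOpenGLContexts makes Qt's OpenGL contexts share resources such as buffers, textures, and shaders, but VAOs are container objects and are never shared between contexts, so the robust fix is to create each VAO on the context that will render with it. A minimal sketch of both ideas (the attribute is real Qt API; MyGLWidget is a hypothetical QOpenGLWidget subclass):

#include <QApplication>

int main(int argc, char *argv[]) {
    // Must be set before QApplication is constructed. All Qt OpenGL contexts
    // then share resources (buffers, textures, shaders) -- but NOT container
    // objects like VAOs and FBOs, which remain per-context.
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);
    // ... create and show widgets ...
    return app.exec();
}

// Hypothetical widget: create the VAO in initializeGL(), where the widget's
// own context is guaranteed to be current, so a later glBindVertexArray()
// happens on the same context the VAO was generated on.
// void MyGLWidget::initializeGL() {
//     mesh->generateGlBuffers();
// }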
Related
I am following Cherno's brilliant series on OpenGL and have encountered a problem. I have moved on from using only a vertex buffer to using a vertex buffer together with an index buffer.
What I want is for my program to draw two triangles using the given positions and indices; however, when I run it I only get a black screen. My shaders work fine when drawing from a vertex buffer alone, but introducing the index buffer makes it fail. Here are the relevant parts of the code:
float positions[] {
    -0.5, -0.5,
     0.5, -0.5,
     0.5,  0.5,
    -0.5,  0.5
};
unsigned int indices[] {
    0, 1, 2,
    2, 3, 0
};
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 4*2*sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
unsigned int IBO;
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6*sizeof(unsigned int), indices, GL_STATIC_DRAW);
ShaderProgramSource source = parseShader("res/shaders/Basic.glsl");
unsigned int shader = createShader(source.vertexSource, source.fragmentSource);
glUseProgram(shader);
/* Loop until the user closes the window */
while (!glfwWindowShouldClose(window))
{
    /* Render here */
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
    /* Swap front and back buffers */
    glfwSwapBuffers(window);
    /* Poll for and process events */
    glfwPollEvents();
}
I am pretty sure my code matches Cherno's, but he gets a nice-looking square on screen whereas I get nothing. Can you spot the error?
Here's some info on my system:
macOS 12.2.1
OpenGL Version 4.1
GLSL Version 3.3
Writing and compiling in Xcode
Static linking to GLEW and GLFW
Unlike on Linux or Windows, a compatibility profile OpenGL context is not supported on a Mac; you must use a core profile OpenGL context. If you use a core profile, you must create a Vertex Array Object yourself, because a core profile does not have a default Vertex Array Object.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float)*2, 0);
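For completeness (an editorial addition, not part of the original answer): to get a core profile context with GLFW on macOS in the first place, the usual window hints look like this, placed before glfwCreateWindow:

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Forward compatibility is required on macOS to get a core profile at all
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);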
This question was closed as a duplicate of: How do I determine the size of my array in C?
I'm trying to create an element/index array that I dynamically build with a set amount of memory. However, my code only works when I hard-code this array. In particular, the following is all the relevant code:
GLfloat elements[15 * 15 * 2 * 3 * 2];
GLuint second_elements[] = {
    0, 16, 17,
    0, 1, 17,
    1, 17, 18,
    1, 2, 18,
    ...
};
GLuint VBO;
glGenBuffers(1, &VBO);
GLuint EBO;
glGenBuffers(1, &EBO);
GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
...
glDrawElements(GL_TRIANGLES, sizeof(elements) / sizeof(elements[0]), GL_UNSIGNED_INT, 0);
When I use elements, nothing renders; when I use second_elements, everything renders perfectly. I've confirmed they hold the same data, and have tried rendering just one triangle from each as a test, but I can't seem to get the EBO to load into the VAO properly. Can anyone help?
Turns out (for some reason) that if I use a std::vector instead of plain heap memory, I can make this work by passing in elements.size() and &elements[0]. I don't know why, but I'll take what I can get.
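An editorial note on why this likely works: if elements is heap-allocated (e.g. new GLuint[n]), sizeof(elements) yields the size of the pointer, typically 8 bytes, rather than the size of the data, so glBufferData uploads almost nothing; a std::vector avoids that because the size is queried explicitly. (Note also that the posted elements is declared GLfloat while glDrawElements reads GL_UNSIGNED_INT; index data must actually be unsigned integers.) A minimal sketch, assuming GLuint indices:

std::vector<GLuint> elements;
// ... build the index list dynamically ...
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             elements.size() * sizeof(GLuint), // real byte count, not sizeof a pointer
             elements.data(), GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)elements.size(), GL_UNSIGNED_INT, 0);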
I'm trying to draw a cube using indexed draw in OpenGL 3.3. But it does not show up right...
Here is what I tried doing.
GLfloat vertices01[] = {
    -1.0f,  1.0f,  0.0f,
    -1.0f, -1.0f,  0.0f,
     1.0f,  1.0f,  0.0f,
     1.0f, -1.0f,  0.0f,
    -1.0f,  1.0f, -1.0f,
    -1.0f, -1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,
     1.0f, -1.0f, -1.0f
};
unsigned int indices01[] = {
    0, 2, 3, 1,
    2, 6, 7, 3,
    6, 4, 5, 7,
    4, 0, 1, 5,
    0, 4, 6, 2,
    1, 5, 7, 3
};
Mesh* obj3 = new Mesh();
obj3->CreateMesh(vertices01, indices01, 24, 24);
meshList.push_back(obj3);
meshList[0]->RenderMesh();
//in mesh class
indexCount = numOfIndices;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0])*numOfIndices, indices, GL_STATIC_DRAW);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices[0])*numOfVertices, vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glBindVertexArray(VAO);
//bind ibo
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glDrawElements(GL_TRIANGLES,indexCount, GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
The output shows a partial cube where each side has only one triangle, plus a triangle going through its diagonal.
If you use a compatibility profile context, then you can keep your indices and use GL_QUADS instead of GL_TRIANGLES. But that's deprecated (Legacy OpenGL).
Since the primitive type is GL_TRIANGLES, each side of the cube has to be formed by 2 triangles. See Triangle primitives.
Change the index buffer to solve the issue:
unsigned int indices01[] = {
    0, 2, 3,   0, 3, 1,
    2, 6, 7,   2, 7, 3,
    6, 4, 5,   6, 5, 7,
    4, 0, 1,   4, 1, 5,
    0, 4, 6,   0, 6, 2,
    1, 5, 7,   1, 7, 3,
};
An alternative solution would be to use the primitive type GL_TRIANGLE_STRIP and a Primitive Restart index.
Enable primitive restart and define a restart index:
e.g.
glEnable( GL_PRIMITIVE_RESTART );
glPrimitiveRestartIndex( 99 );
Define 2 triangle strips, which are separated by the restart index:
unsigned int indices01[] = {
    0, 1, 2, 3, 6, 7, 4, 5,
    99, // 99 is the restart index
    7, 3, 5, 1, 4, 0, 6, 2
};
int indexCount = 17;
And draw the elements:
glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_INT, 0);
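One caveat worth adding (an editorial note): the restart index must be a value that never occurs as a real index, which is why the example picks 99. On OpenGL 4.3 or later you can avoid choosing one manually:

// GL 4.3+: the restart index is fixed to the maximum value of the index type
// in use (0xFFFFFFFF for GL_UNSIGNED_INT); no glPrimitiveRestartIndex needed.
glEnable(GL_PRIMITIVE_RESTART_FIXED_INDEX);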
Note, the Index buffer binding is stored in the Vertex Array Object.
So it is sufficient to bind the index buffer once, when the VAO is setup:
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
// [...]
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
// [...]
// glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); <---- delete this
glBindVertexArray(0);
Then it is superfluous to bind the index buffer again, before the draw call:
glBindVertexArray(VAO);
// glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO); <---- now this is superfluous
glDrawElements(GL_TRIANGLES,indexCount, GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
I am trying to draw 2 objects in OpenGL. The window/viewport is (0,0,950,1050). I am not sure this is the right way of doing it, but my understanding was: create a VBO/VAO per object, bind it, set the data, and repeat that operation for each object.
Then when the objects are to be drawn:
set the shader
bind the data of the first object we want to draw using its vbo (say vbo1)
do the drawing call
bind the data of the next object we want to draw using its vbo (say vbo2)
do the drawing call
...
When I do this, I only get the points of the second object drawn on the screen, but with the color of the first object (it's blue instead of red).
My mistake must be obvious to any expert out there. What am I missing?
// BLUE -----------------------------------
GLuint vbo1, vao1;
glGenBuffers(1, &vbo1);
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
float arr1[] = { 10, 10, 10, 110, 110, 110, 110, 10 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr1, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao1);
glBindVertexArray(vao1);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
// RED -----------------------------------
GLuint vbo2, vao2;
glGenBuffers(1, &vbo2);
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
float arr2[] = { 400, 400, 400, 800, 800, 800, 800, 400 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr2, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao2);
glBindVertexArray(vao2);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
while (!glfwWindowShouldClose(window))
{
    glClearColor(1, 1, 1, 0.1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(pointShader);
    GLint loc;
    loc = glGetUniformLocation(pointShader, "pointColor");
    // it draws this one but with the color blue!
    glBindBuffer(GL_ARRAY_BUFFER, vbo2);
    float ptColor2[3] = { 1, 0, 0 };
    glUniform3fv(loc, 1, ptColor2);
    glDrawArrays(GL_POINTS, 0, 4);
    // it doesn't draw this one???
    glBindBuffer(GL_ARRAY_BUFFER, vbo1);
    float ptColor1[3] = { 0, 0, 1 };
    glUniform3fv(loc, 1, ptColor1);
    glDrawArrays(GL_POINTS, 0, 4);
    glfwSwapBuffers(window);
    glfwWaitEvents();
}
EDIT 2 WORKING CODE
Thank you very much to both Reto Koradi and Datenwolf; combining the answers helped me arrive at the right one. It's sad these things are not explained properly in books. I hope this post helps other beginners (sorry if the result is a bit misleading; I swapped the colors between asking the question and getting the answer).
// RED -----------------------------------
GLuint vbo1, vao1;
glGenVertexArrays(1, &vao1);
glBindVertexArray(vao1);
glGenBuffers(1, &vbo1);
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
float arr1[] = { 10, 10, 10, 110, 110, 110, 110, 10 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr1, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
// BLUE -----------------------------------
GLuint vbo2, vao2;
glGenVertexArrays(1, &vao2);
glBindVertexArray(vao2);
glGenBuffers(1, &vbo2);
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
float arr2[] = { 400, 400, 400, 800, 800, 800, 800, 400 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr2, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
while (!glfwWindowShouldClose(window))
{
    float ratio;
    int width, height;
    glfwGetFramebufferSize(window, &width, &height);
    glClearColor(1, 1, 1, 0.1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(pointShader);
    GLint loc;
    loc = glGetUniformLocation(pointShader, "pointColor");
    // red
    glBindVertexArray(vao1);
    float ptColor1[3] = { 1, 0, 0 };
    glUniform3fv(loc, 1, ptColor1);
    glDrawArrays(GL_POINTS, 0, 4);
    // blue
    glBindVertexArray(vao2);
    float ptColor2[3] = { 0, 0, 1 };
    glUniform3fv(loc, 1, ptColor2);
    glDrawArrays(GL_POINTS, 0, 4);
    glfwSwapBuffers(window);
    glfwWaitEvents();
}
EDIT 3
Note that the VAO/VBO declaration order from the first code fragment would also work, so the version above and the one below are both valid:
// RED -----------------------------------
GLuint vbo1, vao1;
glGenBuffers(1, &vbo1);
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
float arr1[] = { 10, 10, 10, 110, 110, 110, 110, 10 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr1, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao1);
glBindVertexArray(vao1);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
// BLUE -----------------------------------
GLuint vbo2, vao2;
glGenBuffers(1, &vbo2);
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
float arr2[] = { 400, 400, 400, 800, 800, 800, 800, 400 };
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 8, arr2, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao2);
glBindVertexArray(vao2);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
The only thing that was really missing in the code was glBindVertexArray.
The problem is in your draw loop, where you call:
glBindBuffer(GL_ARRAY_BUFFER, vbo2);
It does not matter which GL_ARRAY_BUFFER is currently bound when you make the draw call. The correct buffer needs to be bound when you call glVertexAttribPointer(), which you correctly did in your setup code.
The VAOs track all your vertex setup state. So before each draw call, you need to bind the corresponding VAO by calling glBindVertexArray(), instead of the glBindBuffer() calls you have in the posted code:
glBindVertexArray(vao2);
glUniform3fv(...);
glDrawArrays(...);
glBindVertexArray(vao1);
glUniform3fv(...);
glDrawArrays(...);
You must create and bind the Vertex Array Object before specifying the vertex attribute data locations (glVertexAttribPointer). The VAO kind of takes ownership of the data that belongs to the VBO currently bound when making those calls.
Assuming you have a core profile context: when you attempt to set up that first buffer object, there is no VAO bound yet, so its vertex attribute setup fails and nothing gets drawn for it at all. What you do see drawn is the buffer you intended to attach to the second VAO, but because the order of operations was wrong, it ended up in the first VAO.
I'm currently trying to get my head around VBOs and I'm running into some problems.
I'm using an interleaved array with position, colors, and normals. However, when I go to draw, the display is just white.
This is the structure of my array:
GLfloat position[3];
GLfloat normal[3];
GLfloat color[4];
Here's the code:
Initialization:
glGenBuffers(1, &arrays[0]);
glBindBuffer(GL_ARRAY_BUFFER, arrays[0]);
glBufferData(GL_ARRAY_BUFFER, 125*10*36*sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &vaoID[1]);
glBindVertexArray(vaoID[1]);
glBindBuffer(GL_ARRAY_BUFFER, vaoID[1]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 40, ((void*)0));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 40, ((void*)12));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 40, ((void*)24));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindVertexArray(0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Draw:
glPushMatrix();
glBindVertexArray(vaoID[1]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 125*36);
glPopMatrix();
I'm making a 5x5x5 cube of cubes that range in color from black to white. However, on draw, this is all I'm getting:
I see a problem with what you've posted above. You've defined your colors as being floats in the struct, but you're telling glVertexAttribPointer() that your colors are unsigned bytes. It should probably be something like this:
glVertexAttribPointer(2, 4, GL_FLOAT, GL_TRUE, 40, ((void*)24));
And that assumes that you're actually putting floats into those memory locations. How are you setting them?
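An editorial sketch of a pattern that avoids this class of mismatch: define the interleaved vertex as a struct and derive the stride and offsets with sizeof/offsetof, so the types passed to glVertexAttribPointer cannot drift away from what is actually in memory (Vertex here is a hypothetical name mirroring the layout posted above):

#include <cstddef> // offsetof

struct Vertex {
    GLfloat position[3];
    GLfloat normal[3];
    GLfloat color[4];
};

// Stride and offsets computed from the struct itself (sizeof(Vertex) is
// 40 bytes here, matching the hard-coded value in the question).
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, color));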