I am trying to render some trees with instancing. This is rather weird: before going to sleep last night I checked the code and it was in a running state, but when I got up this morning it crashes when I call glVertexAttribDivisor. I haven't changed any code since yesterday.
Here is how I am sending the data to the GPU for instancing:
glGenBuffers(1, &iVBO);
glBindBuffer(GL_ARRAY_BUFFER, iVBO);
glBufferData(GL_ARRAY_BUFFER, ml_instance->i_positions.size() * sizeof(glm::vec4), NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, ml_instance->i_positions.size() * sizeof(glm::vec4), &ml_instance->i_positions[0]);
And then in the vertex specification:
glBindBuffer(GL_ARRAY_BUFFER, iVBO);
glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(i_positions);
glVertexAttribDivisor(i_positions,1); // **THIS IS WHERE THE PROGRAM CRASHES**
glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0,TREES_INSTANCE_COUNT);
I have checked ml_instance->i_positions; it has all the data that needs to be rendered.
I have checked the location of i_positions in the vertex shader; it is the same as what I have defined there.
I am a little out of ideas here; everything looks pretty much fine. What am I missing?
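One sanity check worth running just before the divisor call (a hedged sketch; shader_program is a placeholder for your linked program, and it assumes a GLEW-style loader, where glVertexAttribDivisor is a function pointer that can be null if the created context doesn't expose it):

if (!glVertexAttribDivisor) {
    // Entry point never loaded -- e.g. the context version/profile changed,
    // or the loader was initialized before the context was made current.
    printf("glVertexAttribDivisor is not available in this context\n");
}
GLint loc = glGetAttribLocation(shader_program, "i_positions");
if (loc < 0) {
    // The attribute was optimized out, or the name doesn't match the shader.
    printf("attribute i_positions is not active in the linked program\n");
}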
I'm currently learning OpenGL, but I'm having some problems understanding how the different buffers relate to the VAO. In the following code, I create one VAO and two buffers (a VBO for the vertex positions and an EBO for the vertex order). At this point, if I understood it correctly, there is no connection between the VAO, VBO and EBO; I have basically just created one VAO and two buffers.
Now with glBindVertexArray and glBindBuffer I tell the OpenGL state machine my currently used VAO and assign my created buffers to a specific buffer type. With glBufferData I then load my data into the buffers. As I understand it, the VAO is still empty at this point and only gets data loaded into it with the glVertexAttribPointer function. Now, OpenGL interprets the data from GL_ELEMENT_ARRAY_BUFFER and loads them into the VAO at index 0. With glEnableVertexAttribArray(0) I then specify that the data at index 0 of my VAO (the vertex positions) should be used in the rendering process.
unsigned int VAO, VBO, EBO;
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), 0);
glEnableVertexAttribArray(0);
In my main loop, I specify a shader for my triangles, bind a VAO for rendering and then draw the elements followed by swapping the front and back buffers.
glUseProgram(shader);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 9, GL_UNSIGNED_INT, 0);
glfwSwapBuffers(window);
But how exactly does OpenGL know the order of the vertices? I have only uploaded the vertex position data into my VAO, but not the order (element) data. Does glVertexAttribPointer perhaps take into account the currently bound GL_ELEMENT_ARRAY_BUFFER when loading data into the VAO? What am I missing here?
"Now, OpenGL interprets the data from GL_ELEMENT_ARRAY_BUFFER and loads them into the VAO at index 0."
No. Well sort of, but not in the way you seem to be describing it.
Most buffer binding points bind a buffer to the OpenGL context. The GL_ELEMENT_ARRAY_BUFFER binding point is different. It attaches the buffer to the VAO that itself is bound to the context. That is, the act of calling glBindBuffer(GL_ELEMENT_ARRAY_BUFFER) itself is what creates an association between the VAO and the index buffer. This association has nothing to do with glVertexAttribPointer.
This also means that if you don't have a VAO bound to the context, you cannot bind anything to GL_ELEMENT_ARRAY_BUFFER.
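You can see the association directly with a quick query (a minimal sketch, reusing the VAO and EBO names from your code; GL_ELEMENT_ARRAY_BUFFER_BINDING reports the index buffer attached to the currently bound VAO):

GLint bound = 0;
glBindVertexArray(VAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);   // attaches EBO to the bound VAO
glBindVertexArray(0);                         // the VAO keeps that attachment
glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &bound);
// bound == 0 here: the index buffer binding "left" together with the VAO
glBindVertexArray(VAO);
glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &bound);
// bound == EBO again: rebinding the VAO restored its index buffer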
I'm in the process of learning OpenGL (3.3) with C++. I can draw simple polygons using Vertex Buffer Objects, Vertex Array Objects, and Index Buffers, but the way I code them still feels a little like magic, so I'd like to understand what's going on behind the scenes.
I have a basic understanding of the binding system OpenGL uses. I also understand that the VAO itself contains the bound index buffer (as specified here), and if I'm not mistaken the bound vertex buffer too.
By this logic, I first bind the VAO, then create my VBO (which behind the scenes is bound "inside" the VAO?), and I can perform operations on the VBO and it all works. But when I come to unbind the buffers after I'm done setting things up, it seems that I must unbind the VAO first and then the vertex and index buffers, so the unbinding occurs in the same order as the binding, not in reverse order.
That is extremely counterintuitive. So my question is: why does the order of unbinding matter?
(To clarify: I rebind the VAO anyway before calling glDrawElements, so I don't think I even need to unbind anything at all; it's mostly for learning purposes.)
Here's the relevant code:
GLuint VAO, VBO, IBO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0); //unbinding VAO here is fine
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//if I unbind VAO here instead, it won't render the shape anymore
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, 9, GL_UNSIGNED_INT, 0);
    glfwSwapBuffers(window);
}
I believe I might have figured it out, with one correction: the buffer that matters here is the index buffer, because the GL_ELEMENT_ARRAY_BUFFER binding is the one stored inside the VAO. Looking at this unbinding order
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //unbinding IBO while the VAO is still bound
glBindVertexArray(0); //unbinding VAO
it clearly "ruins" my polygons, because unbinding the index buffer while my VAO is still bound detaches it from the VAO, which is supposed to "contain" a reference to it. The VAO no longer knows about the index buffer. (Unbinding GL_ARRAY_BUFFER, by contrast, is harmless even with the VAO bound: the vertex buffer reference was already captured by the glVertexAttribPointer call, and the GL_ARRAY_BUFFER binding itself is not part of VAO state.)
Unbinding the VAO first allows me to then unbind any buffers without affecting it. This is the correct order:
glBindVertexArray(0); //unbinding VAO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //unbinding IBO
glBindBuffer(GL_ARRAY_BUFFER, 0); //unbinding VBO
OpenGL's binding system is a little hard to grasp, especially when different things depend on one another, but I think this explanation is correct. Hopefully it helps some other learners in the future.
I have a bunch of 3D models from the stereo camera on the Curiosity rover driving around on Mars. The models are loaded from disk, multiple models at the same time, asynchronously. Now I need to upload this bunch of models to the GPU asynchronously (at runtime) to prevent the stall in the rendering loop that is happening right now.
The way a model is uploaded right now:
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vbo);
glGenBuffers(1, &_ibo);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _vertices.size() * sizeof(Vertex), _vertices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<const GLvoid*>(offsetof(Vertex, location)));
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<const GLvoid*>(offsetof(Vertex, tex)));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
reinterpret_cast<const GLvoid*>(offsetof(Vertex, normal)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _indices.size() * sizeof(int), _indices.data(), GL_STATIC_DRAW);
glBindVertexArray(0);
And the way it is rendered:
glBindVertexArray(_vaoID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ibo);
glDrawElements(_mode, static_cast<GLsizei>(_indices.size()), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
Right now I am uploading about 20 models to the GPU each time in the render loop (whenever models have been loaded from disk and are ready to be uploaded), and it is way too much: the application stalls for about 50-400 ms depending on the number of vertices/normals/indices in the models.
Ping-ponging between VBOs (updating one, reading from the other) will probably not work in the current pipeline, since each model has an arbitrary number of vertices/normals/indices and needs to be connected to one specific texture.
I'm looking for any solution that improves the performance.
Edit 1
I have now successfully created pointers to my VBO and IBO; however, I'm confused about how I'm supposed to unmap the buffers when they are returned to the main thread. My first thought was to unmap the VBO and IBO individually, like this:
for (int i = 0; i < _vertices.size(); i++) {
    _vertexBufferData[i] = _vertices.at(i);
}
glUnmapBuffer(GL_ARRAY_BUFFER);
for (int k = 0; k < _indices.size(); k++) {
    _indexBufferData[k] = _indices.at(k);
}
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
But I get an error saying that the buffer is already unbound or unmapped. Do I only have to do the first unmap?
The problem you describe is called Buffer Object Streaming.
In a few words: say you get a trigger that a specific model has to be loaded. Then:
1. Create the VAO and VBOs as you described, but without loading any data yet. You use glBufferStorage for this.
2. Map the buffer into your address space and launch a worker thread to fill it with the data. The thread does all the time-consuming disk I/O and the filling of the mapped memory region.
3. When the worker thread is done, notify the main thread.
4. In the main thread, once you get the notification, unmap the buffers and mark the VAO as loaded for subsequent rendering (see the sketch below for steps 1, 2 and 4).
Obviously, between steps 1 and 4 your main thread continues rendering as usual, just without rendering that pending VAO.
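A minimal sketch of steps 1, 2 and 4, assuming GL 4.4 / ARB_buffer_storage and a persistently mapped buffer (vbo_size and the notification mechanism are placeholders):

// Step 1: allocate immutable storage, no data yet (GL 4.4 / ARB_buffer_storage).
GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferStorage(GL_ARRAY_BUFFER, vbo_size, NULL, flags);

// Step 2: map once; a persistent mapping stays valid while the worker writes.
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, vbo_size, flags);
// Hand 'ptr' to the loader thread; it does the disk I/O and the memcpy,
// then signals the main thread (atomic flag, message queue, ...).

// Step 4, main thread, after the notification: with a persistent + coherent
// mapping no unmap is required before drawing; with a plain (non-persistent)
// mapping you would bind the buffer and call glUnmapBuffer(GL_ARRAY_BUFFER) here.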
It is clear to me that the direct API (glBegin(); glVertex(); glEnd();) is not efficient for rendering complex scenes and real-world applications like games.
But for debugging and testing small things it is very useful, e.g. for debugging the physics of objects in a scene (visualization of vectors, like the velocity vector, forces, ...).
You may ask: why not fall back to OpenGL 1.x for such small things? Because the rest of the program uses OpenGL 3.0 features, and because I really like fragment shaders.
Is there any way to use it with OpenGL 3.0 and higher?
Or what is the strategy for debugging if I don't want to write the whole ceremony like
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(2, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), diamond, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), colors, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_LINES, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteProgram(shaderprogram);
glDeleteBuffers(2, vbo);
glDeleteVertexArrays(1, &vao);
for every single arrow I want to plot?
You can still use all the legacy features with OpenGL 3.2 or later if you create a context with the Compatibility Profile. Only the Core Profile has the legacy features removed.
Most platforms still seem to support the Compatibility Profile. One notable exception is Mac OS, where 3.x contexts are only supported with the Core Profile.
Up to OpenGL 3.1, there was only a single profile. The split into Compatibility and Core happened in 3.2.
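With GLFW, for example, requesting a Compatibility Profile context might look like this (a sketch; note that GLFW only lets you pick a profile for 3.2+ contexts):

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
GLFWwindow* window = glfwCreateWindow(800, 600, "debug", NULL, NULL);
// Returns NULL where the Compatibility Profile is unsupported (e.g. Mac OS).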
Using the newer interfaces isn't really that bad once you get the hang of it. If your debugging use is relatively repetitive, you could always write some utility classes that provide a more convenient interface, like the sketch below.
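For instance, a tiny line batcher along these lines (a hedged sketch, core-profile friendly; it assumes std::vector and glm are available, and a simple debug shader with position at attribute location 0 and color at location 1) makes one-off debug arrows almost as convenient as immediate mode:

// Minimal debug-line batcher: collect segments, upload once per frame, draw, clear.
struct DebugLines {
    GLuint vao = 0, vbo = 0;
    std::vector<GLfloat> data; // x,y,z, r,g,b per vertex

    void init() {
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                              (void*)(3 * sizeof(GLfloat)));
        glEnableVertexAttribArray(1);
        glBindVertexArray(0);
    }

    void line(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c) {
        for (const glm::vec3& p : {a, b}) {
            data.insert(data.end(), {p.x, p.y, p.z, c.r, c.g, c.b});
        }
    }

    void draw() { // call once per frame, with the debug shader already bound
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(GLfloat),
                     data.data(), GL_STREAM_DRAW);
        glDrawArrays(GL_LINES, 0, (GLsizei)(data.size() / 6));
        data.clear();
    }
};

Plotting a velocity vector then becomes a single call, e.g. dbg.line(pos, pos + vel, glm::vec3(1.0f, 0.0f, 0.0f));, followed by one dbg.draw() at the end of the frame.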
I am trying to render points from a VBO and an Element Buffer Object with glDrawRangeElements.
The VBO and EBO are created like this:
glGenBuffers(1, &vertex_buffer);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);
As you can see, they do not have any "static" data.
I use glMapBuffer to populate the buffers and then I render them with glDrawRangeElements.
Problem:
Concretely, what I want to do is make a terrain with continuous LOD.
The code I use and posted comes largely from Ranger Mk2 by Andras Balogh.
My problem is this: when I render the triangle strip, one of the three points of some triangles ends up somewhere it should not be.
For example, this is what I get in wireframe mode: http://i.stack.imgur.com/lCPqR.jpg
And this is what I get in point mode (note the column stretching upwards, which is made of the misplaced points): http://i.stack.imgur.com/CF04p.jpg
Before you ask me to go to the post named "Rendering with glDrawRangeElements() not working properly", I wanted to let you know that I already went there.
Code:
So here is the render process:
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
And just before that, I do this (in the pre_render function):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
vertex_array = (v4f*)(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
index_array = (u32*)(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
//[...] Populate the buffers
glUnmapBuffer(GL_ARRAY_BUFFER);
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
PS: When I render the terrain like this:
glBegin(GL_TRIANGLE_STRIP);
printf("%u %u\n", va_index, ia_index);
for (u32 i = 0; i < va_index; ++i) {
    //if(i <= va_index)
    glVertex4fv(&vertex_array[i].x);
}
glEnd();
strangely, it works (though part of the triangles are not rendered, but that is another problem).
So my question is: how can I make glDrawRangeElements work properly?
If you need any more information, please ask; I will be glad to answer.
Edit: I use Qt Creator as my IDE, with MinGW 4.8 on Windows 7. My graphics card (from Nvidia) supports OpenGL 4.4.
Not sure if this is causing your problem, but I notice that you have a mixture of API calls for built-in vertex attributes and generic vertex attributes.
Calls like glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray are used for generic vertex attributes.
Calls like glVertexPointer, glEnableClientState and glDisableClientState are used for built-in vertex attributes.
You need to decide which approach you want to use, and then use a consistent set of API calls for that approach. If you use the fixed rendering pipeline, you need to use the built-in attributes. If you write your own shaders with the compatibility profile, you can use either. If you use the core profile, you need to use generic vertex attributes.
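For example, since your render function already uses glVertexPointer, the consistent built-in-attribute version of it would simply drop the generic-attribute call (a sketch based on your posted code):

glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
// no glEnableVertexAttribArray(0) here -- that call belongs to the generic path
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);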
This call also looks suspicious, since it specifies a size of 3, while the rest of your code suggests that you're using positions with 4 coordinates:
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
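If those really are 4-component positions, the matching call would be:

glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (char*)(NULL));

(Or, if you settle on the built-in path, drop it entirely in favor of the glVertexPointer(4, ...) call shown above.)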