Sample code:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glEnableVertexAttribArray(0);
So the "0" on lines 4 (first argument) and 5 refer to an arbitrary identifier/location that we have chosen. In GLSL, if we want to refer to this data, we just have to refer to the same ID:
layout(location=0) in vec4 in_Position;
However, in a different example program, I've seen it done differently, with no reference to "layout locations". Instead, we do something like so:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glBindAttribLocation(shaderProgramHandle, 0, "in_position");
6. glEnableVertexAttribArray(0);
We've added an extra step (5) where we seem to bind this attribute pointer to a specific variable in a specific program. And then in our GLSL we just write this instead:
in vec3 in_position;
With no reference to a location.
If I'm not mistaken, these two programs do essentially the same thing, so why the difference? What are the pros and cons of each method?
(I've just started learning OpenGL 3.x)
There is no such thing as passing a VAO to a shader. A VAO simply establishes how vertex attributes are pulled from buffer objects for rendering.
The second example doesn't do anything unless shaderProgramHandle has not been linked yet: glBindAttribLocation only works before program linking. Once the program has been linked, you can't change where it gets its attributes from.
In any case, your real question is why some people use glBindAttribLocation(..., 0) instead of putting layout(location = X) in their shaders. The reason is very simple: the layout(location) syntax is (relatively) new. glBindAttribLocation dates back to the very first version of GLSL's OpenGL interface, in the ARB_vertex_shader extension from around 2003. layout(location) comes from the much more recent ARB_explicit_attrib_location, which is only core in GL 3.3 and above, and 3.3 only came out in 2010. So naturally more material will talk about the old way.
The "pros and cons" of each are quite obvious. From a purely practical standpoint, layout(location), being new, requires more recent drivers (though it does not require GL 3.3. NVIDIA's 6xxx+ hardware supports ARB_explicit_attrib_location despite being only 2.1). glBindAttribLocation works within source code, while layout(location) is built into the GLSL shader itself. So if you need to decide which attributes use which indices at runtime, it's a lot harder to do it with layout(location) than without it. But if, like most people, you want to control them from the shader, then layout(location) is what you need.
Related
I have OpenGL 4.1 code using a VAO and VBOs. Generation of the buffer and array objects happens properly; however, as soon as I want to draw my vertices, I get an INVALID_OPERATION (code 1282) error. One of the possible explanations is that "the shader is not compiled". Hence my question: is it mandatory to write and compile my own vertex and fragment shaders, or can I do without?
As reference, my code:
Preparing:
glGenVertexArrays(1, vaoID); // Create new VAO
glBindVertexArray(vaoID);
glGenBuffers(1, vboID); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vboID); // Bind vboID as the current vertex buffer
glBufferData(GL_ARRAY_BUFFER, sizeOfVertices, vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, floatsPerPosition, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, eboID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, eboID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeOfIndices, indices, GL_STATIC_DRAW);
// reset bindings for VAO, VBO and EBO
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Rendering:
glBindVertexArray(vaoID); // No Error
glDrawArrays(GL_TRIANGLES, 0, 6); // Generates Error 1282
glBindVertexArray(0);
In a core OpenGL profile, since 3.2, the use of a program object is not optional.
However, the only shader stage that is mandatory is the vertex shader. The use of a fragment shader is optional, but it would then only be useful for depth or stencil operations (since no fragment colors would be generated).
In a compatibility OpenGL profile, you can render using the fixed-function pipeline, but you cannot use user-defined vertex attributes. You have to use the fixed-function attributes, set up with glVertexPointer, glColorPointer, etc. Note that NVIDIA plays fast-and-loose with the spec in this regard, mapping user-defined attributes to fixed-function ones.
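For instance, a rough sketch of such a fixed-function setup (the buffer name, offset and vertex count are placeholders):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0); // fixed-function position array
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, 0, (void*)colorOffset); // fixed-function color array
glDrawArrays(GL_TRIANGLES, 0, vertexCount);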
According to the OpenGL specification, a shader must always be used when rendering geometry. When you have no shader enabled, the result of the draw call is undefined.
However, from what I can tell, this is a bit murkier in practice. At least most NVIDIA and Intel drivers seem to ship with a kind of "default shader" enabled by default, which masks the undefined behaviour.
These shaders have pre-specified vertex attribute input indices (for instance, NVIDIA uses 0 for vertices, 4 for vertex colours, etc.) and implement what you could consider a default pipeline.
Note, however, that while these default shaders exist, their implementation can vary significantly between vendors, or they might not exist at all for a specific vendor or driver version.
You should therefore always use a shader when drawing geometry in OpenGL 4.
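As a rough sketch of the minimum that satisfies this requirement (error checking omitted; the shader sources and names are only placeholders):
const char* vsSource =
    "#version 330 core\n"
    "layout(location = 0) in vec4 in_Position;\n"
    "void main() { gl_Position = in_Position; }\n";
const char* fsSource =
    "#version 330 core\n"
    "out vec4 out_Color;\n"
    "void main() { out_Color = vec4(1.0); }\n";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSource, NULL);
glCompileShader(vs); // check GL_COMPILE_STATUS in real code
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSource, NULL);
glCompileShader(fs);
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program); // check GL_LINK_STATUS in real code
glUseProgram(program); // must be current before glDrawArrays/glDrawElements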
It is clear to me that the immediate-mode API (glBegin(); glVertex(); glEnd();) is not effective for rendering complex scenes and real-world applications like games.
But for debugging and testing small things it is very useful, e.g. for debugging the physics of objects in a scene (visualization of vectors, like velocity, force, ...).
You may ask: why not fall back to OpenGL 1.x for such small things? Because the rest of the program uses OpenGL 3.0 features, and because I really like fragment shaders.
Is there any way to use it with OpenGL 3.0 and higher?
Or what is the strategy for debugging if I don't want to write the whole ceremony like
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(2, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), diamond, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), colors, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_LINES, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteProgram(shaderprogram);
glDeleteBuffers(2, vbo);
glDeleteVertexArrays(1, &vao);
for every single arrow I want to plot?
You can still use all the legacy features with OpenGL 3.2 or later if you create a context with the Compatibility Profile. Only the Core Profile has the legacy features removed.
Most platforms still seem to support the Compatibility Profile. One notable exception is Mac OS, where 3.x contexts are only supported with the Core Profile.
Up to OpenGL 3.1, there was only a single profile. The split into Compatibility and Core happened in 3.2.
Using the newer interfaces isn't really that bad once you get the hang of it. If your debugging use is relatively repetitive, you could always write some nice utility classes that provide a more convenient interface.
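As a sketch of that idea (all names invented; assumes a core-profile context and that a suitable shader program is already bound), a tiny helper that owns one VAO/VBO pair and streams two endpoints per line:
struct DebugLineDrawer {
    GLuint vao, vbo;
    void init() {
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, 6 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(0);
        glBindVertexArray(0);
    }
    // draws one line from a to b, e.g. a velocity or force vector
    void drawLine(const GLfloat a[3], const GLfloat b[3]) {
        const GLfloat pts[6] = { a[0], a[1], a[2], b[0], b[1], b[2] };
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(pts), pts);
        glBindVertexArray(vao);
        glDrawArrays(GL_LINES, 0, 2);
        glBindVertexArray(0);
    }
};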
I am trying to render points from a VBO and an Element Buffer Object with glDrawRangeElements.
The VBO and EBO are instantiated like this:
glGenBuffers(1, &vertex_buffer);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);
As you can see, they do not have any "static" data.
I use glMapBuffer to populate the buffers and then I render them with glDrawRangeElements.
Problem:
Concretely, what I want to do is make a terrain with continuous LOD.
The code I use and posted mostly comes from Ranger Mk2 by Andras Balogh.
My problem is this: when I render the triangle strip, one of the three points of some triangles ends up somewhere it should not be.
For example,
this is what I get in wireframe mode -> http://i.stack.imgur.com/lCPqR.jpg
and this is what I get in point mode (note the column that stretches up: those are the misplaced points) -> http://i.stack.imgur.com/CF04p.jpg
Before you point me to the post named "Rendering with glDrawRangeElements() not working properly", I wanted to let you know that I have already read it.
Code:
So here is the render process:
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
and just before I do this (pre_render function):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
vertex_array = (v4f*)(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
index_array = (u32*)(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
//[...] Populate the buffers
glUnmapBuffer(GL_ARRAY_BUFFER);
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
PS: When I render the terrain like this:
glBegin(GL_TRIANGLE_STRIP);
printf("%u %u\n", va_index, ia_index);
for(u32 i = 0; i < va_index; ++i){
//if(i <= va_index)
glVertex4fv(&vertex_array[i].x);
}
glEnd();
strangely, it works (some of the triangles are not rendered, but that is another problem).
So my question is how can I make glDrawRangeElements function properly?
If you need any more information please ask, I will be glad to answer.
Edit: I use Qt Creator as my IDE, with MinGW 4.8 on Windows 7. My graphics card (NVIDIA) supports OpenGL 4.4.
Not sure if this is causing your problem, but I notice that you have a mixture of API calls for built-in vertex attributes and generic vertex attributes.
Calls like glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray are used for generic vertex attributes.
Calls like glVertexPointer, glEnableClientState and glDisableClientState are used for built-in vertex attributes.
You need to decide which approach you want to use, and then use a consistent set of API calls for that approach. If you use the fixed rendering pipeline, you need to use the built-in attributes. If you write your own shaders with the compatibility profile, you can use either. If you use the core profile, you need to use generic vertex attributes.
This call also looks suspicious, since it specifies a size of 3, where the rest of your code suggests that you're using positions with 4 coordinates:
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
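For the generic-attribute approach, the render path could look roughly like this (a sketch reusing your names; it assumes your shader reads a vec4 position at attribute location 0, matching the v4f vertex data):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0); // 4 components to match v4f
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableVertexAttribArray(0);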
I came from this question:
opengl vbo advice
I use OpenGL 3.3 and want to avoid deprecated features. I'm using Assimp to import my Blender models, but I'm a bit confused as to how much I should split them up in terms of VAOs and VBOs.
First off, a little side question: I use glDrawElements; does that mean I cannot interleave my vertex attributes, or can the VAO work out where each attribute lives from the glVertexAttribPointer setup together with the glDrawElements offset?
My main question, I guess, boils down to how I should structure my VAOs/VBOs for a model with multiple moving parts and multiple meshes per part.
Each node in Assimp can contain multiple meshes, where each mesh has a texture, vertices, normals, material, etc., and the nodes contain the transformations. Say I have a ship with a cannon turret on it, and I want to be able to rotate the turret. Does this mean I should make the ship node a separate VAO, with VBOs for each mesh containing its attributes (or multiple VBOs, etc.)?
I guess it goes like
draw(ship); //call to draw ship VAO
pushMatrix(turretMatrix) //updating uniform modelview matrix for the shader
draw(turret); //call to draw turret VAO
I don't fully understand UBOs (uniform buffer objects) yet, but it seems I can pass in multiple uniforms; will that help me contain a full model with movable parts in a single VAO?
First off, a VAO only "remembers" the last vertex attribute bindings (and the VBO binding for an index buffer (the GL_ELEMENT_ARRAY_BUFFER_BINDING), if there is one). So it does not remember the offsets you pass to glDrawElements(); you specify those later, when drawing with the VAO. It also does not prevent you from using interleaved vertex arrays. Let me try to explain:
GLuint vbo[3];
glGenBuffers(3, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, size0, data0, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, size1, data1, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, size2, data2, GL_STATIC_DRAW);
// create some buffers and fill them with data
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// create a VAO
{
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // not saved in VAO
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), NULL); // this is VAO saved state
glEnableVertexAttribArray(0); // this is VAO saved state
// sets up one vertex attrib array from vbo[0] (say positions)
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); // not saved in VAO
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), NULL); // this is VAO saved state
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (const void*)(3 * sizeof(float))); // this is VAO saved state (texcoords start after the 3 normal floats)
glEnableVertexAttribArray(1); // this is VAO saved state
glEnableVertexAttribArray(2); // this is VAO saved state
// sets up two more VAAs from vbo[1] (say normals interleaved with texcoords)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]); // this is VAO saved state
// uses the third buffer as the source for indices
}
// set up state that VAO "remembers"
glBindVertexArray(0); // bind different vaos, etc ...
Later ...
glBindVertexArray(vao); // bind our VAO (so we have VAAs 0, 1 and 2 as well as index buffer)
glDrawElements(GL_TRIANGLE_STRIP, 57, GL_UNSIGNED_INT, NULL);
glDrawElements(GL_TRIANGLE_STRIP, 23, GL_UNSIGNED_INT, (const void*)(57 * sizeof(unsigned int)));
// draws two parts of the mesh as triangle strips
So you see: you can draw interleaved vertex arrays with glDrawElements, using a single VAO and one or more VBOs.
To answer the second part of your question: you can either have different VAOs and VBOs for different parts of the mesh (so drawing separate parts is easy), or you can fuse everything into one VAO/VBO pair (so you need not call glBind*() often) and use multiple glDraw*() calls to draw the individual parts (as in the code above - imagine the first glDrawElements() draws the ship and the second draws the turret; you just update some matrix uniform between the calls).
Because shaders can hold multiple modelview matrices in uniforms, you can also encode a mesh id as another vertex attribute and let the vertex shader choose which matrix to use to transform the vertex, based on that attribute. The idea can be extended to using multiple matrices per vertex, with a weight assigned to each matrix; this is commonly used when animating organic objects such as a player character (look up "skinning").
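A sketch of the mesh-id idea in GLSL (names and attribute locations invented for illustration):
#version 330 core
layout(location = 0) in vec4 in_Position;
layout(location = 3) in float in_MeshId; // 0 = hull, 1 = turret, ...
uniform mat4 modelViewProjection[2]; // one matrix per movable part
void main() {
    gl_Position = modelViewProjection[int(in_MeshId)] * in_Position;
}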
As for uniform buffer objects, their main advantage is that you can pack a lot of data into them and that they can easily be shared between shaders (just bind the UBO to any shader that is able to use it). There is no real advantage in using them in your case, unless you have objects with 1000s of matrices.
Also, I wrote the code above from memory, so let me know if there are any errors or problems.
@theswine
Not binding this during VAO initialization causes my program to crash, but binding it after binding the VAO causes it to run correctly. Are you sure this isn't saved in the VAO?
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // not saved in VAO
(BTW: sorry for bringing up an old topic; I just thought this could be useful to others, since this post sure was! Which reminds me: thank you!!)
Is it possible to store different vertex attributes in different vertex buffers?
All the examples I've seen so far do something like this
float data[] =
{
//position
v1x, v1y, v1z,
v2x, v2y, v2z,
...
vnx, vny, vnz,
//color
c1r, c1g, c1b,
c2r, c2g, c2b,
...
cnr, cng, cnb,
};
GLuint buffname;
glGenBuffers(1, &buffname);
glBindBuffer(GL_ARRAY_BUFFER, buffname);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
And the drawing is done something like this:
glBindBuffer(GL_ARRAY_BUFFER, buffname);
glEnableVertexAttribArray(position_location);
glEnableVertexAttribArray(color_location);
glVertexAttribPointer(position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribPointer(color_location, 3, GL_FLOAT, GL_FALSE, 0, (void*)(n * sizeof(float))); // colors start after the n position floats
glDrawArrays(GL_TRIANGLES, 0, n/3);
glDisableVertexAttribArray(position_location);
glDisableVertexAttribArray(color_location);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Isn't it possible to store position data and color data in different VBOs? The problem is that I don't understand how that would work, because you can't bind two buffers at once, can you?
If there is a simple but inefficient solution, I would prefer it over a more complicated but efficient one, because I am still in the primary learning stage and don't want to complicate things too much.
Also, if what I'm asking is possible, is it a good idea or not?
To clarify: I do understand how I could store different attributes in different VBOs; I don't understand how I would later draw them.
The association between attribute location X and the buffer object that provides that attribute is made with the glVertexAttribPointer command. The way it works is simple, but unintuitive.
At the time glVertexAttribPointer is called (that's the part a lot of people don't get), whatever buffer object is currently bound to GL_ARRAY_BUFFER becomes associated with the attribute X, where X is the first parameter of glVertexAttribPointer.
So if you want to have an attribute that comes from one buffer and an attribute that comes from another, you do this:
glEnableVertexAttribArray(position_location);
glEnableVertexAttribArray(color_location);
glBindBuffer(GL_ARRAY_BUFFER, buffPosition);
glVertexAttribPointer(position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, buffColor);
glVertexAttribPointer(color_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
As for whether you should split attributes into different buffers... I would say that you should only do it if you have a demonstrable need.
For example, let's say you're doing a dynamic height-map, perhaps for some kind of water effect. The Z position of each element changes, but this also means that the normals change. However, the XY positions and the texture coordinates do not change.
Efficient streaming often requires either double-buffering buffer objects or invalidating them (reallocating the storage with glBufferData(..., NULL, ...) or mapping with glMapBufferRange and GL_MAP_INVALIDATE_BUFFER_BIT). Either way only works if the streamed data lives in a different buffer object from the non-streamed data.
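For instance, a common orphaning pattern for the streamed buffer looks roughly like this (a sketch; streamedVbo and streamedSize are placeholders):
glBindBuffer(GL_ARRAY_BUFFER, streamedVbo);
// "orphan" the old storage so the driver need not stall on in-flight draws
glBufferData(GL_ARRAY_BUFFER, streamedSize, NULL, GL_STREAM_DRAW);
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, streamedSize,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
// ... write the new vertex data through ptr ...
glUnmapBuffer(GL_ARRAY_BUFFER);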
Another example of a demonstrable need is if memory is a concern and several objects share certain attribute lists. Perhaps objects have different position and normal arrays but the same color and texture coordinate arrays. Or something like that.
But otherwise, it's best to just put everything for an object into one buffer. Even if you don't interleave the arrays.