Is a compiled shader compulsory in OpenGL 4? - c++

I have OpenGL 4.1 code using a VAO and VBOs. Generation of the buffer and array objects works properly, but as soon as I want to draw my vertices, I get an INVALID_OPERATION (code 1282) error. One of the possible explanations is that "the shader is not compiled". Hence my question: is it a MUST to write and compile my own vertex and fragment shaders, or can I do without?
As reference, my code:
Preparing:
glGenVertexArrays(1, vaoID); // Create new VAO
glBindVertexArray(vaoID);
glGenBuffers(1, vboID); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vboID); // Bind vboID as the current vertex buffer
glBufferData(GL_ARRAY_BUFFER, sizeOfVertices, vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, floatsPerPosition, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, eboID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, eboID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeOfIndices, indices, GL_STATIC_DRAW);
// reset bindings for VAO, VBO and EBO
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Rendering:
glBindVertexArray(vaoID); // No Error
glDrawArrays(GL_TRIANGLES, 0, 6); // Generates Error 1282
glBindVertexArray(0);

In a core OpenGL profile, since 3.2, the use of a program object is not optional.
However, the only shader stage that is mandatory is the vertex shader. A fragment shader is optional, but rendering without one is really only useful for depth or stencil operations, since no fragment colors are generated.
In a compatibility OpenGL profile, you can render using the fixed-function pipeline, but you cannot use user-defined vertex attributes. You have to use the fixed-function attributes, set up with glVertexPointer, glColorPointer, etc. Note that NVIDIA plays fast-and-loose with the spec in this regard, mapping user-defined attributes to fixed-function ones.
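For reference, this is roughly what the minimal mandatory setup looks like: compile a vertex shader, link it into a program object, and make that program current before drawing. This is only a sketch; the GLSL source and variable names are illustrative, and error checking is reduced to the bare minimum.
// Minimal sketch: compile a vertex shader and link a program object.
const char* vertexSrc =
    "#version 410 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);
glCompileShader(vs);
GLint ok = GL_FALSE;
glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);    // check compilation
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glLinkProgram(program);
glGetProgramiv(program, GL_LINK_STATUS, &ok); // check the link as well
glUseProgram(program);                        // must be current before glDrawArrays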

According to the OpenGL specification, a shader must always be used when rendering geometry. When you have no shader enabled, the result of the draw call is undefined.
However, from what I can tell this is a bit murkier in practice. Most NVidia and Intel drivers seem to provide a kind of "default shader" that is active when none is bound, which masks the undefined behaviour.
These shaders have pre-specified vertex attribute input indices (for instance, NVidia uses 0 for vertices, 4 for vertex colours, etc), and implement what you could consider a default pipeline.
Please note here that while these default shaders exist, their implementation can vary significantly between vendors, or might not exist at all for specific vendors or driver versions.
You should therefore always use a shader when drawing geometry in OpenGL 4.

Related

Transform feedback without a framebuffer?

I'm interested in using a vertex shader to process a buffer without producing any rendered output. Here's the relevant snippet:
glUseProgram(program);
GLuint tfOutputBuffer;
glGenBuffers(1, &tfOutputBuffer);
glBindBuffer(GL_ARRAY_BUFFER, tfOutputBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(double)*4*3, NULL, GL_STATIC_READ);
glEnable(GL_RASTERIZER_DISCARD_EXT);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfOutputBuffer);
glBeginTransformFeedbackEXT(GL_TRIANGLES);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 4, GL_FLOAT, GL_FALSE, sizeof(double)*4, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_INT, 0);
This works fine up until the glDrawElements() call, which results in GL_INVALID_FRAMEBUFFER_OPERATION, and glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) returns GL_FRAMEBUFFER_UNDEFINED.
I presume this is because my GL context does not have a default framebuffer, and I have not bound another FBO. But, since I don't care about the rendered output and I've enabled GL_RASTERIZER_DISCARD_EXT, I thought a framebuffer shouldn't be necessary.
So, is there a way to use transform feedback without a framebuffer, or do I need to generate and bind a framebuffer even though I don't care about its contents?
This is actually perfectly valid behavior, as per the specification.
OpenGL 4.4 Core Specification - 9.4.4 Effects of Framebuffer Completeness on Framebuffer Operations
A GL_INVALID_FRAMEBUFFER_OPERATION error is generated by attempts to render to or read from a framebuffer which is not framebuffer complete. This error is generated regardless of whether fragments are actually read from or written to the framebuffer. For example, it is generated when a rendering command is called and the framebuffer is incomplete, even if GL_RASTERIZER_DISCARD is enabled.
To work around this, create an FBO with a 1x1 pixel color attachment and bind that. You must have a complete FBO bound or you get GL_INVALID_FRAMEBUFFER_OPERATION, and one of the rules for completeness is that at least one complete image is attached.
OpenGL 4.3 actually allows you to skirt around this issue by defining an FBO with no attachments of any sort (see: GL_ARB_framebuffer_no_attachments). However, because you are using the EXT form of FBOs and Transform Feedback, I doubt you have a 4.3 implementation.
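As a rough sketch of the workaround (assuming the non-EXT FBO entry points are available; the 1x1 size and names are arbitrary), a tiny color attachment is enough to make the framebuffer complete, even though rasterizer discard means nothing is ever written to it:
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // 1x1 dummy image
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error; the transform feedback draw would otherwise keep failing
}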

glDrawRangeElements of GL_TRIANGLE_STRIP not working properly (using glMapBuffer)

I am trying to render points from a VBO and an Element Buffer Object with glDrawRangeElements.
The VBO and EBO are created like this:
glGenBuffers(1, &vertex_buffer);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);
As you can see, they do not contain any "static" data.
I use glMapBuffer to populate the buffers and then I render them with glDrawRangeElements.
Problem:
Concretely, what I want to do is make a terrain with continuous LOD.
The code I use and posted mostly comes from Ranger Mk2 by Andras Balogh.
My problem is this: when I render the triangle strip, one of the three points of some triangles ends up somewhere it should not be.
For example,
this is what I get in wireframe mode -> http://i.stack.imgur.com/lCPqR.jpg
and this is what I get in point mode (note the column stretching upwards, which consists of the misplaced points) -> http://i.stack.imgur.com/CF04p.jpg
Before you ask me to go to the post named "Rendering with glDrawRangeElements() not working properly", I wanted to let you know that I already went there.
Code:
So here is the render process:
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
and just before I do this (pre_render function):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
vertex_array = (v4f*)(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
index_array = (u32*)(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
//[...] Populate the buffers
glUnmapBuffer(GL_ARRAY_BUFFER);
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
PS: When I render the terrain like this:
glBegin(GL_TRIANGLE_STRIP);
printf("%u %u\n", va_index, ia_index);
for(u32 i = 0; i < va_index; ++i){
//if(i <= va_index)
glVertex4fv(&vertex_array[i].x);
}
glEnd();
strangely, it works (though some of the triangles are not rendered, but that is another problem).
So my question is how can I make glDrawRangeElements function properly?
If you need any more information please ask, I will be glad to answer.
Edit: I use Qt Creator as my IDE, with MinGW 4.8 on Windows 7. My graphics card (Nvidia) supports OpenGL 4.4.
Not sure if this is causing your problem, but I notice that you have a mixture of API calls for built-in vertex attributes and generic vertex attributes.
Calls like glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray are used for generic vertex attributes.
Calls like glVertexPointer, glEnableClientState and glDisableClientState are used for built-in vertex attributes.
You need to decide which approach you want to use, and then use a consistent set of API calls for that approach. If you use the fixed rendering pipeline, you need to use the built-in attributes. If you write your own shaders with the compatibility profile, you can use either. If you use the core profile, you need to use generic vertex attributes.
This call also looks suspicious, since it specifies a size of 3, where the rest of your code suggests that you're using positions with 4 coordinates:
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
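For illustration, here is a sketch of what a consistent, generic-attribute-only render path could look like, assuming a shader that reads the position from attribute location 0 and 4-component positions matching the buffer contents (va_index and ia_index are the same variables as in the question):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableVertexAttribArray(0);                                  // generic attribute 0 only
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);  // 4 floats per position
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, (void*)0);
glDisableVertexAttribArray(0);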

OpenGL: VAO/VBO confusion

The OpenGL Wiki: Vertex Specification states that:
Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.
Below is how I use the VAO, which seems to work as intended. What is wrong here: my understanding of OpenGL, my OpenGL driver (OS X 10.9), or the OpenGL wiki?
// ------ Pseudo-code ------
// setup
[...]
glBindVertexArray(vertexArrayObjectIdx1);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBufferId1);
for each vertex attribute
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
glBindVertexArray(vertexArrayObjectIdx2);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId2);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBufferId2);
for each vertex attribute
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// rendering
[...]
glBindVertexArray(vertexArrayObjectIdx1);
glDrawElements(...);
glBindVertexArray(vertexArrayObjectIdx2);
glDrawElements(...);
It means that when you rebind the VAO, the GL_ARRAY_BUFFER binding does not get restored.
However, glVertexAttribPointer does associate the GL_ARRAY_BUFFER that is bound at that moment with the given attribute in the VAO, so it does work the way you want.
Actually, they could have defined glVertexAttribPointer as:
void glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, GLuint bufferName, const GLvoid * pointer);
and eliminated its dependency on the bound GL_ARRAY_BUFFER. But hindsight and all that...
The important thing to remember is that the GL_ARRAY_BUFFER binding itself does not matter when drawing; what matters is how the vertex attributes were associated with their buffers.
In contrast, the GL_ELEMENT_ARRAY_BUFFER binding is stored in the VAO.
Further down in the same wiki entry you can find the explanation as well: "This is also why GL_ARRAY_BUFFER is not VAO state; the actual association between an attribute index and a buffer is made by glVertexAttribPointer. ..."
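A small sketch of what this means in practice (someOtherBuffer and indexCount are placeholders): the GL_ARRAY_BUFFER binding at draw time is irrelevant, because the attribute-to-buffer association was already captured when glVertexAttribPointer was called during setup.
glBindVertexArray(vertexArrayObjectIdx1);
glBindBuffer(GL_ARRAY_BUFFER, someOtherBuffer); // ignored by the draw call below
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0); // still reads vertexBufferId1
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);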

OpenGL structure of VAO/VBO for model with moving parts?

I came from this question:
opengl vbo advice
I use OpenGL 3.3 and will not use deprecated features. I'm using Assimp to import my Blender models, but I'm a bit confused as to how much I should split them up in terms of VAOs and VBOs.
First off, a little side question: I use glDrawElements; does that mean I cannot interleave my vertex attributes, or can the VAO use the glVertexAttribPointer setup and the glDrawElements offset to figure out where my vertex position is?
The main question, I guess, boils down to how I should structure my VAOs/VBOs for a model with multiple moving parts, and multiple meshes per part.
Each node in Assimp can contain multiple meshes, where each mesh has texture, vertices, normals, material, etc. The nodes in Assimp contain the transformations. Say I have a ship with a cannon turret on it, and I want to be able to rotate the turret. Does this mean I should make the ship node a separate VAO, with VBOs for each mesh containing its attributes (or multiple VBOs, etc.)?
I guess it goes like
draw(ship); //call to draw ship VAO
pushMatrix(turretMatrix) //updating uniform modelview matrix for the shader
draw(turret); //call to draw turret VAO
I don't fully understand UBOs (uniform buffer objects) yet, but it seems I can pass in multiple uniforms; will that help me contain a full model with movable parts in a single VAO?
First off, a VAO only "remembers" the last vertex attribute bindings (and the VBO binding for an index buffer, i.e. GL_ELEMENT_ARRAY_BUFFER_BINDING, if there is one). It does not remember offsets passed to glDrawElements(); you need to specify those later, when using the VAO. It also does not prevent you from using interleaved vertex arrays. Let me try to explain:
GLuint vbo[3];
glGenBuffers(3, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, size0, data0, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, size1, data1, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, size2, data2, GL_STATIC_DRAW);
// create some buffers and fill them with data
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// create a VAO
{
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // not saved in VAO
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 * sizeof(float), NULL); // this is VAO saved state
glEnableVertexAttribArray(0); // this is VAO saved state
// sets up one vertex attrib array from vbo[0] (say positions)
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); // not saved in VAO
glVertexAttribPointer(1, 3, GL_FLOAT, false, 5 * sizeof(float), NULL); // this is VAO saved state
glVertexAttribPointer(2, 2, GL_FLOAT, false, 5 * sizeof(float), (const void*)(3 * sizeof(float))); // this is VAO saved state
glEnableVertexAttribArray(1); // this is VAO saved state
glEnableVertexAttribArray(2); // this is VAO saved state
// sets up two more VAAs from vbo[1] (say normals interleaved with texcoords)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[2]); // this is VAO saved state
// uses the third buffer as the source for indices
}
// set up state that VAO "remembers"
glBindVertexArray(0); // bind different vaos, etc ...
Later ...
glBindVertexArray(vao); // bind our VAO (so we have VAAs 0, 1 and 2 as well as index buffer)
glDrawElements(GL_TRIANGLE_STRIP, 57, GL_UNSIGNED_INT, NULL);
glDrawElements(GL_TRIANGLE_STRIP, 23, GL_UNSIGNED_INT, (const void*)(57 * sizeof(unsigned int)));
// draws two parts of the mesh as triangle strips
So you see ... you can draw interleaved vertex arrays using glDrawElements using a single VAO and one or more VBOs.
To answer the second part of your question, you either can have different VAOs and VBOs for different parts of the mesh (so drawing separate parts is easy), or you can fuse all into one VAO VBO pair (so you need not call glBind*() often) and use multiple glDraw*() calls to draw individual parts of the mesh (as seen in the code above - imagine the first glDrawElements() draws the ship and the second draws the turret, you just update some matrix uniform between the calls).
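As a sketch of that second option (modelviewLoc, the matrices and the index counts/offsets are placeholders), you draw both parts from the same VAO and only change the modelview uniform in between:
glBindVertexArray(vao);
glUniformMatrix4fv(modelviewLoc, 1, GL_FALSE, shipMatrix);
glDrawElements(GL_TRIANGLES, shipIndexCount, GL_UNSIGNED_INT, (const void*)0); // ship
glUniformMatrix4fv(modelviewLoc, 1, GL_FALSE, shipTurretMatrix);               // ship * turret
glDrawElements(GL_TRIANGLES, turretIndexCount, GL_UNSIGNED_INT,
               (const void*)(turretIndexOffset * sizeof(unsigned int)));       // turret
glBindVertexArray(0);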
Because shaders can contain multiple modelview matrices in uniforms, you can also encode a mesh id as another vertex attribute, and let the vertex shader choose which matrix to use to transform the vertex, based on this attribute. This idea can also be extended to using multiple matrices per vertex, with some weights assigned to each matrix. This is commonly used when animating organic objects such as a player character (look up "skinning").
As far as uniform buffer objects go, the main advantage is that you can pack a lot of data into them and that they can easily be shared between shaders (just bind the UBO to any program that is able to use it). There is no real advantage in using them for you, except if you were to have objects with thousands of matrices.
Also, I wrote the source code above from memory. Let me know if there are any errors or problems...
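As for the UBO sharing mentioned above, a rough sketch could look like this (the block name "Matrices", binding point 0 and numMatrices are arbitrary placeholders):
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 16 * numMatrices, NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // bind the UBO to binding point 0
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 0); // repeat for every program that shares the block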
#theswine
Not binding this during VAO initialization causes my program to crash, but binding it after binding the VAO causes it to run correctly. Are you sure this isn't saved in the VAO?
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // not saved in VAO
(BTW: sorry for bringing up an old topic, I just thought this could be useful to others, this post sure was! (which reminds me, thank you!!))

Why are there multiple ways to pass VAOs to a GLSL program?

Sample code:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glEnableVertexAttribArray(0);
So the "0" on lines 4 (first argument) and 5 refer to an arbitrary identifier/location that we have chosen. In GLSL, if we want to refer to this data, we just have to refer to the same ID:
layout(location=0) in vec4 in_Position;
However, in a different example program, I've seen it done differently, with no reference to "layout locations". Instead, we do something like so:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glBindAttribLocation(shaderProgramHandle, 0, "in_position");
6. glEnableVertexAttribArray(0);
We've added an extra step (5) where we seem to bind this attribute pointer to a specific variable in a specific program. And then in our GLSL we just write this instead:
in vec3 in_position;
With no reference to a location.
If I'm not mistaken, these 2 programs essentially do the exact same thing...so why the difference? What are the pros and cons of each method?
(I've just started learning OpenGL 3.x)
There is no such thing as passing a VAO to a shader. A VAO simply establishes how vertex attributes are pulled from buffer objects for rendering.
The glBindAttribLocation call in the second example doesn't do anything unless shaderProgramHandle has not been linked yet. glBindAttribLocation only works if it is called before the program is linked. Once the program has been linked, you can't change where it gets its attributes from.
In any case, your real question is why some people use glBindAttribLocation(..., 0) instead of putting layout(location = X) in their shaders. The reason is very simple: the layout(location) syntax is (relatively) new. glBindAttribLocation dates back to the very first version of GLSL's OpenGL interface, the ARB_vertex_shader extension from around 2003. layout(location) comes from the much more recent ARB_explicit_attrib_location, which is only core in GL 3.3 and above, and 3.3 only came out in 2010. So naturally more material will talk about the old way.
The "pros and cons" of each are quite obvious. From a purely practical standpoint, layout(location), being new, requires more recent drivers (though it does not require GL 3.3. NVIDIA's 6xxx+ hardware supports ARB_explicit_attrib_location despite being only 2.1). glBindAttribLocation works within source code, while layout(location) is built into the GLSL shader itself. So if you need to decide which attributes use which indices at runtime, it's a lot harder to do it with layout(location) than without it. But if, like most people, you want to control them from the shader, then layout(location) is what you need.