Debugging OpenGL 3.0 without the immediate-mode API (aka glBegin() ...)

It is clear to me that the immediate-mode API (glBegin(); glVertex(); glEnd();) is not efficient for rendering complex scenes and real-world applications like games.
But for debugging and testing small things it is very useful, e.g. for debugging the physics of objects in a scene (visualization of vectors such as velocity, force, ...).
You may ask: why not fall back to OpenGL 1.x for such small things? Because the rest of the program uses OpenGL 3.0 features, and because I really like fragment shaders.
Is there any way to use it with OpenGL 3.0 and higher?
Or what is the strategy for debugging if I don't want to write the whole ceremony like
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(2, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), diamond, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), colors, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_LINES, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteProgram(shaderprogram);
glDeleteBuffers(2, vbo);
glDeleteVertexArrays(1, &vao);
for every single arrow I want to plot?

You can still use all the legacy features with OpenGL 3.2 or later if you create a context with the Compatibility Profile. Only the Core Profile has the legacy features removed.
Most platforms still seem to support the Compatibility Profile. One notable exception is Mac OS, where 3.x contexts are only supported with the Core Profile.
Up to OpenGL 3.1, there was only a single profile. The split into Compatibility and Core happened in 3.2.
Using the newer interfaces isn't really that bad once you get the hang of it. If your debugging use is relatively repetitive, you could always write some nice utility classes that provide a more convenient interface.
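For example, a minimal debug-draw helper might batch line segments and submit them with one draw call per frame. This is only a sketch (the class name, the attribute layout, and the GLEW header are assumptions, not from the original answer); it presumes a simple shader program with the position attribute at location 0 is already in use:
#include <GL/glew.h>
#include <vector>

class DebugLines {
public:
    DebugLines() {
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); // location 0 = position
        glEnableVertexAttribArray(0);
        glBindVertexArray(0);
    }
    ~DebugLines() {
        glDeleteBuffers(1, &vbo);
        glDeleteVertexArrays(1, &vao);
    }
    // Queue one segment, e.g. a velocity or force vector starting at a body's position.
    void addLine(float x0, float y0, float z0, float x1, float y1, float z1) {
        float p[] = { x0, y0, z0, x1, y1, z1 };
        verts.insert(verts.end(), p, p + 6);
    }
    // Upload everything queued this frame and draw it with a single call.
    void flush() {
        if (verts.empty()) return;
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                     verts.data(), GL_STREAM_DRAW);
        glDrawArrays(GL_LINES, 0, (GLsizei)(verts.size() / 3));
        glBindVertexArray(0);
        verts.clear();
    }
private:
    GLuint vao = 0, vbo = 0;
    std::vector<float> verts;
};
With something like this, plotting an arrow reduces to one addLine call plus a flush at the end of the frame, instead of the full buffer ceremony each time.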

Why does OpenGL buffer unbinding order matter?

I'm in the process of learning OpenGL (3.3) with C++. I can draw simple polygons using Vertex Buffer Objects, Vertex Array Objects, and Index Buffers, but the way I code them still feels a little like magic, so I'd like to understand what's going on behind the scenes.
I have a basic understanding of the binding system OpenGL uses. I also understand that the VAO itself contains the bound index buffer (as specified here), and if I'm not mistaken the bound vertex buffer too.
By this logic, I first bind the VAO, then create my VBO (which behind the scenes is bound "inside" the VAO?); I can then perform operations on the VBO and it all works. But when I come to unbind the buffers after I'm done setting things up, it seems that I must unbind the VAO first, then the vertex and index buffers, so the unbinding occurs in the same order as the binding, not in reverse order.
That is extremely counterintuitive. So my question is: why does the order of unbinding matter?
(To clarify, I am rebinding the VAO anyway before calling glDrawElements, so I don't think I even need to unbind anything at all; it's mostly for learning purposes.)
Here's the relevant code:
GLuint VAO, VBO, IBO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0); //unbinding VAO here is fine
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//if I unbind VAO here instead, it won't render the shape anymore
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, 9, GL_UNSIGNED_INT, 0);
    glfwSwapBuffers(window);
}
I believe I might have figured it out. Looking at this unbinding order
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //unbinding IBO
glBindBuffer(GL_ARRAY_BUFFER, 0); //unbinding VBO
glBindVertexArray(0); //unbinding VAO
clearly "ruins" my polygons, because the VAO, which is supposed to "contain" references to the buffers it uses, records the GL_ELEMENT_ARRAY_BUFFER binding as part of its state. Unbinding the index buffer while the VAO is still bound overwrites that recorded binding with 0, so the VAO no longer knows about the indices. (The GL_ARRAY_BUFFER binding is different: it is captured per attribute at the moment glVertexAttribPointer is called, so unbinding the VBO afterwards is harmless.)
Unbinding the VAO first allows me to then unbind any buffers without affecting it. This is the correct order:
glBindVertexArray(0); //unbinding VAO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); //unbinding IBO
glBindBuffer(GL_ARRAY_BUFFER, 0); //unbinding VBO
OpenGL's binding system is a little hard to grasp, especially when different things depend on one another, but I think this explanation is correct. Hopefully it can help some other learners in the future.
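Summarized as annotated code (a sketch added for illustration, reusing the names from the question):
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);           // binding captured by glVertexAttribPointer below
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);   // binding stored directly in the VAO's state
glBindVertexArray(0);                         // unbind the VAO before touching the index binding
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);     // safe now: clears the default binding, not the VAO's
glBindBuffer(GL_ARRAY_BUFFER, 0);             // safe at any point after glVertexAttribPointer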

Is a compiled shader compulsory in OpenGL 4?

I have OpenGL 4.1 code using a VAO and VBOs. Generation of the buffer and array objects happens properly; however, as soon as I want to draw my vertices, I get a GL_INVALID_OPERATION (code 1282) error. One of the possible explanations is that "the shader is not compiled". Hence my question: is it a must to write and compile my own vertex and fragment shaders, or can I do without?
As reference, my code:
Preparing:
glGenVertexArrays(1, vaoID); // Create new VAO
glBindVertexArray(vaoID);
glGenBuffers(1, vboID); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vboID); // Bind vbo1 as current vertex buffer
glBufferData(GL_ARRAY_BUFFER, sizeOfVertices, vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, floatsPerPosition, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, eboID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, eboID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeOfIndices, indices, GL_STATIC_DRAW);
// reset bindings for VAO, VBO and EBO
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Rendering:
glBindVertexArray(vaoID); // No Error
glDrawArrays(GL_TRIANGLES, 0, 6); // Generates Error 1282
glBindVertexArray(0);
In a core OpenGL profile, since 3.2, the use of a program object is not optional.
However, the only shader stage that is mandatory is the vertex shader. The use of a fragment shader is optional, but of course it would then only be useful for depth or stencil operations (since it would not generate fragment colors).
In a compatibility OpenGL profile, you can render using the fixed-function pipeline, but you cannot use user-defined vertex attributes. You have to use the fixed-function attributes, set up with glVertexPointer, glColorPointer, etc. Note that NVIDIA plays fast-and-loose with the spec in this regard, mapping user-defined attributes to fixed-function ones.
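As an illustration of that fixed-function setup (a sketch reusing the question's names; whether floatsPerPosition is 2, 3, or 4 is an assumption about the vertex data):
glBindBuffer(GL_ARRAY_BUFFER, vboID);           // same VBO as in the question
glEnableClientState(GL_VERTEX_ARRAY);           // enable the fixed-function position array
glVertexPointer(floatsPerPosition, GL_FLOAT, 0, 0); // fixed-function counterpart of glVertexAttribPointer
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);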
According to the OpenGL specification, a shader must always be used when rendering geometry. When you have no shader enabled, the result of the draw call is undefined.
However, from what I can tell, this is a bit murkier in practice. At least most NVIDIA and Intel drivers seem to come with a kind of "default shader" enabled by default, which masks the undefined behaviour.
These shaders have pre-specified vertex attribute input indices (for instance, NVIDIA uses 0 for vertices, 4 for vertex colours, etc.) and implement what you could consider a default pipeline.
Note that while these default shaders exist, their implementation can vary significantly between vendors, or they might not exist at all for specific vendors or driver versions.
You should therefore always use a shader when drawing geometry in OpenGL 4.
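For completeness, a minimal program object that satisfies this requirement could look like the following (a sketch; the shader sources are illustrative and error checking is omitted):
const char* vs_src =
    "#version 410 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";
const char* fs_src =
    "#version 410 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0); }\n";

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vs_src, NULL);
glCompileShader(vs);                  // check GL_COMPILE_STATUS in real code
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fs_src, NULL);
glCompileShader(fs);
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);               // check GL_LINK_STATUS in real code
glUseProgram(program);                // must be current before glDrawArrays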

Why is glVertexAttribDivisor crashing?

I am trying to render some trees with instancing. This is rather weird: before going to sleep last night I checked the code and it was running fine, but when I got up this morning it crashes when I call glVertexAttribDivisor. I haven't changed any code since yesterday.
Here is how I am sending data to GPU for instancing.
glGenBuffers(1, &iVBO);
glBindBuffer(GL_ARRAY_BUFFER, iVBO);
glBufferData(GL_ARRAY_BUFFER, (ml_instance->i_positions.size()*sizeof(glm::vec4)) , NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, (ml_instance->i_positions.size()*sizeof(glm::vec4)), &ml_instance->i_positions[0]);
And then in vertex specification--
glBindBuffer(GL_ARRAY_BUFFER, iVBO);
glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(i_positions);
glVertexAttribDivisor(i_positions,1); // **THIS IS WHERE THE PROGRAM CRASHES**
glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0,TREES_INSTANCE_COUNT);
I have checked ml_instance->i_positions; it has all the data that needs to be rendered.
I have checked the value of i_positions in the vertex shader; it is the same as what I have defined there.
I am a little out of ideas here; everything looks pretty much fine. What am I missing?

glDrawRangeElements of GL_TRIANGLE_STRIP not working properly (using glMapBuffer)

I am trying to render points from a VBO and an Element Buffer Object with glDrawRangeElements.
The VBO and EBO are created like this:
glGenBuffers(1, &vertex_buffer);
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertex_buffer_size, NULL, GL_STREAM_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_buffer_size, NULL, GL_STREAM_DRAW);
As you can see, they do not have any "static" data.
I use glMapBuffer to populate the buffers and then I render them with glDrawRangeElements.
Problem:
Concretely, what I want to do is make a terrain with continuous LOD.
The code I use and posted mostly comes from Ranger Mk2 by Andras Balogh.
My problem is this: when I render the triangle strip, some triangles have one of their three points somewhere it should not be.
For example,
this is what I get in wireframe mode -> http://i.stack.imgur.com/lCPqR.jpg
and this is what I get in point mode (Note the column that stretches up which is the points that are not well placed) -> http://i.stack.imgur.com/CF04p.jpg
Before you ask me to go to the post named "Rendering with glDrawRangeElements() not working properly", I wanted to let you know that I already went there.
Code:
So here is the render process:
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
and just before I do this (pre_render function):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
vertex_array = (v4f*)(glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY));
index_array = (u32*)(glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY));
//[...] Populate the buffers
glUnmapBuffer(GL_ARRAY_BUFFER);
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
PS: When I render the terrain like this:
glBegin(GL_TRIANGLE_STRIP);
printf("%u %u\n", va_index, ia_index);
for (u32 i = 0; i < va_index; ++i) {
    //if(i <= va_index)
    glVertex4fv(&vertex_array[i].x);
}
glEnd();
strangely, it works (some of the triangles are not rendered, but that is another problem).
So my question is: how can I make glDrawRangeElements work properly?
If you need any more information please ask, I will be glad to answer.
Edit: I use Qt Creator as my IDE, with MinGW 4.8 on Windows 7. My graphics card (NVIDIA) supports OpenGL 4.4.
Not sure if this is causing your problem, but I notice that you have a mixture of API calls for built-in vertex attributes and generic vertex attributes.
Calls like glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray are used for generic vertex attributes.
Calls like glVertexPointer, glEnableClientState and glDisableClientState are used for built-in vertex attributes.
You need to decide which approach you want to use, and then use a consistent set of API calls for that approach. If you use the fixed-function pipeline, you need to use the built-in attributes. If you write your own shaders with the compatibility profile, you can use either. If you use the core profile, you need to use generic vertex attributes.
This call also looks suspicious, since it specifies a size of 3, where the rest of your code suggests that you're using positions with 4 coordinates:
glVertexAttribPointer(0,3,GL_FLOAT, GL_FALSE, 0, (char*)(NULL));
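For instance, the render path could be made consistent by using only generic-attribute calls and matching the 4-component positions (a sketch under those assumptions; it presumes a shader that reads attribute 0 as a vec4):
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (char*)(NULL)); // 4 components, matching v4f
glEnableVertexAttribArray(0);
glDrawRangeElements(GL_TRIANGLE_STRIP, 0, va_index, ia_index, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableVertexAttribArray(0);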

Why are there multiple ways to pass VAOs to a GLSL program?

Sample code:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glEnableVertexAttribArray(0);
So the "0" on lines 4 (first argument) and 5 refer to an arbitrary identifier/location that we have chosen. In GLSL, if we want to refer to this data, we just have to refer to the same ID:
layout(location=0) in vec4 in_Position;
However, in a different example program, I've seen it done differently, with no reference to "layout locations". Instead, we do something like this:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
5. glBindAttribLocation(shaderProgramHandle, 0, "in_position");
6. glEnableVertexAttribArray(0);
We've added an extra step (5) where we seem to bind this attribute pointer to a specific variable in a specific program. And then in our GLSL we just write this instead:
in vec3 in_position;
With no reference to a location.
If I'm not mistaken, these two programs essentially do exactly the same thing... so why the difference? What are the pros and cons of each method?
(I've just started learning OpenGL 3.x)
There is no such thing as passing a VAO to a shader. A VAO simply establishes how vertex attributes are pulled from buffer objects for rendering.
The second example doesn't do anything, unless shaderProgramHandle has not been linked yet. glBindAttribLocation only works before program linking. Once the program has been linked, you can't change where it gets its attributes from.
In any case, your real question is why some people use glBindAttribLocation(..., 0) instead of putting layout(location = X) in their shaders. The reason is very simple: the layout(location) syntax is (relatively) new. glBindAttribLocation dates back to the very first version of GLSL's OpenGL interface, in the ARB_vertex_shader extension from around 2003. layout(location) comes from the much more recent ARB_explicit_attrib_location, which is only core in GL 3.3 and above, and 3.3 only came out in 2010. So naturally more material will talk about the old way.
The "pros and cons" of each are quite obvious. From a purely practical standpoint, layout(location), being newer, requires more recent drivers (though it does not require GL 3.3; NVIDIA's 6xxx+ hardware supports ARB_explicit_attrib_location despite being GL 2.1-only). glBindAttribLocation assigns locations from application code, while layout(location) is built into the GLSL shader itself. So if you need to decide at runtime which attributes use which indices, it's a lot harder to do with layout(location) than without it. But if, like most people, you want to control them from the shader, then layout(location) is what you need.