I am working on code written by someone else, and at the moment I have a fairly limited understanding of the codebase. That is why I wasn't sure how to formulate my question properly, or whether it is an OpenGL question or a debugging-strategy question. Furthermore, I obviously can't share the whole codebase, and the reason things aren't working is most likely rooted in there. Regardless, perhaps someone has an idea of what might be going on, or where I should look.
I have a vertex structure defined in the following way:
struct Vertex {
    Vertex(glm::vec3 position, glm::vec3 normal)
        : _position(position), _normal(normal) {}

    glm::vec3 _position;
    glm::vec3 _normal;
};
I have a std::vector of vertices which I fill with vertex data extracted from a certain structure. For the sake of simplicity, let's assume it's another vector:
// std::vector<Vertex> data - contains vertex data
std::vector<Vertex> Vertices;
Vertices.reserve(data.size());
for (size_t i = 0; i < data.size(); i++) {
    Vertices.emplace_back(data[i]._position, data[i]._normal);
}
Then I generate a vertex buffer object, buffer my data and enable vertex attributes:
GLuint VB;
glGenBuffers(1, &VB);
glBindBuffer(GL_ARRAY_BUFFER, VB);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * Vertices.size(), &Vertices[0],
             GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const void*)(sizeof(GLfloat) * 3));
Finally, I bind a shader and set up uniforms, then I call glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, Vertices.size());
// clean-up
// swap buffers (comes at some point in the code, I haven't figured out where yet)
At this point nothing gets rendered. However, I initially made a mistake and swapped the parameters of the draw call, passing the number of elements where the offset goes and vice versa:
glDrawArrays(GL_TRIANGLES, Vertices.size(), 0);
And surprisingly, that actually rendered what I wanted to render in the first place. However, the documentation clearly says that the offset comes first and the number of elements after it, which means that glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) should have drawn nothing, since I specified zero elements to be drawn.
Now, there are multiple windows in the application and shared vertex buffer objects. At some point I thought that the vertex buffer object I generated somehow gets passed around in a part of the code I haven't explored yet, which then uses it to draw geometry I didn't expect to be drawn. However, that still doesn't explain why glDrawArrays(GL_TRIANGLES, Vertices.size(), 0), with zero as the number of elements to be drawn, shows the geometry, whereas swapping the parameters to match the documentation shows nothing.
Given the scarce information I've shared, does anyone by any chance have an idea of what might be going on? If not, how would you tackle this, and how would you go about debugging (or understanding) it?
EDIT: Vertex and Fragment shader
Mind that this is a dummy shader that simply paints the whole geometry red. Regardless, the shader is not the cause of my problems, given that the geometry gets drawn or not depending on how I order the draw-call parameters (see above).
EDIT EDIT: Note that as long as I don't enable blending, the alpha component (which is zero in the shader) has no effect on the produced image.
vertex shader:
#version 440
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
uniform mat4 MVP; // model-view-projection matrix
void main() {
gl_Position = MVP*vec4(position, 1.0);
}
fragment shader:
#version 440
out vec4 outColor;
void main()
{
outColor = vec4(1, 0, 0, 0);
}
Regarding the glDrawArrays parameter inversion, have you tried stepping into that function call? Perhaps you are using an OpenGL wrapper of some sort which modifies the order of the arguments. I can confirm, however, that the documentation you quote is not wrong about the parameter order!
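If stepping into the call is awkward, another quick check is to drain glGetError() around the draw. This is only a minimal sketch, assuming you can edit the draw code directly; the helper name is made up:
#include <cstdio>

static void checkGlError(const char* label) {
    // Drain every pending error flag so later checks start from a clean state.
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        std::fprintf(stderr, "GL error 0x%04X after %s\n", err, label);
    }
}

// ...
checkGlError("before draw");
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)Vertices.size());
checkGlError("glDrawArrays");
If the "wrong" call is silently accepted and the "right" one raises an error (or vice versa), that would point at a wrapper or at stale state coming from another part of the codebase.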
I'm pretty new to 3D programming. I'm trying to learn OpenGL from this site. During the reading I couldn't really understand how the layout (location = 0) line operates. I've tried to search for other explanations online, both in the OpenGL wiki and on other sites, and I've managed to find this site, from which I understood a little more.
So, if I am correct, the vertex shader takes some inputs and generates some outputs. The inputs of the shader are called vertex attributes, and each of them has an index called the attribute index. Now I expect that if the shader takes as input a single vertex and its attributes, it has to run multiple times, once for each vertex of the object I'm trying to render.
Is it correct what I wrote up until this point?
Now, what I didn't manage to understand is how layout (location = 0) really works. My assumption is that this instruction tells the shader at which location in memory to pick up the first attribute. Thus, each time the shader re-runs (if it actually re-runs), the location should move by one unit, like in a normal for loop. Is this interpretation correct? And please, can anyone explain to me, in an organic way, how the vertex shader operates?
P.S. Thank you in advance for your time and excuse my poor English: I'm still practising it!
Edit
This is the code. Following the first guide I linked I created an array of vertices:
float vertices[] {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
then I created a vertex buffer object:
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
I added the data to the VBO:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
while the vertex shader reads:
#version 330 core
layout (location = 0) in vec3 aPos;
void main() {
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
You need to look at both sides of this. You bind a buffer containing all of your data. Say position and color.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
Now in the shader program I can use these vectors without specifying the index of the vertex being processed, because we have already told GL how the data is laid out in the buffers.
We do that when we bind buffers to the program.
Let's say we want to create a triangle. It has 3 vertices, and each vertex has two attributes: color and position. We create a vertex shader that processes each vertex; in that program it is implied that each vertex has a color and a position. You don't care about which index in the array it is (for now).
The program will take vertex i, v_i, and process it. How it populates position and color depends on how you bind the data. I could have two arrays,
positionData = [x0, y0, z0, x1, ... z3];
colorData = [r0, g0, b0, r1, ... b3];
So I would buffer this data, then bind that buffer to the program at the attribute location and specify how it is read, e.g. bind the positionBuffer to attribute location 0 and read it in strides of three with no offset.
The same with the color data, but with location 1.
Alternatively, I could do:
posColData = [ x0, y0, z0, r0, g0, b0, x1, y1, ... b3];
Then I would create posColBuffer and bind it to the 0th attribute with a stride of 6. I would also bind the posColBuffer to the 1st attribute with a stride of 6 and an offset of 3 (both measured in floats); see the sketch below.
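A minimal sketch of that interleaved setup, assuming float data and hypothetical names (posColData, posColBuffer); note that glVertexAttribPointer takes stride and offset in bytes, so the 6 and 3 above become 6 * sizeof(float) and 3 * sizeof(float):
GLuint posColBuffer;
glGenBuffers(1, &posColBuffer);
glBindBuffer(GL_ARRAY_BUFFER, posColBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(posColData), posColData, GL_STATIC_DRAW);

// location 0: position, 3 floats per vertex, starting at byte offset 0
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);

// location 1: color, 3 floats per vertex, starting 3 floats into each vertex
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      (void*)(3 * sizeof(float)));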
The code you are using does this here.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
They make use of the layout qualifier by simply passing 0, since they already know the location.
I am trying to figure out how SSBO works with a very basic example. The vertex shader:
#version 430
layout(location = 0) in vec2 Vertex;
void main() {
gl_Position = vec4(Vertex, 0.0, 1.0);
}
And the fragment shader:
#version 430
layout(std430, binding = 2) buffer ColorSSBO {
vec3 color;
};
void main() {
gl_FragColor = vec4(color, 1.0);
}
I know they work because if I replace vec4(color, 1.0) with vec4(1.0, 1.0, 1.0, 1.0) I see a white triangle in the center of the screen.
I initialize and bind the SSBO with the following code:
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
float color[] = {1.f, 1.f, 1.f};
glBufferData(GL_SHADER_STORAGE_BUFFER, 3*sizeof(float), color, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
What is wrong here?
My guess is that you are missing the SSBO binding before rendering. In your example you copy the contents and then bind the buffer base immediately during setup, where it is not needed. In other words, the following line in your example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
...
must be placed before your render calls, for example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
/*
Your render calls and other bindings here.
*/
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
...
Without this, your shader (theoretically) will not be able to see the content.
In addition, as Andon M. Coleman has suggested, you have to pad your elements when declaring arrays (e.g., use vec4 instead of vec3). If you don't, it may appear to work but produce strange results because of the layout mismatch.
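Putting both points together, here is a minimal sketch, assuming the GLSL block is changed to use vec4 (or otherwise padded to 16 bytes) and that the binding is still in effect when the draw calls run:
float color[] = {1.f, 1.f, 1.f, 1.f};   // rgb plus one float of padding/alpha
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(color), color, GL_DYNAMIC_COPY);

glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);  // binding point 2, as in the shader
// ... render calls that use the SSBO ...
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);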
The following two links have helped me out understanding the meaning of an SSBO and how to handle them in your code:
https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object
http://www.geeks3d.com/20140704/tutorial-introduction-to-opengl-4-3-shader-storage-buffers-objects-ssbo-demo/
I hope this helps anyone facing similar issues!
P.S.: I know this post is old, but I wanted to contribute.
When drawing a triangle, three points are necessary, and three separate sets of red/green/blue values are required, one for each point. You are only putting one set into the shader storage buffer. For the other two points, the value of color falls back to the default, which is black (0.0, 0.0, 0.0). If you don't have blending enabled, it is likely that the triangle is being painted completely black because two of its vertices are black.
Try putting two more sets of red/green/blue values into the storage buffer to see whether it will load them as color values for the other two points.
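A rough sketch of that suggestion, assuming the GLSL block is changed to hold an array (padded to vec4, as the other answer notes) so that each vertex can pick up its own entry:
float colors[] = {
    1.f, 0.f, 0.f, 1.f,   // color for vertex 0
    0.f, 1.f, 0.f, 1.f,   // color for vertex 1
    0.f, 0.f, 1.f, 1.f,   // color for vertex 2
};
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(colors), colors, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);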
I am trying to render an object using GLM for matrix transformations, but I'm getting this:
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it is declared and is being used in the vertex shader. I've also checked that program is a valid program handle, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with members GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture-coordinate buffer, and index buffer. To render the object it binds the VAO and calls glDrawElements. What I'm confused about is how and where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLint mvpLoc = glGetUniformLocation(program, "mvp"); // GLint, since -1 signals "not found"
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform name in the shader program program and returns -1 if the uniform is not declared or not used (if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with how the program is compiled and linked. Are you sure you are using this shader in that program?
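One way to confirm that is to query the link status and the active uniforms on the exact program handle you use at render time; a minimal sketch, assuming program is that handle:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[1024];
    GLsizei len = 0;
    glGetProgramInfoLog(program, sizeof(log), &len, log);
    printf("link failed: %.*s\n", (int)len, log);
}

// List every uniform the linker kept; if "mvp" is missing here, the program
// you are rendering with is not the one containing this vertex shader.
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i) {
    GLchar name[128];
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
    printf("active uniform %d: %s\n", i, name);
}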
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
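For reference, here is a minimal sketch of how the two fit together, with hypothetical names (vertexData, indices, elementCount); the attribute pointers and the element-buffer binding recorded while the VAO is bound are all that glDrawElements needs later:
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// The VBO holds the raw values; the pointer call below is recorded in the VAO.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

// The element (index) buffer binding is also part of the VAO state.
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);

// At draw time, binding the VAO restores all of the above:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);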
This is not related to your missing uniform.
I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my GNU/Linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use to represent vertices:
struct vertex {
GLfloat x;
GLfloat y;
GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't taking effect, and the locations are being assigned by the GL. Without the assignment to the varying, the index attribute is not used and only position is, so position very likely gets location 0. When index is actually used, it might get location 0 instead (on NVIDIA, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only have an effect when linking the program, so they have to be called before glLinkProgram(), and you have to re-link the program when you want to change them (which you should really avoid). The code you have given suggests that they are called during your regular draw calls, so they never have any effect on the linked program.
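A minimal sketch of that ordering, reusing the handles from the question (setup path only, not the per-frame path):
// At program-creation time, before linking:
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program);               // the bindings take effect here

// Alternatively, skip the binding and ask the GL what it chose after linking:
GLint pos_loc = glGetAttribLocation(shader_program, "position");
GLint idx_loc = glGetAttribLocation(shader_program, "index");

// The per-VBO setup then uses those locations:
glEnableVertexAttribArray(pos_loc);
glVertexAttribPointer(pos_loc, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex),
                      (void *)0);
glEnableVertexAttribArray(idx_loc);
glVertexAttribPointer(idx_loc, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex),
                      (void *)(2 * sizeof(GLfloat)));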
My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
glEnableVertexAttribArray(resources.attributes.vTexCoord);
glEnableVertexAttribArray(resources.attributes.vVertex);
//deal with vTexCoord first
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,resources.hiBuffer);
glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
glVertexAttribPointer(resources.attributes.vTexCoord,2,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*2,(void*)0);
//now the other one
glBindBuffer(GL_ARRAY_BUFFER,resources.hvBuffer);
glVertexAttribPointer(resources.attributes.vVertex,3,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*3,(void*)0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
glUniform1i(resources.uniforms.colorMap, 0);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
//clean up a bit
};
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
vVarryingTexCoord = vTexCoord;
gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
};
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
vVaryingFragColor = vec4(1.0,1.0,1.0,1.0);
};
The vertex array buffer for the position coordinates makes a simple cube (with all coordinates at ±0.25), while the model-view projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything onscreen. Originally I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture-coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL Superbible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing I see right now is that you've got DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer is initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is never cleared.
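A minimal sketch of that fix; only the clear call changes:
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear depth along with color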
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get a clean error state, but that would be my next step.