Passing VBO to shaders with different layouts - opengl

If I have a vertex shader that expects this...
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec4 aBoneWeights;
layout(location = 2) in vec4 aBoneIndices;
How do I pass a VBO that is already organised for each vertex as
Position(vec3) | Color(vec3) | UV(vec2) | BoneWeight(vec4) | BoneIndex(vec4)
Do I have to make a new VBO? If my vertex data is interleaved, do I have to create a new buffer of vertex data too?

Option 1: Create a different VAO for each shader
The VAO defines a mapping for your shader attributes (e.g. read the vec3's from this memory location in the VBO, with a stride of N bytes, and map them to the attribute bound to location X).
Some global to store the VAO name
GLuint g_vao;
Then to create it (for the data layout you have defined in your shader):
// create the VAO
glCreateVertexArrays(1, &g_vao);
// set up: layout(location = 0) in vec3 aPos;
glEnableVertexArrayAttrib(g_vao, 0); //< turn on attribute bound to location 0
// tell OpenGL that attribute 0 should be read from buffer 0
glVertexArrayAttribBinding(
g_vao, //< the VAO
0, //< the attribute index (location = 0)
0); //< the vertex buffer slot (start from zero usually)
// tell openGL where within the buffer the data exists
glVertexArrayAttribFormat(
g_vao, //< the VAO
0, //< the attribute index
3, //< there are 3 values xyz
GL_FLOAT, //< all of type float
GL_FALSE, //< do not normalise the vectors
0); //< the relative offset (in bytes) of this attribute within each vertex
// set up: layout(location = 1) in vec4 aBoneWeights
glEnableVertexArrayAttrib(g_vao, 1); //< turn on attribute bound to location 1
// tell OpenGL that attribute 1 should be read from buffer 0
glVertexArrayAttribBinding(
g_vao, //< the VAO
1, //< the attribute index (location = 1)
0); //< the vertex buffer slot (start from zero usually)
// tell openGL where within the buffer the data exists
glVertexArrayAttribFormat(
g_vao, //< the VAO
1, //< the attribute index
4, //< there are 4 values
GL_FLOAT, //< all of type float
GL_FALSE, //< do not normalise the vectors
sizeof(float) * 8); //< the relative offset (in bytes) of this attribute within each vertex
// set up: layout(location = 2) in vec4 aBoneIndices;
glEnableVertexArrayAttrib(g_vao, 2); //< turn on attribute bound to location 2
// tell OpenGL that attribute 2 should be read from buffer 0
glVertexArrayAttribBinding(
g_vao, //< the VAO
2, //< the attribute index (location = 2)
0); //< the vertex buffer slot (start from zero usually)
// tell openGL where within the buffer the data exists
glVertexArrayAttribFormat(
g_vao, //< the VAO
2, //< the attribute index
4, //< there are 4 indices
GL_FLOAT, //< all of type float
GL_FALSE, //< do not normalise the vectors
sizeof(float) * 12); //< the relative offset (in bytes) of this attribute within each vertex
However, I think your shader definition is wrong for attribute 2 (because you will have to pass the bone indices as floating point data, which feels very wrong to me!).
I'd have thought you'd have wanted integers instead of floats:
layout(location = 2) in ivec4 aBoneIndices;
However when binding to integers, you need to use glVertexArrayAttribIFormat instead of glVertexArrayAttribFormat:
glVertexArrayAttribIFormat(
g_vao, //< the VAO
2, //< the attribute index
4, //< there are 4 indices
GL_UNSIGNED_INT, //< all of type uint32
sizeof(float) * 12);
After all of that, you'd need to bind the vertex buffer to the vertex slot zero you've been using above...
glVertexArrayVertexBuffer(
g_vao, //< the VAO
0, //< the vertex buffer slot
vbo, //< the VBO containing the interleaved vertex data
0, //< offset (in bytes) into the buffer
sizeof(float) * 16); //< stride: num bytes between each vertex
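Once the VAO is set up, drawing with the skinning shader is just a case of binding it. A minimal sketch (skinning_program and vertex_count are assumed names, not from the question):
glUseProgram(skinning_program); //< the program with the aPos/aBoneWeights/aBoneIndices inputs
glBindVertexArray(g_vao);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);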
Option 2: Just use the same VAO and the same VBO
Just give each attribute location a fixed meaning, and then you can always use the same VAO.
layout(location = 0) in vec3 aPos;
//layout(location = 1) in vec3 aCol; //< not used in this shader
//layout(location = 2) in vec2 aUv; //< not used in this shader
layout(location = 3) in vec4 aBoneWeights;
layout(location = 4) in vec4 aBoneIndices;
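As a rough sketch, that single shared VAO could be built with the same DSA calls as Option 1, with sizes matching the interleaved layout above (g_vbo is an assumed buffer name, and the bone indices are kept as floats here - see the ivec4 note above):
struct Attrib { GLuint location; GLint size; GLuint offsetInFloats; };
const Attrib attribs[] = {
    { 0, 3, 0 },  //< position
    { 1, 3, 3 },  //< color
    { 2, 2, 6 },  //< uv
    { 3, 4, 8 },  //< bone weights
    { 4, 4, 12 }, //< bone indices
};
GLuint vao;
glCreateVertexArrays(1, &vao);
for (const Attrib& a : attribs)
{
    glEnableVertexArrayAttrib(vao, a.location);
    glVertexArrayAttribBinding(vao, a.location, 0);
    glVertexArrayAttribFormat(vao, a.location, a.size, GL_FLOAT, GL_FALSE,
                              sizeof(float) * a.offsetInFloats);
}
glVertexArrayVertexBuffer(vao, 0, g_vbo, 0, sizeof(float) * 16);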
Edit:
In answer to your question, it very much depends on the version of OpenGL you are using. The answer I posted here uses the Direct State Access (DSA) API found in OpenGL 4.5. If you can make use of it, I strongly suggest you do.
OpenGL 3.0: glVertexAttribPointer
Yes, this will work. However, it's a mechanism that is strongly tied to OpenGL's bind paradigm. Each attribute is effectively bound to the buffer that was bound when you make the call to glVertexAttribPointer (i.e. you'll be doing: glBindBuffer(); glEnableVertexAttribArray(); glVertexAttribPointer();).
The problem with this is that it kinda locks you into creating a VAO for each VBO (or set of VBOs, if pulling data from more than one), because the attributes are bound to the exact buffer that was bound when you specified that attribute.
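For reference, a minimal sketch of the Option 1 layout written against this older API (assuming a VAO and VBO named vao and vbo created elsewhere; the attributes capture whichever buffer is bound at the time of the call):
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo); //< this binding is what the attribute calls below capture
const GLsizei stride = sizeof(float) * 16;
glEnableVertexAttribArray(0); //< aPos
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1); //< aBoneWeights
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, (void*)(sizeof(float) * 8));
glEnableVertexAttribArray(2); //< aBoneIndices (as floats; use glVertexAttribIPointer for an ivec4 input)
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride, (void*)(sizeof(float) * 12));
glBindVertexArray(0);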
OpenGL 4.3: glVertexAttribFormat
This version is much like the one I've presented above. Rather than passing the VAO into the function call, you do a call to glBindVertexArray first (if you search for the docs on the above methods, the ones without the VAO argument simply use the currently bound VAO).
The advantage of this approach over the old API is that it is trivial to bind the VAO to another VBO (or VBOs) [i.e. you can associate a VAO with a GLSL program, rather than having a VAO for each VBO/program pair] - just a call to glBindVertexBuffer for each VBO, and it'll work nicely.
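A minimal sketch of that 4.3-style setup (again assuming vao and vbo exist; the format/binding calls affect the currently bound VAO):
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                  //< aPos
glVertexAttribBinding(0, 0);
glEnableVertexAttribArray(1);
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 8);  //< aBoneWeights
glVertexAttribBinding(1, 0);
glEnableVertexAttribArray(2);
glVertexAttribFormat(2, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 12); //< aBoneIndices (as floats)
glVertexAttribBinding(2, 0);
// switching the VAO to a different VBO later is then a single call:
glBindVertexBuffer(0, vbo, 0, sizeof(float) * 16);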
OpenGL 4.5: glVertexArrayAttribFormat
In the long run, this is by far the easiest version of the API to use. The main advantage is that you don't need to worry about which VAO is currently bound, because you pass it in as an argument. That also opens the door to modifying OpenGL objects from a different thread (something the older API versions would not allow).

Related

OpenGL pass integer array with GL_ARRAY_BUFFER

I am trying to pass some integer values to the Vertex Shader along with the vertex data.
I generate a buffer while the vertex array is bound and then try to attach it to a location, but it seems like the value in the vertex shader is always 0.
Here is part of the code that generates the buffer and its usage in the shader.
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glGenBuffers(1, &materialBufferIndex);
glBindBuffer(GL_ARRAY_BUFFER, materialBufferIndex);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3), &materialStuff, GL_STATIC_DRAW);
glEnableVertexAttribArray(9);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
And here is part of the shader that suppose to receive the integer values
// Some other locations
layout (location = 0) in vec3 vertex_position;
layout (location = 1) in vec2 vertex_texcoord;
layout (location = 2) in vec3 vertex_normal;
layout (location = 3) in vec3 vertex_tangent;
layout (location = 4) in vec3 vertex_bitangent;
layout (location = 5) in mat4 vertex_modelMatrix;
// layout (location = 6) in_use...
// layout (location = 7) in_use...
// layout (location = 8) in_use...
// The location I am attaching my integer buffer to
layout (location = 9) in ivec3 vertex_material;
// I also tried with these variations
//layout (location = 9) in int vertex_material[3];
//layout (location = 9) in int[3] vertex_material;
// and then in vertex shader I try to retrieve the int value by doing something like this
diffuseTextureInd = vertex_material[0];
That diffuseTextureInd should go to fragment shader through
out flat int diffuseTextureInd;
And I am planning to use this to index into an array of bindless textures that I already have set up and working. The issue is that it seems like vertex_material just contains 0s since my fragment shader always displays the 0th texture in the array.
Note: I know that my fragment shader is fine since if I do
diffuseTextureInd = 31;
in the vertex shader, the fragment shader correctly receives the correct index and displays the correct texture. But when I try to use the value from the layout location 9, it seems like I always get a 0. Any idea what I am doing wrong here?
The following definitions:
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
...
layout (location = 9) in ivec3 vertex_material;
practically mean that:
glm::vec3 declares a vector of 3 floats rather than integers; glm::ivec3 should be used for a vector of integers.
An ivec3 vertex attribute means a vector of 3 integer values is expected for each vertex. At the same time, materialStuff defines values for only a single vertex (which makes no sense for a triangle, which would require at least 3 glm::ivec3).
What is supposed to be declared for passing a single integer vertex attribute:
layout (location = 9) in int vertex_material;
(without any array qualifier)
GLint materialStuff[3] = { 31, 32, 33 };
glVertexAttribIPointer(9, 1, GL_INT, sizeof(GLint), (void*)0); // tightly packed: one GLint per vertex
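For completeness, a sketch of the matching upload (reusing the question's buffer name materialBufferIndex, which is an assumption about the surrounding code); note that sizeof(materialStuff) replaces the earlier sizeof(glm::vec3):
glBindBuffer(GL_ARRAY_BUFFER, materialBufferIndex);
glBufferData(GL_ARRAY_BUFFER, sizeof(materialStuff), materialStuff, GL_STATIC_DRAW);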
It should be noted, though, that passing a different per-vertex integer to the fragment shader makes no sense on its own, which I suppose you solved with the flat keyword. The existing pipeline defines only per-vertex inputs, not per-triangle ones. There is glVertexAttribDivisor(), which defines the vertex attribute rate, but it applies only to instanced rendering via glDrawArraysInstanced()/glDrawElementsInstanced() (a specific vertex attribute can be incremented per instance), not per triangle.
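For the instanced case only, that would look roughly like this (indexCount and instanceCount are assumed names):
glVertexAttribDivisor(9, 1); //< advance attribute 9 once per instance, not per vertex
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, instanceCount);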
There are ways to handle per-triangle inputs - this could be done by defining a Uniform Buffer Object or a Texture Buffer Object (the latter is like a 1D texture, but accessed by index without interpolation) instead of a generic vertex attribute. But tricks will still be necessary to determine the triangle index into this array - again, from a vertex attribute or from built-in variables like gl_VertexID in the Vertex Shader, gl_PrimitiveIDIn in the Geometry Shader or gl_PrimitiveID in the Fragment Shader (I cannot say, though, how these counters are affected by culling).
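A rough fragment-shader sketch of the Texture Buffer Object route, looking the value up by gl_PrimitiveID (requires GLSL 1.50+; all names here are illustrative, not from the question):
#version 330 core
uniform isamplerBuffer perTriangleMaterial; //< one integer per triangle, exposed via glTexBuffer
out vec4 fragColor;
void main()
{
    int materialIndex = texelFetch(perTriangleMaterial, gl_PrimitiveID).r;
    fragColor = vec4(float(materialIndex) / 255.0, 0.0, 0.0, 1.0); //< placeholder use of the index
}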

Concept: what is the use of glDrawBuffer and glDrawBuffers?

I am reading the red book OpenGL programming guide when I come across these two methods, which strikes me as unnecessary since we already can specify which color buffer the output is going to go to with layout (location = ) or glBindFragDataLocation. Am I misunderstanding anything here?
Not all color attachments which are attached to a framebuffer have to be rendered to by a shader program. glDrawBuffers specifies a list of color buffers to be drawn into.
e.g. Let's assume you have a framebuffer with the 3 color attachments GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 and GL_COLOR_ATTACHMENT2:
Fragment shader
layout (location = 0) out vec4 out_color1;
layout (location = 1) out vec4 out_color2;
Draw buffer specification:
const GLenum buffers[]{ GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT0 };
glDrawBuffers( 2, buffers );
out_color1 sends its data to the draw buffer at index 0 (because of the location = 0 declaration). The call to glDrawBuffers above sets this buffer to be GL_COLOR_ATTACHMENT2. Similarly, out_color2 sends its data to index 1, which is set to be GL_COLOR_ATTACHMENT0. Attachment 1 doesn't get data written to it.
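For context, a sketch of the attachment setup this example assumes (fbo and the colorTex* texture names are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, colorTex1, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, colorTex2, 0);
// with the glDrawBuffers call above:
//   out_color1 (location 0) -> GL_COLOR_ATTACHMENT2
//   out_color2 (location 1) -> GL_COLOR_ATTACHMENT0
//   GL_COLOR_ATTACHMENT1 receives nothing from this program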

OpenGL instancing : how to debug missing per instance data

I am relatively familiar with instanced drawing and per instance data: I've implemented this in the past with success.
Now I am refactoring some old code, and I introduced a bug on how per instance data are supplied to shaders.
The relevant bits are the following:
I have a working render loop implemented using glMultiDrawElementsIndirect: if I ignore the per instance data everything draws as expected.
I have a vbo storing the world transforms of my objects. I used AMD's CodeXL to debug this: the buffer is correctly populated with data, and is bound when drawing a frame.
glBindBuffer(GL_ARRAY_BUFFER,batch.mTransformBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * OBJ_NUM, &xforms, GL_DYNAMIC_DRAW);
The shader specifies the input location explicitly:
#version 450
layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec4 vertexCol;
//...
layout(location = 6)uniform mat4 ViewProj;
layout(location = 10)uniform mat4 Model;
The ViewProj matrix is equal for all instances and is set correctly using:
glUniformMatrix4fv(6, 1, GL_FALSE, &viewProjMat[0][0]);
Model is the per-instance world matrix and it's wrong: it contains all zeros.
After binding the buffer and before drawing each frame, I am trying to setup the attribute pointers and divisors in such a way that every drawn instance will receive a different transform:
for (size_t i = 0; i < 4; ++i)
{
    glEnableVertexAttribArray(10 + i);
    glVertexAttribPointer(10 + i, 4, GL_FLOAT, GL_FALSE,
                          sizeof(GLfloat) * 16,
                          (const GLvoid*) (sizeof(GLfloat) * 4 * i));
    glVertexAttribDivisor(10 + i, 1);
}
Now, I've looked at the code for a while and I really can't figure out what I am missing. CodeXL clearly shows that Model (location 10) isn't correctly filled. No OpenGL error is generated.
My question is: does anyone know under which circumstances the setup of per instance data may fail silently? Or any suggestion on how to debug further this issue?
layout(location = 6)uniform mat4 ViewProj;
layout(location = 10)uniform mat4 Model;
These are uniforms, not input values. They don't get fed by attributes; they get fed by glUniform* calls. If you want Model to be an input value, then qualify it with in, not uniform.
Equally importantly, inputs and uniforms do not get the same locations. What I mean is that uniform locations have a different space from input locations. An input can have the same location index as a uniform, and they won't refer to the same thing. Input locations only refer to attribute indices; uniform locations refer to uniform locations.
Lastly, uniform locations don't work like input locations. With attributes, each vec4-equivalent uses a separate attribute index. With uniform locations, every basic type (anything that isn't a struct or an array) uses a single uniform location. So if ViewProj is a uniform, it only takes up 1 location. But if Model is an input, it takes up 4 attribute indices.
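So the fix, in sketch form, is to keep ViewProj as a uniform and declare Model as a per-instance input; it will then occupy attribute indices 10 through 13, which is exactly what the glVertexAttribPointer/glVertexAttribDivisor loop in the question sets up:
layout(location = 6) uniform mat4 ViewProj; //< still fed by glUniformMatrix4fv(6, ...)
layout(location = 10) in mat4 Model;        //< per-instance attribute, consumes indices 10..13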

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but the object does not render correctly.
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've also checked that program is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on. My GameObject class has a struct called Mesh with variables GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAO's and VBO's work. What I've found is that VAO's are used if you want access to the vertex arrays throughout your program, and VBO's are used if you just want to send it to the graphics card and not touch it again (Correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO then doesn't touch it for the rest of the constructor (unless creating and binding VBO's have an effect on the currently bound VAO). It then goes on and creates and binds VBO's for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object it binds the VAO and calls glDrawElements. What I'm confused about is how/where does OpenGL access the VBO's, and if it can't with the setup in the tutorial, which I'm pretty sure it can, what needs to change?
Source
void GameObject::render() {
    GLuint program = material->shader->program;
    glUseProgram(program);
    glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
    GLuint mvpLoc = glGetUniformLocation(program, "mvp");
    printf("MVP Location: %d\n", mvpLoc); // prints -1
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
    for (unsigned int i = 0; i < meshes.size(); i++) {
        meshes.at(i)->render(); // renders element array for each mesh in the GameObject
    }
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling the program. Are you sure you are using this shader in the program?
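A quick sketch of how to check that, assuming program is the GLuint you pass to glGetUniformLocation (the per-shader compile status is worth checking the same way with glGetShaderiv/glGetShaderInfoLog):
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    printf("Program link failed: %s\n", log);
}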
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
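For reference, a minimal sketch of how a VAO records where a VBO's data lives (the names and the triangle data are illustrative, not from the tutorial):
const GLfloat positions[] = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f };
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);             //< subsequent attribute calls are recorded in this VAO
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo); //< the VBO holds the raw data
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); //< VAO remembers: attribute 0 reads vec3s from the bound VBO
glEnableVertexAttribArray(0);
glBindVertexArray(0);
// later: glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, 3); //< the VAO supplies all attribute state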

OpenGL - GLSL assigning to varying variable breaks the vertex positioning

I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my gnu/linux computer. This computer only supports up to OpenGL 2.1 and GLSL 1.20 (which doesn't have sampler2DArray). As far as I know there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value for the texture and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use represent vertices:
struct vertex {
GLfloat x;
GLfloat y;
GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't effective and the locations are assigned by the GL. Without the assignment, the index attribute is not used, and only the position one is, so it is very likely that position gets location 0. When index is actually used, it might get location 0 instead (on nvidia, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only have an effect when linking the program, so they have to be made before glLinkProgram(), and you have to re-link the program when you want to change those bindings (which you should really avoid). The code you have given suggests that they are called during your regular draw calls, so they never have any effect on the linked program.
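In sketch form, the required ordering is (vertex_shader and fragment_shader are assumed to be already compiled shader objects):
glAttachShader(shader_program, vertex_shader);
glAttachShader(shader_program, fragment_shader);
glBindAttribLocation(shader_program, 0, "position"); //< must happen BEFORE linking
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program);                       //< the bindings take effect here
// the per-draw setup then only needs glEnableVertexAttribArray / glVertexAttribPointer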