OpenGL vertex shader processing - glsl

I'm pretty new to 3D programming. I'm trying to learn OpenGL from this site. While reading, I couldn't really understand how the layout (location = 0) line operates. I've tried to search for other explanations online, both in the OpenGL wiki and on other sites, and I've managed to find this site, from which I understood a little more.
So if I am correct, the vertex shader takes some inputs and generates some outputs. The inputs of the shader are called vertex attributes, and each one of them has an index called the attribute index. Now I expect that if the shader takes as input a single vertex and its attributes, it has to run multiple times, once for each vertex of the object I'm trying to render.
Is what I wrote correct up to this point?
Now, what I didn't manage to understand is how layout (location = 0) really works. My assumption is that this instruction tells the shader from which location in memory to pick the attribute with index 0. Thus each time the shader re-runs (if it actually re-runs), the location should move forward by one unit, like in a normal for loop. Is this interpretation correct? And, please, can anyone actually explain to me, in an organic way, how the vertex shader operates?
P.S. Thank you in advance for your time and excuse my poor English: I'm still practising it!
Edit
This is the code. Following the first guide I linked I created an array of vertices:
float vertices[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
then I created a vertex buffer object:
unsigned int VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
I added the data to the VBO:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
while the vertex shader reads:
#version 330 core
layout (location = 0) in vec3 aPos;
void main() {
gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}

You need to look at both sides of this. You bind a buffer containing all of your data. Say position and color.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
Now in the shader, I can use these vectors without specifying the index of the vertex I am processing, because we already had to tell GL how the data is laid out in the buffer.
We do that when we bind buffers to the program.
Let's say we want to create a triangle. It has 3 vertices, and each vertex has two attributes: color and position. We create a vertex shader that processes each vertex (so yes, it runs once per vertex, as you guessed); in that program it is implied that each vertex has a color and a position. You don't care about its index in the array (for now).
The program will take vertex i, v_i, and process it. How it populates position and color depends on how you bind the data. I could have two arrays,
positionData = [x0, y0, z0, x1, ... z3];
colorData = [r0, g0, b0, r1, ... b3];
So I would buffer this data, then I would bind each buffer to the program at the right attribute location and specify how it is read. E.g. bind the positionBuffer to attribute location 0, read it in strides of three with no offset.
The same with the color data, but with location 1.
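For instance, a minimal sketch of that two-buffer setup (positionBuffer and colorBuffer are illustrative names for buffers already filled via glBufferData):
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0); // location 0: 3 floats per vertex, no offset
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0); // location 1: same layout, its own buffer
glEnableVertexAttribArray(1);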
Alternatively, I could do:
posColData = [ x0, y0, z0, r0, g0, b0, x1, y1, ... b3];
Then I would create posColBuffer and bind it to the 0th attribute, with a stride of 6. I would also bind the posColBuffer to the 1st attribute with a stride of 6 and an offset of 3.
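In GL calls, that interleaved setup might look like this (a sketch; glVertexAttribPointer takes stride and offset in bytes, so 6 floats become 6 * sizeof(float)):
glBindBuffer(GL_ARRAY_BUFFER, posColBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0); // positions: stride of 6 floats, no offset
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float))); // colors: same stride, offset of 3 floats
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);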
The code you are using does this here.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
They rely on the layout qualifier: since the shader fixes the location at 0, the application can simply pass 0 as the first argument.

Related

Make a shader storage buffer accessible in different shader programs

What layout and binding do I have to use to make a (working) shader storage buffer readable in a second shader program?
I set up and populated an SSBO which I bound successfully and used in a geometry shader. That shader reads and writes to that SSBO - no problems so far. No rendering is done there.
In the next step, my rendering pass (second shader program) shall have access to this data. The idea is to have a big data set while the vertex shader of the second program only uses some indices per render call to pick certain values of that SSBO.
Am I missing some specific binding commands, or did I place them in the wrong spot?
Is the layout consistent in both programs? Did I mess up the instances?
I just can't find any examples of an SSBO used in two programs.
Creating, populating and binding:
float data[48000];
data[0] = -1.0;
data[1] = 1.0;
data[2] = -1.0;
data[3] = -1.0;
data[4] = 1.0;
data[5] = -1.0;
data[6] = 1.0;
data[7] = 1.0;
data[16000] = 0.0;
data[16001] = 1.0;
data[16002] = 0.0;
data[16003] = 0.0;
data[16004] = 1.0;
data[16005] = 0.0;
data[16006] = 1.0;
data[16007] = 1.0;
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), &data, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);
Instance in the geometry shader:
layout(std140, binding = 1) buffer mesh
{
vec2 points[8000];
vec2 texs[8000];
vec4 colors_and_errors[8000];
} mesh_data;
Second instance, in the vertex shader of the other program:
layout(std140, binding = 1) buffer mesh
{
vec2 points[8000];
vec2 texs[8000];
vec4 colors_and_errors[8000];
} mesh_data;
Are the instances working against each other?
Right now I am not posting the bindings I do in the render loop, since I am not sure what I am doing there. I tried to bind before/after changing the used program, without success.
Does anybody have an idea?
EDIT: Do I also have to bind the SSBO to the second program outside of the render loop? In a different way from the first binding?
EDIT: Although I did not solve this particular problem, I found a workaround that might be even more in the spirit of OpenGL.
I used the SSBO of the first program as vertex attributes in the second program. This, plus OpenGL's indexed-rendering functions, solved my issue.
(Should this be marked as solved?)
It seems like you're most of the way there, but there are a few things you should watch out for.
Is the layout consistent in both programs?
layout(std140, binding = 1) buffer mesh
You need to be careful with this layout. Under std140, the stride of an array is rounded up to a multiple of 16 bytes (the size of a vec4), so your vec2 arrays will no longer line up with the tightly packed data you're providing from the C code. In this case, std430, which keeps vec2 array elements at their natural 8-byte stride, should work for you.
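Concretely, the block from the question would become the following (same members, only the layout qualifier changes; a sketch of the std430 version):
layout(std430, binding = 1) buffer mesh
{
vec2 points[8000]; // std430: natural 8-byte stride; std140 would pad each element to 16 bytes
vec2 texs[8000];
vec4 colors_and_errors[8000];
} mesh_data;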
Do I also have to bind the SSBO to the second program outside of the render loop? In a different way from the first binding?
Once you've bound the SSBO, assuming both programs use the same binding point (in your example, they do), you should be fine. Sharing data between programs works, but synchronisation is required. You can enforce this with a memory barrier.
You don't mention VAOs, but you will only be able to use SSBOs after you've bound a VAO (not on the default one).
I think this might be best explained with an example.
Vertex shader for the first program. It uses the buffer data for its position and texture coords and then flips the positions in Y.
layout(std430, binding = 1) buffer mesh {
vec4 points[3];
vec2 texs[3];
} mesh_data;
out highp vec2 coords;
void main() {
coords = mesh_data.texs[gl_VertexID];
gl_Position = mesh_data.points[gl_VertexID];
mesh_data.points[gl_VertexID] = vec4(gl_Position.x, -gl_Position.y, gl_Position.zw);
}
Vertex shader for the second program. It just uses the data but doesn't modify it.
layout(std430, binding = 1) buffer mesh {
vec4 points[3];
vec2 texs[3];
} mesh_data;
out highp vec2 coords;
void main() {
coords = mesh_data.texs[gl_VertexID];
gl_Position = mesh_data.points[gl_VertexID];
}
In the application, you need to bind a VAO.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
Then setup your SSBO.
float const data[] = {
-0.5f, -0.5f, 0.0f, 1.0,
0.0f, 0.5f, 0.0f, 1.0,
0.5f, -0.5f, 0.0f, 1.0,
0.0f, 0.0f,
0.5f, 1.0f,
1.0f, 0.0f
};
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), data, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);
Make the draw calls using the first program.
glUseProgram(first_program);
glDrawArrays(GL_TRIANGLES, 0, 3);
Insert a memory barrier to ensure the writes from the preceding draw call complete before the next draw call tries to read from the buffer.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
Make the draw calls using the second program.
glUseProgram(second_program);
glDrawArrays(GL_TRIANGLES, 0, 3);
I hope that clarifies things! Let me know if you have any further questions.

glDrawArrays unexpected behavior - wrong order of arguments produces desired image

I am working on code written by someone else, and at the moment I have a fairly limited understanding of the codebase, which is why I wasn't sure how to formulate my question properly, or whether this is an OpenGL question or a debugging-strategy question. Furthermore, I obviously can't share the whole codebase, and the reason things are not working is most likely rooted in there. Regardless, perhaps someone might have an idea of what is going on, or where I should look.
I have a vertex structure defined in the following way:
struct Vertex {
Vertex(glm::vec3 position, glm::vec3 normal):
_position(position), _normal(normal) {};
glm::vec3 _position;
glm::vec3 _normal;
};
I have a std vector of vertices which I fill out with vertex data extracted from a certain structure. For the sake of simplicity, let's assume it's another vector:
// std::vector<Vertex> data - contains vertex data
std::vector<Vertex> Vertices;
Vertices.reserve(data.size());
for (int i = 0; i < data.size(); i++) {
Vertices.emplace_back(Vertex(data[i]._position, data[i]._normal));
}
Then I generate a vertex buffer object, buffer my data and enable vertex attributes:
GLuint VB;
glGenBuffers(1, &VB);
glBindBuffer(GL_ARRAY_BUFFER, VB);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*Vertices.size(), &Vertices[0],
GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)(sizeof(GLfloat) * 3));
Finally, I bind a shader and set up uniforms, then I call glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, Vertices.size());
// clean-up
// swap buffers (comes at some point in code, I haven't figure out where yet)
At this point nothing gets rendered. However, initially I made a mistake and swapped the parameters of the draw call, such that offset comes before the number of elements to draw:
glDrawArrays(GL_TRIANGLES, Vertices.size(), 0);
And surprisingly, that actually rendered what I wanted to render in the first place. However, the documentation clearly says that the offset comes first and the number of elements after it, which means that glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) should have shown nothing, since I specified zero elements to be drawn.
Now, there are multiple windows in the application and shared vertex buffer objects. At some point I thought that the vertex buffer object I generated somehow gets passed around in a part of the code I haven't explored yet, which uses it to draw geometry I didn't expect to be drawn. However, that still doesn't explain the fact that when I use glDrawArrays(GL_TRIANGLES, Vertices.size(), 0), with zero as the number of elements to be drawn, I see the geometry; whereas when I order the parameters according to the documentation, nothing gets shown.
Given this scarce information that I shared, does someone by any chance have an idea of what might be going on? If not, how would you tackle this, how would you go about debugging (or understanding) it?
EDIT: Vertex and Fragment shader
Mind that this is a dummy shader that simply paints the whole geometry red. Regardless, the shader is not the cause of my problems, given how the geometry gets drawn depending on how I use the draw call (see above).
EDIT EDIT: Note that as long as I don't activate blending, the alpha component (which is zero in the shader) won't have any effect on the produced image.
vertex shader:
#version 440
layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
uniform mat4 MVP; // model-view-projection matrix
void main() {
gl_Position = MVP*vec4(position, 1.0);
}
fragment shader:
#version 440
out vec4 outColor;
void main()
{
outColor = vec4(1, 0, 0, 0);
}
Regarding the glDrawArrays parameter inversion, have you tried stepping into that function call? Perhaps you are using an OpenGL wrapper of some sort which modifies the order of the arguments. I can confirm, however, that the documentation you quote is not wrong about the parameter order!
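For reference, the documented signature is:
void glDrawArrays(GLenum mode, GLint first, GLsizei count);
So glDrawArrays(GL_TRIANGLES, Vertices.size(), 0) requests zero vertices starting at index Vertices.size() and should indeed draw nothing, which makes a wrapper or macro sitting between your code and the real GL entry point the prime suspect.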

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but I'm getting this: [screenshot of the broken result omitted]
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it has been declared and is being used in the vertex shader. I've also checked that program is valid, and such.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on. My GameObject class has a struct called Mesh with the members GLuint vao and GLuint vbo[4] (vertex array object and vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on and creates and binds VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object, it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLint mvpLoc = glGetUniformLocation(program, "mvp");
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling or linking the program. Are you sure you are using this shader in the program?
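One quick sanity check (a minimal sketch, assuming program is your linked program object) is to query the link status and the info log:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
GLchar log[1024];
glGetProgramInfoLog(program, sizeof(log), NULL, log); // fetch the linker's error text
printf("Program link failed: %s\n", log);
}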
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
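For context, the mesh constructor typically follows this pattern (a sketch with illustrative names, not the tutorial's exact code):
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // from here on, attribute state is recorded into the VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // e.g. the position buffer
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); // recorded into the VAO
glEnableVertexAttribArray(0); // also recorded
glBindVertexArray(0);
At render time, glBindVertexArray(vao) restores all of that state before glDrawElements.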
This is not related to your missing uniform.

OpenGL - GLSL assigning to varying variable breaks the vertex positioning

I did a project in OpenGL version 3.2 once where I used a "sampler2DArray" to store multiple images with the same dimensions and rendered them using textured points.
Now I am trying to port that project to my GNU/Linux computer. This computer only supports OpenGL up to version 2.1 and GLSL up to version 1.20 (which doesn't have sampler2DArray). As far as I know, there is no way to update OpenGL to support the newer features.
What I am currently trying to do is to use a sampler3D to store my images and use the depth value to select the image I want.
To send the texture depth from the vertex shader to the fragment shader I have declared a "varying" float variable holding the depth value (0.0 to 1.0).
I am rendering 4 images at the locations: (-0.5, +0.5), (+0.5, +0.5), (-0.5, -0.5) and (+0.5, -0.5).
The image switching method appears to be working (changing the "index" variable changes the image). But for some weird reason all images get rendered at (0.0, 0.0) and not at their assigned positions. This problem goes away when I don't assign to the varying variable containing the depth value for the texture and instead set the depth value to 0.0 in the fragment shader.
Here is the vertex shader:
#version 120
attribute vec2 position;
attribute float index;
varying float v_index;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
v_index = index; // Removing this assignment makes the images appear at their assigned locations.
}
Here is the fragment shader:
#version 120
uniform sampler3D texture;
varying float v_index;
void main()
{
gl_FragColor = texture3D(texture, vec3(gl_PointCoord, v_index));
}
The structure I use represent vertices:
struct vertex {
GLfloat x;
GLfloat y;
GLfloat texture_index;
};
The calls to the glVertexAttribPointer function (the problem may be here too):
glBindAttribLocation(shader_program, 0, "position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glBindAttribLocation(shader_program, 1, "index");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));
I have also found a very similar question. The answer marked "accepted" claims that the cause of the problem is that the shaders have more than 16 varying vectors (which isn't the case for me).
Here is the link: Strange error in GLSL while assigning attribute to a varying in vertex shader
This looks like your attribute location bindings aren't taking effect and the locations are being assigned by the GL. Without the assignment, the index attribute is not used and only the position one is, so it is very likely that position gets location 0. When index is actually used, it might get 0 instead (on NVIDIA, those locations seem to be assigned in alphabetical order).
The glBindAttribLocation() calls only have an effect when linking the program, so they have to be called before glLinkProgram(), and you have to re-link the program whenever you want to change them (which you should really avoid). The code you have shown suggests that they are called during your regular draw calls, so they never have any effect on the linked program.
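So the setup should look roughly like this (a sketch reusing the names from your code):
glBindAttribLocation(shader_program, 0, "position");
glBindAttribLocation(shader_program, 1, "index");
glLinkProgram(shader_program); // the bindings take effect here
Per draw call, only the enables and pointer setup are then needed:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(struct vertex), (void *)(2 * sizeof(GLfloat)));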

Rendering rectangle texture with GLSL

I created a class that renders video frames (on Mac) to a custom framebuffer object. As input I have a YUV texture, and I successfully created a fragment shader which takes as input 3 rectangle textures (one each for the Y, U and V planes; the data is uploaded with glTexSubImage2D using GL_TEXTURE_RECTANGLE_ARB, GL_LUMINANCE and GL_UNSIGNED_BYTE). Before rendering I set the active texture to three different texture units (0, 1 and 2) and bind a texture for each, and for performance reasons I used GL_APPLE_client_storage and GL_APPLE_texture_range. Then I rendered it using glUseProgram(myProg), glBegin(GL_QUADS) ... glEnd().
That worked fine, and I got the expected result (aside from a flickering effect, which I guess has to do with the fact that I used two different GL contexts on two different threads; I suppose they get in each other's way at some point [that's a topic for another question later]). Anyway, I decided to further improve my code by adding a vertex shader as well, so that I can skip the glBegin/glEnd - which I read is outdated and should be avoided anyway.
So as a next step I created two buffer objects, one for the vertices and one for the texture coordinates:
const GLsizeiptr posSize = 4 * 4 * sizeof(GLfloat);
const GLfloat posData[] =
{
-1.0f, -1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f, 1.0f,
1.0f, 1.0f, -1.0f, 1.0f,
-1.0f, 1.0f, -1.0f, 1.0f
};
const GLsizeiptr texCoordSize = 4 * 2 * sizeof(GLfloat);
const GLfloat texCoordData[] =
{
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0
};
glGenBuffers(1, &m_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, posSize, posData, GL_STATIC_DRAW);
glGenBuffers(1, &m_texCoordBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glBufferData(GL_ARRAY_BUFFER, texCoordSize, texCoordData, GL_STATIC_DRAW);
Then after loading the shaders I try to retrieve the locations of the attributes in the vertex shader:
m_attributeTexCoord = glGetAttribLocation( m_shaderProgram, "texCoord");
m_attributePos = glGetAttribLocation( m_shaderProgram, "position");
which gives me 0 for texCoord and 1 for position, which seems fine.
After getting the attributes I also call
glEnableVertexAttribArray(m_attributePos);
glEnableVertexAttribArray(m_attributeTexCoord);
(I am doing that only once; or does it have to be done before every glVertexAttribPointer and glDrawArrays? Does it need to be done per texture unit? Or while my shader is activated with glUseProgram? Or can I do it just anywhere?)
After that I changed the rendering code to replace the glBegin/glEnd:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_Y);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_U);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_V);
glUseProgram(myShaderProgID);
// new method with shaders and buffers
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexAttribPointer(m_attributePos, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glVertexAttribPointer(m_attributeTexCoord, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glDrawArrays(GL_QUADS, 0, 4);
glUseProgram(0);
But since changing the code to this, I only ever get a black screen as the result. So I suppose I am missing some simple steps, maybe some glEnable/glDisable or setting some things properly - but like I said, I am new to this, so I haven't really got an idea. For your reference, here is the vertex shader:
#version 110
attribute vec2 texCoord;
attribute vec4 position;
// the tex coords for the fragment shader
varying vec2 texCoordY;
varying vec2 texCoordUV;
//the shader entry point is the main method
void main()
{
texCoordY = texCoord;
texCoordUV = texCoordY * 0.5; // U and V are only half the size of Y texture
gl_Position = gl_ModelViewProjectionMatrix * position;
}
My guess is that I am missing something obvious here, or just don't have a deep enough understanding of the processes involved yet. I tried using OpenGLShaderBuilder as well, which helped me get the original code for the fragment shader right (this is why I haven't posted it here), but since adding the vertex shader it doesn't give me any output either (I was wondering how it could know how to produce the output if it doesn't know the position/texCoord attributes anyway?).
I haven't closely studied every line, but I think your logic is mostly correct. What I see missing is glEnableVertexAttribArray. You need to enable both vertex attributes before the glDrawArrays call.
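A minimal version of the draw path with the enables in place might look like this (a sketch reusing your variable names; keeping the enables next to the pointer setup is the safest spot in case other code disables them):
glUseProgram(myShaderProgID);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexAttribPointer(m_attributePos, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(m_attributePos);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glVertexAttribPointer(m_attributeTexCoord, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(m_attributeTexCoord);
glDrawArrays(GL_QUADS, 0, 4);
glUseProgram(0);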