Bind an SSBO to a fragment shader - OpenGL

I have an SSBO which stores vec4 colour values for each pixel on screen and is pre-populated with values by a compute shader before the main loop.
I'm now trying to get this data on screen, which I guess involves using the fragment shader (although if you know a better method for this, I'm open to suggestions).
So I'm trying to get the buffer, or at least the data in it, to the fragment shader so that I can set the colour of each fragment to the corresponding value in the buffer, but I cannot find any way of doing this.
I have been told that I can bind the SSBO to the fragment shader, but I don't know how to do that. Another thought I had was somehow moving the data from the SSBO to a texture, but I can't work that out either.
UPDATE:
In response to thokra's excellent answer and the following comments, here is the code to set up my buffer:
//Create the buffer
GLuint pixelBufferID;
glGenBuffers(1, &pixelBufferID);
//Bind it
glBindBuffer(GL_SHADER_STORAGE_BUFFER, pixelBufferID);
//Set the data of the buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * window.getNumberOfPixels(), new vec4[window.getNumberOfPixels()], GL_DYNAMIC_DRAW);
//Bind the buffer to the correct interface block number
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, pixelBufferID);
Then I call the compute shader, and this part works; I've checked that the data has been populated correctly. Then in my fragment shader, just as a test:
layout(std430, binding=0) buffer PixelBuffer
{
vec4 data[];
} pixelBuffer;
void main()
{
gl_FragColor = pixelBuffer.data[660000];
}
What I've noticed is that it seems to take longer and longer the higher the index, so at 660000 it doesn't actually crash, it's just taking a silly amount of time.

Storage buffers work quite similarly to uniform buffers. To get a sense of how those work, I suggest reading an introduction to uniform buffer objects first. The main differences are that storage buffers can hold substantially larger amounts of data and that you can randomly read from and write to them.
There are multiple ways of approaching this, but I'll start with the most basic one: the interface block inside your shader. I will only describe a subset of the possibilities when using interface blocks, but it should be enough to get you started.
In contrast to "normal" variables, you cannot specify buffer variables in the global scope. You need to use an interface block (Section 4.3.9 - GLSL 4.40 Spec) as per Section 4.3.7 - GLSL 4.40 Spec:
The buffer qualifier can be used to declare interface blocks (section 4.3.9 “Interface Blocks”), which are then referred to as shader storage blocks. It is a compile-time error to declare buffer variables at global scope (outside a block).
Note that the above mentioned section differs slightly from the ARB extension.
So, to get access to stuff in your storage buffer you'll need to define a buffer interface block inside your fragment shader (or any other applicable stage):
layout (binding = 0) buffer BlockName
{
float values[]; // just as an example
};
Like with any other block without an instance name, you'll refer to the buffer storage as if values were at global scope, e.g.:
void main()
{
// ...
values[0] = 1.f;
// ...
}
On the application level the only thing you now need to know is that the buffer interface block BlockName has the binding 0 after the program has been successfully linked.
After creating a storage buffer object with your application, you first bind the buffer to the binding you specified for the corresponding interface block using
glBindBufferBase(GLenum target, GLuint index, GLuint buffer);
for binding the complete buffer to the index or
glBindBufferRange(GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
for binding a subset of the buffer, specified by an offset and a size, to the index.
Note that index refers to the binding specified in your layout for the corresponding interface block.
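For completeness, here is a minimal host-side sketch of that binding step, assuming a GL 4.3 context; the buffer name ssbo and its size are made up for illustration:
// Create a buffer and allocate some storage for it (size chosen arbitrarily here).
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, 1024 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);
// Attach the whole buffer to binding index 0, which matches
// layout (binding = 0) buffer BlockName in the shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
// If a compute shader filled the buffer, make its writes visible to the
// fragment shader's reads before drawing.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);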
And that's basically it. Be aware that there are certain limits for the storage buffer size, the number of binding points, maximum storage block sizes and so on. I refer you to the corresponding sections in the GL and GLSL specs.
Also, there is a minimal example in the ARB extension. Reading the issues section of an extension also often provides further insight into the exposed functionality and the rationale behind it. I advise you to read through it.
Leave a comment if you run into problems.

Related

Setting OpenGL uniform value from shader storage buffer

In OpenGL, I have one compute shader which writes output values into a shader storage buffer on the device.
Then another shader (fragment shader) reads that value and uses it.
So this happens all on the device, without synchronizing with the host.
Is there a way to instead have the fragment shader receive the values as a uniform, except that the content of the uniform is not set by the host with glUniform*(), but taken from the device-side shader storage buffer? Similar to how indirect draw calls (e.g. glDrawArraysIndirect()) can take their parameters from a device-side buffer instead of from the host, avoiding a pipeline stall.
This would allow simplifying a program where the fragment shader will receive the value either as a constant set by the host, or dynamically from a previous shader, depending on configuration.
Uniforms can be aggregated into an interface block:
layout(binding = 0) uniform InBlock {
// ... your uniforms go here ...
} IN;
Then the compute-shader written buffer can be bound to that interface block binding point:
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer_id);
In fact this is the preferred way of doing things in general, rather than setting each uniform one-by-one.
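As a rough sketch of that flow (compute_prog, draw_prog and buf are made-up names; the compute shader's storage block and the fragment shader's uniform block are assumed to share the same std140 layout so the data lines up):
// Compute pass: the compute shader writes into buf through an SSBO binding.
glUseProgram(compute_prog);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
glDispatchCompute(1, 1, 1);
// Make the shader writes visible to subsequent reads through a uniform block.
glMemoryBarrier(GL_UNIFORM_BARRIER_BIT);
// Draw pass: bind the very same buffer to uniform block binding 0.
glUseProgram(draw_prog);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buf);
glDrawArrays(GL_TRIANGLES, 0, 3);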

Does GL_SHADER_STORAGE_BUFFER locations collide with other shaders locations?

I have multiple GLSL files that use a shader storage buffer. If I bind buffer bases in different shader files that use the same binding location for their storage buffer, they seem to affect each other. Does this mean that I have to unbind it somehow? When I chose different locations for each file, they didn't seem to affect each other.
for example
first.vs
layout(std430, binding = 0) buffer texture_coordinate_layout
{
vec2 texture_coordinates[];
};
second.vs
layout(std430, binding = 0) buffer vertices_layout
{
vec2 vertices[];
};
With two different shader programs, I bind each one like so:
first shader program
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_vertex_ssbo);
second shader program
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_texture_coordiante_ssbo);
Buffer bindings are part of context state, not the shader program. Index 0 in the context is index 0; it's not associated with any program directly.
The program only specifies which indexed binding point is used for that particular variable when the program gets used for rendering purposes. If you need to use a particular buffer object for a particular variable in a program, then before rendering, you need to ensure that the particular buffer is bound to the context at the index which the program will read. Always.
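For the example above, a minimal sketch (first_program, second_program and the draw parameters are made up; it assumes first.vs should read the texture coordinates and second.vs the vertices) is simply to rebind the appropriate buffer to index 0 before each draw:
// First program: its storage block at binding 0 should see the texture coordinates.
glUseProgram(first_program);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_texture_coordiante_ssbo);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
// Second program: rebind so binding 0 now refers to the vertex buffer.
glUseProgram(second_program);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_vertex_ssbo);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);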

Opengl 3/4 : Can I bind the same buffer object to different targets

In my specific case, I'm trying to bind a vertex buffer object as a uniform buffer object.
For more details, in my opaque object rendering pipeline in deferred shading, I create a G buffer then render light volumes one point light at a time using a light vbo.
I then need all these lights as a ubo available for iteration in forward rendering for translucent objects.
Texture objects are directly and forever associated with the target type with which they are first used. This is not the case for buffer objects.
There is no such thing as a "vertex buffer object" or a "uniform buffer object" (ignore the name of the corresponding extensions). There are only "buffer objects", which can be used for various OpenGL operations, like providing arrays of vertex data, or the storage for uniform blocks, or any number of other things. It is 100% fine to use a buffer as a source for vertex data, then use the same buffer (and same portion of that buffer) as a source for uniform data.
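A minimal sketch of that, with made-up names (light_buf is one buffer object whose data store is used both as vertex input and as the backing of a uniform block):
// Pass 1: use light_buf as a source of vertex attribute data.
glBindBuffer(GL_ARRAY_BUFFER, light_buf);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
// Pass 2: bind the very same buffer (same data store) as the backing
// store of a uniform block declared with layout(std140, binding = 0).
glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_buf);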

Re-compiling shader in openGL

I'm writing my own OpenGL-3D-Application and have stumbled across a little problem:
I want the number of light sources to be dynamic. For this, my shader contains an array of my lights struct:
uniform PointLight pointLights[NR_POINT_LIGHTS];
The variable NR_POINT_LIGHTS is set by a preprocessor define, and that define is generated by my application code (Java). So when creating a shader program, I pass the desired starting amount of PointLights, complete the source text with the preprocessor directive, compile, link and use. This works great.
Now I want to change this variable. I re-build the shader source string, re-compile and re-link a new shader program and continue using this one. It just appears that all uniforms set in the old program are getting lost in the process (of course, I only ever set them on the old program).
My ideas on how to fix this:
Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
Copy all uniform data from the old program to the newly generated one
What is the right way to do this? How do I do it? I'm not very experienced yet and don't know if either of my ideas is even possible.
You're looking for a Uniform Buffer or (4.3+ only) a Shader Storage Buffer.
struct Light {
vec4 position;
vec4 color;
vec4 direction;
/*Anything else you want*/
};
Uniform Buffer:
const int MAX_ARRAY_SIZE = 1024; // roughly 65536 / sizeof(Light); pick so the array fits the uniform block size limit
layout(std140, binding = 0) uniform light_data {
Light lights[MAX_ARRAY_SIZE];
};
uniform int num_of_lights;
Host Code for Uniform Buffer:
GLuint light_ubo;
glGenBuffers(1, &light_ubo);
glBindBuffer(GL_UNIFORM_BUFFER, light_ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); //Can be adjusted for your needs
GLuint light_index = glGetUniformBlockIndex(program_id, "light_data");
glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_ubo);
glUniformBlockBinding(program_id, light_index, 0);
glUniform1i(glGetUniformLocation(program_id, "num_of_lights"), static_light_data.size() / 12); //My lights have 12 floats per light, so we divide by 12.
Shader Storage Buffer (4.3+ Only):
layout(std430, binding = 0) buffer light_data {
Light lights[];
};
/*...*/
void main() {
/*...*/
int num_of_lights = lights.length();
/*...*/
}
Host Code for Shader Storage Buffer (4.3+ Only):
GLuint light_ssbo;
glGenBuffers(1, &light_ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, light_ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); //Can be adjusted for your needs
GLuint light_ssbo_block_index = glGetProgramResourceIndex(program_id, GL_SHADER_STORAGE_BLOCK, "light_data");
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, light_ssbo);
glShaderStorageBlockBinding(program_id, light_ssbo_block_index, 0);
The main difference between the two is that Uniform Buffers:
Have compatibility with older OpenGL 3.x hardware,
Are limited on most systems to 64 KB (65,536 bytes) per block,
Need arrays to have their maximum size declared statically at shader compile time.
Whereas Shader Storage Buffers:
Require OpenGL 4.3-capable hardware,
Have an API-mandated minimum allowable size of 16 MB (and most systems will allow up to 25% of the total VRAM),
Can dynamically query the size of any arrays stored in the buffer (though this can be buggy on older AMD systems),
Can be slower than uniform buffers on the shader side (roughly equivalent to a texture access).
Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
This isn't doable at runtime if I'm understanding you right (it would imply you could change the shader code of an already compiled shader program), but if you modify the shader source text you can compile a new shader program. The thing is, how often does the number of lights change in your scene? Because this is a fairly expensive process.
You could specify a maximum number of lights, if you don't mind the limitation, and only use the lights in the shader that have actually been populated with information. That saves you the task of tweaking the source text and recompiling a whole new shader program, but it leaves you with a cap on the number of lights. (If you aren't planning on having absolutely loads of lights in your scene, but are planning on having the number of lights change relatively often, then this is probably going to be best for you.)
However, if you really want to go down the route that you are looking at here:
Copy all uniform data from the old program to the newly generated one
You can look at using a uniform block. If you're going to be using shader programs with similar or shared uniforms, uniform blocks are a good way of managing those 'universal' uniform variables across your shader programs, or in your case across the programs you move to as you grow the number of lights. There are good tutorials on uniform blocks available online.
Lastly, depending on the OpenGL version you're using, you might still be able to achieve dynamic array sizes. OpenGL 4.3 introduced shader storage blocks, which can contain an unsized array whose length is determined by the buffer range you bind with glBindBufferBase or glBindBufferRange. You'll find more discussion of that functionality in related questions and in the OpenGL wiki.
The last would probably be my preference, but it depends on if you're aiming at hardware supporting older OpenGL versions.
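As a rough sketch of that last approach (num_active_lights and LightCPU, a host struct mirroring the GLSL Light layout, are made up), you can expose only the currently active lights by binding a sub-range of the buffer, so lights.length() in the shader reflects the active count:
// light_ssbo already has storage for the maximum number of lights.
GLsizeiptr active_bytes = num_active_lights * sizeof(LightCPU);
// Bind only the first num_active_lights entries to binding index 0;
// the unsized lights[] array in the shader then has exactly that length.
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, light_ssbo, 0, active_bytes);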

OpenGL How Many VAOs

I am writing an OpenGL3+ application and have some confusion about the use of VAOs. Right now I just have one VAO, a normalised quad set around the origin. This single VAO contains 3 VBOs; one for positions, one for surface normals, and one GL_ELEMENT_ARRAY_BUFFER for indexing (so I can store just 4 vertices, rather than 6).
I have set up some helper methods to draw objects to the scene, such as drawCube() which takes position and rotation values and follows the procedure;
Bind the quad VAO.
Per cube face:
Create a model matrix that represents this face.
Upload the model matrix to the uniform mat4 model vertex shader variable.
Call glDrawElements() to draw the quad into the position for this face.
I have just set about the task of adding per-cube colors and realised that I can't add my color VBO to the single VAO as it will change with each cube, and this doesn't feel right.
I have just read the question OpenGL VAO best practices, which tells me that my approach is wrong and that I should use more VAOs to save the work of setting the whole scene up every time.
How many VAOs should be used? Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene? What about ones that move?
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame; if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene?
Absolutely not. Switching VAOs is costly. If you allocate one VAO per object in your scene, you need to switch the VAO before rendering such objects. Scale that up to a few hundred or thousand objects currently visible and you get just as many VAO changes. The question is, if you have multiple objects which share a common memory layout, i.e. the sizes/types/normalization/strides of the elements are the same, why would you want to define multiple VAOs that all store the same information? You control the offset where you want to start pulling vertex attributes from directly with the corresponding draw call.
For non-indexed geometry this is trivial, since you provide a first (or an array of offsets in the multi-draw case) argument to gl[Multi]DrawArrays*() which defines the offset into the associated ARRAY_BUFFER's data store.
For indexed geometry, and if you store indices for multiple objects in a single ELEMENT_ARRAY_BUFFER, you can use gl[Multi]DrawElementsBaseVertex to provide a constant offset for indices or manually offset your indices by adding a constant offset before uploading them to the buffer object.
Being able to provide offsets into a buffer store also implies that you can store multiple distinct objects in a single ARRAY_BUFFER and corresponding indices in a single ELEMENT_ARRAY_BUFFER. However, how large buffer objects should be depends on your hardware and vendors differ in their recommendations.
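As a small sketch of that idea (the mesh counts are made up; both meshes share the same vertex layout and live in one ARRAY_BUFFER/ELEMENT_ARRAY_BUFFER pair referenced by a single VAO):
glBindVertexArray(shared_vao);
// Mesh A: its indices start at byte offset 0 and reference vertices starting at 0.
glDrawElements(GL_TRIANGLES, mesh_a_index_count, GL_UNSIGNED_INT, (void*)0);
// Mesh B: its indices follow mesh A's in the element buffer and are relative
// to its own first vertex, so that vertex is passed as the basevertex offset.
glDrawElementsBaseVertex(GL_TRIANGLES, mesh_b_index_count, GL_UNSIGNED_INT,
                         (void*)(mesh_a_index_count * sizeof(GLuint)),
                         mesh_a_vertex_count);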
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame; if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
First of all, uniforms and shader input/output variables declared as in/out differ in several ways:
input/output variables define an interface between shader stages, i.e. output variables in one shader stage are backed by a corresponding and equally named input variable in the following stage. A uniform is available in all stages if declared with the same name and is constant until changed by the application.
input variables inside a vertex shader are filled from an ARRAY_BUFFER. Uniforms inside a uniform block are backed by a UNIFORM_BUFFER.
input variables can also be written directly using the glVertexAttrib*() family of functions; single uniforms are written using the glUniform*() family of functions.
the values of uniforms are program state; the values of input variables are not.
The semantic difference should also be obvious: uniforms, as their name suggests, are usually constant among a set of primitives, whereas input variables usually change per vertex or fragment (due to interpolation).
EDIT: To clarify and to factor in Nicol Bolas' remark: uniforms cannot be changed by the application for a set of vertices submitted by a single draw call, nor can vertex attributes set by calling glVertexAttrib*(). Vertex shader inputs backed by buffer objects will change either once per vertex or at some specific rate set by glVertexAttribDivisor().
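For example, a small sketch of an instanced attribute (attribute index 2 and the buffer/count names are made up), where one value is consumed per instance rather than per vertex:
// Attribute 2 sources one vec4 per instance from instance_data_buf.
glBindBuffer(GL_ARRAY_BUFFER, instance_data_buf);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(2);
glVertexAttribDivisor(2, 1); // advance once per instance instead of once per vertex
glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, instance_count);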
EDIT2: To clarify how a VAO can theoretically store multiple layouts, you can simply define multiple arrays with different indices but equal semantics. For instance,
glVertexAttribPointer(0, 4, ....);
and
glVertexAttribPointer(1, 3, ....);
could define two arrays with indices 0 and 1, with component sizes 4 and 3 respectively, both referring to position attributes of vertices. However, depending on what you want to render, you can bind a hypothetical vertex shader input
// if you have GL_ARB_explicit_attrib_location or GL3.3 available, use explicit
// locations
/*layout(location = 0)*/ in vec4 Position;
or
/*layout(location = 1)*/ in vec3 Position;
to either index 0 or 1, explicitly or via glBindAttribLocation(), and still use the same VAO. AFAIK, the spec says nothing about what happens if an attribute is enabled but not sourced by the current shader, but I suspect implementations simply ignore the attribute in that case.
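A small host-side sketch of that (the program names are made up; the calls must happen before linking to take effect):
// Map the shader's Position input to whichever attribute index holds the
// layout this program should consume (index 0: vec4s, index 1: vec3s).
glBindAttribLocation(program_using_vec4_positions, 0, "Position");
glLinkProgram(program_using_vec4_positions);
glBindAttribLocation(program_using_vec3_positions, 1, "Position");
glLinkProgram(program_using_vec3_positions);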
Whether you source the data for said attributes from the same or a different buffer object is another question but of course possible.
Personally I tend to use one VBO and VAO per layout, i.e. if my data is made up of an equal number of attributes with the same properties, I put them into a single VBO and a single VAO.
In general: You can experiment with this stuff a lot. Do it!