Do GL_SHADER_STORAGE_BUFFER binding points collide with other shaders' bindings? - c++

I have multiple GLSL files that use a shader storage buffer. When I bind buffer bases for different shader files that use the same binding in their storage blocks, they seem to affect each other. Does this mean I have to unbind somehow? When I chose a different binding for each file, they no longer seemed to interfere.
for example
first.vs
layout(std430, binding = 0) buffer texture_coordinate_layout
{
vec2 texture_coordinates[];
};
second.vs
layout(std430, binding = 0) buffer vertices_layout
{
vec2 vertices[];
};
With two different shader programs, I bind each one like so:
first shader program
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_vertex_ssbo);
second shader program
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_texture_coordinate_ssbo);

Buffer bindings are part of context state, not the shader program. Index 0 in the context is index 0; it's not associated with any program directly.
The program only specifies which indexed binding point is used for that particular variable when the program gets used for rendering purposes. If you need to use a particular buffer object for a particular variable in a program, then before rendering, you need to ensure that the particular buffer is bound to the context at the index which the program will read. Always.
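The rule above can be modeled with a small simulation (not real GL, just an illustrative sketch with made-up type names): the context keeps a single table of indexed SSBO binding points, binding a new buffer to index 0 replaces whatever was there before, and whichever program is current at draw time simply reads that table.

```cpp
#include <unordered_map>
#include <cassert>

// Hypothetical model of GL context state: one table of indexed SSBO
// binding points, shared by every shader program.
struct FakeContext {
    std::unordered_map<unsigned, unsigned> ssbo_bindings; // index -> buffer id

    void bindBufferBase(unsigned index, unsigned buffer) {
        ssbo_bindings[index] = buffer; // replaces any previous binding at this index
    }
    unsigned bufferAt(unsigned index) const {
        auto it = ssbo_bindings.find(index);
        return it == ssbo_bindings.end() ? 0u : it->second;
    }
};
```

Whichever bind happened last wins; that is why two programs declaring `binding = 0` appear to "affect each other" when you leave different buffers bound at index 0.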

Related

Setting OpenGL uniform value from shader storage buffer

In OpenGL, I have one compute shader which writes output values into a shader storage buffer on the device.
Then another shader (fragment shader) reads that value and uses it.
So this happens all on the device, without synchronizing with the host.
Is there a way to instead have the fragment shader receive the values as a uniform, except that the content of the uniform is not set by the host with glUniform(), but comes from the device-side shader storage buffer? In a way similar to how glDrawArraysIndirect() can take its parameters from a device-side buffer instead of from the host, avoiding pipeline stalling.
This would allow simplifying a program where the fragment shader will receive the value either as a constant set by the host, or dynamically from a previous shader, depending on configuration.
Uniforms can be aggregated into an interface block:
layout(binding = 0) uniform InBlock {
// ... your uniforms go here ...
} IN;
Then the buffer written by the compute shader can be bound to that interface block's binding point:
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer_id);
If the compute shader writes the buffer in the same frame, issue glMemoryBarrier(GL_UNIFORM_BARRIER_BIT) between the dispatch and the draw so the writes become visible to the uniform reads.
In fact, aggregating uniforms into blocks is the preferred way of doing things in general, rather than setting each uniform one by one.

How does OpenGL differentiate binding points in VAO from ones defined with glBindBufferBase?

I am writing a particle simulation which uses OpenGL >= 4.3 and came upon a "problem" (or rather the lack of one), which confuses me.
For the compute shader part, I use various GL_SHADER_STORAGE_BUFFERs which are bound to binding points via glBindBufferBase().
One of these GL_SHADER_STORAGE_BUFFERs is also used in the vertex shader to supply normals needed for rendering.
The binding in both the compute and vertex shader GLSL (these are called shaders 1 below) looks like this:
OpenGL part:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, normals_ssbo);
GLSL part:
...
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
...
The interesting part is that in a separate shader program with a different vertex shader (below called shader 2), binding point 1 is (re)used like this:
GLSL:
layout(location = 1) in vec4 Normal;
but in this case, the normals come from a different buffer object and the binding is done using a VAO, like this:
OpenGL:
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
As you can see, the binding point and the layout of the data (both are vec4) are the same, but the actual buffer objects differ.
Now to my questions:
Why does the VAO of shader 2, which is created and used after setting up shaders 1 (which use glBindBufferBase for binding), seemingly overwrite (?) the binding point, while shaders 1 still remember the SSBO binding and work fine without calling glBindBufferBase again before using them?
How does OpenGL know which of those two buffer objects the binding point (which in both cases is 1) should use? Are binding points created via a VAO and via glBindBufferBase simply completely separate things? If that's the case, why does something like this NOT work:
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
layout(location = 1) in vec4 Normal;
Are binding points created via a VAO and via glBindBufferBase simply completely separate things?
Yes, they are. That's why they're set by two different functions.
If that's the case, why does something like this NOT work:
Two possibilities present themselves. You implemented it incorrectly on the rendering side, or your driver has a bug. Which is which cannot be determined without seeing your actual code.
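The separation can be sketched the same way (a simulation with made-up names, not real GL): vertex attribute index 1 lives in the VAO's attribute table, SSBO binding index 1 lives in the context's indexed-buffer table, and writing to one never touches the other.

```cpp
#include <unordered_map>
#include <cassert>

// Hypothetical model: a VAO owns attribute -> buffer associations,
// while indexed SSBO bindings live in a completely separate context table.
struct FakeGL {
    std::unordered_map<unsigned, unsigned> vaoAttribBuffers; // attrib index -> buffer
    std::unordered_map<unsigned, unsigned> ssboBindings;     // binding index -> buffer

    void vertexAttribBuffer(unsigned attrib, unsigned buffer) {
        vaoAttribBuffers[attrib] = buffer; // analogous to glVertexAttribPointer
    }
    void bindBufferBaseSSBO(unsigned index, unsigned buffer) {
        ssboBindings[index] = buffer;      // analogous to glBindBufferBase
    }
};
```

Index 1 in each table refers to a different buffer without conflict, which is why shaders 1 keep working after the VAO of shader 2 is set up.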

opengl pass texture to program: once or at every rendering?

I've a program with two texture: one from a video, and one from an image.
For the image texture, do I have to pass it to the program at each rendering, or can I do it just once? ie can I do
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), texture.id)
glUniform1i(textureLocation, 1)
just once? I believed so, but in my experiments this works fine if there is no video texture involved; as soon as I add the video texture, which I'm attaching at every rendering pass (since it's changing), the only way to get the image is to run the above code every frame.
Let's dissect what you're doing, including some unnecessary stuff, and what the GL does.
First of all, none of the C-style casts you're doing in your code are necessary. Just use GL_TEXTURE_2D and so on instead of GLenum(GL_TEXTURE_2D).
glActiveTexture(GL_TEXTURE0 + i), where i is in the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1], selects the currently active texture unit. Commands that alter texture unit state will affect unit i as long as you don't call glActiveTexture with another valid unit identifier.
As soon as you call glBindTexture(target, name) with the current active texture unit i, the state of the texture unit is changed to refer to name for the specified target when sampling it with the appropriate sampler in a shader (i.e. name might be bound to TEXTURE_2D and the corresponding sampler would have to be a sampler2D). You can only bind one texture object to a specific target for the currently active texture unit - so, if you need to sample two 2D textures in your shader, you need two texture units.
From the above, it should be obvious what glUniform1i(samplerLocation, i) does.
So, if you have two 2D textures you need to sample in a shader, you need two texture units and two samplers, each referring to one specific unit:
GLuint regularTextureName = 0;
GLuint videoTextureName = 0;
GLint regularTextureSamplerLocation = ...;
GLint videoTextureSamplerLocation = ...;
GLint regularTextureUnit = 0;
GLint videoTextureUnit = 1;
// setup texture objects and shaders ...
// make the successfully linked shader program current and query
// locations, or better yet, assign locations explicitly in
// the shader (see below) ...
glActiveTexture(GL_TEXTURE0 + regularTextureUnit);
glBindTexture(GL_TEXTURE_2D, regularTextureName);
glUniform1i(regularTextureSamplerLocation, regularTextureUnit);
glActiveTexture(GL_TEXTURE0 + videoTextureUnit);
glBindTexture(GL_TEXTURE_2D, videoTextureName);
glUniform1i(videoTextureSamplerLocation, videoTextureUnit);
Your fragment shader, where I assume you'll be doing the sampling, would have to have the corresponding samplers:
layout(binding = 0) uniform sampler2D regularTextureSampler;
layout(binding = 1) uniform sampler2D videoTextureSampler;
And that's it. If both texture objects bound to the above units are set up correctly, it doesn't matter if the contents of a texture change dynamically before each fragment shader invocation - there are numerous scenarios where this is commonplace, e.g. deferred rendering or any other render-to-texture algorithm, so you're not exactly breaking new ground with a video texture.
As to the question on how often you need to do this: you need to do it when you need to do it - don't change state that doesn't need changing. If you never change the bindings of the corresponding texture unit, you don't need to rebind the texture at all. Set them up once correctly and leave them alone.
The same goes for the sampler bindings: if you don't sample other texture objects with your shader, you don't need to change the shader program state at all. Set it up once and leave it alone.
In short: don't change state if you don't have to.
EDIT: I'm not quite sure whether this is your case or not, but if you're using the same shader with one sampler for both textures in separate draw calls, you'd have to change something - but it's as simple as pointing the sampler at another texture unit:
// same texture unit setup as before
// shader program is current
while (rendering)
{
glUniform1i(samplerLocation, regularTextureUnit);
// draw call sampling the regular texture
glUniform1i(samplerLocation, videoTextureUnit);
// draw call sampling the video texture
}
You should make sure the right texture is bound before every draw. You only need to set the sampler's location value once; you can also use layout(binding = 1) in your shader code for that. The sampler uniform stays with the program, but the texture binding is global GL state. Also be careful with glActiveTexture: it is global GL state too.
Good practice would be:
On program creation, once, set texture location (uniform)
On draw: glActiveTexture(GL_TEXTURE0 + i), glBindTexture(GL_TEXTURE_2D, texture), draw; then optionally glBindTexture(GL_TEXTURE_2D, 0) and glActiveTexture(GL_TEXTURE0) to reset the state.
Then optimize later for redundant calls.
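The active-unit mechanics described in the answers above can be sketched as a small simulation (not real GL; the struct and the fixed unit count are assumptions for illustration): glActiveTexture selects which unit subsequent glBindTexture calls affect, and each unit keeps its own 2D binding.

```cpp
#include <array>
#include <cassert>

// Hypothetical model of texture-unit state: each unit holds one
// GL_TEXTURE_2D binding; binding only touches the currently active unit.
struct FakeTextureState {
    std::array<unsigned, 16> bound2D{}; // unit -> texture name (0 = none bound)
    unsigned active = 0;                // set by glActiveTexture

    void activeTexture(unsigned unit) { active = unit; }
    void bindTexture2D(unsigned name) { bound2D[active] = name; }
};
```

Binding the video texture on unit 1 leaves the image texture on unit 0 untouched, which is why a one-time setup of both units can survive per-frame updates of the video texture's contents.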

Bind an SSBO to a fragment shader

I have an SSBO which stores vec4 colour values for each pixel on screen and is pre-populated with values by a compute shader before the main loop.
I'm now trying to get this data on screen, which I guess involves using the fragment shader (although if you know a better method, I'm open to suggestions).
So I'm trying to get the buffer, or at least the data in it, to the fragment shader so that I can set the colour of each fragment to the corresponding value in the buffer, but I cannot find any way of doing this.
I have been told that I can bind the SSBO to the fragment shader, but I don't know how. Another thought I had was somehow moving the data from the SSBO to a texture, but I can't work that out either.
UPDATE:
In response thokra's excellent answer and following comments here is the code to set up my buffer:
//Create the buffer
GLuint pixelBufferID;
glGenBuffers(1, &pixelBufferID);
//Bind it
glBindBuffer(GL_SHADER_STORAGE_BUFFER, pixelBufferID);
//Set the data of the buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * window.getNumberOfPixels(), new vec4[window.getNumberOfPixels()], GL_DYNAMIC_DRAW);
//Bind the buffer to the correct interface block number
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, pixelBufferID);
Then I call the compute shader and this part works, I check the data has been populated correctly. Then in my fragment shader, just as a test:
layout(std430, binding=0) buffer PixelBuffer
{
vec4 data[];
} pixelBuffer;
void main()
{
gl_FragColor = pixelBuffer.data[660000];
}
What I've noticed is that it seems to take longer and longer the higher the index, so at 660000 it doesn't actually crash, it's just taking a silly amount of time.
Storage buffers work quite similarly to uniform buffers. To get a sense of how those work, I suggest something like this. The main differences are that storage buffers can hold substantially larger amounts of data and that you can read from and write to them at arbitrary indices.
There are multiple angles of working this, but I'll start with the most basic one - the interface block inside your shader. I will only describe a subset of the possibilities when using interface blocks but it should be enough to get you started.
In contrast to "normal" variables, you cannot specify buffer variables in the global scope. You need to use an interface block (Section 4.3.9 - GLSL 4.40 Spec) as per Section 4.3.7 - GLSL 4.40 Spec:
The buffer qualifier can be used to declare interface blocks (section 4.3.9 “Interface Blocks”), which are then referred to as shader storage blocks. It is a compile-time error to declare buffer variables at global scope (outside a block).
Note that the above mentioned section differs slightly from the ARB extension.
So, to get access to stuff in your storage buffer you'll need to define a buffer interface block inside your fragment shader (or any other applicable stage):
layout (binding = 0) buffer BlockName
{
float values[]; // just as an example
};
Like with any other block without an instance name, you'll refer to the buffer storage as if values were at global scope, e.g.:
void main()
{
// ...
values[0] = 1.f;
// ...
}
On the application level the only thing you now need to know is that the buffer interface block BlockName has the binding 0 after the program has been successfully linked.
After creating a storage buffer object with your application, you first bind the buffer to the binding you specified for the corresponding interface block using
glBindBufferBase(GLenum target, GLuint index, GLuint buffer);
for binding the complete buffer to the index, or
glBindBufferRange(GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
for binding the subset of the buffer specified by an offset and a size to the index.
Note that index refers to the binding specified in your layout for the corresponding interface block.
And that's basically it. Be aware that there are certain limits for the storage buffer size, the number of binding points, maximum storage block sizes and so on. I refer you to the corresponding sections in the GL and GLSL specs.
Also, there is a minimal example in the ARB extension. Reading the issues section of an extension also often provides further insight into the exposed functionality and the rationale behind it. I advise you to read through it.
Leave a comment if you run into problems.
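On sizing the buffer in the question above: under std430, a vec4 array element occupies 16 bytes and the array is tightly packed, so the allocation must cover count * 16 bytes. A minimal sketch (the 1920x1080 window size is a made-up example, not from the question):

```cpp
#include <cstddef>

// std430 layout: a vec4 is 16 bytes and arrays of vec4 have a 16-byte
// stride, so the required SSBO size is simply element count * 16.
constexpr std::size_t kVec4Stride = 16;

constexpr std::size_t ssboSizeForVec4Array(std::size_t count) {
    return count * kVec4Stride;
}
```

For a 1920x1080 window that comes to about 33 MB of colour data, so indexing near the end of the array is legal as long as the glBufferData call allocated the full size.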

What's glUniformBlockBinding used for?

Assuming I have a shader program with a UniformBlock at index 0.
The following is apparently enough to bind a uniform buffer to the block:
glUseProgram(program);
glBindBuffer(GL_UNIFORM_BUFFER, buffer);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
I only have to use glUniformBlockBinding when I bind the buffer to a different index than used in the shader program.
//...
glBindBufferBase(GL_UNIFORM_BUFFER, 1, buffer)
glUniformBlockBinding(program, 0, 1); // bind uniform block 0 to binding point 1
Did I understand it right? Would I only have to use glUniformBlockBinding if I use the buffer in different programs where the appropriate blocks have different indices?
Per-program active uniform block indices differ from global binding locations.
The general idea here is that assuming you use the proper layout, you can bind a uniform buffer to one location in GL and use it in multiple GLSL programs. But the mapping between each program's individual buffer block indices and GL's global binding points needs to be established by this command.
To put this in perspective, consider sampler uniforms.
Samplers have a uniform location the same as any other uniform, but that location actually says nothing about the texture image unit the sampler uses. You still bind your textures to GL_TEXTURE7 for instance instead of the location of the sampler uniform.
The only conceptual difference between samplers and uniform buffers in this respect is that you do not assign the binding location using glUniform1i (...) to set the index. There is a special command that does this for uniform buffers.
Beginning with GLSL 4.20 (and applied retroactively by GL_ARB_shading_language_420pack), you can also establish a uniform block's binding location explicitly from within the shader.
GLSL 4.20 (or the appropriate extension) allows you to write the following:
layout (std140, binding = 0) uniform MyUniformBlock
{
vec4 foo;
vec4 bar;
};
Done this way, you never have to determine the uniform block index for MyUniformBlock; this block will be bound to 0 at link-time.
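As a concrete check of the std140 layout of MyUniformBlock above: a vec4 has 16-byte base alignment, so foo sits at offset 0, bar at offset 16, and the block occupies 32 bytes. A sketch with the offsets computed by hand (in real code you would query them with glGetActiveUniformsiv rather than hard-code them):

```cpp
#include <cstddef>

// std140 offsets for:
//   layout (std140, binding = 0) uniform MyUniformBlock
//   { vec4 foo; vec4 bar; };
// vec4 members have 16-byte base alignment, so they pack back to back.
constexpr std::size_t kFooOffset = 0;
constexpr std::size_t kBarOffset = kFooOffset + 16; // next 16-byte slot
constexpr std::size_t kBlockSize = kBarOffset + 16; // total bytes to allocate
```

These offsets tell you where to memcpy each member when filling the backing buffer object on the host.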