Re-compiling a shader in OpenGL

I'm writing my own OpenGL-3D-Application and have stumbled across a little problem:
I want the number of light sources to be dynamic. For this, my shader contains an array of my light struct:
uniform PointLight pointLights[NR_POINT_LIGHTS];
The variable NR_POINT_LIGHTS is set by the preprocessor, and the command for this is generated by my application's code (Java). So when creating a shader program, I pass the desired starting number of PointLights, complete the source text with the preprocessor command, compile, link and use. This works great.
Now I want to change this variable. I re-build the shader source string, re-compile and re-link a new shader program and continue using this one. It just appears that all uniforms set in the old program are getting lost in the process (of course, I only ever set them on the old program).
My ideas on how to fix this:
Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
Copy all uniform data from the old program to the newly generated one
What is the right way to do this? How do I do it? I'm not very experienced yet and don't know whether either of my ideas is even possible.

You're looking for a Uniform Buffer or (4.3+ only) a Shader Storage Buffer.
struct Light {
    vec4 position;
    vec4 color;
    vec4 direction;
    /* anything else you want */
};
Uniform Buffer:
const int MAX_ARRAY_SIZE = /*65536 / sizeof(Light)*/;
layout(std140, binding = 0) uniform light_data {
    Light lights[MAX_ARRAY_SIZE];
};
uniform int num_of_lights;
Host Code for Uniform Buffer:
GLuint light_ubo;
glGenBuffers(1, &light_ubo);
glBindBuffer(GL_UNIFORM_BUFFER, light_ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); // Usage hint can be adjusted for your needs
GLuint light_index = glGetUniformBlockIndex(program_id, "light_data");
glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_ubo);
glUniformBlockBinding(program_id, light_index, 0); // Redundant if the shader already declares layout(binding = 0)
glUniform1i(glGetUniformLocation(program_id, "num_of_lights"), static_light_data.size() / 12); // My lights have 12 floats per light, so we divide by 12; program_id must be the currently bound program here
Shader Storage Buffer (4.3+ Only):
layout(std430, binding = 0) buffer light_data {
    Light lights[];
};
/* ... */
void main() {
    /* ... */
    int num_of_lights = lights.length();
    /* ... */
}
Host Code for Shader Storage Buffer (4.3+ Only):
GLuint light_ssbo;
glGenBuffers(1, &light_ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, light_ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); // Usage hint can be adjusted for your needs
GLuint light_ssbo_block_index = glGetProgramResourceIndex(program_id, GL_SHADER_STORAGE_BLOCK, "light_data");
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, light_ssbo);
glShaderStorageBlockBinding(program_id, light_ssbo_block_index, 0); // Redundant if the shader already declares layout(binding = 0)
The main difference between the two is that Uniform Buffers:
Have compatibility with older, OpenGL 3.x hardware,
Are limited on most systems to 64 KB (65536 bytes) per buffer,
Need arrays to have their [maximum] size declared statically at shader compile time.
Whereas Shader Storage Buffers:
Require OpenGL 4.3-capable hardware (roughly 2012 or newer),
Have an API-mandated minimum allowable size of 16 MB (and most systems will allow up to 25% of the total VRAM),
Can dynamically query the size of any arrays stored in the buffer (though this can be buggy on older AMD systems),
Can be slower than Uniform Buffers on the shader side (roughly equivalent to a texture access).
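Either way, once the buffer exists you can change the light data at runtime without touching the shader. As a minimal sketch for the uniform-buffer variant above, where new_light_data is a hypothetical std::vector<GLfloat> packed the same way as static_light_data:
glBindBuffer(GL_UNIFORM_BUFFER, light_ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(GLfloat) * new_light_data.size(), new_light_data.data(), GL_DYNAMIC_DRAW); // re-allocate; glBufferSubData suffices if the size is unchanged
glUseProgram(program_id);
glUniform1i(glGetUniformLocation(program_id, "num_of_lights"), new_light_data.size() / 12); // still 12 floats per light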

Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
This isn't doable at runtime if I'm understanding you right (you can't change the shader code of an already compiled shader program), but if you modify the shader source text you can compile a new shader program. The thing is, how often does the number of lights change in your scene? Recompiling and relinking is a fairly expensive process.
You could specify a maximum number of lights if you don't mind having a limitation, and only use the lights in the shader that have actually been populated with information. That saves you the task of tweaking the source text and recompiling a whole new shader program, but it leaves you with a cap on the number of lights. (If you aren't planning on having an enormous number of lights in your scene, but are planning on having the number of lights change relatively often, then this is probably going to be best for you.)
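A rough GLSL sketch of that approach, reusing your PointLight struct (calcPointLight and the cap of 32 are placeholders, not from your code):
#define MAX_POINT_LIGHTS 32                          // fixed upper bound compiled into the shader
uniform PointLight pointLights[MAX_POINT_LIGHTS];
uniform int numPointLights;                          // how many entries are actually populated

void main()
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < numPointLights; ++i)         // only process the populated lights
        result += calcPointLight(pointLights[i]);
    // ...
}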
However, if you really want to go down the route that you are looking at here:
Copy all uniform data from the old program to the newly generated one
You can look at using a Uniform Block. If you're going to be using shader programs with similar or shared uniforms, Uniform Blocks are a good way of managing those 'universal' uniform variables across your shader programs, or in your case across the shaders you move between as you grow the number of lights. There's a good tutorial on uniform blocks here
Lastly, depending on the OpenGL version you're using, you might still be able to achieve dynamic array sizes. OpenGL 4.3 introduced buffer-backed blocks with unsized arrays (shader storage blocks), where the array length is derived from the size of the buffer range you bind with glBindBufferBase or glBindBufferRange. You'll see more talk about that functionality in this question and this wiki reference.
The last would probably be my preference, but it depends on whether you're aiming at hardware that only supports older OpenGL versions.

Related

OpenGL Compute Shader Binding Point Redundancy

I wrote my first OGL compute shader. After many hours and examples I finally got things working. One of the parts of the program I don't understand is block indices and binding points. (Following this example among others.)
My compute shader has:
layout (std430, binding=2) buffer particles
{
    particle ps[];
};
layout (std430, binding=3) buffer spheres
{
    sphere ss[];
};
The part I am unclear on is the binding=X. My setup code has the following:
GLuint block_index =
    glGetProgramResourceIndex(
        compute_shader->getProgramId(),
        GL_SHADER_STORAGE_BLOCK,
        "particles");
GLuint ssbo_binding_point_index = 2;
GL_CALL(glShaderStorageBlockBinding,
        compute_shader->getProgramId(),
        block_index,
        ssbo_binding_point_index);
(Note, GL_CALL just forwards the call and checks for errors afterwards.)
Finally, before each invocation of the compute shader I have:
GL_CALL(glBindBufferBase, GL_SHADER_STORAGE_BUFFER, 2, particles->bufferId());
GL_CALL(glUseProgram, compute_shader->getProgramId());
This works, but it seems overly complicated. Is there a simpler way? Why can't I just set the buffer base like any other uniform?
Thanks!
By using layout(binding = #), you're already setting the block binding index for the SSBO. You don't need to set it again from code; not unless you're changing the index to a new one. And you're using the same index here.
So just bind the buffer and move forward.
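For illustration, the reduced setup would then just be the following sketch, reusing the handles from the question (the dispatch call and group counts are placeholders):
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, particles->bufferId()); // binding point 2 matches layout(std430, binding=2)
glUseProgram(compute_shader->getProgramId());
glDispatchCompute(group_count_x, group_count_y, group_count_z);       // placeholder work-group counts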

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two texture coordinate attributes, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless of whether I have 1, 2, or 3 textures enabled. From this come a number of questions:
Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
Is the "best practice" to create a separate shader program that only expects a single texture for single texture rendering?
If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() call plus 5-6 glGetUniform...() calls, but maybe #4 addresses that issue.
When changing the active shader program with glUseProgram() do I need to call glGetUniform... functions every time to re-establish the location of each uniform in the program, or is the location of each expected to be consistent until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures, it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform or possibly subroutines (if you have dozens of variations of the same shader).
As far as the time taken to disable a vertex array state goes, that's probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect the render pipeline state; they're just small changes to memory. Likewise, constantly swapping the current GLSL program does things like invalidate the shader cache, so that's also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
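A minimal sketch of that pattern, with hypothetical uniform names ("diffuse_tex", "model_view") and placeholder data:
struct CachedUniforms {
    GLint diffuse_tex;
    GLint model_view;
};

// Query the locations once, right after linking the program.
CachedUniforms cacheUniforms(GLuint program) {
    CachedUniforms u;
    u.diffuse_tex = glGetUniformLocation(program, "diffuse_tex");
    u.model_view  = glGetUniformLocation(program, "model_view");
    return u;
}

GLuint program = ...;                        // your linked program
CachedUniforms u = cacheUniforms(program);

// Per frame: no glGetUniformLocation calls. With GL 4.1+ / ARB_separate_shader_objects,
// glProgramUniform* doesn't even require the program to be currently bound.
GLfloat mv[16] = { /* ... */ };
glProgramUniform1i(program, u.diffuse_tex, 0);
glProgramUniformMatrix4fv(program, u.model_view, 1, GL_FALSE, mv);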

Bind an SSBO to a fragment shader

I have an SSBO which stores vec4 colour values for each pixel on screen and is pre-populated with values by a compute shader before the main loop.
I'm now trying to get this data on screen, which I guess involves using the fragment shader (although if you know a better method for this I'm open to suggestions).
So I'm trying to get the buffer, or at least the data in it, to the fragment shader so that I can set the colour of each fragment to the corresponding value in the buffer, but I cannot find any way of doing this.
I have been told that I can bind the SSBO to the fragment shader, but I don't know how to do that. Another thought I had was somehow moving the data from the SSBO to a texture, but I can't work that out either.
UPDATE:
In response to thokra's excellent answer and the following comments, here is the code to set up my buffer:
//Create the buffer
GLuint pixelBufferID;
glGenBuffers(1, &pixelBufferID);
//Bind it
glBindBuffer(GL_SHADER_STORAGE_BUFFER, pixelBufferID);
//Set the data of the buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * window.getNumberOfPixels(), new vec4[window.getNumberOfPixels()], GL_DYNAMIC_DRAW);
//Bind the buffer to the correct interface block number
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, pixelBufferID);
Then I call the compute shader and this part works, I check the data has been populated correctly. Then in my fragment shader, just as a test:
layout(std430, binding=0) buffer PixelBuffer
{
    vec4 data[];
} pixelBuffer;

void main()
{
    gl_FragColor = pixelBuffer.data[660000];
}
What I've noticed is that it seems to take longer and longer the higher the index gets, so at 660000 it doesn't actually crash, it's just taking a silly amount of time.
Storage buffers work quite similarly to uniform buffers. To get a sense of how those work I suggest something like this. The main differences are that storage buffers can hold substantially larger amounts of data and that you can randomly read from and write to them.
There are multiple ways of approaching this, but I'll start with the most basic one - the interface block inside your shader. I will only describe a subset of the possibilities when using interface blocks, but it should be enough to get you started.
In contrast to "normal" variables, you cannot specify buffer variables in the global scope. You need to use an interface block (Section 4.3.9 - GLSL 4.40 Spec) as per Section 4.3.7 - GLSL 4.40 Spec:
The buffer qualifier can be used to declare interface blocks (section 4.3.9 “Interface Blocks”), which are then referred to as shader storage blocks. It is a compile-time error to declare buffer variables at global scope (outside a block).
Note that the above mentioned section differs slightly from the ARB extension.
So, to get access to stuff in your storage buffer you'll need to define a buffer interface block inside your fragment shader (or any other applicable stage):
layout (binding = 0) buffer BlockName
{
    float values[]; // just as an example
};
Like with any other block without an instance name, you'll refer to the buffer storage as if values were at global scope, e.g.:
void main()
{
    // ...
    values[0] = 1.f;
    // ...
}
On the application level the only thing you now need to know is that the buffer interface block BlockName has the binding 0 after the program has been successfully linked.
After creating a storage buffer object with your application, you first bind the buffer to the binding you specified for the corresponding interface block using
glBindBufferBase(GLenum target, GLuint index, GLuint buffer);
for binding the complete buffer to the index or
glBindBufferRange(GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
for binding a subset of the buffer, specified by an offset and a size, to the index.
Note that index refers to the binding specified in your layout for the corresponding interface block.
And that's basically it. Be aware that there are certain limits for the storage buffer size, the number of binding points, maximum storage block sizes and so on. I refer you to the corresponding sections in the GL and GLSL specs.
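For example, a quick sketch of querying two of those limits at runtime:
GLint   max_ssbo_bindings = 0;
GLint64 max_ssbo_block_size = 0;
glGetIntegerv(GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS, &max_ssbo_bindings); // number of available SSBO binding points
glGetInteger64v(GL_MAX_SHADER_STORAGE_BLOCK_SIZE, &max_ssbo_block_size);  // maximum size of one storage block, in bytes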
Also, there is a minimal example in the ARB extension. Reading the issues section of an extension also often provides further insight into the exposed functionality and the rationale behind it. I advise you to read through it.
Leave a comment if you run into problems.

GLSL shader for each situation

In my game I want to create separate GLSL shaders for each situation. For example, if I had 3 models (character, shiny sword and blurry ghost), I would like to assign renderShader, animationShader and lightingShader to the character; renderShader, lightingShader and specularShader to the shiny sword; and renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the positions of vertices by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by given bone transforms.
lightingShader should do the lighting and specularLighting should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms with different shaders? The animationShader should calculate the animated positions of the vertices, and then renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments with different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve that.
I need to know how I should use these shaders in OpenGL, and how I should write GLSL so that the shaders complement each other and don't care whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader who needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shader components that feeds into it from the vertex shader will need to provide a normal in some way, or the fragment shader component will need to compute the normal via some mechanism.
Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and redundancy you'd use some caching structure to create a program for each requested combination only once and reuse it whenever it is requested.
You can do something similar with the shader stages themselves. However, a shader stage cannot be linked from several compilation units (yet; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several source strings. So you'd write the functions for each desired effect into a separate source and combine them at compile time, and again use a caching structure to map source-module combinations to shader objects.
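For illustration, a minimal sketch of such a caching structure, where buildProgram() is a hypothetical helper that compiles and links the given set of effect modules:
#include <map>
#include <set>
#include <string>

// Program cache keyed by the set of requested effect modules.
std::map<std::set<std::string>, GLuint> program_cache;

GLuint getProgram(const std::set<std::string>& modules)
{
    auto it = program_cache.find(modules);
    if (it != program_cache.end())
        return it->second;                  // already built: reuse it
    GLuint program = buildProgram(modules); // hypothetical: compile + link the modules
    program_cache[modules] = program;
    return program;
}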
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings; it simply concatenates them. You write a number of shader modules, one doing the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];

void illum_calculation()
{
    for(int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
you put this into illum_calculation.vs.glslmod (the filename and extensions are arbitrary). Next you have a small module that does bone animation
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;

void skeletal_animation()
{
    /* ... */
}
put this into illum_skeletal_anim.vs.glslmod. Then you have some common header
#version 330
uniform ...;
in ...;
and some common tail which contains the main function, which invokes all the different stages
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage; you can do the same with every shader stage. The fragment shader is special, since it can write to several framebuffer targets at the same time (in recent enough OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass a separate set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
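For illustration, a minimal sketch of that multi-string compile, assuming header_src, skeletal_src, illum_src and tail_src hold the contents of the module files above:
const GLchar* sources[] = { header_src, skeletal_src, illum_src, tail_src };
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 4, sources, NULL); // the strings are concatenated in order before compilation
glCompileShader(vs);
// check GL_COMPILE_STATUS via glGetShaderiv, then attach to a program and link as usual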
You have to provide different shader programs for each Model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design your game's code,
because you need to provide all the uniforms to the shader (for example light information and texture samplers), and you must enable all necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between the shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means the model of your code directly influences the assigned shader and depends on it.
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function,
you have to factor that function out and provide it to your shaders by simply inserting the string literal which represents the function into your shader code, which is itself nothing more than a string literal. That way you can achieve a kind of modularity or include behavior through string manipulation of your shader code.
In any case the shader compiler tells you what's wrong.
Best regards

OpenGL 4.1 GL_ARB_separate_program_objects usefulness

I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post author puts it:
It allows to independently use shader stages without changing others
shader stages. I see two mains reasons for it: Direct3D, Cg and even
the old OpenGL ARB program does it but more importantly it brings some
software design flexibilities allowing to see the graphics pipeline at
a lower granularity. For example, my best enemy the VAO, is a
container object that links buffer data, vertex layout data and GLSL
program input data. Without a dedicated software design, this means
that when I change the material of an object (a new fragment shader),
I need different VAO... It's fortunately possible to keep the same VAO
and only change the program by defining a convention on how to
communicate between the C++ program and the GLSL program. It works
well even if some drawbacks remains.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
makes me wonder. In my OpenGL programs I use VAOs and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders need 30 program pipeline objects, but only 13 program objects. That's over 50% fewer program objects than the non-separate case.
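For illustration, a rough sketch of the separate-shader-object path (vs_src, fs_src, mvp_location and mvp are placeholders, not from the question):
// Build single-stage programs (GL 4.1+ / ARB_separate_shader_objects).
GLuint vs_prog = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vs_src);
GLuint fs_prog = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fs_src);

// Mix and match them in a program pipeline.
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vs_prog);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fs_prog);
glBindProgramPipeline(pipeline);

// Vertex-shader uniforms now live only in vs_prog, regardless of which fragment shader is attached.
glProgramUniformMatrix4fv(vs_prog, mvp_location, 1, GL_FALSE, mvp);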
Why the quoted text is wrong
Now, this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data. And it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume that when you get its location later. So cut out the middle man and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, using conventional linking you would have N * M program objects (to cover all possible combinations). Using separate shader objects, they are separated from each other, and thus you need to keep only N + M shader objects. That's a significant improvement in complex scenarios.