GLSL shader for each situation - opengl

In my game I want to create separate GLSL shaders for each situation. For example, if I had three models, a character, a shiny sword and a blurry ghost, I would like to assign renderShader, animationShader and lightingShader to the character; renderShader, lightingShader and specularShader to the shiny sword; and finally renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the vertex positions by the projection, world and other matrices, and its fragment shader should simply apply the model's texture.
animationShader should transform vertices by given bone transforms.
lightingShader should do the lighting and specularShader should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated vertex positions, and then renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve it.
I need to know how I should use these shaders in OpenGL, and how I should write the GLSL, so that all the shaders complement each other and no shader has to care whether another shader is used or not.

What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say which shader's outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader that needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
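For illustration, the generated source that such a tool might finally hand to glShaderSource could look roughly like this (a hypothetical sketch, assuming modelSpacePosition becomes a real vertex shader input and clipSpacePosition maps onto gl_Position):
// Hypothetical output of the shader "compiler" for the module above.
const char* generatedVertexSource = R"(
#version 330
in vec4 modelSpacePosition;
uniform mat4 modelToClipMatrix;

void main()
{
    gl_Position = modelToClipMatrix * modelSpacePosition;
}
)";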
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shader components in the vertex shader that feeds it will need to provide a normal in some way, or the fragment shader component will need to compute the normal via some mechanism.

Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and avoid redundancy, use a caching structure so that a program for each requested combination is created only once and reused whenever it is requested again.
You can do something similar with the shader stages themselves. However, shader stages cannot (yet) be linked from several compilation units; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone toward it. But you can compile a shader from several sources. So you write the functions for each desired effect into separate sources and combine them at compilation time, and again use a caching structure to map each combination of source modules to a shader object.
Update due to comment
Let's say you want some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules. One does the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];
void illum_calculation()
{
for(int i = 0; i < N_LIGHT_SOURCES; i++) {
light_directions[i] = ...;
light_halfdirections[i] = ...;
}
}
You put this into illum_calculation.vs.glslmod (the filename and extension are arbitrary). Next you have a small module that does bone animation:
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;
void skeletal_animation()
{
/* ...*/
}
Put this into illum_skeletal_anim.vs.glslmod. Then you have a common header:
#version 330
uniform ...;
in ...;
and a common tail that contains the main function, which invokes all the different stages:
void main() {
skeletal_animation();
illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage. You can do the same with all the shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass your own set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
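A minimal sketch of the caching side in C++ (the function, cache and variable names here are hypothetical, and an OpenGL loader such as GLEW is assumed): glShaderSource receives all module strings at once, and each requested combination is compiled only one time.
#include <GL/glew.h>
#include <map>
#include <string>
#include <vector>

// Maps an ordered list of module sources to the shader object compiled from them.
static std::map<std::vector<std::string>, GLuint> g_vertexShaderCache;

GLuint getVertexShader(const std::vector<std::string>& modules)
{
    auto it = g_vertexShaderCache.find(modules);
    if (it != g_vertexShaderCache.end())
        return it->second;                       // already compiled, reuse it

    std::vector<const GLchar*> sources;
    for (const std::string& m : modules)
        sources.push_back(m.c_str());

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    // glShaderSource concatenates the strings: header + modules + common tail.
    glShaderSource(shader, (GLsizei)sources.size(), sources.data(), nullptr);
    glCompileShader(shader);
    // Check GL_COMPILE_STATUS and the info log here in real code.

    g_vertexShaderCache[modules] = shader;
    return shader;
}
A call site would then look like getVertexShader({headerSrc, skeletalAnimSrc, illumCalcSrc, tailSrc}), and the same caching idea works one level up for linked programs, keyed on their combination of shader objects.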

You have to provide a different shader program for each model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character, shiny sword or whatever, you have to set up the appropriate shader attributes and uniforms.
So it is really a question of how you design your game's code, because you need to provide all the uniforms the shader needs (for example light information and texture samplers), and you must enable all the necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means the model code on your side directly influences which shader is assigned, and depends on it.
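As a rough sketch of that per-model flow (the program handle, VAO and uniform name below are hypothetical), switching shaders is just a glUseProgram plus that program's own uniform setup before the draw call:
#include <GL/glew.h>

// Select the model's program, feed its uniforms, then draw with the model's
// vertex attributes (already captured in its VAO).
void drawModel(GLuint program, GLuint vao, GLsizei indexCount,
               const GLfloat* modelToClipMatrix)
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "modelToClipMatrix");
    glUniformMatrix4fv(loc, 1, GL_FALSE, modelToClipMatrix);
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}
You would call this once per model (character, sword, ghost), each time with that model's program and VAO.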
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function,
you have to factor that function out and provide it to your shaders by simply inserting the string that represents the function into the shader's source code, which is nothing more than a string literal. In this way you can achieve a kind of modularity, or include-like behavior, through string manipulation of your shader code.
In any case, the shader compiler will tell you what's wrong.

Related

OpenGL, C++, In Out variables

I need to pass some variables directly from the vertex shader to the fragment shader, but my pipeline also contains a TCS, a TES and a GS that simply do pass-through work.
I already know that the fragment shader expects to receive values for its "in" variables from the last linked shader of the program, in my case the geometry shader, but I don't want to do the MVP and normal calculations there.
How can I output a variable directly to the fragment shader from the vertex shader? (skipping the rest of the shaders in the middle)
Is that even possible?
How can I output a variable directly to the fragment shader from the vertex shader? (skipping the rest of the shaders in the middle)
You don't.
Each stage can only access values provided by the previous active stage in the pipeline. If you want to communicate from the VS to the FS, then every stage between them must shepherd those values through themselves. After all:
my pipeline also contains a TCS a TES
If you're doing tessellation, then how exactly could a VS directly communicate with an FS? The fragment shader's inputs are per-fragment values generated by rasterizing the primitive being rendered. But since tessellation is active, the primitives the VS operated on don't exist anymore; only the post-tessellation primitives exist.
So if the VS's primitives are all gone, how do the tessellated primitives get values? For a vertex that didn't exist until the tessellator activated, from where would it get a vertex value to be rasterized and interpolated across the generated primitive?
The job of figuring that out is given to the TES. It will use the values output from the VS (sent through the TCS if present) and interpolate/generate them in accord with the tessellation interpolation scheme it is coded with. That is what the TES is for.
The GS is much the same. A geometry shader can take one primitive and turn it into twenty, or discard entire primitives. How could the VS possibly communicate vertex information to a fragment shader through a GS that may simply drop that primitive on the floor, create thirty separate ones, or convert triangles into lines?
So there is not even a conceptual way for the VS to provide values to the FS through the other shader stages.
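As a minimal sketch of that shepherding (the variable names are my own), each intermediate stage declares the value as an arrayed input and copies it to its own output. A pass-through geometry shader, for instance, might look like this; the TCS and TES forward values in the same spirit:
const char* passThroughGeometrySrc = R"(
#version 410 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 teNormal[];   // written by the previous stage (the TES in this pipeline)
out vec3 gsNormal;    // what the fragment shader's matching input receives

void main()
{
    for (int i = 0; i < 3; ++i) {
        gsNormal = teNormal[i];             // copy the value through
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
)";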

Why do we need vertex shader in OpenGL?

As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The RenderMan Interface Specification was one of the first programmable rendering processes, and it called all of its programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders convert vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed: fragments are generated at the locations the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
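A minimal sketch of that division of labour (all names hypothetical): the vertex shader decides where the vertex ends up, while the fragment shader only decides what color the fragment it was handed gets.
const char* minimalVertexSrc = R"(
#version 330 core
in vec4 position;
uniform mat4 modelToClipMatrix;
void main()
{
    gl_Position = modelToClipMatrix * position;   // the VS controls placement
}
)";

const char* minimalFragmentSrc = R"(
#version 330 core
out vec4 fragColor;
void main()
{
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);   // the FS only picks the color
}
)";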

Is writing to `gl_Position` in Vertex Shader necessary when there is a Geometry Shader

Suppose I have a geometry shader that computes its output gl_Position from some inputs other than gl_in[].gl_Position. If the previous pipelines stages (vertex and tessellation) do not write to their out gl_Position, does the action of the entire pipeline still remain well-defined?
Or, to put it another way, does the value of gl_Position have any effect on the functioning of the GL before the completion of the geometry shader? If not, it would mean I can simply use it as an extra slot for passing data without any special spatial interpretation between stages, right?
(The question assumes OpenGL 4.5 forward profile.)
gl_Position only needs to be written by the final vertex processing stage (VS, tessellation, or GS) in the rendering pipeline. So if you have an active GS, the VS that feeds it need not write to gl_Position at all. Or it can put arbitrary vec4 data in it.
Do note that gl_Position does still need to be written by whatever the final vertex processing stage is. Assuming you want rasterization, of course. And no, that's not being flippant; you could be doing transform feedback.
If not, it would mean I can simply use it as an extra slot for passing data without any special spatial interpretation between stages, right?
If there's a GS, then none of the outputs from the previous shader stages will have "any special spatial interpretation". gl_Position isn't special in this regard.
Interpolation is a function of the Rasterizer, which happens after vertex processing. Indeed, in GLSL 4.30+, the interpolation qualifiers on the fragment shader inputs are the only ones that matter. They're not even used for interface matching anymore.
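A sketch of what this permits (the names and version directive are my own, and the VS is assumed to feed the GS directly): the vertex shader smuggles an arbitrary vec4 through gl_Position, and only the geometry shader, as the final vertex processing stage, produces the real clip-space position.
const char* vsSrc = R"(
#version 450 core
in vec4 modelSpacePosition;
in vec4 someExtraData;
out vec4 vModelSpacePosition;

void main()
{
    vModelSpacePosition = modelSpacePosition;  // the real position, as an ordinary varying
    gl_Position = someExtraData;               // gl_Position used as a spare vec4 slot
}
)";

const char* gsSrc = R"(
#version 450 core
layout(points) in;
layout(points, max_vertices = 1) out;
uniform mat4 modelToClipMatrix;
in vec4 vModelSpacePosition[];

void main()
{
    vec4 extra = gl_in[0].gl_Position;   // read back the smuggled data, use as needed
    gl_Position = modelToClipMatrix * vModelSpacePosition[0];   // what rasterization sees
    EmitVertex();
    EndPrimitive();
}
)";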

which shader stage must write gl_ClipDistance[x]

It is available as an output variable in all shaders except the fragment shader. So which shader stage must write it? Is its value taken from the last shader stage that wrote it?
Also, please explain what the purpose of having gl_ClipDistance in the fragment shader is.
As long as you only work with vertex and fragment shaders, you write it in the vertex shader. According to the GLSL spec, geometry and tessellation shaders can write it as well.
The fragment shader can read the value. Based on the way I read the documentation, it would give you the interpolated value of the clip distance for your fragment.
Considering it is really only useful for clipping, you need to write it in the last stage of vertex processing in your GLSL program. Currently the fragment shader is the only stage that does not fall under the category of vertex processing, so whatever comes immediately before the fragment shader needs to output this value.
If you are using a geometry shader, that would be where you write it. Now, generally in a situation like this you might also write it in the vertex shader that runs before the geometry shader, passing it through. You don't have to do anything like that, but that is typical. Since it is part of gl_PerVertex, it is designed to be passed through multiple vertex processing stages that way.
Name
gl_ClipDistance — provides a forward-compatible mechanism for vertex clipping
Description
[...]
The value of gl_ClipDistance (or the gl_ClipDistance member of the gl_out[] array, in the case of the tessellation control shader) is undefined after the vertex, tessellation control, and tessellation evaluation shading stages if the corresponding shader executable does not write to gl_ClipDistance.
If you do not write to it in the final vertex processing stage, then it becomes undefined immediately before clipping occurs.
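A minimal sketch under those rules, assuming a plain VS-plus-FS pipeline and a single enabled clip plane (the uniform name is mine): the vertex shader is the final vertex processing stage here, so it writes the distance, and the application enables the corresponding clip distance.
const char* clipVertexSrc = R"(
#version 330 core
in vec4 modelSpacePosition;
uniform mat4 modelToClipMatrix;
uniform vec4 clipPlane;   // plane equation, in the same space as modelSpacePosition

void main()
{
    gl_Position = modelToClipMatrix * modelSpacePosition;
    // Negative distances get clipped; the fragment shader may read the
    // interpolated gl_ClipDistance[0] if it declares the input.
    gl_ClipDistance[0] = dot(clipPlane, modelSpacePosition);
}
)";
// Application side: glEnable(GL_CLIP_DISTANCE0);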

OpenGL 4.1 GL_ARB_separate_program_objects usefulness

I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post's author puts it:
It allows to independently use shader stages without changing others shader stages. I see two mains reasons for it: Direct3D, Cg and even the old OpenGL ARB program does it but more importantly it brings some software design flexibilities allowing to see the graphics pipeline at a lower granularity. For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data. Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO... It's fortunately possible to keep the same VAO and only change the program by defining a convention on how to communicate between the C++ program and the GLSL program. It works well even if some drawbacks remains.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
makes me wonder. In my OpenGL programs I use VAO objects and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders need 30 program pipeline objects, but only 13 program objects. That's over 50% fewer program objects than the non-separate case.
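In code, the separate path looks roughly like this (a sketch assuming OpenGL 4.1+ or ARB_separate_shader_objects; the source variables are hypothetical): each stage is its own program, a program pipeline mixes them, and uniforms are set once on the program that owns them.
#include <GL/glew.h>

extern const char* vertSrc;    // the one vertex shader source, loaded elsewhere
extern const char* fragSrcA;   // first fragment shader source
extern const char* fragSrcB;   // second fragment shader source

void setupSeparatePrograms()
{
    // One single-stage program per shader (13 total in the 3 VS + 10 FS example above).
    GLuint vertProg  = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vertSrc);
    GLuint fragProgA = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fragSrcA);
    GLuint fragProgB = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fragSrcB);

    GLuint pipeline;
    glGenProgramPipelines(1, &pipeline);
    glBindProgramPipeline(pipeline);

    // Vertex-shader uniforms are set once, on the single program that owns them.
    const GLfloat identity[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    glProgramUniformMatrix4fv(vertProg,
        glGetUniformLocation(vertProg, "modelToCameraMatrix"),
        1, GL_FALSE, identity);

    // Mix and match stages without relinking anything.
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vertProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgA);
    // ... draw ...
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgB);
    // ... draw ...
}
In real code you would also check each program's link status, and the outputs and inputs that cross the boundary between the separately linked stages still have to match.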
Why the quoted text is wrong
Now, this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data. And it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume that when you get its location later. So cut out the middle man and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, using conventional linking you would have N * M program objects (to cover all possible combinations). Using separate shader objects, they are separated from each other, and thus you need to keep only N + M shader objects. That's a significant improvement in complex scenarios.