What is the easiest way to attach more shaders (sources) to a GLSL program? Normally, when a vertex shader and a fragment shader are attached to a program, one does something like this:
vertex   = glCreateShader(GL_VERTEX_SHADER);
fragment = glCreateShader(GL_FRAGMENT_SHADER);
programa = glCreateProgram();
char *vsFuente = LEE_SHADER(rutaVert);
char *fsFuente = LEE_SHADER(rutaFrag);
if (vsFuente == NULL) return GL_FALSE;
if (fsFuente == NULL) return GL_FALSE;
const char *vs = vsFuente;
const char *fs = fsFuente;
glShaderSource(vertex, 1, &vs, NULL);
glShaderSource(fragment, 1, &fs, NULL);
delete [] vsFuente;
delete [] fsFuente;
glCompileShader(vertex);
glCompileShader(fragment);
glAttachShader(programa, vertex);
glAttachShader(programa, fragment);
glLinkProgram(programa);
Well, if I want to attach another pair of shaders (vertex and fragment) to the same program, how do I do it? I am using OpenGL 2.0.
Exactly the same way you added the shaders you already have in your code. You create more shaders, call glCompileShader() and glAttachShader() for them, and then call glLinkProgram().
You need to be aware of what this is doing, though. While it is perfectly legal to add multiple shaders of the same type to a program, only one of the shaders of each type can have a main() function.
The typical setup I have seen when multiple shaders of the same type are attached to a program (which is not very common) is that some of the shaders contain collections of "utility" functions, and then there is one shader of each type that contains the main() function and implements the overall shader logic. This setup can be useful for complex shaders, since the shaders with the utility functions can be compiled once, and then attached to multiple different programs without having to compile them each time.
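As a minimal sketch of that setup (the shader handles and utilSrc here are hypothetical placeholders), a shared shader object holding only helper functions is compiled once and attached, alongside the stage's main() shader, to each program that needs it:
GLuint utilShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(utilShader, 1, &utilSrc, NULL); /* helper functions only, no main() */
glCompileShader(utilShader);

GLuint programA = glCreateProgram();
glAttachShader(programA, vertexShaderA); /* contains main() for the vertex stage */
glAttachShader(programA, mainFragA);     /* contains main(), calls the helpers */
glAttachShader(programA, utilShader);    /* shared helpers, compiled only once */
glLinkProgram(programA);
/* utilShader can now be attached to further programs without recompiling it */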
I'm not sure if this is what you're after. If you really just want to use different sets of shaders, the much more common setup is that you create a program for each pair of vertex/fragment shaders, and then switch between them during your rendering by calling glUseProgram() with the program you need for each part of the rendering.
Related
I'm trying to create a single compute shader program, computeProgram, and attach two compute shaders to it. Here is my code:
unsigned int computeProgram = glCreateProgram();
glAttachShader(computeProgram, MyFirstComputeShaderSourceCode);
glAttachShader(computeProgram, MySecondComputeShaderSourceCode);
glLinkProgram(computeProgram);
GLint success = GL_FALSE; /* declarations added for completeness */
char infoLog[512];
glGetProgramiv(computeProgram, GL_LINK_STATUS, &success);
if (!success) {
    glGetProgramInfoLog(computeProgram, 512, NULL, infoLog);
    std::cout << "ERROR::SHADER::COMPUTE_PROGRAM::LINKING_FAILED\n" << infoLog << std::endl;
    exit(1);
}
I get this type of linking error information:
ERROR::SHADER::COMPUTE_PROGRAM::LINKING_FAILED
ERROR: Duplicate function definitions for "main"; prototype: "main()" found.
I do have a main function in both shader sources, and I understand why that won't work, since only one main function is expected per program. But here comes my question: if I link a vertex shader source and a fragment shader source to a single program, say renderProgram, there are also two main functions, one in the vertex shader and one in the fragment shader. However, if I link these two, it somehow works fine.
Why does this difference happen? And if I want to use these two compute shaders, am I supposed to create two compute programs in order to avoid the duplicate main function?
Any help is appreciated!!
Why does this difference happen?
When you link a vertex shader and a fragment shader to the same shader program, then those two (as their names imply) are in different shader stages. Every shader stage expects exactly one definition of the main() function.
When you attach two shaders that are in the same shader stage, such as your two compute shader objects, then those get linked into the same shader stage (compute). And that does not work.
And if I want to use these two compute shaders, am I supposed to create two compute programs in order to avoid duplication of the main function?
Yes. When you have two compute shaders that each define their own functionality in terms of a main() function, creating two shader programs, each with one of the shader objects linked into it, will work. This is especially true when your two shaders have completely different interfaces with the host, such as SSBOs or samplers/images.
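As a rough sketch (firstComputeShader, secondComputeShader and numGroupsX are placeholders for your own compiled shader objects and dispatch size):
GLuint progA = glCreateProgram();
glAttachShader(progA, firstComputeShader);
glLinkProgram(progA);

GLuint progB = glCreateProgram();
glAttachShader(progB, secondComputeShader);
glLinkProgram(progB);

/* run whichever one is needed */
glUseProgram(progA);
glDispatchCompute(numGroupsX, 1, 1);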
http://docs.gl/gl4/glCreateShaderProgram
Pseudo code:
const GLuint shader = glCreateShader(type);
if (shader) {
    glShaderSource(shader, count, strings, NULL);
    glCompileShader(shader);
    const GLuint program = glCreateProgram();
    if (program) {
        GLint compiled = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        glProgramParameteri(program, GL_PROGRAM_SEPARABLE, GL_TRUE);
        if (compiled) {
            glAttachShader(program, shader);
            glLinkProgram(program);
            glDetachShader(program, shader);
        }
        /* append-shader-info-log-to-program-info-log */
    }
    glDeleteShader(shader);
    return program;
}
else {
    return 0;
}
I didn't know that it is possible to compile several shaders at once. The problem I have is that the documentation doesn't tell me how to call this.
What type should I specify if I want to compile a vertex + fragment shader? Initially I tried glCreateShader(GL_VERTEX_SHADER | GL_FRAGMENT_SHADER); but then it complained about two main functions. OpenGL probably interpreted the two shaders as one shader.
glCreateShaderProgram() only compiles and links a single shader. The last argument is an array of strings just like the corresponding argument for glShaderSource(), which is also used for specifying the source of one shader. Being able to pass multiple strings is just a convenience for both of those calls. For example, if you have your shader code as an array of strings (say one per line), you can directly pass them in, without a need to concatenate them first.
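For example (a sketch; shader stands for any existing shader object), passing the source split across several strings is equivalent to passing one concatenated string:
const char *parts[] = {
    "#version 120\n",
    "void main() {\n",
    "    gl_FragColor = vec4(1.0);\n",
    "}\n"
};
glShaderSource(shader, 4, parts, NULL); /* concatenated in order, as if one string */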
glCreateShaderProgram() is intended to be used in conjunction with program pipeline objects. This mechanism allows you to combine shaders from several programs, without a need to link them. A typical call sequence would look like this:
/* one single-stage, separable program per shader
   (note the trailing 'v' in the actual C entry point) */
GLuint vertProgId = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vertSrc);
GLuint fragProgId = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fragSrc);

/* combine the stages in a program pipeline object */
GLuint pipelineId = 0;
glGenProgramPipelines(1, &pipelineId);
glUseProgramStages(pipelineId, GL_VERTEX_SHADER_BIT, vertProgId);
glUseProgramStages(pipelineId, GL_FRAGMENT_SHADER_BIT, fragProgId);
glBindProgramPipeline(pipelineId);
From the linked documentation:
glCreateShaderProgram creates a program object containing compiled and linked shaders for a *single stage* specified by type.
[emphasis mine]
What it means is that you cannot use it with a list of shaders targeting different parts of the rendering pipeline.
However, you could perhaps modify the sample implementation to cover that need. It would require changing the function signature, replacing the first parameter with an array of GLenum of count elements, and then turning the implementation into a loop over each pair shader[i], strings[i], adding them to the generated program, where shader is an array of shader ids associated with each source string separately.
Once each shader has been compiled, the code would then link the whole into a program, release all the shader ids, and return the program id (the actual implementation is left as an exercise, but a sketch follows below).
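A minimal sketch of that modification (this is not a real GL entry point, just the exercise spelled out; compile/link error checking omitted):
GLuint CreateProgramFromStages(GLsizei count, const GLenum *types,
                               const GLchar *const *strings)
{
    const GLuint program = glCreateProgram();
    if (!program)
        return 0;
    for (GLsizei i = 0; i < count; ++i) {
        const GLuint shader = glCreateShader(types[i]);
        glShaderSource(shader, 1, &strings[i], NULL);
        glCompileShader(shader);
        glAttachShader(program, shader);
        glDeleteShader(shader); /* deletion is deferred while the shader stays attached */
    }
    glLinkProgram(program);
    return program;
}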
I don't think it's possible to create multiple shaders at once. glCreateShaderProgram just does everything needed to create a single-stage shader program for you (create, compile, link).
If you look at the documentation at opengl.org, you will see that you can only create one shader at a time. You can only specify one of the following GLenums: GL_COMPUTE_SHADER, GL_VERTEX_SHADER, GL_TESS_CONTROL_SHADER, GL_TESS_EVALUATION_SHADER, GL_GEOMETRY_SHADER, or GL_FRAGMENT_SHADER.
I think your suspicion is right: OpenGL probably interprets the two shaders as one, and that is not possible.
You can do this:
glCreateShader(GL_VERTEX_SHADER);
glCreateShader(GL_FRAGMENT_SHADER);
Look at the Description field of the source I have provided. It will explain what further to do with the shader objects.
I am not sure why you can pass an array of strings; it could be useful if you have multiple small snippets of shader code in different source files.
Source: https://www.opengl.org/sdk/docs/man/html/glCreateShader.xhtml
I have been spending a lot of time tweaking my shaders, and I want to quickly reload a shader without recompiling my program. What is the official way to hot reload shaders in OpenGL 4.1?
Here is my current (non-functional) method:
Create a program pipeline, create one separable program per shader (right now it's just one vertex and one fragment) and use it in the pipeline. Load uniforms into each program that needs them (is there a way to mimic the old style of one uniform shared across shaders?)
glGenProgramPipelines(1, &programPipeline);
glBindProgramPipeline(programPipeline); // is this necessary?

auto handle = glCreateShader(type);
const char* src = source.c_str();
glShaderSource(handle, 1, &src, NULL);
glCompileShader(handle);

program[idx] = glCreateProgram();
glProgramParameteri(program[idx], GL_PROGRAM_SEPARABLE, GL_TRUE); // was __glewProgramParameteri; Xcode auto-corrected this for me
glAttachShader(program[idx], handle);
glLinkProgram(program[idx]);
glDetachShader(program[idx], handle);
glUseProgramStages(programPipeline, shaderStage, program[idx]);
The part I am stuck at is what to do with attributes. It fails with GL_INVALID_OPERATION when I try to enable a vertex attrib array, whereas with normal non-separable programs it works fine. I have generated and bound a VAO and obtained the attrib location before trying to enable the vertex attrib array. The online resources I've found only mention uniforms in conjunction with separable programs, nothing regarding attributes. Can they even be used with separable programs? What would be the alternative?
In my game I want to create separate GLSL shaders for each situation. For example, if I had three models, a character, a shiny sword and a blurry ghost, I would like to assign renderShader, animationShader and lightingShader to the character; renderShader, lightingShader and specularShader to the shiny sword; and renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the vertex positions by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.
The animationShader should transform vertices by the given bone transforms.
The lightingShader should do the lighting and the specularShader should do the specular lighting.
The blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated positions of the vertices, and then the renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve it.
I need to know how I should use these shaders in OpenGL, and how I should write the GLSL, so that all the shaders complement each other and no shader cares whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader who needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;

void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the vertex-stage shaders that feeds into it will need to provide a normal in some way, or the fragment shader component will need to compute the normal via some mechanism.
Effectively, for every combination of the shader stages you'll have to create an individual shader program. To save work and avoid redundancy, you'd use some caching structure to create a program for each requested combination only once and reuse it whenever that combination is requested again (see the sketch below).
You can do something similar with the shader stages themselves. However, shader stages cannot be linked from several compilation units yet (this is an ongoing effort in OpenGL development; the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several sources. So you'd write functions for each desired effect into a separate source and then combine them at compilation time. And again, use a caching structure to map source module combinations to shader objects.
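A minimal sketch of such a cache in C++ (buildProgramFromModules stands in for your own compile-and-link routine, not shown here):
#include <map>
#include <set>
#include <string>

GLuint buildProgramFromModules(const std::set<std::string> &moduleNames); /* your compile+link routine */

std::map<std::set<std::string>, GLuint> programCache;

GLuint getProgram(const std::set<std::string> &moduleNames)
{
    auto it = programCache.find(moduleNames);
    if (it != programCache.end())
        return it->second; /* already built for this combination, reuse it */
    GLuint program = buildProgramFromModules(moduleNames);
    programCache[moduleNames] = program;
    return program;
}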
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules. One does the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];

void illum_calculation()
{
    for(int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
You put this into illum_calculation.vs.glslmod (the filename and extension are arbitrary). Next you have a small module that does bone animation:
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];

in vec3 vertex_position;

void skeletal_animation()
{
    /* ... */
}
Put this into illum_skeletal_anim.vs.glslmod. Then you have some common header:
#version 330
uniform ...;
in ...;
and some common tail which contains the main function, which invokes all the different stages
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage, and you can do the same with all the other shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass your own set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
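Putting that together, the loading side might look like this (a sketch; header, skeletalSrc, illumSrc and tail are assumed to hold the contents of the module files, loaded by your own file reader):
const GLchar *modules[] = { header, skeletalSrc, illumSrc, tail };
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 4, modules, NULL); /* concatenated in order: header, modules, tail with main() */
glCompileShader(vs);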
You have to provide a different shader program for each model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever, you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design the code of your game, because you need to provide all the uniforms to the shader, for example light information and texture samplers, and you must enable all the necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means the model in your code directly influences the assigned shader and depends on it.
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function, you have to factor that function out and provide it to your shader by simply inserting the string that represents the function into your shader's code, which is nothing more than a string literal. This way you can achieve a kind of modularity, or include-like behavior, through string manipulation of your shader code.
In any case, the shader compiler will tell you what's wrong.
I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post author puts it:
It allows to independently use shader stages without changing other shader stages. I see two main reasons for it: Direct3D, Cg and even the old OpenGL ARB program do it, but more importantly it brings some software design flexibility, allowing to see the graphics pipeline at a lower granularity. For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data. Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need a different VAO... It's fortunately possible to keep the same VAO and only change the program by defining a convention on how to communicate between the C++ program and the GLSL program. It works well even if some drawbacks remain.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data. Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need a different VAO...
makes me wonder. In my OpenGL programs I use VAO objects and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
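ARB_separate_shader_objects also provides the glProgramUniform* calls, which target a program directly without binding it first; a sketch, with vertProg, the uniform name and matrixData as placeholders:
GLint loc = glGetUniformLocation(vertProg, "modelToCameraMatrix"); /* cache this once */
glProgramUniformMatrix4fv(vertProg, loc, 1, GL_FALSE, matrixData);
/* every pipeline that uses vertProg sees the updated value */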
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders need 30 program pipeline objects, but only 13 program objects. That's over 50% fewer program objects than in the non-separate case.
Why the quoted text is wrong
Now, this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data, and it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume that when you get its location later. So cut out the middle man and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
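Under such a convention, the vertex setup from the earlier "terrible code" example needs no program queries at all (a sketch, reusing the hypothetical buffer_object):
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(0); /* 0 means "position", by convention */
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);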
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only the vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, using conventional linking you would have N * M program objects (to cover all possible combinations). Using separate shader objects, the stages are separated from each other, and thus you need to keep only N + M program objects. That's a significant improvement in complex scenarios.