I have been spending a lot of time tweaking my shaders and I want to quickly reload a shader without recompiling my program. What is the official way to hot-reload shaders in OpenGL 4.1?
Here is my current (non-functional) method:
Create a program pipeline, create one separable program per shader (right now it's just one vertex and one fragment shader), and use them in the pipeline. Load uniforms into each program that needs them (is there a way to mimic the old style of one uniform shared across shaders?)
glGenProgramPipelines(1, &programPipeline);
glBindProgramPipeline(programPipeline); //is this necessary?
auto handle = glCreateShader(type);
const char* src = source.c_str();
glShaderSource(handle, 1, &src, NULL);
glCompileShader(handle);
program[idx] = glCreateProgram();
glProgramParameteri(program[idx], GL_PROGRAM_SEPARABLE, GL_TRUE); //Xcode autocorrected this to __glewProgramParameteri for me
glAttachShader(program[idx], handle);
glLinkProgram(program[idx]);
glDetachShader(program[idx], handle);
glUseProgramStages(programPipeline, shaderStage, program[idx]);
The part I am stuck at is what to do with attributes. It fails with GL_INVALID_OPERATION when I try to enable a vertex attrib array, where with normal non-separable programs it would work fine. I have generated and bound a VAO and got the attrib location before trying to enable the vertex attrib array. The online resources I've found only mention uniforms in conjunction with separable programs, nothing regarding attributes. Can they even be used with separable programs? What would be the alternative?
I'm writing my own OpenGL 3D application and have stumbled across a little problem:
I want the number of light sources to be dynamic. For this, my shader contains an array of my light struct:
uniform PointLight pointLights[NR_POINT_LIGHTS];
The variable NR_POINT_LIGHTS is set by the preprocessor, and the directive for it is generated by my application's code (Java). So when creating a shader program, I pass the desired starting number of PointLights, complete the source text with the preprocessor directive, compile, link and use. This works great.
Now I want to change this variable. I re-build the shader source string, re-compile and re-link a new shader program, and continue using this one. It just appears that all uniforms set in the old program are lost in the process (of course, I once set them for the old program).
My ideas on how to fix this:
Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
Copy all uniform data from the old program to the newly generated one
What is the right way to do this? How do I do it? I'm not very experienced yet and don't know whether either of my ideas is even possible.
You're looking for a Uniform Buffer or (4.3+ only) a Shader Storage Buffer.
struct Light {
vec4 position;
vec4 color;
vec4 direction;
/*Anything else you want*/
};
Uniform Buffer:
const int MAX_ARRAY_SIZE = /*65536 / sizeof(Light)*/;
layout(std140, binding = 0) uniform light_data {
Light lights[MAX_ARRAY_SIZE];
};
uniform int num_of_lights;
Host Code for Uniform Buffer:
glGenBuffers(1, &light_ubo);
glBindBuffer(GL_UNIFORM_BUFFER, light_ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); //Can be adjusted for your needs
GLuint light_index = glGetUniformBlockIndex(program_id, "light_data");
glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_ubo);
glUniformBlockBinding(program_id, light_index, 0);
glUniform1i(glGetUniformLocation(program_id, "num_of_lights"), static_light_data.size() / 12); //My lights have 12 floats per light, so we divide by 12.
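The payoff of this approach is that changing the number of lights never touches the shader again; you only re-upload the buffer and the count. A minimal sketch of the update step, reusing the names above (new_light_data is an assumed std::vector<GLfloat> holding 12 floats per light, like static_light_data):
glBindBuffer(GL_UNIFORM_BUFFER, light_ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(GLfloat) * new_light_data.size(), new_light_data.data(), GL_DYNAMIC_DRAW); //re-allocates if the light count grew
glUseProgram(program_id); //glUniform* affects the currently bound program
glUniform1i(glGetUniformLocation(program_id, "num_of_lights"), (GLint)(new_light_data.size() / 12));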
Shader Storage Buffer (4.3+ Only):
layout(std430, binding = 0) buffer light_data {
Light lights[];
};
/*...*/
void main() {
/*...*/
int num_of_lights = lights.length();
/*...*/
}
Host Code for Shader Storage Buffer (4.3+ Only):
glGenBuffers(1, &light_ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, light_ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLfloat) * static_light_data.size(), static_light_data.data(), GL_STATIC_DRAW); //Can be adjusted for your needs
GLuint light_ssbo_block_index = glGetProgramResourceIndex(program_id, GL_SHADER_STORAGE_BLOCK, "light_data");
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, light_ssbo);
glShaderStorageBlockBinding(program_id, light_ssbo_block_index, 0);
The main difference between the two is that Uniform Buffers:
Are compatible with older, OpenGL 3.x hardware,
Are limited on most systems to 64 KB (65,536 bytes) per buffer,
Need arrays to have their maximum size declared statically at shader compile time.
Whereas Shader Storage Buffers:
Require OpenGL 4.3-capable hardware,
Have an API-mandated minimum allowable size of 16 MB (and most systems will allow up to 25% of the total VRAM),
Can dynamically query the size of any arrays stored in the buffer (though this can be buggy on older AMD systems),
Can be slower than Uniform Buffers on the Shader side (roughly equivalent to a Texture Access)
Don't compile a new program, but rather somehow change the source data for the currently running shaders and somehow re-compile them, to continue using the program with the right uniform values
This isn't doable at runtime if I'm understanding you right (that would imply changing the shader code inside an already-compiled shader program), but if you modify the shader source text you can compile a new shader program. The thing is, how often does the number of lights change in your scene? Because recompiling is a fairly expensive process.
You could specify a maximum number of lights if you don't mind having a limitation, and only use the lights in the shader that have been populated with information. That saves you the task of tweaking the source text and recompiling a whole new shader program, but it leaves you with a cap on the number of lights. (If you aren't planning on having absolutely loads of lights in your scene, but are planning on having the number of lights change relatively often, this is probably going to be best for you.)
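To make that concrete, here is a minimal sketch of the shader side, shown embedded in a host-side string (the MAX_LIGHTS cap, the num_lights uniform and the calcPointLight helper are illustrative names, not from your code):
const char* lightLoopSrc = R"GLSL(
    #define MAX_LIGHTS 32
    uniform PointLight pointLights[MAX_LIGHTS]; //always declared at the cap
    uniform int num_lights;                     //how many entries are populated
    vec3 accumulateLights(vec3 normal, vec3 fragPos) {
        vec3 result = vec3(0.0);
        for (int i = 0; i < num_lights; ++i)    //loop over populated lights only
            result += calcPointLight(pointLights[i], normal, fragPos);
        return result;
    }
)GLSL";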
However, if you really want to go down the route that you are looking at here:
Copy all uniform data from the old program to the newly generated one
You can look at using a Uniform Block. If you're going to be using shader programs with similar or shared uniforms, Uniform Blocks are a good way of managing those 'universal' uniform variables across your shader programs, or in your case across the programs you move between as you grow the number of lights. There's a good tutorial on uniform blocks here.
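For example, a minimal sketch of sharing one block across two programs (program_a, program_b and light_ubo are placeholder handles; the block name follows the earlier answer):
GLuint idx_a = glGetUniformBlockIndex(program_a, "light_data");
GLuint idx_b = glGetUniformBlockIndex(program_b, "light_data");
glUniformBlockBinding(program_a, idx_a, 0); //both programs read binding point 0
glUniformBlockBinding(program_b, idx_b, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, light_ubo); //one buffer feeds both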
Lastly, depending on the OpenGL version you're using, you might still be able to achieve dynamic array sizes. OpenGL 4.3 introduced the ability to back an unsized shader array with a buffer, using glBindBufferRange to control how much of the buffer (and therefore how many lights) the shader sees. You'll see more talk about that functionality in this question and this wiki reference.
The last would probably be my preference, but it depends on if you're aiming at hardware supporting older OpenGL versions.
What is the easiest way to attach more shaders (sources) to a GLSL program? Normally, when a vertex shader and a fragment shader are attached to a program, one does something like this:
vertex = glCreateShader(GL_VERTEX_SHADER);
fragment = glCreateShader(GL_FRAGMENT_SHADER);
programa = glCreateProgram();
char *vsFuente = LEE_SHADER(rutaVert);
char *fsFuente = LEE_SHADER(rutaFrag);
if(vsFuente == NULL)return GL_FALSE;
if(fsFuente == NULL)return GL_FALSE;
const char *vs = vsFuente;
const char *fs = fsFuente;
glShaderSource(vertex ,1,&vs,NULL);
glShaderSource(fragment,1,&fs,NULL);
delete [] vsFuente;
delete [] fsFuente;
glCompileShader(vertex);
glCompileShader(fragment);
glAttachShader(programa,vertex);
glAttachShader(programa,fragment);
glLinkProgram(programa);
Well, if I want to attach another pair of shaders (vertex and fragment) to the same program, how do I do it? I use OpenGL 2.0.
Exactly the same way you added the shaders you already have in your code. You create more shaders, call glCompileShader() and glAttachShader() for them, and then call glLinkProgram().
You need to be aware of what this is doing, though. While it is perfectly legal to add multiple shaders of the same type to a program, only one of the shaders of each type can have a main() function.
The typical setup I have seen when multiple shaders of the same type are attached to a program (which is not very common) is that some of the shaders contain collections of "utility" functions, and then there's one shader of each type that contains the main() function and implements the overall shader logic. This setup can be useful for complex shaders, since the shaders with the utility functions can be compiled once, and then attached to multiple different programs without having to be compiled each time.
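As a sketch of that setup (vertexShader, mainFragA, mainFragB and utilSource are assumed to already exist; they are not from the question):
GLuint utilFrag = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(utilFrag, 1, &utilSource, NULL); //lighting helpers, no main()
glCompileShader(utilFrag); //compiled exactly once
GLuint progA = glCreateProgram();
glAttachShader(progA, vertexShader);
glAttachShader(progA, mainFragA); //contains main(), calls the helpers
glAttachShader(progA, utilFrag);
glLinkProgram(progA);
GLuint progB = glCreateProgram();
glAttachShader(progB, vertexShader);
glAttachShader(progB, mainFragB);
glAttachShader(progB, utilFrag); //same compiled object, no recompilation
glLinkProgram(progB);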
I'm not sure if this is what you're after. If you really just want to use different sets of shaders, the much more common setup is that you create a program for each pair of vertex/fragment shaders, and then switch between them during your rendering by calling glUseProgram() with the program you need for each part of the rendering.
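In a render loop that would look roughly like this (the draw calls are placeholders):
glUseProgram(progA);
drawObjectsUsingMaterialA(); //placeholder draw call
glUseProgram(progB);
drawObjectsUsingMaterialB();
glUseProgram(0); //optionally unbind afterwards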
I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post author puts it:
It allows to independently use shader stages without changing others
shader stages. I see two mains reasons for it: Direct3D, Cg and even
the old OpenGL ARB program does it but more importantly it brings some
software design flexibilities allowing to see the graphics pipeline at
a lower granularity. For example, my best enemy the VAO, is a
container object that links buffer data, vertex layout data and GLSL
program input data. Without a dedicated software design, this means
that when I change the material of an object (a new fragment shader),
I need different VAO... It's fortunately possible to keep the same VAO
and only change the program by defining a convention on how to
communicate between the C++ program and the GLSL program. It works
well even if some drawbacks remains.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
makes me wonder. In my OpenGL programs I use VAOs and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders still need 30 program pipeline objects to cover every combination, but only 13 program objects. That's over 50% fewer program objects than in the non-separate case.
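As a sketch of the separate-shader version (GL 4.1+; the source strings and the "time" uniform are illustrative): one separable vertex program is mixed with two fragment programs through a pipeline, so no combination is ever linked as a monolithic program.
//glCreateShaderProgramv compiles and links a single-stage separable program.
GLuint vertProg  = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSource);
GLuint fragProgA = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSourceA);
GLuint fragProgB = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSourceB);
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glBindProgramPipeline(pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vertProg);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgA); //material A
//...draw, then swap only the fragment stage:
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgB); //material B
//Vertex-shader uniforms are set once, on the single vertex program:
glProgramUniform1f(vertProg, glGetUniformLocation(vertProg, "time"), t);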
Why the quoted text is wrong
Now ,this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data. And it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume as much when you get its location later. So cut out the middle man and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, conventional linking would require N * M program objects (to cover all possible combinations). Using separate shader objects, the stages are independent of each other, and thus you need to keep only N + M program objects. That's a significant improvement in complex scenarios.
Two questions:
I am rendering elements in a large VBO with different shaders. GLSL 1.2, which I must use if I am correct (as it is the most current version on OS X), does not support explicit locations, which I assume means that the location of your attributes is wherever the compiler decides. Is there any way around this? For instance, as my VBO is set up with interleaved (x, y, z, nx, ny, nz, texU, texV) data, I need multiple shaders to be able to access these attributes in the same place every time. I am finding, however, that the compiler is giving them different locations, leading to the position being read as the normals, and so on. I need their locations to be consistent with my VBO's attribute layout.
I just got my first GLSL rendering completed, and it looks exactly as if I forgot to enable the depth test, with various polygons rendered on top of one another. I enabled depth testing with:
glEnable(GL_DEPTH_TEST);
And the problem persists. Is there a different way to enable it with shaders? I thought the depth buffer took care of this?
Problem 2 solved: it turned out to be an SFML issue where I needed to specify the OpenGL settings when creating the window.
Attribute locations are specified in one of 3 places, in order from highest priority to lowest:
Through the use of the GLSL 3.30 (or better) or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
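For instance (a two-line sketch; "axis" is an illustrative second name):
glBindAttribLocation(prog, 3, "position");
glBindAttribLocation(prog, 3, "axis"); //fine, as long as no one shader has both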
Note that you can also set attributes that don't exist. You could bind "normal" to an index even if no attribute of that name is specified in the shader. That is fine; the linker only cares about attributes that actually exist. So you can establish a complex convention for this sort of thing, and just run every program through it before linking:
void AttribConvention(GLuint prog)
{
glBindAttribLocation(prog, 0, "position");
glBindAttribLocation(prog, 1, "color");
glBindAttribLocation(prog, 2, "normal");
glBindAttribLocation(prog, 3, "tangent");
glBindAttribLocation(prog, 4, "bitangent");
glBindAttribLocation(prog, 5, "texCoord");
}
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
AttribConvention(program);
glLinkProgram(program);
Even if a particular shader doesn't have all of these attributes, it will still work.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have it at a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.
On OpenGL 3.3+ you have VAOs; when you use them, you bind VBOs to them and you can define attributes in a custom order: http://www.opengl.org/sdk/docs/man3/xhtml/glEnableVertexAttribArray.xml (remember that attribute indices must be contiguous).
A nice/easy implementation of this can be found in XNA: VertexDeclaration; you might want to see all the Vertex* types as well.
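Here's a minimal sketch of the VAO route (GL 3.3+; the attribute indices are a chosen convention, and vbo is assumed to be already created and filled with the interleaved x,y,z,nx,ny,nz,texU,texV data from the question):
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
const GLsizei stride = 8 * sizeof(GLfloat); //x,y,z,nx,ny,nz,texU,texV
glEnableVertexAttribArray(0); //position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1); //normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(2); //texcoord
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(GLfloat)));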
Some hints on getting OpenGL 3 to work with SFML:
http://en.sfml-dev.org/forums/index.php?topic=6314.0
An example on how to create and use VAOs : http://www.opentk.com/files/issues/HelloGL3.cs
(It's C# but I guess you'll get it)
Update:
On v2.1 you have it too (http://www.opengl.org/sdk/docs/man/xhtml/glEnableVertexAttribArray.xml), though you can't create VAOs. Almost the same functionality can be achieved, but you will have to bind the attributes every time, since it'll be on the fixed pipeline.
I am trying to use GLSL with OpenGL 2.0.
Can anyone give me a good tutorial to follow, so that I can set up GLSL properly?
Depending on what you are trying to achieve and what is your current knowledge, you can take different approaches.
If you are trying to learn OpenGL 2.0 while also learning GLSL, I suggest getting the Red book and the Orange book as a set, as they go hand in hand.
If you want a less comprehensive guide that will just get you started, check out the OpenGL bible.
If I misunderstood your question and you already know OpenGL and want to study more about GLSL in particular, here's a good Phong shading example that shows the basics.
Compiling a shader from source is really simple:
First you need to allocate a shader slot for your source, just like you allocate a texture, using glCreateShader:
GLuint vtxShader = glCreateShader(GL_VERTEX_SHADER);
GLuint pxlShader = glCreateShader(GL_FRAGMENT_SHADER);
After that you need to load your source code somehow. Since this is really platform-dependent, it is up to you.
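As one possible approach (an illustration, not the only way), reading the whole file with standard C++:
#include <fstream>
#include <sstream>
#include <string>
std::string loadShaderSource(const char* path)
{
    std::ifstream file(path);
    std::stringstream buffer;
    buffer << file.rdbuf(); //slurp the entire file
    return buffer.str();
}
//Keep the std::string alive while its c_str() is in use:
std::string vsSourceStr = loadShaderSource("basic.vert"); //hypothetical path
const char* vsSource = vsSourceStr.c_str();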
After obtaining the source, you set it using glShaderSource:
glShaderSource(vtxShader, 1, &vsSource, 0);
glShaderSource(pxlShader, 1, &psSource, 0);
Then you compile your sources with glCompileShader:
glCompileShader(vtxShader);
glCompileShader(pxlShader);
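It's worth checking at this point whether compilation succeeded (an addition to the walkthrough above; assumes <cstdio> for the error print):
GLint compiled = GL_FALSE;
glGetShaderiv(vtxShader, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(vtxShader, sizeof(log), NULL, log);
    fprintf(stderr, "vertex shader compile failed:\n%s\n", log);
}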
Next, link the shaders together into a program: first allocate a program using glCreateProgram, attach the shaders to it using glAttachShader, and link it using glLinkProgram:
GLuint shaderId = glCreateProgram();
glAttachShader(shaderId, vtxShader);
glAttachShader(shaderId, pxlShader);
glLinkProgram(shaderId);
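Linking can fail too, so the same kind of check applies after glLinkProgram (again an optional addition, not part of the original steps):
GLint linked = GL_FALSE;
glGetProgramiv(shaderId, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(shaderId, sizeof(log), NULL, log);
    fprintf(stderr, "program link failed:\n%s\n", log);
}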
Then, just like a texture, you bind it to the current rendering stage using glUseProgram:
glUseProgram(shaderId);
To unbind it, use an ID of 0 or the ID of another program.
For cleanup:
glDetachShader(shaderId, vtxShader);
glDetachShader(shaderId, pxlShader);
glDeleteShader(vtxShader);
glDeleteShader(pxlShader);
glDeleteProgram(shaderId);
And that's mostly all there is to it; you can use the glUniform* function family together with glGetUniformLocation to set parameters as well.
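For example (the uniform name uColor is illustrative):
GLint loc = glGetUniformLocation(shaderId, "uColor");
if (loc != -1) //-1 means the uniform is inactive or absent
    glUniform4f(loc, 1.0f, 0.5f, 0.2f, 1.0f); //program must be bound via glUseProgram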