I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, despite the fact that shader A never uses texB and shader B never uses texA, both textures are enumerated as active uniforms in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot define many textures in one file, since the shaders fail to link (each compiles successfully, but the linker then complains that the number of texture samplers exceeds the hardware limit). The other problem is that I do automatic resource binding, so my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Sampler uniforms are not there to select textures, but to tell the shader which texture unit to sample from. The textures themselves are bound to those texture units. So the selection of which texture to use should not be done in the shader, but in the host program.
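For illustration, a minimal sketch of that division of labour, assuming two hypothetical programs progA/progB and textures texA/texB:

glUseProgram(progA);
glActiveTexture(GL_TEXTURE0);           // select texture unit 0
glBindTexture(GL_TEXTURE_2D, texA);     // bind the texture to that unit
glUniform1i(glGetUniformLocation(progA, "texA"), 0); // sampler reads unit 0

glUseProgram(progB);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texB);
glUniform1i(glGetUniformLocation(progB, "texB"), 1); // sampler reads unit 1

The shader never names a texture object; it only ever sees the unit index stored in the sampler uniform.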
Or you could use bindless textures, if your OpenGL implementation (i.e. your GPU driver) supports them.
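A rough sketch of the bindless path, assuming GL_ARB_bindless_texture is available and tex is an existing texture object:

GLuint64 handle = glGetTextureHandleARB(tex); // 64-bit handle to the texture
glMakeTextureHandleResidentARB(handle);       // make it accessible to shaders
glUniformHandleui64ARB(samplerLoc, handle);   // hand it to a sampler uniform
// The shader then needs:
// #extension GL_ARB_bindless_texture : require

With handles, the sampler uniform carries the texture itself rather than a texture unit index, so the sampler-unit limit no longer applies.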
I want to have a single shader program that has a Compute stage along with the standard graphics stages (vertex, tess control, tess eval, fragment).
Unfortunately, if I attach the compute stage to the rest of the program and then link it, location queries such as glGetAttribLocation (for uniforms/attributes in any stage) start returning -1, indicating that they failed to find the named objects. I also tried using layout(location = N), which resulted in nothing being drawn.
If I attach the stages to two different shader programs and use them one right after the other, both work well (the compute shader writes to a VBO and the draw shader reads from the same VBO), except that I have to switch between them.
Are there limitations on combining Compute stage with the standard graphics stages? All the examples I can find have two programs, but I have not found an explanation for why that would need to be the case.
OpenGL actively forbids linking a program that contains a compute shader with any non-compute shader types. You should have gotten a linker error when you tried it.
Also, there's really no reason to do so. The only hypothetical benefit you might have gotten from it is having the two sets of shaders share uniform values. There just isn't much to gain from having them in the same program.
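A sketch of the two-program pattern the question already describes, with hypothetical program and buffer names; the memory barrier is what makes the compute writes visible to the vertex fetch:

glUseProgram(computeProg);
glDispatchCompute(numGroups, 1, 1);                  // write vertex data into the buffer
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT); // flush writes before vertex pulling
glUseProgram(drawProg);
glBindVertexArray(vao);                              // VAO sources the same buffer
glDrawArrays(GL_POINTS, 0, vertexCount);

The glUseProgram switch between the two is cheap; the barrier, not the program switch, is the part you cannot skip.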
Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL, then decompiling with spirv-cross using --vulkan-semantics, but the output still has non-opaque uniforms.
spirv-cross seems to only have tools for going the other way, i.e. compiling Vulkan shaders for OpenGL:
[--glsl-emit-push-constant-as-ubo]
[--glsl-emit-ubo-as-plain-uniforms]
A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they consider uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, with all resources using the same binding indices (set+binding). By contrast, OpenGL gives each kind of resource its own unique set of indices. So a GLSL shader meant for OpenGL consumption might assign a texture uniform and a uniform block to the same binding index. But you can't do that in a GLSL shader meant for Vulkan, not unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
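For example, glslang's standalone compiler reportedly predefines VULKAN when compiling with Vulkan semantics (check your toolchain's documentation), so a shared snippet could look roughly like this, with made-up binding numbers:

#ifdef VULKAN
layout(set = 0, binding = 0) uniform sampler2D uTex;
layout(set = 0, binding = 1) uniform Params { mat4 uMvp; }; // loose uniforms must move into a block
#else
layout(binding = 0) uniform sampler2D uTex;   // sampler bindings have their own index space
layout(binding = 0) uniform Params { mat4 uMvp; };
#endif

The same source then compiles under both targets, with each side getting bindings that are legal for its resource model.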
I have some math functions written in GLSL and I want to use them in the TES, geometry and fragment stages of the same shader program. All of them are perfectly valid in all of these shader types, and for now the code is just copy-pasted across the shader files.
I want to extract the functions from the shader files and put them into a separate file, yielding a shader "library". I can see at least two ways to do it:
Make a shader source preprocessor which will insert "library" code into shader source where appropriate.
Create additional shader objects, one per shader type, compiled from the library source code, and link them all together. This way the "library" executable code is still duplicated across shader stages, just at the compiler and linker level rather than in the source. This is in fact how "library" shaders can be used, but in this variant they are stage-specific and cannot be shared across pipeline stages.
Is it possible to compile shader source only once (with appropriate restrictions), link it to a shader program and use it in any stage of the pipeline? I mean something like this:
GLuint shaderLib = glCreateShader(GL_LIBRARY_SHADER); // hypothetical stage-agnostic shader type
//...add source and compile....
glAttachShader(shProg, vertexShader);
glAttachShader(shProg, tesShader);
glAttachShader(shProg, geomShader);
glAttachShader(shProg, fragShader);
glAttachShader(shProg, shaderLib);
glLinkProgram(shProg); // Links OK; vertex, TES, geom and frag shader successfully use functions from shaderLib.
Of course, the library shader should not have in or out global variables, but it may use uniforms. Also, function prototypes would have to be declared before use in each shader source, just as is already possible when linking several shaders of the same type into one program.
And if the above is not possible, then WHY? Such "library" shaders would seem a natural fit for GLSL's C-like compilation model.
Is it possible to compile shader source only once (with appropriate restrictions), link it to a shader program and use it in any stage of the pipeline?
No. If you have to do this, I'd suggest just adding the text to the various shaders. Well, don't concatenate it into the actual string yourself; instead, add it to the list of shader strings you provide via glShaderSource/glCreateShaderProgramv.
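Concretely, something like this, where libSource and fsSource are hypothetical strings holding the library code and the stage's own code:

const GLchar *sources[] = { "#version 450 core\n", libSource, fsSource };
glShaderSource(fragShader, 3, sources, NULL); // the strings are concatenated in order
glCompileShader(fragShader);

Since #version must come first, it has to live in the first string rather than in the library text.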
And if the above is not possible, then WHY?
Because each shader stage is separate.
It should be noted that not even Vulkan changes this. Well, not in the way you want. It does allow the reverse: multiple shader stages all in a single SPIR-V module. But it doesn't (at the API level) allow multiple modules to provide code for a single stage.
All the high-level constructs of structs, functions, linking of modules, etc. are mostly just niceties that the APIs provide to make your life easier on input. HLSL, for example, allows you to use #include, but to do something similar in GLSL you need to perform that pre-processing step yourself.
When the GPU has to execute your shader, it runs the same code hundreds, thousands, if not millions of times per frame; the driver will have converted your shader into its own hardware-specific instruction set and optimised it down to the smallest number of instructions practicable, to get the best out of its instruction caches and shader engines.
According to the OpenGL wiki (https://www.opengl.org/wiki/Texture), "if two different GLSL samplers have different texture types, but are associated with the same texture image unit, then rendering will fail. Give each sampler a different texture image unit."
But glActiveTexture uses an enum to select the texture unit. How do I make sure a texture unit is associated with the correct target when I want to reuse the unit, for example using unit 2 for a 2D texture first and then reusing it for a 3D texture? I have tried glBindTexture(GL_TEXTURE_1D/2D/3D, 0); but it does not seem to work. Should I use glEnable?
You have misinterpreted what this statement on the OpenGL Wiki means.
It is referring to sampler uniforms in GLSL. It is an error to have, for instance, a sampler2D and a samplerCube that both reference the same texture image unit. Since this situation cannot be detected at compile or link time, no error state is generated. Instead, you get undefined behavior at shader run-time if you use two different types of samplers that refer to the same texture image unit.
Regarding enabling GL_TEXTURE_1D, etc.: that is for the fixed-function pipeline. It does nothing in shader-based OpenGL; textures are effectively "enabled" or "disabled" entirely programmatically. If you do not sample anything from a certain texture image unit during the execution of your shader, you can think of that unit as "disabled." Ultimately, though, such thinking is not productive in the programmable pipeline. You should simply forget that those states ever existed.
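So reusing unit 2 for different targets is fine, as long as the program used in each draw samples only one target from that unit. A sketch with hypothetical texture and function names:

glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, tex2D); // program A's sampler2D reads unit 2
drawWith2DProgram();
glBindTexture(GL_TEXTURE_3D, tex3D); // program B's sampler3D reads unit 2
drawWith3DProgram();

There is no need to unbind the old target first; each unit has a separate binding point per target, and only the targets the active shader actually samples matter.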
I would like to have two pixel shaders; the first doing one thing, and then the next doing something else. Is this possible, or do I have to pack everything into the one shader?
You can do this, e.g. by making function calls from the main entry point to functions that are implemented in the various shader objects.
// prototypes; the definitions live in the other shader objects
void callToShaderObject1();
void callToShaderObject2();

void main() {
    callToShaderObject1();
    callToShaderObject2();
}
Each of those callToShaderObject functions can live in a different shader object, but all the objects have to be attached and linked into the same program before it can be used.
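On desktop OpenGL, the attach/link step would look roughly like this (hypothetical names; see the version caveat in the next answer):

glAttachShader(prog, vertShader);
glAttachShader(prog, fragMain);  // contains main() and the two prototypes
glAttachShader(prog, fragPartA); // defines callToShaderObject1()
glAttachShader(prog, fragPartB); // defines callToShaderObject2()
glLinkProgram(prog);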
They can't run at the same time, but you are free to use different shaders for different geometry, or to render in multiple passes using different shaders.
The answer depends on the GL version; you need to check the glAttachShader documentation for the version you are using. GLES versions (including WebGL) do not allow attaching multiple fragment shaders to a single program, and will raise a GL_INVALID_OPERATION error when it is attempted.
glAttachShader - OpenGL 4:
It is permissible to attach multiple shader objects of the same type because each may contain a portion of the complete shader.
glAttachShader - OpenGL ES 3.2:
It is not permissible to attach multiple shader objects of the same type.