Can you have multiple pixel (fragment) shaders in the same program?

I would like to have two pixel shaders; the first doing one thing, and then the next doing something else. Is this possible, or do I have to pack everything into the one shader?

You can do this, e.g. by making function calls from the main entry point to functions that are implemented in the various shader objects.
// prototypes for functions defined in other shader objects
void callToShaderObject1();
void callToShaderObject2();

void main() {
    callToShaderObject1();
    callToShaderObject2();
}
Each of those callToShaderObject functions can live in a different shader object, but all objects have to be attached and linked into the same program before it can be used.
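On the API side that looks roughly like this (a sketch with error checking omitted; vertexShader, fragSource1 and fragSource2 are assumed to already exist):

// Compile two fragment shader objects; each contains a portion of the
// complete fragment stage (exactly one of them defines main()).
GLuint frag1 = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(frag1, 1, &fragSource1, NULL);
glCompileShader(frag1);

GLuint frag2 = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(frag2, 1, &fragSource2, NULL);
glCompileShader(frag2);

// Attach both fragment shader objects, plus the vertex shader,
// to the same program and link them into one executable.
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, frag1);
glAttachShader(program, frag2);
glLinkProgram(program);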

They can't run at the same time, but you are free to use different shaders for different geometry, or to render in multiple passes using different shaders.
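For example (a minimal sketch; programA, programB and the vertex counts are placeholders):

// Draw the first batch of geometry (or the first pass) with one program...
glUseProgram(programA);
glDrawArrays(GL_TRIANGLES, 0, vertexCountA);

// ...then switch programs and draw the rest (or the second pass).
glUseProgram(programB);
glDrawArrays(GL_TRIANGLES, 0, vertexCountB);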

The answer depends on the GL version; check the glAttachShader documentation for the version you are using. GLES versions (including WebGL) do not allow attaching multiple fragment shaders to a single program, and raise a GL_INVALID_OPERATION error when this is attempted.
glAttachShader - OpenGL 4:
It is permissible to attach multiple shader objects of the same type because each may contain a portion of the complete shader.
glAttachShader - OpenGL ES 3.2:
It is not permissible to attach multiple shader objects of the same type.

Related

OpenGL Compute stage with other stages

I want to have a single shader program that has a Compute stage along with the standard graphics stages (vertex, tess control, tess eval, fragment).
Unfortunately, if I attach the Compute stage to the rest of the program and then link it, location queries such as glGetAttribLocation (for uniforms/attributes in any stage) start returning -1, indicating that the named objects were not found. I also tried using layout(location=N), which resulted in nothing being drawn.
If I attach the stages to two different shader programs and use them one right after the other, both work well (the compute shader writes to a VBO and the draw shader reads from the same VBO), except that I have to switch between them.
Are there limitations on combining a Compute stage with the standard graphics stages? All the examples I can find use two programs, but I have not found an explanation of why that needs to be the case.
OpenGL actively forbids linking a program that contains a compute shader with any non-compute shader types. You should have gotten a linker error when you tried it.
Also, there's really no reason to do so. The only hypothetical benefit would have been the two sets of shaders sharing uniform values; there just isn't much to gain from having them in the same program.
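The two-program pattern the question already uses is the intended one; a sketch, assuming the compute shader fills a buffer that the draw program then reads as vertex data (computeProgram, drawProgram and the counts are placeholders):

// Run the compute stage from its own program object.
glUseProgram(computeProgram);
glDispatchCompute(numGroupsX, 1, 1);

// Make the compute shader's buffer writes visible to vertex fetching.
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

// Then draw with the separate graphics program reading the same VBO.
glUseProgram(drawProgram);
glDrawArrays(GL_POINTS, 0, vertexCount);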

Which shaders have to have input layout?

I'm creating a game based on DirectX 11.1. Now I'm coding the shader part and I have one question: which shader types have to have their own separate input layout? I have every shader stage existing in DirectX 11.1 in mind, including compute shaders, geometry shaders and others.
Assuming you're talking about ID3D11InputLayout, the only shader stage that requires this is the vertex shader. The other stages have their inputs/outputs defined as the arguments and return types of their main function, respectively.
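For illustration, a minimal sketch of creating such a layout (the semantic names must match the vertex shader's input signature; device and vsBlob are assumed to already exist):

// Describe how the vertex buffer maps onto the vertex shader inputs.
D3D11_INPUT_ELEMENT_DESC elements[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// The layout is validated against the vertex shader bytecode only;
// no other stage is involved.
ID3D11InputLayout* inputLayout = nullptr;
device->CreateInputLayout(elements, 2,
                          vsBlob->GetBufferPointer(),
                          vsBlob->GetBufferSize(),
                          &inputLayout);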

How to remove unused resources from an OpenGL program

I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, even though shader A does not use texB and shader B does not use texA, both textures will be enumerated in both programs (I am using separate programs, so every shader corresponds to one program).

One consequence is that I cannot have many textures defined in one file, since the shader will fail to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the hardware limit). The other problem is that I am doing automatic resource binding, and my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Shader sampler uniforms are not there to select textures, but to pass texture units to the shader. The textures themselves are bound to the texture units, so the selection of which texture to use should not be done in the shader, but in the host program.
Alternatively, you could use bindless textures, if your OpenGL implementation (i.e. GPU driver) supports them.
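A sketch of that host-side selection (assuming the shader declares uniform sampler2D uTex):

// Bind the texture this draw should use to texture unit 0...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);

// ...and point the sampler uniform at unit 0, not at a specific
// texture. Swapping textures later requires no shader change.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "uTex"), 0);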

Transform Feedback varyings

From some examples of using OpenGL transform feedback, I see that the glTransformFeedbackVaryings are mapped after program compilation and before linking. Is this enforced for all OpenGL versions? Can't a layout qualifier be used to set the indices, just as for vertex attributes? I am asking because in my code the shader program creation process is abstracted away from other routines, and before splitting it into separately controllable compile/link methods I would like to know if there is a way around this.
Update:
How is it done when using separable shader objects? There is no explicit linkage step.
Update:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes captured is taken from the program object active on the last shader stage processing the primitives captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seemed to have no effect; my transform feedback writes nothing. Then I found this discussion in the transform feedback docs:
Can you output varyings from a separate shader program created with glCreateShaderProgramEXT?
RESOLVED: No.
glTransformFeedbackVaryings requires a re-link to take effect on a program. glCreateShaderProgramEXT detaches and deletes the shader object used to create the program, so a glLinkProgram will fail.
You can still create a vertex or geometry shader program with the standard GLSL creation process, where you could use glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that, to set transform feedback varyings, one should use regular shader programs only? I don't get it.
What you are asking is possible using 4.4.2.1 Transform Feedback Layout Qualifiers; unfortunately, it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
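Managed from the API, it looks like this; note that the call only takes effect on the next glLinkProgram (the varying names here are placeholders):

// Declare which vertex shader outputs to capture, then (re-)link.
const char* varyings[] = { "outPosition", "outVelocity" };
glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);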
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - p. 392:
If separable program objects are in use, the set of attributes captured is taken from the program object active on the last shader stage processing the primitives captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily, linking identifies the varyings (denoted in/out in modern GLSL) that are actually used between stages and establishes the set of "active" uniforms for a GLSL program. Linking trims the dead weight that is not shared across multiple stages and performs static interface validation between stages; it is also when binding locations for any remaining varyings or uniforms are set. Since each program object can consist of a single stage when using SSOs, the linker is not going to reduce the number of inputs/outputs (varyings), and you can ignore a lot of language in the specification that says this must occur before or after linking.
Since linking is not a step in creating a program object for use with separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage you select). OpenGL uses the program associated with the final vertex processing stage enabled in your pipeline for this purpose; this could be a vertex shader, tessellation evaluation shader, or geometry shader (in that order). Whichever program provides the final vertex processing stage for your pipeline is the program object that transform feedback varyings are relative to.
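Putting the quoted resolution into practice, here is a sketch of the "standard GLSL creation process" for a separable vertex-stage program, which leaves you an explicit link step to apply glTransformFeedbackVaryings at (unlike glCreateShaderProgramv, which links internally; vertexShader and pipeline are assumed to exist):

// Standard creation, but marked separable for use in a pipeline.
GLuint program = glCreateProgram();
glProgramParameteri(program, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(program, vertexShader);

// Set the captured varyings BEFORE the explicit link.
const char* varyings[] = { "outPosition" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// Bind it to the pipeline's vertex stage as usual.
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, program);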

Multiple instances of shaders

I recently read that you can
"have multiple instances of OpenGL shaders"
but no other details were given on this.
I'd like some clarification as to what exactly this means.
For one, I know you can have more than one glProgram available, and that you can switch between them. Is that all this is referring to? I assume that switching between several created shader programs per frame would essentially mean I am using several programs "simultaneously".
Or does it somehow refer to having multiple "instances" of the same shader program? That would make no sense to me.
Some basic clarification would be appreciated here!
When you create a program object, you link together several shaders, usually at least a vertex and a fragment shader. Now suppose you want to render, say, a glow around some object. That glow would be created by a different fragment shader, but the vertex shader would be the same as for the regular appearance. To save resources, you can use the same vertex shader in multiple programs, but with different fragment shaders linked in. Of course, you could also have the same fragment shader and different vertex shaders.
In short, you can link a single shader object into an arbitrary number of programs, as long as the shader stages linked together are compatible with each other; this helps with modularization.
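A sketch of that sharing, where one compiled vertex shader object is linked into two programs with different fragment shaders (vsSource, regularFrag and glowFrag are placeholders):

// One vertex shader object, compiled once...
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSource, NULL);
glCompileShader(vs);

// ...linked into the regular-appearance program...
GLuint regularProgram = glCreateProgram();
glAttachShader(regularProgram, vs);
glAttachShader(regularProgram, regularFrag);
glLinkProgram(regularProgram);

// ...and into the glow program with a different fragment shader.
GLuint glowProgram = glCreateProgram();
glAttachShader(glowProgram, vs);
glAttachShader(glowProgram, glowFrag);
glLinkProgram(glowProgram);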