Is it valid to draw with an empty shader program? - c++

In my game there is a render module that handles shaders, framebuffers and drawing. Now I want to encapsulate the logic of these three tasks separately, splitting the render module into three modules. I am doing this to reduce code complexity and to make live shader reloading easy to implement, but that is not important for my question.
The drawing module would create empty program objects with glCreateProgram() and store them globally along with the path to the source file. The shader module would check for them and create the actual shader by loading the source file, then compiling and linking it.
But with this approach, a situation could occur where the render module already wants to draw while the shader module has not yet created the actual shader. So my question is: is it valid to draw with an empty shader program? It is completely acceptable to me if the screen is black when this happens. Creating the shaders should finish very quickly, so the delay would probably be unnoticeable.
Is it valid to draw with an empty shader program? How else could I implement the idea of shaders that are loaded elsewhere?

How do you define "valid" and "empty"?
If the program object's last link was not successful (or if it has never been linked), then calling glUseProgram on it generates a GL_INVALID_OPERATION error. Because the call fails, the current program does not change, so subsequent glUniform calls will still refer to whatever program was current before, not the new one; if no program is current (program zero), those calls generate further GL_INVALID_OPERATION errors.
If there is no current program (i.e., the program is 0), then attempting to render produces undefined behavior.
Is undefined behavior "valid" for your needs? If you're not going to show the user that frame (by not calling swap buffers), then what you render won't matter. Is having all of those errors in the OpenGL error queue "valid" for you?

Two relevant lines in the documentation:
glAttachShader
All operations that can be performed on a shader object are valid whether or not the shader object is attached to a program object. It is permissible to attach a shader object to a program object before source code has been loaded into the shader object or before the shader object has been compiled.
glUseProgram
If program is zero, then the current rendering state refers to an invalid program object and the results of shader execution are undefined. However, this is not an error.
If program does not contain shader objects of type GL_FRAGMENT_SHADER, an executable will be installed on the vertex, and possibly geometry processors, but the results of fragment shader execution will be undefined.
So it seems it is possible to do that, but you might not get the same result on all machines.
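If all you need is to avoid that undefined behaviour, the render module can simply check the link status and skip the draw until the shader module has finished. Below is a minimal sketch under the assumptions of the question; ShaderSlot, isUsable and drawIfReady are hypothetical names, and the loader header is whatever your project already uses.
#include <string>
// ... include your OpenGL loader of choice here ...

// Hypothetical slot shared between the drawing and shader modules.
struct ShaderSlot {
    GLuint      program;    // created up-front with glCreateProgram()
    std::string sourcePath; // picked up later by the shader module
};

// A never-linked (or failed) program reports GL_LINK_STATUS == GL_FALSE.
bool isUsable(const ShaderSlot& slot)
{
    GLint linked = GL_FALSE;
    glGetProgramiv(slot.program, GL_LINK_STATUS, &linked);
    return linked == GL_TRUE;
}

void drawIfReady(const ShaderSlot& slot)
{
    if (!isUsable(slot))
        return;             // shader module hasn't linked the program yet
    glUseProgram(slot.program);
    // ... issue draw calls ...
}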

Related

How to remove unused resources from an OpenGL program

I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, despite the fact that shader A never uses texB and shader B never uses texA, both textures are enumerated in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot define many textures in one file, because the shader fails to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the hardware limit). The other problem is that I am doing automatic resource binding, so my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Sampler uniforms in a shader are not there to select textures; they are there to receive texture unit indices from the host program. The textures themselves are bound to those texture units, so the selection of which texture to use should not be made in the shader but in the host program.
Or you could use bindless textures if your OpenGL implementation (=GPU driver) supports these.
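To illustrate the point, here is a rough sketch of selecting textures from the host program rather than in the shader; the uniform name u_tex and the unit numbers are made up for this example.
// Bind each texture to a texture unit; the shader never selects a texture itself.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);   // sampled by program A
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texB);   // sampled by program B

// Tell each program's sampler uniform which unit to read from.
glUseProgram(programA);
glUniform1i(glGetUniformLocation(programA, "u_tex"), 0);
glUseProgram(programB);
glUniform1i(glGetUniformLocation(programB, "u_tex"), 1);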

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless whether I have 1, 2, or 3 textures enabled. From this comes a number of questions:
Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
Is the "best practice" to create a separate shader program that only expects a single texture for single texture rendering?
If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() call plus 5-6 glGetUniformLocation() calls, but maybe #4 addresses that issue.
When changing the active shader program with glUseProgram() do I need to call glGetUniform... functions every time to re-establish the location of each uniform in the program, or is the location of each expected to be consistent until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures; it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform, or possibly subroutines (if you have dozens of variations of the same shader).
As far as the time taken to disable a vertex array state goes, that is probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect render pipeline state; they are just small changes to memory. Likewise, constantly swapping the current GLSL program does things like invalidate the shader cache, so that is also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
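As a sketch of that advice (the uniform names and the ProgramInfo struct are invented for this example, not taken from the question): query the locations once after linking, cache them, and per draw call only set values, for instance a texture-count uniform instead of disabling attribute arrays.
struct ProgramInfo {
    GLuint program;
    GLint  locTexCount;   // how many textures the fragment shader should use
    GLint  locTex[3];     // the three sampler uniforms
};

ProgramInfo cacheLocations(GLuint program)
{
    ProgramInfo info{};
    info.program     = program;
    info.locTexCount = glGetUniformLocation(program, "u_texCount");
    info.locTex[0]   = glGetUniformLocation(program, "u_tex0");
    info.locTex[1]   = glGetUniformLocation(program, "u_tex1");
    info.locTex[2]   = glGetUniformLocation(program, "u_tex2");
    return info;          // valid until the program is linked again
}

void drawWithTextures(const ProgramInfo& info, int textureCount)
{
    glUseProgram(info.program);
    glUniform1i(info.locTexCount, textureCount); // shader ignores unused textures
    // ... bind textures and issue the draw call ...
}
With GL 4.1+ or GL_ARB_separate_shader_objects you could instead call glProgramUniform1i(info.program, info.locTexCount, textureCount) and set the value without the program being bound.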

OpenGL transform feedback definition completely inside shader

I'm trying to get my transform feedback running. I wanted to specify my buffer layout completely from the shaders, using core OpenGL 4.4 or the GL_ARB_enhanced_layouts extension with layout (xfb_offset=xx) qualifiers. I assumed that after declaring these in a vertex shader I can call
GLint iTransformFeedbackVars;
glGetProgramiv(m_uProgramID, GL_TRANSFORM_FEEDBACK_VARYINGS, &iTransformFeedbackVars);
to get the number of potential variables that want to be written to a transform feedback buffer. But OpenGL keeps returning 0 for "iTransformFeedbackVars". I tried calling the above command BEFORE and AFTER linking the program.
Am I missing something here, or is it even possible to let the shader specify the variables it wants to write and have my code create the buffer(s) according to the shader's wishes?

Transform Feedback varyings

From some examples of using OpenGL transform feedback I see that the glTransformFeedbackVaryings are specified after program compilation and before it is linked. Is this order enforced for all OpenGL versions? Can't a layout qualifier be used to set the indices, just like for vertex arrays? I am asking because in my code the shader program creation process is abstracted away from other routines, and before splitting it into separately controllable compile/link methods I would like to know if there is a way around this.
Update:
How is it done when using separable shader objects? There is no explicit linkage step.
UPDATE:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes
captured is taken from the program object active on the last shader
stage processing the primitives captured by transform feedback. The
set of attributes to capture in transform feedback mode for any other
program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seems to have no effect; my transform feedback writes nothing. Then I found this discussion in the Transform Feedback docs:
Can you output varyings from a seperate shader program created
with glCreateShaderProgramEXT?
RESOLVED: No.
glTransformFeedbackVaryings requires a re-link to take effect on a
program. glCreateShaderProgramEXT detaches and deletes the shader
object use to create the program so a glLinkProgram will fail.
You can still create a vertex or geometry shader program
with the standard GLSL creation process where you could use
glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that to set transform feedback varyings one should use the regular shader programs only? I don't get it.
What you are asking is possible using 4.4.2.1 Transform Feedback Layout Qualifiers, unfortunately it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
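For reference, managing the varyings from the API side looks roughly like this; the varying name outValue is a placeholder. The key point is that glTransformFeedbackVaryings only takes effect on the next glLinkProgram.
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);

// Register the outputs to capture *before* linking (or re-link afterwards).
const char* varyings[] = { "outValue" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);

glLinkProgram(program);

// Only after a successful link does this query report the captured varyings.
GLint count = 0;
glGetProgramiv(program, GL_TRANSFORM_FEEDBACK_VARYINGS, &count); // count == 1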
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - pp. 392
If separable program objects are in use, the set of attributes captured is taken
from the program object active on the last shader stage processing the primitives
captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily, linking identifies the varyings (denoted in/out in modern GLSL) that are actually used between stages and establishes the set of "active" uniforms for a GLSL program. Linking trims the dead weight that is not shared across multiple stages, performs static interface validation between stages, and is also when binding locations for any remaining varyings or uniforms are assigned. Since each program object can be a single stage when using SSOs, the linker is not going to reduce the number of inputs/outputs (varyings), and you can ignore a lot of the language in the specification that says this must occur before or after linking.
Since linking is not a step in creating a program object for use with separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage you select). OpenGL uses the program associated with the final vertex processing stage enabled in your pipeline for this purpose; this could be a vertex shader, tessellation evaluation shader, or geometry shader (in that order). Whichever program provides the final vertex processing stage for your pipeline is the program object that transform feedback varyings are relative to.
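Putting that together for separate shader objects: a rough sketch, assuming you build the separable vertex-stage program by hand (rather than via glCreateShaderProgramv) so that glTransformFeedbackVaryings can still be applied before the link; the varying name and shader handles are placeholders.
// Manually built separable program for the vertex stage.
GLuint vsProgram = glCreateProgram();
glProgramParameteri(vsProgram, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(vsProgram, vertexShader);

const char* varyings[] = { "outValue" };
glTransformFeedbackVaryings(vsProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(vsProgram);

// In the pipeline, this program is the last vertex processing stage,
// so transform feedback captures the varyings it declared.
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProgram);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgram);
glBindProgramPipeline(pipeline);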

Can you have multiple pixel (fragment) shaders in the same program?

I would like to have two pixel shaders; the first doing one thing, and then the next doing something else. Is this possible, or do I have to pack everything into the one shader?
You can do this, e.g. by making function calls from the main entry point to functions that are implemented in the various shader objects.
void callToShaderObject1();
void callToShaderObject2();

void main() {
    callToShaderObject1();
    callToShaderObject2();
}
each of those callToShaderObject functions can live in different shader objects, but all objects have to be attached and linked in the same program before it can be used.
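On the application side that looks roughly like the following (desktop OpenGL; the variable names and the idea of two source strings are illustrative only):
// One fragment shader object contains main() and the prototypes (as above),
// the other defines callToShaderObject1/2.
GLuint fsMain  = glCreateShader(GL_FRAGMENT_SHADER);
GLuint fsParts = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fsMain,  1, &mainSource,  nullptr);
glShaderSource(fsParts, 1, &partsSource, nullptr);
glCompileShader(fsMain);
glCompileShader(fsParts);

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);  // some vertex shader object
glAttachShader(program, fsMain);        // both fragment objects are attached...
glAttachShader(program, fsParts);
glLinkProgram(program);                 // ...and resolved against each other here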
They can't run at the same time, but you are free to use different shaders for different geometry, or to render in multiple passes using different shaders.
The answer depends on the GL version; you need to check the glAttachShader documentation for the version you are using. OpenGL ES versions (including WebGL) do not allow attaching multiple fragment shaders to a single program and will raise a GL_INVALID_OPERATION error when this is attempted.
glAttachShader - OpenGL 4:
It is permissible to attach multiple shader objects of the same type because each may contain a portion of the complete shader.
glAttachShader - OpenGL ES 3.2:
It is not permissible to attach multiple shader objects of the same type.