OpenGL Compute stage with other stages

I want to have a single shader program that has a Compute stage along with the standard graphics stages (vertex, tess control, tess eval, fragment).
Unfortunately if I attach the Compute stage to the rest of the program and then link it, calls to location queries such as glGetAttribLocation (for uniforms/attributes in any stage) start returning -1, indicating they failed to find the named objects. I also tried using layout(location=N), which resulted in nothing being drawn.
If I attach the stages to two different shader programs and use them one right after the other, both work well (the compute shader writes to a VBO and the draw shader reads from the same VBO), except that I have to switch between them.
Are there limitations on combining Compute stage with the standard graphics stages? All the examples I can find have two programs, but I have not found an explanation for why that would need to be the case.

OpenGL actively forbids linking a program that contains a compute shader with any non-compute shader types. You should have gotten a linker error when you tried it.
Also, there's really no reason to do so. The only hypothetical benefit you might have gotten from it is having the two sets of shaders sharing uniform values. There just isn't much to gain from having them in the same program.
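For reference, the two-program flow the question already uses is the standard pattern. A sketch of it (not runnable on its own — it assumes an existing GL 4.3+ context and placeholder program/buffer names), including the glMemoryBarrier call that is needed to make the compute writes visible before drawing:

```cpp
// Sketch only — assumes a GL 4.3+ context; names are placeholders.
glUseProgram(computeProgram);                       // program containing only the compute shader
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo); // compute stage writes vertex data here
glDispatchCompute(groupCount, 1, 1);

// Make the compute writes visible to the vertex puller before drawing.
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

glUseProgram(drawProgram);                          // vertex/tess/fragment program
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```

Switching programs like this is cheap relative to the dispatch and draw themselves, so there is little cost to keeping the stages in two programs.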


OpenGL per-mesh material (shader)

So, I'm working on a simple game-engine with C++ and OpenGL 4. Right now I'm struggling with rendering imported models.
I'm using the FBX SDK to import FBX models with a very naive approach: basically I visit each node of the FBX file and append the mesh data to a single big structure that is later used for rendering. However, I want to be able to specify a different fragment shader for each material used by the model (for example, different shaders for a car's rims and lights).
As a reference, UE4 has a material system that allows the user to define a simple shader using a blueprint-like editor.
I would like to apply a similar concept in my engine, allowing the user to create a material object that specifies a piece of fragment shader code and a set of textures to use.
The problems I'm facing are:
It is clear that I must separate the draw calls for each model part that uses a different material, since I cannot swap programs in the middle of a draw call (can I?). Given that, is it better to have a separate VAO/VBO/EBO for each part, or a single one while keeping track of where each part ends and the next one begins? (I guess the latter is the better option.)
Is it good practice to pre-compile just the fragment shader and attach it to the current program on the fly (i.e. glAttachShader + glLinkProgram + glUseProgram), or is it better to pre-link an entire program for each material, considering that the vertex shader is always the same?
No, you cannot change the program in the middle of a draw call. There are differing opinions and benchmarks on how the GPU will perform depending on the layout of your data. My experience is that, if you are not going to modify your mesh data after you upload it the first time, the most efficient way is to have a single VAO with two VBOs: one for indices and one for the rest of the data. When issuing draw calls, you offset into the index buffer based on the mesh data (which you should keep track of), as well as offsetting the configuration of the shader attributes. This allows for more cache-friendly and efficient memory access, as the block of memory will be contiguous. However, as I mentioned, there are cases where this won't be the most efficient approach (although I believe it will still be efficient enough). It depends on your hardware and driver.
Precompile and link all your programs before entering the render loop. It's the most efficient approach.
As an extra, I would recommend looking into the uber-shader technique. The idea is to write one shader that covers the different possible inputs, and use a set of preprocessor defines (or subroutines) to compile different versions of the same shader. For instance, you might have a model with a normal texture to which you want to apply bump mapping, while other models might not have that texture, so executing the exact same shader on them would result in undefined behaviour or a crash.

How to remove unused resources from an OpenGL program

I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, even though shader A does not use texB and shader B does not use texA, both textures will be enumerated in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot have many textures defined in one file, since the shader will fail to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the hardware limit). Another problem is that I am doing automatic resource binding, and my shaders end up with lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Shader samplers are not there to select textures, but to pass texture units to the shader. The textures themselves are bound to the texture units. So the selection of which texture to use should not be done in the shader, but in the host program.
Or you could use bindless textures if your OpenGL implementation (=GPU driver) supports these.

Transform Feedback varyings

From some examples of using OpenGL transform feedback, I see that glTransformFeedbackVaryings is called after shader compilation and before program linking. Is this order enforced for all OpenGL versions? Can't a layout qualifier be used to set the indices, just like for vertex arrays? I am asking because in my code the shader program creation process is abstracted from other routines, and before splitting it into separately controllable compile/link methods I would like to know if there's a way around this.
Update:
How is it done when using separable shader objects? There is no explicit link step.
UPDATE:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes
captured is taken from the program object active on the last shader
stage processing the primitives captured by transform feedback. The
set of attributes to capture in transform feedback mode for any other
program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seems to have no effect: my transform feedback writes nothing. Then I found this discussion in the transform feedback docs:
Can you output varyings from a separate shader program created
with glCreateShaderProgramEXT?
RESOLVED: No.
glTransformFeedbackVaryings requires a re-link to take effect on a
program. glCreateShaderProgramEXT detaches and deletes the shader
object used to create the program, so a glLinkProgram will fail.
You can still create a vertex or geometry shader program
with the standard GLSL creation process where you could use
glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that to set transform feedback varyings one should only use regular (non-separable) shader programs? I don't get it.
What you are asking is possible using transform feedback layout qualifiers (GLSL specification, section 4.4.2.1); unfortunately it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - p. 392
If separable program objects are in use, the set of attributes captured is taken
from the program object active on the last shader stage processing the primitives
captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily, linking identifies the varyings (denoted in/out in modern GLSL) that are actually used between stages and establishes the set of "active" uniforms for a GLSL program. Linking trims the dead weight that is not shared across stages and performs static interface validation between stages; it is also when binding locations for any remaining varyings or uniforms are set. Since each program object can be a single stage when using SSOs, the linker is not going to reduce the number of inputs/outputs (varyings), and you can ignore a lot of language in the specification that says this must occur before or after linking.
Since linking is not a step in creating a program object for use with separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage you select). OpenGL uses the program associated with the final vertex processing stage enabled in your pipeline for this purpose; this could be a vertex shader, tessellation evaluation shader, or geometry shader (in that order). Whichever program provides the final vertex processing stage for your pipeline is the program object that transform feedback varyings are relative to.
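Putting that together, one workable pattern (a hedged sketch with placeholder names, not runnable without a GL context) is to build the vertex-stage program through the standard creation path rather than glCreateShaderProgramv, so the varyings can be set before the link:

```cpp
// Sketch only — assumes a GL context; error checking omitted.
// Set the varyings on the program that owns the last vertex-processing
// stage, and link it yourself instead of using glCreateShaderProgramv
// (which links before you get a chance to set them).
GLuint prog = glCreateProgram();
glProgramParameteri(prog, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(prog, vertexShader); // a compiled GL_VERTEX_SHADER object

const char* varyings[] = { "outPosition", "outVelocity" }; // placeholder names
glTransformFeedbackVaryings(prog, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog); // the varyings only take effect at link time

glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, prog);
```

The program is still separable and usable in a pipeline object; the only difference from glCreateShaderProgramv is that you control when the link happens.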

Multiple instances of shaders

I recently read that you can
"have multiple instances of OpenGL shaders"
but no other details were given on this.
I'd like some clarification as to what exactly this means.
For one, I know you can have more than one glProgram, and that you can switch between them. Is that all this is referring to? I'm assuming that switching between several created shader programs per frame would essentially mean I am using several programs "simultaneously".
Or does it somehow refer to having multiple "instances" of the same shader program? That would make no sense to me.
Some basic clarification would be enjoyed here!
When you create a program object you're linking together several shaders, usually at least a vertex and a fragment shader. Now say you want to render a glow around some object. That glow would be created by a different fragment shader, but the vertex shader would be the same as for the regular appearance. To save resources, you can use the same vertex shader in multiple programs, with different fragment shaders linked in. Of course you could also share the same fragment shader across different vertex shaders.
In short you can link a single shader into an arbitrary number of programs. As long as the linked shader stages are compatible with each other this helps with modularization.
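A sketch of that reuse (placeholder names, a hypothetical compile() helper, and an assumed GL context — not runnable as-is):

```cpp
// Sketch only — one compiled vertex shader object linked into two programs.
GLuint vs       = compile(GL_VERTEX_SHADER,   vertexSrc);   // shared stage
GLuint fsNormal = compile(GL_FRAGMENT_SHADER, normalSrc);
GLuint fsGlow   = compile(GL_FRAGMENT_SHADER, glowSrc);

GLuint normalProg = glCreateProgram();
glAttachShader(normalProg, vs);
glAttachShader(normalProg, fsNormal);
glLinkProgram(normalProg);

GLuint glowProg = glCreateProgram();
glAttachShader(glowProg, vs);        // same shader object, second program
glAttachShader(glowProg, fsGlow);
glLinkProgram(glowProg);
```

The vertex shader is compiled once; each program link just combines it with a different fragment stage.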

Web-GL : Multiple Fragment Shaders per Program

Does anyone know if it's possible to have multiple fragment shaders run serially in a single Web-GL "program"? I'm trying to replicate some code I have written in WPF using shader Effects. In the WPF program I would wrap an image with multiple borders and each border would have an Effect attached to it (allowing for multiple Effects to run serially on the same image).
I'm afraid you're probably going to have to clarify your question a bit, but I'll take a stab at answering anyway:
WebGL can support, effectively, as many different shaders as you want. (There are of course practical limits like available memory but you'd have to be trying pretty hard to bump into them by creating too many shaders.) In fact, most "real world" WebGL/OpenGL applications will use a combination of many different shaders to produce the final scene rendered to your screen. (A simple example: Water will usually be rendered with a different shader or set of shaders than the rest of the environment around it.)
When dispatching render commands only one shader program may be active at a time. The currently active program is specified by calling gl.useProgram(shaderProgram); after which any geometry drawn will be rendered with that program. If you want to render an effect that requires multiple different shaders you will need to group them by shader and draw each batch separately:
gl.useProgram(shader1);
// Setup shader1 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader1VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader1
gl.useProgram(shader2);
// Setup shader2 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader2VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader2
// And so on...
The other answers are on the right track. You'd either need to create a shader on the fly that applies all the effects in one shader, or use framebuffers and apply the effects one at a time. There's an example of the latter here:
WebGL Image Processing Continued
As Toji suggested, you might want to clarify your question. If I understand you correctly, you want to apply a set of post-processing effects to an image.
The simple answer to your question is: no, you can't run multiple fragment shaders serially within a single program.
However, there are two ways to accomplish this: First, you can write everything in one fragment shader and combine them in the end. This depends on the effects you want to have!
Second, you can write multiple shader programs (one for each effect) and render your results to a framebuffer object (render to texture). Each shader would get the result of the previous effect and apply the next one. This is a bit more complicated, but it is the most flexible approach.
If you mean to run several shaders in a single render pass, like so (example pulled from thin air):
Vertex color
Texture
Lighting
Shadow
...each stage attached to a single WebGLProgram object, and each stage with its own main() function, then no, GLSL doesn't work this way.
GLSL works more like C/C++, where you have a single global main() function that acts as your program's entry point, and any arbitrary number of libraries attached to it. The four examples above could each be a separate "library," compiled on its own but linked together into a single program, and invoked by a single main() function, but they may not each define their own main() function, because such definitions in GLSL are shared across the entire program.
This unfortunately requires you to write separate main() functions (at a minimum) for every shader you intend to use, which leads to a lot of redundant programming, even if you plan to reuse the libraries themselves. That's why I ended up writing a glorified string mangler to manage my GLSL libraries for Jax; I'm not sure how useful the code will be outside of my framework, but you are certainly free to take a look at it, and make use of anything you find helpful. The relevant files are:
lib/assets/javascripts/jax/shader.js.coffee
lib/assets/javascripts/jax/shader/program.js.coffee
spec/javascripts/jax/shader_spec.js.coffee (tests and usage examples)
spec/javascripts/jax/shader/program_spec.js.coffee (more tests and usage examples)
Good luck!