How to check that a GLSL shader is under native limits?

For GL_ARB_fragment_program and GL_ARB_vertex_program there's a query like GL_PROGRAM_UNDER_NATIVE_LIMITS_ARB. If the result of the query is GL_FALSE, then the program will most likely be executed in software.
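For reference, that query looks roughly like this (a sketch; it assumes an assembly fragment program is currently bound and uses glGetProgramivARB from those same extensions):
GLint isNative = GL_FALSE;
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                  GL_PROGRAM_UNDER_NATIVE_LIMITS_ARB,
                  &isNative);
if (isNative == GL_FALSE)
{
    // The bound fragment program exceeds native resource limits and will
    // most likely fall back to software execution.
}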
Is there any way to query this for a GLSL shader or program object?

There is no query you can perform to test for something like that. If a shader cannot be executed on the hardware for reasons other than exceeding the limits the standard defines (using more uniforms than allowed, etc.), then the implementation has two options: it can either execute it in software, or simply fail to compile/link the shader and provide a message explaining why.
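In practice, the closest substitute is to check the compile/link status and read the info log, where many drivers leave warnings about exceeded resources or software fallback. A minimal sketch (the shader and program handles are placeholders):
GLint status = GL_FALSE;
GLchar log[4096];
GLsizei len = 0;
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
glGetShaderInfoLog(shader, sizeof(log), &len, log);
// Even when compilation succeeds, some drivers leave hints here about
// exceeded hardware limits.
glLinkProgram(program);
glGetProgramiv(program, GL_LINK_STATUS, &status);
glGetProgramInfoLog(program, sizeof(log), &len, log);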

Related

OpenGL Compute stage with other stages

I want to have a single shader program that has a Compute stage along with the standard graphics stages (vertex, tess control, tess eval, fragment).
Unfortunately if I attach the Compute stage to the rest of the program and then link it, calls to location queries such as glGetAttribLocation (for uniforms/attributes in any stage) start returning -1, indicating they failed to find the named objects. I also tried using layout(location=N), which resulted in nothing being drawn.
If I attach the stages to two different shader programs and use them one right after the other, both work well (the compute shader writes to a VBO and the draw shader reads from the same VBO), except that I have to switch between them.
Are there limitations on combining Compute stage with the standard graphics stages? All the examples I can find have two programs, but I have not found an explanation for why that would need to be the case.
OpenGL actively forbids linking a program that contains a compute shader with any non-compute shader types. You should have gotten a linker error when you tried it.
Also, there's really no reason to do so. The only hypothetical benefit you might have gotten from it is having the two sets of shaders sharing uniform values. There just isn't much to gain from having them in the same program.
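A sketch of that two-program arrangement as described in the question, with a barrier so the draw call sees the compute shader's writes to the shared buffer (all handles and counts below are placeholders):
// Compute pass: writes vertex data into the shared buffer via an SSBO binding.
glUseProgram(computeProgram);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, sharedVbo);
glDispatchCompute(numGroupsX, 1, 1);
// Make the compute shader's buffer writes visible to vertex attribute fetches.
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
// Graphics pass: sources the same buffer as a vertex attribute array.
glUseProgram(drawProgram);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);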

How to share shader code across shading stages in OpenGL

I have some math functions written in GLSL and I want to use them in TES, geometry and fragment stages of the same shader program. All of them are pretty valid for all these shader types, and for now the code is just copy-pasted across shader files.
I want to extract the functions from the shader files and put them into a separate file, yielding a shader "library". I can see at least two ways to do it:
Make a shader source preprocessor which will insert "library" code into shader source where appropriate.
Create additional shader objects, one per shader type, which are compiled from the library source code, and link them together. This way the "library" executable code will be duplicated across shader stages at the compiler and linker level. This is actually how "library" shaders may be used, but in this variant they are stage-specific and cannot be shared across pipeline stages.
Is it possible to compile shader source only once (with appropriate restrictions), link it to a shader program and use it in any stage of the pipeline? I mean something like this:
GLuint shaderLib = glCreateShader(GL_LIBRARY_SHADER);
//...add source and compile....
glAttachShader(shProg, vertexShader);
glAttachShader(shProg, tesShader);
glAttachShader(shProg, geomShader);
glAttachShader(shProg, fragShader);
glAttachShader(shProg, shaderLib);
glLinkProgram(shProg); // Links OK; vertex, TES, geom and frag shader successfully use functions from shaderLib.
Of course, a library shader should not have in or out global variables, but it may use uniforms. Also, function prototypes would need to be declared before use in each shader source, just as is possible when linking several shaders of the same type into one program.
And if the above is not possible, then WHY? Such "library" shaders look very logical for the C-like compilation model of GLSL.
Is it possible to compile shader source only once (with appropriate restrictions), link it to a shader program and use it in any stage of the pipeline?
No. If you have to do this, I'd suggest simply adding the shared text to the various shaders. Don't splice it into the actual source string yourself; instead, add it to the list of shader strings you provide via glShaderSource/glCreateShaderProgramv.
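A sketch of that multi-string approach (the string variables are placeholders); the shared text is compiled into each stage's shader without any manual concatenation:
const GLchar *sources[3] = {
    "#version 430 core\n", // the #version line must come first, exactly once
    libraryCode,           // shared helper functions, no in/out globals
    fragmentStageCode      // stage-specific code including main(), no #version
};
glShaderSource(fragShader, 3, sources, NULL);
glCompileShader(fragShader);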
And if the above is not possible, then WHY?
Because each shader stage is separate.
It should be noted that not even Vulkan changes this. Well, not the way you want: it does allow the reverse, multiple shader stages all in a single SPIR-V module, but it doesn't (at the API level) allow you to have multiple modules provide code for a single stage.
All the high-level constructs of structs, functions, linking of modules, etc. are mostly just niceties that the APIs provide to make your life easier on input. HLSL, for example, allows you to use #include directives, but to do something similar in GLSL you need to perform that pre-processing step yourself.
When the GPU executes your shader, it runs the same code hundreds, thousands, if not millions of times per frame. The driver will have converted your shader into its own hardware-specific instruction set and optimised it down to the smallest number of instructions practicable, to get the best performance out of its instruction caches and shader engines.

Can't access fbo attached texture in GLSL compute shader with gimage2D

I recently wanted to work on a compute shader for OpenGL. In this experiment, I wanted to access one of the color textures attached to a FrameBufferObject. When attempting to pass the texture to the compute shader with a layout(rgba32f) readonly image2D, nothing was passed in. I rewrote the compute shader to use a sampler2D instead. The sampler worked just fine.
I also tested the gimage2D compute shader with just a texture that wasn't attached to anything. This also worked as expected.
I haven't found any documentation stating that a texture attached to an FBO can't be accessed in a compute shader using gimage2D. I also haven't found any documentation stating that a compute shader can't write to an FBO.
I guess my question is why can't a texture, attached to an FBO, be accessed, in a compute shader, using gimage2D? Is there documentation explaining this?
First, in regards to your statement:
"I guess my question is why can't a texture, attached to an FBO, be accessed, in a compute shader, using gimage2D?"
You don't use gimage2D, if you see a type prefixed with g in GLSL documentation it is a generic type. (e.g. gvec<N>, gsampler..., etc.) It means that the function has overloads for every kind of vec<N> or sampler.... In this case, gimage2D is the short way of saying "... this function accepts image2D, iimage2D or uimage2D".
There is no actual gimage2D type, the g prefix was invented solely for the purpose of keeping GLSL documentation short and readable ;)
I think you already know this, because the only actual code listed in the question is using image2D, but the way things were written I was not sure.
As for your actual question, you should look into memory barriers.
Pay special attention to: GL_FRAMEBUFFER_BARRIER_BIT.
Compute shaders are scheduled separately from the stages of the render pipeline; they have their own single-stage pipeline. This means that if you draw something into an FBO attachment, your compute shader may run before you even start drawing, or the compute shader may use an (invalid) cached view of the data because the change made in the render pipeline was not visible to the compute pipeline. Memory barriers will help to synchronize the render pipeline and compute pipeline for resources that are shared between both.
The render pipeline has a lot of implicit synchronization and multi-stage data flow that gives a pretty straightforward sequential ordering for shaders (e.g. glDraw* initiates vertex->geometry->fragment), but the compute pipeline does away with virtually all of this in favor of explicit synchronization. There are all sorts of hazards that you need to consider with compute shaders and image load/store that you do not with traditional vertex/geometry/tessellation/fragment.
In other words, while declaring something coherent in a compute shader together with an appropriate barrier at the shader level will take care of synchronization between compute shader invocations, since the compute pipeline is separate from the render pipeline it does nothing to synchronize image load/store between a compute shader and a fragment shader. For that, you need glMemoryBarrier (...) to synchronize access to the memory resource at the command level. glDraw* (...) (entry-point for the render pipeline) is a separate command from glDispatch* (...) (entry-point for the compute pipeline) and you need to ensure these separate commands are ordered properly for image load/store to exhibit consistent behavior.
Without a memory barrier, there is no guarantee about the order commands are executed in; only that they produce results consistent with the order you issued them. In the render pipeline, which has strictly defined input/output for each shader stage, GL implementations can intelligently re-order commands while maintaining this property with relative ease. With compute shaders as well as image load/store in general, where the I/O is completely determined by run-time flow it is impossible without some help (memory barriers).
TL;DR: The reason it works with a sampler but not with image load/store comes down to coherency guarantees (or the lack thereof). Image load/store simply does not guarantee that reads from an image are coherent (strictly ordered) with respect to anything that writes to an image, and instead requires you to explicitly synchronize access to the image. This is actually beneficial, as it allows you to simultaneously read/write the same image without invoking undefined behavior, but it requires some extra effort on your part to make it work.
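Putting that together for the original setup, the command-level ordering looks roughly like this (a sketch; all handles are placeholders, and the exact barrier bits depend on how the data is produced and consumed):
// Render pass: the fragment shader writes into the FBO color attachment.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glUseProgram(drawProgram);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
// Make those writes visible to image load/store in the compute pipeline.
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_FRAMEBUFFER_BARRIER_BIT);
// Compute pass: reads the same texture through an image unit.
glUseProgram(computeProgram);
glBindImageTexture(0, colorTex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32F);
glDispatchCompute(groupsX, groupsY, 1);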

Transform Feedback varyings

From some examples of using OpenGL transform feedback I see that glTransformFeedbackVaryings is called after program compilation and before linking. Is this required in all OpenGL versions? Can't a layout qualifier be used to set the indices, just like for vertex attributes? I am asking because in my code the shader program creation process is abstracted away from other routines, and before splitting it into separately controllable compile/link methods I would like to know if there's a way around this.
Update:
How is it done when using separable shader objects? There is no explicit linkage step.
UPDATE:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes
captured is taken from the program object active on the last shader
stage processing the primitives captured by transform feedback. The
set of attributes to capture in transform feedback mode for any other
program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seems to have no effect: my transform feedback writes nothing. Then I found this discussion in the Transform Feedback docs:
Can you output varyings from a separate shader program created
with glCreateShaderProgramEXT?
RESOLVED: No.
glTransformFeedbackVaryings requires a re-link to take effect on a
program. glCreateShaderProgramEXT detaches and deletes the shader
object used to create the program so a glLinkProgram will fail.
You can still create a vertex or geometry shader program
with the standard GLSL creation process where you could use
glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that to set transform feedback varyings one should use regular shader programs only? I don't get it.
What you are asking is possible using 4.4.2.1 "Transform Feedback Layout Qualifiers"; unfortunately, it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
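Managing the varyings from the API looks roughly like this (a sketch; the program handle and varying names are placeholders). Note that glTransformFeedbackVaryings only takes effect on the next glLinkProgram:
const GLchar *varyings[] = { "outPosition", "outVelocity" };
glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program); // the varying selection is recorded at link time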
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - pp. 392
If separable program objects are in use, the set of attributes captured is taken
from the program object active on the last shader stage processing the primitives
captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily linking identifies the varyings (denoted in/out in modern GLSL) that are actually used between stages and establishes the set of "active" uniforms for a GLSL program. Linking trims the dead weight that is not shared across multiple stages, performs static interface validation between stages, and is also when binding locations for any remaining varyings or uniforms are assigned. Since each program object can consist of a single stage when using SSOs, the linker is not going to reduce the number of inputs/outputs (varyings), and you can ignore a lot of language in the specification that says it must occur before or after linking.
Since linking is not a step in creating a program object for use with separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage you select). OpenGL uses the program associated with the final vertex processing stage enabled in your pipeline for this purpose; this could be a vertex shader, tessellation evaluation shader, or geometry shader (whichever of those comes last in the pipeline). Whichever program provides the final vertex processing stage for your pipeline is the program object that transform feedback varyings are relative to.
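One way to reconcile this with separate shader objects, echoing the quoted resolution above, is to build the separable program through the standard attach/link path rather than glCreateShaderProgramv, so there is still an explicit link to attach glTransformFeedbackVaryings to. A hedged sketch (handles and names are placeholders):
GLuint vsProg = glCreateProgram();
glProgramParameteri(vsProg, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(vsProg, vertexShader);
const GLchar *varyings[] = { "outPosition" };
glTransformFeedbackVaryings(vsProg, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(vsProg); // captured only if this program provides the last vertex processing stage
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProg);
glBindProgramPipeline(pipeline);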

Web-GL : Multiple Fragment Shaders per Program

Does anyone know if it's possible to have multiple fragment shaders run serially in a single Web-GL "program"? I'm trying to replicate some code I have written in WPF using shader Effects. In the WPF program I would wrap an image with multiple borders and each border would have an Effect attached to it (allowing for multiple Effects to run serially on the same image).
I'm afraid you're probably going to have to clarify your question a bit, but I'll take a stab at answering anyway:
WebGL can support, effectively, as many different shaders as you want. (There are of course practical limits like available memory but you'd have to be trying pretty hard to bump into them by creating too many shaders.) In fact, most "real world" WebGL/OpenGL applications will use a combination of many different shaders to produce the final scene rendered to your screen. (A simple example: Water will usually be rendered with a different shader or set of shaders than the rest of the environment around it.)
When dispatching render commands only one shader program may be active at a time. The currently active program is specified by calling gl.useProgram(shaderProgram); after which any geometry drawn will be rendered with that program. If you want to render an effect that requires multiple different shaders you will need to group them by shader and draw each batch separately:
gl.useProgram(shader1);
// Setup shader1 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader1VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader1
gl.useProgram(shader2);
// Setup shader2 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader2VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader2
// And so on...
The other answers are on the right track. You'd either need to generate, on the fly, a single shader that applies all the effects, or use framebuffers and apply the effects one at a time. There's an example of the latter here:
WebGL Image Processing Continued
As Toji suggested, you might want to clarify your question. If I understand you correctly, you want to apply a set of post-processing effects to an image.
The simple answer to your question is: No, you can't use multiple fragment shaders with one vertex shader.
However, there are two ways to accomplish this: First, you can write everything in one fragment shader and combine them in the end. This depends on the effects you want to have!
Second, you can write multiple shader programs (one for each effect) and write your results to a framebuffer object (render to texture). Each shader would get the results of the previous effect and apply the next one. This is a bit more complicated, but it is the most flexible approach.
If you mean to run several shaders in a single render pass, like so (example pulled from thin air):
Vertex color
Texture
Lighting
Shadow
...each stage attached to a single WebGLProgram object, and each stage with its own main() function, then no, GLSL doesn't work this way.
GLSL works more like C/C++, where you have a single global main() function that acts as your program's entry point, and any arbitrary number of libraries attached to it. The four examples above could each be a separate "library," compiled on its own but linked together into a single program, and invoked by a single main() function, but they may not each define their own main() function, because such definitions in GLSL are shared across the entire program.
This unfortunately requires you to write separate main() functions (at a minimum) for every shader you intend to use, which leads to a lot of redundant programming, even if you plan to reuse the libraries themselves. That's why I ended up writing a glorified string mangler to manage my GLSL libraries for Jax; I'm not sure how useful the code will be outside of my framework, but you are certainly free to take a look at it, and make use of anything you find helpful. The relevant files are:
lib/assets/javascripts/jax/shader.js.coffee
lib/assets/javascripts/jax/shader/program.js.coffee
spec/javascripts/jax/shader_spec.js.coffee (tests and usage examples)
spec/javascripts/jax/shader/program_spec.js.coffee (more tests and usage examples)
Good luck!