Layout specifiers for uniforms - glsl

In the vertex shader we can specify two out variables.
layout(location = 0) out float v_transparency;
layout(location = 1) out vec2 v_texcoord;
In the fragment shader we can receive them like this:
layout(location = 0) in float v_transparency;
layout(location = 1) in vec2 v_texcoord;
The same can also be done without specifying the layout. Is there any advantage in specifying the layouts?

If you link shader objects together into programs containing multiple shader stages, then the interfaces between the two programs must exactly match. That is, for every input in one stage, there must be a corresponding output in the next, and vice-versa. Otherwise, a linker error occurs.
If you use separate programs however, this is not necessary. It is possible for a shader stage output to not be listed among the next shader stage's inputs. Inputs not filled by an output have unspecified values (and therefore this is only useful if you don't read from it); outputs not received by an input are ignored.
However, this rule only applies to interface variables declared with layout(location = #) qualifiers. For variables that match via name (as well as input/output interface blocks), every input must go to an output and vice-versa.
Also, for interface variables with location qualifiers, the types do not have to exactly match. If the output and input types are different vector sizes (or scalar) of the same basic type (vec2 and vec3, uvec4 and uint, but not uvec3 and ivec2), so long as the input has fewer components than the output, the match will still work. So the output can be a vec4 while the input is a vec2; the input will only get the first 2 values.
Note that this is also a function of separate programs, not just the use of location qualifiers.
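As a minimal sketch of that relaxed matching (variable names are illustrative), two separable programs can link like this even though the names and vector sizes differ:

```glsl
// Vertex stage of one separable program:
layout(location = 0) out vec4 v_color;   // writes four components

// Fragment stage of a *different* separable program:
layout(location = 0) in vec2 f_color;    // matched purely by location;
                                         // receives only the first two
                                         // components of v_color
```

Without the location qualifiers, matching falls back to names, and then every input must have an identically named, identically typed output.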

Related

How can I determine the limit on Vertex Shader outputs?

I'm trying to introduce multiple shadow casters, and in the process I'm passing a transformed position for each light in the VS output. The shader compiler reports errors as a result of the output size exceeding some internal limit, but I've been thus far unable to determine what the limit actually is (or how to find it).
The output structure is as follows:
out VS_OUT
{
    vec3 FragPos;
    vec3 Normal;
    vec3 Colour;
    vec2 TexCoord;
    vec4 LightSpaceFragPos[MAX_SHADOW_CASTERS];
} vso;
Compilation has failed at MAX_SHADOW_CASTERS values of 64 and 32, at the very least.
Additionally, any suggestions around improving the capacity of the array would be appreciated. These values are naturally per fragment, so a universal texture/buffer would be impractical as far as I understand.
The maximum number of varying vectors (that's what the variables forming the interface between the vertex and fragment shader stages are called - in earlier GLSL versions they actually used the keyword varying instead of in/out) can be queried via glGetIntegerv with GL_MAX_VARYING_VECTORS as the pname argument.
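As a rough sketch of the arithmetic (exact limits are implementation-dependent; the GL spec guarantees at least 64 vertex output components): each vec4 array element consumes four output components, so the block above needs 11 + 4 * MAX_SHADOW_CASTERS components, and 32 or 64 casters blows well past typical limits. A size that fits the guaranteed minimum looks like:

```glsl
// Hypothetical reduced interface: 3 + 3 + 3 + 2 = 11 components for the
// scalar/vector members, plus 4 per array element.
#define MAX_SHADOW_CASTERS 8   // 11 + 4*8 = 43 components, within the
                               // spec-guaranteed minimum of 64

out VS_OUT
{
    vec3 FragPos;      // 3 components
    vec3 Normal;       // 3
    vec3 Colour;       // 3
    vec2 TexCoord;     // 2
    vec4 LightSpaceFragPos[MAX_SHADOW_CASTERS]; // 4 each
} vso;
```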

Write to some fragment locations but not others

Currently, I have two shaders that are intended to process the same type of objects, but produce different output: one color for the screen, the other selection info.
Output of draw shader:
layout(location = 0) out vec4 outColor;
Output of selection shader:
layout(location = 0) out vec4 selectionInfo0;
layout(location = 1) out ivec4 selectionInfo1;
I am considering combining the shaders together (these two and others in my application) for clarity and ease of maintenance (why edit two shaders when you can edit one?).
Output of unified shader:
layout(location = 0) out vec4 outColor;
layout(location = 1) out vec4 selectionInfo0;
layout(location = 2) out ivec4 selectionInfo1;
Under this scheme, I would set a uniform that determines which fragments need to be written to.
Can I write to some fragment locations and not others?
void main()
{
    if (Mode == 1) {
        outColor = vec4(1, 0, 0, 1);
    }
    else {
        selectionInfo0 = vec4(0.1, 0.2, 0.3, 0.4);
        selectionInfo1 = ivec4(1, 2, 3, 4);
    }
}
Is this a legitimate approach? Is there anything I should be concerned about?
Is this a legitimate approach?
That depends on how you define "legitimate". It can be made to function.
A fragment is either discarded in its entirety or it is not. If it is discarded, then the fragment has (mostly) no effect. If it is not discarded, then all of its outputs either have defined values (ie: you wrote to them), or they have undefined values.
However, undefined values can be OK, depending on other state. For example, the framebuffer's draw buffer state routes FS output colors to actual color attachments. It can also route them to GL_NONE, which throws them away. Similarly, you can use color write masks on a per-attachment basis, turning off writes to attachments you don't want to write to.
But this means that you cannot determine it on a per-fragment basis. You can only determine this using state external to the shader. The FS can't make this happen or not happen; it has to be done between draw calls with state changes.
If Mode is some kind of uniform value, then that should be OK. But if it is something derived on a per-vertex or per-fragment basis, then this will not work effectively.
As for branching performance, again, that depends on Mode. If it's a uniform, you shouldn't be concerned at all. Modern GPUs can handle that just fine. If it is something else... well, your entire scheme stops working for reasons that have already been detailed, so there's no reason to worry about it ;)
That all being said, I would advise against this sort of complexity. It is a confusing way of handling things from the driver's perspective. Also, because you're relying on a lot of things that other applications are not relying on, you open yourself up to driver bugs. Your idea is different from a traditional Ubershader, because your options fundamentally change the nature of the render targets and outputs.
So I would suggest you try to do things in as conventional a way as possible. If you really want to minimize the number of separate files you work with, employ #ifdefs, and simply patch the shader string with a #define, based on the reason you're loading it. So you have one shader file, but 2 programs built from it.
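A minimal sketch of that approach (the macro name is illustrative): one fragment shader source, two programs built from it, with the application prepending a #define after the #version line before compiling the selection variant.

```glsl
// shader.frag -- compiled twice: once as-is for the color program, once
// with "#define SELECTION_PASS" patched in for the selection program.
#version 330 core

#ifdef SELECTION_PASS
layout(location = 0) out vec4  selectionInfo0;
layout(location = 1) out ivec4 selectionInfo1;
#else
layout(location = 0) out vec4 outColor;
#endif

void main()
{
#ifdef SELECTION_PASS
    selectionInfo0 = vec4(0.1, 0.2, 0.3, 0.4);
    selectionInfo1 = ivec4(1, 2, 3, 4);
#else
    outColor = vec4(1, 0, 0, 1);
#endif
}
```

Each program then declares only the outputs its framebuffer actually has, so nothing is left undefined.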

Am I able to initiate blank variables and declare them afterwards?

I would like to do something like this:
vec4 text;
if (something) {
    text = texture(backgroundTexture, pass_textureCoords);
}
if (somethingElse) {
    text = texture(anotherTexture, pass_textureCoords);
}
Is this valid GLSL code, and if not, is there any suitable alternative?
The answer depends on what something and somethingElse actually are.
If this is uniform control flow, which means that all shader invocations execute the same branch, then this is perfectly valid.
If they are non-uniform expressions, then there are some limitations:
When a texture uses mipmapping or anisotropic filtering of any kind, then any texture function that requires implicit derivatives will return undefined results.
To sum it up: yes, this is perfectly valid GLSL code, but textures in non-uniform control flow have some restrictions. More details on this can be found here.
Side note: In the code sample you are not declaring the variables afterwards. You are just assigning values to them.
Edit (more info)
To elaborate a bit more on what uniform and non-uniform control flow are: shaders can (in general) have two types of inputs: uniform variables like uniform vec3 myuniform; and varyings like in vec3 myvarying;. The difference is where the data comes from and how it can change during an invocation:
Uniforms are set from the application side and are therefore constant across a draw call.
Varyings are read from vertex inputs (in the vertex shader) or are passed and interpolated from previous shader stages (e.g. in the fragment shader).
Uniform control flow now means that the condition of the corresponding if statement depends only on uniforms. Everything else is non-uniform control flow. Let's look at this fragment shader example:
//Uniforms
uniform vec3 my_uniform;
uniform sampler2D my_sampler;

//Varyings
in vec2 tex_coord;
in vec2 ndc_coords;

void main()
{
    vec3 result = vec3(0, 0, 0);

    if (my_uniform.y > 0.5) //Uniform control flow
        result += texture(my_sampler, tex_coord).rgb;

    if (ndc_coords.y > 0.5) //Non-uniform control flow
        result += texture(my_sampler, tex_coord).rgb;

    float some_value = ndc_coords.y + my_uniform.y;
    if (some_value > 0.5) //Still non-uniform control flow
        result += texture(my_sampler, tex_coord).rgb;
}
In this shader only the first texture read happens in uniform control flow (since the if condition depends only on a uniform variable). The other two reads are in non-uniform control flow, since their if conditions also depend on a varying (ndc_coords).
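One way around the restriction (a sketch, reusing the sampler names above): use a texture function that takes an explicit level of detail or explicit derivatives, since those need no implicit derivatives and are therefore well defined even in non-uniform control flow.

```glsl
if (ndc_coords.y > 0.5) // non-uniform control flow
{
    // textureLod takes the mipmap level explicitly (here: level 0),
    // so the result is defined even inside this branch.
    result += textureLod(my_sampler, tex_coord, 0.0).rgb;
}
```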

Two layouts with the same location in GLSL 4.4

Is it possible to have two layouts with the same location equate to two different input variables of different types in a shader? Currently, my program is not explicitly assigning any location for the vertex, texture, normal vertex arrays. But in my shader, when I have selected the location 0 for both my vertex position and texture coords, it gives me a perfect output. I wanted to know if this is just a coincidence or is it really possible to assign to the same location? Here is my definition of the input variables in the vertex shader:
#version 440
layout (location = 0) in vec4 VertexPosition;
layout (location = 2) in vec4 VertexNormal;
layout (location = 0) in vec2 VertexTexCoord;
Technically... yes, you can. For Vertex Shader inputs (and only for vertex shader inputs), you can assign two variables to the same location. However, you may not attempt to read from both of them. You can dynamically select which one to read from, but it's undefined behavior if your shader takes a path that reads from both variables.
The relevant quote from the standard is:
The one exception where component aliasing is permitted is for two input variables (not block members) to a vertex shader, which are allowed to have component aliasing. This vertex-variable component aliasing is intended only to support vertex shaders where each execution path accesses at most one input per each aliased component.
But this is stupid and pointless. Don't do it.
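The conventional fix is simply to give every input its own location:

```glsl
#version 440
layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec2 VertexTexCoord;   // no longer aliases location 0
layout (location = 2) in vec4 VertexNormal;
// The application then points each attribute array at the matching index
// via glEnableVertexAttribArray / glVertexAttribPointer.
```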

cg shader parameters

I'm trying to write a shader using cg (for ogre3d). I can't seem to parse a working shader that I'd like to use as a starting point for my own code.
Here's the declaration for the shader:
void main
(
float2 iTexCoord0 : TEXCOORD0,
out float4 oColor : COLOR,
uniform sampler2D covMap1,
uniform sampler2D covMap2,
uniform sampler2D splat1,
uniform sampler2D splat2,
uniform sampler2D splat3,
uniform sampler2D splat4,
uniform sampler2D splat5,
uniform sampler2D splat6,
uniform float splatScaleX,
uniform float splatScaleZ
)
{...}
My questions:
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
(oColor is obviously an output parameter. No question)
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
Are splatScaleX and splatScaleZ also parameters? The ogre definition for the shader program also doesn't list these as parameters.
Does the order of declaration mean anything when sending values from an external program?
I'd like to pass in an array of floats (the height map). I assume that would be
uniform float splatScaleZ,
uniform float heightmap[1024]
)
{...}
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Is there a better way to debug these than just hit/miss and guess?
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
Uniforms are not the same thing as input parameters. iTexCoord0 is a varying input, which is to say that for every vertex, it can have a unique value. This is set with commands like glVertexAttribPointer. Things like vertex coordinates, normals, texcoords, and vertex colors are examples of what you might use a varying input for.
Uniforms, on the other hand, are intended to be static for an entire draw call, or potentially for the entire frame or the life of the program. They are set with glUniform* commands. A uniform might be something like the modelview matrix for an object, or the position of the sun for a lighting calculation. They don't change very often.
[edit] These specific commands are actually from GLSL/OpenGL, but the theory should be the same for Cg. Look up a Cg-specific tutorial to find the exact commands to set varyings and uniforms.
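In GLSL terms (a minimal sketch; names are illustrative), the distinction looks like this:

```glsl
#version 330 core

uniform mat4 modelview;   // uniform: one value per draw call, set via glUniform*

layout(location = 0) in vec3 position;  // varying inputs: a new value per
layout(location = 1) in vec2 texcoord;  // vertex, fed by glVertexAttribPointer

out vec2 v_texcoord;

void main()
{
    v_texcoord  = texcoord;
    gl_Position = modelview * vec4(position, 1.0);
}
```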
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
My Cg is a little rusty, but if it's the same as GLSL then sampler2D is a uniform holding an index that represents which texture unit you want to sample from. When you do something like glActiveTexture(GL_TEXTURE3), glBindTexture(GL_TEXTURE_2D, tex), and then set the sampler uniform to 3, you can sample from that texture.
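Sketched on the GLSL side (the Cg/Ogre plumbing differs, but the idea is the same): the sampler uniform holds a texture-unit index, not the texture itself.

```glsl
#version 330 core
in vec2 iTexCoord0;
out vec4 oColor;

uniform sampler2D covMap1;  // holds a texture-unit index; application side
                            // (illustrative):
                            //   glActiveTexture(GL_TEXTURE3);
                            //   glBindTexture(GL_TEXTURE_2D, tex);
                            //   glUniform1i(glGetUniformLocation(prog, "covMap1"), 3);

void main()
{
    oColor = texture(covMap1, iTexCoord0);  // samples whatever is bound to unit 3
}
```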
Does the order of declaration mean anything when sending values from an external program?
No, the variables are referred to in the external program by their string variable names.
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Unknown. They will likely have "some" initial value, though whether that makes any sense and will generate anything visible is hard to guess.
Is there a better way to debug these than just hit/miss and guess?
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_b.html
See section B.3.8