I am now setting up the layout qualifier (location) for my GLSL shaders, and the question hit me whether those qualifier IDs need to differ from each other.
Does it have to be:
layout (location = 0) uniform vec3 v1;
layout (location = 1) in vec3 v2;
layout (location = 2) uniform vec3 v3;
layout (location = 3) in vec3 v4;
Or can it be like this (since a location can be specified for either a uniform or an input):
layout (location = 0) uniform vec3 v1;
layout (location = 0) in vec3 v2;
layout (location = 1) uniform vec3 v3;
layout (location = 1) in vec3 v4;
Thanks.
While for vertex shader attributes the layout location is the attribute index,
the layout location for uniform variables is the uniform location.
These are different things.
If you do not set explicit layout locations and read the locations back after linking the shader program, you can see that both kinds can fall in the same range.
The attribute and uniform locations can be queried with glGetAttribLocation and glGetUniformLocation, respectively.
Both of your variants are correct and possible.
Attribute locations must be unique among attributes, and uniform locations must be unique among uniforms.
But the two kinds of location live in separate namespaces, so an attribute and a uniform may share the same location value.
For more detailed information on layout qualifiers, I recommend the OpenGL and GLSL documentation of the Khronos Group:
Layout Qualifier (GLSL)
See also the OpenGL 4.6 API Core Profile Specification - 7.3.1 Program Interfaces:
Each entry in the active resource list for an interface is assigned a unique unsigned integer index in the range zero to N − 1, where N is the number of entries in the active resource list.
While the interface type for uniform variables is UNIFORM, the type for attributes is PROGRAM_INPUT. The location of the different program resources can be retrieved with glGetProgramResourceLocation, given the program interface type and the resource name.
I'm about to implement some functionality which will use uniform buffer objects, and I'm trying to understand the limitations of UBOs before doing so.
For example, let's use these GL_MAX_* values and this simple vertex shader:
- GL_MAX_UNIFORM_BUFFER_BINDINGS -> 84
- GL_MAX_UNIFORM_BLOCK_SIZE -> 16384
- GL_MAX_VERTEX_UNIFORM_BLOCKS -> 14
#version 330 core

layout (location = 0) in vec3 aPos;

// UBO
layout (std140, binding = 0) uniform Matrices
{
    mat4 projection;
    mat4 view;
};

// Individual uniform variables
uniform mat4 model;
uniform vec3 camPos;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
Questions:
Do individual uniform variables each consume one of the GL_MAX_VERTEX_UNIFORM_BLOCKS, or is there a default uniform block where these variables are stored (I'm guessing the latter)?
Is GL_MAX_UNIFORM_BLOCK_SIZE the limit for all defined uniform blocks combined, or does each defined UBO get its own maximum of this size? In the latter case, the maximum amount of uniform data that could be passed to this example's shader program would be 229,376 bytes (14 × 16,384, spread across multiple UBOs).
If my assumption in question 1 is correct where individually defined uniform variables are contained in a default uniform buffer object:
A.) does this default buffer also adhere to the 16384 byte limit, meaning that the combined size of all individually defined uniform variables must not exceed 16384 bytes?
B.) does this default buffer consume a uniform block, leaving max available (before defining any other ubos) 13?
Do individually defined uniform variables count toward the GL_MAX_UNIFORM_BUFFER_BINDINGS parameter, leaving 81 available binding locations in this example?
Uniforms not declared in a block do not count against any uniform block limits. Nor do uniform block limits apply to them; non-block uniforms have their own, separate limitations.
In a GLSL shader, if I have the following layout specifications:
layout (location = 0) uniform mat4 modelMatrix;
layout (location = 1) uniform mat4 viewMatrix;
layout (location = 5) uniform mat4 projMatrix;
layout (location = 30) uniform vec3 diffuseColor;
layout (location = 40) uniform vec3 specularColor;
void main()
{
...
}
Does it matter that there are gaps between the locations? Do these gaps have any impacts in terms of actual memory layout of the data or performance?
Whether it affects performance cannot be known without testing on various implementations. However, as far as the OpenGL specification is concerned, uniform locations are just numbers; they do not represent anything specific about the hardware. So gaps in locations are fine, from a standardization point of view.
Most OpenGL implementations do have an upper limit on the number of locations afforded to attributes, uniforms, etc. So if you specify a number above the maximum limit, the GL might not handle it correctly.
But a lot of it is implementation specific. An implementation might, for example, only allow up to 16* attribute locations, but have no problem with any valid location value so long as the number of unique locations doesn't exceed 16.
More importantly, there's no limit on simply skipping locations:
layout(location = 0) in vec2 vertex;
layout(location = 1) in vec4 color;
layout(location = 3) in uint indicator;
layout(location = 7) in vec2 tex;
Which, of course, you bind as expected:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(3);
glEnableVertexAttribArray(7);
//Assuming all the data is tightly interleaved in a single array buffer:
//the stride is the full vertex size (2 + 4 floats, 1 uint, 2 floats = 36 bytes),
//and each offset is where that attribute starts within one vertex.
glVertexAttribPointer(0, 2, GL_FLOAT, false, 36, (void*)(0));
glVertexAttribPointer(1, 4, GL_FLOAT, false, 36, (void*)(8));
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 36, (void*)(24));
glVertexAttribPointer(7, 2, GL_FLOAT, false, 36, (void*)(28));
*OpenGL guarantees that implementations support at least some number of attribute and uniform locations: the guaranteed minimum for GL_MAX_VERTEX_ATTRIBS is 16, and where explicit uniform locations are supported, the minimum for GL_MAX_UNIFORM_LOCATIONS is 1024.
I installed the latest Vulkan SDK on my computer. However, whenever I want to generate the SPIR-V files for my shaders through glslangValidator.exe, it fails and returns the following errors:
ERROR: Shader.vert:17: 'location' : SPIR-V requires location for user input/output
ERROR: 1 compilation errors. No code generated.
ERROR: Linking vertex stage: Missing entry point: Each stage requires one entry point
SPIR-V is not generated for failed compile or link
I found out that since update 1.0.51.1 there are some changes that might cause my old shaders to fail:
Require locations on user in/out in GL_KHR_vulkan_glsl (internal issue 783).
What is the proper/new way to fix this issue?
Vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 0) uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormals;
layout(location = 2) in vec2 inTexCoord;

layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;

out vec4 Normal;

out gl_PerVertex {
    vec4 gl_Position;
};

void main()
{
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(inPosition, 1.0);
    //fragColor = inColor;
    fragTexCoord = inTexCoord;
    Normal = ubo.proj * ubo.view * ubo.model * vec4(inNormals, 1.0);
}
You need to explicitly set the location through a layout qualifier for all your user input and output variables:
layout( location = <number> ) ...
Vulkan requires all user input and output variables to have an explicitly provided location value (uniform variables are instead addressed through descriptor set and binding numbers). Interface matching between shader stages is performed only through location values (as opposed to OpenGL, where it can be performed through either names or locations). I'm not sure, as I have always provided the location value, but maybe earlier versions of glslangValidator set them implicitly when locations were missing.
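In the shader above, the compile error points at the `Normal` output, which is the only user output without a location. A minimal fix could look like this (location 2 is an assumption based on fragColor and fragTexCoord occupying 0 and 1; the fragment shader must declare a matching input at the same location):

```glsl
layout(location = 2) out vec4 Normal;
```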
I currently have this vertex shader:
#version 130
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
uniform vec3 lightPosition;
void main(void){
    surfaceNormal = normal;
    toLightVector = vec3(1, 100, 1);
    gl_Position = ftransform(); // Yes, I use the fixed-function pipeline. Please don't kill me.
    pass_textureCoords = textureCoords;
}
This works fine, but then when I add this:
#version 130
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
uniform vec3 lightPosition;
void main(void){
    vec3 pos = position; // Completely USELESS added line
    surfaceNormal = normal;
    toLightVector = vec3(1, 100, 1);
    gl_Position = ftransform();
    pass_textureCoords = textureCoords;
}
The whole object just turns black (Or green sometimes, but you get the idea - It isn't working).
(Screenshots of the expected and actual behaviour omitted.)
(The terrain and water are rendered without any shaders, hence why they are not changed)
It's as if the variable "position" is poisonous - If I use it anywhere, even for something useless, my shader simply does not work correctly.
Why could this be happening, and how could I fix it?
You're running into problems because you use both the fixed function position attribute in your vertex shader, and a generic attribute bound to location 0. This is not valid.
While you're not using the fixed function gl_Vertex attribute explicitly, the usage is implied here:
gl_Position = ftransform();
This line is mostly equivalent to this, with some additional precision guarantees:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
But then you are also using a generic attribute for the position:
in vec3 position;
...
vec3 pos = position;
If you assign location 0 for this generic attribute, the result is an invalid shader program. I found the following on page 94 of the OpenGL 3.3 Compatibility Profile spec, in section "2.14.3 Vertex Attributes":
LinkProgram will also fail if the vertex shaders used in the program object contain assignments (not removed during pre-processing) to an attribute variable bound to generic attribute zero and to the conventional vertex position (gl_Vertex).
To avoid this problem, the ideal approach is of course to move away from using fixed function attributes, and adopt the OpenGL Core Profile. If that is outside the scope of the work you can tackle, you need to at least avoid assigning location 0 to any of your generic vertex attributes. You can either:
Use a location > 0 for all attributes when you set the location of the generic attributes with glBindAttribLocation().
Do not set the location at all, and use glGetAttribLocation() to get the automatically assigned locations after you link the program.
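A minimal sketch of the first approach, replacing ftransform() with an explicit matrix uniform so that no fixed-function attribute is implied (`modelViewProjection` is an assumed name; the application would have to upload it itself, e.g. with glUniformMatrix4fv):

```glsl
#version 130

in vec3 position;       // generic attribute; location 0 is now safe to use
in vec2 textureCoords;
in vec3 normal;

out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;

uniform mat4 modelViewProjection; // assumed: uploaded by the application

void main(void){
    surfaceNormal = normal;
    toLightVector = vec3(1, 100, 1);
    gl_Position = modelViewProjection * vec4(position, 1.0); // replaces ftransform()
    pass_textureCoords = textureCoords;
}
```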
Say that I have a vertex shader. Its input section looks like this (simplified):
layout(location = 0) in vec3 V_pos;
layout(location = 1) in vec3 V_norm;
layout(location = 2) in vec2 V_texcoord1;
layout(location = 3) in vec2 V_texcoord2;
layout(location = 4) in int V_texNum;
What I want is to have the first 4 inputs come from an element buffer, while the last will come from a regular buffer. Eg, in this example, each element has two uv pairs, and I want to be able to give certain faces different textures to sample from.
Can this be done? One other option would be to give the shader a huge uniform of integers containing the values for texNum, and access that with gl_VertexID. But, that seems like a really ugly way to do it.
I'm using OpenGL 3.3 (happy to use extensions though) and c++.
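For what it's worth, the uniform-array fallback mentioned in the question would look roughly like this (the array size of 256 and the `flat` output are assumptions; with indexed drawing, gl_VertexID is the index fetched from the element buffer):

```glsl
#version 330 core

layout(location = 0) in vec3 V_pos;
layout(location = 1) in vec3 V_norm;
layout(location = 2) in vec2 V_texcoord1;
layout(location = 3) in vec2 V_texcoord2;

uniform int V_texNums[256]; // assumed size: one entry per vertex index

flat out int texNum;        // flat: integer outputs cannot be interpolated

void main() {
    texNum = V_texNums[gl_VertexID];
    gl_Position = vec4(V_pos, 1.0);
}
```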