I'm a bit confused about the right way to bind a texture when the uniforms use layout binding qualifiers.
layout(binding = 0, std140) uniform uCommon
{
mat4 projectionMatrix;
mat4 viewMatrix;
};
layout(binding = 1, std140) uniform uModel
{
mat4 modelViewProjectionMatrix;
};
layout(binding = 3) uniform sampler2D uTexture;
To bind my first texture, should I use GL_TEXTURE0 + 3?
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_2D, textureId);
Is this the correct way?
EDIT: Or do samplers use a separate binding space from the other uniforms? Can I use:
layout(binding = 0) uniform sampler2D uTexture;
while still using
layout(binding = 0, std140) uniform uCommon
Uniform block binding indices have nothing to do with sampler binding locations. These are different things.
The integer constant expression used to specify the binding point or unit does not have to be unique across all usages of the binding keyword.
See OpenGL Shading Language 4.60 Specification; 4.4.5 Uniform and Shader Storage Block Layout Qualifiers; page 77
The binding identifier specifies the uniform buffer binding point corresponding to the uniform or shader storage block, which will be used to obtain the values of the member variables of the block.
See OpenGL Shading Language 4.60 Specification; 4.4.6 Opaque-Uniform Layout Qualifiers; page 79
Image and sampler types both take the uniform layout qualifier identifier for binding:
layout-qualifier-id :
binding = integer-constant-expression
The identifier binding specifies which unit will be bound.
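So yes, the layout in the EDIT works. A minimal host-side sketch of that scenario (assuming GL 4.2+ so the binding layout qualifiers take effect without further API calls; uboCommon, uboModel and textureId are placeholder object names):
// Uniform-block binding points and texture units are separate index spaces,
// so index 0 can be used for both without any conflict.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, uboCommon); // layout(binding = 0) uniform uCommon
glBindBufferBase(GL_UNIFORM_BUFFER, 1, uboModel);  // layout(binding = 1) uniform uModel

glActiveTexture(GL_TEXTURE0 + 0);                  // layout(binding = 0) uniform sampler2D uTexture
glBindTexture(GL_TEXTURE_2D, textureId);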
Related
Trying to translate a vertex/fragment shader from GLSL 330 to GLSL ES 1.0.
(Basically taking a step back, since the original app was written for a desktop version of OpenGL 3.0, but WebGL 2.0 is still not fully supported by some browsers, like IE or Safari, to my knowledge.)
I understand that 1.0 uses attribute/varying instead of in/out, but I am running into the issue that I cannot use integers with varying. There is an array of per-vertex integer values representing a texture unit index for each vertex, and I do not see a way to convey that information to the fragment shader. If I send the values as floats they will be interpolated, right?
#version 330 //for openGL 3.3
//VERTEX shader
//---------------------------------------------------------------------------------
//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform mat4 NormalsMatrix;
uniform vec4 DefaultColor;
uniform vec4 LightColor;
uniform vec3 LightPosition;
uniform float LightIntensity;
uniform bool ExcludeFromLight;
//---------------------------------------------------------------------------------
//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 VertexCoord;
layout (location=1) in vec4 VertexColor;
layout (location=2) in vec3 VertexNormal;
layout (location=3) in vec2 VertexUVcoord;
layout (location=4) in int vertexTexUnit;
//---------------------------------------------------------------------------------
//Output variables to fragment shader
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx; // <------ PROBLEM
out float VertLightIntensity;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
The accompanying fragment shader that needs translation looks like this:
#version 330 //for openGL 3.3
//FRAGMENT shader
//---------------------------------------------------------------------------------
//uniform variables
uniform bool useTextures; //If no textures, don't bother reading the TextureUnit array
uniform vec4 AmbientColor; //Background illumination
uniform sampler2D TextureUnit[6]; //Allow up to 6 texture units per draw call
//---------------------------------------------------------------------------------
//non-uniform variables
in vec2 vertexUVcoord;
in vec4 thisColor;
flat in int TexUnitIdx; // <------ PROBLEM
in float VertLightIntensity;
//---------------------------------------------------------------------------------
//Output color to graphics card
out vec4 pixelColor;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
There are no integer-based attributes in GLSL ES 1.0.
You can of course pass in floats (and supply them as unsigned bytes). Pass false for the normalize flag when calling gl.vertexAttribPointer, as in the sketch below.
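For example, a minimal setup sketch (shown with the C OpenGL API; the WebGL gl.vertexAttribPointer call takes the same arguments; attribLocation and texUnitIndexBuffer are placeholder names):
// Per-vertex texture-unit index stored as GLubyte, read in the shader as a (non-normalized) float.
glBindBuffer(GL_ARRAY_BUFFER, texUnitIndexBuffer);
glVertexAttribPointer(attribLocation, 1, GL_UNSIGNED_BYTE,
                      GL_FALSE /* do not normalize */, 0, (void*)0);
glEnableVertexAttribArray(attribLocation);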
On the other hand, neither GLSL ES 1.0 nor GLSL ES 3.00 allows dynamically indexing an array of samplers.
From the spec
12.30 Dynamic Indexing
...
Indexing of arrays of samplers by constant-index-expressions is supported in GLSL ES 1.00. A constant-index-expression
is an expression formed from constant-expressions and certain loop indices, defined for
a subset of loop constructs. Should this functionality be included in GLSL ES 3.00?
RESOLUTION: No. Arrays of samplers may only be indexed by constant-integral-expressions.
"Should this functionality be included in GLSL ES 3.00?" means should Dynamic indexing of samplers be included in GLES ES 3.00
I quoted the GLSL ES 3.00 spec since it references the GLSL ES 1.0 spec as well.
So, you have to write code so that your indices are constant-index-expressions.
attribute float TexUnitNdx;
...
uniform sampler2D TextureUnit[6];
vec4 getValueFromSamplerArray(float ndx, vec2 uv) {
if (ndx < .5) {
return texture2D(TextureUnit[0], uv);
} else if (ndx < 1.5) {
return texture2D(TextureUnit[1], uv);
} else if (ndx < 2.5) {
return texture2D(TextureUnit[2], uv);
} else if (ndx < 3.5) {
return texture2D(TextureUnit[3], uv);
} else if (ndx < 4.5) {
return texture2D(TextureUnit[4], uv);
} else {
return texture2D(TextureUnit[5], uv);
}
}
vec4 color = getValueFromSamplerArray(TexUnitNdx, someTexCoord);
or something like that. It might be faster to arrange your ifs into a binary search.
I'm implementing simple ray tracing with a compute shader.
But I'm stuck in linking the program object of the compute shader.
#version 440
struct triangle {
vec3 points[3];
};
struct sphere {
vec3 pos;
float r;
};
struct hitinfo {
vec2 lambda;
int idx;
};
layout(binding = 0, rgba32f) uniform image2D framebuffer;
// written by compute shader
layout (local_size_x = 1, local_size_y = 1) in;
uniform triangle triangles[2500];
uniform sphere spheres[2500];
uniform int num_triangles;
uniform int num_spheres;
uniform vec3 eye;
uniform vec3 ray00;
uniform vec3 ray10;
uniform vec3 ray01;
uniform vec3 ray11;
Here is my compute shader code, and I get an "Out of resource" error.
I know the reason for this error is the size of the triangles array, but I need an array that large.
Is there any way to pass a large number of triangles into the shader?
There is only a very limited amount of uniform storage a shader can have. If you need more data than fits in your uniforms, you can use either Uniform Buffer Objects or Shader Storage Buffer Objects to back the data.
In that case, you have to define a GLSL interface block and bind the buffer to it. This means that you only need one block in order to store a large number of similar elements; a host-side sketch is shown below.
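A minimal sketch of the SSBO route (assumptions: GL 4.3+, the compute shader declares something like layout(std430, binding = 0) buffer Triangles { triangle triangles[]; };, and Triangle, triangleData and triangleCount are placeholder host-side names; mind std430 padding rules for vec3 members):
GLuint triangleSSBO;
glGenBuffers(1, &triangleSSBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, triangleSSBO);
glBufferData(GL_SHADER_STORAGE_BUFFER,
             triangleCount * sizeof(Triangle), triangleData, GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, triangleSSBO); // matches binding = 0 in the block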
I am fairly new to OpenGL and trying to achieve instancing using uniform arrays. However, the number of instances I am trying to draw is larger than the MAX_UNIFORM_LOCATIONS limit allows:
QOpenGLShader::link: error: count of uniform locations > MAX_UNIFORM_LOCATIONS (262148 > 98304)
error: Too many vertex shader default uniform block components
error: Too many vertex shader uniform components
What other ways are possible that will work with that large a number of objects? So far this is my shader code:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform vec3 positions[262144];
void main() {
vec3 t = vec3(positions[gl_InstanceID].x, positions[gl_InstanceID].y, positions[gl_InstanceID].z);
float val = 0;
mat4 wm = myMatrix * mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, t.z, 1) * worldMatrix;
color = vec3(0.4, 1.0, 0);
vert = vec3(wm * vertex);
vertNormal = mat3(transpose(inverse(wm))) * normal;
gl_Position = projMatrix * camMatrix * wm * vertex;
}
If it should matter, I am using QOpenGLExtraFunctions.
There are many alternatives for overcoming the limitations of uniform storage:
UBOs, for example; they usually have a larger storage capacity than non-block uniforms. In your case, though, that probably won't work, since storing 200,000 vec4s will require more storage than most implementations allow UBOs to provide. What you need is unbounded storage.
Instanced Arrays
Instanced arrays use the instanced rendering mechanism to automatically fetch vertex attributes based on the instance index. This requires that your VAO setup code change a bit.
Your shader would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 position;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
void main() {
vec3 t = position;
/*etc*/
}
Here, the shader itself never uses gl_InstanceID. That happens automatically based on your VAO.
That setup code would have to include the following:
glBindBuffer(GL_ARRAY_BUFFER, buffer_containing_instance_data);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glVertexAttribDivisor(2, 1);
This code assumes that the instance data is at the start of the buffer and is 3 floats-per-value (tightly packed). Since you're using vertex attributes, you can use the usual vertex attribute compression techniques on them.
The last call, to glVertexAttribDivisor, is what tells OpenGL that it will only move to the next value in the array once per instance, rather than based on the vertex's index.
Note that by using instanced arrays, you also gain the ability to use the baseInstance glDraw* calls. The baseInstance in OpenGL is only respected by instanced arrays; gl_InstanceID is never affected by it. A sketch of the corresponding draw calls follows.
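For example, a draw-call sketch (vertexCount, instanceCount and firstInstance are placeholder values):
// With the divisor set above, the 'position' attribute advances once per instance.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);

// GL 4.2+: start partway into the instance data; this offsets instanced
// attributes but leaves gl_InstanceID unchanged.
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, vertexCount, instanceCount, firstInstance);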
Buffer Textures
Buffer textures are linear, one-dimensional textures that get their data from a buffer object's storage.
Your shader logic would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform samplerBuffer positions;
void main() {
vec3 t = texelFetch(positions, gl_InstanceID).xyz;
/*etc*/
}
Buffer textures can only be accessed via the direct texel fetching functions like texelFetch.
Buffer textures in GL 4.x can use a few 3-channel formats, but earlier GL versions don't give you that option (not without an extension), so you may want to expand your data to 4-channel values rather than 3-channel.
Another problem is that buffer textures do have a maximum size limitation, and the required minimum is only 64KB, so the instanced-array method will probably be more reliable (since it has no such size restriction). However, all non-Intel OpenGL implementations report a huge size limit for buffer textures. A host-side setup sketch for this approach follows.
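A minimal sketch of the buffer-texture setup (assumptions: GL 3.1+, positions stored as vec4/GL_RGBA32F; positionBuffer and program are placeholder names):
GLuint positionsTex;
glGenTextures(1, &positionsTex);
glBindTexture(GL_TEXTURE_BUFFER, positionsTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, positionBuffer); // buffer holding one vec4 per instance

// At draw time, bind it to the texture unit the 'positions' sampler reads from:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, positionsTex);
glUniform1i(glGetUniformLocation(program, "positions"), 1);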
SSBOs
Shader storage buffer objects are like UBOs, only you can both read and write to them. That latter ability isn't important for you. The main advantage here is that their minimum required size limit in OpenGL is 16MB (and implementations generally return a size limit on the order of available video memory), so size limits aren't a problem.
Your shader code would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
buffer PositionSSBO
{
vec4 positions[];
};
void main() {
vec3 t = positions[gl_InstanceID].xyz;
/*etc*/
}
Note that we explicitly use a vec4 here. That's because you should never use vec3 in a buffer-backed interface block (ie: UBO/SSBO).
In code, SSBOs work much like UBOs. You bind them for use with glBindBufferRange, as in the sketch below.
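A minimal binding sketch (assumptions: the PositionSSBO block is assigned binding index 0, e.g. via layout(std430, binding = 0); positionSSBO, positionData and instanceCount are placeholder names):
glBindBuffer(GL_SHADER_STORAGE_BUFFER, positionSSBO);
glBufferData(GL_SHADER_STORAGE_BUFFER, instanceCount * 4 * sizeof(GLfloat),
             positionData, GL_STATIC_DRAW);                 // one vec4 per instance
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, positionSSBO,
                  0, instanceCount * 4 * sizeof(GLfloat));  // or glBindBufferBase for the whole buffer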
I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates, and I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
texCoord = uvCoord;
gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
color = texture2D(tex, texCoord);
}
Both the vertex and fragment shader compile and link without errors, but when I query the attribute locations using the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for position and -1 for uvCoord, so I can only assume that uvCoord has been optimised out, even though I am using it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed but only in black, no texture mapping.
I have written similar applications in Direct3D and HLSL with no problem of attributes being optimised out. I'm thinking it is something simple that I am forgetting or not doing, but I have not found out what.
Replace the 'texture2D' with 'texture', and your attribute will be used.
Bad GLSL compiler: it should not compile your shader since texture2D is not available in core profile.
EDIT: You may have forgotten to call glEnableVertexAttribArray(1); after setting your glVertexAttribPointers, as in the sketch below.
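For reference, a minimal sketch of the attribute setup the EDIT refers to (uvBuffer is a placeholder buffer holding the per-vertex UVs; location 1 matches layout (location = 1) in vec2 uvCoord):
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1); // without this, the attribute array is never sourced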
My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing, and I get no error when I compile that one, and it works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
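For completeness, a sketch of how gl_MultiTexCoord0 is typically fed from the client side in the compatibility profile (uvData is a placeholder client-side array):
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvData); // becomes gl_MultiTexCoord0 in the vertex shader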