I have code that uploads data into an OpenGL shader, but when I call glGetAttribLocation() for the array of data I am looking for, it always returns -1 as the location (i.e. not found). I have no idea how to even start debugging this, since the variables are clearly in the shader code (although the vertex shader only passes them on to the geometry shader).
Can someone help me figure out why glGetAttribLocation() reports the attribute as not found? Other items, like worldMatrix for example, work just fine with glGetUniformLocation().
C++ code trying to get the attribute location:
for (unsigned int i = 0; i < _nNumCameras; ++i) {
    const auto glName = "texcoords[" + std::to_string(i) + "]";
    const auto location = glGetAttribLocation(id(), glName.c_str());
    if (location == -1) {
        continue;
    }
    // ... (rest of the loop body omitted)
}
Vertex Shader:
#version 430
#define NUM_CAMERAS 3
uniform mat4 worldMatrix;
uniform mat4 viewProjMatrix;
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in float radius;
in vec2 texcoords[NUM_CAMERAS];
in uint cameraIds[NUM_CAMERAS];
out vec4 gl_Position;
out VS_OUT {
    vec2 v_texCoords[NUM_CAMERAS];
    vec3 v_normal;
    uint cameraIDs[NUM_CAMERAS];
} vs_out;
void main()
{
    gl_Position.xyz = position.xyz;
    gl_Position.w = 1;
    gl_Position = worldMatrix * gl_Position;
    gl_Position = viewProjMatrix * gl_Position;
    vs_out.v_texCoords = texcoords;
    vs_out.cameraIDs = cameraIds;
    vs_out.v_normal = normal;
}
Geometry Shader:
#version 430
#define NUM_CAMERAS 3
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in VS_OUT {
    vec2 v_texCoords[NUM_CAMERAS];
    vec3 v_normal;
    uint cameraIDs[NUM_CAMERAS];
} gs_in[];
out GS_OUT {
    vec2 v_texcoord;
} gs_out;
flat out uint camera_id_unique;
void main() {
    // Code selecting best camera texture
    // ...
    gl_Position = gl_in[0].gl_Position;
    gs_out.v_texcoord = gs_in[0].v_texCoords[camera_id_unique];
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    gs_out.v_texcoord = gs_in[1].v_texCoords[camera_id_unique];
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    gs_out.v_texcoord = gs_in[2].v_texCoords[camera_id_unique];
    EmitVertex();
    EndPrimitive();
}
Fragment Shader:
#version 430
#define NUM_CAMERAS 3
uniform sampler2D colorTextures[NUM_CAMERAS];
in GS_OUT {
    vec2 v_texcoord;
} fs_in;
flat in uint camera_id_unique;
out vec4 color;
void main(){
    color = texture(colorTextures[camera_id_unique], fs_in.v_texcoord);
}
Arrayed program resources work in different ways depending on whether they are arrays of basic types or arrays of structs (or arrays of arrays). Resources that are arrays of basic types expose the entire array as a single resource, whose name is "name[0]" and which has an explicit array size (if you query that property). Other arrayed resources expose separate names for each array element.
Since texcoords is an array of basic types, there is no "texcoords[2]" or "texcoords[1]"; there is only "texcoords[0]". Arrayed attributes are always assigned contiguous locations, so the locations for indices 1 and 2 are simply 1 or 2 plus the location of index 0.
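For illustration, a minimal sketch of how the C++ lookup could then be written, reusing the question's id() and _nNumCameras:
// Query only the first element; an array of basic types is exposed under the single name "texcoords[0]".
const GLint baseLocation = glGetAttribLocation(id(), "texcoords[0]");
if (baseLocation != -1) {
    for (unsigned int i = 0; i < _nNumCameras; ++i) {
        // Arrayed attributes occupy contiguous locations.
        const GLint location = baseLocation + static_cast<GLint>(i);
        // ... set up the vertex attribute at 'location' ...
    }
}
// Keep in mind that -1 is also returned when the attribute is inactive, e.g. because the compiler optimized it away for not affecting any output.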
Related
I have a vertex shader that takes in position, texture coordinates, normals and some uniforms:
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in vec3 normal;
//MVP
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Proj;
//Lighting
uniform mat4 u_InvTranspModel;
uniform mat4 u_LightMVP;
out DATA
{
    vec2 v_TexCoord;
    vec3 v_Normal;
    vec3 v_FragPos;
    vec4 v_LightSpacePos;
    mat4 v_Proj;
} data_out[];
void main()
{
    gl_Position = u_Model * position;
    data_out.v_TexCoord = texCoord;
    data_out.v_Normal = mat3(u_InvTranspModel) * normal;
    data_out.v_Proj = u_Proj * u_View * u_Model;
    //Light
    data_out.v_FragPos = vec3(u_Model*position);
    data_out.v_LightSpacePos = u_LightMVP * position;
}
I'm passing into my geometry shader:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in DATA
{
    vec2 v_TexCoord;
    vec3 v_Normal;
    vec3 v_FragPos;
    vec4 v_LightSpacePos;
    mat4 v_Proj;
} data_in[];
out vec2 g_TexCoord;
out vec3 g_Normal;
out vec3 g_FragPos;
out vec4 g_LightSpacePos;
void main()
{
    gl_Position = data_in[0].v_Proj * gl_in[0].gl_Position;
    g_Normal = data_in[0].v_Normal;
    g_FragPos = data_in[0].v_FragPos;
    g_TexCoord = data_in[0].v_TexCoord;
    g_LightSpacePos = data_in[0].v_LightSpacePos;
    EmitVertex();
    gl_Position = data_in[1].v_Proj * gl_in[1].gl_Position;
    g_Normal = data_in[1].v_Normal;
    g_FragPos = data_in[1].v_FragPos;
    g_TexCoord = data_in[1].v_TexCoord;
    g_LightSpacePos = data_in[1].v_LightSpacePos;
    EmitVertex();
    gl_Position = data_in[2].v_Proj * gl_in[2].gl_Position;
    g_Normal = data_in[2].v_Normal;
    g_FragPos = data_in[2].v_FragPos;
    g_TexCoord = data_in[2].v_TexCoord;
    g_LightSpacePos = data_in[2].v_LightSpacePos;
    EmitVertex();
    EndPrimitive();
}
However I get the errors:
ERROR: 0:28: '.' : dot operator to an array only takes length()
ERROR: 0:28: 'assign' : cannot convert from 'attribute 2-component vector of highp float' to 'varying unknown-sized array of highp block'
ERROR: 0:29: '.' : dot operator to an array only takes length()
ERROR: 0:29: 'assign' : cannot convert from '3-component vector of highp float' to 'varying unknown-sized array of highp block'
................ Etc
I get these errors for every variable in the vertex shader that is passed to the geometry shader.
These shaders work in a form without the geometry shader, and the geometry shader is currently set up to do nothing, so what have I done wrong?
The output of the vertex shader is not an array, even if the next stage is a geometry shader. The vertex shader processes a single vertex, so its outputs are always single values related to that vertex. The geometry shader's input interface is an array because the geometry shader takes a primitive (multiple vertices) as input rather than a single vertex. Remove the [] in the vertex shader:
out DATA
{
    vec2 v_TexCoord;
    vec3 v_Normal;
    vec3 v_FragPos;
    vec4 v_LightSpacePos;
    mat4 v_Proj;
} data_out; // <--- remove []
I ran into some trouble setting up the animation shader for my OpenGL application. Basically it takes an array of 50 glm::mat4 matrices and should set them as a uniform in my GLSL shader. Yet only the first value is actually sent to the shader; all other array entries in the shader are set to 0.
I think the problem occurs when passing from C++ to GLSL:
class model {
    ...
    glm::mat4 finalBoneTransforms[50];
    ...
};
void model::draw() {
    // Set joints
    int jointLoc = glGetUniformLocation(shaderID, "jointTransforms");
    glUniformMatrix4fv(jointLoc, 50, GL_FALSE, glm::value_ptr(finalBoneTransforms[0]));
    ...
}
So why is only the first value passed? Shouldn't OpenGL take the 50 elements stored in contiguous memory starting at the first element, which is referenced via value_ptr?
I would much prefer to use arrays instead of vectors to make sure I don't suffer any pointer invalidation due to reallocation. Aren't elements of an array stored in contiguous memory? Are there any other obvious mistakes causing this weird behaviour?
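For clarity, the upload boils down to the following pattern (a sketch reusing the names from the code above; either "jointTransforms" or "jointTransforms[0]" resolves to the first element of the uniform array):
// Upload all 50 matrices in one call; a C array of glm::mat4 is tightly packed, so the pointer to the first element covers the whole array.
const GLint jointLoc = glGetUniformLocation(shaderID, "jointTransforms[0]");
if (jointLoc != -1) {
    glUniformMatrix4fv(jointLoc, 50, GL_FALSE, glm::value_ptr(finalBoneTransforms[0]));
}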
Edit: Here's the shader code:
#version 330 core
const int MAX_JOINTS = 50;
const int MAX_WEIGHTS = 4;
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormals;
layout (location = 2) in vec2 aTexCoord;
layout (location = 3) in vec4 aBoneWeight;
layout (location = 4) in ivec4 aBoneIndex;
out vec2 texCoords;
uniform mat4 jointTransforms[MAX_JOINTS];
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main(void) {
    vec4 totalLocalPos = vec4(0.0);
    vec4 totalNormal = vec4(0.0);
    for (int i = 0; i < MAX_WEIGHTS; i++) {
        mat4 jointTransform = jointTransforms[aBoneIndex[i]];
        vec4 posePosition = jointTransform * vec4(aPos, 1.0);
        totalLocalPos += posePosition * aBoneWeight[i];
    }
    gl_Position = projection * view * totalLocalPos;
    texCoords = aTexCoord;
}
I am writing the following fragment shader for a game engine:
#version 330 core
layout (location = 0) out vec4 color;
uniform vec4 colour;
uniform vec2 light_pos;
in DATA
{
    vec4 position;
    vec2 uv;
    float tid;
    vec4 color;
} fs_in;
uniform sampler2D textures[32];
void main()
{
    float intensity = 1.0 / length(fs_in.position.xy - light_pos);
    vec4 texColor = fs_in.color;
    if (fs_in.tid > 0.0) {
        int tid = int(fs_in.tid + 0.5);
        texColor = texture(textures[tid], fs_in.uv);
    }
    color = texColor * intensity;
}
The line texColor = texture(textures[tid], fs_in.uv); causes the 'sampler arrays indexed with non-constant expressions are forbidden in GLSL 1.30 and later' error when compiling the shader.
The vertex shader, if needed, is:
#version 330 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uv;
layout (location = 2) in float tid;
layout (location = 3) in vec4 color;
uniform mat4 pr_matrix;
uniform mat4 vw_matrix = mat4(1.0);
uniform mat4 ml_matrix = mat4(1.0);
out DATA
{
    vec4 position;
    vec2 uv;
    float tid;
    vec4 color;
} vs_out;
void main()
{
    gl_Position = pr_matrix * vw_matrix * ml_matrix * position;
    vs_out.position = ml_matrix * position;
    vs_out.uv = uv;
    vs_out.tid = tid;
    vs_out.color = color;
}
In GLSL 3.3, indexing into sampler arrays is only allowed with an integral constant expression (see the GLSL 3.3 spec, section 4.1.7).
In more modern versions, starting with GLSL 4.0, indexing sampler arrays with dynamically uniform expressions is allowed (see the GLSL 4.0 spec, section 4.1.7).
What you are actually trying to do is index the array with a varying, which is simply not possible. If that is absolutely unavoidable, you could pack the 2D textures into a 2D array texture (or a 3D texture) and use the index to address the layer (or the third dimension) of the texture.
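To illustrate the array-texture route, a rough host-side sketch (texWidth, texHeight and layerPixels are placeholders; glTexStorage3D assumes GL 4.2 or ARB_texture_storage, otherwise glTexImage3D can be used instead):
// Pack the 32 layers into one GL_TEXTURE_2D_ARRAY instead of 32 separate samplers.
GLuint arrayTex = 0;
glGenTextures(1, &arrayTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, texWidth, texHeight, 32);
for (int layer = 0; layer < 32; ++layer) {
    // layerPixels(layer) stands in for wherever the pixel data for this layer comes from.
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    texWidth, texHeight, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, layerPixels(layer));
}
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// In the shader a single sampler2DArray then replaces the sampler array, and the layer index may be dynamic:
//   texColor = texture(textures, vec3(fs_in.uv, float(tid)));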
I overcame this issue by using a little code generation.
In JavaScript it looks like this:
const maxNumTextures = 32;
function makeSwitchCase (i) {
  return `
    case ${i}:
      sum += texture2D(texturesArray[${i}], vUv);
      break;
  `;
}
const fragmentShader = `
  #define numTextures ${maxNumTextures}
  uniform sampler2D baseTexture;
  uniform sampler2D texturesArray[numTextures];
  varying vec2 vUv;
  void main() {
    vec4 base = texture2D(baseTexture, vUv);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < numTextures; ++i) {
      switch (i) {
        // we are not allowed to use i as an index to access a texture in an array in the current version of GLSL
        ${new Array(maxNumTextures)
          .fill(0)
          .map((_, i) => makeSwitchCase(i))
          .join('')}
        default: break;
      }
    }
    gl_FragColor = base + sum;
  }
`;
Note: in JavaScript, backticks (`) start and end multiline template strings, and ${some_value} interpolates a value into the template.
I had a similar problem. Switching to a newer version (#version 460, to be precise) worked in my case.
The problem is that in the line texColor = texture(textures[tid], fs_in.uv) the sampler array textures is indexed with a non-constant expression.
The easy workaround is to replace that line with a large switch statement, so that the texture array is accessed with a constant index. For example, you may write something like this:
uniform sampler2D textures[32];
void main()
{
    float intensity = 1.0 / length(fs_in.position.xy - light_pos);
    vec4 texColor = fs_in.color;
    if (fs_in.tid > 0.0) {
        int tid = int(fs_in.tid + 0.5);
        switch (tid) {
        case 0:
            texColor = texture(textures[0], fs_in.uv);
            break;
        case 1:
            texColor = texture(textures[1], fs_in.uv);
            break;
        case 2:
            texColor = texture(textures[2], fs_in.uv);
            break;
        // repeat for all textures
        }
    }
    color = texColor * intensity;
}
I created two shaders for my program to render simple objects.
Vertex shader source:
#version 400 core
layout (location = 1) in vec4 i_vertexCoords;
layout (location = 2) in vec3 i_textureCoords;
layout (location = 3) in vec3 i_normalCoords;
layout (location = 4) in int i_material;
uniform mat4 u_transform;
out VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    int material;
} vs_out;
void main() {
    vs_out.textureCoords = i_textureCoords;
    vs_out.material = i_material;
    gl_Position = u_transform * i_vertexCoords;
    vs_out.normalCoords = gl_Position.xyz;
}
Fragment shader source:
#version 400 core
struct MaterialStruct {
    int ambientTexture;
    int diffuseTexture;
    int specularTexture;
    int bumpTexture;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
    float specularComponent;
    float alpha;
    int illuminationModel;
};
in VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    int material;
} vs_out;
layout (std140) uniform MaterialsBlock {
    MaterialStruct materials[8];
} u_materials;
uniform sampler2D u_samplers[16];
out vec4 fs_color;
void main() {
    MaterialStruct m = u_materials.materials[vs_out.material];
    fs_color = vec4(m.diffuseColor.rgb, m.diffuseColor.a * m.alpha);
}
The program created with these two shaders renders picture 2.
When I change the main() function contents to the following:
void main() {
    MaterialStruct m = u_materials.materials[vs_out.material];
    fs_color = vec4(m.diffuseColor.rgb * (vs_out.normalCoords.z + 0.5), m.diffuseColor.a * m.alpha);
}
It renders picture 1, but the materials still exist (if I select a material from u_materials.materials manually, it works). The shader behaves as if vs_out.material were constant and equal to 0, but it isn't. The data has not changed (apart from the transformation matrix).
Could someone explain the solution to this problem?
The GLSL 4.5 spec states in section 4.3.4 "Input Variables":
"Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type must be qualified with the interpolation qualifier flat."
You can't use interpolation with those types, and actually, your code shouldn't compile on a strict implementation.
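Since a strict implementation rejects this at compile or link time, checking the shader and program logs is the quickest way to see it; a minimal sketch, where shader and program are placeholder handles and <cstdio> is assumed:
GLint ok = GL_FALSE;
// Compile log of the offending shader stage.
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[4096];
    glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
    std::fprintf(stderr, "shader compile error: %s\n", log);
}
// Link log of the whole program.
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE) {
    char log[4096];
    glGetProgramInfoLog(program, sizeof(log), nullptr, log);
    std::fprintf(stderr, "program link error: %s\n", log);
}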
I wonder whether I should worry about unused variables in GLSL.
Consider the following situation (which is just sample code for illustration).
If "trigger" is true, "position" and "normal" are not used in the fragment shader.
Are "position" and "normal" then discarded, or are they still interpolated by the rasterizer?
Vertex shader:
layout(location = 0) in vec3 vertice;
layout(location = 1) in vec2 uv;
layout(location = 2) in vec3 normal;
out VertexData{
    out vec3 position;
    out vec2 uv;
    out vec3 normal;
    flat out bool trigger;
} VertexOut;
void main(){
    gl_Position = vec4(vertice, 1.0);
    VertexOut.uv = uv;
    if (uv.x > 0) {
        VertexOut.trigger = true;
    } else {
        VertexOut.trigger = false;
        VertexOut.position = vertice;
        VertexOut.normal = normal;
    }
}
Fragment shader:
in VertexData{
    in vec3 position;
    in vec2 uv;
    in vec3 normal;
    flat in bool trigger;
} VertexIn;
out vec4 color;
uniform sampler2D tex;
void main(){
    if (VertexIn.trigger) {
        color = texture2D(tex, VertexIn.uv);
    } else {
        vec4 result = texture2D(tex, VertexIn.uv);
        /*
        Lighting with position and normal...
        */
        color = result;
    }
}
It depends on the hardware. On newer hardware a smart compiler can avoid interpolating/fetching these values. Older hardware doesn't let the fragment shader control these operations, so the interpolation would happen anyway (interpolating an undefined value, which is still undefined).
Conditional operations aren't free though, especially if your fragment shader branches are highly divergent.