OpenGL Shader error 1282 - c++

I am trying to add lighting to my current scene of a simple cube. After setting up my uniforms, I get a 1282 error from glGetError() for this piece of code:
GLuint ambientHandle = glGetUniformLocation(program->getHandle(), "ambientProduct");
glUniform4fv( ambientHandle, 1, ambientProduct );
GLuint diffuseHandle = glGetUniformLocation(program->getHandle(), "diffuseProduct");
glUniform4fv( diffuseHandle, 1, diffuseProduct );
GLuint specularHandle = glGetUniformLocation(program->getHandle(), "specularProduct");
glUniform4fv( specularHandle, 1, specularProduct );
GLuint lightPosHandle = glGetUniformLocation(program->getHandle(), "lightPosition");
glUniform4fv( lightPosHandle, 1, light.position );
GLuint shinyHandle = glGetUniformLocation(program->getHandle(), "shininess");
glUniform1f( shinyHandle, materialShininess );
Here are my shaders:
vertex.glsl
#version 120
attribute vec4 coord3d;
attribute vec3 normal3d;
// output values that will be interpolated per-fragment
varying vec3 fN;
varying vec3 fE;
varying vec3 fL;
uniform vec4 lightPosition;
uniform mat4 mTransform;
void main()
{
    fN = normal3d;
    fE = coord3d.xyz;
    fL = lightPosition.xyz;
    if( lightPosition.w != 0.0 ) {
        fL = lightPosition.xyz - coord3d.xyz;
    }
    gl_Position = mTransform*coord3d;
}
fragment.glsl
// per-fragment interpolated values from the vertex shader
varying vec3 fN;
varying vec3 fL;
varying vec3 fE;
uniform vec4 ambientProduct, diffuseProduct, specularProduct;
uniform mat4 mTransform;
uniform vec4 lightPosition;
uniform float shininess;
void main()
{
    // Normalize the input lighting vectors
    vec3 N = normalize(fN);
    vec3 E = normalize(fE);
    vec3 L = normalize(fL);
    vec3 H = normalize( L + E );
    vec4 ambient = ambientProduct;
    float Kd = max(dot(L, N), 0.0);
    vec4 diffuse = Kd*diffuseProduct;
    float Ks = pow(max(dot(N, H), 0.0), shininess);
    vec4 specular = Ks*specularProduct;
    // discard the specular highlight if the light's behind the vertex
    if( dot(L, N) < 0.0 ) {
        specular = vec4(0.0, 0.0, 0.0, 1.0);
    }
    gl_FragColor = ambient + diffuse + specular;
    gl_FragColor.a = 1.0;
}
The products and position are each a struct of three GLfloats and shininess is a float. I have checked all of the values of the handles and the values I am passing and they all seem valid. Ideas?
--EDIT:
I have narrowed it to the glUniform4fv calls. It happens after each one. Also I have double checked that the program->getHandle() is pointing to something that looks valid.
I have checked program->getHandle is a valid program
Here are the values of all handles:
Program handle 3
ambientHandle 0
diffuseHandle 1
specularHandle 5
lightPosHandle 2
shinyHandle 4
So they all look good. For testing I am commenting out the lines below the ones for ambientProduct. For clarity I am explicitly using this line instead
glUniform4f( ambientHandle, ambientProd.x, ambientProd.y, ambientProd.z, ambientProd.w );
These are the values for ambientProd at the time that line is executed.
x = 0.200000003, y = 0.0, z = 0.200000003, w = 1.0
A collaborator on this project moved the call for glUseProgram. Thanks for the help folks.

Error number `1282` is not very descriptive on its own.
The possible error codes for glGetUniformLocation are:
GL_INVALID_VALUE
GL_INVALID_OPERATION
These enums do have fixed values in the standard GL headers: 1282 is 0x0502, which is GL_INVALID_OPERATION. You can also get a readable string with gluErrorString().
Just a shot in the dark, but did you ...
check your shader got compiled without error?
check your shader got linked without error?
BTW: the return type of glGetUniformLocation is GLint, not GLuint.
"Shaders compiled and linked without error" Hmm, this looks odd.
According to spec (see: http://www.opengl.org/sdk/docs/man/xhtml/glGetUniformLocation.xml) GL_INVALID_OPERATION should only be generated if:
program is not a program object
program has not been successfully linked
Other question:
are you sure the getHandle() method of the class your program object belongs to returns the right id, i.e. the one that was used in the successful link?
You should be able to verify this by checking that glIsProgram(program->getHandle()) returns GL_TRUE.
EDIT: Ah - I missed those calls to glUniform4fv in your example.
Correct return type for glGetUniformLocation is still GLint, but I don't think that's the problem.
According to spec (see: http://www.opengl.org/sdk/docs/man/xhtml/glUniform.xml) the glUniform* calls generate GL_INVALID_OPERATION for a whole bunch of reasons. To me the only one that possibly seems to apply is:
there is no current program object
Did you call glUseProgram(program->getHandle()) prior to calling glUniform()?

Generally this error number occurs when you use a program ID different from the one OpenGL generated when you created the shader program, i.e. you have the wrong program ID bound at the time you set up the vertex shader, fragment shader, or whatever other shader you are using.

Related

glGetUniformLocation returns -1 even though I used the variable in shader

It seems that the GLSL compiler optimizes away (removes) unused variables.
In my case, however, I did use the variable, but glGetUniformLocation still returns -1.
rgbScene.addRenderStage(
    [&obj = std::as_const(model)](Camera* cam) {
        auto& mesh = std::get<Mesh>(obj);
        auto& program = std::get<ShaderProgram>(obj);

        glUseProgram(program);
        glBindVertexArray(mesh.getData<VAO>());

        int loc;
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "proj_matrix"),
            1, GL_FALSE,
            glm::value_ptr(cam->getProjectionMatrix())
        );
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "view_matrix"),
            1, GL_FALSE,
            glm::value_ptr(cam->getViewMatrix())
        );
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "model_matrix"),
            1, GL_FALSE,
            glm::value_ptr(mesh.getModelMatrix())
        );
        glUniform3fv(
            loc = glGetUniformLocation(program, "light_direction"),
            1, glm::value_ptr(-(cam->getForward()))
        );

        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.getData<VertexData>());
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.getData<NormalData>());
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

        glDrawArrays(GL_TRIANGLES, 0, mesh.getSize());

        glDisableVertexAttribArray(1);
        glDisableVertexAttribArray(0);
    });
I checked the variable loc by debugging line by line in Visual Studio 2019, and the last glGetUniformLocation call returns -1.
Here's my vertex shader code:
#version 460 core
uniform mat4 proj_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform vec3 light_direction;
layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
out VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} vs_out;
void main(void)
{
    vec4 P = view_matrix * model_matrix * vec4(pos, 1.0);
    vs_out.N = mat3(view_matrix * model_matrix) * normal;
    vs_out.L = mat3(view_matrix) * (-light_direction);
    vs_out.V = -P.xyz;
    gl_Position = proj_matrix * P;
}
I tried changing the variable name, using a different order... but could not fix this problem.
Are there any other rules for uniform variables in shaders?
-- Edit --
for fragment shader,
#version 460 core
layout (location = 0) out vec4 color;
in VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} fs_in;
uniform vec3 diffuse_albedo = vec3(0.8, 0.3, 0.2);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
void main(void)
{
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);
    vec3 R = reflect(-L, N);
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;
    color = vec4(diffuse + specular, 1.0);
    // color = vec4(1.0, 1.0, 1.0, 1.0);
}
The fragment shader's input variables N, L and V have to be "used", too.
Note, the active resources are determined when the program is linked. If an input to the fragment shader is unused, the uniforms which set the corresponding output variable in the vertex shader may not become active.
See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces, page 102:
7.3.1 Program Interfaces
When a program object is made part of the current rendering state, its executable code may communicate with other GL pipeline stages or application code through a variety of interfaces. When a program is linked, the GL builds a list of active resources for each interface. Examples of active resources include variables, interface blocks, and subroutines used by shader code. Resources referenced in shader code are considered active unless the compiler and linker can conclusively determine that they have no observable effect on the results produced by the executable code of the program. For example, variables might be considered inactive if they are declared but not used in executable code, used only in a clause of an if statement that would never be executed, used only in functions that are never called, or used only in computations of temporary variables having no effect on any shader output. In cases where the compiler or linker cannot make a conclusive determination, any resource referenced by shader code will be considered active. The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker
If a program is linked successfully, the GL will generate lists of active resources based on the executable code produced by the link.

Why does my WebGL shader not let me use varyings?

When I try to link my vertex and fragment shaders into a program, WebGL throws Varyings with the same name but different type, or statically used varyings in fragment shader are not declared in vertex shader: textureCoordinates
I have varying vec2 test in both my vertex and fragment shaders, and can't see any reason why the compiler wouldn't be able to find the same varying in both.
Vertex Shader:
varying vec2 test;
void main(void) {
gl_Position = vec4(0.0, 0.0, 0.0, 0.0);
test = vec2(1.0, 0.0);
}
Fragment Shader:
precision highp float;
varying vec2 test;
void main(void) {
gl_FragColor = vec4(test.xy, 0.0, 1.0);
}
Test code:
const canvas = document.createElement('canvas');
gl = canvas.getContext('webgl')
let vert = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vert, "varying vec2 test;\nvoid main(void) {\n gl_Position = vec4(0.0, 0.0, 0.0, 0.0);\n test = vec2(1.0, 0.0);\n}");
gl.compileShader(vert);
let frag = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(frag, "precision highp float;\nvarying vec2 test;\nvoid main() {\n\tgl_FragColor = vec4(test.xy, 0.0, 1.0);\n}");
gl.compileShader(frag);
let program = gl.createProgram();
gl.attachShader(program, vert);
gl.attachShader(program, frag);
gl.linkProgram(program);
gl.useProgram(program);
Just a guess, but I wonder if it's because you're not using the textureCoordinates in your fragment shader. The names & types match just fine, so I don't think that's the issue. I've done the same thing here:
Frag:
// The fragment shader is the rasterization process of webgl
// use float precision for this shader
precision mediump float;
// the input texture coordinate values from the vertex shader
varying vec2 vTextureCoord;
// the texture data, this is bound via gl.bindTexture()
uniform sampler2D texture;
// the colour uniform
uniform vec3 color;
void main(void) {
// gl_FragColor is the output colour for a particular pixel.
// use the texture data, specifying the texture coordinate, and modify it by the colour value.
gl_FragColor = texture2D(texture, vec2(vTextureCoord.s, vTextureCoord.t)) * vec4(color, 1.0);
}
Vert:
// setup passable attributes for the vertex position & texture coordinates
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
// setup a uniform for our perspective * lookat * model view matrix
uniform mat4 uMatrix;
// setup an output variable for our texture coordinates
varying vec2 vTextureCoord;
void main() {
    // take our final matrix to modify the vertex position to display the data on screen in a perspective way
    // With shader code here, you can modify the look of an image in all sorts of ways
    // the 4th value here is the w coordinate, and it is called Homogeneous coordinates, (x,y,z,w).
    // It effectively allows the perspective math to work. With 3d graphics, it should be set to 1. Less than 1 will appear too big
    // Greater than 1 will appear too small
    gl_Position = uMatrix * vec4(aVertexPosition, 1);
    vTextureCoord = aTextureCoord;
}
Issue was resolved by updating Chrome for OSX from v51.something to 52.0.2743.82 (64-bit). Weird.

GLSL uniform access causing segfault in program

I have a program set up with deferred rendering. I am in the process of removing my position texture in favour of reconstructing positions from depth. I have done this before with no trouble but now for some reason I am getting a segfault when trying to access matrices I pass in through uniforms!
My fragment shader (vertex shader irrelevant):
#version 430 core
layout(location = 0) uniform sampler2D depth;
layout(location = 1) uniform sampler2D diffuse;
layout(location = 2) uniform sampler2D normal;
layout(location = 3) uniform sampler2D specular;
layout(location = 4) uniform mat4 view_mat;
layout(location = 5) uniform mat4 inv_view_proj_mat;
layout(std140) uniform light_data{
// position ect, works fine
} light;
in vec2 uv_f;
vec3 reconstruct_pos(){
    float z = texture(depth, uv_f).r;
    vec4 pos = vec4(uv_f * 2.0 - 1.0, z * 2.0 - 1.0, 1.0);
    //pos = inv_view_proj_mat * pos; //un-commenting this line causes segfault
    return pos.xyz / pos.w;
}
layout(location = 3) out vec4 lit; // location 3 is lighting texture
void main(){
    vec3 pos = reconstruct_pos();
    lit = vec4(0.0, 1.0, 1.0, 1.0); // just fill screen with light blue
}
And as you can see the code causing this segfault is shown in the reconstruct_pos() function.
Why is this causing a segfault? I have checked the data within the application, it is correct.
EDIT:
The code I use to update my matrix uniforms:
// bind program
glUniformMatrix4fv(4, 1, GL_FALSE, &view_mat[0][0]);
glUniformMatrix4fv(5, 1, GL_FALSE, &inv_view_proj_mat[0][0]);
// do draw calls
The problem was my call to glBindBufferBase when allocating my light buffer. Now that I have corrected the arguments I am passing, everything works fine with no segfaults.
Now the next question is: Why are all of my uniform locations reporting to be -1 O_o
Maybe it's the default location, who knows.
glUniformMatrix4fv() expects the input data to be 16 contiguous floats in column-major order (effectively a flat float array[16], unless you pass GL_TRUE for the transpose parameter). A plain C float array[4][4] is also stored contiguously, so that works as well; what causes a segfault or malfunction is passing data that is not laid out sequentially, e.g. an array of row pointers (float**) or a matrix class with extra members or padding.

OpenGL shaders: uniform variables count incorrect

I want to do bump/normal/parallax mapping, but for this purpose I need multitexturing: using 2 textures at a time, one for the color and one for the height map. This task turned out to be absurdly problematic.
I have the following code for the vertex shader:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
layout (location = 0) in vec3 v_vertexPos;
layout (location = 1) in vec2 v_vertexTexCoord;
// 1: out
out vec2 f_vertexTexCoord;
// 2: uniform
uniform mat4 vertexMvp = mat4( 1.0f );
void main()
{
f_vertexTexCoord = v_vertexTexCoord;
gl_Position = vertexMvp * vec4( v_vertexPos, 1.0f );
}
and the following for the fragment one:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
in vec2 f_vertexTexCoord;
// 1: out
layout (location = 0) out vec4 f_color;
// 2: uniform
uniform sampler2D cTex;
uniform sampler2D hTex;
// #define BUMP
void main()
{
    vec4 colorVec = texture2D( cTex, f_vertexTexCoord );
#ifdef BUMP
    vec4 bumpVec = texture2D( hTex, f_vertexTexCoord );
    f_color = vec4( mix( bumpVec.rgb, colorVec.rgb, colorVec.a), 1.0 );
#else
    f_color = texture2D( cTex, f_vertexTexCoord );
#endif
}
The shaders compile and are attached to the shader program; the program is then linked and used. The only active uniform variables reported by glGetActiveUniform are the vertex shader's vertexMvp and the fragment shader's cTex. hTex is not recognized, and querying its location returns -1. The GL_ARB_multitexture OpenGL extension is supported by the graphics card (it supports OpenGL versions up to 4.3).
Tested the simple multitexturing example provided here which has only fragment shader defined, using the stock vertex one. This example works like a charm.
Any suggestions?
"GLSL compilers and linkers try to be as efficient as possible. Therefore, they do their best to eliminate code that does not affect the stage outputs. Because of this, a uniform defined in a shader file does not have to be made available in the linked program. It is only available if that uniform is used by code that affects the stage output, and that the uniform itself can change the output of the stage.
Therefore, a uniform that is exposed by a fully linked program is called an "active" uniform; any other uniform specified by the original shaders is inactive. Inactive uniforms cannot be used to do anything in a program." - OpenGL.org
Since BUMP is not defined in your fragment shader, hTex is never used in your code, so it is not an active uniform. This is expected behavior.

GLSL shading problem: Why is my sphere in greyscale instead of red? (see code)

I'm working on a beginner-level GLSL shader program. I'm following this tutorial. But my sphere always appears in greyscale, not colored red as I expected.
Vertex Shader:
varying vec3 normal, lightDir;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vertex_in_modelview_space = gl_ModelViewMatrx * gl_Vertex;
    lightDir = vec3(gl_LightSource[0].position – vertex_in_modelview_space);
}
Frag Shader:
varying vec3 normal, lightDir;
void main()
{
    const vec4 AmbientColor = vec4(0.1, 0.0, 0.0, 1.0);
    const vec4 DiffuseColor = vec4(1.0, 0.0, 0.0, 1.0);
    vec3 normalized_normal = normalize(normal);
    vec3 normalized_lightDir = normalize(lightDir);
    float DiffuseTerm = clamp(dot(normal, lightDir), 0.0, 1.0);
    gl_FragColor = AmbientColor + DiffuseColor * DiffuseTerm;
}
The code is copied and pasted straight from the tutorial.
From the frag shader, the diffuse color is red, but my sphere is greyscale. I know that the shaders are loaded correctly though because if I take out the code in the frag shader and use the following:
gl_FragColor = vec4(0.0,1.0,0.0,1.0);
then my sphere is solid green as expected. I do not know if it's something in the openGL code (like, Renderer.cpp) that's causing a conflict, or if there's something else wrong.
This is my first time coding in GLSL, and I'm quite confused about which glEnable settings I need to turn on or off for the shader to work properly.
Thanks for any feedback!
EDIT:
Ok, if I call glColor3f before rendering, I can get the right color. But doesn't the light's color directly result in a change of color in the sphere? I'm worried that I'm not actually calling the functions in the shader...
EDIT2:
So it turns out that whenever I put any code in the vertex shader or frag shader (other than gl_Color = ...), the solid color I get disappears... I guess this means that there's something horribly wrong with my shaders?
EDIT3:
Here's the code for setting up my shader (supplied by my TA):
char *vs = NULL, *fs = NULL;
v = glCreateShader(GL_VERTEX_SHADER);
f = glCreateShader(GL_FRAGMENT_SHADER);
vs = textFileRead(vert);
fs = textFileRead(frag);
const char * ff = fs;
const char * vv = vs;
glShaderSource(v, 1, &vv, NULL);
glShaderSource(f, 1, &ff, NULL);
free(vs);
free(fs);
glCompileShader(v);
glCompileShader(f);
p = glCreateProgram();
glAttachShader(p, f);
glAttachShader(p, v);
glLinkProgram(p);
int infologLength = 0;
int charsWritten = 0;
char *infoLog;
glGetProgramiv(p, GL_INFO_LOG_LENGTH, &infologLength);
if (infologLength > 0)
{
    infoLog = (char *)malloc(infologLength);
    glGetProgramInfoLog(p, infologLength, &charsWritten, infoLog);
    printf("%s\n", infoLog);
    free(infoLog);
}
EDIT4:
Using shader logs as suggested by kvark, I managed to fix the bugs in the shaders (turns out there were a couple of mistakes). If you would like to see the final code, please leave a comment or message me (this question is getting long).
It's a good idea to check not just the link log, but also compile logs for each shader and compile/link result:
glGetShaderInfoLog(...)
glGetShaderiv(...,GL_COMPILE_STATUS,...)
glGetProgramiv(...,GL_LINK_STATUS,...)
Make sure the results are positive and the logs are empty (or good).
The diffuse term is calculated incorrectly in your example. It should have the following value:
float DiffuseTerm = max(0.0, dot(normalized_normal,normalized_lightDir) );
You don't need clamp() as the dot() result of normalized vectors can't exceed 1.
If you made sure the shader program is linked correctly, activated it on a draw and the result is still weird, try to select different components of your final color equation to find out the wrong one:
gl_FragColor = DiffuseColor; //can't be grayscale
gl_FragColor = vec4(DiffuseTerm); //should be diffuse grayscale
BTW, glColor3f should have nothing to do with your shader as you don't use gl_Color inside. If the result changes when you call it - that would mean the shader activation failed (it didn't link or wasn't used at all).
Good Luck!
Maybe it's due to an unwanted behaviour with your alpha channel result.
You're actually computing lighting on your alpha channel, ending up with something like:
gl_FragColor.a = 1.0 + 1.0 * DiffuseTerm
which will give you >= 1.0 values.
You should be careful not to include your alpha channel in your output (or even in your calculations).
Try making sure your blending is disabled, or fix your shader to something like :
varying vec3 normal, lightDir;
void main()
{
    const vec3 AmbientColor = vec3(0.1, 0.0, 0.0);
    const vec3 DiffuseColor = vec3(1.0, 0.0, 0.0);
    vec3 normalized_normal = normalize(normal);
    vec3 normalized_lightDir = normalize(lightDir);
    float DiffuseTerm = clamp(dot(normalized_normal, normalized_lightDir), 0.0, 1.0);
    gl_FragColor = vec4(AmbientColor + DiffuseColor * DiffuseTerm, 1.0);
}