OpenGL 3.3+ not compatible with GLSL 130 code? - opengl

I'm having an issue where my GLSL 130 code won't run properly on my somewhat modern hardware (ATI 5850), while the identical code runs perfectly fine on an older laptop with an NVIDIA card. This is the case no matter which OpenGL context I use. What happens is that the attributes in_position, in_colour and in_normal don't bind properly on the new hardware. It appears I am forced to use a newer version of GLSL (330) on newer hardware.
Here is the GLSL code for the vertex shader. It's fairly simple and basic.
#version 130
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 normalMatrix;
in vec4 in_position;
in vec4 in_colour;
in vec3 in_normal;
out vec4 pass_colour;
smooth out vec3 vNormal;
void main()
{
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * in_position;
    vec4 vRes = normalMatrix * vec4(in_normal, 0.0);
    vNormal = vRes.xyz;
    pass_colour = in_colour;
}
What happens is data for:
in vec4 in_position;
in vec4 in_colour;
in vec3 in_normal;
doesn't bind, or doesn't bind fully. The values are oddly distorted. From my testing, everything else works properly. Changing the version to 330 and using the layout(location) qualifier fixes the issue, but that also makes the code incompatible with older versions of OpenGL...
Here is a sample of the code I use to specify these locations.
for the program:
glBindAttribLocation(LD_standard_program, 0, "in_position");
glBindAttribLocation(LD_standard_program, 1, "in_colour");
glBindAttribLocation(LD_standard_program, 2, "in_normal");
and later for the data itself:
--- code to buffer vertex data
glEnableVertexAttribArray(0);
glVertexAttribPointer((GLuint) 0, 4, GL_FLOAT, GL_FALSE, 0, 0);
--- code to buffer colour data
glEnableVertexAttribArray(1);
glVertexAttribPointer((GLuint) 1, 4, GL_FLOAT, GL_FALSE, 0, 0);
--- code to buffer normal data
glEnableVertexAttribArray(2);
glVertexAttribPointer((GLuint) 2, 3, GL_FLOAT, GL_FALSE, 0, 0);
My question is: Isn't OpenGL supposed to be backwards compatible? I'm starting to be afraid that I'll have to write separate shaders for every single version of OpenGL to make my program run on different hardware... Since binding these attributes is very basic functionality, I doubt it's a bug in the ATI implementation...

Are you calling glBindAttribLocation before glLinkProgram? Calling it afterwards has no effect, because the vertex attributes are assigned indices only during glLinkProgram.
In GLSL 3.30+ there is a better way of specifying attribute indices directly in the GLSL code:
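For reference, a minimal sketch of the required ordering; the handle names program, vertShader and fragShader are placeholders, not your actual variables:
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
// These bindings only take effect at link time:
glBindAttribLocation(program, 0, "in_position");
glBindAttribLocation(program, 1, "in_colour");
glBindAttribLocation(program, 2, "in_normal");
glLinkProgram(program);   // attribute indices are assigned here
// Optionally confirm the link succeeded:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);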
layout(location=0) in vec4 in_position;
layout(location=1) in vec4 in_colour;
layout(location=2) in vec3 in_normal;
Edit: oh, I missed the part where you said you had already tried the layout qualifier.

Related

glGetUniformLocation returns -1 even though I used the variable in shader

It seems that the GLSL compiler optimizes away (removes) unused variables.
In my case I do use the variable, but glGetUniformLocation still returns -1.
rgbScene.addRenderStage(
    [&obj = std::as_const(model)](Camera* cam) {
        auto& mesh = std::get<Mesh>(obj);
        auto& program = std::get<ShaderProgram>(obj);
        glUseProgram(program);
        glBindVertexArray(mesh.getData<VAO>());
        int loc;
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "proj_matrix"),
            1, GL_FALSE,
            glm::value_ptr(cam->getProjectionMatrix())
        );
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "view_matrix"),
            1, GL_FALSE,
            glm::value_ptr(cam->getViewMatrix())
        );
        glUniformMatrix4fv(
            loc = glGetUniformLocation(program, "model_matrix"),
            1, GL_FALSE,
            glm::value_ptr(mesh.getModelMatrix())
        );
        glUniform3fv(
            loc = glGetUniformLocation(program, "light_direction"),
            1, glm::value_ptr(-(cam->getForward()))
        );
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.getData<VertexData>());
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.getData<NormalData>());
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
        glDrawArrays(GL_TRIANGLES, 0, mesh.getSize());
        glDisableVertexAttribArray(1);
        glDisableVertexAttribArray(0);
    });
I checked the variable loc by debugging line by line in Visual Studio 2019, and the last glGetUniformLocation call returns -1.
Here's my vertex shader code:
#version 460 core
uniform mat4 proj_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform vec3 light_direction;
layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
out VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} vs_out;
void main(void)
{
    vec4 P = view_matrix * model_matrix * vec4(pos, 1.0);
    vs_out.N = mat3(view_matrix * model_matrix) * normal;
    vs_out.L = mat3(view_matrix) * (-light_direction);
    vs_out.V = -P.xyz;
    gl_Position = proj_matrix * P;
}
I tried changing the variable name and the declaration order... but I cannot fix this problem.
Are there any other rules for uniform variables in shaders?
-- Edit --
For the fragment shader:
#version 460 core
layout (location = 0) out vec4 color;
in VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} fs_in;
uniform vec3 diffuse_albedo = vec3(0.8, 0.3, 0.2);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
void main(void)
{
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);
    vec3 R = reflect(-L, N);
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;
    color = vec4(diffuse + specular, 1.0);
    // color = vec4(1.0, 1.0, 1.0, 1.0);
}
The fragment shader input variables N, L and V have to be "used", too.
Note that the active resources are determined when the program is linked. If an input to the fragment shader is unused, the uniforms which set the corresponding output variable in the vertex shader may not become active.
See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces, page 102:
7.3.1 Program Interfaces
When a program object is made part of the current rendering state, its executable code may communicate with other GL pipeline stages or application code through a variety of interfaces. When a program is linked, the GL builds a list of active resources for each interface. Examples of active resources include variables, interface blocks, and subroutines used by shader code. Resources referenced in shader code are considered active unless the compiler and linker can conclusively determine that they have no observable effect on the results produced by the executable code of the program. For example, variables might be considered inactive if they are declared but not used in executable code, used only in a clause of an if statement that would never be executed, used only in functions that are never called, or used only in computations of temporary variables having no effect on any shader output. In cases where the compiler or linker cannot make a conclusive determination, any resource referenced by shader code will be considered active. The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker
If a program is linked successfully, the GL will generate lists of active resources based on the executable code produced by the link.
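If you want to see which uniforms actually survived the link, you can enumerate the active ones; a minimal sketch, assuming program is the linked program handle:
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i)
{
    GLchar name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    // Writes the uniform's name, array size and type for index i
    glGetActiveUniform(program, (GLuint)i, sizeof(name), &length, &size, &type, name);
    printf("active uniform %d: %s\n", i, name);
}
Any uniform missing from this list was optimized out, and glGetUniformLocation will return -1 for it.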

Simple GLSL render chain doesn't draw reliably

I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives.
Here's an example:
I'm rendering using a simple GLSL shader for the textures and another one for the primitive. Also, I'm waiting for each draw to finish by calling glFinish after each glDrawArrays call.
So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one finished, some textures just aren't rendered.
The primitive however is always rendered.
This doesn't always happen, but the more textures I draw, the more often it occurs.
Thus, I'm assuming that this is a timing problem.
I tried to troubleshoot for the last two days and I just can't find the reason for this.
I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify).
Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
    texCoordV = inTexCoord;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
    fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[#"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[#"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[#"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's parameter application functions. It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after the first render pass) and what it actually looks like (taken after the second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main(void)
{
    colorV = inColor;
    gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
    fragColor = colorV;
}
Found the issue... I didn't realize that the FBO was already being drawn to the screen after the first render pass. That happens on a different thread and wasn't locked properly.
Apparently the context was switched while the compositing was taking place, which explains why it caused different issues seemingly at random, depending on when the second thread switched the context.
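In case it helps others with the same symptom, the general fix is to serialize all access to the shared context. A rough sketch with a plain mutex (requires <mutex>); the make-current, render and present steps are placeholders for whatever your windowing layer provides, and a platform lock such as CGLLockContext on macOS could serve the same purpose:
std::mutex contextMutex;  // shared between the compositing and display threads

// Compositing thread:
{
    std::lock_guard<std::mutex> lock(contextMutex);
    // make the GL context current and run all render passes into the FBO here
}

// Display thread:
{
    std::lock_guard<std::mutex> lock(contextMutex);
    // make the same context current and draw the FBO to the screen here
}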

GLSL UV (vec2) coords Optimised-out

I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates; I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
    texCoord = uvCoord;
    gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
    color = texture2D(tex, texCoord);
}
Both the vertex and fragment shaders compile and link without errors, but when I query the attributes using the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for the position and -1 for uvCoord, so I can only assume that uvCoord has been optimised out, even though I am using it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed, but only in black, with no texture mapping.
I have written similar applications in Direct3D and HLSL with no problems of attributes being optimised out. I'm thinking that it is something simple that I am forgetting or not doing, but I have not found out what.
Replace texture2D with texture, and your attribute will be used.
Bad GLSL compiler: it should not have compiled your shader, since texture2D is not available in the core profile.
EDIT: You may also have forgotten to call glEnableVertexAttribArray(1); after setting your glVertexAttribPointers.
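For completeness, a minimal sketch of setting up both attribute arrays to match the layout qualifiers; the buffer names here are placeholders:
// position at location 0 (vec4), UVs at location 1 (vec2)
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);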

GLSL/OpenGL - Single color instead of the texture

I recently started learning GLSL, and now I have a problem with texturing. I've read all the topics about it and found the same solid color problem described elsewhere, but there it had a different cause. So, I have a simple quadrilateral (the ground) and I simply want to render a grass texture on it. Shaders:
Fragment:
#version 330
uniform sampler2D color_texture;
in vec4 color;
out vec2 texCoord0;
void main()
{
    gl_FragColor = color + texture(color_texture, texCoord0.st);
}
Vertex:
#version 330
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 a_Vertex;
in vec3 a_Color;
in vec2 a_texCoord0;
out vec4 color;
out vec2 texCoord0;
void main()
{
    texCoord0 = a_texCoord0;
    gl_Position = (projection_matrix * modelview_matrix) * vec4(a_Vertex, 1.0);
    color = vec4(a_Color, 0.3);
}
My texture and primitive coords:
static GLint m_primcoords[12]=
{0,0,0,
0,0,100,
100,0,100,
100,0,0};
static GLfloat m_texcoords[8]=
{0.0f,0.0f,
0.0f,1.0f,
1.0f,1.0f,
1.0f,0.0f};
Buffers:
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLint)*12,m_primcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*12,m_colcoords,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*8,m_texcoords,GL_STATIC_DRAW);
and my rendering method:
GLfloat modelviewMatrix[16];
GLfloat projectionMatrix[16];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
cameraMove();
GLuint texturegrass = ploadtexture("grass.BMP");
glBindTexture(GL_TEXTURE_2D, texturegrass);
glGetFloatv(GL_MODELVIEW_MATRIX,modelviewMatrix);
glGetFloatv(GL_PROJECTION_MATRIX,projectionMatrix);
shaderProgram->sendUniform4x4("modelview_matrix",modelviewMatrix);
shaderProgram->sendUniform4x4("projection_matrix",projectionMatrix);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glActiveTexture(GL_TEXTURE0);
shaderProgram->sendUniform("color_texture",0);
glBindBuffer(GL_ARRAY_BUFFER,colorBuffer);
glVertexAttribPointer((GLint)1,3,GL_FLOAT,GL_FALSE,0,0);
glBindBuffer(GL_ARRAY_BUFFER,textureBuffer);
glVertexAttribPointer((GLint)2,2,GL_FLOAT,GL_FALSE,0,(GLvoid*)m_texcoords);
glBindBuffer(GL_ARRAY_BUFFER,vertexcBuffer);
glVertexAttribPointer((GLint)0,3,GL_INT,GL_FALSE,0,0);
glDrawArrays(GL_QUADS,0,12);
So it looks like the code only reads 4 pixels from my texture (the corners), and the output color is something like outColor = ctopleft + ctopright + cbotleft + cbotright, like this.
I can send more code if you want, but I think the problem lies in these lines.
I tried different coordinates, different ordering, everything. I also read almost all the topics about problems like this. I'm using Beginning OpenGL Game Programming, 2nd ed., but I don't have the CD, so I can't check whether I'm coding it correctly, because only parts of the code are in the book.
There are a couple of problems with your code.
In the fragment shader you have declared texCoord0 as out; it should be in in the fragment shader and out in the vertex shader, since it is passed from the latter to the former.
You are also binding your texture before you set the "active" texture unit. That unit defaults to GL_TEXTURE0, but relying on the default is still bad practice.
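For clarity, a minimal sketch of the intended order; texturegrass and shaderProgram are taken from the question, and this is just the reordered fragment, not a complete fix:
glActiveTexture(GL_TEXTURE0);                    // select the texture unit first
glBindTexture(GL_TEXTURE_2D, texturegrass);      // then bind the texture to that unit
shaderProgram->sendUniform("color_texture", 0);  // the sampler reads from unit 0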

glGetUniformLocation returns -1 on NVIDIA cards

I've been having an issue with glGetUniformLocation calls. When building the project on the school computers running ATI graphics cards, the program functions flawlessly. However, on the school computers running NVIDIA cards, the calls to glGetUniformLocation return -1.
//c++ side
glLinkProgram(ShaderIds[0]);
ExitOnGLError("ERROR: Could not link the shader program");
ModelMatrixUniformLocaion = glGetUniformLocation(ShaderIds[0], "ModelMatrix");
ViewMatrixUniformLocation = glGetUniformLocation(ShaderIds[0], "ViewMatrix");
ProjectionMatrixUniformLocation = glGetUniformLocation(ShaderIds[0], "ProjectionMatrix");
ExitOnGLError("ERROR: Could not get the shader uniform locations");
And here is the vertex shader
layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec2 in_Coord;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
varying vec2 v_UVCoord;
out vec4 ex_Color;
void main(void)
{
    gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;
    gl_PointSize = in_Coord.x;
    ex_Color = in_Color;
    v_UVCoord = in_Coord;
}
From what I can tell, it shouldn't be giving me -1 based on optimization of the uniforms, because they are all in use, as are the attributes. Any insight into the matter would be greatly appreciated.
Linking an OpenGL program does not give an OpenGL error when it fails. You must actually check to see if the program linked properly.
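A minimal sketch of that check, reusing the question's ShaderIds[0] handle; the log buffer size is just illustrative:
glLinkProgram(ShaderIds[0]);
GLint linked = GL_FALSE;
glGetProgramiv(ShaderIds[0], GL_LINK_STATUS, &linked);
if (linked != GL_TRUE)
{
    GLchar log[1024];
    GLsizei length = 0;
    glGetProgramInfoLog(ShaderIds[0], sizeof(log), &length, log);
    fprintf(stderr, "Shader program link failed: %s\n", log);
}
If the link failed, glGetUniformLocation will not return valid locations for that program, which would explain the -1 results on the NVIDIA machines.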