C++/OpenGL/GLSL, textures with "random" artifacts

I would like to know if someone has experienced this and knows the reason. I'm getting these strange artifacts after using "texture arrays":
http://i.imgur.com/ZfLYmQB.png
(My gpu is AMD R9 270)
Ninja edit: I deleted the rest of the post for readability, since it was just showing code where the problem could have been. The project is open source now, so I only show the source of the problem (the fragment shader).
Frag:
#version 330 core

layout (location = 0) out vec4 color;

uniform vec4 colour;
uniform vec2 light;

in DATA
{
    vec4 position;
    vec2 uv;
    float tid;
    vec4 color;
} fs_in;

uniform sampler2D textures[32];

void main()
{
    float intensity = 1.0 / length(fs_in.position.xy - light);
    vec4 texColor = fs_in.color;
    if (fs_in.tid > 0.0)
    {
        int tid = int(fs_in.tid - 0.5);
        texColor = texture(textures[tid], fs_in.uv);
    }
    color = texColor; // * intensity;
}
Edit: GitHub repo (sorry if it is missing some libs, I'm having trouble linking them to GitHub): https://github.com/PedDavid/NubDevEngineCpp
Edit: Credit to derhass for pointing out I was doing something with undefined results (indexing the array with a non-constant [tid]). I now have it working with:
switch (tid) {
    case 0: textureColor = texture(textures[0], fs_in.uv); break;
    ...
    case 31: textureColor = texture(textures[31], fs_in.uv); break;
}
Not the prettiest, but fine for now!

I'm getting these strange artifacts after using "texture arrays"
You are not using "texture arrays". You are using arrays of texture samplers. From your fragment shader:
#version 330 core
// ...
in DATA
{
    // ...
    float tid;
} fs_in;
// ...
if (fs_in.tid > 0.0)
{
    int tid = int(fs_in.tid - 0.5);
    texColor = texture(textures[tid], fs_in.uv);
}
What you are trying to do here is not allowed, as per the GLSL 3.30 specification, which states:
Samplers aggregated into arrays within a shader (using square brackets
[ ]) can only be indexed with integral constant expressions (see
section 4.3.3 “Constant Expressions”).
Your tid is not a constant, so this will not work.
In GL 4, this constraint has been somewhat relaxed (the quote is from the GLSL 4.50 spec):
When aggregated into arrays within a shader, samplers can only be
indexed with a dynamically uniform integral expression, otherwise
results are undefined.
But your input isn't dynamically uniform either, so you will get undefined results there, too.
I don't know what you are trying to achieve, but maybe you can get it done by using array textures, which represent a complete set of images as a single GL texture object and do not impose such constraints on accessing them.
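For illustration, a minimal sketch of what sampling such an array texture could look like in GLSL (the uniform name and the tid-to-layer mapping here are my assumptions, not code from your project):

uniform sampler2DArray textureArray; // hypothetical replacement for textures[32]
// ...
// the layer index is an ordinary value, so it may vary per fragment
float layer = fs_in.tid - 1.0; // assumed mapping from your tid encoding
vec4 texColor = texture(textureArray, vec3(fs_in.uv, layer));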

That looks like the shader is rendering whatever random data it finds in memory.
Have you tried checking that glBindTexture(...) is called at the right time (before render) and that the value used (as returned by glGenTextures(...)) is valid?
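A minimal sketch of that kind of check (w, h, pixels and unit are placeholders):

GLuint tex = 0;
glGenTextures(1, &tex);              // tex should be non-zero afterwards
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ... later, right before the draw call:
glActiveTexture(GL_TEXTURE0 + unit); // unit must match the sampler uniform
glBindTexture(GL_TEXTURE_2D, tex);   // glIsTexture(tex) should be GL_TRUE here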

Related

GLSL ES1.0 and passing texture-unit indices to the fragment shader?

Trying to translate a vertex/fragment shader from GLSL 330 to GLSL ES 1.0.
(Basically taking a step back, since the original app was written for a desktop version of OpenGL 3.0, but WebGL 2.0 is still not fully supported by some browsers, like IE or Safari, to my knowledge.)
I understand that 1.0 uses attribute/varying instead of in/out, but I am having an issue in that I cannot use integers with varying. There is an array of per-vertex integer values representing a texture unit index for that vertex. I do not see a way to convey that information to the fragment shader. If I send the values as floats, they will start interpolating. Right?
#version 330 //for openGL 3.3
//VERTEX shader
//---------------------------------------------------------------------------------
//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform mat4 NormalsMatrix;
uniform vec4 DefaultColor;
uniform vec4 LightColor;
uniform vec3 LightPosition;
uniform float LightIntensity;
uniform bool ExcludeFromLight;
//---------------------------------------------------------------------------------
//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 VertexCoord;
layout (location=1) in vec4 VertexColor;
layout (location=2) in vec3 VertexNormal;
layout (location=3) in vec2 VertexUVcoord;
layout (location=4) in int vertexTexUnit;
//---------------------------------------------------------------------------------
//Output variables to fragment shader
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx; // <------ PROBLEM
out float VertLightIntensity;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
The accompanying fragment shader that needs translation looks like this:
#version 330 //for openGL 3.3
//FRAGMENT shader
//---------------------------------------------------------------------------------
//uniform variables
uniform bool useTextures; //If no textures, don't bother reading the TextureUnit array
uniform vec4 AmbientColor; //Background illumination
uniform sampler2D TextureUnit[6]; //Allow up to 6 texture units per draw call
//---------------------------------------------------------------------------------
//non-uniform variables
in vec2 vertexUVcoord;
in vec4 thisColor;
flat in int TexUnitIdx; // <------ PROBLEM
in float VertLightIntensity;
//---------------------------------------------------------------------------------
//Output color to graphics card
out vec4 pixelColor;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
There are no integer-based attributes in GLSL ES 1.0.
You can pass in floats (and supply them as unsigned bytes), of course. Pass in false for the normalize flag when calling gl.vertexAttribPointer.
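In desktop GL syntax (the WebGL call takes the same arguments), a sketch of that attribute setup; texUnitLoc, stride and offset are placeholders:

// one unsigned byte per vertex, normalized = GL_FALSE, so a stored 3 arrives as 3.0
glVertexAttribPointer(texUnitLoc, 1, GL_UNSIGNED_BYTE, GL_FALSE,
                      stride, (const void*)offset);
glEnableVertexAttribArray(texUnitLoc);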
On the other hand, neither GLSL ES 1.00 nor GLSL ES 3.00 allows indexing an array of samplers with an arbitrary (dynamic) expression.
From the spec:
12.30 Dynamic Indexing
...
Indexing of arrays of samplers by constant-index-expressions is supported in GLSL ES 1.00. A constant-index-expression
is an expression formed from constant-expressions and certain loop indices, defined for
a subset of loop constructs. Should this functionality be included in GLSL ES 3.00?
RESOLUTION: No. Arrays of samplers may only be indexed by constant-integral-expressions.
"Should this functionality be included in GLSL ES 3.00?" means should Dynamic indexing of samplers be included in GLES ES 3.00
I quoted the GLSL ES 3.00 spec since it references the GLSL ES 1.0 spec as well.
So you have to write your code so that your indices are constant-index-expressions.
attribute float TexUnitNdx;
...
uniform sampler2D TextureUnit[6];

vec4 getValueFromSamplerArray(float ndx, vec2 uv) {
    if (ndx < 0.5) {
        return texture2D(TextureUnit[0], uv);
    } else if (ndx < 1.5) {
        return texture2D(TextureUnit[1], uv);
    } else if (ndx < 2.5) {
        return texture2D(TextureUnit[2], uv);
    } else if (ndx < 3.5) {
        return texture2D(TextureUnit[3], uv);
    } else if (ndx < 4.5) {
        return texture2D(TextureUnit[4], uv);
    } else {
        return texture2D(TextureUnit[5], uv);
    }
}

vec4 color = getValueFromSamplerArray(TexUnitNdx, someTexCoord);
or something like that. It might be faster to arrange your ifs into a binary search.
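For example, a sketch of the binary-search arrangement for the same six units:

vec4 getValueFromSamplerArray(float ndx, vec2 uv) {
    // split the range first, so at most three comparisons run per lookup
    if (ndx < 2.5) {
        if (ndx < 0.5) {
            return texture2D(TextureUnit[0], uv);
        } else if (ndx < 1.5) {
            return texture2D(TextureUnit[1], uv);
        } else {
            return texture2D(TextureUnit[2], uv);
        }
    } else {
        if (ndx < 3.5) {
            return texture2D(TextureUnit[3], uv);
        } else if (ndx < 4.5) {
            return texture2D(TextureUnit[4], uv);
        } else {
            return texture2D(TextureUnit[5], uv);
        }
    }
}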

OpenGL error 1282 (invalid operation) when using texture()

I have the following fragment shader:
#version 330 core

layout (location = 0) out vec4 color;

uniform vec4 colour;
uniform vec2 light_pos;

in DATA
{
    vec4 position;
    vec2 texCoord;
    float tid;
    vec4 color;
} fs_in;

uniform sampler2D textures[32];

void main()
{
    float intensity = 1.0 / length(fs_in.position.xy - light_pos);
    vec4 texColor = fs_in.color;
    if (fs_in.tid > 0.0)
    {
        int tid = int(fs_in.tid + 0.5);
        texColor = texture(textures[tid], fs_in.texCoord);
    }
    color = texColor * intensity;
}
If I run my program, I get OpenGL error 1282, which is invalid operation. If I don't use texture(), and instead write texColor = vec4(...), it works perfectly. I'm always passing in tid (texture ID) as 0 (no texture), so that part shouldn't even run. I've set the textures uniform to some placeholder, but as far as I know this shouldn't even matter. What could cause the invalid operation then?
Your shader compilation has most likely failed. Make sure that you always check the compile status after trying to compile the shader, using:
GLint val = GL_FALSE;
glGetShaderiv(shaderId, GL_COMPILE_STATUS, &val);
if (val != GL_TRUE)
{
// compilation failed
}
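When the check fails, it usually helps to also fetch the driver's error message; a minimal sketch, reusing shaderId from above:

GLchar log[1024];
glGetShaderInfoLog(shaderId, sizeof(log), NULL, log);
fprintf(stderr, "Shader compilation failed:\n%s\n", log);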
In your case, the shader is illegal because you're trying to access an array of samplers with a variable index:
texColor = texture(textures[tid], fs_in.texCoord);
This is not supported in GLSL 3.30. From the spec (emphasis added):
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions (see section 4.3.3 “Constant Expressions”).
This restriction is relaxed in later OpenGL versions. For example, from the GLSL 4.50 spec:
When aggregated into arrays within a shader, samplers can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
This change was introduced in GLSL 4.00. But it would still not be sufficient for your case, since you're trying to use an in variable as the index, which is not dynamically uniform.
If your textures are all the same size, you may want to consider using an array texture instead. That will allow you to sample one of the layers in the array texture based on a dynamically calculated index.
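A minimal host-side sketch of allocating such an array texture (width, height, layerCount and pixels[layer] are placeholders):

GLuint texArray = 0;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
// allocate one mip level with layerCount layers of width x height texels
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, layerCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int layer = 0; layer < layerCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

In the shader it would then be sampled through a sampler2DArray, with a vec3 coordinate whose third component selects the layer.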
I know this solution is late, but if it helps anybody…
As per Cherno's video, this should work. However, he passes the texture-index attribute ('fs_in.tid') as GL_BYTE in the glVertexAttribPointer call; for some reason related to the conversion, 1.0f always got converted to 0.0f and hence it did not work.
Changing GL_BYTE to GL_FLOAT resolved this issue for me.
Regarding the OpenGL error 1282: it comes from a very common mistake I faced. I used to forget to call glUseProgram(ShaderID) before setting any of the uniforms; setting a uniform without the program bound raises exactly this error, 1282. This could be one of the solutions; it solved it for me.
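In other words, something like this ordering (the names are illustrative):

glUseProgram(shaderProgram);  // must be current before any glUniform* call
GLint loc = glGetUniformLocation(shaderProgram, "light_pos");
glUniform2f(loc, 4.0f, 1.5f); // fine now; without glUseProgram this raises 1282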

OpenGL shader not passing variable from vertex to fragment shader

I'm encountering something really really strange.
I have a very simple program that renders a simple full-screen billboard using the following shader pipeline:
VERTEX SHADER:
#version 430

layout(location = 0) in vec2 pos;
out vec2 tex;

void main()
{
    gl_Position = vec4(pos, 0, 1);
    tex = (pos + 1) / 2;
}
FRAGMENT SHADER:
#version 430

in vec2 tex;
out vec3 frag_color;

void main()
{
    frag_color = vec3(tex.x, tex.y, 1);
}
The program always renders the quad in the correct position (so I've ruled out the VAO as culprit), but for some reason it ignores whatever values I set for tex and always sets it to vec2(0,0), rendering a blue box.
Is there something I'm missing here? I've done many opengl apps before, and I've never encountered this. :/
I've seen implementations of OpenGL 2.0 having problems at compile time when adding floats to integers that were not explicitly cast.
Even though I would not expect this to be a problem on a 4.3 implementation, maybe you should just add dots and suffixes to your "integer" constants.
The last line of your vertex shader seems to be the only one that could be a problem (I believe values are always cast in constructors :p), so you would have something like this instead:
tex = (pos + 1.0f) / 2.0f;
Remember that your shaders can compile on thousands of different implementations, so it's a good habit to always be explicit!

OpenGL shaders: uniform variables count incorrect

I want to do bump/normal/parallax mapping, but for this purpose I need multitexturing: using two textures at a time, one for the color and one for the height map. However, accomplishing this turned out to be absurdly problematic.
I have the following code for the vertex shader:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
layout (location = 0) in vec3 v_vertexPos;
layout (location = 1) in vec2 v_vertexTexCoord;
// 1: out
out vec2 f_vertexTexCoord;
// 2: uniform
uniform mat4 vertexMvp = mat4( 1.0f );
void main()
{
    f_vertexTexCoord = v_vertexTexCoord;
    gl_Position = vertexMvp * vec4( v_vertexPos, 1.0f );
}
and the following for the fragment one:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
in vec2 f_vertexTexCoord;
// 1: out
layout (location = 0) out vec4 f_color;
// 2: uniform
uniform sampler2D cTex;
uniform sampler2D hTex;
// #define BUMP
void main()
{
    vec4 colorVec = texture2D( cTex, f_vertexTexCoord );
#ifdef BUMP
    vec4 bumpVec = texture2D( hTex, f_vertexTexCoord );
    f_color = vec4( mix( bumpVec.rgb, colorVec.rgb, colorVec.a ), 1.0 );
#else
    f_color = texture2D( cTex, f_vertexTexCoord );
#endif
}
The shaders get compiled and attached to the shader program. The program is then linked and used. The only active uniform variables reported by glGetActiveUniform are the vertex shader's vertexMvp and the fragment shader's cTex. hTex is not recognized, and querying its location returns -1. The GL_ARB_multitexture OpenGL extension is supported by the graphics card (which supports OpenGL versions up to 4.3).
I tested the simple multitexturing example provided here, which has only a fragment shader defined and uses the stock vertex pipeline. That example works like a charm.
Any suggestions?
"GLSL compilers and linkers try to be as efficient as possible. Therefore, they do their best to eliminate code that does not affect the stage outputs. Because of this, a uniform defined in a shader file does not have to be made available in the linked program. It is only available if that uniform is used by code that affects the stage output, and that the uniform itself can change the output of the stage.
Therefore, a uniform that is exposed by a fully linked program is called an "active" uniform; any other uniform specified by the original shaders is inactive. Inactive uniforms cannot be used to do anything in a program." - OpenGL.org
Since BUMP is not defined in your fragment shader, hTex is not used in your code, so it is not an active uniform. This is expected behavior.
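A small C++ sketch of how one might list what the linker actually kept, assuming program is the linked program handle:

GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i) {
    GLchar name[256];
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
    printf("active uniform %d: %s\n", i, name);
}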

converting GLSL #130 segment to #330

I have the following piece of shader code that works perfectly with GLSL #130, but I would like to convert it to code that works with version #330 (somehow the #130 version doesn't work on my Ubuntu machine with a GeForce 210; the shader does nothing). After several failed attempts (I keep getting undescribed link errors), I've decided to ask for some help. The code below dynamically changes the contrast and brightness of a texture using the uniform variables Brightness and Contrast. I have implemented it in Python using PyOpenGL:
def createShader():
    """
    Compile a shader that adjusts contrast and brightness of active texture
    Returns
        OpenGL.shader - reference to shader
        dict - reference to variables that can be passed to the shader
    """
    fragmentShader = shaders.compileShader("""#version 130
        uniform sampler2D Texture;
        uniform float Brightness;
        uniform float Contrast;
        uniform vec4 AverageLuminance;
        void main(void)
        {
            vec4 texColour = texture2D(Texture, gl_TexCoord[0].st);
            gl_FragColor = mix(texColour * Brightness,
                               mix(AverageLuminance, texColour, Contrast), 0.5);
        }
        """, GL_FRAGMENT_SHADER)
    shader = shaders.compileProgram(fragmentShader)
    uniform_locations = {
        'Brightness': glGetUniformLocation(shader, 'Brightness'),
        'Contrast': glGetUniformLocation(shader, 'Contrast'),
        'AverageLuminance': glGetUniformLocation(shader, 'AverageLuminance'),
        'Texture': glGetUniformLocation(shader, 'Texture'),
    }
    return shader, uniform_locations
I've looked up the changes that need to be made for the new GLSL version and tried changing the fragment shader code to the following, but then I only get non-descriptive link errors:
fragmentShader = shaders.compileShader("""#version 330
    uniform sampler2D Texture;
    uniform float Brightness;
    uniform float Contrast;
    uniform vec4 AverageLuminance;
    in vec2 TexCoord;
    out vec4 FragColor;
    void main(void)
    {
        vec4 texColour = texture2D(Texture, TexCoord);
        FragColor = mix(texColour * Brightness,
                        mix(AverageLuminance, texColour, Contrast), 0.5);
    }
    """, GL_FRAGMENT_SHADER)
Is there anyone who can help me with this conversion?
I doubt that raising the shader version profile will solve any issue. #version 330 is OpenGL 3.3, and according to the NVidia product website, the maximum OpenGL version supported by the GeForce 210 is OpenGL 3.1, i.e. #version 140.
I created no vertex shader because I didn't think I'd need one (I wouldn't know what I should make it do). It worked before without any vertex shader as well.
Probably only as long as you didn't use a fragment shader, or before you were attempting to use a texture. The fragment shader needs input variables, coming from a vertex shader, to have something it can use as texture coordinates. TexCoord is not a built-in variable (and with higher GLSL versions, any built-in variables suitable for the job have been removed), so you need to fill it with a value (and meaning) in a vertex shader.
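A minimal vertex shader along those lines might look like this (the attribute name and the fullscreen-quad assumption are mine, not from your code):

#version 330
layout(location = 0) in vec2 position; // assumed [-1,1] quad attribute
out vec2 TexCoord;                     // matches the fragment shader's input

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    TexCoord = position * 0.5 + 0.5;   // map [-1,1] clip space to [0,1] UVs
}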
glGetString(GL_VERSION) on the NVidia machine reads out OpenGL version 3.3.0. This is Ubuntu, so it might be possible that it differs from the Windows specifications?
Do you have the NVidia proprietary drivers installed? And are they actually used? Check with glxinfo or glGetString(GL_RENDERER). OpenGL 3.3 is not too far from OpenGL 3.1, and in theory OpenGL major versions map to hardware capabilities.