Why are uniform variables not working in GLSL?

I am currently trying to create a blur shader for post-processing purposes using LWJGL, but I've stumbled across a problem.
The uniforms I want to change are the resolution, the radius, and the direction of the blur (horizontal or vertical, so I can use one shader for both directions). I render the output to a framebuffer, which then gets rendered to the screen. But somehow, the resulting texture is just black. When I use constant values instead of the uniforms, it works exactly as expected, so the problem must be the uniform variables.
This is the code of my fragment shader:
#version 330 core

out vec4 fragColor;

in vec2 coords;
in vec4 color;
in float samplerIndex;

uniform sampler2D samplers[32]; // just an array with ints from 0 to 31
uniform float radius;
uniform vec2 resolution; // screen resolution, or width & height of the framebuffer
uniform vec2 direction;  // either 1 or 0 for x and y

void main() {
    int index = int(samplerIndex);
    vec4 sum = vec4(0.0);
    float blurX = radius / resolution.x * direction.x;
    float blurY = radius / resolution.y * direction.y;
    sum += texture2D(samplers[index], vec2(coords.x - 4.0 * blurX, coords.y - 4.0 * blurY)) * 0.0162162162;
    sum += texture2D(samplers[index], vec2(coords.x - 3.0 * blurX, coords.y - 3.0 * blurY)) * 0.0540540541;
    sum += texture2D(samplers[index], vec2(coords.x - 2.0 * blurX, coords.y - 2.0 * blurY)) * 0.1216216216;
    sum += texture2D(samplers[index], vec2(coords.x - 1.0 * blurX, coords.y - 1.0 * blurY)) * 0.1945945946;
    sum += texture2D(samplers[index], vec2(coords.x, coords.y)) * 0.2270270270;
    sum += texture2D(samplers[index], vec2(coords.x + 1.0 * blurX, coords.y + 1.0 * blurY)) * 0.1945945946;
    sum += texture2D(samplers[index], vec2(coords.x + 2.0 * blurX, coords.y + 2.0 * blurY)) * 0.1216216216;
    sum += texture2D(samplers[index], vec2(coords.x + 3.0 * blurX, coords.y + 3.0 * blurY)) * 0.0540540541;
    sum += texture2D(samplers[index], vec2(coords.x + 4.0 * blurX, coords.y + 4.0 * blurY)) * 0.0162162162;
    fragColor = color * vec4(sum.rgb, 1.0);
}
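As an aside, the nine tap weights in the shader above form a normalized 1-D Gaussian kernel. A quick sanity check (Python, purely illustrative) confirms they sum to 1, which is what keeps the blurred image at its original brightness:

```python
# The 9-tap kernel weights used in the blur shader above (center plus
# four mirrored taps on each side).
weights = [0.0162162162, 0.0540540541, 0.1216216216, 0.1945945946,
           0.2270270270,
           0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162]

# A blur kernel should sum to 1, otherwise the output darkens or brightens.
total = sum(weights)
print(total)  # ≈ 1.0
```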
This is how I render the FrameBuffer:
public void render(boolean fixed) {
    model.setTextureID(Shader.getTextureID(texture));
    model.buffer(vertices);
    indices.put(0).put(1).put(2).put(2).put(3).put(0);
    vertices.flip();
    indices.flip();

    shader.bind();
    texture.bind();

    shader.setUniform1iv("samplers", Shader.SAMPLERS);
    shader.setUniform1f("radius", 10.0f);
    shader.setUniform2f("resolution", width, height);
    shader.setUniform2f("direction", horizontal, vertical);

    if (fixed) {
        shader.setUniformMatrix4f("camProjection", Camera.fixedProjection);
        shader.setUniformMatrix4f("camTranslation", Camera.fixedTranslation);
    } else {
        shader.setUniformMatrix4f("camProjection", Camera.projection);
        shader.setUniformMatrix4f("camTranslation", Camera.translation);
    }

    vao.bind();
    vbo.bind();
    vbo.uploadSubData(0, vertices);
    ibo.bind();
    ibo.uploadSubData(0, indices);

    GL11.glDrawElements(GL11.GL_TRIANGLES, Model.INDICES, GL11.GL_UNSIGNED_INT, 0);

    vao.unbind();
    texture.unbind();
    shader.unbind();

    vertices.clear();
    indices.clear();
}
The sampler uniform works perfectly fine though; the problem only seems to affect the other three uniforms.
I don't have much experience with OpenGL and GLSL, what am I missing?

Your shader code can't work at all, because of
samplers[index]
samplers is an array of sampler2D and index is set from the vertex shader input samplerIndex:
int index = int(samplerIndex);
See the GLSL version you use (OpenGL Shading Language 3.30 Specification - 4.1.7 Samplers):
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions
See GLSL version 4.60 (most recent) (from OpenGL Shading Language 4.60 Specification - 4.1.7. Opaque Types):
(This rule applies to all the versions since GLSL 4.00)
When aggregated into arrays within a shader, these types can only be indexed with a dynamically uniform expression, or texture lookup will result in undefined values.
Thus, neither in the GLSL version you use nor in the most recent version can an array of samplers be indexed by a vertex shader input (attribute).
Since GLSL 4.00 it would be possible to index an array of samplers by a uniform, because indexing by a uniform variable is a dynamically uniform expression.
I recommend to use sampler2DArray (see Sampler) rather than an array of sampler2D.
When you use a sampler2DArray, then you don't need any indexing at all, because the "index" is encoded in the 3rd component of the texture coordinate at texture lookup (see texture).

Related

Fade texture inner borders to transparent in LibGDX using openGL shaders (glsl)

I'm currently working on a tile game in LibGDX and I'm trying to get a "fog of war" effect by obscuring unexplored tiles. The result I get from this is a dynamically generated black texture of the size of the screen that only covers unexplored tiles leaving the rest of the background visible. This is an example of the fog texture rendered on top of a white background:
What I'm now trying to achieve is to dynamically fade the inner borders of this texture to make it look more like a fog that slowly thickens instead of just a bunch of black boxes put together on top of the background.
Googling the problem I found out I could use shaders to do this, so I tried to learn some glsl (I'm at the very start with shaders) and I came up with this shader:
VertexShader:
// attributes passed from openGL
attribute vec3 a_position;
attribute vec2 a_texCoord0;

// variables visible from java
uniform mat4 u_projTrans;

// variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

void main() {
    v_texCoord0 = a_texCoord0;
    gl_Position = u_projTrans * vec4(a_position, 1.0);
}
FragmentShader:
// variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

// variables visible from java
uniform sampler2D u_texture;
uniform vec2 u_textureSize;
uniform int u_length;

void main() {
    vec4 texColor = texture2D(u_texture, v_texCoord0);
    vec2 step = 1.0 / u_textureSize;
    if (texColor.a > 0.0) {
        int maxNearPixels = (u_length * 2 + 1) * (u_length * 2 + 1) - 1;
        for (int i = 0; i <= u_length; i++) {
            for (int j = 0; j <= u_length; j++) {
                if (i != 0 || j != 0) {
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(step.x * float(i), step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                }
            }
        }
    }
    gl_FragColor = texColor;
}
This is the result I got setting a length of 20:
So the shader I wrote kind of works, but it has terrible performance because it's O(n^2), where n is the length of the fade in pixels (which can be quite high, like 60 or even 80). It also has some problems: the edges are still a bit too sharp (I'd like a smoother transition), and some parts of the border are less faded than others (I'd like the fade to be uniform everywhere).
I'm a little bit lost at this point: is there anything I can do to make it better and faster? Like I said, I'm new to shaders, so: is this even the right way to use them?
As others mentioned in the comments, instead of blurring in the screen-space, you should filter in the tile-space while potentially exploiting the GPU bilinear filtering. Let's go through it with images.
First define a texture such that each pixel corresponds to a single tile, black/white depending on the fog at that tile. Here's such a texture blown up:
After applying the screen-to-tiles coordinate transformation and sampling that texture with GL_NEAREST interpolation we get the blocky result similar to what you have:
float t = texture2D(u_tiles, M*uv).r;
gl_FragColor = vec4(t,t,t,1.0);
If instead of GL_NEAREST we switch to GL_LINEAR, we get a somewhat better result:
This still looks a little blocky. To improve on that we can apply a smoothstep:
float t = texture2D(u_tiles, M*uv).r;
t = smoothstep(0.0, 1.0, t);
gl_FragColor = vec4(t,t,t,1.0);
Or here is a version with a linear shade-mapping function:
float t = texture2D(u_tiles, M*uv).r;
t = clamp((t-0.5)*1.5 + 0.5, 0.0, 1.0);
gl_FragColor = vec4(t,t,t,1.0);
Note: these images were generated within a gamma-correct pipeline (i.e. sRGB framebuffer enabled). This is one of those few scenarios, however, where ignoring gamma may give better results, so you're welcome to experiment.
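For reference, the two shade-mapping functions used in the snippets above are plain math and can be reproduced outside GLSL. A small Python sketch (illustrative only) of GLSL's smoothstep and the linear remapping:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def smoothstep(edge0, edge1, x):
    # GLSL's smoothstep: cubic Hermite interpolation between edge0 and edge1.
    t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def linear_shade(t):
    # The linear shade-mapping from above: steepen contrast around 0.5.
    return clamp((t - 0.5) * 1.5 + 0.5, 0.0, 1.0)

print(smoothstep(0.0, 1.0, 0.5))   # 0.5  (the midpoint is preserved)
print(smoothstep(0.0, 1.0, 0.25))  # 0.15625  (values are pushed toward the edges)
print(linear_shade(0.1))           # 0.0  (dark tiles clamp to full fog)
```

Both remappings keep 0 and 1 fixed and only reshape the transition in between, which is why they soften the blocky look without moving the fog boundary.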

GLSL Texel Size

Why do people use this equation to specify the size of a single texel?
vec2 tex_offset = 1.0 / textureSize(image, 0);
In my opinion, this should be:
whole texels -> whole texture size
single texel -> single texel size
So
singleTexelSize = (singleTexel * wholeTextureSize) / wholeTexel = wholeTextureSize / wholeTexel
This is the full fragment shader from the Bloom chapter of Joey de Vries' OpenGL tutorial:
#version 330 core

out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D image;
uniform bool horizontal;
uniform float weight[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main()
{
    vec2 tex_offset = 1.0 / textureSize(image, 0); // gets size of single texel
    vec3 result = texture(image, TexCoords).rgb * weight[0]; // current fragment's contribution
    if(horizontal)
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
        }
    }
    else
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(0.0, tex_offset.y * i)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(0.0, tex_offset.y * i)).rgb * weight[i];
        }
    }
    FragColor = vec4(result, 1.0);
}
I can't follow your proposed way of calculating texel size but maybe this makes it more clear:
Texture coordinates (also referred to as UVs) are normalized, so they're fractions between 0 and 1 no matter the size of the texture, meaning a coordinate of 1 can correspond to 256 pixels or to 2048 pixels. To calculate how to get from one discrete value to the next in a normalized space, you divide 1 by the number of individual values.
To put it in a different context: assume I have 10 apples, and that's 100%. How much percent makes up one apple? 100(%) / 10(apples) = 10(%) = 1(apple).
So if you now want only whole apples from me, you know you have to ask for a multiple of 10%. Convert this to fractions by dividing by 100 and you're back to UVs.
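The apple analogy maps directly onto the texel computation. A small Python illustration (the texture dimensions are made up for the example):

```python
def texel_size(width, height):
    # One texel in normalized UV space: 1 / (number of texels per axis),
    # exactly what 1.0 / textureSize(image, 0) computes in GLSL.
    return (1.0 / width, 1.0 / height)

# A hypothetical 256x128 texture:
print(texel_size(256, 128))  # (0.00390625, 0.0078125)

# Stepping i whole texels along x, as the blur loop does with
# TexCoords + vec2(tex_offset.x * i, 0.0):
u = 0.5
step_x = texel_size(256, 128)[0]
print(u + 3 * step_x)  # 0.51171875
```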
For further reading check out this answer, it also has a nice diagram:
Simulate OpenGL texture wrap + interpolation on canvas?

OpenGL multipass shadow maps on Qt are not rendered

I'm trying to render dynamic shadows cast from a point light onto some cubes. However, it looks like there are no shadows at all on the cubes, particularly on the silver cube:
In my program I use the Qt OpenGL API together with native OpenGL. I do two passes:
1. Create a depth texture (glGenTextures()), create a framebuffer (glGenFramebuffers()) and attach the texture to it (glFramebufferTexture()). Then render all cubes to that framebuffer.
2. Render the whole scene to the screen as usual, using that texture in the shaders.
I use the QOpenGLShaderProgram class for creating shader programs.
How I calculate the shadows in the fragment shader:
float isFragInShadow(vec3 fragPos)
{
    vec3 relFragPos = fragPos - lightPos;

    float closestDepth = texture(depth_map, relFragPos).r;
    closestDepth *= far_plane;

    float currentDepth = length(relFragPos);

    float bias = max(0.05 * (1.0 - dot(normalize(-relFragPos), Normal)), 0.005);
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;

    return shadow;
}
...
float isShadow = isFragInShadow(FragPos);
vec3 resColor = ambient * obj_material.ambient + (1.0 - isShadow) * (diffuse * obj_material.diffuse + specular * obj_material.specular);
I also tried rendering the depth texture to the screen; the result is the following:
https://i.stack.imgur.com/qM9a1.png
How I render:
vec3 lightToFrag = FragPos - lightPos;
float closestDepth = texture(depth_map, lightToFrag).r;
Color = vec4(vec3(closestDepth), 1.0);
Why can I not see shadows in the scene?

OpenGL ES 2.0 shader integer operations

I am having trouble getting integer operations to work in OpenGL ES 2.0 shaders.
GL_SHADING_LANGUAGE_VERSION: OpenGL ES GLSL ES 1.00
One of the lines where I'm having issues is: color.r = floor(f / 65536);
I get this error:
Error linking program: ERROR:SEMANTIC-4 (vertex shader, line 13) Operator not supported for operand types
For more context: within the library I am working with, the only way to pass the color is as a single float into which three 8-bit integers have been packed: 3 (8-bit) ints -> one float passed to the shader -> unpacked back to r, g, b using integer manipulation. This all works fine on desktop OpenGL, but I'm having trouble making it work on the Raspberry Pi.
Full vertex shader code here:
attribute vec4 position;
varying vec3 Texcoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

vec3 unpackColor(float f)
{
    vec3 color;
    f -= 0x1000000;
    color.r = floor(f / 65536);
    color.g = floor((f - color.r * 65536) / 256.0);
    color.b = floor(f - color.r * 65536 - color.g * 256.0);
    return color / 256.0;
}

void main()
{
    Texcoord = unpackColor(position.w);
    gl_Position = proj * view * model * vec4(position.xyz, 1.0);
}
Any ideas how to get this working?
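The unpacking arithmetic itself is sound; the likely culprit is that GLSL ES 1.00 has no implicit int-to-float conversions, so mixed expressions like f / 65536 (float divided by int literal) don't type-check on the Pi's strict compiler, while desktop GLSL accepts them. Writing every constant as a float (65536.0, 256.0) should avoid the error. The math can be checked in Python (illustrative; pack_color is an assumed helper mirroring the packing scheme described in the question):

```python
import math

def pack_color(r, g, b):
    # Pack three 8-bit ints into one float, mirroring the scheme described
    # above (a hypothetical helper for this illustration).
    return float(0x1000000 + r * 65536 + g * 256 + b)

def unpack_color(f):
    # The shader's unpackColor, with every constant written as a float
    # (65536.0, 256.0), as GLSL ES 1.00 would require.
    f -= float(0x1000000)
    r = math.floor(f / 65536.0)
    g = math.floor((f - r * 65536.0) / 256.0)
    b = math.floor(f - r * 65536.0 - g * 256.0)
    return (r, g, b)  # the shader then divides by 256.0 to normalize

print(unpack_color(pack_color(12, 34, 56)))  # (12, 34, 56)
```

The values stay well below 2^24, so the round-trip through a 32-bit float is exact.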

Fullscreen blur GLSL shows diagonal line (OpenGL core)

In OpenGL 2.1, we could create a post-processing effect by rendering to a FBO using a fullscreen quad. OpenGL 3.1 removes GL_QUADS, so we have to emulate this using two triangles instead.
Unfortunately, I am getting a strange issue when trying to apply this technique: a diagonal line appears in the hypotenuse of the two triangles!
This screenshot demonstrates the issue:
  
I did not have a diagonal line in OpenGL 2.1 using GL_QUADS - it appeared in OpenGL 3.x core when I switched to GL_TRIANGLES. Unfortunately, most tutorials online suggest using two triangles and none of them show this issue.
// Fragment shader
#version 140
precision highp float;

uniform sampler2D ColorTexture;
uniform vec2 SampleDistance;

in vec4 TextureCoordinates0;
out vec4 FragData;

#define NORM (1.0 / (1.0 + 2.0 * (0.95894917 + 0.989575414)))

//const float w0 = 0.845633832 * NORM;
//const float w1 = 0.909997233 * NORM;
const float w2 = 0.95894917 * NORM;
const float w3 = 0.989575414 * NORM;
const float w4 = 1 * NORM;
const float w5 = 0.989575414 * NORM;
const float w6 = 0.95894917 * NORM;
//const float w7 = 0.909997233 * NORM;
//const float w8 = 0.845633832 * NORM;

void main(void)
{
    FragData =
        texture(ColorTexture, TextureCoordinates0.st - 2.0 * SampleDistance) * w2 +
        texture(ColorTexture, TextureCoordinates0.st - SampleDistance) * w3 +
        texture(ColorTexture, TextureCoordinates0.st) * w4 +
        texture(ColorTexture, TextureCoordinates0.st + SampleDistance) * w5 +
        texture(ColorTexture, TextureCoordinates0.st + 2.0 * SampleDistance) * w6;
}
// Vertex shader
#version 140
precision highp float;

uniform mat4 ModelviewProjection; // loaded with orthographic projection (-1,-1):(1,1)
uniform mat4 TextureMatrix; // loaded with identity matrix

in vec3 Position;
in vec2 TexCoord;
out vec4 TextureCoordinates0;

void main(void)
{
    gl_Position = ModelviewProjection * vec4(Position, 1.0);
    TextureCoordinates0 = (TextureMatrix * vec4(TexCoord, 0.0, 0.0));
}
What am I doing wrong? Is there a suggested way to perform a fullscreen post-processing effect in OpenGL 3.x/4.x core?
Reto Koradi is correct and I suspect that is what I meant when I wrote my original comment; I honestly do not remember this question.
You absolutely have to swap 2 of the vertex indices to draw a strip instead of a quad or you wind up with a smaller triangle cut out of part of the quad.
ASCII art showing the difference between the primitives generated when the vertices are drawn in 0,1,2,3 order:

GL_QUADS    GL_TRIANGLE_STRIP    GL_TRIANGLE_FAN
  0-3             0 3                 0-3
  | |             |X|                 |\|
  1-2             1-2                 1-2

With the same four vertices laid out in strip order (the last two indices swapped), GL_TRIANGLE_STRIP produces the intended quad:

  0-2
  |/|
  1-3
As for why this may produce better results than two triangles using 6 different vertices, floating-point variance may be to blame. Indexed rendering in general will usually solve this, but using a primitive type like GL_TRIANGLE_STRIP requires 2 fewer indices to draw a quad as two triangles.
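The way each primitive mode assembles triangles from an index sequence can be sketched in a few lines of Python (illustrative, following the OpenGL assembly rules):

```python
def strip_triangles(indices):
    # GL_TRIANGLE_STRIP: each new index forms a triangle with the previous
    # two; the first two vertices are swapped on odd triangles so the
    # winding order stays consistent.
    tris = []
    for i in range(len(indices) - 2):
        a, b, c = indices[i], indices[i + 1], indices[i + 2]
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

def fan_triangles(indices):
    # GL_TRIANGLE_FAN: every triangle shares the first index.
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

# Quad corners fed in GL_QUADS order 0,1,2,3 make a bowtie as a strip...
print(strip_triangles([0, 1, 2, 3]))  # [(0, 1, 2), (2, 1, 3)]
# ...but swapping the last two indices covers the full quad:
print(strip_triangles([0, 1, 3, 2]))  # [(0, 1, 3), (3, 1, 2)]
# A fan in 0,1,2,3 order also covers the quad:
print(fan_triangles([0, 1, 2, 3]))    # [(0, 1, 2), (0, 2, 3)]
```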