WebGL flickering shader when setting varying value in Firefox - glsl

I'm following a basic tutorial learning about WebGL (https://webglfundamentals.org/webgl/lessons/webgl-fundamentals.html). Currently I'm just drawing a square (2 triangles) to the screen.
I tried to modify the basic vertex shader to pass the clip-space XY coordinates to the fragment shader, but when I write it the following way it causes flickering in Firefox:
Vertex Shader
attribute vec2 a_position;
uniform vec2 u_resolution;
varying vec2 uv;
void main(){
vec2 pixelPos = a_position.xy / u_resolution;
pixelPos *= 2.0;
pixelPos -= 1.0;
uv = pixelPos;
gl_Position = vec4(pixelPos, 0.0, 1.0);
}
Fragment Shader
precision mediump float;
varying vec2 uv;
void main(){
gl_FragColor = vec4(uv, 0.0, 1.0);
}
The weird part is, if I modify the vertex shader to instead use uv = (pixelPos + 1.0) / 2.0; the flickering stops. Logically I don't see what the difference would be (no divide-by-zero errors or anything), so I'm not sure what's causing it.
It seems to work fine in Chrome, though.

uv = (pixelPos * 2.0) - 1.0 is not the same as uv = (pixelPos + 1.0) / 2.0;
vec2 pixelPos = a_position.xy / u_resolution; // in the range [0 to 1]
pixelPos *= 2.0; // in the range [0 to 2]
pixelPos -= 1.0; // in the range [-1 to 1]
uv = pixelPos;
vec2 pixelPos = a_position.xy / u_resolution; // in the range [0 to 1]
pixelPos *= 2.0; // in the range [0 to 2]
pixelPos -= 1.0; // in the range [-1 to 1]
uv = (pixelPos + 1.0) / 2.0; // back in the range [0 to 1]
The flickering may be caused by writing negative numbers to gl_FragColor's red and green channels. They should be clamped to zero, but differences in how Firefox and Chrome handle that clamping might cause the discrepancy.
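If the [-1, 1] values are actually wanted in the fragment shader (say, to measure distance from the screen center), one workaround is to keep the varying in clip space and remap only at output time. This is a sketch, not code from the question:

```glsl
precision mediump float;
varying vec2 uv; // interpolated clip-space XY in [-1, 1]

void main() {
    // Remap to [0, 1] only when writing the color, so nothing
    // negative ever reaches gl_FragColor.
    gl_FragColor = vec4(uv * 0.5 + 0.5, 0.0, 1.0);
}
```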

Related

OpenGL 2D faded circle being stretched/compressed by the resolution

I'd like my faded lighting (based on distance from a point) to be a perfect circle no matter the resolution. Currently, the light is only a circle if the height and width of the window are equal.
This is what it looks like right now:
My fragment shader looks like this:
#ifdef GL_ES
precision mediump float;
#endif
#define MAX_LIGHTS 10
// varying input variables from our vertex shader
varying vec4 v_color;
varying vec2 v_texCoords;
// a special uniform for textures
uniform sampler2D u_texture;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 lightsPos[MAX_LIGHTS];
uniform vec3 lightsColor[MAX_LIGHTS];
uniform float lightsSize[MAX_LIGHTS];
uniform vec2 cam;
uniform vec2 randPos;
uniform bool dark;
void main()
{
vec4 lights = vec4(0.0,0.0,0.0,1.0);
float ratio = u_resolution.x/u_resolution.y;
vec2 st = gl_FragCoord.xy/u_resolution;
vec2 loc = vec2(.5 + randPos.x, 0.5 + randPos.y);
for(int i = 0; i < MAX_LIGHTS; i++)
{
if(lightsSize[i] != 0.0)
{
// trying to reshape the light
// vec2 st2 = st;
// st2.x *= ratio;
float size = 2.0/lightsSize[i];
float dist = max(0.0, distance(lightsPos[i], st)); // st here was replaced with st2 when experimenting
lights = lights + vec4(max(0.0, lightsColor[i].x - size * dist), max(0.0, lightsColor[i].y - size * dist), max(0.0, lightsColor[i].z - size * dist), 0.0);
}
}
if(dark)
{
lights.r = max(lights.r, 0.075);
lights.g = max(lights.g, 0.075);
lights.b = max(lights.b, 0.075);
}
else
{
lights.r += 1.0;
lights.g += 1.0;
lights.b += 1.0;
}
gl_FragColor = texture2D(u_texture, v_texCoords) * lights;
}
I tried reshaping the light by multiplying the x value of the pixel by the ratio of the screen width to the screen height but that caused the lights to be out of place. I couldn't figure out anything that would put them back in their correct place while maintaining their shape.
EDIT: the displacement is determined by my camera's position in my libgdx scene.
What you need is to rescale the difference between the light position and the fragment position:
vec2 dr = st - lightsPos[i];
dr.x *= ratio;
float dist = length(dr);
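Dropped into the loop from the question, the rescaled difference might look like this (a sketch; variable names follow the original shader):

```glsl
for (int i = 0; i < MAX_LIGHTS; i++) {
    if (lightsSize[i] != 0.0) {
        float size = 2.0 / lightsSize[i];
        // Rescale only the x component of the light-to-fragment vector,
        // so the falloff is circular regardless of the aspect ratio.
        vec2 dr = st - lightsPos[i];
        dr.x *= ratio;
        float dist = length(dr);
        lights += vec4(max(vec3(0.0), lightsColor[i] - size * dist), 0.0);
    }
}
```

Note this assumes lightsPos[i] is expressed in the same normalized coordinates as st.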
Try normalizing st like this:
vec2 st = (gl_FragCoord.xy - .5*u_resolution.xy) / min(u_resolution.x, u_resolution.y);
So that the coordinates of your fragments are in the range [-0.5; 0.5] for y and [-ratio/2; ratio/2] for x (assuming the width is the larger dimension), where ratio = u_resolution.x/u_resolution.y
You can also make it [0; 1] for y and [0; ratio] for x by doing
vec2 st = gl_FragCoord.xy / min(u_resolution.x, u_resolution.y);
But the former is more convenient in many cases.

How to fix strange artifacts when applying Gaussian blur in OpenGL ES 2.0

I am getting strange artifacts when applying Gaussian Blur for a bloom effect in OpenGL ES 2.0 (with QT), as you can see below. On the left is with the bloom effect enabled, and on the right is with the effect turned off:
Strangely, this artifacting only happens when I render certain lines, as you can see from the picture below. For what it's worth, these lines are actually quads that are rendered to the screen with a constant pixel width. No other objects have any visible artifacting, just the expected Gaussian blur, and even more strangely, most other lines that I've been rendering look just fine. You can see in the left picture that the cyan line on the left is rendered without any weird artifacting.
Here are the shaders I've put together:
Vertex Shader:
#ifdef GL_ES
precision mediump int;
precision mediump float;
#endif
attribute vec3 aPos;
attribute vec2 aTexCoords;
varying vec2 blurTextureCoords[11];
varying vec2 texCoords;
uniform vec2 screenRes; // render target resolution in pixels
uniform int effectType;
void main(void){
// Set GL position
gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0);
texCoords = aTexCoords;
if(effectType == 2){
// Horizontal blur
vec2 centerTexCoords = vec2(aPos.x, aPos.y)*0.5 + 0.5; // Coordinates of center of texture
float pixelSize = 1.0 / screenRes.x;
// Fill out texture coord array
for(int i=-5; i<=5; i++){
float i_float = float(i);
blurTextureCoords[i+5] = centerTexCoords + vec2(pixelSize * i_float, 0.0);
}
}
else if(effectType == 3){
// Vertical blur
vec2 centerTexCoords = vec2(aPos.x, aPos.y)*0.5 + 0.5; // Coordinates of center of texture
float pixelSize = 1.0 / screenRes.y;
// Fill out texture coord array
for(int i=-5; i<=5; i++){
float i_float = float(i);
blurTextureCoords[i+5] = centerTexCoords + vec2(0.0, pixelSize * i_float);
}
}
}
Fragment Shader:
#ifdef GL_ES
precision mediump int;
precision mediump float;
#endif
varying vec2 blurTextureCoords[11];
varying vec2 texCoords;
uniform sampler2D screenTexture;
uniform sampler2D sceneTexture;
uniform int effectType;
void main(void){
vec4 color = texture2D(screenTexture, texCoords);
// Different effects
if(effectType == 2 || effectType == 3){
// Blur
gl_FragColor = vec4(0.0);
gl_FragColor += texture2D(screenTexture, blurTextureCoords[0]) * 0.0093;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[1]) * 0.028002;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[2]) * 0.065984;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[3]) * 0.121703;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[4]) * 0.175713;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[5]) * 0.198596;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[6]) * 0.175713;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[7]) * 0.121703;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[8]) * 0.065984;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[9]) * 0.028002;
gl_FragColor += texture2D(screenTexture, blurTextureCoords[10]) * 0.0093;
}
else{
gl_FragColor = color;
}
}
I've been unable to find anything strange in other parts of my code, and it's really weird that only these specific lines are messed up. Any ideas?
Solved this! I needed to clamp the color values of my sampled texture coordinates between 0 and 1, since I'm using HDR and these values are saturating the Gaussian Blur effect.
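For reference, the clamp described above could look like this in the blur fragment shader (a sketch of the fix, assuming each HDR sample is clamped before weighting):

```glsl
// Clamp each HDR sample to [0, 1] before applying the Gaussian weight,
// so a single saturated texel cannot blow out the weighted sum.
gl_FragColor = vec4(0.0);
gl_FragColor += clamp(texture2D(screenTexture, blurTextureCoords[0]), 0.0, 1.0) * 0.0093;
gl_FragColor += clamp(texture2D(screenTexture, blurTextureCoords[1]), 0.0, 1.0) * 0.028002;
// ... and so on for the remaining nine taps.
```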

World-space position from logarithmic depth buffer

After changing my current deferred renderer to use a logarithmic depth buffer I can not work out, for the life of me, how to reconstruct world-space depth from the depth buffer values.
When I had the OpenGL default z/w depth written, I could easily calculate this value by transforming from window space to NDC space, then performing the inverse perspective transformation.
I did this all in the second pass fragment shader:
uniform sampler2D depth_tex;
uniform mat4 inv_view_proj_mat;
in vec2 uv_f;
vec3 reconstruct_pos(){
float z = texture(depth_tex, uv_f).r;
vec4 pos = vec4(uv_f, z, 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
and got a result that looked pretty correct:
But now the road to a simple z value is not so easy (doesn't seem like it should be so hard either).
My vertex shader for my first pass with log depth:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 uv;
uniform mat4 mvp_mat;
uniform float FC;
out vec2 uv_f;
out float logz_f;
out float FC_2_f;
void main(){
gl_Position = mvp_mat * vec4(pos, 1.0);
logz_f = 1.0 + gl_Position.w;
gl_Position.z = (log2(max(1e-6, logz_f)) * FC - 1.0) * gl_Position.w;
FC_2_f = FC * 0.5;
}
And my fragment shader:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
// other uniforms and output variables
in vec2 uv_f;
in float logz_f;
in float FC_2_f;
void main(){
gl_FragDepth = log2(logz_f) * FC_2_f;
}
I have tried a few different approaches to get back the z-position correctly, all failing.
If I redefine my reconstruct_pos in the second pass to be:
vec3 reconstruct_pos(){
vec4 pos = vec4(uv_f, get_depth(), 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
This is my current attempt at reconstructing Z:
uniform float FC;
float get_depth(){
float log2logz_FC_2 = texture(depth_tex, uv_f).r;
float logz = pow(2.0, log2logz_FC_2 / (FC * 0.5));
float pos_z = log2(max(1e-6, logz)) * FC - 1.0; // pos.z
return pos_z;
}
Explained:
log2logz_FC_2: the value written to depth buffer, so log2(1.0 + gl_Position.w) * (FC / 2)
logz: simply 1.0 + gl_Position.w
pos_z: the value of gl_Position.z before the perspective divide
return value: gl_Position.z
Of course, that's just my working. I'm not sure what these values actually hold in the end, because I think I've screwed up some of the math or not correctly understood the transformations going on.
What is the correct way to get my world-space Z position from this logarithmic depth buffer?
In the end I was going about this all wrong. The way to get the world-space position back using a log buffer is:
Retrieve depth from texture
Reconstruct gl_Position.w
Linearize the reconstructed depth
Transform to world space
Here's my implementation in glsl:
in vec2 uv_f;
uniform float nearz;
uniform float farz;
uniform mat4 inv_view_proj_mat;
float linearize_depth(in float depth){
float a = farz / (farz - nearz);
float b = farz * nearz / (nearz - farz);
return a + b / depth;
}
float reconstruct_depth(){
float depth = texture(depth_tex, uv_f).r;
return pow(2.0, depth * log2(farz + 1.0)) - 1.0;
}
vec3 reconstruct_world_pos(){
vec4 wpos =
inv_view_proj_mat *
(vec4(uv_f, linearize_depth(reconstruct_depth()), 1.0) * 2.0 - 1.0);
return wpos.xyz / wpos.w;
}
Which gives me the same result (but with better precision) as when I was using the default OpenGL depth buffer.
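Working backwards through the first-pass vertex shader confirms the reconstruction, assuming FC = 2.0 / log2(farz + 1.0) (the usual constant for this log-depth formulation; the question never shows its definition):

```glsl
// What the first pass stores, step by step:
//   logz_f       = 1.0 + w                  (w = clip-space w)
//   gl_FragDepth = log2(1.0 + w) * FC * 0.5
//                = log2(1.0 + w) / log2(farz + 1.0)
// Inverting for w:
//   depth * log2(farz + 1.0) = log2(1.0 + w)
//   w = pow(2.0, depth * log2(farz + 1.0)) - 1.0
// which is exactly what reconstruct_depth() above computes
// (equivalently, pow(farz + 1.0, depth) - 1.0).
```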

Downsample and upsample texture offsets, OpenGL GLSL

Let's say that I want to downsample a 4x4 texture to 2x2 texels, do some fancy stuff, and upsample it again from 2x2 to 4x4. How do I calculate the correct neighbor texel offsets? I can't use bilinear filtering or nearest filtering. I need to pick 4 samples for each fragment execution and take the maximum one before downsampling. The same holds for the upsampling pass, i.e., I need to pick 4 samples for each fragment execution.
Have I calculated the neighbor offsets correctly (I'm using a fullscreen quad)?
//Downsample: 1.0 / 2.0, Upsample: 1.0 / 4.0.
vec2 texelSize = vec2(1.0 / textureWidth, 1.0 / textureHeight);
const vec2 DOWNSAMPLE_OFFSETS[4] = vec2[]
(
vec2(-0.5, -0.5) * texelSize,
vec2(-0.5, 0.5) * texelSize,
vec2(0.5, -0.5) * texelSize,
vec2(0.5, 0.5) * texelSize
);
const vec2 UPSAMPLE_OFFSETS[4] = vec2[]
(
vec2(-1.0, -1.0) * texelSize,
vec2(-1.0, 1.0) * texelSize,
vec2(1.0, -1.0) * texelSize,
vec2(1.0, 1.0) * texelSize
);
//Fragment shader.
#version 400 core
uniform sampler2D mainTexture;
in vec2 texCoord;
out vec4 fragColor;
void main(void)
{
#if defined(DOWNSAMPLE)
vec2 uv0 = texCoord + DOWNSAMPLE_OFFSETS[0];
vec2 uv1 = texCoord + DOWNSAMPLE_OFFSETS[1];
vec2 uv2 = texCoord + DOWNSAMPLE_OFFSETS[2];
vec2 uv3 = texCoord + DOWNSAMPLE_OFFSETS[3];
#else
vec2 uv0 = texCoord + UPSAMPLE_OFFSETS[0];
vec2 uv1 = texCoord + UPSAMPLE_OFFSETS[1];
vec2 uv2 = texCoord + UPSAMPLE_OFFSETS[2];
vec2 uv3 = texCoord + UPSAMPLE_OFFSETS[3];
#endif
float val0 = texture(mainTexture, uv0).r;
float val1 = texture(mainTexture, uv1).r;
float val2 = texture(mainTexture, uv2).r;
float val3 = texture(mainTexture, uv3).r;
//Do some stuff...
fragColor = ...;
}
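For the "pick the maximum" part, the elided body could be completed like this (a sketch based on the question's description, assuming only the red channel matters):

```glsl
// Take the maximum of the four neighborhood samples.
float maxVal = max(max(val0, val1), max(val2, val3));
fragColor = vec4(maxVal, 0.0, 0.0, 1.0);
```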
The offsets look correct, assuming texelSize is in both cases the texel size of the render target, that is, twice as big for the downsampling pass as for the upsampling pass. In the case of upsampling, you are not hitting the source texel centers exactly, but you come close enough that nearest-neighbor filtering snaps them to the intended result.
A more efficient option is to use the textureGather instruction, specified in the ARB_texture_gather extension. When used to sample a texture, it returns the same four texels that would be used for filtering. It only returns a single component of each texel to produce a vec4, but given that you only care about the red component, it's an ideal solution if the extension is available. The code would then be the same for both downsampling and upsampling:
#define GATHER_RED_COMPONENT 0
vec4 vals = textureGather(mainTexture, texcoord, GATHER_RED_COMPONENT);
// Output the maximum value
fragColor = max(max(vals.x, vals.y), max(vals.z, vals.w));

Shadowmapping always produces shadows beyond far plane

I am working on the beginnings of omnidirectional shadow mapping in my engine. For now I am only producing one shadowmap as a test. I am getting an odd result when using my current shaders. Here is a screenshot which shows the problem:
I am using a near value of 0.5 and a far value of 5.0 in the projection matrix for the shadowmap render. As near as I can tell, any fragment with a light-space z larger than my far-plane distance is being treated by my fragment shader as in shadow.
This is my fragment shader:
in vec2 st;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowmapTexture;
uniform mat4 invProj;
uniform mat4 lightProj;
uniform vec3 lightPosition;
out vec3 color;
void main () {
vec3 clipSpaceCoords;
clipSpaceCoords.xy = st.xy * 2.0 - 1.0;
clipSpaceCoords.z = texture(depthTexture, st).x * 2.0 - 1.0;
vec4 position = invProj * vec4(clipSpaceCoords,1.0);
position.xyz /= position.w;
vec4 lightSpace = lightProj * vec4(position.xyz,1.0);
lightSpace.xyz /= lightSpace.w;
lightSpace.xyz = lightSpace.xyz * 0.5 + 0.5;
float lightDepth = texture(shadowmapTexture, lightSpace.xy).x;
vec3 normal = texture(normalTexture, st).xyz;
vec3 diffuse;
float shadowFactor = 1.0;
if(lightSpace.w > 0.0 && lightSpace.z > lightDepth+0.0042) {
shadowFactor = 0.2;
}
else {
float k = 0.00001;
vec3 distanceToLight = lightPosition - position.xyz;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
}
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz*shadowFactor+diffuse, gamma);
}
How can I fix this issue (Other than increasing my far plane distance)?
One other question, as this is the first time I have attempted shadowmapping: am I doing the lighting in relation to the shadows correctly?
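One common way to handle the far-plane symptom described in the question (a sketch, not an accepted answer from the thread) is to treat any fragment whose light-space coordinates fall outside [0, 1] as unshadowed, since the shadow map holds no information for it:

```glsl
float shadowFactor = 1.0;
// Fragments projected beyond the light's far plane end up with
// lightSpace.z > 1.0, which always compares as "in shadow" against
// a depth buffer that only stores values up to 1.0. Skip them instead.
bool insideLightFrustum = lightSpace.w > 0.0 &&
    all(greaterThanEqual(lightSpace.xyz, vec3(0.0))) &&
    all(lessThanEqual(lightSpace.xyz, vec3(1.0)));
if (insideLightFrustum && lightSpace.z > lightDepth + 0.0042) {
    shadowFactor = 0.2;
}
```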