OpenGL texture flickering when used with mix()

I'm rendering a terrain with multiple textures that includes smooth transitions between the textures, based on the height of each fragment.
Here's my fragment shader:
#version 430

uniform sampler2D tex[3];
uniform float renderHeight;

in vec3 fsVertex;
in vec2 fsTexCoords;

out vec4 color;

void main()
{
    float height = fsVertex.y / renderHeight;

    const float range1 = 0.2;
    const float range2 = 0.35;
    const float range3 = 0.7;
    const float range4 = 0.85;

    if(height < range1)
        color = texture(tex[0], fsTexCoords);
    else if(height < range2) // smooth transition
        color = mix( texture(tex[0], fsTexCoords), texture(tex[1], fsTexCoords), (height - range1) / (range2 - range1) );
    else if(height < range3)
        color = texture(tex[1], fsTexCoords);
    else if(height < range4) // smooth transition
        color = mix( texture(tex[1], fsTexCoords), texture(tex[2], fsTexCoords), (height - range3) / (range4 - range3) );
    else
        color = texture(tex[2], fsTexCoords);
}
'height' will always be in the range [0,1].
Here's the weird flickering I get. From what I can see, it happens when 'height' equals one of the rangeN variables, i.e. in the branches that use mix().
What may be the cause of this? I also tried playing around with adding and subtracting a 'bias' variable in some computations but had no luck.

Your problem is non-uniform flow control.
Basically, you can't call texture() inside non-uniform control flow: the implicit derivatives it relies on for mipmap/LOD selection become undefined there, which is exactly what causes artifacts along your range boundaries.
Two solutions (both sketched below):
make all the calls to texture() first then blend the results with mix()
calculate the partial derivatives of the texture coordinates (with dFdx & co., there is an example in the link above) and use textureGrad() instead of texture()
In very simple cases, the first solution may be slightly faster. The second one is the way to go if you want to have many textures (normal maps etc.) But don't take my word for it, measure.
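For concreteness, here is a minimal sketch of both solutions against the shader above (the clamp() calls are my addition, so the blend factors stay in [0, 1] outside the transition bands):

// Solution 1: every texture() call happens in uniform control flow,
// so the implicit derivatives stay well-defined.
vec4 c0 = texture(tex[0], fsTexCoords);
vec4 c1 = texture(tex[1], fsTexCoords);
vec4 c2 = texture(tex[2], fsTexCoords);

float t01 = clamp((height - range1) / (range2 - range1), 0.0, 1.0);
float t12 = clamp((height - range3) / (range4 - range3), 0.0, 1.0);

color = mix(mix(c0, c1, t01), c2, t12);

For the second solution, compute the derivatives once before any branching and pass them to textureGrad(), which takes explicit gradients instead of deriving them implicitly:

// Solution 2: explicit gradients make sampling legal inside the branches.
vec2 dx = dFdx(fsTexCoords);
vec2 dy = dFdy(fsTexCoords);

if(height < range1)
    color = textureGrad(tex[0], fsTexCoords, dx, dy);
else if(height < range2)
    color = mix( textureGrad(tex[0], fsTexCoords, dx, dy),
                 textureGrad(tex[1], fsTexCoords, dx, dy),
                 (height - range1) / (range2 - range1) );
// ... and so on for the remaining ranges.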

Related

Fade texture inner borders to transparent in LibGDX using OpenGL shaders (GLSL)

I'm currently working on a tile game in LibGDX and I'm trying to get a "fog of war" effect by obscuring unexplored tiles. What I have so far is a dynamically generated black texture, the size of the screen, that covers only the unexplored tiles and leaves the rest of the background visible. This is an example of the fog texture rendered on top of a white background:
What I'm now trying to achieve is to dynamically fade the inner borders of this texture so it looks more like a fog that slowly thickens, instead of a bunch of black boxes stitched together on top of the background.
Googling the problem I found out I could use shaders to do this, so I tried to learn some GLSL (I'm at the very start with shaders) and came up with this:
Vertex shader:

//attributes passed from openGL
attribute vec3 a_position;
attribute vec2 a_texCoord0;

//variables visible from java
uniform mat4 u_projTrans;

//variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

void main() {
    v_texCoord0 = a_texCoord0;
    gl_Position = u_projTrans * vec4(a_position, 1.0);
}
Fragment shader:

//variables shared between fragment and vertex shader
varying vec2 v_texCoord0;

//variables visible from java
uniform sampler2D u_texture;
uniform vec2 u_textureSize;
uniform int u_length;

void main() {
    vec4 texColor = texture2D(u_texture, v_texCoord0);
    vec2 step = 1.0 / u_textureSize;

    if(texColor.a > 0.0) {
        int maxNearPixels = (u_length * 2 + 1) * (u_length * 2 + 1) - 1;
        for(int i = 0; i <= u_length; i++) {
            for(int j = 0; j <= u_length; j++) {
                if(i != 0 || j != 0) {
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2( step.x * float(i),  step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i),  step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2( step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                    texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
                }
            }
        }
    }

    gl_FragColor = texColor;
}
This is the result I got with a length of 20:
So the shader I wrote kinda works, but its performance is terrible, because it's O(n^2) where n is the length of the fade in pixels (which can be very high, like 60 or even 80). It also has some problems: the edges are still a bit too sharp (I'd like a smoother transition), and some angles of the border are less faded than others (I'd like the fade to be uniform everywhere).
I'm a little bit lost at this point: is there anything I can do to make it better and faster? Like I said, I'm new to shaders, so: is this even the right way to use them?
As others mentioned in the comments, instead of blurring in screen space you should filter in tile space, potentially exploiting the GPU's bilinear filtering. Let's go through it with images.
First define a texture such that each pixel corresponds to a single tile, black/white depending on the fog at that tile. Here's such a texture blown up:
After applying the screen-to-tiles coordinate transformation and sampling that texture with GL_NEAREST interpolation, we get a blocky result similar to what you have:
float t = texture2D(u_tiles, M*uv).r;
gl_FragColor = vec4(t,t,t,1.0);
If instead of GL_NEAREST we switch to GL_LINEAR, we get a somewhat better result:
This still looks a little blocky. To improve on that we can apply a smoothstep:
float t = texture2D(u_tiles, M*uv).r;
t = smoothstep(0.0, 1.0, t);
gl_FragColor = vec4(t,t,t,1.0);
Or here is a version with a linear shade-mapping function:
float t = texture2D(u_tiles, M*uv).r;
t = clamp((t-0.5)*1.5 + 0.5, 0.0, 1.0);
gl_FragColor = vec4(t,t,t,1.0);
Note: these images were generated within a gamma-correct pipeline (i.e. sRGB framebuffer enabled). This is one of those few scenarios, however, where ignoring gamma may give better results, so you're welcome to experiment.
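Putting the pieces together, a minimal LibGDX-style fragment shader might look like the sketch below. The uniform names u_tiles and u_screenToTiles are assumptions on my part; the fog texture has one texel per tile and is sampled with GL_LINEAR filtering:

varying vec2 v_texCoord0;

uniform sampler2D u_tiles;      // one texel per tile, GL_LINEAR filtering enabled
uniform mat3 u_screenToTiles;   // the screen-to-tiles transform called 'M' above

void main() {
    // Map the screen-space coordinate into tile space.
    vec2 tileUV = (u_screenToTiles * vec3(v_texCoord0, 1.0)).xy;

    // Bilinear filtering already gives a smooth ramp between tiles.
    float t = texture2D(u_tiles, tileUV).r;

    // Sharpen the ramp so explored areas stay clear and deep fog stays dark.
    t = smoothstep(0.0, 1.0, t);

    // Output black fog whose alpha is the fog density.
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 - t);
}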

Efficiently iterate through neighbouring pixels in fragment shader

For context:
I'm working on a generative 2D animation with OpenFrameworks.
I'm trying to implement a shader that fills some shapes with a color, depending on the orientation of the shapes' edges.
Basically it takes an image like this one:
and spits out something like this:
Note that it intentionally only takes the color from the left side of the shape.
Right now my fragment shader looks like this:
#version 150

out vec4 outputColor;

uniform sampler2DRect fbo;
uniform sampler2DRect mask;

vec2 point;
vec4 col;
float x, i;
float delta = 200;

vec4 extrudeColor()
{
    x = gl_FragCoord.x > delta ? gl_FragCoord.x - delta : 0;

    for(i = gl_FragCoord.x; i > x; i--)
    {
        point = vec2(i, gl_FragCoord.y);
        if(texture(fbo, point) != vec4(0,0,0,0)){
            col = texture(fbo, point);
            return vec4(col.r, col.g, col.b, (i-x)/delta);
        }
    }

    return vec4(0,0,0,1);
}

void main()
{
    outputColor = texture(mask, gl_FragCoord.xy) == vec4(1,1,1,1) && texture(fbo, gl_FragCoord.xy) == vec4(0,0,0,0)
        ? extrudeColor()
        : vec4(0,0,0,0);
}
the mask sampler is just a black and white version of the second image that I use to avoid calculating pixels outside of the shapes.
The shader I have works but it is slow and I feel like I'm not using proper GPU thinking and coding.
The actual, more general question:
I'm totally new to GLSL and OpenGL. Is there a way to make this kind of iteration through neighbouring pixels more efficient, without so many texture() reads?
Maybe using matrices? IDK!
This is a highly inefficient way to approach this problem. Try to avoid conditionals (if's) and loops (for's) in your shader. I would suggest loading or generating a single texture, and then using an alpha mask to create the shape you need. The texture could remain constant, while the 2 or 8-bit mask could be generated per frame.
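As a rough sketch of that suggestion (the uniform names and the single full-screen pass are my assumptions, not the poster's code), the per-fragment work collapses to two texture reads:

#version 150

in vec2 vTexCoord;            // assumed varying from the vertex shader
out vec4 outputColor;

uniform sampler2D uGradient;  // constant color/gradient texture, generated once
uniform sampler2D uMask;      // 8-bit shape mask, regenerated per frame

void main()
{
    vec4 color = texture(uGradient, vTexCoord);
    float shape = texture(uMask, vTexCoord).r;

    // The mask carves the shape out of the constant gradient.
    outputColor = vec4(color.rgb, color.a * shape);
}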
An alternative method would be to use a few uniforms and upload "per-line" data in an array:
#version 440 core

uniform sampler2D _Texture;  // The texture to draw
uniform vec2 _Position;      // The 'object' position (screen coordinate)
uniform int _RowOffset;      // The offset in the 'object' to start drawing
uniform int _RowLength;      // The length of the 'row' to draw
uniform int _Height;         // The height of the 'object' to draw

in vec2 _TexCoord;           // The texture coordinate passed from the vertex shader
out vec4 _FragColor;         // The output color (gl_FragColor is deprecated)

void main () {
    if (gl_FragCoord.x < (_Position.x + _RowOffset)) discard;
    if (gl_FragCoord.x > (_Position.x + _RowOffset + _RowLength)) discard;
    _FragColor = texture(_Texture, _TexCoord.st);
}
Or, without sampling a texture at all, you could generate a linear gradient function and sample the color from it using the Y coordinate:
const vec4 _Red   = vec4(1.0, 0.0, 0.0, 1.0);
const vec4 _Green = vec4(0.0, 1.0, 0.0, 0.0);

vec4 _GetGradientColor (float _P /* Percentage */) {
    float _R = _Red.r * _P + _Green.r * (1.0 - _P);
    float _G = _Red.g * _P + _Green.g * (1.0 - _P);
    float _B = _Red.b * _P + _Green.b * (1.0 - _P);
    float _A = _Red.a * _P + _Green.a * (1.0 - _P);
    return vec4(_R, _G, _B, _A);
}
Then in your frag shader:
float _P = (gl_FragCoord.y - _Position.y) / _Height;
_FragColor = _GetGradientColor(_P);
Shader Output
Of course this all could be optimised a bit, and this only generates a 2-color gradient whereas it looks like you need several colors. A quick Google search for "linear gradient generator" can land you some nicer alternatives. I should also note this simple example will not work for shapes with 'holes' in them, but it can be revised to do so. If the shader math gets too heavy, then choose the texture-with-alpha-mask option.
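If several colors are needed, one possible extension (a sketch of mine, keeping the answer's naming style) chains mix() across evenly spaced stops:

// Hypothetical 4-stop gradient: _P in [0,1] selects a segment,
// and mix() blends between the two stops bounding that segment.
vec4 _GetMultiGradientColor (float _P) {
    const vec4 _Stops[4] = vec4[4](
        vec4(0.0, 0.0, 1.0, 1.0),   // blue
        vec4(0.0, 1.0, 0.0, 1.0),   // green
        vec4(1.0, 1.0, 0.0, 1.0),   // yellow
        vec4(1.0, 0.0, 0.0, 1.0));  // red

    float scaled = clamp(_P, 0.0, 1.0) * 3.0;  // 3 segments between 4 stops
    int idx = int(min(scaled, 2.0));           // segment index: 0, 1 or 2
    return mix(_Stops[idx], _Stops[idx + 1], scaled - float(idx));
}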

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330

layout(location=0) out vec4 frag_colour;

in vec2 texelCoords;

uniform sampler2D uTexture;          // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination;       // height from the heightmap plane
uniform float uSunAzimuth;           // clockwise rotation point
uniform float uQuality;              // used to determine number of steps and step size

void main()
{
    vec4 c = texture(uTexture, texelCoords);
    vec2 textureD = textureSize(uTexture, 0);
    float d = max(textureD.x, textureD.y); // use the largest dimension to determine step size etc.

    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x/2, textureD.y/2, 0)
                     + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
                             uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
                             uSunDistance*cos(uSunInclination) );

    float height = texture(uTextureHeightmap, texelCoords).r;            // starting height
    vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);  // sunlight direction
    float sampleDistance = 0.0;
    float samples = d*uQuality;
    float stepSize = 1.0 / ((samples/d) * d);

    for(int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize; // increase the sample distance
        vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // the coord of the next sample point
        float newHeight = texture(uTextureHeightmap, newPoint.xy).r;           // the height at that sample point

        // put it in shadow if we hit something that is higher than our starting point AND higher than the ray we're casting
        if(newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }

    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem where the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than wide, I get the opposite effect. I.e., the shadows are casting longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, when you measure how the shadow casts across all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport, and it takes into account the aspect ratio of the viewport.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
As a result, the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates you use to access the height map describe a square in the range (0, 0) to (1, 1).
This mismatch must be balanced by scaling with the aspect ratio:
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

Uniform point arrays and managing fragment shader coordinate systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for x positions and one for y positions of each point - to the shader through a uniform. Then inside the shader I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;

    for(int i = 0; i < 100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }

    intensity = 3.0*pow(intensity, 0.02);

    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity*0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity - 1.0, 0.5 + (intensity - 1.0)*0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0 - intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
  size(1024, 1024, P3D);
  background(255);

  sourceX = new float[100];
  sourceY = new float[100];
  for (int i = 0; i < 100; i++)
  {
    sourceX[i] = random(0, 1023);
    sourceY[i] = random(0, 1023);
  }

  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
  shader(pointShader, POINTS);
  pointShader.set("sourceX", sourceX);
  pointShader.set("sourceY", sourceY);
  pointShader.set("resolution", float(width), float(height));
}

void draw()
{
  for (int i = 0; i < 100; i++) {
    strokeWeight(60);
    point(sourceX[i], sourceY[i]);
  }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);

    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as #RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
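For example, a finite-support falloff might look like this (a sketch of mine; the quartic falloff is one common choice, not the only one):

// Density that reaches exactly zero at 'radius', so a disc of that radius
// bounds every pixel the point can affect.
float density(vec2 fragPos, vec2 center, float radius)
{
    float d = distance(fragPos, center) / radius;  // 0 at the center, 1 at the rim
    float f = max(1.0 - d*d, 0.0);
    return f * f;                                  // smooth falloff, flat at the rim
}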
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
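With that one-line change, the loop in the question becomes (a sketch, keeping everything else as posted):

for(int i = 0; i < 100; i++)
{
    // Normalize the source from pixels to [0, 1], the same space as 'position'.
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}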

How do I use a GLSL shader to apply a radial blur to an entire scene?

I have a radial blur shader in GLSL, which takes a texture, applies a radial blur to it and renders the result to the screen. This works very well, so far.
The problem is that this applies the radial blur to the first texture in the scene.
What is the best way to achieve this functionality? Can I do this with only shaders, or do I have to render the scene to a texture first (in OpenGL) and then pass this texture to the shader for further processing?
// Vertex shader
varying vec2 uv;

void main(void)
{
    gl_Position = vec4( gl_Vertex.xy, 0.0, 1.0 );
    gl_Position = sign( gl_Position );
    uv = (vec2( gl_Position.x, -gl_Position.y ) + vec2(1.0)) / vec2(2.0);
}
// Fragment shader
uniform sampler2D tex;
varying vec2 uv;

const float sampleDist = 1.0;
const float sampleStrength = 2.2;

void main(void)
{
    float samples[10];
    samples[0] = -0.08;
    samples[1] = -0.05;
    samples[2] = -0.03;
    samples[3] = -0.02;
    samples[4] = -0.01;
    samples[5] =  0.01;
    samples[6] =  0.02;
    samples[7] =  0.03;
    samples[8] =  0.05;
    samples[9] =  0.08;

    vec2 dir = 0.5 - uv;
    float dist = sqrt(dir.x*dir.x + dir.y*dir.y);
    dir = dir/dist;

    vec4 color = texture2D(tex, uv);
    vec4 sum = color;

    for (int i = 0; i < 10; i++)
        sum += texture2D( tex, uv + dir * samples[i] * sampleDist );

    sum *= 1.0/11.0;

    float t = dist * sampleStrength;
    t = clamp( t, 0.0, 1.0 );

    gl_FragColor = mix( color, sum, t );
}
This is basically called "post-processing", because you're applying an effect (here: radial blur) to the whole scene after it's rendered.
So yes, you're right: the good way for post-processing is to:
create a screen-sized NPOT texture (GL_TEXTURE_RECTANGLE),
create a FBO, attach the texture to it
set this FBO to active, render the scene
disable the FBO, draw a full-screen quad with the FBO's texture.
As for the "why", the reason is simple: the scene is rendered in parallel (the fragment shader is executed independently for many pixels). In order to do radial blur for pixel (x,y), you first need to know the pre-blur pixel values of the surrounding pixels. And those are not available in the first pass, because they are only being rendered in the meantime.
Therefore, you must apply the radial blur only after the whole scene is rendered and fragment shader for fragment (x,y) is able to read any pixel from the scene. This is the reason why you need 2 rendering stages for that.