Fade screen to a specific color using GLSL

I want to fade the screen to a specific color using GLSL.
So far this is my GLSL code, and it works quite well:
uniform sampler2D textureSampler;
uniform vec2 texcoordOffset;
uniform vec3 sp;
uniform vec3 goal;

varying vec4 vertColor;
varying vec4 vertTexcoord;

void main(void) {
    vec3 col = texture2D(textureSampler, vertTexcoord.st).rgb;
    gl_FragColor = vec4(col + ((goal - col) / sp), 1.0);
    //gl_FragColor = vec4(col + ((goal - col) * sp), 1.0); // as suggested below, this also doesn't solve the problem
}
The only problem I have is that with higher sp values the colors aren't faded completely to the new color. I think the problem is caused by the precision the shader works with.
Does anyone have an idea how to increase the precision?
EDIT:
Could it be that this effect is driver-dependent? I'm using an ATI card with the latest drivers; maybe someone could try the code on an NVIDIA card?

Let's break it down:
float A, B;
float Mix;
float C = A + (B-A) / Mix;
Now it's fairly easy to see that Mix would have to be infinite to get pure A back, and C only equals B when Mix is exactly 1; for any larger Mix you stop partway between the two, so it isn't GLSL's fault at all. The normally used equation is as follows:
float C = A + (B-A) * Mix;
// Let's feed some data:
// Mix = 0 -> C = A;
// Mix = 1 -> C = A + (B - A) = A + B - A = B;
// Mix = 0.5 -> C = A + 0.5*(B - A) = A + 0.5*B - 0.5*A = 0.5*A + 0.5*B
Correct, right?
Change your code to:
gl_FragColor = vec4(col+((goal-col) * sp), 1.0);
And use the range [0, 1] for sp instead. Also, shouldn't sp actually be a float? If all of its components are equal (i.e. sp.x == sp.y == sp.z), you can just change its type and it will work, as referenced here.
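For what it's worth, GLSL's built-in mix() implements exactly this linear blend, so the whole fade can be written with it. Here is a minimal sketch, assuming sp is replaced by a single float uniform (called fade here) in the range [0, 1]:

uniform sampler2D textureSampler;
uniform vec3 goal;   // target color to fade towards
uniform float fade;  // hypothetical float uniform replacing sp, expected in [0, 1]

varying vec4 vertTexcoord;

void main(void) {
    vec3 col = texture2D(textureSampler, vertTexcoord.st).rgb;
    // mix(a, b, t) returns a * (1.0 - t) + b * t, i.e. the same lerp as above
    gl_FragColor = vec4(mix(col, goal, fade), 1.0);
}

With fade = 0.0 you get the original image, and with fade = 1.0 you get exactly the goal color, no precision tricks needed.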

Related

GLSL: How to do switch-like statement

I would like to dynamically invoke an easing based on the data passed into the shader. So in pseudocode:
var easing = easings[easingId]
var value = easing(point)
I'm wondering about the best way to accomplish this in GLSL. I could use a switch statement in some way, or perhaps put the easings into an array and use them like that. Or maybe there is a way to create a hash table and use it like the example above.
easingsArray = [
    cubicIn,
    cubicOut,
    ...
]

uniform easingId

main() {
    easing = easingsArray[easingId]
    value = easing(point)
}
That would be the potential array approach. Another approach is the switch statement. Maybe there are others. I'm wondering what the recommended way of doing this is. Maybe I could use a struct somehow...
If you need conditional branching in GLSL (in your case for selecting an easing function based on a variable) you'll need to use if or switch statements.
For example
if (easingId == 0) {
    result = cubicIn();
} else if (easingId == 1) {
    result = cubicOut();
}
or
switch (easingId) {
    case 0:
        result = cubicIn();
        break;
    case 1:
        result = cubicOut();
        break;
}
GLSL has no support for function pointers, so the kind of dynamic dispatch solutions you are considering (tables of function pointers etc.) will unfortunately not be possible.
Whilst your question was explicitly about data being passed into the shader, I would also like to point out that if the value controlling the branch is being passed into the shader as a uniform, then you could instead compile multiple variants of your shader, and then dynamically select the right one (i.e. the one using the right easing function) from the application itself. This would save the cost of branching in the shader.
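To make the multiple-variants idea concrete, here is a hedged sketch using the preprocessor; the EASING_ID macro and the cubic easing bodies are placeholders made up for illustration, and the application would prepend a different #define before compiling each variant:

// The application prepends e.g. "#define EASING_ID 1" to this source
// before compiling, producing one shader program per easing.
#ifndef EASING_ID
#define EASING_ID 0
#endif

float cubicIn(float t)  { return t * t * t; }
float cubicOut(float t) { float u = t - 1.0; return u * u * u + 1.0; }

float easing(float t) {
#if EASING_ID == 0
    return cubicIn(t);
#elif EASING_ID == 1
    return cubicOut(t);
#else
    return t; // linear fallback
#endif
}

At draw time the application simply binds the program whose easing it wants, so the fragment shader itself contains no branch at all.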
Apart from conditional branching with if-else or switch, you may also want to have a look at GLSL subroutines, which allow you to control the shader's branching from the C++ side without touching the shader. In theory this approach should be more efficient than using if-else, but the downside is that subroutines are not supported in SPIR-V, which is the future of shaders.
A subroutine is very much like a function pointer in C: you define multiple functions of the same subroutine type, and then use a subroutine uniform to control which function is called at runtime. Here's an example of debug-drawing a framebuffer using subroutines.
#version 460

// this shader is used by: `FBO::DebugDraw()`

////////////////////////////////////////////////////////////////////////////////

#ifdef vertex_shader

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 uv;

layout(location = 0) out vec2 _uv;

void main() {
    _uv = uv;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}

#endif

////////////////////////////////////////////////////////////////////////////////

#ifdef fragment_shader

layout(location = 0) in vec2 _uv;
layout(location = 0) out vec4 color;

layout(binding = 0) uniform sampler2D color_depth_texture;
layout(binding = 1) uniform usampler2D stencil_texture;

const float near = 0.1;
const float far = 100.0;

subroutine vec4 draw_buffer(void);  // typedef the subroutine (like a function pointer)
layout(location = 0) subroutine uniform draw_buffer buffer_switch;

layout(index = 0)
subroutine(draw_buffer)
vec4 DrawColorBuffer() {
    return texture(color_depth_texture, _uv);
}

layout(index = 1)
subroutine(draw_buffer)
vec4 DrawDepthBuffer() {
    // sampling the depth texture format should return a float
    float depth = texture(color_depth_texture, _uv).r;
    // depth in screen space is non-linear, the precision is high for small z-values
    // and low for large z-values, we need to linearize depth values before drawing
    float ndc_depth = depth * 2.0 - 1.0;
    float z = (2.0 * near * far) / (far + near - ndc_depth * (far - near));
    float linear_depth = z / far;
    return vec4(vec3(linear_depth), 1.0);
}

layout(index = 2)
subroutine(draw_buffer)
vec4 DrawStencilBuffer() {
    uint stencil = texture(stencil_texture, _uv).r;
    return vec4(vec3(stencil), 1.0);
}

void main() {
    color = buffer_switch();
}

#endif

Banding problem in multi-step shader with ping-pong buffers, does not happen in ShaderToy

I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: the shader samples points along a 1D kernel in a given direction, and the kernel size grows exponentially at each step. Color values are weighted based on the distance to the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the fragment shader:
precision highp float;

uniform sampler2D u_texture;
varying vec2 v_texCoord;

uniform float u_Pass;

const float kernelSize = 4.0;
const float atten = 0.95;

vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for (int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}

void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: disregard the first shader in Buffer A; it just filters out the dim colors in the input texture to emulate a star field, since as far as I know ShaderToy doesn't allow uploading custom textures.)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times, increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to lose brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot what the problem is. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I was able to verify this with that modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);

Are my GLSL shader programs optimized by the browser or driver?

In a recent project of mine, I procedurally create fragment shaders that look like this (but possibly larger), which I display using WebGL:
precision mediump float;
uniform vec2 u_windowSize;
void main() {
    float s = 2.0 / min(u_windowSize.x, u_windowSize.y);
    vec2 pos0 = s * (gl_FragCoord.xy - 0.5 * u_windowSize);
    if (length(pos0) > 1.0) { gl_FragColor = vec4(0,0,0,0); return; }
    vec2 pos1 = pos0/0.8;
    vec2 pos2 = ((1.0-length(pos1))/length(pos1)) * pos1;
    vec3 col2 = vec3(1.0,1.0,1.0);
    vec2 pos3 = pos2;
    vec3 col3 = vec3(1.0,0.0,0.0);
    vec2 tmp2 = 6.0*(1.0/sqrt(2.0)) * mat2(1.0,1.0,-1.0,1.0) * pos2;
    vec3 col4;
    if (mod(tmp2.x, 2.0) < 1.0 != mod(tmp2.y, 2.0) < 1.0) {
        col4 = col2;
    } else {
        col4 = col3;
    };
    vec2 pos5 = pos0;
    vec3 col5 = vec3(0.0,1.0,1.0);
    vec3 col6;
    if (length(pos0) < 0.8) {
        col6 = col4;
    } else {
        col6 = col5;
    };
    gl_FragColor = vec4(col6, 1.0);
}
Obviously there is some redundancy here that you would not write by hand – copying pos2 into pos3 is pointless, for example. But since I generate this code, this is convenient.
Before I now prematurely start optimizing and make my generator produce hopefully more efficient code, I’d like to know:
Do browsers and/or graphics drivers already optimize such things (so I don't have to)?
There is no requirement for the browser to do any optimization. You can see what the browser is sending to the driver by using the WEBGL_debug_shaders extension.
Example:
const gl = document.createElement('canvas').getContext('webgl');
const ext = gl.getExtension('WEBGL_debug_shaders');
const vs = `
attribute vec4 position;
uniform mat4 matrix;
void main() {
    float a = 1. + 2. * 3.;  // does this get optimized to 7?
    float b = a;             // does this get discarded?
    gl_Position = matrix * position * vec4(b, a, b, a);
}
`;
const s = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(s, vs);
gl.compileShader(s);
console.log(ext.getTranslatedShaderSource(s));
On my machine/driver/browser, that returns this result:
#version 410
in vec4 webgl_74509a83309904df;
uniform mat4 webgl_5746d1f3d2c2394;
void main(){
(gl_Position = vec4(0.0, 0.0, 0.0, 0.0));
float webgl_2420662cd003acfa = 7.0;
float webgl_44a9acbe7629930d = webgl_2420662cd003acfa;
(gl_Position = ((webgl_5746d1f3d2c2394 * webgl_74509a83309904df) * vec4(webgl_44a9acbe7629930d, webgl_2420662cd003acfa, webgl_44a9acbe7629930d, webgl_2420662cd003acfa)));
}
We can see that in this case the simple constant math was optimized, but the fact that a and b are the same was not. That said, there is no guarantee that other browsers would make this optimization.
Whether or not drivers optimize is up to the driver. Most drivers will at least do a little optimization, but full optimization takes time. DirectX can take more than 5 minutes to optimize a single complex shader with full optimization on, so optimization is probably something that should be done offline. In the DirectX case you're expected to save the binary shader result to avoid the 5 minutes the next time the shader is needed, but for WebGL that's not possible, as binary shaders would be both non-portable and a security issue. Also, having the browser freeze for a long time waiting for the compile would be unacceptable, so browsers can't ask DirectX for full optimization. That said, some browsers cache binary shader results behind the scenes.
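To illustrate what generator-side (offline) simplification could look like for the shader in the question, here is a hand-folded sketch; it merely collapses the redundant copies (pos3, pos5) and hoists the duplicated length(pos0) test, and is only meant as an example of the kind of rewrite a generator could do, not something any browser or driver is guaranteed to produce:

precision mediump float;
uniform vec2 u_windowSize;
void main() {
    float s = 2.0 / min(u_windowSize.x, u_windowSize.y);
    vec2 pos0 = s * (gl_FragCoord.xy - 0.5 * u_windowSize);
    float r = length(pos0);
    if (r > 1.0) { gl_FragColor = vec4(0.0); return; }
    if (r >= 0.8) { gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0); return; } // outer ring: plain cyan
    // inner disc: inverted coordinates feed a rotated checkerboard
    vec2 pos1 = pos0 / 0.8;
    vec2 pos2 = ((1.0 - length(pos1)) / length(pos1)) * pos1;
    vec2 tmp2 = 6.0 * (1.0 / sqrt(2.0)) * mat2(1.0, 1.0, -1.0, 1.0) * pos2;
    bool check = (mod(tmp2.x, 2.0) < 1.0) != (mod(tmp2.y, 2.0) < 1.0);
    gl_FragColor = vec4(check ? vec3(1.0) : vec3(1.0, 0.0, 0.0), 1.0);
}

Whether doing this in the generator is worth it depends on measurements; as noted above, the driver may already do much of it for you.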

LibGdx Shader ("no uniform with name 'u_texture' in shader")

The shader compiles successfully, but the program crashes as soon as rendering starts... This is the error I get: "no uniform with name 'u_texture' in shader". This is what my shader looks like:
#ifdef GL_ES
precision mediump float;
#endif

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;

varying vec2 surfacePosition;

#define MAX_ITER 10

void main( void ) {
    vec2 p = surfacePosition*4.0;
    vec2 i = p;
    float c = 0.0;
    float inten = 1.0;

    for (int n = 0; n < MAX_ITER; n++) {
        float t = time * (1.0 - (1.0 / float(n+1)));
        i = p + vec2(
            cos(t - i.x) + sin(t + i.y),
            sin(t - i.y) + cos(t + i.x)
        );
        c += 1.0/length(vec2(
            p.x / (sin(i.x+t)/inten),
            p.y / (cos(i.y+t)/inten)
        ));
    }
    c /= float(MAX_ITER);

    gl_FragColor = vec4(vec3(pow(c,1.5))*vec3(0.99, 0.97, 1.8), 1.0);
}
Can someone please help me? I don't know what I'm doing wrong. BTW, this is a shader I found on the internet, so I know it works; the only problem is making it work with libGDX.
libGDX's SpriteBatch assumes that your shader will have a u_texture uniform. To get around this, just add
ShaderProgram.pedantic = false; (Javadoc)
before putting your shader program into the SpriteBatch.
UPDATE: raveesh is right about the shader compiler removing unused uniforms and attributes, but libGDX wraps the OpenGL shader in a custom ShaderProgram.
Not only should you add the uniform u_texture to your shader program, you should also use it; otherwise it will be optimized away by the shader compiler.
But looking at your shader, you don't seem to need the uniform anyway, so check your program for something like shader.setUniformi("u_texture", 0); and remove that line. It should work fine then.
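If you'd rather keep libGDX's pedantic checking enabled and satisfy SpriteBatch instead, a minimal, hedged sketch is to declare u_texture and give it a negligible contribution so the compiler can't strip it; the v_texCoords varying here is an assumption based on the default SpriteBatch vertex shader, not something taken from the shader above:

uniform sampler2D u_texture;  // expected by libGDX's SpriteBatch
varying vec2 v_texCoords;     // assumed to be provided by the default SpriteBatch vertex shader

void main( void ) {
    // ... compute the procedural color exactly as in the shader above ...
    vec4 effectColor = vec4(1.0);  // placeholder for the computed effect color

    // Reference the sampler with a negligible weight so it is not optimized away.
    gl_FragColor = effectColor + texture2D(u_texture, v_texCoords) * 0.0001;
}

Removing the setUniformi("u_texture", 0) call, as suggested above, is the simpler fix if you don't need the texture at all.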

How to improve the quality of my shadows?

First, a screenshot:
As you can see, the tops of the shadows look OK (if you look at the dirt where the tops of the shrubs are projected, it looks more or less correct), but the base of the shadows is way off.
The bottom left corner of the image shows the shadow map I computed. It's a depth-map from the POV of the light, which is also where my character is standing.
Here's another shot, from a different angle:
Any ideas what might be causing it to come out like this? Is the depth of the shrub face too similar to the depth of the ground directly behind it, perhaps? If so, how do I get around that?
I'll post the fragment shader below, leave a comment if there's anything else you need to see.
Fragment Shader
#version 330

in vec2 TexCoord0;
in vec3 Tint0;
in vec4 WorldPos;
in vec4 LightPos;

out vec4 FragColor;

uniform sampler2D TexSampler;
uniform sampler2D ShadowSampler;
uniform bool Blend;

const int MAX_LIGHTS = 16;
uniform int NumLights;
uniform vec3 Lights[MAX_LIGHTS];

const float lightRadius = 100;

float distSq(vec3 v1, vec3 v2) {
    vec3 d = v1-v2;
    return dot(d,d);
}

float CalcShadowFactor(vec4 LightSpacePos)
{
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;
    float Depth = texture(ShadowSampler, UVCoords).x;
    if (Depth < (ProjCoords.z + 0.0001))
        return 0.5;
    else
        return 1.0;
}

void main()
{
    float scale;
    FragColor = texture2D(TexSampler, TexCoord0.xy);

    // transparency
    if(!Blend && FragColor.a < 0.5) discard;

    // biome blending
    FragColor *= vec4(Tint0, 1.0f);

    // fog
    float depth = gl_FragCoord.z / gl_FragCoord.w;
    if(depth>20) {
        scale = clamp(1.2-15/(depth-19),0,1);
        vec3 destColor = vec3(0.671,0.792,1.00);
        vec3 colorDist = destColor - FragColor.xyz;
        FragColor.xyz += colorDist*scale;
    }

    // lighting
    scale = 0.30;
    for(int i=0; i<NumLights; ++i) {
        float dist = distSq(WorldPos.xyz, Lights[i]);
        if(dist < lightRadius) {
            scale += (lightRadius-dist)/lightRadius;
        }
    }
    scale *= CalcShadowFactor(LightPos);
    FragColor.xyz *= clamp(scale,0,1.5);
}
I'm fairly certain this is an offset problem. My shadows look to be about 1 block off, but I can't figure out how to shift them, nor what's causing them to be off.
Looks like "depth map bias" actually:
Not exactly sure how to set this... do I just call glPolygonOffset before rendering the scene? Will try it...
Setting glPolygonOffset to 100,100 amplifies the problem:
I set this just before rendering the shadow map:
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(100f, 100.0f);
And then disabled it again. I'm not sure if that's how I'm supposed to do it. Increasing the values amplifies the problem; decreasing them to below 1 doesn't seem to improve it, though.
Notice also how the shadow map in the lower left changed.
Vertex Shader
#version 330

layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in mat4 Transform;
layout(location = 6) in vec4 TexSrc;  // x=x, y=y, z=width, w=height
layout(location = 7) in vec3 Tint;    // x=R, y=G, z=B

uniform mat4 ProjectionMatrix;
uniform mat4 LightMatrix;

out vec2 TexCoord0;
out vec3 Tint0;
out vec4 WorldPos;
out vec4 LightPos;

void main()
{
    WorldPos = Transform * vec4(Position, 1.0);
    gl_Position = ProjectionMatrix * WorldPos;
    LightPos = LightMatrix * WorldPos;
    TexCoord0 = vec2(TexSrc.x+TexCoord.x*TexSrc.z, TexSrc.y+TexCoord.y*TexSrc.w);
    Tint0 = Tint;
}
While world-aligned cascaded shadow maps are great and used in most new games out there, they're not related to why your shadows have a strange offset in your current implementation.
Actually, it looks like you're sampling from the correct texels on the shadow map: the shadows that do appear are exactly where you'd expect them to be. However, your comparison is off.
I've added some comments to your code:
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w; // So far so good...
vec2 UVCoords;
UVCoords.x = 0.5 * ProjCoords.x + 0.5; // Right, you're converting X and Y from clip
UVCoords.y = 0.5 * ProjCoords.y + 0.5; // space to texel space...
float Depth = texture(ShadowSampler, UVCoords).x; // I expect we sample a value in [0,1]
if (Depth < (ProjCoords.z + 0.0001)) // Uhoh, we never converted Z's clip space to [0,1]
    return 0.5;
else
    return 1.0;
So, I suspect you want to compare to ProjCoords.z * 0.5 + 0.5:
if (Depth < (ProjCoords.z * 0.5 + 0.5 + 0.0001))
    return 0.5;
else
    return 1.0;
Also, that bias factor makes me nervous. Better yet, just take it out for now and deal with it once you get the shadows appearing in the right spots:
const float bias = 0.0;
if (Depth < (ProjCoords.z * 0.5 + 0.5 + bias))
    return 0.5;
else
    return 1.0;
I might not be entirely right about how to transform ProjCoords.z to match the sampled value; however, this is likely the issue. Also, if you do move to cascaded shadow maps (I recommend world-aligned), I'd strongly recommend drawing frustums representing what each shadow map is viewing; it makes debugging a whole lot easier.
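Putting that together, here is a hedged sketch of the corrected CalcShadowFactor, assuming the shadow map stores depth in [0, 1] and LightSpacePos is a standard clip-space position:

float CalcShadowFactor(vec4 LightSpacePos)
{
    // perspective divide to NDC, then remap x, y and z from [-1, 1] to [0, 1]
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    vec3 ShadowCoords = ProjCoords * 0.5 + 0.5;

    float Depth = texture(ShadowSampler, ShadowCoords.xy).x;

    const float bias = 0.0; // reintroduce a small bias only once the shadows line up
    if (Depth < ShadowCoords.z + bias)
        return 0.5;
    else
        return 1.0;
}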
This is called the "deer in headlights" effect of buffer-mapped shadows. There are several ways to minimize this effect. Look for "light space shadow mapping".
The NVIDIA OpenGL SDK has a "cascaded shadow maps" example. You might want to check it out (I haven't used it myself, though).
How to improve the quality of my shadows?
The problem could be caused by using an incorrect matrix while rendering the shadows. Your example doesn't show how the light matrices are set. By Murphy's law I'll have to assume the bug lies in this missing piece of code: since you decided that this part isn't important, it probably causes the problem. If the matrix used while testing the shadow is different from the matrix used to render the shadow map, you'll get exactly this problem.
I suggest forgetting about the whole Minecraft thing for a moment and playing around with shadows in a simple application. Make a standalone application with a floor plane and a rotating cube (or teapot or whatever you want), and debug shadow maps there until you get the hang of it. Since you're willing to throw a +100 bounty onto the question, you might as well post the complete code of your standalone sample here, if you still get the problem in the sample. Trying to stick technology you aren't familiar with into the middle of a working(?) engine isn't a good idea anyway. Take it slow, get used to the technique/technology/effect, then integrate it.