Re-defining Shader variables draws them all? - glsl

I'm fairly new to shaders and came across the Book of Shaders website. Early on, this piece of code appears, and since variables haven't been covered yet at that point, I can't get my head around the color variable. The code simultaneously displays a left-to-right fading background (black to white) and a green diagonal line.
So, essentially, you declare vec3 color to be vec3(y), which means all three r, g, b values will be the same throughout. I get why the fading background occurs, because r, g and b stay equal and range between 0 and 1.
But coming from a JS and PHP background, I'd normally expect that if I change the value of a variable later, only the new value counts. So I was expecting the lerped value from color = (1.0-pct)*color+pct*vec3(0.0,1.0,0.0); to overwrite the previous vec3 color = vec3(y); and be the one used for gl_FragColor. But it appears both versions of color are drawn: the fading background and the green line. Is this how shader code works, by drawing every definition of a variable?
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

// Plot a line on Y using a value between 0.0-1.0
float plot(vec2 st) {
    return smoothstep(0.02, 0.0, abs(st.y - st.x));
}

void main() {
    vec2 st = gl_FragCoord.xy/u_resolution;
    float y = st.x;
    vec3 color = vec3(y);

    // Plot a line
    float pct = plot(st);
    color = (1.0-pct)*color+pct*vec3(0.0,1.0,0.0);

    gl_FragColor = vec4(color,1.0);
}

First, vec3 color = vec3(y); declares color and assigns the left-to-right black-to-white gradient to it. Then, color = (1.0-pct)*color+pct*vec3(0.0,1.0,0.0); assigns a new value to color, which is a lerp between its old value (color) and the new value vec3(0.0,1.0,0.0) (green). It is equivalent to:
color *= (1.0-pct);
color += pct*vec3(0.0,1.0,0.0);
The old value is overwritten, but because the new value is computed from the old one, you can still see the background gradient.
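Note that GLSL has a built-in function for exactly this kind of blend: mix(a, b, t) returns (1.0 - t) * a + t * b. So the last assignment in main() could equivalently be written as:
// Blend from the gradient toward green by pct; same lerp as above, built in
color = mix(color, vec3(0.0, 1.0, 0.0), pct);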

Related

OpenGL shader fill works with constant color, doesn't work with interpolation, how to debug?

We have code that mostly works for filling polygons on a map, though it currently draws convex hulls and over-fills some areas (it will require tessellation).
The shader is given a set of triangle fan operations and draws using a hardcoded yellow color (which works).
Then we try to interpolate based on the value, and everything turns black (does not work).
Here is the fragment shader. Values coming in are all 0.0 to 1.0
With minVal = 0.0, maxVal = 1.0
and colors set to (0,0,1) and (1,0,0)
While I would appreciate knowing the bug, I would much rather learn how I can debug it. I need to be able to get at the values in the shader and see what is happening. In short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics) but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;

void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    //FragColor = vec4(1,1,0,1); //texture(ourTexture, f);
    FragColor = vec4(minColor*(1.0-f) + maxColor*f, 1.0);
}
It turns out that we were using glUniform4fv to set a vec3 color with an RGBA value.
There was no compile or runtime error. glUniform* calls do not return an error directly; the mismatch is only reported if you poll glGetError(), which we were not doing.
The shader also did not generate an error, but the variables minColor and maxColor were not correctly set.
Thus the interpolation was always black.
vec4(minColor*(1.0-f) + maxColor * f,1.0);
There should have been an error for attempting to set an RGBA color into a vec3 variable; in fact this does raise GL_INVALID_OPERATION, but nothing fails loudly unless you check for it.
I have since found printf-style approaches on Stack Overflow that would have allowed viewing this kind of information: Convert floating-point numbers to decimal digits in GLSL
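Short of a real debugger, a common low-tech technique is to write the suspect value straight to the output color and judge it by eye. For example, to check f in the shader above, the final line could temporarily be replaced with:
// Visualize f as grayscale: black = 0.0, white = 1.0.
// A solid black screen points at val/minVal/maxVal;
// a visible gradient means the color uniforms are the problem instead.
FragColor = vec4(vec3(f), 1.0);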

WebGL: Interpolate between old and new value

I have been struggling with this for two days now; it seems like such a simple task, but I cannot find the right solution.
With p5.js I am creating a GL instance and sending uniforms to the vertex shader.
Now I want to change the global variable behind the uniform and have the shader interpolate from the old value to the new value.
In the example below, after the button is clicked, it should not change the color instantly, but make small increments to get to the new color.
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 iResolution;
uniform float iTime;
uniform vec3 uSky;

void main()
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = gl_FragCoord.xy/iResolution.xy;
    uv -= 0.5;
    uv.x *= iResolution.x/iResolution.y;

    vec3 sky = uSky;
    //sky = mix(sky, uSky, 0.001);
    // ^- This needs to be incremented up to 1 every time the uniform changes

    // Output to screen
    gl_FragColor = vec4(sky, 1.0);
}
https://glitch.com/~shader-prb
As Rabbid76 commented, I cannot change the uniform value from within the shader.
The solution was to lerp from the host.
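For completeness, a variation that keeps the blend in the shader is to have the host pass both colors plus the time of the change and let the shader do the mix. A minimal sketch, assuming hypothetical uniforms uSkyOld, uSkyNew and uChangeTime set from p5.js:
#ifdef GL_ES
precision mediump float;
#endif

uniform float iTime;
uniform vec3 uSkyOld;      // color before the click (hypothetical uniform)
uniform vec3 uSkyNew;      // color after the click (hypothetical uniform)
uniform float uChangeTime; // iTime captured when the click happened (hypothetical)

void main()
{
    // Ramp t from 0 to 1 over one second after the change, then hold at 1
    float t = clamp(iTime - uChangeTime, 0.0, 1.0);
    gl_FragColor = vec4(mix(uSkyOld, uSkyNew, t), 1.0);
}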

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment and paint them with a circle colored with a gradient depending of that computation.
For example:
(From a working example I set up on Shadertoy)
Unfortunately, it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through a uniform. Then, inside the shader, I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;
    for (int i=0; i<100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }
    intensity = 3.0*pow(intensity, 0.02);

    if (intensity<=1.0)
        gl_FragColor = vec4(0.0, intensity*0.5, 0.0, 1.0);
    else if (intensity<=2.0)
        gl_FragColor = vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0-intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
    size(1024, 1024, P3D);
    background(255);
    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i < 100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }
    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}

void draw()
{
    for (int i = 0; i < 100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);
    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
1. Draw a single full-screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points come out black. Fixing this as @RetoKoradi describes should address the black points, but I suspect you'll find ramp clipping issues when many points are in close proximity. Passing all points into the shader also limits scalability and is inefficient unless the points cover most of the viewport.
For the reasons below, I think sticking with approach 2 is better. To restructure your code for it: remove the loop, don't pass in the array of points, and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
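As a sketch of that limited falloff, the fragment shader for splatting a single disc could look like this (the pointCenter and radius uniforms are assumptions; any smooth kernel that reaches exactly zero at its edge works):
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 pointCenter; // point position in pixels (assumed uniform)
uniform float radius;     // cutoff radius in pixels (assumed uniform)

void main()
{
    float d = distance(gl_FragCoord.xy, pointCenter);
    // Quadratic kernel: 1 at the center, exactly 0 at d == radius and beyond,
    // so fragments outside the disc add nothing when blended additively
    float t = clamp(1.0 - (d * d) / (radius * radius), 0.0, 1.0);
    gl_FragColor = vec4(vec3(t * t), 1.0);
}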
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
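Applied to the shader from the question, a minimal corrected version (keeping the original intensity ramp unchanged) would be:
#ifdef GL_ES
precision mediump float;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

void main()
{
    float intensity = 0.0;
    // Fragment position normalized to [0,1]
    vec2 position = gl_FragCoord.xy / resolution.xy;
    for (int i = 0; i < 100; i++)
    {
        // Normalize the point from pixels into the same [0,1] space
        vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }
    intensity = 3.0*pow(intensity, 0.02);

    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity*0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0-intensity, 0.0, 1.0);
}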

Odd OpenGL fragment shader behavior with gl_PointCoord

I'm trying to draw a 2 dimensional line the has a smooth gradient between two colors. My application allows me to click and drag, the first point of the line segment is the first click point, the second point of the line follows the mouse cursor position.
I have the line drawing using glDrawArrays(GL_LINES, 0, points.size());, where points is a two-element array of points.
The line draws fine, and clicking and dragging to move it around works, but I'm seeing unexplainable behavior from my fragment shader:
uniform vec4 color1; // Color at p1
uniform vec4 color2; // Color at p2
out vec4 fragColor;

void main()
{
    // Average the fragment positions
    float weight = (gl_PointCoord.s + gl_PointCoord.t) * 0.5f;
    // Weight the first and second color
    fragColor = (color1 * weight) + (color2 * (1.0f - weight));
}
color1 is red, color2 is green.
As I drag my line around it bounces between entirely red, entirely green, the gradient I desire, or some solid mixture of red and green on every screen redraw.
I suspect I'm using gl_PointCoord incorrectly, but I can't inspect the values since they're in the shader. I tried the following in my shader:
fragColor = (color1 + color2) * 0.5f;
And it gives a stable yellow color, so I have some confidence that the colors are stable between redraws.
Any tips?
gl_PointCoord is only defined for point primitives. Using it with GL_LINES is simply undefined behavior and is never going to work.
If you want a smooth gradient, you should add a weight attribute to your line vertices and set it to 0 or 1 for start and end points, respectively.
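A minimal sketch of that approach, assuming a GL 3.3 core context (the weight attribute and vWeight varying are names I'm choosing here). Vertex shader:
#version 330 core
layout(location = 0) in vec2 position;
layout(location = 1) in float weight; // set to 0.0 at p1 and 1.0 at p2 (assumed attribute)
out float vWeight;

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    vWeight = weight; // the rasterizer interpolates this along the line
}
Fragment shader:
#version 330 core
uniform vec4 color1; // Color at p1
uniform vec4 color2; // Color at p2
in float vWeight;
out vec4 fragColor;

void main()
{
    // Well-defined gradient: vWeight runs from 0 at p1 to 1 at p2
    fragColor = mix(color1, color2, vWeight);
}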

GLSL - put decal texture into base texture with color indication

I'm trying to write a simple shader that puts a "mark" texture (64*64) onto a base texture (128*128). To indicate where the mark must go, I use a cyan-colored, mark-sized (64*64) region on the base texture, which should become the mark in the final output.
Fragment shader
precision lowp float;

uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;

varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;

const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN

void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if (base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); // some texelFetch magic must happen here
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it does not work as it should; I get something like this (the transparency is only for demonstration; there is no cyan region, only a piece of the "T"):
I tried to figure it out, and something like texelFetch seems to be what I need, but I can't work out how to get the texture coordinate of a cyan texel on the base texture and convert it so that the first column/first row cyan base texel maps to the first column/first row mark texel, the second column/first row base texel to the second column/first row mark texel, etc.
I think there's a way to do this in a single pass - but it involves using another texture that is capable of holding the information presented below. So you're going to increase your texture memory usage.
In this approach, the second texture (it can be generated by post-processing the original texture either offline or somehow) contains the UV map for the decal
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coordinates to sample the decal texture.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
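A sketch of the resulting fragment shader, assuming a us_uvmap_tex uniform holds the decal-uvmap texture described above (that uniform name is mine):
precision lowp float;

uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
uniform sampler2D us_uvmap_tex; // R/G hold the decal UVs (assumed uniform)

varying vec2 vv_base_tex;

const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN

void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    // Exact comparison assumes unfiltered, exactly-cyan texels
    if (base_col == c_mark_col)
    {
        // Where are we inside the cyan square? Sample the mark there
        vec2 mark_uv = texture2D(us_uvmap_tex, vv_base_tex).rg;
        vec4 mark_col = texture2D(us_mark_tex, mark_uv);
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}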