In GLSL (WebGL), how can I get the screen pixel X position in the fragment shader?

Initially I was using just gl_FragCoord.x, which works great up to a point. However, the range of gl_FragCoord.xy is between 0.5 and 1023.5, so beyond 1023.5 x just returns the same value.
I have a uniform with the actual canvas size.
uniform vec2 resolution;
void main(void) {
    float pixelXPos = ?
}
Assuming the width is 1800, how can I get the actual pixel screen X position in the fragment shader for pixels above 1023.5?
For context, I have a single square that covers the entire canvas. I want the shader to paint a pattern on that square, so I need to know each pixel's x and y coordinates (in screen space) in order to know what to paint it.
Thanks!

You have to use the highp precision qualifier for pixelXPos:
highp float pixelXPos = gl_FragCoord.x;
The integer range for mediump is only guaranteed to be at least (-2^10, 2^10) ([-1024, 1024]), and a mediump float is only guaranteed a relative precision of 2^-10, so above 1024 it can no longer represent the half-pixel centers that gl_FragCoord reports. See OpenGL ES Shading Language 1.00 - 4.5 Precision and Precision Qualifiers.
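Putting it together, a minimal sketch of a fixed shader (the gradient output is just illustration; note that highp support in fragment shaders is technically optional in GLSL ES 1.00, though WebGL implementations commonly provide it):
precision highp float;   // mediump loses sub-pixel accuracy above 1024

uniform vec2 resolution; // actual canvas size, e.g. (1800.0, 600.0)

void main(void) {
    float pixelXPos = gl_FragCoord.x;   // now exact: 0.5 .. resolution.x - 0.5
    float t = pixelXPos / resolution.x; // normalized 0..1 across the canvas
    gl_FragColor = vec4(t, t, t, 1.0);  // visualized as a horizontal gradient
}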

Related

OpenGL shader fill works with constant color, doesn't work with interpolation, how to debug?

We have code that mostly works filling polygons on a map, though it draws convex hulls and fills in some areas (will require tessellation).
The shader is given a set of triangle fan operations, and draws using hardcoded color yellow (and it works).
Then we try to interpolate based on the value, and it turns black (does not work).
Here is the fragment shader. Values coming in are all 0.0 to 1.0
With minVal = 0.0, maxVal = 1.0
and colors set to (0,0,1) and (1,0,0)
While I would appreciate knowing the bug, I would much rather know how I can debug it. I need to be able to get the values in the shader and see what is happening; in short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics) but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;

void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    //FragColor = vec4(1,1,0,1);//texture(ourTexture, f);
    FragColor = vec4(minColor * (1.0 - f) + maxColor * f, 1.0);
}
It turns out that we were using glUniform4fv to set a color with rgba.
There was no compile or runtime error. These calls do not have an error return that I know of.
The shader also did not generate an error, but the variables minColor and maxColor were not correctly set.
Thus the interpolation was always black.
vec4(minColor*(1.0-f) + maxColor * f,1.0);
There should have been an error for attempting to set an RGBA color into a vec3 variable. (In fact, OpenGL does flag this: glUniform4fv on a uniform declared as vec3 generates GL_INVALID_OPERATION, but it is only visible if you poll glGetError().)
I have found printf functions on stackoverflow that would have allowed viewing this kind of information: Convert floating-point numbers to decimal digits in GLSL
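Short of a real debugger, a common fallback is to route the suspect value straight to the output color; for example, a minimal sketch against the shader above:
#version 330 core
out vec4 FragColor;
in float val;
uniform float minVal;
uniform float maxVal;

void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    // Visualize f as grayscale: solid black means f is stuck at 0.0
    // (e.g. a uniform never arrived), solid white means 1.0, and a
    // gradient means the interpolation itself is working.
    FragColor = vec4(f, f, f, 1.0);
}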

Shadertoy - fragCoord vs iResolution vs fragColor

I'm fairly new to Shadertoy and GLSL in general. I have successfully duplicated numerous Shadertoy shaders into Blender without actually knowing how it all works. I have looked for tutorials but I'm more of a visual learner.
If someone could explain, or even better provide some images describing, the difference between fragCoord, iResolution, and fragColor, that would be great!
I'm mainly interested in the Numbers. Because I use Blender I'm used to the canvas being 0 to 1 -or- -1 to 1
This one in particular has me a bit confused.
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;
I can't reproduce the remaining code in Blender without knowing the coordinate system.
Any help would be greatly appreciated!
That is normal; you cannot reproduce this code in Blender without knowing the coordinate system.
The Shadertoy documentation states:
Image shaders implement the mainImage() function to generate procedural images by calculating a color for each pixel in the image. This function is invoked once for each pixel, and the host application must provide the appropriate input data and retrieve the output color to assign it to the corresponding pixel on the screen. The signature of this function is:
void mainImage( out vec4 fragColor, in vec2 fragCoord);
where fragCoord contains the coordinates of the pixel for which the shader must calculate a color. These coordinates are counted in pixels, with values from 0.5 to resolution-0.5 over the entire rendering surface, and the resolution of this surface is transmitted to the shader via the uniform iResolution variable.
Let me explain.
The iResolution variable is a uniform vec3 which contains the dimensions of the window; it is sent to the shader by some OpenGL code in the host application.
The fragCoord variable is a built-in variable that contains the coordinates of the pixel where the shader is being applied.
More concretely:
fragCoord: a vec2 that runs from 0 to 640 on the X axis and from 0 to 360 on the Y axis (for a 640×360 rendering surface)
iResolution: a vec3 whose X value is 640 and Y value is 360 (so examples use iResolution.xy when a vec2 is needed)
A quick note on how vectors work in OpenGL:
If you also have a hard time understanding how vectors work in OpenGL, I highly recommend reading the answer by Homan below, a very useful introduction to OpenGL swizzling.
The first image was calculated with the following code:
// Normalized pixel coordinates (between 0 and 1)
vec2 uv = fragCoord/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from 0,0 in the lower-left to 1,1 in the upper-right. This is the default lower-left window space set by OpenGL.
The second image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 and 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from -0.5,-0.5 in the lower-left to 0.5,0.5 in the upper-right, because in the first step we subtract half of the window size (iResolution.xy * 0.5) from each pixel coordinate (fragCoord). You can see the effect in the way the red and green values don't become visible until much later.
You might also want to normalize only the y axis by changing the first step to
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.y;
Depending on your purpose, the image can seem stretched if you normalize both axes (the aspect ratio is lost), so dividing only by iResolution.y, which keeps uv.y in -0.5 to 0.5 while uv.x spans plus or minus half the aspect ratio, is a possible strategy.
The third image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 to 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position using ceil() function
// The ceil() function returns the smallest integer that is greater than or equal to the uv value
vec3 col = vec3(ceil(uv.x),ceil(uv.y),0);
// Output to screen
fragColor = vec4(col,1.0);
The ceil() function lets us see that the center of the image is 0,0: the negative lower-left quadrant maps to 0 while everything else snaps to 1, so the colors jump exactly at the center.
As for the second part of the shadertoy documentation:
The output color is returned in fragColor as a four-component vector, the last component being ignored by the client. The result is retrieved in an "out" variable in anticipation of the future addition of several rendering targets.
Really all this means is that fragColor contains four values that are passed to the next stage in the rendering pipeline. You can find more about in and out variables here.
The values in fragColor determine the color of the pixel where the shader is being applied.
If you want to learn more about shaders these are some good starting places:
the book of shader - uniform
learn OpenGL - shader
Not to take away from the accepted answer, which is very thorough. But in case anyone else was confused about the types, iResolution is a 'uniform highp 3-component vector of float'... so actually a vec3? That's why we see in examples that fragCoord (actually a vec2) is divided by iResolution.xy (the .xy gives us a vec2). But what is the '.xy' thing? Is it a method? An attribute or property? With some random googling I found out that the '.xy' tacked on at the end is called 'swizzling'
https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Vectors
(for convenience the gist of it is here below)
Swizzling
You can access the components of vectors using the following syntax:
vec4 someVec;
someVec.x + someVec.y;
This is called swizzling. You can use x, y, z, or w, referring to the
first, second, third, and fourth components, respectively.
The reason it has that name "swizzling" is because the following syntax is entirely valid:
vec2 someVec;
vec4 otherVec = someVec.xyxx;
vec3 thirdVec = otherVec.zyy;
You can use any combination of up to 4 of the letters to create a vector (of the same basic type) of that length. So otherVec.zyy is a vec3, which is how we can initialize a vec3 value with it. Any combination of up to 4 letters is acceptable, so long as the source vector actually has those components. Attempting to access the 'w' component of a vec3 for example is a compile-time error.
Swizzling also works on l-values (left values?):
vec4 someVec;
someVec.wzyx = vec4(1.0, 2.0, 3.0, 4.0); // Reverses the order.
someVec.zx = vec2(3.0, 5.0); // Sets the 3rd component of someVec to 3.0 and the 1st component to 5.0
However, when you use a swizzle as a way of setting component values, you cannot use the same swizzle component twice. So someVec.xx = vec2(4.0, 4.0); is not allowed.
Additionally, there are 3 sets of swizzle masks. You can use xyzw, rgba (for colors), or stpq (for texture coordinates). These three sets have no actual difference; they're just syntactic sugar. You cannot combine names from different sets in a single swizzle operation. So ".xrs" is not a valid swizzle mask.
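For instance, these masks pick out the same components; only the spelling differs:
vec4 color = vec4(0.2, 0.4, 0.6, 1.0);
vec3 a = color.rgb;   // same components as color.xyz
vec2 b = color.st;    // same components as color.xy
// vec2 c = color.xr; // compile error: masks from different sets can't mix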
In OpenGL 4.2 or ARB_shading_language_420pack, scalars can be swizzled as well. They obviously only have one source component, but it is legal to do this:
float aFloat;
vec4 someVec = aFloat.xxxx;
A related variant maps the coordinates to the -1 to 1 range instead:
// -1 to 1
vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.xy;
vec3 col = vec3(uv.x, uv.y, 0.0);
fragColor = vec4(col, 1.0);

Can anyone explain these snippets related to WebGL

I am referring to this link to learn how to render a texture in WebGL.
I have some doubts, as it is not very easy for a beginner to understand.
What does these snippets mean for GLSL:
vec2 zeroToOne = a_position / u_resolution;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 clipSpace = zeroToTwo - 1.0;
Also, I don't want to fill the entire canvas if my image is bigger. I want to render all textures at 512 × 384 (4:3). How do I do that by modifying the vertices?
Since I wrote the sample you linked to, I'm curious how I can improve the explanation already on that site.
The sample you linked to is from this page.
That page says right at the top
This is a continuation from WebGL Fundamentals. If you haven't read that I'd suggest going there first
That page says
WebGL only cares about 2 things. Clipspace coordinates and colors. Your job as a programmer using WebGL is to provide WebGL with those 2 things. You provide 2 "shaders" to do this. A Vertex shader which provides the clipspace coordinates and a fragment shader that provides the color.
Clipspace coordinates always go from -1 to +1 no matter what size your canvas is
It then shows an example using clip space coordinates.
After that it says we'd probably rather work in pixels and shows a shader with comments that details how to convert from pixels to clip space
For 2D stuff you would probably rather work in pixels than clipspace so let's change the shader so we can supply rectangles in pixels and have it convert to clipspace for us. Here's the new vertex shader
attribute vec2 a_position;
uniform vec2 u_resolution;

void main() {
    // convert the rectangle from pixels to 0.0 to 1.0
    vec2 zeroToOne = a_position / u_resolution;

    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;

    // convert from 0->2 to -1->+1 (clipspace)
    vec2 clipSpace = zeroToTwo - 1.0;

    gl_Position = vec4(clipSpace, 0, 1);
}
In fact, the sample you linked to has those exact same comments in the code.
I'd love to hear any ideas how I can make that clearer
This code converts a_position from pixel space (0 to u_resolution) into the -1 to +1 clip space range.
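A worked example may help; the resolution and position here are hypothetical values, not from the original sample:
// assuming u_resolution = vec2(800.0, 600.0) and a_position = vec2(400.0, 150.0)
vec2 zeroToOne = a_position / u_resolution; // (0.5, 0.25)
vec2 zeroToTwo = zeroToOne * 2.0;           // (1.0, 0.5)
vec2 clipSpace = zeroToTwo - 1.0;           // (0.0, -0.5): horizontal center, lower half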

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for x positions and one for y positions of each point - to the shader through a uniform. Then inside the shader I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;
    for (int i = 0; i < 100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = (gl_FragCoord.xy / resolution.xy);
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }
    intensity = 3.0*pow(intensity, 0.02);
    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity*0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0-intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
    size(1024, 1024, P3D);
    background(255);
    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i < 100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }
    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}

void draw()
{
    for (int i = 0; i < 100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);
    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as @RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
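As a sketch of what such a limited density function might look like (assuming the pos varying from the vertex shader above runs from -1 to 1 across each point's quad; that range is an assumption, not something the original code guarantees):
varying vec2 pos; // offset from the point center, -1..1 across the quad

void main() {
    float r2 = dot(pos, pos);                // squared distance from the center
    float density = max(0.0, 1.0 - r2);      // falls to exactly zero at the disc edge
    gl_FragColor = vec4(vec3(density), 1.0); // accumulated with additive blending
}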
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
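With that change, the loop body compares like with like:
for (int i = 0; i < 100; i++)
{
    // both operands are now in normalized [0, 1] window coordinates
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}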

How to get pixel information inside a fragment shader?

In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;
void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well.
But I have a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red, and everything else to black. How do I do a:
"if currentSelf.Position() == (100,100); then color=red; else color=black?"
?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that is returning. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
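For example, a minimal sketch (assuming a legacy gl_FragColor-style fragment shader): the fragment covering pixel (100, 100) has a gl_FragCoord center of (100.5, 100.5), so test the containing range rather than comparing for equality:
void main(void) {
    // gl_FragCoord.xy holds the window-space position of this fragment's center
    bool isTarget = gl_FragCoord.x >= 100.0 && gl_FragCoord.x < 101.0 &&
                    gl_FragCoord.y >= 100.0 && gl_FragCoord.y < 101.0;
    gl_FragColor = isTarget ? vec4(1.0, 0.0, 0.0, 1.0)  // red at pixel (100, 100)
                            : vec4(0.0, 0.0, 0.0, 1.0); // black everywhere else
}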
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to manually pass this in), invert the size and either add or subtract these sizes from the S and T component of the texture coordinate.
Easy peasy.
Just compute the size of a pixel based on resolution. Then look up +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here