Metal Shading Language - FragCoord equivalent?

I am trying to translate a Shadertoy scene into a Metal kernel. The Shadertoy code is:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 dir = rayDirection(45.0, iResolution.xy, fragCoord);
    ...
}
In the OpenGL case, we would send iResolution from the GLFW window, and fragCoord would be gl_FragCoord in the fragment shader.
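For reference, a Shadertoy host wraps mainImage in a fragment shader along these lines (a minimal sketch; the trivial mainImage body and the outColor name are illustrative, while the iResolution name follows Shadertoy's conventions):

#version 330 core
// Sketch of the host-side wrapper Shadertoy effectively generates.
uniform vec3 iResolution; // viewport resolution in pixels, fed from the window
out vec4 outColor;

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Trivial body for illustration: visualize the normalized coordinate.
    fragColor = vec4(fragCoord / iResolution.xy, 0.0, 1.0);
}

void main()
{
    // gl_FragCoord.xy is the window-space pixel position, sampled at
    // pixel centers: (0.5, 0.5) up to (width - 0.5, height - 0.5).
    mainImage(outColor, gl_FragCoord.xy);
}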
I have this in my Metal file:
kernel void compute(texture2d<float, access::write> output [[texture(0)]],
                    uint2 gid [[thread_position_in_grid]]) {
    int width = output.get_width();
    int height = output.get_height();
    ...
}
So I can get my iResolution from the width and height, but I am not sure how to get gl_FragCoord. Does metal_stdlib have something equivalent to gl_FragCoord? Or, if I have to calculate it, how can I obtain the same value?

If you're looking for the "fragment" position in window coordinates (which is what gl_FragCoord supplies), you can use float2(gid), which ranges from (0, 0) to (width - 1, height - 1). This only holds when your grid dimensions (the product of your threadgroup size and threadgroup count) exactly match the dimensions of the destination texture. Note that gl_FragCoord is sampled at pixel centers, so it carries a half-pixel offset relative to float2(gid).
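A minimal sketch of that in the kernel from the question (the +0.5 offset reproduces gl_FragCoord's pixel-center convention; the bounds check and the final write are illustrative):

#include <metal_stdlib>
using namespace metal;

kernel void compute(texture2d<float, access::write> output [[texture(0)]],
                    uint2 gid [[thread_position_in_grid]]) {
    // Skip threads that fall outside the texture when the grid overshoots it.
    if (gid.x >= output.get_width() || gid.y >= output.get_height()) return;

    float2 iResolution = float2(output.get_width(), output.get_height());
    // Pixel-center position, analogous to gl_FragCoord.xy.
    float2 fragCoord = float2(gid) + 0.5;

    // Note: Metal's texture origin is top-left while OpenGL's window
    // coordinates start at the bottom-left, so you may need to flip y
    // (fragCoord.y = iResolution.y - fragCoord.y) to match a GLSL port.
    output.write(float4(fragCoord / iResolution, 0.0, 1.0), gid);
}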

Related

"Scan Through" a large texture glsl

I've encoded some data into a 44487x1 luminance texture:
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es

in vec4 position;

void main() {
    gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif

uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture

out vec4 fragColor;

void main(){
    //data texture dimensions
    vec2 dims = vec2(44487., 1.0);
    //amount by which to translate the data texture
    vec2 offset = vec2(u_time*.5, 0.);
    //canvas coords
    vec2 uv = gl_FragCoord.xy/u_resolution.xy;
    //texture aspect ratio, w/h
    float textureAspect = 44487. / 1.;
    vec3 col = vec3(0.);
    //texture is 44487x wider than the uv range, I guess?
    vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
    //get texture values
    vec3 text = texture(u_texture_7, textCoords).rgb;
    //output
    fragColor = vec4(text, 1.);
}
However, this doesn't seem to work; all I get is a black screen. Is using a wide texture like this a good way to get the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
Alternatively, instead of providing one large texture, could I provide a smaller texture and update the texture uniform values via JS?
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in that 2D canvas rather than in the shader, and then sending the canvas to the shader as a texture.
It might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
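If you'd rather keep the slicing in the shader, here is a minimal sketch of the per-pixel lookup, reusing the uniform names from the question (the scrub speed is an arbitrary placeholder). Note also that a 44487-wide texture exceeds MAX_TEXTURE_SIZE on many GPUs (often 8192 or 16384), and sampling such an incomplete texture returns black, which would explain the black screen:

#version 300 es
precision highp float;

uniform float u_time;
uniform sampler2D u_texture_7; // the 44487x1 data texture

out vec4 fragColor;

void main() {
    float texWidth = 44487.0;
    // Pan by whole texels over time; the speed factor is arbitrary.
    float offsetTexels = floor(u_time * 60.0);
    // One canvas pixel maps to one texel of the data texture.
    float x = (gl_FragCoord.x + offsetTexels) / texWidth;
    // Wrap horizontally and sample the middle of the 1-px-high row.
    vec3 text = texture(u_texture_7, vec2(fract(x), 0.5)).rgb;
    fragColor = vec4(text, 1.0);
}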

How to stop a Shader from distorting a texture

I am trying to learn how to use shaders with GLSL. One of the shaders works, but it distorts the texture of the sprite it operates on. I'm doing this all in SFML.
Distorted texture on left, actual texture on right:
When I started, the texture was being rendered upside down, but subtracting the y component of the coordinates from 1 fixed that issue. The line that is causing the distortion is
vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy);
where sourceSize is a uniform passing in the resolution of something as a vec2. I've been passing various values into this and getting different distorted versions of the texture. I was wondering whether there is a ratio or something else I can pass in to avoid this distortion.
Texture Size in Pixels: 512x512
Passed in values for the above image: 512x512
Shader
uniform sampler2D source;
uniform vec2 sourceSize;
uniform float time;

void main( void )
{
    vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy); // Gets the pixel position in a range of 0.0 to 1.0
    texCoord = vec2(texCoord.x, 1.0 - texCoord.y);     // Inverts the y coordinate
    vec4 Color = texture2D(source, texCoord);          // Gets the current pixel colour
    gl_FragColor = Color; // Output
}
Found a solution. Posting it here in case others need the help.
Changing
vec4 Color = texture2D(source, texCoord); // Gets the current pixel colour
to
vec4 Color = texture2D(source, gl_TexCoord[0].xy); // Gets the current pixel colour
will fix the distortion effect.
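For context: gl_FragCoord is the fragment's position in the window, not within the sprite, so dividing it by the texture size only lines up when the sprite fills the window at a 1:1 scale. gl_TexCoord[0] instead carries the interpolated per-vertex texture coordinates that SFML sets up, which is why it is unaffected by where the sprite sits on screen. A minimal sketch of the fixed shader (the now-unused uniforms are dropped):

uniform sampler2D source;

void main( void )
{
    // Interpolated texture coordinates supplied by SFML per vertex;
    // independent of the sprite's position and size in the window.
    vec2 texCoord = gl_TexCoord[0].xy;
    gl_FragColor = texture2D(source, texCoord); // Output the sampled colour
}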

Project cubemap to 2D texture

I'd like to debug my render-to-cubemap function by projecting the whole thing to a 2D texture, just like this one:
On my render from texture shader I've only got the UV texture coordinates available (ranging from (0,0) to (1,1)). How can I project the cubemap to the screen in a single draw call?
You can do this by rendering 6 quads and using 3D texture coordinates (s,t,p) pointing to the vertices of the cube, i.e. the 8 variations of (+/-1, +/-1, +/-1).
The 2D UV coordinates (s,t), i.e. the 4 variations of (0/1, 0/1), are not usable for the whole CUBE_MAP, only for its individual sides.
Look for txr_skybox in Normal mapping gone horribly wrong to see how the CUBE_MAP is used in a fragment shader.
PS: in OpenGL the texture coordinates are called s,t,p,q instead of u,v,w,...
Here is a related QA:
rendering cube map layout, understanding glTexCoord3f parameters
My answer is essentially the same as the accepted one, but I have used this very technique to debug my depth cubemap (used for shadow casting) in my current project, so I thought I would include a working sample of the fragment shader code I used.
Unfolding cubemap
This is supposed to be rendered to a rectangle with aspect ratio 3/4 drawn directly on top of the screen, with s,t going from (0,0) in the lower-left corner to (1,1) in the upper-right corner.
Note that in this case the cubemap I use is inverted: objects on the +(x,y,z) side of the cubemap origin are rendered to -(x,y,z). The direction I chose as up for the top/bottom quads is completely arbitrary, so to get this example to work you may need to change some signs or swap s and t in places. Also note that I only read one channel here, as it is a depth map:
Fragment shader code for an unfolded cubemap like the one in the question:
//Should work in most other versions
#version 400 core

uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;

void main()
{
    // In this example I use a depth map with only 1 channel, but the projection
    // should work with a colored cubemap too; just replace this with a vec3 or vec4.
    float depth = 0.0;
    vec2 localST = ST;

    // Scale tex coordinates such that each quad has local coordinates from 0,0 to 1,1
    localST.t = mod(localST.t * 3.0, 1.0);
    localST.s = mod(localST.s * 4.0, 1.0);

    // Due to the way my depth cubemap is rendered, objects on the -x,y,z side
    // are projected to the positive x,y,z side.

    // Inside the column where top/bottom are to be drawn?
    if (ST.s * 4.0 > 1.0 && ST.s * 4.0 < 2.0)
    {
        // Bottom (-y) quad
        if (ST.t * 3.0 < 1.0)
        {
            // Get the lower y texture, which is projected to the +y part of my cubemap
            vec3 dir = vec3(localST.s * 2.0 - 1.0, 1.0, localST.t * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        // Top (+y) quad
        else if (ST.t * 3.0 > 2.0)
        {
            // Due to the (arbitrary) direction I chose as up in my depth view matrix,
            // I multiply the latter coordinate by -1 here.
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -1.0, -localST.t * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else // Front (-z) quad
        {
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -localST.t * 2.0 + 1.0, 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    // If not, only these ranges should be drawn
    else if (ST.t * 3.0 > 1.0 && ST.t * 3.0 < 2.0)
    {
        if (ST.s * 4.0 < 1.0) // Left (-x) quad
        {
            vec3 dir = vec3(-1.0, -localST.t * 2.0 + 1.0, localST.s * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else if (ST.s * 4.0 < 3.0) // Right (+x) quad (front was done above)
        {
            vec3 dir = vec3(1.0, -localST.t * 2.0 + 1.0, -localST.s * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else // Back (+z) quad
        {
            vec3 dir = vec3(-localST.s * 2.0 + 1.0, -localST.t * 2.0 + 1.0, -1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    else // Top/bottom column, but outside where we need to put something
    {
        // No need for fancy semi-transparent borders between the quads;
        // this is just for debugging purposes after all.
        discard;
    }

    out_color = vec4(vec3(depth), 1.0);
}
Here is a screenshot of this technique used to render my depth map in the lower-right corner of the screen (rendered with a point light source placed at the very center of an empty room, with no objects other than the walls and the player character):
Equirectangular projection
I must, however, say that I prefer using an equirectangular projection for debugging cubemaps, as it doesn't have any holes in it. Luckily, these are even easier to make than unfolded cubemaps: just use a fragment shader like the one below (still with s,t going from (0,0) to (1,1) from the lower-left to the upper-right corner), but this time with aspect ratio 1/2:
//Should work in most other versions
#version 400 core

uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;

void main()
{
    // Convert the 2D quad coordinate to spherical angles.
    float phi = ST.s * 3.1415 * 2.0;
    float theta = (-ST.t + 0.5) * 3.1415;
    // Direction on the unit sphere for this fragment.
    vec3 dir = vec3(cos(phi) * cos(theta), sin(theta), sin(phi) * cos(theta));
    // In this example I use a depth map with only 1 channel,
    // but the projection should work with a colored cubemap too.
    float depth = texture(dynamic_texture, dir).r;
    out_color = vec4(vec3(depth), 1.0);
}
Here is a screenshot where an equirectangular projection is used to display my depth-map in the lower-right corner:

Uniform point arrays and managing fragment shader coordinate systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on Shadertoy)
Unfortunately, it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through a uniform, then iterate over each point inside the shader like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;
    for (int i = 0; i < 100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = (gl_FragCoord.xy / resolution.xy);
        float d = distance(position, source);
        intensity += exp(-0.5 * d * d);
    }
    intensity = 3.0 * pow(intensity, 0.02);
    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity * 0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity - 1.0, 0.5 + (intensity - 1.0) * 0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0 - intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
    size(1024, 1024, P3D);
    background(255);
    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i < 100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }
    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}

void draw()
{
    for (int i = 0; i < 100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);
    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments, it seems you have confused two different approaches:
1. Draw a single full-screen polygon, pass in the points, and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader, and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels while the code expects a 0 to 1 range, so d is large and the points come out black. Fixing this as #RetoKoradi describes should address the black points, but I suspect you'll find ramp clipping issues when many points are in close proximity. Passing points into the shader also limits scalability and is inefficient unless the points cover the whole viewport.
Of the two, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points, and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
- resolution isn't set correctly.
- The point coordinates aren't in the range 0 to 1 on the screen.
- Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance, you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending (see the sketch below). This saves processing pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case the per-pixel intensity.
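As an illustration of that last point, here is a sketch of a disc-splat fragment shader with a finite-support kernel (the radius value is a hypothetical choice; center is the clip-space varying from the code above):

uniform vec2 resolution;
varying vec2 center; // clip-space point center, as in the snippet above

void main()
{
    // Point center in pixels, as computed earlier.
    vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
    // Normalize the distance by the disc radius in pixels.
    float radiusPixels = 60.0;
    float d = distance(gl_FragCoord.xy, centerPixels) / radiusPixels;
    // Finite-support kernel: exactly 0 for d >= 1, so fragments outside
    // the disc contribute nothing under additive blending.
    float density = 1.0 - smoothstep(0.0, 1.0, d);
    gl_FragColor = vec4(vec3(density), 1.0);
}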
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you want the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
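Putting that fix into the question's loop gives a minimal sketch (only the normalization of source changes):

float intensity = 0.0;
for (int i = 0; i < 100; i++)
{
    // Bring the point from pixels into the same [0,1] space as position.
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5 * d * d);
}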

Finding the pixel color at a specific coordinate from a sampler2D using GLSL

I have a 3D object in my scene, and the fragment shader for this object receives a texture that is the same size as the screen. I want to take the coordinates of the current fragment and look up the color of the image at the same position. Is this possible? Can someone point me in the right direction?
The fragment shader has a built-in value called gl_FragCoord that supplies the pixel coordinates of the target fragment. You must divide this by the width and height of the viewport to get the texture coordinates for lookup. Here's a short example:
uniform vec2 resolution;
uniform sampler2D backbuffer;

void main( void ) {
    vec2 position = ( gl_FragCoord.xy / resolution.xy );
    vec4 color = texture2D(backbuffer, position);
    // ... do something with it ...
}
For a complete working example, try this in a WebGL-capable browser:
http://glslsandbox.com/e#375.15