In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;

void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify it, etc., and it works well.
But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do something like:
"if currentSelf.Position() == (100,100): color = red; else: color = black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
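For example, a minimal sketch of the red-pixel test from the question (assuming no multisampling, so one fragment per pixel):

void main(void) {
    // gl_FragCoord is in window space; the fragment covering pixel (100, 100)
    // has its center at (100.5, 100.5), so test a range instead of equality.
    if (gl_FragCoord.x >= 100.0 && gl_FragCoord.x < 101.0 &&
        gl_FragCoord.y >= 100.0 && gl_FragCoord.y < 101.0) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black
    }
}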
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to pass it in manually), take the reciprocal of that size, and either add or subtract the result from the S and T components of the texture coordinate.
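A minimal sketch of that, assuming the application passes the texture dimensions in a uniform (texSize is an illustrative name, not part of the question's code):

uniform sampler2D tex;
uniform vec2 texSize; // (width, height) in texels, set by the application

void main(void) {
    vec2 onePixel = 1.0 / texSize; // size of one texel in texture coordinates
    vec4 self  = texture2D(tex, gl_TexCoord[0].st);
    vec4 right = texture2D(tex, gl_TexCoord[0].st + vec2(onePixel.x, 0.0)); // texel at (x+1, y)
    vec4 up    = texture2D(tex, gl_TexCoord[0].st + vec2(0.0, onePixel.y)); // texel at (x, y+1)
    gl_FragColor = self;
}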
Easy peasy.
Just compute the size of a pixel based on resolution. Then look up +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here
I'm fairly new to Shadertoy and GLSL in general. I have successfully duplicated numerous Shadertoy shaders into Blender without actually knowing how it all works. I have looked for tutorials but I'm more of a visual learner.
If someone could explain, or even better provide some images describing the difference between fragCoord, iResolution, and fragColor, that would be great!
I'm mainly interested in the numbers. Because I use Blender, I'm used to the canvas going from 0 to 1, or from -1 to 1.
This one in particular has me a bit confused.
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;
I can't reproduce the remaining code in Blender without knowing the coordinate system.
Any help would be greatly appreciated!
That's expected: you cannot reproduce this code in Blender without knowing the coordinate system.
The Shadertoy documentation states:
Image shaders implement the mainImage() function to generate
procedural images by calculating a color for each pixel in the image.
This function is invoked once in each pixel and the host application
must provide the appropriate input data and retrieve the output color
to assign it to the corresponding pixel on the screen. The signature
of this function is:
void mainImage( out vec4 fragColor, in vec2 fragCoord);
where fragCoord contains the coordinates of the pixel for which the
shader must calculate a color. These coordinates are counted in pixels
with values from 0.5 to resolution-0.5 over the entire rendering
surface and the resolution of this surface is transmitted to the
shader via the uniform iResolution variable.
Let me explain.
The iResolution variable is a uniform vec3 which contains the dimensions of the window; it is sent to the shader by the host application with a bit of OpenGL code.
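For example, the host might set it with something like this (prog being the linked shader program, a name used here for illustration):

glUniform3f(glGetUniformLocation(prog, "iResolution"), 640.0f, 360.0f, 1.0f);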
The fragCoord variable is a built-in variable that contains the coordinates of the pixel where the shader is being applied.
More concretely:
fragCoord: a vec2 that, for a 640×360 rendering surface, runs from 0 to 640 on the X axis and from 0 to 360 on the Y axis
iResolution: a vec3 whose X value is 640 and whose Y value is 360 in that example
A quick note on how vectors work in OpenGL: if you also have a hard time understanding how they work, I highly recommend reading Homan's answer below, a very useful introduction to GLSL swizzling.
This image was calculated with the following code:
// Normalized pixel coordinates (between 0 and 1)
vec2 uv = fragCoord/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (0,0) in the lower-left to (1,1) in the upper-right. This is the default lower-left window space used by OpenGL.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 and 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (-0.5,-0.5) in the lower-left to (0.5,0.5) in the upper-right because, in the first step, we subtract half of the window size (iResolution.xy * 0.5) from each pixel coordinate (fragCoord). You can see the effect in the way the red and green values don't become visible until much later.
You might also want to normalize only the y axis by changing the first step to
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.y;
Depending on your purpose, the image can seem stretched if you normalize both axes, so normalizing only by the Y axis is a possible strategy.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 to 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position using ceil() function
// The ceil() function returns the smallest integer that is greater than or equal to the uv value
vec3 col = vec3(ceil(uv.x),ceil(uv.y),0);
// Output to screen
fragColor = vec4(col,1.0);
The ceil() function lets us see that the center of the image is at (0, 0).
As for the second part of the shadertoy documentation:
The output color is returned in fragColor as a four-component vector,
the last component being ignored by the client. The result is
retrieved in an "out" variable in anticipation of the future addition
of several rendering targets.
Really, all this means is that fragColor contains four values that are shipped to the next stage in the rendering pipeline. You can find more about in and out variables here.
The values in fragColor determine the color of the pixel where the shader is being applied.
If you want to learn more about shaders these are some good starting places:
The Book of Shaders - uniforms
Learn OpenGL - Shaders
Not to take away from the accepted answer, which is very thorough. But in case anyone else was confused about the types, iResolution is a 'uniform highp 3-component vector of float'... so actually a vec3? That's why we see in examples that fragCoord (actually a vec2) is divided by iResolution.xy (the .xy gives us a vec2). But what is the '.xy' thing? Is it a method? An attribute or property? With some random googling I found out that the '.xy' tacked on at the end is called 'swizzling'
https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Vectors
(for convenience the gist of it is here below)
Swizzling
You can access the components of vectors using the following syntax:
vec4 someVec;
someVec.x + someVec.y;
This is called swizzling. You can use x, y, z, or w, referring to the
first, second, third, and fourth components, respectively.
The reason it has that name "swizzling" is because the following syntax is entirely valid:
vec2 someVec;
vec4 otherVec = someVec.xyxx;
vec3 thirdVec = otherVec.zyy;
You can use any combination of up to 4 of the letters to create a vector (of the same basic type) of that length. So otherVec.zyy is a vec3, which is how we can initialize a vec3 value with it. Any combination of up to 4 letters is acceptable, so long as the source vector actually has those components. Attempting to access the 'w' component of a vec3 for example is a compile-time error.
Swizzling also works on l-values (left values?):
vec4 someVec;
someVec.wzyx = vec4(1.0, 2.0, 3.0, 4.0); // Reverses the order.
someVec.zx = vec2(3.0, 5.0); // Sets the 3rd component of someVec to 3.0 and the 1st component to 5.0
However, when you use a swizzle as a way of setting component values, you cannot use the same swizzle component twice. So someVec.xx = vec2(4.0, 4.0); is not allowed.
Additionally, there are 3 sets of swizzle masks. You can use xyzw, rgba (for colors), or stpq (for texture coordinates). These three sets have no actual difference; they're just syntactic sugar. You cannot combine names from different sets in a single swizzle operation. So ".xrs" is not a valid swizzle mask.
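For instance, these three reads pull exactly the same components, just under different names:

vec4 color = vec4(0.2, 0.4, 0.6, 1.0);
vec3 a = color.xyz; // position-style names
vec3 b = color.rgb; // color-style names, identical result
vec3 c = color.stp; // texture-style names, identical result
// vec3 bad = color.xrs; // compile error: mixes name sets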
In OpenGL 4.2 or ARB_shading_language_420pack, scalars can be swizzled as well. They obviously only have one source component, but it is legal to do this:
float aFloat;
vec4 someVec = aFloat.xxxx;
Another common normalization maps coordinates from -1 to 1:
// -1 to 1
vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.xy;
vec3 col = vec3(uv.x, uv.y, 0.0);
fragColor = vec4(col, 1.0);
So I have a compute shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed that the textures are bound and that data can be written, using RenderDoc, which is a debugging tool for graphics. The issue I have is that inside the shader the variable gl_GlobalInvocationID, which is provided by OpenGL, does not seem to work properly.
Here is my call of the compute shader: (The texture height is 480)
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x = 640, local_size_y = 1, local_size_z = 1) in; // local work-group size

void main() {
    ivec2 txlPos; // keeps track of which texel of the texture the current invocation handles
    vec4 result;  // stores the color
    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2((gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy);
    result = imageLoad(texture_source0, txlPos); // get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); // save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1 px thick green line along the left border and a 1 px thick red line along the bottom border. My expectation was to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser fiddling with them.
An rgba8 texture stores 8-bit normalized values, which can only represent numbers between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow (red = 1, green = 1).
If you want to create a gradient in both directions, then you have to make sure that the values stored start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
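If you'd rather not hard-code the dimensions, GLSL 4.30+ can query them from the image itself via imageSize(); a sketch of that variant, dropped into the same shader:

ivec2 size = imageSize(texture_target0);                // (640, 480) in this case
result = vec4(vec2(txlPos) / vec2(size - 1), 0.0, 1.0); // 0..1 gradient across the image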
As titled, gl_FragColor.a = 0 is supposed to make things transparent. So what's the difference from discard?
For the following code
varying vec3 f_color;
uniform sampler2D mytexture;
varying vec2 texCoords;
float cutoff = 0.5;
void main(void) {
    gl_FragColor = texture2D(mytexture, texCoords);
    if (gl_FragColor.a < cutoff) discard;
}
discard works, but if I replace discard with gl_FragColor.a = 0;, it has no effect. Why?
First, alpha to zero will only hide the pixel if blending is enabled. Discard will prevent the fragment from being written entirely. For example, if you write a fragment of alpha zero it will still update the depth buffer and stencil buffer even if you haven't altered the pixel's color. With discard it will not affect either of these.
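For instance, for the alpha-zero route to have any visible effect at all, the application would need the standard blending setup, something like:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);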
For a full example, let's say I render a tree branch with one quad and use alpha testing as you're doing. By discarding the transparent fragments, I can later render something behind that branch, because those discarded fragments never wrote to the depth buffer. If I just set alpha to zero, the depth buffer would hold the full quad, and anything rendered afterwards would be occluded by the entire quad, not just by the leaves that survived the discard test.
I'm trying to write a simple shader that puts a "mark" (64×64) on a base texture (128×128). To indicate where the mark must go, I use a cyan-colored, mark-sized (64×64) region on the base texture.
Fragment shader
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0);//CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if (base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); // texelFetch magic must be over here
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it doesn't work as it should; I got something like this (transparency is only for demonstration, there is no cyan region, only a piece of the "T"):
I tried to figure it out, and it seems something like texelFetch would help, but I can't work out how to get the texture coordinate of a cyan texel on the base texture and convert it so that the first column/first row of the cyan region maps to the first column/first row of the mark texture, the second column/first row of the base to the second column/first row of the mark, etc.
I think there's a way to do this in a single pass - but it involves using another texture that is capable of holding the information presented below. So you're going to increase your texture memory usage.
In this approach, the second texture (it can be generated by post-processing the original texture either offline or somehow) contains the UV map for the decal
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coordinates to sample the decal texture; see the sketch below.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
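A minimal sketch of that shader, assuming the UV map is bound as a third sampler (us_uvmap_tex is an illustrative name) and sampled at the same coordinates as the base texture:

precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
uniform sampler2D us_uvmap_tex; // R/G hold the decal UVs described above
varying vec2 vv_base_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN

void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    // compare with a small tolerance; filtering and precision can break ==
    if (distance(base_col.rgb, c_mark_col.rgb) < 0.01)
    {
        vec2 mark_uv = texture2D(us_uvmap_tex, vv_base_tex).rg;
        vec4 mark_col = texture2D(us_mark_tex, mark_uv);
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}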
Right now I can obtain the color of the neighbouring pixel by doing
color = texture2D(backBuffer, vec2(gl_TexCoord[0].x + i, gl_TexCoord[0].y + j));
But how can I know what pixel that is or at least the current uv of that pixel on the texture?
Which pixel? You mean which fragment. The UV / ST coordinates run from 0 to 1 across the whole texture.
I want to calculate a pixels brightness based on its distance from a point.
gl_TexCoord[0].x gives you the s texture coordinate, while gl_TexCoord[0].y gives you the t texture coordinate.
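With that, a sketch of the distance-based brightness idea, assuming the point of interest arrives in a uniform (u_point, an illustrative name, in the same 0..1 texture space):

uniform sampler2D backBuffer;
uniform vec2 u_point; // set by the application, in 0..1 texture space

void main(void) {
    vec4 color = texture2D(backBuffer, gl_TexCoord[0].st);
    float d = distance(gl_TexCoord[0].st, u_point); // 0 at the point, grows outward
    gl_FragColor = color * clamp(1.0 - d, 0.0, 1.0); // brightness falls off linearly
}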
If you are writing a fragment shader, the pixel position shouldn't matter. I haven't tried it, but maybe you can get it using gl_in, which is defined as:
in gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];
but I am not sure if it is available in a fragment shader.