GLSL - put decal texture into base texture with color indication - opengl

I'm trying to write a simple shader that puts a "mark" texture (64×64) onto a base texture (128×128). To indicate where the mark must go, I use a cyan-colored, mark-sized (64×64) region on the base texture.
[Images: the base texture with its cyan region, which should become the base texture with the mark rendered into that region]
Fragment shader
precision lowp float;

uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;

varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;

const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN

void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if (base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); // some texelFetch magic must go here
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it doesn't work as it should; I get something like this (the transparency is only for demonstration; there is no cyan region, only a piece of the "T"):
I tried to figure it out, and it seems only something like texelFetch will help me, but I can't work out how to get the texture coordinates of the base texture's cyan texels and convert them so that the first column/first row cyan texel of the base maps to the first column/first row texel of the mark, the second column/first row of the base to the second column/first row of the mark, etc.

I think there's a way to do this in a single pass, but it involves using another texture that holds the information described below, so you're going to increase your texture memory usage.
In this approach, the second texture (which can be generated by post-processing the original texture, either offline or at run time) contains the UV map for the decal:
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coordinates to sample the decal texture.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
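A minimal sketch of that pixel shader, reusing the question's uniforms; the us_uvmap_tex sampler is the hypothetical "decal-uvmap" texture, and the cyan test uses a small tolerance because exact float equality is fragile with filtering and low precision:
precision lowp float;

uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
uniform sampler2D us_uvmap_tex; // hypothetical decal-uvmap: R/G hold the decal UVs inside the cyan region

varying vec2 vv_base_tex;

const vec3 c_mark_col = vec3(0.0, 1.0, 1.0); // CYAN

void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    // Tolerance test instead of == : exact equality breaks under filtering/lowp
    if (distance(base_col.rgb, c_mark_col) < 0.01)
    {
        vec2 mark_uv = texture2D(us_uvmap_tex, vv_base_tex).rg; // stored decal UVs
        vec4 mark_col = texture2D(us_mark_tex, mark_uv);
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}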

Related

OpenGL vertices jitter when moving - 2D scene

I am working on a 2d project and I noticed the following issue:
As you can see in the gif above, when the object is making small movements, its vertices jitter.
To render, every frame I clear a VBO, calculate the new positions of the vertices, and then insert them into the VBO. Every frame, I create the exact same structure, but from a different origin.
Is there a way to get smooth motion even when the displacement between each frame is so minor?
I am using SDL2 so double buffering is enabled by default.
This is a minor issue, but it becomes very annoying once I apply a texture to the model.
Here is the vertex shader I am using:
#version 330 core
layout (location = 0) in vec2 in_position;
layout (location = 1) in vec2 in_uv;
layout (location = 2) in vec3 in_color;
uniform vec2 camera_position, camera_size;
void main() {
    gl_Position = vec4(2 * (in_position - camera_position) / camera_size, 0.0f, 1.0f);
}
What you see is caused by the rasterization algorithm. Consider the following two rasterizations of the same geometry (red lines) offset by only half a pixel:
As can be seen, shifting by just half a pixel can change the perceived spacing between the vertical lines from three pixels to two pixels. Moreover, the horizontal lines didn't shift, therefore their appearance didn't change.
This inconsistent behavior is what manifests as "wobble" in your animation.
One way to solve this is to enable anti-aliasing with glEnable(GL_LINE_SMOOTH). Make sure to have correct blending enabled. This will, however, result in blurred lines when they fall right between the pixels.
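For reference, a typical setup on the application side (a sketch; the blend function shown assumes non-premultiplied alpha):
glEnable(GL_LINE_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending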
If instead you really need the crisp jagged line look (e.g. pixel art), then you need to make sure that your geometry only ever moves by an integer number of pixels:
vec2 scale = 2.0 / camera_size;
vec2 offset = -scale * camera_position;
vec2 pixel_size = 2.0 / viewport_size;
offset = round(offset / pixel_size) * pixel_size; // snap to pixels
gl_Position = vec4(scale * in_position + offset, 0.0f, 1.0f);
Add viewport_size as a uniform.
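Putting it together, a minimal sketch of the full snapped vertex shader (the unused in_uv and in_color attributes are omitted; viewport_size is assumed to be the framebuffer size in pixels, set by the application):
#version 330 core
layout (location = 0) in vec2 in_position;
uniform vec2 camera_position, camera_size;
uniform vec2 viewport_size; // framebuffer size in pixels
void main() {
    vec2 scale = 2.0 / camera_size;
    vec2 offset = -scale * camera_position;
    vec2 pixel_size = 2.0 / viewport_size;
    // Snap the translation to whole pixels so rasterization stays consistent
    offset = round(offset / pixel_size) * pixel_size;
    gl_Position = vec4(scale * in_position + offset, 0.0, 1.0);
}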

Shadertoy - fragCoord vs iResolution vs fragColor

I'm fairly new to Shadertoy and GLSL in general. I have successfully duplicated numerous Shadertoy shaders into Blender without actually knowing how it all works. I have looked for tutorials but I'm more of a visual learner.
If someone could explain, or even better provide some images describing, the difference between fragCoord, iResolution, and fragColor, that would be great!
I'm mainly interested in the numbers. Because I use Blender, I'm used to the canvas being 0 to 1 -or- -1 to 1.
This one in particular has me a bit confused.
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;
I can't reproduce the remaining code in Blender without knowing the coordinate system.
Any help would be greatly appreciated!
That's normal; you cannot reproduce this code in Blender without knowing the coordinate system.
The Shadertoy documentation states:
Image shaders implement the mainImage() function to generate procedural images by calculating a color for each pixel in the image. This function is invoked once in each pixel and the host application must provide the appropriate input data and retrieve the output color to assign it to the corresponding pixel on the screen. The signature of this function is:
void mainImage( out vec4 fragColor, in vec2 fragCoord );
where fragCoord contains the coordinates of the pixel for which the shader must calculate a color. These coordinates are counted in pixels with values from 0.5 to resolution-0.5 over the entire rendering surface and the resolution of this surface is transmitted to the shader via the uniform iResolution variable.
Let me explain.
The iResolution variable is a uniform vec3 which contains the dimensions of the window; it is sent to the shader by the host application's OpenGL code.
The fragCoord variable is an input that contains the coordinates of the pixel where the shader is being applied.
More concretely, for a 640×360 window:
fragCoord : a vec2 ranging from 0 to 640 on the X axis and 0 to 360 on the Y axis
iResolution : declared as a vec3, used here with an X value of 640 and a Y value of 360
A quick note on how vectors work in OpenGL: if you also have a hard time understanding how vectors work in OpenGL, I highly recommend reading Homan's answer below, a very useful introduction to OpenGL swizzling.
This image was calculated with the following code:
// Normalized pixel coordinates (between 0 and 1)
vec2 uv = fragCoord/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (0,0) in the lower-left to (1,1) in the upper-right. This is the default lower-left window space used by OpenGL.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 and 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (-0.5,-0.5) in the lower-left to (0.5,0.5) in the upper-right, because in the first step we subtract half of the window size (iResolution.xy * 0.5) from each pixel coordinate (fragCoord). You can see the effect in the way the red and green values don't become visible until much later.
You might also want to normalize only the y axis by changing the first step to
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.y;
Depending on your purpose, the image can seem strange if you normalize both axes, so this is a possible strategy.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 to 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position using ceil() function
// The ceil() function returns the smallest integer that is greater than or equal to the uv value
vec3 col = vec3(ceil(uv.x),ceil(uv.y),0);
// Output to screen
fragColor = vec4(col,1.0);
The ceil() function lets us see that the center of the image is (0, 0).
As for the second part of the Shadertoy documentation:
The output color is returned in fragColor as a four-component vector, the last component being ignored by the client. The result is retrieved in an "out" variable in anticipation of the future addition of several rendering targets.
Really all this means is that fragColor contains four values that are shipped to the next stage in the rendering pipeline. You can find more about in and out variables here.
The values in fragColor determine the color of the pixel where the shader is being applied.
If you want to learn more about shaders, these are some good starting places:
The Book of Shaders - uniforms
LearnOpenGL - Shaders
Not to take away from the accepted answer, which is very thorough. But in case anyone else was confused about the types: iResolution is a 'uniform highp 3-component vector of float'... so actually a vec3? That's why we see in examples that fragCoord (actually a vec2) is divided by iResolution.xy (the .xy gives us a vec2). But what is the '.xy' thing? Is it a method? An attribute or property? With some random googling I found out that the '.xy' tacked on at the end is called 'swizzling':
https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Vectors
(for convenience, the gist of it is below)
Swizzling
You can access the components of vectors using the following syntax:
vec4 someVec;
someVec.x + someVec.y;
This is called swizzling. You can use x, y, z, or w, referring to the first, second, third, and fourth components, respectively.
The reason it has that name "swizzling" is because the following syntax is entirely valid:
vec2 someVec;
vec4 otherVec = someVec.xyxx;
vec3 thirdVec = otherVec.zyy;
You can use any combination of up to 4 of the letters to create a vector (of the same basic type) of that length. So otherVec.zyy is a vec3, which is how we can initialize a vec3 value with it. Any combination of up to 4 letters is acceptable, so long as the source vector actually has those components. Attempting to access the 'w' component of a vec3 for example is a compile-time error.
Swizzling also works on l-values (left values?):
vec4 someVec;
someVec.wzyx = vec4(1.0, 2.0, 3.0, 4.0); // Reverses the order.
someVec.zx = vec2(3.0, 5.0); // Sets the 3rd component of someVec to 3.0 and the 1st component to 5.0
However, when you use a swizzle as a way of setting component values, you cannot use the same swizzle component twice. So someVec.xx = vec2(4.0, 4.0); is not allowed.
Additionally, there are 3 sets of swizzle masks. You can use xyzw, rgba (for colors), or stpq (for texture coordinates). These three sets have no actual difference; they're just syntactic sugar. You cannot combine names from different sets in a single swizzle operation. So ".xrs" is not a valid swizzle mask.
In OpenGL 4.2 or ARB_shading_language_420pack, scalars can be swizzled as well. They obviously only have one source component, but it is legal to do this:
float aFloat;
vec4 someVec = aFloat.xxxx;
For example, using swizzling to remap fragCoord into the -1 to 1 range:
// -1 to 1
vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.xy;
vec3 col = vec3(uv.x, uv.y, 0.0);
fragColor = vec4(col, 1.0);
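Tying the pieces together, a minimal sketch of a complete mainImage() that uses this kind of mapping (dividing by iResolution.y only, to preserve the aspect ratio):
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Map pixel coordinates to roughly -1..1, normalizing by the y resolution
    vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.y;
    // Remap -1..1 back to 0..1 so the coordinates are visible as color
    fragColor = vec4(0.5 + 0.5 * uv, 0.0, 1.0);
}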

Use RGB texture as alpha values/Subpixel font rendering in OpenGL

Currently I'm using FreeType in subpixel mode and take the largest color channel of each pixel as the alpha value, with the following fragment shader:
uniform sampler2D Image;
uniform vec4 Color;
smooth in vec2 vVaryingTexCoord;
out vec4 vFragColor;
void main(void) {
    vec4 color = texture(Image, vVaryingTexCoord);
    vFragColor = color * Color;
}
This works fine for dark backgrounds, but on lighter ones the border pixels show (e.g. when a text pixel is (1,0,0)). To make it work with brighter backgrounds, I'd need to pass the background color and do the blending myself, which starts breaking down once I move to more complex backgrounds.
Is there a way to use the RGB values from FreeType as alpha values for a solid color (which is passed to the shader)? This formula basically, where b = background pixel, t = current text pixel, c = static color:
b*((1,1,1) - t) + t*c.rgb*c.a
I think drawing everything else first and passing that framebuffer to the font shader would work, but that seems a bit overkill. Is there a way doing this in the OpenGL blend stage? I tried playing around with glBlendFunc and such, but didn't get anywhere.
It's possible using Dual Source Blending, available since OpenGL 3.3. This spec draft even mentions subpixel rendering as a use case. All that is needed to make it work:
glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);
(don't forget to enable GL_BLEND, it happens to me all the time :D)
Specify dual output in the fragment shader: (You can bind by name instead if you want, see spec)
layout(location = 0, index = 0) out vec4 color;
layout(location = 0, index = 1) out vec4 colorMask;
In main:
color = StaticColor;
colorMask = StaticColor.a*texel;
Where StaticColor is the global text color uniform, and texel is the current pixel value of the glyph.
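A minimal sketch of the complete fragment shader, assembling the pieces above (the Image sampler and interpolated texture coordinate are reused from the question; StaticColor is the assumed name of the text-color uniform):
#version 330 core
uniform sampler2D Image;    // FreeType glyph texture, RGB = per-channel coverage
uniform vec4 StaticColor;   // global text color

smooth in vec2 vVaryingTexCoord;

layout(location = 0, index = 0) out vec4 color;
layout(location = 0, index = 1) out vec4 colorMask;

void main(void) {
    vec4 texel = texture(Image, vVaryingTexCoord);
    color = StaticColor;               // the solid text color
    colorMask = StaticColor.a * texel; // per-channel factors consumed by GL_SRC1_COLOR
}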

OpenGL: Compute Shader - gl_GlobalInvocationID giving static output

So I have a compute shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed that the textures are bound and that data can be written, using RenderDoc (a graphics debugging tool). The issue I have is that inside the shader the built-in variable gl_GlobalInvocationID does not seem to work properly.
Here is my call of the compute shader: (The texture height is 480)
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
    ivec2 txlPos; // Keeps track of which texel on the texture the current invocation works on
    vec4 result;  // Stores the color

    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );

    result = imageLoad(texture_source0, txlPos); // Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); // Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1-pixel-thick green line along the left border and a 1-pixel-thick red line along the bottom border. My expectation was to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work-groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser fiddling with them.
An rgba8 texture stores normalized 8-bit values that can only be between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the stored values start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
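In context, a sketch of the corrected main(); computing the total invocation count from gl_NumWorkGroups and gl_WorkGroupSize avoids hard-coding 640×480:
void main() {
    ivec2 txlPos = ivec2(gl_GlobalInvocationID.xy);
    // Total invocations per axis = work-group count * local size (640, 480 here),
    // so the division maps txlPos into the 0..1 range an rgba8 image can store
    vec2 total = vec2(gl_NumWorkGroups.xy * gl_WorkGroupSize.xy);
    vec4 result = vec4(vec2(txlPos) / total, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result);
}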

How to get pixel information inside a fragment shader?

In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;
void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well.
But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel (100,100) to red and everything else to black. How do I do something like:
"if currentSelf.Position() == (100,100): then color = red; else color = black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel (100,100), how do I get the values from (101,100) or (100,101)?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
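A sketch of such a range test; with the default conventions, the fragment covering pixel (100,100) has gl_FragCoord.xy of about (100.5, 100.5), so truncating to integers identifies the pixel:
void main(void) {
    // Truncate the window-space position to integer pixel coordinates
    ivec2 pixel = ivec2(gl_FragCoord.xy);
    if (pixel == ivec2(100, 100))
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red at (100,100)
    else
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black everywhere else
}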
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to pass it in manually), invert it, and either add or subtract these sizes from the S and T components of the texture coordinate.
Easy peasy.
Just compute the size of a pixel based on the resolution, then look up the texels at +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here