GLSL Texel Size

Why do people use this equation to specify the size of a single texel?
vec2 tex_offset = 1.0 / textureSize(image, 0);
In my opinion, it should follow from the proportion
whole texels -> whole texture size
single texel -> single texel size
so
singleTexelSize = (singleTexel * wholeTextureSize) / wholeTexels = wholeTextureSize / wholeTexels
For context, this is the complete fragment shader from the Bloom lesson of Joey de Vries's LearnOpenGL tutorial:
#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D image;
uniform bool horizontal;
uniform float weight[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main()
{
    vec2 tex_offset = 1.0 / textureSize(image, 0); // gets size of single texel
    vec3 result = texture(image, TexCoords).rgb * weight[0]; // current fragment's contribution
    if(horizontal)
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
        }
    }
    else
    {
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(0.0, tex_offset.y * i)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(0.0, tex_offset.y * i)).rgb * weight[i];
        }
    }
    FragColor = vec4(result, 1.0);
}

I can't follow your proposed way of calculating texel size, but maybe this makes it clearer:
Texture coordinates (also referred to as UVs) are normalized, so they're fractions between 0 and 1 no matter the size of the texture, meaning that a coordinate of 1 can be 256 pixels or 2048 pixels. To calculate the step from one discrete value to the next in a normalized space, you divide 1 by the number of individual values.
To put it in a different context: assume I have 10 apples, so that's 100%. How many percent make up one apple? 100% / 10 apples = 10% per apple.
So if you now want only whole apples from me, you know that you have to ask for a multiple of 10%. Convert this to a fraction by dividing by 100 and you're back to UVs.
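To make that concrete with a small sketch (the 512x256 texture size here is purely hypothetical; image and TexCoords are the names from the bloom shader above): textureSize(image, 0) returns the dimensions in texels, and since the whole texture spans exactly 1.0 in UV space, one texel spans 1.0 divided by the texel count.
// Hypothetical 512x256 texture, for illustration only.
vec2 texelCount = vec2(textureSize(image, 0)); // (512.0, 256.0)
vec2 tex_offset = 1.0 / texelCount;            // (1.0/512.0, 1.0/256.0)
// Moving exactly one texel to the right in UV space:
vec3 neighbor = texture(image, TexCoords + vec2(tex_offset.x, 0.0)).rgb;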
For further reading, check out this answer; it also has a nice diagram:
Simulate OpenGL texture wrap + interpolation on canvas?

Related

OpenGL: how to pass vertex pos from Geometry shader to fragment shader?

I am currently learning shaders in OpenGL and have finished writing my "drawText" geometry shader, so I can draw dynamic text (the content changes every frame) without recreating the VBO every frame.
It works nicely, but it's limited to 28 chars because of the GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS limit, which is 1024.
At the moment I emit 6 components per vertex: a vec4 pos and a vec2 texCoord.
That gives me 1024 / 6 = 170 vertices to use for my triangle strip.
I need 6 vertices per char (except for the first and last chars) to display a quad per char: 4 for the quad itself plus 2 to move to the next char with a degenerate triangle.
That gives me 170 / 6 = 28 chars.
So when I have a long text, I split it into chunks of 28 chars.
Now I am trying to optimize this and get my geometry shader to draw more than 28 chars.
Because I am in 2D, I was trying to find a way to store the texCoord in pos.zw for the fragment shader and remove the out vec2 texCoord from my geometry shader. That would make me emit only 4 components per vertex, which would bring me to 42 chars.
But reading the fragment shader docs and the fragment shader inputs, I don't see how to do this.
So, is there a way to achieve that?
My code for reference
Vertex Shader
#version 330 core
layout (location = 0) in vec2 aPos;

uniform vec2 textPosition;

void main()
{
    gl_Position = vec4(aPos, 0, 1) + vec4(textPosition, 0, 0);
}
Fragment Shader
#version 330 core
out vec4 fragColor;

in vec2 texCoord;

uniform vec4 textColor;
uniform sampler2D outTexture;

void main()
{
    fragColor = texture(outTexture, texCoord) * textColor;
}
Geometry Shader
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 170) out;
// max components for all output vertices is 1024
// vec4 pos and vec2 texCoord per vertex is 6 components per vertex, 1024 / 6 = 170

out vec2 texCoord;

uniform float screenRatio = 1;
uniform float fontRatio = 1;
uniform float fontInterval = 0;      // distance between letters
uniform float fontSize = 0.025f;     // default value, screen coord range is -1f, 1f
uniform int textString[8];           // limited to 28 chars: 170 vertices / 6 = 28, 28 / 4 = 7 ints

void main()
{
    vec4 position = gl_in[0].gl_Position;
    float fsx = fontSize * fontRatio * screenRatio;
    float fsy = fontSize;
    float tsy = 1.0f / 16.0f; // fixed in a 16x16 chars bitmap
    float tsx = tsy;
    float tw = tsx * fontRatio;
    float to = (tsx - tw) * 0.5f;
    vec4 ptl = position + vec4(0, 0, 0, 0);     // top left
    vec4 ptr = position + vec4(fsx, 0, 0, 0);   // top right
    vec4 pbl = position + vec4(0, fsy, 0, 0);   // bottom left
    vec4 pbr = position + vec4(fsx, fsy, 0, 0); // bottom right
    vec2 tt; // tex coord top
    vec2 tb; // tex coord bottom
    fsx += fontInterval;
    int i = 0;  // index in int array
    int si = 0; // sub index in int
    int ti = textString[0];
    int ch = 0;
    do
    {
        // unpack a char, 4 chars per int
        ch = (ti >> si) & 0xFF;
        // string ends with \0 or at the end of the array
        if (ch == 0 || i >= 8)
            break;
        // compute row and col of char in the 16x16 chars bitmap
        int r = ch >> 4;
        int c = ch - (r << 4);
        // compute tex coords from row and column
        tb = vec2(c * tsx + to, 1.0f - r * tsy);
        tt = vec2(tb.x, tb.y - tsy);
        texCoord = tt;
        gl_Position = ptl;
        EmitVertex();
        EmitVertex();
        texCoord = tb;
        gl_Position = pbl;
        EmitVertex();
        tt.x += tw;
        tb.x += tw;
        texCoord = tt;
        gl_Position = ptr;
        EmitVertex();
        texCoord = tb;
        gl_Position = pbr;
        EmitVertex();
        EmitVertex();
        // advance by 1 char
        ptl.x += fsx;
        ptr.x += fsx;
        pbl.x += fsx;
        pbr.x += fsx;
        si += 8;
        if (si >= 32)
        {
            si = 0;
            ++i;
            ti = textString[i];
        }
    }
    while (true);
    EndPrimitive();
}
The position of a vertex to be sent to the rasterizer, as defined through gl_Position, contains 4 components. Always. And the meaning of those components is defined by the rasterizer and the OpenGL rendering system.
You cannot bypass or otherwise get around it. The output position has 4 components, and you cannot hide texture coordinates or other arbitrary data within them.
If you need to output more stuff from the GS, then you need to use your GS's vertex output more efficiently. As it currently stands, you output degenerate strips between quads to split them, which means that for every 6 vertices, only 4 are meaningful.
Instead of doing that, you should use EndPrimitive to split your quads, as sketched below. That removes 1/3rd of your vertex output, giving you more components to put to actual good use.
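As a minimal sketch of that suggestion (reusing the ptl/pbl/ptr/pbr and tt/tb names from the shader above), each char becomes its own 4-vertex strip:
// One independent 4-vertex strip per char; no degenerate jump vertices.
texCoord = tt;
gl_Position = ptl;
EmitVertex();
texCoord = tb;
gl_Position = pbl;
EmitVertex();
texCoord = vec2(tt.x + tw, tt.y);
gl_Position = ptr;
EmitVertex();
texCoord = vec2(tb.x + tw, tb.y);
gl_Position = pbr;
EmitVertex();
EndPrimitive(); // cut the strip; the next char starts a fresh one
At 6 components per vertex this is 4 instead of 6 vertices per char, so the same 1024 / 6 = 170 vertices now yield 170 / 4 = 42 chars.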

Optimizing pixel-swapping shader

I'm using a shader that swaps colors/palettes on a texture. The shader checks a pixel for transparency and then sets the pixel if not transparent. Is there an efficient way to ignore 0 alpha pixels other than a potential branch? In this case, where I set pixel = newPixel:
uniform bool alternate;
uniform sampler2D texture;

void main()
{
    // 'bitmap' and 'openfl_TextureCoordv' are provided implicitly by OpenFL
    vec4 pixel = texture2D(bitmap, openfl_TextureCoordv);
    if (alternate)
    {
        vec4 newPixel = texture2D(texture, vec2(pixel.r, pixel.b));
        if (newPixel.a != 0.0)
            pixel = newPixel;
    }
    gl_FragColor = pixel;
}
You can use mix and step:
void main()
{
    vec4 pixel = texture2D(bitmap, openfl_TextureCoordv);
    vec4 newPixel = texture2D(texture, vec2(pixel.r, pixel.b));
    gl_FragColor = mix(pixel, newPixel,
                       float(alternate) * (1.0 - step(newPixel.a, 0.0)));
}
You may want to make a smooth transition depending on the alpha channel. In this case you only need mix:
gl_FragColor = mix(pixel, newPixel, float(alternate) * newPixel.a);
I would look into the step function. You can express the if statement as a weighted sum of the two branches, where the weight is a step condition. For example:
if (a >= 0.0) {
    b = c;
}
is equivalent to
b = step(0.0, a) * c + (1.0 - step(0.0, a)) * b;
since step(0.0, a) is 1.0 when a >= 0.0 and 0.0 otherwise. (Note the argument order: step(edge, x) returns 0.0 for x < edge.)
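The same selection reads more directly with mix, which is equivalent under the same condition:
b = mix(b, c, step(0.0, a)); // picks c when a >= 0.0, keeps b otherwise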

How can I render a textured quad so that I fade different corners?

I'm drawing textured quads to the screen in a 2D environment. The quads are used as a tile map. In order to "blend" some of the tiles together, I had this idea:
A single "grass" tile drawn on top of dirt would render as a faded circle of grass, fading from roughly the quarter point.
If there were a larger area of grass tiles, then the edges would gradually fade from the quarter point that is on the edge of the grass.
So if the entire left edge of the quad were to be faded, it would have 0 opacity at the left edge and full opacity at one quarter of the width of the quad. A right-edge fade would have full opacity at three quarters of the width and fade down to 0 opacity at the right-most edge.
I figured that setting the 4 corners as "on" or "off" would be enough for the fragment shader to work it out. However, I can't work it out.
If corner0 were 0, the result should look something like this for the quad (image omitted).
If both corner0 and corner1 were 0, then it would look like this (image omitted).
This is what I have so far:
#version 330
layout(location=0) in vec3 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
layout(location=2) in vec2 inElementPosition;
layout(location=3) in vec2 inElementSize;
layout(location=4) in uint inCorner0;
layout(location=5) in uint inCorner1;
layout(location=6) in uint inCorner2;
layout(location=7) in uint inCorner3;

smooth out vec2 texelCoords;
flat out vec2 elementPosition;
flat out vec2 elementSize;
flat out uint corner0;
flat out uint corner1;
flat out uint corner2;
flat out uint corner3;

void main()
{
    gl_Position = vec4(inVertexPosition.x,
                       -inVertexPosition.y,
                       inVertexPosition.z, 1.0);
    texelCoords = vec2(inTexelCoords.x, 1.0 - inTexelCoords.y);

    elementPosition.x = (inElementPosition.x + 1.0) / 2.0;
    elementPosition.y = -((inElementPosition.y + 1.0) / 2.0);

    elementSize.x = inElementSize.x / 2.0;
    elementSize.y = -(inElementSize.y / 2.0);

    corner0 = inCorner0;
    corner1 = inCorner1;
    corner2 = inCorner2;
    corner3 = inCorner3;
}
The element position is provided in the range [-1,1], and the corner variables are all either 0 or 1. These are provided per instance, whereas the vertex position and texel coords are provided per vertex. The vertex y-coord is inverted because I work in reverse and just flip it here for ease. ElementSize is on the scale of [0,2], so I'm just converting it to the [0,1] range.
The UV coords could be any values, not necessarily [0,1].
Here's the fragment shader:
#version 330
precision highp float;

layout(location=0) out vec4 frag_colour;

smooth in vec2 texelCoords;
flat in vec2 elementPosition;
flat in vec2 elementSize;
flat in uint corner0;
flat in uint corner1;
flat in uint corner2;
flat in uint corner3;

uniform sampler2D uTexture;

const vec2 uScreenDimensions = vec2(600, 600);

void main()
{
    vec2 uv = texelCoords;
    vec4 c = texture(uTexture, uv);
    frag_colour = c;

    vec2 fragPos = gl_FragCoord.xy / uScreenDimensions;
    // What can I do using the fragPos, elementPosition?
}
Basically, I'm not sure what I can do with fragPos and elementPosition to fade pixels toward a corner if that corner is 0 instead of 1. I kind of understand that it should be based on the distance of the fragment from the corner position... but I can't work it out. I added elementSize because I think it's needed to determine how far from the corner a given fragment is...
To achieve a fading effect, you have to use Blending. You have to set the alpha channel of the fragment color dependent on a scale factor:
frag_colour = vec4(c.rgb, c.a * scale);
scale has to be computed dependent on the texture coordinates (uv). If a coordinate is in the range [0.0, 0.25] or [0.75, 1.0], then the texture has to be faded dependent on the corresponding cornerX variable. In the following, the variable uv is assumed to be a 2-dimensional vector in the range [0, 1].
Compute linear gradients for the left, right, bottom and top sides, dependent on uv:
float gradL = min(1.0, uv.x * 4.0);
float gradR = min(1.0, (1.0 - uv.x) * 4.0);
float gradT = min(1.0, uv.y * 4.0);
float gradB = min(1.0, (1.0 - uv.y) * 4.0);
Or compute Hermite gradients by using smoothstep:
float gradL = smoothstep(0.0, 0.25, uv.x);
float gradR = 1.0 - smoothstep(0.75, 1.0, uv.x);
float gradT = smoothstep(0.0, 0.25, uv.y);
float gradB = 1.0 - smoothstep(0.75, 1.0, uv.y);
Compute the fade factors for the 4 corners and the 4 sides, dependent on gradL, gradR, gradT, gradB and the corresponding cornerX variables. Finally, compute the maximum fade factor:
float fade0 = float(corner0) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradT)));
float fade1 = float(corner1) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradB)));
float fade2 = float(corner2) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradB)));
float fade3 = float(corner3) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradT)));
float fadeL = float(corner0) * float(corner1) * (1.0 - gradL);
float fadeB = float(corner1) * float(corner2) * (1.0 - gradB);
float fadeR = float(corner2) * float(corner3) * (1.0 - gradR);
float fadeT = float(corner3) * float(corner0) * (1.0 - gradT);
float fade = max(
    max(max(fade0, fade1), max(fade2, fade3)),
    max(max(fadeL, fadeR), max(fadeB, fadeT)));
At the end compute the scale and set the fragment color:
float scale = 1.0 - fade;
frag_colour = vec4(c.rgb, c.a * scale);
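If it helps to see the steps in one place, here is how the pieces might be assembled into the question's main(), using the Hermite gradient variant (a sketch of the steps above, not a tested drop-in):
void main()
{
    vec2 uv = texelCoords;
    vec4 c = texture(uTexture, uv);

    // Hermite gradients for the left, right, top and bottom sides
    float gradL = smoothstep(0.0, 0.25, uv.x);
    float gradR = 1.0 - smoothstep(0.75, 1.0, uv.x);
    float gradT = smoothstep(0.0, 0.25, uv.y);
    float gradB = 1.0 - smoothstep(0.75, 1.0, uv.y);

    // Fade factors for the 4 corners and the 4 sides
    float fade0 = float(corner0) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradT)));
    float fade1 = float(corner1) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradB)));
    float fade2 = float(corner2) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradB)));
    float fade3 = float(corner3) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradT)));
    float fadeL = float(corner0) * float(corner1) * (1.0 - gradL);
    float fadeB = float(corner1) * float(corner2) * (1.0 - gradB);
    float fadeR = float(corner2) * float(corner3) * (1.0 - gradR);
    float fadeT = float(corner3) * float(corner0) * (1.0 - gradT);

    // The strongest fade wins
    float fade = max(
        max(max(fade0, fade1), max(fade2, fade3)),
        max(max(fadeL, fadeR), max(fadeB, fadeT)));

    frag_colour = vec4(c.rgb, c.a * (1.0 - fade));
}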

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result (image omitted):
As you can see, the result is not smooth.
In a first pass, I render the position, normal and flux, as seen from the light, into 3 textures with a resolution of 512 x 512.
In a second pass, I compute the indirect illumination from the first pass's textures, following this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);

    vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
    vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
    vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;

    vec3 r = worldPos - indirectLightPos;
    float distP2 = dot(r, r);

    vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
    emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);

    indirectRSM += emission;
}
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
The other problems were the number of VPLs used and the distance between them.
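As an aside, a hedged alternative in case changing the Vulkan sampler isn't convenient: the RSM textures can also be read without any filtering directly in GLSL via texelFetch (sketched here with the question's rsmPosition sampler and rsmProjCoords):
// Read one texel with no filtering, which is effectively nearest sampling.
ivec2 rsmSize  = textureSize(rsmPosition, 0);             // e.g. (512, 512)
ivec2 texelPos = ivec2(rsmProjCoords.xy * vec2(rsmSize));
vec3 indirectLightPos = texelFetch(rsmPosition, texelPos, 0).rgb;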

Shader only rendering 1/4th of the screen

I'm currently trying to create a Gaussian blur shader, and while I've successfully created the blur effect, my shader only renders the lower right quarter of my screen, as the image shows (image omitted). And just to make it clear: it has NOT re-scaled the image, it just doesn't render the rest of it.
Here's my shader code with comments of my thoughts:
Vertex (horizontal, vertical almost identical. See below)
#version 400

in vec2 a_texCoord0;            //My tex coords
out vec2 blurTextureCoords[11]; //Sample 11px (i.e. 5 right, 5 left of center)

uniform float u_width;          //Width of the screen. Used to calculate pixel size

void main(void){
    gl_Position = vec4(a_texCoord0, 0.0, 1.0);
    vec2 centerTexCoords = a_texCoord0 * 0.5 + 0.5; //Maybe this is somehow wrong?
    //This is kind of interesting: the shader sometimes tells me "no uniform value was
    //found with name 'u_width'", but it still seems to work, and if I set the value
    //manually (e.g. to 1920) it still looks normal.
    float pixelSize = 1.0 / u_width;
    for(int i = -5; i <= 5; i++) {
        //I also thought it might be because I multiply a float with an integer,
        //but float(i) instead of i still looks the same.
        blurTextureCoords[i + 5] = centerTexCoords + vec2(pixelSize * i, 0.0);
    }
}
Fragment
#version 400
#ifdef GL_ES
precision mediump float;
#endif

in vec2 blurTextureCoords[11];
out vec4 out_colour;

uniform sampler2D u_texture;

void main(void){
    //The actual blurring
    out_colour = vec4(0.0);
    out_colour += texture(u_texture, blurTextureCoords[0])  * 0.0093;
    out_colour += texture(u_texture, blurTextureCoords[1])  * 0.028002;
    out_colour += texture(u_texture, blurTextureCoords[2])  * 0.065984;
    out_colour += texture(u_texture, blurTextureCoords[3])  * 0.121703;
    out_colour += texture(u_texture, blurTextureCoords[4])  * 0.175713;
    out_colour += texture(u_texture, blurTextureCoords[5])  * 0.198596;
    out_colour += texture(u_texture, blurTextureCoords[6])  * 0.175713;
    out_colour += texture(u_texture, blurTextureCoords[7])  * 0.121703;
    out_colour += texture(u_texture, blurTextureCoords[8])  * 0.065984;
    out_colour += texture(u_texture, blurTextureCoords[9])  * 0.028002;
    out_colour += texture(u_texture, blurTextureCoords[10]) * 0.0093;
}
Additional code
Creating a shader:
//I also have one for the vertical shader; it's almost exactly the same.
horizontalShader = new ShaderProgram(
        Gdx.files.internal("graphics/shaders/post-processing/blur/horizontalBlur.vert"),
        Gdx.files.internal("graphics/shaders/post-processing/blur/blur.frag"));
horizontalShader.pedantic = false;
horizontalShader.begin();
horizontalShader.setUniformf("u_width", Gdx.graphics.getWidth());
horizontalShader.end();
if (horizontalShader.getLog().length() != 0) {
    System.out.println("Horizontal shader! \n" + horizontalShader.getLog());
}
Rendering to FBO then to screen:
// Horizontal blur
horizontalFBO.begin();
spriteBatch.begin();
spriteBatch.setShader(horizontalShader);
background_image.draw(spriteBatch);
spriteBatch.end();
horizontalFBO.end();

// Vertical blur
verticalFBO.begin();
spriteBatch.begin();
spriteBatch.setShader(verticalShader);
spriteBatch.draw(horizontalFBO.getColorBufferTexture(), 0, 0);
spriteBatch.end();
verticalFBO.end();

// Normal FBO (screen)
spriteBatch.begin();
spriteBatch.setShader(null);
spriteBatch.draw(verticalFBO.getColorBufferTexture(), 0, 0);
spriteBatch.end();
Additional info
I use two FBOs, but it seems they are not the root of the problem, as the problem persists even if I render directly to the screen using these shaders.
I have two vertex shaders, one for the horizontal and one for the vertical blur. The only differences are that the uniform name u_width becomes u_height and that blurTextureCoords[i+5] = centerTexCoords + vec2(pixelSize * i, 0.0); becomes blurTextureCoords[i+5] = centerTexCoords + vec2(0.0, pixelSize * i); (reconstructed in full below).
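For completeness, here is what the vertical vertex shader would look like given exactly those two stated differences (a reconstruction from the description above, not the asker's verbatim file):
#version 400

in vec2 a_texCoord0;
out vec2 blurTextureCoords[11];

uniform float u_height; //Height of the screen instead of the width

void main(void){
    gl_Position = vec4(a_texCoord0, 0.0, 1.0);
    vec2 centerTexCoords = a_texCoord0 * 0.5 + 0.5;
    float pixelSize = 1.0 / u_height;
    for(int i = -5; i <= 5; i++) {
        //Offset along y instead of x
        blurTextureCoords[i + 5] = centerTexCoords + vec2(0.0, pixelSize * i);
    }
}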