Why is this simple mask shader failing?

I'm trying to create a "transition" effect between two 2D scenes. I have 3 textures: before, after, and mask. before and after are self-explanatory. mask is a simple monochrome texture that defines how the first two get composited. It changes over time, to perform the transition. All 3 textures are the same size.
I've verified that all 3 textures contain the correct data, but when I try to perform the compositing, I end up with either before in its entirety, or after in its entirety, seemingly at random.
Here's what I'm doing:
Application code:
glEnable(GL_MULTISAMPLE);
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
after.handle.bind;
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
mask.handle.bind;
glActiveTextureARB(GL_TEXTURE0_ARB);
before.handle.bind;
GShaders.UseShaderProgram(maskProgramHandle); //GShaders: Global shader engine
GShaders.SetUniformValue(maskProgramHandle, 'before', 0);
GShaders.SetUniformValue(maskProgramHandle, 'after', 1);
GShaders.SetUniformValue(maskProgramHandle, 'mask', 2);
before.DrawFull; //draws the texture to the screen as a quad.
glDisable(GL_MULTISAMPLE);
Vertex shader:
varying vec4 v_color;
varying vec2 texture_coordinate;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
texture_coordinate = vec2(gl_MultiTexCoord0);
v_color = gl_Color;
gl_FrontColor = gl_Color;
}
Fragment shader:
uniform sampler2DRect before;
uniform sampler2DRect after;
uniform sampler2DRect mask;
varying vec2 texture_coordinate;
void main()
{
vec3 maskValue = texture2DRect(mask, texture_coordinate).rgb;
float alpha = (maskValue.r + maskValue.g + maskValue.b) / 3.0;
vec4 beforeValue = texture2DRect(before, texture_coordinate);
vec4 afterValue = texture2DRect(after, texture_coordinate);
gl_FragColor = mix(beforeValue, afterValue, alpha);
}
Any idea what's going wrong?

This is only a guess, but have you tried this:
gl_FragColor = mix(beforeValue, afterValue, alpha / 255.0);
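If that doesn't change anything, one thing worth double-checking (also only a guess) is the texture coordinates themselves: sampler2DRect expects un-normalized pixel coordinates, so if gl_MultiTexCoord0 carries normalized 0..1 values, every fragment samples essentially the same texel of the mask, which would also give an all-before or all-after result. A quick debugging sketch, not a fix, is to bypass the mix and draw the mask sample directly; if the quad comes out as one flat colour instead of showing the mask image, the coordinates or the sampler binding are the problem rather than the blend math:
uniform sampler2DRect mask;
varying vec2 texture_coordinate;
void main()
{
// debug only: visualize what the mask sampler actually returns at each fragment
gl_FragColor = vec4(texture2DRect(mask, texture_coordinate).rgb, 1.0);
}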

Related

How can I repeat a texture within a WebGL shader?

I am using non-power-of-two textures within shader programs. I am trying to implement scrolling text with repeats. Scrolling works fine, but as soon as I try to get the texture to repeat via logic in my vertex shader, I suddenly get a quad with a single set of stretched pixels across the entire range. I assume this is happening due to the filtering algorithm.
As background, I want to generate the texture coordinates within the vertex program, since I then do further distortion on them in the fragment programs, and it is easier to manage if the inputs to the fragment programs already account for the scroll. Note that I access textureCoordinateVarying in the corresponding fragment shaders.
This works, albeit with no repeated texture once the text scrolls through:
attribute vec4 position;
attribute vec2 texcoord;
uniform mat3 matrixUniform;
uniform float horizontalTextureOffsetUniform;
varying vec2 textureCoordinateVarying;
void main() {
gl_Position = vec4((matrixUniform * vec3(position.x, position.y, 1)).xy, 0, 1);
textureCoordinateVarying = vec2(
//I get a nice scrolling animation by changing the offset here, but the texture doesn't repeat, since it is NPO2 and therefore doesn't have repeating enabled
texcoord.x + horizontalTextureOffsetUniform,
texcoord.y
);
}
On the other hand, this gives me a stretched out image, as you can see:
attribute vec4 position;
attribute vec2 texcoord;
uniform mat3 matrixUniform;
uniform float horizontalTextureOffsetUniform;
varying vec2 textureCoordinateVarying;
void main() {
gl_Position = vec4((matrixUniform * vec3(position.x, position.y, 1)).xy, 0, 1);
textureCoordinateVarying = vec2(
//Note how I am using fract here, which should make the texture repeat at the 1.0 texture boundary, but instead it renders a blurry stretched texture
fract(texcoord.x + horizontalTextureOffsetUniform),
texcoord.y
);
}
Any ideas on how I can solve this?
Thanks!
You need to do the repeat math in the fragment shader, not the vertex shader.
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
void main() {
gl_Position = vec4(0,0,0,1);
gl_PointSize = 100.0;
}`;
const fs = `
precision highp float;
uniform sampler2D tex;
uniform vec2 offset;
void main() {
gl_FragColor = texture2D(tex, fract(gl_PointCoord.xy + offset));
}`;
// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createTexture, gl.texImage2D, gl.texParameteri
const tex = twgl.createTexture(gl, {
src: 'https://i.imgur.com/v38pV.jpg'
});
function render(time) {
time *= 0.001;
gl.useProgram(programInfo.program);
// calls gl.activeTexture, gl.bindTexture, gl.uniform
twgl.setUniformsAndBindTextures(programInfo, {
tex,
offset: [time, time * 0.1],
});
gl.drawArrays(gl.POINTS, 0, 1);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
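Applied to the shaders from the question, the same idea looks roughly like this. It is only a sketch: the sampler name u_texture is a placeholder for whatever the matching fragment shader actually declares. The vertex shader passes the raw offset coordinate through unchanged, and the fract happens per fragment, so interpolation never sees the jump from 1.0 back to 0.0:
attribute vec4 position;
attribute vec2 texcoord;
uniform mat3 matrixUniform;
uniform float horizontalTextureOffsetUniform;
varying vec2 textureCoordinateVarying;
void main() {
gl_Position = vec4((matrixUniform * vec3(position.x, position.y, 1)).xy, 0, 1);
// pass the unwrapped, offset coordinate straight through; no fract here
textureCoordinateVarying = vec2(texcoord.x + horizontalTextureOffsetUniform, texcoord.y);
}
And the corresponding fragment shader:
precision mediump float;
uniform sampler2D u_texture; // placeholder name
varying vec2 textureCoordinateVarying;
void main() {
// wrap per fragment so the NPOT texture repeats horizontally
gl_FragColor = texture2D(u_texture, vec2(fract(textureCoordinateVarying.x), textureCoordinateVarying.y));
}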

Deferred MSAA Artifacting

This is the process I go through to render the scene:
Bind MSAA x4 GBuffer (4 color attachments: Position, Normal, Color, and Unlit Color (skybox only)). I also have a Depth component/texture.
Draw SkyBox
Draw Geo
Blit all Color and Depth Components to a Single Sample FBO
Apply lighting (I use the depth texture to decide whether a pixel should be lit, by checking whether its depth value is less than 1).
Render Quad
And this is what is happening:
As you can see, I get these white and black artefacts around the edges instead of a smooth edge. (It's worth noting that if I remove the lighting and just render the texture without lighting, I don't get this and the edges smooth correctly.)
Here is my shader (it has SSAO implemented, but that seems not to affect this).
#version 410 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D ssaoTex;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;
uniform sampler2D gAlbedoUnlit;
uniform sampler2D gDepth;
uniform mat4 View;
struct Light {
vec3 Pos;
vec3 Color;
float Linear;
float Quadratic;
float Radius;
};
const int MAX_LIGHTS = 32;
uniform Light lights[MAX_LIGHTS];
uniform vec3 viewPos;
uniform bool SSAO;
void main()
{
vec3 color = texture(gAlbedo, Texcoord).rgb;
vec3 colorUnlit = texture(gAlbedoUnlit, Texcoord).rgb;
vec3 pos = texture(gPosition, Texcoord).rgb;
vec3 norm = normalize(texture( gNormal, Texcoord)).rgb;
vec3 depth = texture(gDepth, Texcoord).rgb;
float ssaoValue = texture(ssaoTex, Texcoord).r;
// then calculate lighting as usual
vec3 lighting;
if(SSAO)
{
lighting = vec3(0.3 * color.rgb * ssaoValue); // hard-coded ambient component
}
else
{
lighting = vec3(0.3 * color.rgb); // hard-coded ambient component
}
vec3 posWorld = pos.rgb;
vec3 viewDir = normalize(viewPos - posWorld);
for(int i = 0; i < MAX_LIGHTS; ++i)
{
vec4 lightPos = View * vec4(lights[i].Pos,1.0);
vec3 normLight = normalize(lightPos.xyz);
float distance = length(lightPos.xyz - posWorld);
if(distance < lights[i].Radius)
{
// diffuse
vec3 lightDir = normalize(lightPos.xyz - posWorld);
vec3 diffuse = max(dot(norm.rgb, lightDir), 0.0) * color.rgb *
lights[i].Color;
float attenuation = 1.0 / (1.0 + lights[i].Linear * distance + lights[i].Quadratic * distance * distance);
lighting += (diffuse*attenuation);
}
}
if(depth.r >= 1)
{
outColor = vec4(colorUnlit, 1.0);
}
else
{
outColor = vec4(lighting, 1.0);
}
}
So the last if statement checks whether anything was written into the depth texture at that pixel: if so, lighting is applied; if not, just the skybox colour is drawn (this is so lighting is not applied to the skybox).
I have spent a few days trying to work this out: changing the way I check whether a pixel should be lit (comparing normals, position and depth), changing the formats to higher precision (e.g. using RGB16F instead of RGB8, etc.), but I can't figure out what is causing it, and doing lighting per sample (using texelFetch) would be way too intensive.
Any Ideas?
This question is a bit old now, but I thought I would say how I solved my issue.
I run a basic Sobel filter in my shader, which I already use for screen-space outlines, but in addition I also check whether MSAA is enabled, and if so I compute lighting per texel around the edge pixels! A rough sketch of the idea follows.
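For illustration only, here is a minimal GLSL sketch of that approach, not the actual shader: the sampler2DMS names and the ComputeLighting stand-in are invented, and instead of a full Sobel pass it simply treats a pixel as an edge when its per-sample normals disagree. Edge pixels are lit once per sample and averaged; everything else is lit from a single sample as before.
#version 410 core
// Hypothetical multisampled G-buffer attachments (names are placeholders)
uniform sampler2DMS gPositionMS;
uniform sampler2DMS gNormalMS;
const int SAMPLES = 4;
out vec4 outColor;
// Stand-in for the real lighting loop from the question: one hard-coded directional light
vec3 ComputeLighting(vec3 pos, vec3 norm)
{
return vec3(0.3) + max(dot(norm, normalize(vec3(0.5, 1.0, 0.3))), 0.0) * vec3(0.7);
}
// Crude edge test instead of the Sobel pass: per-sample normals that disagree mark a geometry edge
bool IsEdge(ivec2 px)
{
vec3 n0 = texelFetch(gNormalMS, px, 0).rgb;
for (int s = 1; s < SAMPLES; ++s)
{
if (dot(n0, texelFetch(gNormalMS, px, s).rgb) < 0.95)
return true;
}
return false;
}
void main()
{
ivec2 px = ivec2(gl_FragCoord.xy);
if (IsEdge(px))
{
// edge pixel: light every MSAA sample and average the results
vec3 acc = vec3(0.0);
for (int s = 0; s < SAMPLES; ++s)
acc += ComputeLighting(texelFetch(gPositionMS, px, s).rgb, normalize(texelFetch(gNormalMS, px, s).rgb));
outColor = vec4(acc / float(SAMPLES), 1.0);
}
else
{
// non-edge pixel: a single sample is enough, as in the original shader
outColor = vec4(ComputeLighting(texelFetch(gPositionMS, px, 0).rgb, normalize(texelFetch(gNormalMS, px, 0).rgb)), 1.0);
}
}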

Passing on two rectangles with different meaning from geometry to fragment shader

I've adapted the Cinder library's signed distance font handling to Delphi, and am now implementing a twist: uploading all the data for multiple texts in a single call, and having some control over relative size when zooming (using a uniform instead of the 1.0001 factor in the geometry shader; not yet working in this code).
The basic signed distance handling is not altered; I only tried to calculate the needed rectangles using the geometry shader. I understand how to create the destination rectangle (where the character must appear) using triangle_strip, but am having problems passing the texcoord to the fragment shader.
Destination rectangle: the input topleft.xy + widthheight (dimensions) is used to calculate the destination rectangle of each character on the screen, via gl_Position.
Texture source rectangle: texcoordtl + texdimens, the top-left point plus the dimensions of the character in the font texture. This is the main point where I'm unsure; it is passed to the fragment shader via the texcoord in/out parameter.
I'd be grateful for any pointers or avenues to research, and I especially wonder about the way I calculate the texcoord coordinates and pass them on to the fragment shader.
An array of the record below is bound with GL_ARRAY_BUFFER and described using a series of glGetAttribLocation/glEnableVertexAttribArray/glVertexAttribPointer calls.
Drawing is done using
glDrawArrays(GL_POINTS, 0, numberofelements_in_array );
The record:
TGLCharacter = packed record // 5*2 singles + 1*4-byte color + 1*4-byte detail = 48 bytes per character drawn
origin : TGLVectorf2; // origin of text ( = glfloat[2])
topleft : TGLVectorf2; // origin of this character
widthheight: TGLVectorf2; // width and height of this character
texcoordtl : TGLVectorf2; // coordinates topleft in texture.
texdimens : TGLVectorf2; // sizes in texture
col : TGLVectorub4; // 4 colors, 1 per rect vertex
detail : integer; // layer. Not used in this example.
end;
Geometry shader code first, because I expect the problems are here:
#version 150 compatibility
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in vec2 gorigin[];
in vec2 gtopleft[];
in vec2 gwidthheight[];
in vec2 gtexcoordtl[];
in vec2 gtexdimens[];
in vec4 gcolor[];
out vec3 fColor;
out vec2 texcoord;
void main() {
// calculate distance cur char - first char of this text
vec2 dxcoordinate = (gtopleft[0]-gorigin[0]);
// now multiply with the uniform here and calculate the new coordinate:
// for now a constant slightly above 1 is used, to make debugging easier and to keep
// nvidia's shader compiler from optimizing gorigin out (with a factor of exactly 1.0 it gets optimized away).
vec2 x1y1 = 1.0001*gorigin[0]+dxcoordinate;
vec2 x2y2 = x1y1+gwidthheight[0]*1.0001;
vec2 texx1y1 = gtexcoordtl[0];
vec2 texx2y2 = gtexcoordtl[0]+gtexdimens[0];
fColor = vec3(gcolor[0].rgb);
gl_Position = gl_ModelViewProjectionMatrix * vec4(x1y1,0,1.0);
texcoord = texx1y1.xy;
EmitVertex();
gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2.x,x1y1.y,0,1.0);
texcoord = vec2(texx2y2.x,texx1y1.y);
EmitVertex();
gl_Position= gl_ModelViewProjectionMatrix * vec4(x1y1.x,x2y2.y,0,1.0);
texcoord = vec2(texx1y1.x,texx2y2.y);
EmitVertex();
gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2,0,1.0);
texcoord = texx2y2.xy;
EmitVertex();
EndPrimitive();
}
frag code:
#version 150 compatibility
uniform sampler2D font_map;
uniform float smoothness;
const float gamma = 2.2;
in vec3 fColor;
in vec2 texcoord;
void main()
{
// retrieve signed distance
float sdf = texture2D( font_map, texcoord.xy ).r;
// perform adaptive anti-aliasing of the edges
float w = clamp( smoothness * (abs(dFdx(texcoord.x)) + abs(dFdy(texcoord.y))), 0.0, 0.5);
float a = smoothstep(0.5-w, 0.5+w, sdf);
// gamma correction for linear attenuation
a = pow(a, 1.0/gamma);
if (a<0.1)
discard;
// final color
gl_FragColor.rgb = fColor.rgb;
gl_FragColor.a = gl_Color.a * a;
}
vertex code is probably ok I guess.
#version 150 compatibility
in vec2 anorigin;
in vec2 topleft;
in vec2 widthheight;
in vec2 texcoordtl;
in vec2 texdimens;
in vec4 color;
out vec2 gorigin;
out vec2 gtopleft;
out vec2 gwidthheight;
out vec2 gtexcoordtl;
out vec2 gtexdimens;
out vec4 gcolor;
void main()
{
gorigin=anorigin;
gtopleft=topleft;
gwidthheight=widthheight;
gtexcoordtl=texcoordtl;
gtexdimens=texdimens;
gcolor=color;
gl_Position = gl_ModelViewProjectionMatrix * vec4(anorigin.xy,0,1.0);
}
The above code works. The problem was in the uploading code, so wrong vertex data was uploaded. I did some minor fixes to the above code while debugging and added it to the question, so that the question now shows working code.
Here is some possible code that changes the last two lines of the fragment shader to create outline fonts. I'm not really happy with it yet, though: when zooming out, the color of the font seems to change.
vec3 othercol; // to be added to declarations
... the rest of the shader below the discard statement becomes:
othercol=vec3(1.0,1.0,1.0);
if (sqrt(0.299 * fColor.r*fColor.r + 0.587 * fColor.g*fColor.g + 0.114 * fColor.b*fColor.b)>0.5)
{ othercol=vec3(0,0,0);}
// final color
if (sdf>0.25 && sdf<0.75)
{gl_FragColor.rgb = othercol.rgb;}
else
{gl_FragColor.rgb = fColor.rgb;}
gl_FragColor.a = gl_Color.a * a;
}

GLSL Grayscale Shader removes transparency

I'm very new to GLSL and started with a simple grayscale shader. I used the code from GamesFromScratch's tutorial:
vertexshader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
Fragmentshader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;
void main() {
vec3 color = texture2D(u_texture, v_texCoords).rgb;
float gray = (color.r + color.g + color.b) / 3.0;
vec3 grayscale = vec3(gray);
gl_FragColor = vec4(grayscale, 1.0);
}
The effect and the problem: everything is rendered in grayscale, but transparent parts of the textures become white. For example, a simple filled circle is usually drawn as a circle; now it's a circle inside a white box. Besides the lost transparency, changes to the alpha value are also not visible.
The problem is in your fragment shader. You create a vec3 color, think of it as (r,g,b), and then you set gl_FragColor to a vec4 (r,g,b,a). You use the first three components from grayscale and then set the "a" to a hard-coded alpha value of 1.0, removing any transparency.
You could instead get the full rgba from the sampler and use its alpha in the final vec4.
Also, if you are looking for a truer grayscale conversion, the general standard is
gray = 0.299 * r + 0.587 * g + 0.114 * b
https://en.wikipedia.org/wiki/Grayscale
I think these changes will help you.
vec4 color = texture2D(u_texture, v_texCoords);
float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(vec3(gray), color.a);
In my changes, I read the color with alpha from the texture and apply that alpha to the output.

OpenGL 3.3 deferred shading not working

I've set up an OpenGL environment with deferred shading following this tutorial, but I can't make the second shader output to my final buffer.
I can see that the first shader (the one that doesn't use lights) is working properly, because with gDEBugger I can see that its output buffers are correct, but the second shader can't display anything at all. I've also tried making the second shader output a single color for the whole scene just to see whether it was displaying anything, but nothing is visible (the screen should be completely red, but it isn't).
The first-pass shader (the one I use to fill the GBuffer) is working, so I'm not adding its code or how I created and implemented my GBuffer, but if you need them I'll add them, just tell me.
I think the problem is when I tell OpenGL to output to framebuffer 0 (the screen).
This is how I enable OpenGL to write to framebuffer 0:
glEnable(GL_BLEND);
m_MotoreGrafico->glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
// Enable writing to the final (default) framebuffer
m_MotoreGrafico->glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
m_gBuffer.BindForReading();
glClear(GL_COLOR_BUFFER_BIT);
// Set the shader matrices
SetUpOGLProjectionViewMatrix(1);
// Pass the GBuffer textures to the shader
pActiveShader->setUniform1i(_T("gPositionMap"), m_gBuffer.GetPositionTexture());
pActiveShader->setUniform1i(_T("gColorMap"), m_gBuffer.GetDiffuseTexture());
pActiveShader->setUniform1i(_T("gNormalMap"), m_gBuffer.GetNormalTexture());
// Pass other variables the shader needs
float dimensioneFinestra[2], posizioneCamera[3];
dimensioneFinestra[0] = m_nLarghezzaFinestra;
dimensioneFinestra[1] = m_nAltezzaFinestra;
m_MotoreGrafico->GetActiveCameraPosition(posizioneCamera);
pActiveShader->setUniform2f(_T("gScreenSize"), dimensioneFinestra);
pActiveShader->setUniform3f(_T("gCameraPos"), posizioneCamera);
pActiveShader->setUniform1i(_T("gUsaLuci"), 0);
// Draw the lights
float coloreLuce[3], posizioneLuce[3], direzioneLuce[3], vUpLuce[3], vRightLuce[3], intensita;
for(int i = 0; i < GetDocument()->m_RTL.GetNLights(); i++)
{
CRTLuce* pRTLuce = GetDocument()->m_RTL.GetRTLightAt(i);
...
m_MotoreGrafico->glBindVertexArray(pRTLuce->GetRTLuce()->GetVBO()->getVBAIndex());
glDrawArrays(GL_TRIANGLES, 0, pRTLuce->GetRTLuce()->GetNVertPerShader());
}
The function m_gBuffer.BindForReading() looks like this (but I think it doesn't matter for my problem):
for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures); i++)
{
m_pMotoreGrafico->glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, m_textures[GBUFFER_TEXTURE_TYPE_POSITION + i]);
}
So far my GBuffer is working (it creates the textures) and my first shader is also working (it's drawing the textures of my GBuffer).
The problem, then, is that I can't get OpenGL to draw to the screen again.
The first 4 textures are the ones created with the first-pass shader.
This is my back buffer (after the second-pass shader)
And this is my front buffer (after the second-pass shader)
This is my second-pass fragment shader code (it outputs only red)
out vec4 outputColor;
void main()
{
outputColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Does anyone have an idea of what I'm doing wrong?
Second-pass vertex shader code:
#version 330
uniform struct Matrici
{
mat4 projectionMatrix;
mat4 modelMatrix;
mat4 viewMatrix;
} matrices;
layout (location = 0) in vec3 inPosition;
void main()
{
vec4 vEyeSpacePosVertex = matrices.viewMatrix * matrices.modelMatrix * vec4(inPosition, 1.0);
gl_Position = matrices.projectionMatrix * vEyeSpacePosVertex;
}
Second-pass fragment shader code:
#version 330
uniform struct MDLight
{
vec3 vColor;
vec3 vPosition;
vec3 vDirection;
float fAmbientIntensity;
float fStrength;
int bOn;
float fConeCosine;
float fAltezza;
float fLarghezza;
vec3 vUp;
vec3 vRight;
} gLuce;
uniform float gSpecularIntensity;
uniform float gSpecularPower;
uniform sampler2D gPositionMap;
uniform sampler2D gColorMap;
uniform sampler2D gNormalMap;
uniform vec3 gCameraPos;
uniform vec2 gScreenSize;
uniform int gLightType;
uniform int gUsaLuci;
vec2 CalcTexCoord()
{
return gl_FragCoord.xy / gScreenSize;
}
out vec4 outputColor;
void main()
{
vec2 TexCoord = CalcTexCoord();
vec4 Color = texture(gColorMap, TexCoord);
outputColor = vec4(1.0, 0.0, 0.0, 1.0);
}