OpenGL fragment shader manual depth test crash

Somehow a very simple test of depth values between the gbuffer depth texture and the current frag depth of a forward rendering pass causes a hard crash. I feel like there must be something I'm missing here. NOTE: the framebuffer where the frag shader is being used is multisampled. Here's the crashing frag shader:
#version 330
out vec4 out_Color;
in vec2 TexCoords;
uniform sampler2D gDepth; // depth attachment from the gBuffer render pass
void main() {
    float z1 = texture(gDepth, TexCoords).r; // depth from the gBuffer, a float32 texture
    float z2 = gl_FragCoord.z;
    if (z1 < z2) { // MANUAL Z TEST -- this single line causes a huge crash at run time
        out_Color = vec4(1, 0, 0, 1); // color rejected frags red as a diagnostic
        return;
    }
    else {
        out_Color = vec4(0, 0, 1, 1); // else, frags are colored blue
    }
}
This is all from code that works fine otherwise, and I've narrowed the problem down to this single line. My graphics card is a somewhat recent Nvidia card (GeForce GT 710), and I am using GLEW and WGL. If I can do z-testing here I can bump up my frame rate by quite a bit.
Here's a photo of rendering a scene using z1 and z2:
EDIT: I forgot to mention that the FBO where the frag shader is being used is multisampled.
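One thing worth checking (an assumption on my part, since the question doesn't say how the G-buffer itself is created): if gDepth is actually a multisample texture (GL_TEXTURE_2D_MULTISAMPLE), it cannot be read through a plain sampler2D; the sampler type has to match the texture target, and a mismatch is undefined behaviour that can crash some drivers. A minimal sketch of the lookup through a sampler2DMS, reusing the names from the shader above (the sample index 0 is arbitrary):
#version 330
// Sketch only: assumes gDepth is a GL_TEXTURE_2D_MULTISAMPLE attachment.
out vec4 out_Color;
in vec2 TexCoords;
uniform sampler2DMS gDepth; // multisample depth attachment from the gBuffer pass
void main() {
    ivec2 texel = ivec2(TexCoords * vec2(textureSize(gDepth)));
    float z1 = texelFetch(gDepth, texel, 0).r; // sample index 0; loop over samples if needed
    float z2 = gl_FragCoord.z;
    if (z1 < z2) {
        out_Color = vec4(1, 0, 0, 1); // rejected frags red
    } else {
        out_Color = vec4(0, 0, 1, 1); // accepted frags blue
    }
}
If the G-buffer is single-sampled, the original sampler2D lookup is fine and the crash lies elsewhere.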

Related

Render from fbo texture to another within same fbo

I'm trying to set up deferred rendering, and have successfully managed to output data to the various gbuffer textures (position, normal, albedo, specular).
I am now attempting to sample from the albedo texture and write to a 5th colour attachment in the same fbo (for the purposes of further potential post-process sampling), by rendering a full screen quad with simple texture coordinates.
I have checked that the vertex/texcoord data is good via Nsight, and confirmed that the shader can "see" the texture to sample from, but all that I see in the target texture is the clear colour when I examine it in the Nsight debugger.
At the moment, the shader is basically just a simple pass through shader:
vertex shader:
#version 430
in vec3 MSVertex;
in vec2 MSTexCoord;
out xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} outdata;
void main()
{
    outdata.VSVertex = MSVertex;
    outdata.VSTexCoord = MSTexCoord;
    gl_Position = vec4(MSVertex, 1.0);
}
fragment shader:
#version 430
layout (location = 0) uniform sampler2D colourMap;
layout (location = 0) out vec4 colour;
in xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} indata;
void main()
{
    colour = texture(colourMap, indata.VSTexCoord).rgba;
}
As you can see, there is nothing fancy about the shader.
The gl code is as follows:
//bind frame buffer for writing to texture #5
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT4); //5 textures total
glClear(GL_COLOR_BUFFER_BIT);
//activate shader
glUseProgram(second_stage_program);
//bind texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fbo_buffers[2]); //third attachment: albedo
//bind and draw fs quad (two array buffers: vertices, and texture coordinates)
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLES,0,6);
I'm trying to work out what is preventing rendering to the texture. I'm using a core context with OpenGL v4.3.
I've tried outputting a single white colour for all fragments, using the texture coordinates to generate a colour (colour = vec4(indata.VSTexCoord, 1.0, 1.0);), and sampling the texture itself, as you see in the shader code, but nothing changes the resultant texture, which just shows the clear colour.
What am I doing wrong?

gl_PointCoord compiles and links, but crashes at runtime

I successfully wrote a standard, basic transform feedback particle system with point sprites. No flickering; the particles update from one buffer into the next, which is then rendered, and the output buffer becomes the input buffer on the next iteration. All GPU-side, standard transform feedback.
Wonderful! ONE BIG PROBLEM: it only works if I don't use gl_PointCoord. Using a flat color for my point sprites works fine. But I need gl_PointCoord to do anything meaningful. All my shaders, whether or not they use gl_PointCoord, compile and link just fine. However, at runtime, if the shader uses gl_PointCoord (whether or not gl_PointCoord is actually in the execution path), the program crashes.
I explicitly glEnable(GL_POINT_SPRITE), which doesn't have any effect. Omitting gl_PointCoord, setting glPointSize(100.0f), and using vec4(1.0, 1.0, 1.0, 1.0), the particle system renders just fine as large white blocky squares (as expected). But using gl_PointCoord in any way (as a standard texture lookup coordinate, as a procedural color, or anything else) will crash my shaders at runtime, after they successfully compile and link. I simply don't understand why. Everything passes glShaderSource, glCompileShader, glAttachShader, and glLinkProgram. I'm compiling my shaders as #version 430 and 440, and I even tried 300 es. All compile and link, and I checked the status of the compile and link. All good.
I'm using a high-end Microsoft Surface Book Pro, Visual Studio 2015, and an NVIDIA GeForce GPU. I also made sure all my drivers are up to date. Unfortunately, with point sprites, I don't have billboard vertices from the vertex shader to interpolate into the fragment shader as texture coordinates. gl_FragCoord doesn't work either (as I would expect for point sprites). Does anyone know how to solve this, or another technique for getting texture coordinates for point sprites?
glBeginTransformFeedback(GL_POINTS);//if my fragment shader uses gl_PointCoord, it hard crashes here.
When replying, please understand that I'm very experienced in writing shaders (vertex, pixel, tessellation control, tessellation evaluation, and geometry) in GLSL and HLSL. But I don't claim to know everything. I could have forgotten something simple; I just have no idea what that could be. I'm figuring it could be a state I don't have enabled. As far as transform feedback goes, I also set up the varying attribs correctly via glTransformFeedbackVaryings.
C++:
void Render(void* pData)
{
    auto pOwner = static_cast<CPointSpriteSystem*>(pData);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_POINT_SPRITE);
    glEnable(GL_POINT_SMOOTH);
    //glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
    m_Shader.Activate();
    auto num_particles = pOwner->m_NumPointSprites;
    FeedbackIndex = 0;
    while (true)
    {
        m_Shader.SetSubroutine(GL_VERTEX_SHADER, "RenderPass",
                               vssubroutines[FeedbackIndex],
                               vsprevSubLoc[FeedbackIndex],
                               vsupdateSub[FeedbackIndex]);
        m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                               pssubroutines[0],
                               psprevSubLoc[0],
                               psrenderSub[0]);
        if (!FeedbackIndex)
        {
            glEnable(GL_RASTERIZER_DISCARD);
            glBindVertexArray(m_vao[bufferIndex]);
            glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_Feedback[bufferIndex]);
            glBeginTransformFeedback(GL_POINTS); //if the feedback fragment shader uses gl_PointCoord, it will always hard-crash here
            glDrawArrays(GL_POINTS, 0, num_particles);
            glEndTransformFeedback();
            glFlush();
        }
        else
        {
            m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                                   pssubroutines[(int)pOwner->m_ParticleType],
                                   psprevSubLoc[(int)pOwner->m_ParticleType],
                                   psrenderSub[(int)pOwner->m_ParticleType]);
            glPointSize(100.0f);
            glDisable(GL_RASTERIZER_DISCARD);
            glDrawTransformFeedback(GL_POINTS, m_Feedback[bufferIndex]);
            bufferIndex = 1 - bufferIndex;
            break;
        }
        FeedbackIndex = 1 - FeedbackIndex;
    }
}
VS feedback:
#version 310 es
subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;
layout(location=0) in vec3 VertexPosition;
layout(location=1) in vec3 VertexVelocity;
layout(location=2) in float VertexStartTime;
layout(location=3) in vec3 VertexInitialVelocity;
out vec3 Position;
out vec3 Velocity;
out float StartTime;
out float Transp;
uniform float g_fCurSeconds;
uniform float g_fElapsedSeconds;
uniform float Time;
uniform float H;
uniform vec3 Accel;
#ifdef USE_VIEW_BLOCK
layout(std140) uniform view_block{
    mat4 g_mView,
         g_mInvView,
         g_mPrevView,
         g_mPrevInvView,
         g_mProj,
         g_mInvProj;
};
uniform mat4 g_mWorld;
#endif
subroutine(RenderPassType) void UpdateSphere(){
    Position = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity = VertexVelocity;
    StartTime = VertexStartTime;
}
subroutine(RenderPassType) void Render(){
    gl_Position = g_mProj * g_mInvView * vec4(VertexPosition, 1.0);
}
void main(){
    RenderPass();
}
PS feedback:
#version 310 es //version 430 and 440 same results
subroutine void RenderPixelType();
subroutine uniform RenderPixelType RenderPixelPass;
uniform sampler2D tex0;
layout(location=0) out vec4 g_FragColor;
subroutine(RenderPixelType) void Basic(){
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
subroutine(RenderPixelType) void ProceduralSphere(){
#if 1
    vec2 coord = gl_PointCoord; //at runtime: BOOM!
    coord = coord * 2.0 - 1.0;
    float len = length(coord);
    if(len > 1.0) discard;
    g_FragColor = vec4(1.0 - len, 1.0 - len, 1.0 - len, 1.0);
#else
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0); //always works
#endif
}
subroutine(RenderPixelType) void StandardImage(){
    g_FragColor = texture2D(tex0, gl_PointCoord); //boom!!
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
void main(){
    RenderPixelPass();
}
I solved the problem! The problem was actually that I didn't write a value to Transp (declared as out float Transp; in the vertex shader). I casually thought I didn't have to do this. But I started to trim some fat, and as soon as I wrote out a generic float (Transp = 0.0f, even though it isn't actually used by later shader stages) and compiled as #version 430, it all started to work as originally expected: little white spheres.
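For reference, a minimal sketch of the fixed update subroutine, assuming Transp is among the varyings registered with glTransformFeedbackVaryings (the 0.0 value is arbitrary, since nothing reads it yet):
subroutine(RenderPassType) void UpdateSphere(){
    Position = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity = VertexVelocity;
    StartTime = VertexStartTime;
    Transp = 0.0; // writing every captured varying is what stops the runtime crash
}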

Add radial gradient texture to each white part of another texture in shader

Recently, I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0. But I ran into a problem with the shader:
I have two textures. One of them is a fire gradient texture:
And the other one is a texture whose white parts must be colored by the first texture:
So, I'm going to have a result like the one below (do not pay attention to the fact that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    gl_FragColor = texture2D( Texture0, texCoord );
    // If the color in original texture is white
    // use the color in gradient texture.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D( Texture1, texCoord );
    }
}
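Note that exact equality against pure white can be fragile once texture filtering or compression nudges the texel values. A small variant of the same idea (my own sketch, not from the original answer; the 0.95 threshold is an arbitrary value to tune) treats anything close to white as white:
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    vec4 base = texture2D(Texture0, texCoord);
    // Treat near-white texels as white instead of requiring exact equality.
    if (all(greaterThanEqual(base.rgb, vec3(0.95)))) {
        gl_FragColor = texture2D(Texture1, texCoord);
    } else {
        gl_FragColor = base;
    }
}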

Fragment shader always uses 1.0 for alpha channel

I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
    Texcoord = texcoord;
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
    float t = texture(tex, Texcoord);
    outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn/error you for doing what you are trying to do right now. I suspect yours does as well, but you are not checking the shader info log when you compile your shader.
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically set up to return the following vec4:
       vec4(r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
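So, given that behaviour, if the goal is to use the sampled depth as the output alpha, reading the red channel instead of the alpha channel avoids the constant 1.0 (a sketch reusing the question's names):
float t = texture(tex, Texcoord).r; // the red channel carries the depth value
outColor = vec4(Color, t);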
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA - 4 components, not just three.

How to achieve depth value invariance between two passes?

I have two geometry passes. In the first pass, I write the fragment's depth value to a float texture with glBlendEquation(GL_MIN), similar to dual depth peeling. In the second pass I use it for early depth testing in the fragment shader.
However, for some fragments the depth test fails unless I slightly offset the min depth value (eps below):
Setting up the texture:
glBindTexture(GL_TEXTURE_2D, offscreenDepthMapTextureId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG32F, screenWidth, screenHeight);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, offscreenDepthMapFboId);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
offscreenDepthMapTextureId, 0);
Note that the texture is used as a color attachment, not the depth attachment. I have disabled GL_DEPTH_TEST in both passes, since I can't use it in the final pass.
Vertex shader, used in both passes:
#version 430
layout(location = 0) in vec4 position;
uniform mat4 mvp;
invariant gl_Position;
void main()
{
    gl_Position = mvp * position;
}
First pass fragment shader, with offscreenDepthMapFboId bound, so it writes the depth as color to the texture. Blending makes sure that only the min value ends up in the red component.
#version 430
out vec4 outputColor;
void main()
{
    outputColor.rg = vec2(gl_FragCoord.z, -gl_FragCoord.z);
}
Second pass, writing to the default framebuffer. The texture is used as depthTex.
#version 430
out vec4 outputColor;
uniform sampler2D depthTex;
void main()
{
    vec2 zwMinMax = texelFetch(depthTex, ivec2(gl_FragCoord.xy), 0).rg;
    float zwMin = zwMinMax.r;
    float zwMax = -zwMinMax.g;
    float eps = 0;          // doesn't work
    //float eps = 0.0000001; // works
    if (gl_FragCoord.z > zwMin + eps)
        discard;
    outputColor = vec4(1, 0, 0, 1);
}
The invariant qualifier in the vertex shader didn't help. Using eps = 0.0000001 seems like a crude workaround as I can't be sure that this particular value will always work. How do I get the two shaders to produce the exact same gl_FragCoord.z? Or is the depth texture the problem? Wrong format? Are there any conversions happening that I'm not aware of?
Have you tried glDepthFunc(GL_EQUAL) instead of your in-shader comparison approach?
Are you reading from and writing to the same depth texture in the same pass?
I'm not sure whether gl_FragCoord's format and precision match those of the depth buffer. It might also be a driver glitch, but glDepthFunc(GL_EQUAL) should work as expected.
Do not use gl_FragCoord.z as the depth value.
In the vertex shader:
out float depth;
void main()
{
    ...
    depth = ((gl_DepthRange.diff * (gl_Position.z / gl_Position.w)) +
             gl_DepthRange.near + gl_DepthRange.far) / 2.0;
}