I am doing a texture lookup to color a surface in a GLSL fragment shader.
In one specific case, the texture coordinate must be fetched from an additional texture, which is loaded only in that case.
This works fine on Nvidia and Intel graphics cards, but not on AMD.
On AMD the fragment is discarded whenever the 2nd texture is not loaded, regardless of the result of the if statement.
What is the proper way to implement this in the fragment shader to avoid this issue?
Thank you for your help.
#version 120
uniform sampler3D colorMap;
uniform sampler1D zTexture;
uniform bool useZtexture;
void main(void)
{
    float zTexCoord = 0.0;
    if(useZtexture == true)
    {
        // uninitialized zTexture causes the fragment to be discarded
        // even when useZtexture is false
        zTexCoord = float(texture1D(zTexture, 0.0));
    }
    gl_FragColor = texture3D(colorMap, vec3(gl_TexCoord[0].s, gl_TexCoord[0].t, zTexCoord));
}
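One workaround often suggested for this kind of driver difference is to avoid making the fetch conditional at all: keep a harmless 1x1 dummy texture bound to zTexture whenever the real one is not loaded, sample it unconditionally, and select the result with mix(). This is only a sketch under those assumptions, not a guaranteed fix for the AMD behaviour:

#version 120
uniform sampler3D colorMap;
uniform sampler1D zTexture;   // assumed: a 1x1 dummy texture stays bound here when the real one is absent
uniform bool useZtexture;
void main(void)
{
    // Sample unconditionally so no code path depends on an unbound texture,
    // then pick either the sampled value or 0.0 based on the flag.
    float zFromTex = float(texture1D(zTexture, 0.0));
    float zTexCoord = mix(0.0, zFromTex, float(useZtexture));
    gl_FragColor = texture3D(colorMap, vec3(gl_TexCoord[0].s, gl_TexCoord[0].t, zTexCoord));
}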
Somehow, a very simple comparison of depth values between the G-buffer depth texture and the current fragment depth in a forward rendering pass causes a hard crash. I feel like there must be something I'm missing here. NOTE: the framebuffer where the frag shader is being used is multisampled. Here's the crashing frag shader:
#version 330
out vec4 out_Color;
in vec2 TexCoords;
uniform sampler2D gDepth; // depth attachment from gBuffer render pass
void main() {
    float z1 = texture(gDepth, TexCoords).r; // depth from gbuffer depth, a float32 texture
    float z2 = gl_FragCoord.z;
    if(z1 < z2) { // MANUAL Z TEST -- this single line causes a huge crash at run time
        out_Color = vec4(1, 0, 0, 1); // color rejected frags red as a diagnostic
        return;
    }
    else {
        out_Color = vec4(0, 0, 1, 1); // else, frags are colored blue
    }
}
This is all from code that works fine otherwise, and I've narrowed this problem down to this single line. My graphics card is a somewhat recent Nvidia card (GeForce GT 710), and I am using GLEW and WGL. If I can do z-testing here, I can bump up my frame rate by quite a bit.
EDIT: I forgot to mention that the FBO where the frag shader is being used is multisampled.
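Worth noting: reading a multisample texture through a plain sampler2D is undefined and is exactly the kind of thing that can take a driver down at draw time. If gDepth really is a multisampled attachment, it would need to be declared as sampler2DMS and read with texelFetch. A minimal sketch, assuming sample 0 is representative enough for this test:

#version 330
out vec4 out_Color;
uniform sampler2DMS gDepth; // multisampled depth attachment from the gBuffer pass
void main() {
    // texelFetch on a sampler2DMS takes integer texel coordinates plus a sample index
    float z1 = texelFetch(gDepth, ivec2(gl_FragCoord.xy), 0).r;
    float z2 = gl_FragCoord.z;
    out_Color = (z1 < z2) ? vec4(1, 0, 0, 1)   // rejected frags red as a diagnostic
                          : vec4(0, 0, 1, 1);  // accepted frags blue
}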
I successfully wrote a standard, basic transform-feedback particle system with point sprites. No flickering; the particles update from one buffer into the next, which is then rendered, and the output buffer becomes the input buffer on the next iteration. All GPU-side, standard transform feedback.
Wonderful! ONE BIG PROBLEM: it only works if I don't use gl_PointCoord. Using a flat color for my point sprites works fine, but I need gl_PointCoord to do anything meaningful. All my shaders, whether or not they use gl_PointCoord, compile and link just fine. However, at runtime, if the shader uses gl_PointCoord (whether or not gl_PointCoord is actually in the execution path), the program crashes.

I explicitly glEnable(GL_POINT_SPRITE); this has no effect. Omitting gl_PointCoord, setting glPointSize(100.0f), and outputting vec4(1.0, 1.0, 1.0, 1.0), the particle system renders just fine as large white blocky squares (as expected). But using gl_PointCoord in any way (as a standard texture lookup coordinate, a procedural color, or anything else) crashes my shaders at runtime, after they successfully compile and link. I simply don't understand why. They pass glShaderSource, glCompileShader, glAttachShader, and glLinkProgram. I'm compiling my shaders as #version 430 and 440, and I even tried 300 es. All compile and link, and I checked the status of the compile and link. All good.

I'm using a high-end Microsoft Surface Book, Visual Studio 2015, and an NVIDIA GeForce GPU. I also made sure all my drivers are up to date. Unfortunately, with point sprites, I don't have billboard vertices from the vertex shader to interpolate into the fragment shader as texture coordinates. gl_FragCoord doesn't work either (as I would expect for point sprites). Does anyone know how to solve this, or another technique for getting texture coordinates for point sprites?
glBeginTransformFeedback(GL_POINTS);//if my fragment shader uses gl_PointCoord, it hard crashes here.
When replying, please understand that I'm very experienced in writing shaders (vertex, pixel, tessellation control, tessellation evaluation, and geometry shaders, in GLSL and HLSL). But I don't claim to know everything; I could have forgotten something simple, I just have no idea what that could be. I'm figuring it could be a state I don't have enabled. As far as transform feedback goes, I also set up the varying attribs correctly via glTransformFeedbackVaryings.
C++ :
void Render(void* pData)
{
    auto pOwner = static_cast<CPointSpriteSystem*>(pData);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_POINT_SPRITE);
    glEnable(GL_POINT_SMOOTH);
    //glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
    m_Shader.Activate();
    auto num_particles = pOwner->m_NumPointSprites;
    FeedbackIndex = 0;
    while (true)
    {
        m_Shader.SetSubroutine(GL_VERTEX_SHADER, "RenderPass",
                               vssubroutines[FeedbackIndex],
                               vsprevSubLoc[FeedbackIndex],
                               vsupdateSub[FeedbackIndex]);
        m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                               pssubroutines[0],
                               psprevSubLoc[0],
                               psrenderSub[0]);
        if (!FeedbackIndex)
        {
            glEnable(GL_RASTERIZER_DISCARD);
            glBindVertexArray(m_vao[bufferIndex]);
            glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_Feedback[bufferIndex]);
            glBeginTransformFeedback(GL_POINTS); // if the feedback fragment shader uses gl_PointCoord, it will always hard-crash here
            glDrawArrays(GL_POINTS, 0, num_particles);
            glEndTransformFeedback();
            glFlush();
        }
        else
        {
            m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                                   pssubroutines[(int)pOwner->m_ParticleType],
                                   psprevSubLoc[(int)pOwner->m_ParticleType],
                                   psrenderSub[(int)pOwner->m_ParticleType]);
            glPointSize(100.0f);
            glDisable(GL_RASTERIZER_DISCARD);
            glDrawTransformFeedback(GL_POINTS, m_Feedback[bufferIndex]);
            bufferIndex = 1 - bufferIndex;
            break;
        }
        FeedbackIndex = 1 - FeedbackIndex;
    }
}
VS feedback:
#version 310 es
subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;
layout(location=0) in vec3 VertexPosition;
layout(location=1) in vec3 VertexVelocity;
layout(location=2) in float VertexStartTime;
layout(location=3) in vec3 VertexInitialVelocity;
out vec3 Position;
out vec3 Velocity;
out float StartTime;
out float Transp;
uniform float g_fCurSeconds;
uniform float g_fElapsedSeconds;
uniform float Time;
uniform float H;
uniform vec3 Accel;
#ifdef USE_VIEW_BLOCK
layout(std140) uniform view_block{
    mat4 g_mView,
         g_mInvView,
         g_mPrevView,
         g_mPrevInvView,
         g_mProj,
         g_mInvProj;
};
uniform mat4 g_mWorld;
#endif
subroutine(RenderPassType) void UpdateSphere(){
    Position = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity = VertexVelocity;
    StartTime = VertexStartTime;
}
subroutine(RenderPassType) void Render(){
    gl_Position = g_mProj * g_mInvView * vec4(VertexPosition, 1.0);
}
void main(){
    RenderPass();
}
PS feedback:
#version 310 es //version 430 and 440 same results
subroutine void RenderPixelType();
subroutine uniform RenderPixelType RenderPixelPass;
uniform sampler2D tex0;
layout(location=0) out vec4 g_FragColor;
subroutine(RenderPixelType) void Basic(){
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
subroutine(RenderPixelType) void ProceduralSphere(){
#if 1
    vec2 coord = gl_PointCoord; // at runtime: BOOM!
    coord = coord * 2.0 - 1.0;
    float len = length(coord);
    if(len > 1.0) discard;
    g_FragColor = vec4(1.0 - len, 1.0 - len, 1.0 - len, 1.0);
#else
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0); // always works
#endif
}
subroutine(RenderPixelType) void StandardImage(){
    g_FragColor = texture2D(tex0, gl_PointCoord); // boom!!
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
void main(){
    RenderPixelPass();
}
I solved the problem! The problem was actually that I didn't write a value to Transp (declared as out float Transp; in the VS). I casually thought I didn't have to do this. But I started to trim some fat, and as soon as I wrote out a generic float (not actually used by later shader stages: Transp = 0.0), and then compiled as #version 430, it all started to work as originally expected: little white spheres.
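Based on that description, the update subroutine presumably ends up looking something like the sketch below (reconstructed, not the poster's exact code); the key point is that every out captured by transform feedback gets written, including the previously forgotten Transp:

subroutine(RenderPassType) void UpdateSphere(){
    Position = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity = VertexVelocity;
    StartTime = VertexStartTime;
    Transp = 0.0; // previously left unwritten; writing any value here resolved the runtime crash
}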
I have written a fragment shader in which I would like to change the color of the fragment. For example, if the color it receives is black, it should be changed to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;
void main ()
{
    ivec2 screenpos = ivec2 (gl_FragCoord.xy);
    vec4 color = texelFetch (mytex, screenpos, 0);
    if (color == vec4 (0.0,0.0,0.0,1.0)) {
        color = (0.0,0.0,0.0,0.0);
    }
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;
void main ()
{
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment, and the texture has the same dimensions as the viewport you are drawing into, you can use an ordinary texture lookup with gl_FragCoord.xy divided by the texture dimensions. This is because the default behaviour in GLSL is to have gl_FragCoord supply the coordinate of the fragment's center (x+0.5, y+0.5) - this is also the center of the corresponding texel in your texture (if it is the same resolution), so you can do a traditional texture lookup without worrying that texture filtering will alter your sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates, it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
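As a rough illustration of the difference (a sketch, assuming a GLSL version where texelFetch and textureSize are available and that mytex matches the viewport resolution):

// Two ways to read the texel under the current fragment:
vec4 a = texelFetch (mytex, ivec2 (gl_FragCoord.xy), 0);                       // integer texel address, no filtering
vec4 b = texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));   // normalized coords, filtering applies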
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;
layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;
void main ()
{
    mytex_color = texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... This is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    frag_color = texture2D (texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) and set the data locations manually or use gl_FragData[n] instead of out variables.
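For the gl_FragData route, the same shader might look roughly like this in GLSL 1.20 (a sketch; mytexSize is a hypothetical uniform holding the dimensions of mytex, since textureSize() is not available in that version):

#version 120
uniform sampler2D mytex;
uniform sampler2D texture1;
uniform vec2 mytexSize; // hypothetical: dimensions of mytex, passed in by the application
void main ()
{
    vec4 mytex_color = texture2D (mytex, gl_FragCoord.xy / mytexSize);
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    gl_FragData[0] = texture2D (texture1, gl_TexCoord[0].st); // draw buffer 0
    gl_FragData[1] = mytex_color;                             // draw buffer 1
}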
Now the real problem here is that you seem to want to change the color of the texture you are sampling from. That will not work; at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you would fetch from one texture, write to another texture, and all subsequent render passes that reference the original texture should be swapped to the one you just wrote to.
See "Fragment Data Location" for more information on Multiple Render Target drawing.
I can't seem to find any information on the Web about fixing shadow casting by objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test" rather than a "per-vertex" one, so I could simply keep a fragment out of the shadow map when the corresponding texel is transparent? In theory, this could also make shadow mapping more accurate.
EDIT
Well, maybe the idea I gave above was a terrible one, but all I want is to tell the shaders that if a texel has alpha < 1, there is no need to shadow things behind that texel. I guess the depth texture only requires vertex information; that's why every shadow-mapping tutorial has a minimal vertex shader and an empty fragment shader, and nothing happens when I try to do something in the fragment shader.
Anyway, what is the main idea behind fixing shadow casting by partly transparent objects?
EDIT2
I've modified my shaders, and now every fragment is discarded if at least one texel has transparency o_O. So those objects now don't cast any shadows at all (but opaque ones do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
    TC = in_TextureCoord;
    gl_Position = orthoView * in_Position;
}
//Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
    vec4 texel = texture2D(texture, TC);
    if (texel.a < 0.4)
        discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your fragment shader, simply add a test to see whether the texture is transparent, and if so discard that fragment.
I have a strange issue with my GLSL shader. It renders nothing (i.e. a black screen) and makes my glDrawElements call raise GL_INVALID_OPERATION. The shader in use is shown below. When I comment out the line with v = texture3D(texVol,pos).r; and replace it with v = 0.4;, it outputs what is expected (an orange-like color) and no GL error is generated.
uniform sampler2D texBack;
uniform sampler3D texVol;
uniform vec3 texSize;
uniform vec2 winSize;
uniform float iso;
varying vec3 inCoords;
vec4 raytrace(in vec3 entryPoint, in vec3 exitPoint){
    vec3 dir = exitPoint - entryPoint;
    vec3 pos = entryPoint;
    vec4 color = vec4(0.0, 0.0, 0.0, 0.0);
    int steps = int(2.0 * length(texSize));
    dir = dir * (1.0 / steps);
    vec3 n;
    float v, m = 0.0, avg = 0.0, avg2 = 0.0;
    for(int i = 0; i < steps || i < 2500; i++){
        v = texture3D(texVol, pos).r;
        m = max(v, m);
        avg += v;
        pos += dir;
    }
    return vec4(avg / steps, m, 0, 1);
}
void main()
{
    vec2 texCoord = gl_FragCoord.xy / winSize;
    vec3 exitPoint = texture2D(texBack, texCoord).xyz;
    gl_FragColor = raytrace(inCoords, exitPoint);
}
I am using a VBO for rendering a color cube as the entry and exit points for my rays. They are stored in FBOs, and they look OK when I render them directly to the screen.
I have tried changing to glBegin/glEnd and drawing the cube with quads, and I get the same errors.
I can't find what I am doing wrong, and now I need your help. Why is my texture3D call generating GL_INVALID_OPERATION?
Note:
I have enabled both 2D and 3D textures.
Edit:
I've just uploaded the whole project to GitHub; for more code, browse to https://github.com/r-englund/rGraphicsLibrary
This is tested on both an Intel HD 3000 and an Nvidia GT 550M.
According to the OpenGL specification, glDrawElements() generates GL_INVALID_OPERATION in the following cases:
If a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
If a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped.
This means the problem has nothing to do with your fragment shader. If you don't use geometry shaders, you should fix the buffer objects accordingly.
It looks like you are not providing additional relevant information in your question.