How to check if a sampler is null in GLSL? (OpenGL)

I have a shader with a _color uniform and a sampler. Now I want to draw with _color ONLY if the sampler was not set. Is there any way to figure that out within the shader? (Unfortunately the sampler returns (1,1,1,1) when not assigned, which makes mixing it via alpha impossible.)

You cannot do that. The sampler is an opaque handle which just references a texture unit. I'm not sure whether the spec guarantees (1,1,1,1) when sampling from a unit where no texture is bound, or whether that is undefined behavior.
What you can do is use another uniform to switch between the sampler and the uniform color, or use different shaders and switch between those. Shader subroutines are also a possibility here, but I don't know if that would be the right approach for such a simple problem.
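A minimal sketch of the uniform-switch approach (the uniform names and the _useTexture flag are hypothetical; the application sets the flag depending on whether it bound a texture):

```glsl
uniform vec4 _color;          // fallback color
uniform sampler2D _texture;   // optional texture
uniform bool _useTexture;     // set from the application: true if a texture was bound

varying vec2 v_uv;

void main()
{
    if (_useTexture)
        gl_FragColor = texture2D(_texture, v_uv);
    else
        gl_FragColor = _color;
}
```

Switching shaders instead of branching avoids the per-fragment condition, at the cost of an extra program object.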

I stumbled over this question trying to solve a similar problem.
Since GLSL 4.30,
int textureQueryLevels(gsamplerX sampler);
is a built-in function. The GLSL spec (p. 151) says:
The value zero will be returned if no texture or an incomplete texture is associated with sampler.
In the OpenGL forums I found an entry on this question suggesting to use
ivecY textureSize(gsamplerX sampler, int lod);
and test whether the texture size is greater than zero. But to my understanding this is not covered by the standard. Section 11.1.3.4 of the OpenGL specification says:
If the computed texture image level is outside the range [levelbase,q], the results are undefined ...
Edit:
I just tried this method on my problem, and as it turns out NVIDIA has some issues with this function, returning a non-zero value even when no texture is bound. (See an NVIDIA bug report from 2015.)
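For reference, a sketch of the textureQueryLevels check (GLSL 4.30+; the uniform and variable names are placeholders):

```glsl
#version 430

uniform sampler2D u_tex;
uniform vec4 u_color;

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    // textureQueryLevels returns 0 if no texture or an incomplete texture
    // is associated with the sampler -- but note the NVIDIA driver issue
    // mentioned above.
    if (textureQueryLevels(u_tex) > 0)
        fragColor = texture(u_tex, v_uv);
    else
        fragColor = u_color;
}
```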

A sampler2D with nothing bound returns values only in x, y and z (with w fixed at 1), so you can compare the sampled result against vec4(0, 0, 0, 1) to check whether a texture was supplied:
vec4 texturecolor = texture2D(sampler, uv) * vec4(color, 1.0);
if (texturecolor == vec4(0.0, 0.0, 0.0, 1.0))
{
    texturecolor = vec4(color, 1.0);
}

Related

Draw a geometric object and texture in different coordinates using same shader in Opengl (GLSL)

I wonder if there is a nice (or at least any) way to draw some geometric shape and a texture using the same shader program in OpenGL 2 (or maybe higher).
I saw this example of a fragment shader in a book (as an example of how the glTexEnvi function from OpenGL 1 can be replaced in OpenGL >= 2):
precision mediump float;
uniform sampler2D s_tex0;
varying vec2 v_texCoord;
varying vec4 v_primaryColor;
void main()
{
gl_FragColor = texture2D(s_tex0, v_texCoord) * v_primaryColor;
}
Though it is very hard for me to guess the vertex shader if I want to draw a texture and some geometry at different coordinates (possibly intersecting in some places).
Does anybody have an idea?
There has to be a way. It would just make some things (for example different blendings) so much easier to do.
P.S. I had an idea of using a "switcher" in the vertex shader to pass different coordinates depending on whether it is in the "1" or "0" state, but for some reason it didn't work out. Hope you know a better solution.
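For illustration, the "switcher" idea might look something like this sketch (the u_mode uniform, the two matrices, and the attribute names are all hypothetical):

```glsl
attribute vec4 a_position;      // geometry position
attribute vec2 a_texCoord;      // texture coordinates
uniform mat4 u_mvpGeometry;     // transform for the geometry pass
uniform mat4 u_mvpTexture;      // transform for the textured quad
uniform float u_mode;           // 0.0 = geometry, 1.0 = textured quad

varying vec2 v_texCoord;

void main()
{
    // Pick one of two transforms per draw call.
    if (u_mode > 0.5)
        gl_Position = u_mvpTexture * a_position;
    else
        gl_Position = u_mvpGeometry * a_position;
    v_texCoord = a_texCoord;
}
```

Since u_mode is constant for a whole draw call, the branch is uniform across all vertices and cheap on most hardware.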
I'll just leave this here.
Though I still don't know a possible vertex shader for the question above, I was lucky enough to solve my subgoal the harder way, using blending.
It turned out that blending with the constants GL_ONE_MINUS_DST_ALPHA and GL_DST_ALPHA didn't work as expected (when the destination is the rendered geometry), because the framebuffer's alpha channel was "turned off" by default (alpha from the image still worked), so you have to "turn it on" to make blending with these constants work properly.
In Android Studio (and Java in general) this can be done with the setEGLConfigChooser function.
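A sketch of that configuration on Android (this is environment setup, not standalone-runnable code; the renderer class name is hypothetical):

```java
// Request an EGL config with an 8-bit alpha channel so that
// destination-alpha blending works. Must be called before setRenderer().
GLSurfaceView view = new GLSurfaceView(context);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);   // RGBA 8888, 16-bit depth, no stencil
view.setRenderer(new MyRenderer());            // MyRenderer is a placeholder
```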

GLSL sampler2DShadow deprecated past version 120? What to use?

I've been trying to implement percentage-closer filtering for my shadow mapping, as described in NVIDIA GPU Gems.
When I try to sample my shadow map using a uniform sampler2DShadow and shadow2D or shadow2DProj, GLSL compilation fails with the error
shadow2D deprecated after version 120
How would I go about implementing an equivalent solution in GLSL 330+? I'm currently just using a binary texture sample along with Poisson Sampling but the staircase aliasing is pretty bad.
Your title is way off base. sampler2DShadow is not deprecated. The only thing that changed in GLSL 1.30 was that the mess of functions like texture1D, texture2D, textureCube, shadow2D, etc. were all replaced with overloads of texture (...).
Note that this overload of texture (...) is equivalent to shadow2D (...):
float texture(sampler2DShadow sampler,
vec3 P,
[float bias]);
The texture coordinates used for the lookup with this overload are P.st, and the reference value used for the depth comparison is P.r. This overload only works properly when texture comparison is enabled (GL_TEXTURE_COMPARE_MODE == GL_COMPARE_REF_TO_TEXTURE) for the texture/sampler object bound to the shadow sampler's texture image unit; otherwise the results are undefined.
Beginning with GLSL 1.30, the only time you need to use a different texture lookup function is when you are doing something fundamentally different (e.g. texture projection => textureProj, requesting an exact LOD => textureLod, fetching a texel by its integer coordinates/sample index => texelFetch, etc.). Texture lookup with comparison (shadow sampler) is not considered fundamentally different enough to require its own specialized texture lookup function.
This is all described quite thoroughly on OpenGL's wiki site.
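Putting that together, a GLSL 330 shadow lookup might look like this sketch (uniform and varying names are placeholders; the depth texture must have GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_REF_TO_TEXTURE):

```glsl
#version 330 core

uniform sampler2DShadow u_shadowMap;

in vec4 v_shadowCoord;   // projective shadow-map coordinate
out vec4 fragColor;

void main()
{
    // Equivalent to the old shadow2DProj(...).r: returns 1.0 where the
    // fragment passes the depth comparison, 0.0 where it fails (or a
    // filtered value in between with GL_LINEAR filtering on the depth texture).
    float lit = textureProj(u_shadowMap, v_shadowCoord);
    fragColor = vec4(vec3(lit), 1.0);
}
```

Hardware PCF comes almost for free here: with GL_LINEAR filtering on the shadow sampler, the comparison is performed on the four neighboring texels and the results are bilinearly blended.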

Bind pre rendered depth texture to fbo or to fragment shader?

In a deferred shading framework, I am using different framebuffer objects to perform various render passes. In the first pass I write the DEPTH_STENCIL_ATTACHMENT for the whole scene to a texture, let's call it DepthStencilTexture.
To access the depth information stored in DepthStencilTexture from different render passes, for which I use different framebuffer objects, I know two ways:
1) I bind the DepthStencilTexture to the shader and access it in the fragment shader, where I do the depth test manually, like this:
uniform vec2 WinSize; //windows dimensions
vec2 uv=gl_FragCoord.st/WinSize;
float depth=texture(DepthStencilTexture ,uv).r;
if(gl_FragCoord.z>depth) discard;
I also set glDisable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE)
2) I bind the DepthStencilTexture to the framebuffer object as DEPTH_STENCIL_ATTACHMENT and set glEnable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE) (edit: in this case I won't bind the DepthStencilTexture to the shader, to avoid loop feedback, see the answer by Nicol Bolas, and I if I need the depth in the fragment shader I will use gl_FragCorrd.z)
In certain situations, such as drawing light volumes, for which I need the Stencil Test and writing to the stencil buffer, I am going for the solution 2).
In other situations, in which I completely ignore the stencil and just need the depth stored in the DepthStencilTexture, does option 1) give any advantage over the more "natural" option 2)?
For example, I have a (silly, I think) doubt about it. Sometimes in my fragment shaders I compute the WorldPosition from the depth. In case 1) it would be like this:
uniform mat4 invPV; //inverse PV matrix
vec2 uv = gl_FragCoord.st / WinSize;
// convert from [0,1] window space to [-1,1] NDC before unprojecting
vec4 WorldPosition = invPV * vec4(uv * 2.0 - 1.0, texture(DepthStencilTexture, uv).r * 2.0 - 1.0, 1.0);
WorldPosition = WorldPosition / WorldPosition.w;
In case 2) it would be like this (edit: this is wrong; gl_FragCoord.z is the current fragment's depth, not the depth stored in the texture):
uniform mat4 invPV; //inverse PV matrix
vec2 uv = gl_FragCoord.st / WinSize;
// the same [0,1] -> [-1,1] NDC conversion applies here
vec4 WorldPosition = invPV * vec4(uv * 2.0 - 1.0, gl_FragCoord.z * 2.0 - 1.0, 1.0);
WorldPosition = WorldPosition / WorldPosition.w;
I am assuming that gl_FragCoord.z in case 2) will be the same as texture(DepthStencilTexture, uv).r in case 1), or, in other words, the depth stored in the DepthStencilTexture. Is that true? Is gl_FragCoord.z read from the currently bound DEPTH_STENCIL_ATTACHMENT even with glDisable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE)?
Going strictly by the OpenGL specification, option 2 is not allowed. Not if you're also reading from that texture.
Yes, I realize you're using write masks to prevent depth writes. It doesn't matter; the OpenGL specification is quite clear. In accord with section 9.3.1 of OpenGL 4.4, a feedback loop is established when:
- an image from texture object T is attached to the currently bound draw framebuffer object at attachment point A,
- the texture object T is currently bound to a texture unit U, and
- the current programmable vertex and/or fragment processing state makes it possible (see below) to sample from the texture object T bound to texture unit U.
That is the case in your code. So you technically have undefined behavior.
One reason this is undefined is so that implementations don't have to clear framebuffer and/or texture caches merely because write masks change.
That being said, you can get away with option 2 if you employ NV_texture_barrier. Which, despite the name, is quite widely available on AMD hardware. The main thing to do here is to issue a barrier after you do all of your depth writing, so that all subsequent reads are guaranteed to work. The barrier will do all of the cache clearing and such you need.
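A sketch of how option 2 might be sequenced with the barrier (this assumes an active GL context with NV_texture_barrier and is not runnable standalone; the FBO names and draw helpers are hypothetical):

```c
/* Pass 1: fill the depth texture (depth writes enabled). */
glBindFramebuffer(GL_FRAMEBUFFER, depthPassFBO);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawScene();                 /* hypothetical helper */

/* Make all prior depth writes visible to subsequent texture reads. */
glTextureBarrierNV();

/* Pass 2: the same texture is both attached as DEPTH_STENCIL_ATTACHMENT
   and sampled in the shader; depth writes are off. */
glDepthMask(GL_FALSE);
drawLightingPass();          /* hypothetical helper */
```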
Otherwise, option 1 is the only choice: doing the depth test manually.
I am assuming that gl_FragCoord.z in case 2) will be the same as texture(DepthStencilTexture, uv).r in case 1), or, in other words, the depth stored in the DepthStencilTexture. Is that true?
Neither is true. gl_FragCoord is the coordinate of the fragment being processed. This is the fragment generated by the rasterizer, based on the data for the primitive being rasterized. It has nothing to do with the contents of the framebuffer.

Find out if GL_TEXTURE_2D is active in shader

I would like to know if GL_TEXTURE_2D is active in the shader.
I am binding a color to the shader as well as the active texture (if GL_TEXTURE_2D is set) and need to combine these two.
So if texture is bound, mix the color and the texture (sampler2D * color) and if no texture is bound, use color.
Or should I go another way about this?
It is not quite clear what you mean by 'GL_TEXTURE_2D is active' or 'GL_TEXTURE_2D is set'.
Please note the following:
glEnable(GL_TEXTURE_2D) has no effect on your (fragment) shader. It parametrizes the fixed function part of your pipeline that you just replaced by using a fragment shader.
There is no 'direct'/'clean' way of telling from inside the GLSL shader whether there is a valid texture bound to the texture unit associated with your texture sampler (to my knowledge).
Starting with GLSL 1.3 you might have luck using textureSize(sampler, 0).x > 0 to detect the presence of a valid texture associated with sampler, but that might result in undefined behavior.
The ARB_texture_query_levels extension does indeed explicitly state that textureQueryLevels(gsampler2D sampler) returns 0 if there is no texture associated with sampler.
Should you go another way about this? I think so: instead of making the decision inside the shader, simply bind a 1x1 white texture, unconditionally sample it, and multiply the result with color, which will obviously yield 1.0 * color. That is more portable and faster, too.
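Creating that fallback texture is a one-time setup step, sketched here (assumes an active GL context; the variable names are placeholders):

```c
/* Create a 1x1 opaque white texture to stand in for "no texture". */
GLuint whiteTex;
GLubyte whitePixel[4] = { 255, 255, 255, 255 };
glGenTextures(1, &whiteTex);
glBindTexture(GL_TEXTURE_2D, whiteTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, whitePixel);
```

Bind whiteTex whenever no real texture is available, and the shader can stay branch-free.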

glUniform fails to set sampler value

I'm using OpenGL and GLSL to draw a texture over a simple mesh.
My problem is that when I use glUniform1i to set the value of a sampler2D uniform, it is not set. For example, in this code:
glUseProgram(programObject);
glUniform1i(glGetUniformLocation(programObject, "texture"), 1);
GLint val;
glGetUniformiv(programObject,
               glGetUniformLocation(programObject, "texture"),
               &val);
printf("Value is %d\n", val);
The value printed at the command line is 0. I have checked that the shaders compile and the program links correctly. glGetUniformLocation returns an index > 0. Furthermore, glGetError doesn't report any error at any point.
This only happens when the uniform is a sampler. As the texture is not set, querying it in the shader always returns (0, 0, 0, 1). (I have also checked, using apitrace, that my texture is correctly bound to GL_TEXTURE1, which is done immediately after the code shown.)
I have searched extensively for this problem and have only found one instance of something similar, here. The solution there was to initialize GLEW, but this hasn't worked for me.
I'm happy to provide any extended info needed (or the trace from apitrace).
Edit
Here is the shader:
uniform sampler2D texture;
varying vec2 textureCoord;
void main() {
    gl_FragColor = texture2D(texture, textureCoord);
}
Solution
I managed to solve the problem after some work.
As it turns out, this problem wasn't due to the uniform not being set: always returning 0 for samplers, regardless of the actual value, appears to be a quirk of the GL implementation. This happened only under Intel's driver for Sandy Bridge; testing the code on other implementations always returned the correct sampler value.
The texture always returning (0, 0, 0, 1) was due to a mistake I made during the generation of the texture.
glGetUniformiv still returns 0, but the texture used is now the correct one.
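For completeness, the usual pairing of sampler uniform and texture unit looks like this sketch (assuming an active GL context; programObject and textureObject are placeholders):

```c
/* The sampler uniform holds a texture *unit* index, not a texture name. */
glUseProgram(programObject);
glUniform1i(glGetUniformLocation(programObject, "texture"), 1); /* use unit 1 */

/* Bind the actual texture object to that unit. */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureObject);
```

If the two unit numbers disagree, the shader silently samples whatever (if anything) is bound to the unit named by the uniform, which typically yields the (0, 0, 0, 1) result described above.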