GLSL sampler2DShadow deprecated past version 120? What to use?

I've been trying to implement percentage-closer filtering for my shadow mapping, as described in Nvidia GPU Gems.
When I try to sample my shadow map using a uniform sampler2DShadow and shadow2D or shadow2DProj, the GLSL compile fails and gives me the error:
shadow2D deprecated after version 120
How would I go about implementing an equivalent solution in GLSL 330+? I'm currently just using a binary texture sample along with Poisson sampling, but the staircase aliasing is pretty bad.

Your title is way off base. sampler2DShadow is not deprecated. The only thing that changed in GLSL 1.30 was that the mess of functions like texture1D, texture2D, textureCube, shadow2D, etc. were all replaced with overloads of texture(...).
Note that this overload of texture(...) is equivalent to shadow2D(...):
float texture(sampler2DShadow sampler,
              vec3 P,
              [float bias]);
The texture coordinates used for the lookup with this overload are P.st, and the reference value used for the depth comparison is P.p (the third component). This overload only works properly when texture comparison is enabled (GL_TEXTURE_COMPARE_MODE == GL_COMPARE_REF_TO_TEXTURE) for the texture/sampler object bound to the shadow sampler's texture image unit; otherwise the results are undefined.
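As a minimal sketch (shadowMap and shadowCoord are hypothetical names), a GLSL 3.30 fragment shader using this overload might look like this:

#version 330 core

uniform sampler2DShadow shadowMap; // depth texture with GL_COMPARE_REF_TO_TEXTURE set
in vec3 shadowCoord;               // s and t in .st, reference depth in .p
out vec4 fragColor;

void main()
{
    // GLSL 1.20:  float lit = shadow2D(shadowMap, shadowCoord).r;
    // GLSL 3.30+: the texture() overload returns the comparison result as a float
    float lit = texture(shadowMap, shadowCoord);
    fragColor = vec4(vec3(lit), 1.0);
}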
Beginning with GLSL 1.30, the only time you need to use a different texture lookup function is when you are doing something fundamentally different (e.g. texture projection => textureProj, requesting an exact LOD => textureLod, fetching a texel by its integer coordinates/sample index => texelFetch, etc.). Texture lookup with comparison (shadow sampler) is not considered fundamentally different enough to require its own specialized texture lookup function.
This is all described quite thoroughly on OpenGL's wiki site.

Related

What is the difference between textureLodOffset and texelFetchOffset with respect to their expected offsets?

I know that texelFetch performs a lookup using integer texel coordinates in the range [0, textureSize), and textureLod using normalized coordinates in the range [0, 1], both with an explicit level of detail.
But I have noticed that textureLodOffset requires an offset as ivec2, int and so on. This seems to be the case for texelFetchOffset as well.
I can see why this makes sense for texelFetch, but I am not sure how it relates to textureLod.
I am used to computing the offset coordinate manually in the shader, with something like coord.xy + 1/textureSize() for textureLod. I don't think this is causing any issues with performance etc., but I would like to know how we can use textureLodOffset with integer offsets as specified in the documentation, and what makes their use different from texelFetchOffset.
The difference between textureLodOffset and texelFetchOffset is the texture coordinates.
The texture coordinates of textureLodOffset are in the range [0, 1]. The texture coordinates of texelFetchOffset, however, are in units of texels, and their range depends on the size of the texture. Compared to the *Fetch* functions, textureLodOffset respects texture filtering and wrapping.
The *Fetch* functions perform a lookup of a single texel from an unambiguous texture coordinate. See OpenGL 4.6 API Compatibility Profile Specification - 11.1.3.2 Texel Fetches.
In both cases, the offset argument is integral because it is meant to address neighboring texels. See also OpenGL Shading Language 4.60 Specification - 8.9.1. Texture Query Functions.
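To make the difference concrete, here is a small sketch (tex and uv are hypothetical names) contrasting the calls; in every case the offset is in texels, applied after the LOD has been selected:

#version 330 core

uniform sampler2D tex;
in vec2 uv;          // normalized coordinates in [0, 1]
out vec4 fragColor;

void main()
{
    // normalized coordinates, texel offset applied at the selected LOD
    vec4 a = textureLodOffset(tex, uv, 0.0, ivec2(1, 0));

    // roughly the manual equivalent at LOD 0 (ignoring filtering subtleties)
    vec2 texel = 1.0 / vec2(textureSize(tex, 0));
    vec4 b = textureLod(tex, uv + vec2(1.0, 0.0) * texel, 0.0);

    // texelFetchOffset takes integer texel coordinates instead
    ivec2 p = ivec2(uv * vec2(textureSize(tex, 0)));
    vec4 c = texelFetchOffset(tex, p, 0, ivec2(1, 0));

    fragColor = (a + b + c) / 3.0;
}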

Varyings from all vertices in fragment shader with no interpolation. Why not?

If we pass a varying from any geometry stage (vertex, geometry or tessellation shader) to the fragment shader, we always lose some information. Basically, we lose it in two ways:
By interpolation: smooth, noperspective or centroid - it does not matter. If we passed 3 floats (one per vertex) in the geometry stage, we will get only one mixed float in the fragment stage.
By discarding: when doing flat interpolation, the hardware discards all values except the one from the provoking vertex.
Why does OpenGL not allow functionality like this:
Vertex shader:
// nointerp is an interpolation qualifier I would like to have
// along with smooth or flat.
nointerp out float val;
void main()
{
    val = whatever;
}
Fragment shader:
nointerp in float val[3];
// val[0] might contain the value from the provoking vertex,
// and the rest of the val[] elements contain values from the vertices in winding order.
void main()
{
    // some code
}
In GLSL 330 I need to use integer indexing tricks or divide by barycentric coordinates in the fragment shader if I want values from all vertices.
Is it hard to implement in hardware, or is it not widely requested by shader coders? Or am I not aware of it?
Is it hard to implement in hardware, or is it not widely requested by shader coders?
It is usually just not needed by typical shading algorithms, so traditionally there has been the automatic (more or less) interpolation for each fragment. It is probably not too hard to implement in current-gen hardware, because modern desktop GPUs typically use "pull-model interpolation" (see Fabian Giesen's blog article) anyway, meaning the actual interpolation is already done in the shader; the fixed-function hardware just provides the interpolation coefficients. But this is hidden from you by the driver.
Or am I not aware of it?
Well, in unextended GL, there is currently (GL 4.6) no such feature. However, there are two related GL extensions:
GL_AMD_shader_explicit_vertex_parameter
GL_NV_fragment_shader_barycentric
which basically provide the features you are asking for.
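For illustration, here is a minimal fragment shader sketch using GL_NV_fragment_shader_barycentric (val is a hypothetical varying; indexing is by primitive vertex, which need not match the winding-order layout proposed above):

#version 450 core
#extension GL_NV_fragment_shader_barycentric : require

pervertexNV in float val[3]; // per-vertex values, read without interpolation
out vec4 fragColor;

void main()
{
    // gl_BaryCoordNV holds this fragment's barycentric weights, so smooth
    // interpolation can be reproduced manually when it is wanted:
    float interpolated = gl_BaryCoordNV.x * val[0]
                       + gl_BaryCoordNV.y * val[1]
                       + gl_BaryCoordNV.z * val[2];
    fragColor = vec4(val[0], val[1], val[2], interpolated);
}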

How does OpenGL decide between using MAG_FILTER and MIN_FILTER when accessing a texture in a shader?

When configuring OpenGL with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, ...) and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ...), how does OpenGL decide which filter to use when accessing a texture in a shader with texture(...)?
My only guess is that it calculates the pixel footprint, but since you could access the texture in either the fragment or the vertex shader, it can't know onto which primitive the texture is projected.
My only guess is that it calculates the pixel footprint
Yes, that's what it does. It will approximate the pixel footprint in texture space by calculating the derivatives of the texture coordinates with respect to the window-space x and y directions, and it will approximate those derivatives by finite differencing within a 2x2 pixel quad, just like the dFdx and dFdy GLSL functions work. It will use the longer of the two partial-derivative vectors as the footprint size and calculate the level-of-detail (LOD) value based on that. If the resulting LOD is at or below a small threshold (zero for the common filter combinations), the texture is being magnified and GL_TEXTURE_MAG_FILTER applies; otherwise GL_TEXTURE_MIN_FILTER applies.
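As a rough sketch (not the exact spec rules, which add clamping and a threshold constant), this footprint computation could be written in GLSL like this:

// approximates the LOD the hardware computes for a 2D texture
float approximateLod(vec2 uv, vec2 texSize)
{
    vec2 dx = dFdx(uv * texSize); // texel-space change per window-space x step
    vec2 dy = dFdy(uv * texSize); // texel-space change per window-space y step
    float rho = max(length(dx), length(dy)); // longer partial-derivative vector
    return log2(rho); // <= 0 means magnification, > 0 means minification
}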
but since you could access the texture in either the fragment or the vertex shader, it can't know onto which primitive the texture is projected.
Correct, and that's why the GLSL specification (version 4.60) states the following at the beginning of section 8.9 Texture Functions:
Texture lookup functions are available in all shading stages. However, automatic level of detail is
computed only for fragment shaders. Other shaders operate as though the base level of detail were
computed as zero.
Some (i.e. most) GLSL texture accessing functions say that they require "implicit derivatives". All such functions only work correctly:
In the fragment shader.
Within uniform control flow of the FS.
If you call a texture access function that requires implicit derivatives in a non-fragment shader, then it will only access from the base mipmap level. However, if you're in a fragment shader but outside of uniform control flow, then all such functions have undefined behavior.
So if you're not in a fragment shader, you either want to access from the base mipmap level (in which case MAG_FILTER applies), or you want to use functions that explicitly provide the values used for the lookup: the Lod functions (where you explicitly say which mipmap level(s) to fetch from), the Grad functions (where you explicitly supply the derivatives used to determine the pixel footprint), or any of the texelFetch or textureGather functions (which don't do interpolation at all).
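A sketch of those explicit alternatives in a vertex shader (heightMap and uv are hypothetical names):

#version 330 core

uniform sampler2D heightMap;
in vec2 uv;

void main()
{
    // no implicit derivatives in a vertex shader, so pick the LOD explicitly ...
    float h = textureLod(heightMap, uv, 0.0).r;

    // ... or supply the derivatives yourself
    float hg = textureGrad(heightMap, uv, vec2(0.01, 0.0), vec2(0.0, 0.01)).r;

    gl_Position = vec4(uv * 2.0 - 1.0, (h + hg) * 0.5, 1.0);
}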

How to check if a sampler is null in glsl?

I have a shader with a _color uniform and a sampler. Now I want to draw with _color ONLY if the sampler was not set. Is there any way to figure that out within the shader? (Unfortunately the sampler returns (1,1,1,1) when not assigned, which makes mixing it via alpha impossible.)
You cannot do that. The sampler is an opaque handle which just references a texture unit. I'm not sure if the spec guarantees the (1,1,1,1) result when sampling from a unit where no texture is bound, or if that is undefined behavior.
What you can do is just use another uniform to switch between using the sampler or the uniform color, or just use different shaders and switch between those. There is also the possibility of shader subroutines here, but I don't know if that would be the right approach for such a simple problem.
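A minimal sketch of the extra-uniform approach (useTexture is a hypothetical name, set by the application whenever a texture is actually bound):

#version 330 core

uniform sampler2D tex;
uniform vec4 _color;
uniform bool useTexture;

in vec2 uv;
out vec4 fragColor;

void main()
{
    fragColor = useTexture ? texture(tex, uv) : _color;
}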
I stumbled over this question trying to solve a similar problem.
Since GLSL 4.30,
int textureQueryLevels(gsamplerX sampler);
is a built-in function. On p. 151, the GLSL spec says:
The value zero will be returned if no texture or an incomplete texture is associated with sampler.
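Based on that, a sketch of the check (tex, _color and uv are hypothetical names; note the NVIDIA caveat in the edit below):

#version 430 core

uniform sampler2D tex;
uniform vec4 _color;

in vec2 uv;
out vec4 fragColor;

void main()
{
    // zero levels: no texture (or an incomplete one) is bound to the unit
    if (textureQueryLevels(tex) == 0)
        fragColor = _color;
    else
        fragColor = texture(tex, uv);
}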
In the OpenGL forums I found a thread on this question suggesting to use
ivecY textureSize(gsamplerX sampler, int lod);
and to test whether the texture size is greater than zero. But this is, to my understanding, not covered by the standard. Section 11.1.3.4 of the OpenGL specification says that
If the computed texture image level is outside the range [level_base, q], the results are undefined ...
Edit:
I just tried this method on my problem, and as it turns out NVIDIA has some issues with this function, resulting in a non-zero value when no texture is bound. (See the NVIDIA bug report from 2015.)
Sampling a sampler2D affects the x, y and z components; if you check those against zero, with a known 1.0 as the fourth (w) component, you can test whether a texture was supplied:
vec4 texturecolor = texture2D(sampler, uv) * vec4(color, 1.0);
if (texturecolor == vec4(0.0, 0.0, 0.0, 1.0))
{
    texturecolor = vec4(color, 1.0);
}

Cg to GLSL: f3texRECT equivalent?

I'm trying to convert a Cg program to a GLSL program.
What I've done so far seems correct, but the GLSL shader produces incorrect output. The incorrect behavior is revealed by a set of test images.
The only dark spot I'm still investigating is the function f3texRECT, which I've translated to texture. However, I cannot find any documentation about f3texRECT.
Can somebody shed some light on this?
f3texRECT() looks like it would map to texture() with a sampler2DRect instead of a sampler2D -- meaning the texture coordinates are unnormalized ([0..textureSize-1] instead of [0..1]). The "f3" prefix means the result is a three-channel color. Older versions of GLSL had a texture2DRect() function for this purpose, but it has been deprecated.
f3texRECT(..args..) is exactly equivalent to texRECT(..args..).xyz in Cg -- it exists as a holdover from older shading languages that didn't have fully general swizzles.
In GLSL the equivalent function is texture, so you should be able to use texture(..args..).xyz there too, though the args might be slightly different.
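A sketch of such a translation (tex and coord are hypothetical names):

#version 330 core

uniform sampler2DRect tex; // rectangle texture: unnormalized coordinates

in vec2 coord;             // in [0, textureSize - 1]
out vec4 fragColor;

void main()
{
    // Cg:   float3 c = f3texRECT(tex, coord);
    // GLSL: texture() with a sampler2DRect, keeping only the rgb part
    vec3 c = texture(tex, coord).rgb;
    fragColor = vec4(c, 1.0);
}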
The main confusion when translating texture calls from Cg to GLSL is dealing with shadow textures -- shadow texture lookups in Cg use normal samplers, but the tex call has an extra component in the coordinate, whereas GLSL has distinct shadow sampler types instead. So when translating Cg to GLSL you need to figure out which textures are 'normal' textures and which are 'shadow' textures, based on how they are used. In the rare case that you have a single texture used for both normal and shadow lookups, you need to split it into two samplers for GLSL.