Weird behaviour using mipmapping from GLSL

I am trying to use mipmapping with Vulkan. I understand that I should use vkCmdBlitImage between successive mip levels of each image, but before doing that, I just wanted to know how to select the mip level from GLSL.
Here is what I did.
First, I load and draw a texture (using mip level 0) and there is no problem: the rendered image is the texture I loaded, so that is correct.
Second, I use this shader (intending to sample the second mip level, number 1), but the rendered image does not change:
#version 450

layout(set = 0, binding = 0) uniform sampler2D tex;

layout(location = 0) in vec2 texCoords;
layout(location = 0) out vec4 outColor;

void main() {
    outColor = textureLod(tex, texCoords, 1.0); // explicitly sample LOD 1
}
To my mind, the rendered image should change, but it does not at all; it is always the same image, even if I increase the "1" (the mip level).
Third, instead of changing anything in the GLSL code, I changed the mip level in the VkImageSubresourceRange used to create the image view, and the rendered image did change. That seems normal to me: once I use vkCmdBlitImage, I should see the original image at a lower resolution.
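For reference, the kind of change described in that third step might look roughly like this (a sketch, not the exact code from the repository; baseMipLevel in VkImageSubresourceRange is the field that selects the mip level the view exposes):
VkImageViewCreateInfo viewInfo{};
viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image    = image;                     // assumed VkImage handle
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;  // assumed texture format
viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel   = 1;  // 0 shows the full-resolution level
viewInfo.subresourceRange.levelCount     = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount     = 1;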
The real problem is that when I try to select a mip level from GLSL, it does not affect the rendered image at all, while doing it in C++ does (which seems fair).
Here is all my source code:
https://github.com/qnope/Vulkan-Example/tree/master/Mipmap

Judging by your default sampler creation info (https://github.com/qnope/Vulkan-Example/blob/master/Mipmap/VkTools/System/sampler.cpp#L28), you always set the maxLod member of your samplers to zero, so your LOD is always clamped between 0.0 and 0.0 (minLod/maxLod). This would fit the behaviour you describe.
So try setting the maxLod member of your sampler creation info to the actual number of mip levels in your texture, and changing the LOD level in the shader should then work fine.
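For illustration, the relevant fields would look something like this (a sketch; mipLevels is assumed to hold the texture's mip count):
VkSamplerCreateInfo samplerInfo{};
samplerInfo.sType      = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
samplerInfo.magFilter  = VK_FILTER_LINEAR;
samplerInfo.minFilter  = VK_FILTER_LINEAR;
samplerInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
samplerInfo.minLod     = 0.0f;
samplerInfo.maxLod     = static_cast<float>(mipLevels); // instead of 0.0f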

Related

Dual Source Blending with Multiple Render Target link error

I am trying to implement subpixel text rendering using dual-source blending.
https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_blend_func_extended.txt
layout(location = 0, index = 0) out vec4 fragColor;
layout(location = 0, index = 1) out vec4 srcColor1;

void main()
{
    vec4 scolor0;
    vec4 scolor1;
    // some calculations
    fragColor = scolor0;
    srcColor1 = scolor1;
}
It works fine. But now I want to write to an additional render target, like this:
layout(location = 0, index = 0) out vec4 fragColor;
layout(location = 0, index = 1) out vec4 srcColor1;
layout(location = 1) out uvec4 myMRT;
and I tried changing it to location = 1 or location = 2, but in either case I get this linking error:
error: assembly compile error for fragment shader at offset
error: too many color outputs when using dual source output
What is the best way to use MRT together with dual-source blending?
The OpenGL 4.6 core profile specification states in section 17.3.6.3 "Dual Source Blending and Multiple Draw Buffers" (emphasis mine):
Blend functions that require the second color input (Rs1, Gs1, Bs1, As1) (SRC1_COLOR, SRC1_ALPHA, ONE_MINUS_SRC1_COLOR, or ONE_MINUS_SRC1_ALPHA) may consume hardware resources that could otherwise be used for rendering to multiple draw buffers. Therefore, the number of draw buffers that can be attached to a framebuffer may be lower when using dual-source blending.
The maximum number of draw buffers that may be attached to a single framebuffer when using dual-source blending functions is implementation-dependent and may be queried by calling GetIntegerv with pname MAX_DUAL_SOURCE_DRAW_BUFFERS. When using dual-source blending, MAX_DUAL_SOURCE_DRAW_BUFFERS should be used in place of MAX_DRAW_BUFFERS to determine the maximum number of draw buffers that may be attached to a single framebuffer. The value of MAX_DUAL_SOURCE_DRAW_BUFFERS must be at least 1.
(The older ARB_blend_func_extended extension you referred to uses different wording, but also only guarantees a minimum of 1 for MAX_DUAL_SOURCE_DRAW_BUFFERS.)
So even with the most current GL spec, GL implementations are not required to support dual-source blending with multiple render targets at all. Looking at the current report for that capability on gpuinfo.org shows that there are no real-world implementations supporting a value bigger than 1. So no, you can't do this; at least that's the state of affairs as of December 2020.
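For completeness, the limit can be queried at runtime like this (a minimal sketch; in practice the returned value is almost always 1):
GLint maxDualSourceDrawBuffers = 0;
glGetIntegerv(GL_MAX_DUAL_SOURCE_DRAW_BUFFERS, &maxDualSourceDrawBuffers);
// If this is 1, dual-source blending and MRT cannot be combined.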

In OpenGL, is there a way to blend based on a separate channel's value in the shader?

In OpenGL (not ES), is there a universal way to blend based on one texture while drawing based on another texture's or variable's value? On OpenGL ES, I know that I can do custom blending on some platforms via extensions like GL_EXT_shader_framebuffer_fetch. The reason I ask is that I have a special texture whose fourth channel is not alpha, and I need to blend it using a separate alpha that is available in a different map.
You want dual-source blending, which is available in core as of OpenGL 3.3. This allows you to provide a fragment shader with two outputs and use both of them in the blend function.
You would declare outputs in the fragment shader like this:
layout(location = 0, index = 0) out vec4 outColor;
layout(location = 0, index = 1) out vec4 outAlpha;
You could then set the blending function like this, for premultiplied alpha:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC1_COLOR);
Or non-premultiplied alpha:
glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);
Note that SRC1 here refers to the second output of the fragment shader. If I remember correctly, this blending will only work for a single output location (one draw buffer).
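For completeness, a minimal sketch of the host-side setup, assuming the two outputs declared above (the draw call itself is omitted):
glEnable(GL_BLEND);
// Premultiplied alpha: result = src0 + dst * (1 - src1)
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC1_COLOR);
// ... bind the program declaring outColor/outAlpha and issue the draw call ...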

Draw a geometric object and a texture at different coordinates using the same shader in OpenGL (GLSL)

I wonder if there is a nice (or at least any) way to draw a geometric shape and a texture using the same shader program in OpenGL 2 (or maybe higher).
I saw this example of a fragment shader in a book (as an example of how the glTexEnvi function from OpenGL 1 can be replaced in OpenGL >= 2):
precision mediump float;
uniform sampler2D s_tex0;
varying vec2 v_texCoord;
varying vec4 v_primaryColor;
void main()
{
    gl_FragColor = texture2D(s_tex0, v_texCoord) * v_primaryColor;
}
However, it is very hard for me to guess the vertex shader if I want to draw the texture and some geometry at different coordinates (possibly intersecting somewhere).
Does anybody have an idea?
There has to be a way; it would make some things (for example, different blending modes) so much easier to do.
P.S. I had the idea of using a "switcher" in the vertex shader to pass different coordinates depending on whether it is in the "1" or "0" state, but for some reason it didn't work out. I hope you know a better solution.
I'll just leave it here.
Though I still don't know a possible vertex shader for the question above, I was lucky enough to solve my subgoal the hard way, using blending.
It turned out that blending with the constants GL_ONE_MINUS_DST_ALPHA and GL_DST_ALPHA didn't work as expected (when the destination is the rendered geometry) because the framebuffer's alpha channel is "turned off" by default (you can still use the alpha channel from the image), so you have to "turn it on" for blending with these constants to work properly.
On Android (and in Java in general) this can be done with the setEGLConfigChooser function.
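For illustration, the equivalent request on the EGL/C side might look like this (a sketch; display is assumed to be a valid EGLDisplay):
const EGLint configAttribs[] = {
    EGL_RED_SIZE,   8,
    EGL_GREEN_SIZE, 8,
    EGL_BLUE_SIZE,  8,
    EGL_ALPHA_SIZE, 8,   // request a destination alpha channel
    EGL_DEPTH_SIZE, 16,
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs = 0;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);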

OpenGL fragment shader second color

I have a problem with a fragment shader: I want two different objects to be illuminated with different lights. Here is my main code:
glUniform1i(TextureID, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(ShadowMapID, 1);
// Here I draw my first object
// Then I want to change the light computed in my fragment shader to color2
My fragment shader:
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 color2;

void main(){
    // Here I calculate my color variables
}
I have no idea how to achieve this effect. Do I have to write a second fragment shader? Is that necessary?
Not quite.
Think about what a fragment shader is: it runs for every fragment (roughly, every pixel) you draw. As such, it typically has one color output, denoting the value of that pixel. Multiple fragment-shader outputs are used in advanced techniques such as MRT (multiple render targets), to avoid redundant geometry processing.
If you want to change the value of the light between draws, you simply change the shader uniforms and then execute the draw call again. Another, analogous solution is to use a UBO.
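For example, a minimal sketch (the uniform name lightColor and the draw helpers are hypothetical):
GLint lightColorLoc = glGetUniformLocation(program, "lightColor");

glUniform3f(lightColorLoc, 1.0f, 1.0f, 1.0f);   // light for the first object
drawFirstObject();                               // hypothetical draw helper

glUniform3f(lightColorLoc, 1.0f, 0.5f, 0.2f);   // different light for the second object
drawSecondObject();                              // hypothetical draw helper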
Writing different shaders is necessary only if the logic changes fundamentally; otherwise, shaders are usually generic enough that modifying the bound data suffices for things like changing lights. (Changing the number of lights, though, is another story.)

Bind pre-rendered depth texture to FBO or to fragment shader?

In a deferred shading framework, I am using different framebuffer objects to perform various render passes. In the first pass I write the DEPTH_STENCIL_ATTACHMENT for the whole scene to a texture; let's call it DepthStencilTexture.
To access the depth information stored in DepthStencilTexture from different render passes, for which I use different framebuffer objects, I know two ways:
1) I bind the DepthStencilTexture to the shader and access it in the fragment shader, where I do the depth test manually, like this:
uniform vec2 WinSize; // window dimensions
vec2 uv = gl_FragCoord.st / WinSize;
float depth = texture(DepthStencilTexture, uv).r;
if (gl_FragCoord.z > depth) discard;
I also set glDisable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE).
2) I bind the DepthStencilTexture to the framebuffer object as DEPTH_STENCIL_ATTACHMENT and set glEnable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE) (edit: in this case I won't bind the DepthStencilTexture to the shader, to avoid a feedback loop, see the answer by Nicol Bolas, and if I need the depth in the fragment shader I will use gl_FragCoord.z)
In certain situations, such as drawing light volumes, for which I need the stencil test and writing to the stencil buffer, I am going with solution 2).
In other situations, in which I completely ignore the stencil and just need the depth stored in the DepthStencilTexture, does option 1) give any advantages over the more "natural" option 2)?
For example, I have a (silly, I think) doubt about it. Sometimes in my fragment shaders I compute the WorldPosition from the depth. In case 1) it would be like this:
uniform mat4 invPV; // inverse PV matrix
vec2 uv = gl_FragCoord.st / WinSize;
vec4 WorldPosition = invPV * vec4(uv, texture(DepthStencilTexture, uv).r, 1.0f);
WorldPosition = WorldPosition / WorldPosition.w;
In case 2) it would be like this (edit: this is wrong; gl_FragCoord.z is the current fragment's depth, not the depth stored in the texture):
uniform mat4 invPV; // inverse PV matrix
vec2 uv = gl_FragCoord.st / WinSize;
vec4 WorldPosition = invPV * vec4(uv, gl_FragCoord.z, 1.0f);
WorldPosition = WorldPosition / WorldPosition.w;
I am assuming that gl_FragCoord.z in case 2) will be the same as texture(DepthStencilTexture, uv).r in case 1), or, in other words, the depth stored in the DepthStencilTexture. Is that true? Is gl_FragCoord.z read from the currently bound DEPTH_STENCIL_ATTACHMENT even with glDisable(GL_DEPTH_TEST) and glDepthMask(GL_FALSE)?
Going strictly by the OpenGL specification, option 2 is not allowed. Not if you're also reading from that texture.
Yes, I realize you're using write masks to prevent depth writes. It doesn't matter; the OpenGL specification is quite clear. In accordance with section 9.3.1 of OpenGL 4.4, a feedback loop is established when:
an image from texture object T is attached to the currently bound draw framebuffer object at attachment point A,
the texture object T is currently bound to a texture unit U, and
the current programmable vertex and/or fragment processing state makes it possible (see below) to sample from the texture object T bound to texture unit U.
That is the case in your code. So you technically have undefined behavior.
One reason this is undefined is so that simply changing write masks doesn't have to trigger things like clearing framebuffer and/or texture caches.
That being said, you can get away with option 2 if you employ NV_texture_barrier, which, despite the name, is quite widely available on AMD hardware as well. The main thing is to issue a barrier after you do all of your depth writing, so that all subsequent reads are guaranteed to work. The barrier does all of the cache clearing and such that you need.
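A rough sketch of that ordering (the pass functions are placeholders):
glDepthMask(GL_TRUE);
renderDepthPrePass();     // placeholder: writes the scene's depth into DepthStencilTexture
glDepthMask(GL_FALSE);

glTextureBarrierNV();     // NV_texture_barrier: flush caches so later reads see the writes

renderLightingPasses();   // placeholder: passes that sample DepthStencilTexture while it stays attached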
Otherwise, option 1 is the only choice: doing the depth test manually.
I am assuming that gl_FragCoord.z in case 2) will be the same as texture(DepthStencilTexture, uv).r in case 1), or, in other words, the depth stored in the DepthStencilTexture. Is that true?
Neither is true. gl_FragCoord is the coordinate of the fragment being processed. This is the fragment generated by the rasterizer, based on the data for the primitive being rasterized. It has nothing to do with the contents of the framebuffer.