I have a 3D texture. Each texel contains a transparency value in the alpha channel.
I need to generate the mipmaps in such a way that each mipmap texel always takes the value of the source texel with the maximum alpha value.
In other words, if there are 4 texels, 3 with an alpha of 0 and one with an alpha of 1, the resulting mipmap texel should be 1.
How can I achieve this?
If I need to write my own shaders, what is the optimal way to do it?
EDIT:
My question, to put it more clearly, is:
Do I need to manually create a shader that does this, or is there a way to use built-in OpenGL functions to save me the trouble?
To do that, you'll need to render to each layer of each mipmap with a custom shader that computes the max of 8 samples from the level above.
This can be done by attaching each layer of the rendered mipmap to a framebuffer (using glFramebufferTexture3D) and, in the shader, sampling from the same texture using texelFetch (its lod parameter specifies the mipmap level to sample from).
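For illustration, here is a minimal, untested sketch of that setup in C++/GLSL. All names (fbo, tex, prog, drawFullscreenQuad) are hypothetical, the texture is assumed square, and prog is assumed to be compiled from the fragment shader below:

// Fragment shader: each destination texel takes the value of the source
// texel (out of 8) with the maximum alpha.
static const char* kMaxAlphaFS = R"glsl(
#version 330 core
uniform sampler3D uTex;  // base/max level clamped to the source mip (C++ side)
uniform int uSrcZ;       // first of the two source slices for this output layer
out vec4 fragColor;
void main() {
    ivec2 p = ivec2(gl_FragCoord.xy) * 2;
    vec4 best = texelFetch(uTex, ivec3(p, uSrcZ), 0);
    for (int z = 0; z < 2; ++z)
    for (int y = 0; y < 2; ++y)
    for (int x = 0; x < 2; ++x) {
        vec4 t = texelFetch(uTex, ivec3(p + ivec2(x, y), uSrcZ + z), 0);
        if (t.a > best.a) best = t;  // keep the sample with the maximum alpha
    }
    fragColor = best;
}
)glsl";

void buildMaxAlphaMips(GLuint fbo, GLuint tex, GLuint prog, int size, int levels) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glBindTexture(GL_TEXTURE_3D, tex);
    glUseProgram(prog);
    for (int lod = 1; lod < levels; ++lod) {
        int dim = size >> lod;
        // Restrict the sampler to the source level only. This keeps the level
        // being written out of the sampled range (no undefined feedback loop)
        // and makes texelFetch's lod 0 address that source level.
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, lod - 1);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, lod - 1);
        glViewport(0, 0, dim, dim);
        for (int layer = 0; layer < dim; ++layer) {
            glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_3D, tex, lod, layer);
            glUniform1i(glGetUniformLocation(prog, "uSrcZ"), 2 * layer);
            drawFullscreenQuad();  // hypothetical helper: rasterize one quad
        }
    }
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, levels - 1);
}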
Say I have a 512x512 pixels texture that I am displaying on 256x256 pixels on the screen.
In that case "the level-of-detail function used when sampling from the texture determines that the texture should be minified" according to my GL_TEXTURE_MIN_FILTER which is GL_LINEAR.
As a result 2x2 pixels will be minified to 1 pixel (distance weighted linear average).
Is there some way that I can control the minification?
Say I instead want 4x4 or 8x8 pixels to be minified to 1 pixel since I prefer a coarse or rasterized image ;-).
Alternatively is there some way I can achieve the same effect in the shader code?
If you want to precisely control the filtering, write an appropriate fragment shader, use the texelFetch function to access the unfiltered texture data, then implement the filter in the shader.
If you're going for a Taylor approximation of the filtering kernel, keep in mind that you can make use of bilinear mipmap filtering (i.e. GL_TEXTURE_MIN_FILTER := GL_LINEAR_MIPMAP_LINEAR) to implement the 0th and 1st order terms of the Taylor expansion.
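For the coarse-minification case above (4x4 or 8x8 texels per output pixel), a minimal fragment-shader sketch along these lines could look as follows; uTex, uBlock and vUV are hypothetical names, and the shader source is shown here as a C++ string constant:

static const char* kCoarseMinifyFS = R"glsl(
#version 330 core
uniform sampler2D uTex;
uniform int uBlock;   // source texels per axis collapsing into one, e.g. 4 or 8
in vec2 vUV;
out vec4 fragColor;
void main() {
    ivec2 size = textureSize(uTex, 0);
    // Snap to the top-left corner of the uBlock x uBlock cell this UV falls into.
    ivec2 cell = (ivec2(vUV * vec2(size)) / uBlock) * uBlock;
    vec4 sum = vec4(0.0);
    for (int y = 0; y < uBlock; ++y)
        for (int x = 0; x < uBlock; ++x)
            sum += texelFetch(uTex, cell + ivec2(x, y), 0);
    fragColor = sum / float(uBlock * uBlock);  // box average over the whole cell
}
)glsl";

Every screen pixel falling into the same cell receives the same average, which produces the blocky look asked for.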
Hello, I'm trying to implement a Bloom effect in my program. In fact, I've already implemented the effect using a highlight pass and a separate Gaussian blur pass.
Here's an example:
Bright pass texture:
Gaussian blur render pass (2 internal passes for this effect):
And finally the final pass (brightPass + BlurPass):
(To be precise, I haven't implemented HDR tone mapping yet.)
But I found a very interesting article from Intel:
https://software.intel.com/en-us/articles/compute-shader-hdr-and-bloom
It says:
"First the bright pass is performed where values below a specified threshold are filtered out. The bright pass output is then downscaled by half 4 times. Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output."
I understand how it works, but I don't know how to perform the texture downscaling in OpenGL. I believe that if I use the glGenerateMipmap() function (at the initialization of my FBO) with 4 mipmap levels, I will get my 4 downscaled textures directly at the desired scales (1/16, 1/8, 1/4 and 1/2), as described in the article.
But my problem is that I can't find a way to do this!
Is there a way to bind the other textures generated by mipmapping and use them in the fragment shader? Or should I render the bright pass 4 times with 4 separate FBOs at the different scales (1/16, ...)? I think that would work, but it's probably not great for performance. I think the bright pass should be rendered once at the full window size, and then the downscaled textures (mipmaps), already in memory, should be used, but I don't know how to bind and use them in my shaders! I'm really lost.
Thanks a lot in advance for your help!
You may use textureLod, as BDL suggested, but I think that there is no real need for this.
Once the required mipmap levels are generated, the right level will be selected automatically depending on the size of the active render buffer.
The main idea of the article you mentioned is that several blurring steps are performed on differently sized sources.
After the mipmap levels are generated, you will have 4 levels that represent 1/16, 1/8, 1/4 and 1/2 scaled versions of the bright pass.
You should blur each of these buffers, rendering into a buffer of the same size (1/16, 1/8, 1/4 and 1/2 respectively).
If the render buffer you are rendering to is 1/16 the scale of the source texture from the bright pass, then the 4th mipmap level will be used, so you do not need to specify the mipmap level manually.
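As a rough, untested sketch of that loop (brightTex, blurFbo[], blurProg, width, height and drawFullscreenQuad are all hypothetical names):

glBindTexture(GL_TEXTURE_2D, brightTex);
glGenerateMipmap(GL_TEXTURE_2D);  // produces the 1/2 .. 1/16 levels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

glUseProgram(blurProg);
for (int i = 1; i <= 4; ++i) {  // levels 1..4 are the 1/2 .. 1/16 scales
    glBindFramebuffer(GL_FRAMEBUFFER, blurFbo[i]);
    glViewport(0, 0, width >> i, height >> i);
    // Plain texture() sampling in the blur shader is enough: the render
    // target is 2^i times smaller, so the LOD machinery picks level i itself.
    drawFullscreenQuad();
}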
You can access a specific mipmap level in a shader using the
gvec4 textureLod(gsampler2D sampler, vec2 P, float lod);
function, where lod specifies the mipmap level.
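For example, here is a sketch of a composite pass that reads the four downscaled bright-pass levels explicitly; uBrightPass and vUV are hypothetical names, and GL_TEXTURE_MIN_FILTER must be a mipmap mode, or only the base level is accessible:

static const char* kBloomCompositeFS = R"glsl(
#version 330 core
uniform sampler2D uBrightPass;  // bright pass with its mip chain generated
in vec2 vUV;
out vec4 fragColor;
void main() {
    fragColor = 0.25 * (textureLod(uBrightPass, vUV, 1.0)    // 1/2 scale
                      + textureLod(uBrightPass, vUV, 2.0)    // 1/4 scale
                      + textureLod(uBrightPass, vUV, 3.0)    // 1/8 scale
                      + textureLod(uBrightPass, vUV, 4.0));  // 1/16 scale
}
)glsl";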
I'm trying to code a texture reprojection using a UV gBuffer (a texture that contains the desired UV value for mapping at each pixel).
I think this should be easy to understand just by looking at this picture (I cannot attach it due to low reputation):
http://www.andvfx.com/wp-content/uploads/2012/12/3-objectes.jpg
The first image (the black/yellow/red/green one) is the UV gBuffer, representing the UV values; the second one is the diffuse channel, and the third the desired result.
Doing this in OpenGL is pretty trivial.
Draw a simple rectangle and use this pseudo-code as the fragment shader:
vec2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;
vec3 finalcolor = texture(DIFFgbufferTex, newUV).rgb;
gl_FragColor = vec4(finalcolor, 0.0);
OpenGL takes care of selecting the mipmap level, anisotropic filtering, etc., whereas if I do this in a regular CPU process I get a single point sample for finalcolor, so my result is aliased.
Any advice here? I was wondering about manually computing a kind of mipmap and selecting the level by checking the neighboring pixels, but I'm not sure if this is the right way; I'm also unsure how to deal with the fact that the UVs could change quickly horizontally but slowly vertically, or vice versa.
In fact, I don't know how this is computed internally in OpenGL/DirectX; I've used this kind of code for a long time but never thought about the internals.
You are on the right track.
To select mipmap level or apply anisotropic filtering you need a gradient. That gradient comes naturally in GL (in fragment shaders) because it is computed for all interpolated variables after rasterization. This all becomes quite obvious if you ever try to sample a texture using mipmap filtering in a vertex shader.
You can compute the LOD (lambda) as such:
ρ = max( √((du/dx)² + (dv/dx)²), √((du/dy)² + (dv/dy)²) )
λ = log₂ ρ
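In a fragment shader, the derivatives come for free from dFdx/dFdy; on the CPU, the same finite differences can be taken between neighboring pixels of the UV gBuffer. A small GLSL helper mirroring the formula (shown as a C++ string constant; names are hypothetical):

static const char* kLodHelperGLSL = R"glsl(
// Fragment-shader helper: uv in [0,1], texSizeInTexels = textureSize(...).
float computeLod(vec2 uv, vec2 texSizeInTexels) {
    vec2 duvdx = dFdx(uv) * texSizeInTexels;  // (du/dx, dv/dx) in texel units
    vec2 duvdy = dFdy(uv) * texSizeInTexels;  // (du/dy, dv/dy) in texel units
    float rho = max(length(duvdx), length(duvdy));
    return log2(rho);  // lambda; can be passed to textureLod explicitly
}
)glsl";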
The mipmap level is picked based on the size on screen after reprojection: after you emit a triangle, check the rasterized size and pick the appropriate mipmap.
As for filtering, it's not that hard to implement, e.g., bilinear filtering manually.
Is it possible to attach a texture to an FBO which has mipmaps?
I am currently trying to do this. I have a texture with several mipmap levels, and I am attaching it to an FBO. When I clear the color for this buffer, I still see the original texture in the output. Once I attach another texture with only 1 mipmap level, the FBO draws the results correctly.
Though it's hard to say where your problem lies without any code, the fast and easy answer is just: Of course this is possible! Ever wondered what the level parameter of all those glFramebufferTexture functions is for?
But you can only write to a single mipmap level of the respective texture at a time; all the other levels remain unchanged. The usual way is to write into mipmap level 0 (as you would for a non-mipmapped texture) and generate the remaining levels by means of glGenerateMipmap. But you can also write to any other level, or to each and every level individually.
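A minimal sketch of attaching a specific level (tex, baseWidth and baseHeight are hypothetical; assumes a GL 3.x context):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// The last argument is the mipmap level to render into; 0 is the base image.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 2);

// The viewport must match the size of that level, not of the base level.
glViewport(0, 0, baseWidth >> 2, baseHeight >> 2);
// ... draw ...

// Alternatively: render into level 0 only and derive the remaining levels.
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);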
I need to setup a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide a very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible for a fragment shader to write anywhere other than the fragment it is processing. What you probably want is ping-pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
On the first run, you'd use (1) and (2) as source textures to draw into (3). Next time through the loop, you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are connected with framebuffer objects with the textures supplied as the colour buffer in place of a renderbuffer.
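A minimal sketch of the swap logic, with every name hypothetical (texA/texB are textures (2) and (3), fboA/fboB the framebuffers they are attached to, secondaryTex is texture (1)):

GLuint srcTex = texA;   // current source model texture map
GLuint dstFbo = fboB;   // FBO whose color attachment is the destination map
for (int pass = 0; pass < numPasses; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, dstFbo);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, secondaryTex);  // texture (1), always read-only
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, srcTex);        // last pass's result, read-only
    drawFullscreenQuad();                        // hypothetical helper

    // Swap roles: this pass's destination is the next pass's source.
    if (srcTex == texA) { srcTex = texB; dstFbo = fboA; }
    else                { srcTex = texA; dstFbo = fboB; }
}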
NVidia authored the GL_NV_texture_barrier extension in 2009 that allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Attempting to read and write the same texture (which is possible with FBOs) otherwise produces undefined results in OpenGL. The hardware-level issues prompting this restriction relate to caching and multisampling.
As far as I understand, you need a scatter operation (uniform FBO pixel space -> arbitrary mesh UV texture destination) to be performed in OpenGL. There is a way to do this, not as simple as you might expect, and not even as fast, but I can't find a better one:
Run a draw call of type GL_POINTS with a vertex count equal to the width*height of your source FBO.
Attach the model texture as the destination FBO's color attachment, with no depth attachment.
In a vertex shader, compute the original screen coordinate by using gl_VertexID.
Sample from the source FBO texture to get color and target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In a fragment shader, just copy the color to the output.
This will make the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture, as sketched below.
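A sketch of the two shaders for the steps above, embedded as C++ string constants; all names (uUVTex, uSecondary, uWidth, vColor) are hypothetical, and the exact packing of UV and color in the source FBO is just one possible layout:

static const char* kScatterVS = R"glsl(
#version 330 core
uniform sampler2D uUVTex;      // source FBO texture: a target UV per pixel
uniform sampler2D uSecondary;  // secondary texture, sampled per FBO pixel
uniform int uWidth;            // width of the source FBO in pixels
out vec4 vColor;
void main() {
    // Recover this point's original screen coordinate from its index.
    ivec2 pix = ivec2(gl_VertexID % uWidth, gl_VertexID / uWidth);
    vec2 uv = texelFetch(uUVTex, pix, 0).rg;   // destination in the model map
    vColor  = texelFetch(uSecondary, pix, 0);  // color to splat there
    // Map the target UV (0..1) to clip space (-1..1): the point then
    // rasterizes onto that texel of the destination FBO.
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
}
)glsl";

static const char* kScatterFS = R"glsl(
#version 330 core
in vec4 vColor;
out vec4 fragColor;
void main() { fragColor = vColor; }  // just copy the color through
)glsl";

// Issue the scatter: one point per source pixel, no vertex attributes needed.
// glDrawArrays(GL_POINTS, 0, srcWidth * srcHeight);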