Change the values of OpenGL's z-buffer - c++

I want to pass a matrix with depth values into the z-buffer of openGL. Somewhere I found that I can use:
glDrawPixels(640,480,GL_DEPTH_COMPONENT,GL_FLOAT,normalizedMappedDepthMat.ptr());
...where mat is an OpenCV Mat. Is it possible to change the z-buffer values in OpenGL using a texture binding? If so, how?

With the programmable pipeline, you can write to gl_FragDepth in the fragment shader, effectively setting a per-pixel z value. With that feature, you can implement a render-to-depth-buffer feature quite easily by rendering a full-screen quad (or something else, if you want to overwrite just a sub-region of the whole buffer). With reasonably modern GL, you can use single-channel texture formats with enough precision, such as GL_R32F. With older GL versions, you can manually combine the RGB or RGBA channels of standard 8-bit textures into 24- or 32-bit values.
However, there are some details you have to take into account. Writing to the depth buffer only occurs if GL_DEPTH_TEST is enabled, and the test itself might discard some of your fragments (if the depth buffer has not been cleared beforehand). One way to get around this is to set glDepthFunc() to GL_ALWAYS during your depth-buffer rendering.
You must also keep in mind that rendering writes to all buffers, not just the depth buffer. If you don't want to modify the color buffer, you can set glDrawBuffer() to GL_NONE, or use glColorMask() to prevent overwriting it. If you use a stencil buffer, you should of course also disable or mask out writing to it.
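For illustration, here is a minimal sketch of such a depth-writing pass, assuming a GL 3.3+ context, a compiled program containing the fragment shader below, and a drawFullScreenQuad() helper (all names are illustrative):

const char* depthWriteFS = R"(
    #version 330 core
    uniform sampler2D uDepthTex;   // e.g. a GL_R32F texture holding the depth values
    in vec2 vTexCoord;
    void main() {
        gl_FragDepth = texture(uDepthTex, vTexCoord).r;  // per-pixel depth
    }
)";

// C++ side: let every fragment pass and keep the color buffer untouched.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawFullScreenQuad();                                    // hypothetical helper
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);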

Related

OpenGL trim/inline contour of stencil

I have created a shape in my stencil buffer (black in the picture below). Now I would like to render to the backbuffer. I would like one texture on the outer pixels (say 4 pixels) of my stencil (red), and another texture on the remaining pixels (red).
I have read several solutions that involve scaling, but that will not work when there is no obvious center of the shape.
How do I acquire the desired effect?
The stencil buffer works great for operations that only look at the value stored for the fragment being rendered. However, it's not so great for operations that require looking at pixels other than the one corresponding to that fragment. In order to do outlining, you have to ask about the values of neighboring pixels, which stencil operations don't allow.
So, if it is possible to put the stencil data you want to test against into a non-stencil image format (i.e. a color image, perhaps with an integer texture format), that would make things much simpler. You can reproduce the effect of stencil discarding by using discard directly in the fragment shader, as sketched below. Since you can fetch arbitrarily from the texture (as long as you're not trying to modify it), you can fetch neighboring pixels and test their values. You can use that to identify when a fragment is near a border.
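A rough GLSL sketch of that idea (embedded here as a C++ string), assuming the shape mask lives in an integer color texture; all names are illustrative:

const char* outlineFS = R"(
    #version 330 core
    uniform usampler2D uMask;        // non-stencil copy of the shape (0 = outside)
    uniform sampler2D  uBorderTex;
    uniform sampler2D  uFillTex;
    uniform int        uBorderSize;  // e.g. 4 pixels
    in  vec2 vTexCoord;
    out vec4 fragColor;
    void main() {
        ivec2 p = ivec2(gl_FragCoord.xy);
        if (texelFetch(uMask, p, 0).r == 0u)
            discard;                                   // emulate the stencil test
        bool border = false;
        for (int y = -uBorderSize; y <= uBorderSize && !border; ++y)
            for (int x = -uBorderSize; x <= uBorderSize && !border; ++x)
                if (texelFetch(uMask, p + ivec2(x, y), 0).r == 0u)
                    border = true;                     // a neighbour is outside the shape
        fragColor = border ? texture(uBorderTex, vTexCoord)
                           : texture(uFillTex, vTexCoord);
    }
)";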
However, if you're relying on specialized stencil operations to build the stencil data itself (like bitwise operations), then that's more complicated. You will have to employ stencil texturing operations, so you're going to have to render to an FBO texture that has a depth/stencil format. And you'll have to set it up to allow you to read from the stencil aspect of the texture. This is an OpenGL 4.3 feature.
This effectively converts it into an 8-bit unsigned integer texture. That allows you to play whatever games you need to. But if you want to use stencil tests to discard fragments, you will also need texture barrier functionality to allow you to read from an image that's attached to the current FBO. You don't need to actually issue the barrier, since you should mask off stencil writing; you just need GL 4.5 or the NV/ARB_texture_barrier extension to be available, which is widely the case.
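For reference, a rough setup for that stencil-texturing path (GL 4.3+; width, height and the attachment point are illustrative):

GLuint depthStencilTex;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
// Sampling this texture should return the stencil index, read in the shader via a usampler2D.
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depthStencilTex, 0);
glStencilMask(0x00);   // mask off stencil writes while the texture is both attached and sampled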
Either way this happens, the biggest difficulty is going to be varying the size of the border. It is easy to test the 9 neighboring pixels to see whether a fragment is at a border, but the larger the border size, the larger the area of pixels each fragment has to test. At that point, I would suggest looking for a different solution, one that is based on some knowledge of what pattern is being written into the stencil buffer.
That is, if the rendering operation that lays down the stencil has some knowledge of the shape, then it could compute a distance to the edge of the shape in some way. This might require constructing the geometry in a way that it has distance information in it.

Writing to depth buffer from opengl compute shader

Generally on modern desktop OpenGL hardware what is the best way to fill a depth buffer from a compute shader and then use that depth buffer for graphics pipeline rendering with triangles etc?
Specifically, I am wondering about concerns regarding HiZ. I also wonder whether it's better to do the compute shader modifications to the depth buffer before or after the graphics rendering.
If the compute shader runs after the graphics rendering, I assume the depth buffer will typically be decompressed behind the scenes. But I worry that, done the other way around, the depth buffer may be left in a decompressed/non-optimal state for the graphics pipeline.
As far as I know, you cannot bind textures with any of the depth formats as images, and thus cannot write to depth-format textures in compute shaders. See the glBindImageTexture documentation; it lists the formats your texture format must be compatible with. Depth formats are not among them, and the specification says the depth formats are not compatible with the normal formats.
Texture copying functions have the same compatibility restrictions, so you can't even, e.g., write to a normal texture in the compute shader and then copy it to a depth texture. glCopyImageSubData does not explicitly have that restriction, but I haven't tried it, and it requires OpenGL 4.3 anyway.
What might work is writing to a normal texture, then rendering a fullscreen triangle and setting gl_FragDepth to values read from the texture, but that's an additional fullscreen pass.
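As a sketch of that workaround (all names are assumptions), the compute shader would write into a plain GL_R32F image, and the later full-screen pass would copy the sampled value into gl_FragDepth:

const char* depthCS = R"(
    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(r32f, binding = 0) uniform writeonly image2D uDepthOut;
    void main() {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        float depth = 0.5;                 // whatever your kernel actually computes
        imageStore(uDepthOut, p, vec4(depth));
    }
)";
// Then: glDispatchCompute(...), glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT),
// and a full-screen pass that assigns the sampled value to gl_FragDepth.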
I don't quite understand your second question - if your compute shader stuff modifies the depth buffer, the result will most likely be different depending on whether you do it before or after regular rendering because different parts will be visible or occluded.
But maybe that question is moot, since it seems you cannot manually write into depth buffers at all - which might also answer your third question: by not writing into depth buffers, you cannot mess with their compression :)
Please note that I'm no expert in this; I had a similar problem and looked at the docs/spec myself, so this all might be wrong :) Please let me know if you manage to write to depth buffers with compute shaders!

How to use glReadPixels() to return resized image from FBO?

In short: I need a quick way to resize the buffer image and then get the pixels back, so I can save them to a file etc.
Currently I first use glReadPixels(), and then I go through the pixels myself to resize them with my own resize function.
Is there any way to speed up this resizing, for example make OpenGL do the work for me? I think I could use glGetTexImage() with miplevel and mipmapping enabled, but as I noticed earlier, that function is bugged on my GFX card, so I can't use it.
I only need one miplevel, which could be anything from 1 to 4, but not all of them, to conserve some GPU memory. So is it possible to generate only one miplevel of wanted size?
Note: I don't think I can use multisampling, because I need pixel-precise rendering for stencil tests; if I rendered with multisampling, it would produce blurry pixels that would fail the stencil test and masking, and the result would be incorrect (AFAIK). Edit: I only want to scale the color (RGBA) buffer!
If you have OpenGL 3.0 or alternatively EXT_framebuffer_blit available (very likely -- all nVidia cards since around 2005 and all ATI cards since around 2008 have it, and even Intel HD graphics claims to support it), then you can glBlitFramebuffer[EXT] into a smaller framebuffer (with a correspondingly smaller destination rectangle) and have the graphics card do the work.
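A minimal sketch of such a downscale blit (assuming a GL 3.0+ context; srcFbo, dstFbo, the sizes and pixels are illustrative):

glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);       // a smaller FBO
glBlitFramebuffer(0, 0, srcWidth, srcHeight,          // source rectangle
                  0, 0, dstWidth, dstHeight,          // smaller destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);    // let the GPU filter the colors
glBindFramebuffer(GL_READ_FRAMEBUFFER, dstFbo);
glReadPixels(0, 0, dstWidth, dstHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);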
Note that you cannot ever safely rescale inside the same framebuffer, even if you were to say "I don't need the original", because overlapped blits are undefined (allowed, but undefined).
Or, you can of course just draw a fullscreen quad with a simple downscaling pixel shader (aniso decimation, if you want).
In fact, since you mention stencil in your last paragraph... if it is stencil (or depth) that you want to rescale, then you most definitely want to draw a fullscreen quad with a shader, because it will very likely not give the desired result otherwise. Usually one would choose a max filter rather than interpolation in such a case (e.g. what reasonable, meaningful result could interpolating a stencil value of 0 and a value of 10 give -- something else is needed, such as "any nonzero" or "max value in sample area").
Create a framebuffer of the desired target size and draw your source image onto a textured quad that fills that resized framebuffer. Then read the resized framebuffer contents using glReadPixels.
Pseudocode:
glBindTexture(GL_TEXTURE_2D, 0);                    // make sure the original-size color texture is not bound while rendering to it
glBindFramebuffer(GL_FRAMEBUFFER, originalSizeFBO); // color attachment: originalSizeColorTex
render_picture();
glBindFramebuffer(GL_FRAMEBUFFER, targetSizeFBO);   // targetSizeFBO uses a renderbuffer color attachment
glBindTexture(GL_TEXTURE_2D, originalSizeColorTex); // sample the full-size result
glViewport(0, 0, targetWidth, targetHeight);
render_full_viewport_quad_with_texture();
glReadPixels(0, 0, targetWidth, targetHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Blend FBO onto default framebuffer

To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've gotten to the blitting part fine, but I can't see how to blend my 2D FBO onto it? Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad textured with your FBO's color attachment and use the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit contents of the framebuffer object to the application framebuffer (or to any other). Although, as the spec states, it is not possible to use blending:
The pixel copy bypasses the fragment pipeline. The only fragment operations which affect the blit are the pixel ownership test and the scissor test.
So any blending means using a fragment shader, as suggested. One fullscreen pass with blending should be pretty cheap; I believe there is nothing to worry about.
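A minimal sketch of that full-screen blend pass (overlayTex and drawFullScreenQuad() are illustrative; it assumes a trivial shader that just samples the texture):

glBindFramebuffer(GL_FRAMEBUFFER, 0);                 // default framebuffer
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);    // standard "over" compositing
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, overlayTex);             // color attachment of the 2D FBO
drawFullScreenQuad();
glDisable(GL_BLEND);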
You can also use a shader that reads back from the framebuffer via EXT_shader_framebuffer_fetch. This is an OpenGL ES extension, though, and is not supported by all hardware.
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt

Using a buffer for selecting objects: accuracy problems

In each frame (as in frames per second) that I render, I make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I look up in that buffer which color corresponds to that position and find the corresponding object.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know it's not the most efficient, but performance is OK for now.
Now I have the problem that this buffer with "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variances.
Note that here I put all the color information in an unsigned byte in GL_RED (assuming for now I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace this by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
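If you create the texture yourself, something along these lines should do it (pickingTex is illustrative):

glBindTexture(GL_TEXTURE_2D, pickingTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // no interpolation when minifying
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // no interpolation when magnifying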
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like upscale 2x), with no fancy filtering. It looks blurry on the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the unscaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: in case your rendering doesn't depend on stencil buffer operations, you could put the object ID into the stencil buffer in the render pass to the window itself, instead of taking the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use an index buffer approach, is to use multiple render targets and render the object ID into an index buffer together with color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
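A rough sketch of the stencil-buffer variant (Object, objects, mouseX/mouseY and viewportHeight are illustrative, and the window must have a stencil buffer):

struct Object { GLuint id; void draw() const; };      // id in 1..255, 0 = "nothing"

glEnable(GL_STENCIL_TEST);
for (const Object& obj : objects) {
    glStencilFunc(GL_ALWAYS, obj.id, 0xFF);           // always pass, reference = object ID
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);        // store the ID wherever the object is drawn
    obj.draw();
}
GLubyte pickedId = 0;
glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
             GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, &pickedId);  // 0 = nothing under the cursor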