Render 2D image (with depth) in OpenGL preserving depth testing

I have an image from an external source (say a software ray tracer) that also has a depth buffer. I want to render that image in an OpenGL scene (which contains several other 3D objects) such that the OpenGL depth buffer is correctly updated, i.e. the image and the other 3D objects should be combined using correct depth testing. Any ideas? A solution without shaders would be nice.

Load your depth map into the depth buffer via glDrawPixels(width, height, GL_DEPTH_COMPONENT, type, pixels) and render as usual; the depth test then composites the image and the 3D objects correctly.
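A minimal sketch of the full sequence under the fixed-function pipeline (compatibility profile); colorPixels, depthPixels, width and height are assumed names for client-memory buffers holding the external image and its depth map (depth as floats in [0,1]):

glWindowPos2i(0, 0);                      /* image at the lower-left corner */

/* 1) Draw the color image; with the depth test disabled, the
      depth buffer is left untouched. */
glDisable(GL_DEPTH_TEST);
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, colorPixels);

/* 2) Fill the depth buffer. Depth-component fragments still go through
      the depth test, so force it to pass and mask out color writes. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthPixels);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);

/* 3) Render the other 3D objects as usual; they now depth-test against
      the ray-traced image. */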

Using an OpenGL framebuffer_object (FBO), you can attach color and depth textures as render targets. So the process would be as follows:
Load the external image into a color texture
Load the external depth map into a depth texture
Create a framebuffer_object and attach the two textures
Set the FBO as render target and render the rest of your geometry (don't glClear before rendering, or you'll wipe the uploaded image and depth); a sketch follows below.
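A hedged sketch of that setup (assumed names: colorTex and depthTex are GL_RGBA8 / GL_DEPTH_COMPONENT24 textures already filled with the external image and depth map; drawScene draws the other 3D objects):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    return;                            /* incomplete framebuffer */

drawScene();                           /* no glClear: keep the uploaded data */

/* Copy the combined result to the window. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);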

Related

Render to a layer of a texture array in OpenGL

I use OpenGL 3.2 to render shadow maps. For this, I construct a framebuffer that renders to a depth texture.
To attach the texture to the framebuffer, I use:
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shdw_texture, 0 );
This works great. After rendering the light view, my GLSL shader can sample the depth texture to determine which fragments the light reaches.
The problem I am trying to solve now, is to have many more shadow maps, let's say 50 of them. In my main render pass I don't want to be sampling from 50 different textures. I could use an atlas, but I wondered: could I pass all these shadow maps as slices from a 2D texture array?
So, somehow create a GL_TEXTURE_2D_ARRAY with a DEPTH format, and bind one layer of the array to the framebuffer?
Can framebuffers be backed for DEPTH by a texture array layer, instead of just a depth texture?
In general, you need to distinguish whether you want to create a layered framebuffer (see Layered Images) or whether you want to attach a single layer of a multilayered texture to a framebuffer.
Use glFramebufferTexture3D to attach a single Z slice of a 3D texture (GL_TEXTURE_3D) to a framebuffer, or use glFramebufferTextureLayer to attach a single layer of a three-dimensional or array texture to the framebuffer. In either case the last argument specifies the layer of the texture.
Entire multilayered textures are attached as layered attachments with glFramebufferTexture. See Layered rendering.
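A sketch of the single-layer route for this shadow-map case (assumed names: shadowArray, shadowFBO, numLights, SHADOW_SIZE and renderLightView are mine, not from the question):

GLuint shadowArray;
glGenTextures(1, &shadowArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, shadowArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
             SHADOW_SIZE, SHADOW_SIZE, numLights,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

/* One FBO is enough; re-attach a different layer for each light. */
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
for (int i = 0; i < numLights; ++i) {
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              shadowArray, 0, i);
    glClear(GL_DEPTH_BUFFER_BIT);
    renderLightView(i);
}

In the main pass the whole array binds as a single texture, and the GLSL shader can index it through a sampler2DArrayShadow (with GL_TEXTURE_COMPARE_MODE set) instead of 50 separate samplers.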

Why can't I render my depth map on a quad?

Some intro:
I'm currently trying to see how I can convert a depth map into a point cloud. In order to do this, I render a scene as usual and produce a depth map. From the depth map I try to recreate the scene as a point cloud from the given camera angle.
In order to do this I created an FBO so I can render my scene's depth map to a texture. The depth map is rendered to the texture successfully; I know this because I'm able to generate the point cloud from the depth texture using glGetTexImage and converting the acquired data.
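The readback step can look roughly like this (a hypothetical sketch; depthTexture, width and height are assumed names):

float *depth = malloc(width * height * sizeof(float));
glBindTexture(GL_TEXTURE_2D, depthTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
/* each (x, y, depth[y * width + x]) is then unprojected with the inverse
   view-projection matrix to obtain a world-space point */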
The problem:
For presentation purposes, I want the depth map to be visible in a separate window. So, I just created a simple shader to draw the depth map texture on a quad. However, instead of the depth texture being drawn on the quad, the texture drawn is the last one bound with glBindTexture. For example:
glUseProgram(simpleTextureViewerProgram);
glBindVertexArray(quadVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, randomTexture); /* bound to unit 0 first... */
glBindTexture(GL_TEXTURE_2D, depthTexture);  /* ...then rebinds unit 0 */
glUniform1i(quadTextureSampler, 0);          /* sampler reads unit 0 */
glDrawArrays(GL_TRIANGLES, 0, 6);
The code above renders "randomTexture" on the quad instead of "depthTexture". As I said earlier, "depthTexture" is the one I use in glGetTexImage, so it does contain the depth map.
I may be wrong, but if I had to make a guess, the last glBindTexture call fails because "depthTexture" is not an RGB texture but a depth-component texture. Is this the reason? How can I draw my depth map on the quad then?

OpenGL - how to render object to 3D texture as a volumetric billboard

I'm trying to implement volumetric billboards in OpenGL 3.3+ as described in the paper here and in the accompanying video here.
The problem I'm facing now (quite basic) is: how do I render a 3D object to a 3D texture (as described in the paper) efficiently? Assuming the object fits in a 256x256x128 texture, creating 256*256*128*2 framebuffers (because the paper says it should be rendered twice along each axis: +X, -X, +Y, -Y, +Z, -Z) would be insane, and as far as I know there are too few texture units to process that many textures (not to mention the amount of time needed).
Does anyone have any idea how to deal with something like that?
A slice of a 3D texture can be attached directly to the current framebuffer. So, create a framebuffer and a 3D texture, then render like this:
glFramebufferTexture3D(GL_FRAMEBUFFER, Attachment, GL_TEXTURE_3D,
                       TextureID, 0, ZSlice);
...render to the slice of the 3D texture...
So you need only one framebuffer, re-attached once for each Z slice of your target 3D texture.
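In loop form, a sketch under assumed names (volumeTex is a 256x256x128 GL_TEXTURE_3D with a color-renderable format; renderSlice draws the object clipped to one slice, e.g. by narrowing the near/far planes):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 256, 256);
for (int z = 0; z < 128; ++z) {
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_3D, volumeTex, 0, z);
    glClear(GL_COLOR_BUFFER_BIT);
    renderSlice(z);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);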

Separate Frame Buffer and Depth Buffer in OpenGL

In DirectX you are able to have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap layer of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses a texture's image for its color buffer and a texture's image for its depth buffer. Nor is there anything stopping you from later (so long as you're not rendering to that FBO while doing this) sampling from the texture as a depth texture. The FBO does not suddenly own these images exclusively or something.
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's Default Framebuffer. The default framebuffer (i.e. the window) is unchangeable. You can't take its depth buffer and use it as a texture. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
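A sketch of that pattern (assumed names: colorTex and depthTex already created via glTexImage2D as GL_RGBA8 and GL_DEPTH_COMPONENT24; the draw calls are placeholders):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
drawFirstPass();    /* fills colorTex and depthTex */

/* Later: depthTex is no longer a render target, so sample it freely. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);
drawSecondPass();   /* shader samples depthTex like any other texture */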
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever required.
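And a minimal MRT sketch (texA and texB are assumed color textures on an already-bound FBO):

GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texA, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, texB, 0);
glDrawBuffers(2, bufs);   /* fragment outputs 0 and 1 go to texA and texB */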

Normal back buffer + render to depth texture in OpenGL (FBOs)

Here's the situation: I have a texture containing some depth values. I want to render some geometry using that texture as the depth buffer, but I want the color to be written to the normal framebuffer (i.e., the window's framebuffer).
How do I do this? I tried creating an FBO and only binding a depth texture to it (GL_DEPTH_COMPONENT) but that didn't work; none of the colors showed up.
No, you can't. The framebuffer you render to is either the main (default) framebuffer or an off-screen FBO; you can't mix attachments between the two.
Instead, I would suggest rendering to a color renderbuffer and then doing a simple blit into the main framebuffer.
Edit-1.
Alternatively, if you already have depth in the main FB, you can first blit your depth and then render to the main FB, saving the video memory of the additional color renderbuffer.
P.S. Blitting is done via glBlitFramebuffer. To make it work, set up the GL_READ_FRAMEBUFFER and GL_DRAW_FRAMEBUFFER bindings, plus glReadBuffer()/glDrawBuffer() for each of them.
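A sketch of that blit (assumed names: fbo holds the rendered color, and both surfaces are width x height):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* 0 = default framebuffer */
glDrawBuffer(GL_BACK);
glBlitFramebuffer(0, 0, width, height,       /* source rectangle */
                  0, 0, width, height,       /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

The depth blit from Edit-1 is the same call with GL_DEPTH_BUFFER_BIT (depth blits require GL_NEAREST) and the read/draw bindings swapped.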