I'm currently developing an emulator, and some games use GL_DEPTH24_STENCIL8 (D24S8) as the depth buffer format. The depth buffer itself works fine, but the underlying texture never seems to be written. I've checked everything I could think of: the write mask, the attachment being depth/stencil, etc. Still it won't write the underlying texture. Interestingly, while rendering with this depth buffer the depth test works; what doesn't work is writing the underlying texture. NSight treats my texture as if it were two textures (you can see it in the pics): one is allocated as the depth buffer, which is correct, and the other is the one sampled later in the pixel shader. For some reason the latter is never written. Things I checked:
Framebuffer has the texture attached as a Depth Stencil Attachment.
Depth Write is on when it's supposed to be.
There are no copies or re-uploads of the texture.
The stencil mask is set correctly; every operation is GL_KEEP, and the stencil test is disabled.
Some extra info:
I used immutable storage to allocate the depth/stencil buffer.
Formats set to GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8 (internal format, format, type).
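For reference, a minimal sketch of that allocation and attachment path, assuming a plain 2D texture (the names depthStencilTex, fbo, width and height are placeholders):

/* Immutable storage for the depth/stencil texture (GL 4.2+ / ARB_texture_storage). */
GLuint depthStencilTex;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);

/* Attach it to the FBO as a combined depth/stencil attachment. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, depthStencilTex, 0);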
Things I've tried:
Flushing after the draw and re-uploading; it didn't work, the texture read back as all zeroes.
glCopyImageSubData to another texture and using that one instead (sketched after this list); didn't work either.
Changing the format to D32F doesn't work either; NSight shows the writes happening, but RenderDoc doesn't.
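For clarity, that glCopyImageSubData attempt looked roughly like this (texture names are placeholders; both textures have the same format and size):

/* Copy the whole depth/stencil image into a second texture. */
glCopyImageSubData(depthStencilTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   copyTex,         GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);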
Related
I want to pass a matrix of depth values into the z-buffer of OpenGL. Somewhere I found that I can use:
glDrawPixels(640, 480, GL_DEPTH_COMPONENT, GL_FLOAT, normalizedMappedDepthMat.ptr());
...where mat is an OpenCV Mat. Is it possible to change the z-buffer values in OpenGL using a texture binding? If so, how?
With the programmable pipeline, you can write to gl_FragDepth in the fragment shader, effectively setting a per-pixel depth value. With that, you can implement a render-to-depth-buffer pass quite easily by rendering a full-screen quad (or something smaller, if you only want to overwrite a sub-region of the buffer). With reasonably modern GL, you can use single-channel texture formats with enough precision, such as GL_R32F. With older GL versions, you can manually combine the RGB or RGBA channels of standard 8-bit textures into 24- or 32-bit values.
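As a sketch of that idea, here is a fragment shader (written as a GLSL string in C) that copies a GL_R32F texture straight into the depth buffer; the uniform and varying names are made up for this example:

/* Fragment shader: one depth texel per fragment of the full-screen quad. */
static const char *depth_write_fs =
    "#version 330 core\n"
    "uniform sampler2D uDepthTex;  // GL_R32F source texture\n"
    "in vec2 vUV;                  // interpolated quad coordinates\n"
    "void main() {\n"
    "    gl_FragDepth = texture(uDepthTex, vUV).r;\n"
    "}\n";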
However, there are some little details you have to take into account. Writing to the depth buffer only occurs if GL_DEPTH_TEST is enabled. That, of course, might discard some of your fragments (if the depth buffer was not cleared beforehand). One way around this is to set glDepthFunc() to GL_ALWAYS while rendering to the depth buffer.
You must also keep in mind that rendering writes to all buffers, not just the depth buffer. If you don't want to modify the color buffer, you can set glDrawBuffer() to GL_NONE, or use glColorMask() to prevent overwriting it. If you use a stencil buffer, you should of course also disable or mask out writes to it.
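Putting those details together, the state setup around the depth-writing pass might look like this (a sketch, not a complete program):

/* Depth test must be enabled for depth writes; GL_ALWAYS keeps every fragment. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glDepthMask(GL_TRUE);

/* Don't touch the color buffer... */
glDrawBuffer(GL_NONE);  /* or glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); */

/* ...and mask out stencil writes, if a stencil buffer is present. */
glStencilMask(0);

/* Draw the full-screen quad here, then restore the previous state. */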
I succeeded in rendering to a texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for the color buffer: a Renderbuffer. I searched a lot on the internet, but couldn't find any example of drawing a Renderbuffer the way a Texturebuffer is drawn with shaders.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
Can I use Renderbuffer as Texturebuffer? (Stupid question, huh? I think the answer should absolutely be yes, shouldn't it?)
If yes, please point me to or give an example of drawing a renderbuffer as a texture buffer.
My goal is just study, but I'd like to know: is that a better way to draw textures? Should we use it frequently?
First of all, don't use the term "texture buffer" when you really just mean texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
No. Renderbuffers have been there since FBOs were first introduced. One being faster than the other is not generally true either; these are implementation details. But that is also irrelevant here.
Can I use Renderbuffer as Texturebuffer? (Stupid question, huh? I think the answer should absolutely be yes, shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not in the format required for texturing. You can read the results back to the CPU using glReadPixels, or you can copy the data into a texture object, e.g. via glCopyTexSubImage2D, but that would be much slower than rendering directly into a texture.
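That copy path, sketched out (assumes fbo has a color renderbuffer at GL_COLOR_ATTACHMENT0 and tex is an already-allocated 2D texture of matching size):

/* Read from the renderbuffer-backed FBO... */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);

/* ...and copy the rectangle into the texture. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);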
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file, or encoded to a video)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as intermediate buffers when the image data can't be directly used by the following steps, e.g. when using multisampling and then copying the result to a non-multisampled framebuffer or texture
It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
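Concretely, the two attachment styles side by side (a sketch; assumes the target FBO is currently bound to GL_FRAMEBUFFER, and w, h and the object names are placeholders):

/* Depth you never read back: a renderbuffer is fine. */
GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

/* Color you want to sample in a shader later: attach a texture instead. */
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);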
Performance wise, do not make any assumptions. You might think that since renderbuffers have a lot fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.
I'm using an FBO with color, depth, and stencil attached to implement deferred shading.
Here is what I'm doing:
I create an FBO with color, depth, and stencil attached, then render to it
I blit the stencil to the back buffer's stencil buffer
I render the final pass (using stencil tests)
Is there a way I can avoid step #2?
That is, can I just "reuse" the same stencil from step #1 in step #3 directly? I tried creating a second FBO with only a stencil attached, but that didn't work; I assume because binding an FBO disables all writes to the color back buffer.
Unfortunately, the way the GL API implements FBOs enforces this extra blit step, although most real-world hardware could probably do without it. It would have been better, IMHO, if they had not made the default framebuffer a special FBO 0, but instead exposed special renderbuffers for the default color, depth, and stencil buffers, so that one could mix them into an FBO (I think D3D allows this, but I'm not sure). But currently, I'm not aware of any GL feature/extension that would allow such a thing.
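For reference, step #2 as code: a stencil blit from the FBO to the default framebuffer (a sketch; the two framebuffers must have compatible stencil formats, and stencil blits require GL_NEAREST):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* default framebuffer */
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                  GL_STENCIL_BUFFER_BIT, GL_NEAREST);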
In DirectX you are able to have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap layer of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses a texture's image for its color buffer and a texture's image for its depth buffer. Nor is there anything stopping you from later (so long as you're not rendering to that FBO while doing this) sampling from the texture as a depth texture. The FBO does not suddenly own these images exclusively or something.
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's default framebuffer. The default framebuffer (i.e. the window) is unchangeable. You can't take its depth buffer and use it as a texture or something. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
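A sketch of the DirectX-style workflow from the question, done with an FBO and a depth texture (the object names and the two draw helpers are hypothetical):

/* Pass 1: render into the FBO; depthTex is attached as its depth buffer. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
drawScene();                             /* hypothetical helper */

/* Pass 2: render to the default framebuffer, sampling that same depth texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);  /* now sampled as a depth texture */
drawEffectUsingDepth();                  /* hypothetical helper */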
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever required.
I am trying to write a program that renders video camera frames onto a quad.
I saw tutorials explaining that doing it with framebuffers can be faster, but I'm still learning how to do it.
But then, besides the framebuffer, I found that there are also renderbuffers.
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I understand that renderbuffers are for depth testing, which I think only checks the Z position of each pixel, so it would seem silly to have to create a renderbuffer for my scenario, correct?
A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc all go into a framebuffer object.
A renderbuffer is like a texture, but with two important differences:
It is always 2D and has no mipmaps. So it's always exactly 1 image.
You cannot read from a renderbuffer. You can attach them to an FBO and render to them, but you can't sample from them with a texture access or something.
So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is a common use case for renderbuffers, because if you're rendering the colors to a texture, you usually don't care about the depth values. You need a depth buffer because you need depth testing for hidden-surface removal, but you don't need to sample from that depth afterwards. So instead of making a depth texture, you make a depth renderbuffer.
But renderbuffers can also use colors rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
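For example, reading a color renderbuffer back to the CPU with glReadPixels (a sketch; assumes fbo has the renderbuffer at GL_COLOR_ATTACHMENT0 and pixels points to a large enough buffer):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);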
Oddly enough, this does nothing to answer your question:
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I don't see why you need a framebuffer or a renderbuffer of any kind. A texture is a texture; just draw a textured quad.
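That is, per frame, something as simple as this is enough (a sketch; the quad VAO, shader program, and texture setup are assumed to exist elsewhere):

/* Upload the new camera frame into the existing texture... */
glBindTexture(GL_TEXTURE_2D, cameraTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
                GL_RGB, GL_UNSIGNED_BYTE, framePixels);

/* ...and draw the textured quad straight to the screen. */
glUseProgram(quadProgram);
glBindVertexArray(quadVao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);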