In my shadow map implementation, I attach a color attachment to my FBO and I don't have any depth attachment. I have the depth test enabled when rendering to that FBO. With this implementation I get shadows correctly. I am wondering how it works without a depth attachment. Can somebody explain this to me?
I use the color attachment to store my depth values:
color = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1.0);
This resembles no depth attachment I have ever seen (I realize you are trying to use a color attachment in its place); you are shoehorning four components into something that only needs one. There is defined behavior for reading from the 2nd, 3rd and 4th components of a depth attachment even though they do not exist, but writing to them is effectively meaningless.
If you intend to use a color texture to serve as a depth texture, then make sure it has high single-component precision (e.g. try something like GL_R32F) and do not waste storage with the 2nd, 3rd and 4th components. Alternatively, you could pack parts of a 32-bit, 24-bit or 16-bit fixed-point value across multiple components and unpack them later - this is pretty unorthodox on anything modern.
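For instance, with a GL_R32F attachment the fragment shader only needs to write a single channel. A minimal sketch (not the asker's shader; the output name is made up):

    // Fragment shader writing depth into a single-channel (GL_R32F) color attachment
    #version 330 core
    out float fragDepthOut;   // hypothetical output name, bound to GL_COLOR_ATTACHMENT0

    void main()
    {
        fragDepthOut = gl_FragCoord.z;   // one component is all that is needed
    }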
I am wondering how it works without depth attachment.
Generally it will not work.
You have just told us you have no depth attachment but the depth test is enabled ... what is that going to accomplish?
You may not want to draw to a depth texture for some reason, but you are, at the very least, going to need a renderbuffer depth attachment so that there is something in the framebuffer to actually test against.
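For reference, a minimal sketch of that kind of setup: a single-channel GL_R32F color texture to receive the depth values, plus a renderbuffer depth attachment so the depth test has a real depth buffer to work against. The names and the size are placeholders, not taken from the question.

    /* Hypothetical shadow-map FBO: GL_R32F color texture stores the depth values,
     * and a depth renderbuffer backs the actual depth test. */
    GLuint fbo, depthColorTex, depthRbo;
    const GLsizei size = 1024;   /* placeholder resolution */

    glGenTextures(1, &depthColorTex);
    glBindTexture(GL_TEXTURE_2D, depthColorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, size, size, 0, GL_RED, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size, size);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, depthColorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle an incomplete framebuffer */
    }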
Related
I have created a shape in my stencil buffer (black in the picture below). Now I would like to render to the backbuffer. I would like one texture on the outer pixels (say 4 pixels) of my stencil shape (red in the picture), and another texture on the remaining inner pixels.
I have read several solutions that involve scaling, but that will not work when there is no obvious center of the shape.
How do I acquire the desired effect?
The stencil buffer works great for operations that only involve the specific fragment being written to a pixel. However, it's not so great for operations that require looking at pixels other than the one corresponding to the fragment being rendered. In order to do outlining, you have to ask about the values of neighboring pixels, which stencil operations don't allow.
So, if it is possible to put the stencil data you want to test against in a non-stencil-format image (i.e. a color image, perhaps with an integer texture format), that would make things much simpler. You can get the effect of stencil discarding by using discard directly in the fragment shader. Since you can fetch arbitrarily from the texture (as long as you're not trying to modify it), you can fetch neighboring pixels and test their values. You can use that to identify when a fragment is near a border, as in the sketch below.
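A rough GLSL sketch of that idea, assuming the shape lives in a single-channel mask texture (1 inside, 0 outside); all names and the border logic are illustrative, not a drop-in implementation:

    #version 330 core

    uniform sampler2D maskTex;      // hypothetical mask: 1.0 inside the shape, 0.0 outside
    uniform sampler2D outlineTex;   // texture for the border pixels
    uniform sampler2D fillTex;      // texture for the interior pixels
    uniform int borderSize;         // outline width in pixels

    in vec2 uv;
    out vec4 color;

    void main()
    {
        // Emulate the stencil test: only shade pixels inside the shape.
        if (texelFetch(maskTex, ivec2(gl_FragCoord.xy), 0).r < 0.5)
            discard;

        // Check whether any pixel within borderSize texels is outside the shape
        // (clamping of the fetch coordinates is omitted for brevity).
        bool nearEdge = false;
        for (int y = -borderSize; y <= borderSize; ++y)
            for (int x = -borderSize; x <= borderSize; ++x)
                if (texelFetch(maskTex, ivec2(gl_FragCoord.xy) + ivec2(x, y), 0).r < 0.5)
                    nearEdge = true;

        color = nearEdge ? texture(outlineTex, uv) : texture(fillTex, uv);
    }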
However, if you're relying on specialized stencil operations to build the stencil data itself (like bitwise operations), then that's more complicated. You will have to employ stencil texturing operations, so you're going to have to render to an FBO texture that has a depth/stencil format. And you'll have to set it up to allow you to read from the stencil aspect of the texture. This is an OpenGL 4.3 feature.
This effectively converts it into an 8-bit unsigned integer texture. That allows you to play whatever games you need to. But if you want to keep using stencil tests to discard fragments while also sampling that stencil data, you will also need texture barrier functionality, which allows you to read from an image that is attached to the current FBO. You don't need to actually issue the barrier, since you should mask off stencil writing; you just need GL 4.5 or the NV/ARB_texture_barrier extension to be available, which they widely are.
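A hedged sketch of that GL 4.3 stencil-texturing setup (identifiers and the size are placeholders):

    /* Combined depth/stencil texture that is rendered to and later sampled. */
    GLuint depthStencilTex;
    const GLsizei width = 1024, height = 1024;   /* placeholder size */

    glGenTextures(1, &depthStencilTex);
    glBindTexture(GL_TEXTURE_2D, depthStencilTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

    /* GL 4.3: sample the stencil aspect instead of the depth aspect. */
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);

    /* Attach it to the FBO being rendered to. */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                           GL_TEXTURE_2D, depthStencilTex, 0);

    /* While sampling it in a later pass with the texture still attached,
     * keep stencil writes masked off, e.g. glStencilMask(0); in GLSL,
     * bind it to a usampler2D, since stencil reads back as unsigned integers. */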
Either way, the biggest difficulty is going to be varying the size of the border. It is easy to test the 3x3 neighborhood around a pixel to see whether it lies at a border. But the larger the border, the larger the neighborhood each fragment has to test. At that point, I would suggest looking for a different solution, one based on some knowledge of the pattern being written into the stencil buffer.
That is, if the rendering operation that lays down the stencil has some knowledge of the shape, then it could compute a distance to the edge of the shape in some way. This might require constructing the geometry in a way that it has distance information in it.
Generally, on modern desktop OpenGL hardware, what is the best way to fill a depth buffer from a compute shader and then use that depth buffer for graphics-pipeline rendering with triangles, etc.?
Specifically, I am wondering about concerns regarding Hi-Z. I also wonder whether it is better to do the compute-shader modifications to the depth buffer before or after the graphics rendering.
If the compute shader runs after the graphics rendering, I assume the depth buffer will typically be decompressed behind the scenes. But I worry that, done the other way around, the depth buffer may be left in a decompressed/non-optimal state for the graphics pipeline.
As far as I know, you cannot bind textures with any of the depth formats as images, and thus cannot write to depth-format textures in compute shaders. See the glBindImageTexture documentation; it lists the formats your texture format must be compatible with. Depth formats are not among them, and the specification says that depth formats are not compatible with the normal color formats.
Texture copying functions have the same compatibility restrictions, so you can't even write to a normal texture in the compute shader and then copy the result into a depth texture. glCopyImageSubData does not explicitly state that restriction, but I haven't tried it, and it only became core in OpenGL 4.3 (ARB_copy_image).
What might work is writing to a normal color texture, then rendering a fullscreen triangle and setting gl_FragDepth to the values read from that texture, but that is an additional fullscreen pass (see the sketch below).
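A sketch of that fullscreen depth-write pass, assuming the compute shader wrote its results into an ordinary single-channel float texture (the uniform name is made up):

    #version 430 core

    uniform sampler2D computedDepth;   // hypothetical R32F texture filled by the compute shader

    void main()
    {
        // Copy the compute-shader result into the real depth buffer.
        gl_FragDepth = texelFetch(computedDepth, ivec2(gl_FragCoord.xy), 0).r;
    }

For that pass you would typically disable color writes and set the depth function to GL_ALWAYS so every value gets written.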
I don't quite understand your second question: if your compute shader modifies the depth buffer, the result will most likely differ depending on whether you do it before or after the regular rendering, because different parts will be visible or occluded.
But maybe that question is moot, since it seems you cannot manually write into depth buffers at all, which might also answer your third question: by not writing into depth buffers, you cannot mess with their compression :)
Please note that I'm no expert in this; I had a similar problem and looked at the docs/spec myself, so all of this might be wrong :) Please let me know if you manage to write to depth buffers from compute shaders!
In WebGL, is it possible to write to the fragment's depth value or control the fragment's depth value in some other way?
As far as I could find, gl_FragDepth is not present in WebGL 1.x, but I am wondering if there is any other way (extensions, browser-specific support, etc.) to do it.
What I want to achieve is to have a ray-traced object play along with other elements drawn using the usual model, view, and projection matrices.
There is the extension EXT_frag_depth.
Because it's an extension, it might not be available everywhere, so you need to check that it exists.
var isFragDepthAvailable = gl.getExtension("EXT_frag_depth");
If isFragDepthAvailable is not falsy, then you can enable it in your shaders with
#extension GL_EXT_frag_depth : enable
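For example, a minimal WebGL 1 fragment-shader sketch using it (the uniform name is made up):

    #extension GL_EXT_frag_depth : enable
    precision mediump float;

    uniform float u_rayDepth;   // hypothetical depth computed by the ray tracer, in [0, 1]

    void main() {
        gl_FragColor = vec4(1.0);        // whatever color your ray hit produced
        gl_FragDepthEXT = u_rayDepth;    // write a custom depth for this fragment
    }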
Otherwise you can manipulate gl_Position.z in your vertex shader though I suspect that's not really a viable solution for most needs.
Brad Larson has a clever workaround for this that he uses in Molecules (full blog post):
To work around this, I implemented my own custom depth buffer using a frame buffer object that was bound to a texture the size of the screen. For each frame, I first do a rendering pass where the only value that is output is a color value corresponding to the depth at that point. In order to handle multiple overlapping objects that might write to the same fragment, I enable color blending and use the GL_MIN_EXT blending equation. This means that the color components used for that fragment (R, G, and B) are the minimum of all the components that objects have tried to write to that fragment (in my coordinate system, a depth of 0.0 is near the viewer, and 1.0 is far away). In order to increase the precision of depth values written to this texture, I encode depth to color in such a way that as depth values increase, red fills up first, then green, and finally blue. This gives me 768 depth levels, which works reasonably well.
EDIT: Note that WebGL 1 does not support MIN blending in core; it is only exposed through the EXT_blend_minmax extension, so this approach may not be usable everywhere. Sorry.
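For what it's worth, the "red fills up first, then green, then blue" encoding can be sketched like this in GLSL; this is my own reconstruction of the idea, not Brad Larson's actual code. Each channel is monotonic in depth, which is what makes component-wise MIN blending equivalent to keeping the nearest depth:

    // Encode a depth in [0,1] across R, then G, then B (~768 levels total).
    vec3 encodeDepthRGB(float depth)
    {
        float level = depth * 3.0;
        return vec3(clamp(level,       0.0, 1.0),   // red fills up first
                    clamp(level - 1.0, 0.0, 1.0),   // then green
                    clamp(level - 2.0, 0.0, 1.0));  // finally blue
    }

    // Recover the depth by summing the channels back.
    float decodeDepthRGB(vec3 rgb)
    {
        return (rgb.r + rgb.g + rgb.b) / 3.0;
    }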
I am trying to write a program that renders video camera frames onto a quad.
I saw tutorials explaining that this can be done faster with framebuffers, but I'm still learning how to do it.
But then, besides framebuffers, I found that there are also renderbuffers.
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I understand that renderbuffers are for depth testing, which I think only checks the Z position of each pixel, so it seems silly to have to create a renderbuffer for my scenario, correct?
A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc all go into a framebuffer object.
A renderbuffer is like a texture, but with two important differences:
It is always 2D and has no mipmaps. So it's always exactly 1 image.
You cannot read from a renderbuffer. You can attach them to an FBO and render to them, but you can't sample from them with a texture access or something.
So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is simply a common use case for them: if you're rendering the colors to a texture, you usually don't care about the depth values afterwards. You need a depth buffer because depth testing is needed for hidden-surface removal, but you never need to sample that depth. So instead of making a depth texture, you make a depth renderbuffer.
But renderbuffers can also use colors rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
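For example, a color renderbuffer might be set up and read back like this (a sketch; the size is a placeholder):

    /* Color renderbuffer attached to an FBO; it can be read back or blitted,
     * but never sampled from a shader. */
    GLuint fbo, colorRbo;
    const GLsizei w = 640, h = 480;   /* placeholder size */

    glGenRenderbuffers(1, &colorRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRbo);

    /* ... render something ... */

    GLubyte pixel[4];
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);   /* read one pixel back */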
Oddly enough, this does nothing to answer your question:
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I don't see why you need a framebuffer or a renderbuffer of any kind. A texture is a texture; just draw a textured quad.
I am very interested in understanding how multisampling works. I have found plenty of literature on how to enable and use it, but very little about what it actually does in order to achieve an antialiased rendering. What I have found in many places is conflicting information that only confused me more.
Please note that I know how to enable and use multisampling (I actually already use it), what I don't know is what kind of data really gets into the multisampled renderbuffers/textures, and how this data is used in the rendering pipeline.
I can understand very well how supersampling works, but multisampling still has some obscure areas that I would like to understand.
Here is what the spec says (OpenGL 4.2):
Pixel sample values, including color, depth, and stencil values, are stored in this buffer (the multisample buffer). Samples contain separate color values for each fragment color.
...
During multisample rendering the contents of a pixel fragment are changed in two ways. First, each fragment includes a coverage value with SAMPLES bits.
...
Second, each fragment includes SAMPLES depth values and sets of associated data, instead of the single depth value and set of associated data that is maintained in single-sample rendering mode.
So, each sample contains a distinct color, coverage bit, and depth. What's the difference from normal supersampling? It seems like "weighted" supersampling to me, where each final pixel value is determined by the coverage value of its samples instead of a simple average, but I am very unsure about this. And what about texture coordinates at the sample level?
If I store, say, normals in a RGBF multisampled texture, will I read them back "antialiased" (that is, approaching 0) on the edges of a polygon?
A fragment shader is invoked once per fragment, unless it uses gl_SampleID or gl_SamplePosition, or has a 'sample' storage qualifier on an input. How can a fragment shader be invoked only once per fragment and still produce an antialiased rendering?
OpenGL on Silicon Graphics Systems:
http://www-f9.ijs.si/~matevz/docs/007-2392-003/sgi_html/ch09.html#LE68984-PARENT
mentions: When you use multisampling and read back color, you get the resolved color value (that is, the average of the samples). When you read back stencil or depth, you typically get back a single sample value rather than the average. This sample value is typically the one closest to the center of the pixel.
And there's this technical spec (1994) from the OpenGL registry. It explains in full detail what is done if MULTISAMPLE_SGIS is enabled: http://opengl.org/registry/specs/SGIS/multisample.txt
See also this related question: How are depth values resolved in OpenGL textures when multisampling?
And the answers to this question, where GL_MULTISAMPLE_ARB is recommended: "where is GL_MULTISAMPLE defined?" The spec for GL_MULTISAMPLE_ARB (2002) is here: http://www.opengl.org/registry/specs/ARB/multisample.txt
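To make the per-sample storage concrete for the question about reading back normals: sampling a multisampled texture in a shader does not give you a resolved (averaged) value; you fetch each sample explicitly and resolve it yourself. A hedged GL 3.2+ GLSL sketch, with made-up names:

    #version 150 core

    uniform sampler2DMS normalTex;   // hypothetical multisampled normal buffer
    uniform int sampleCount;         // number of samples in that texture

    out vec4 color;

    void main()
    {
        // Average the per-sample normals manually; nothing is resolved for you.
        vec3 n = vec3(0.0);
        for (int i = 0; i < sampleCount; ++i)
            n += texelFetch(normalTex, ivec2(gl_FragCoord.xy), i).xyz;
        color = vec4(n / float(sampleCount), 1.0);
    }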