I didn't find any documentation on how integer textures should behave with multisampling.
For example, with 4 sub-samples, if the sub-sample values are 0, 0, 500, and 500, does it resolve to their average, 250?
When applied to integer framebuffer attachments, a multisample resolve operation (as performed by glBlitFramebuffer) simply selects a single sample from within the pixel to represent the entire pixel. Which sample is selected is not specified, so it's up to the implementation.
Basically, there's no real useful purpose to resolving an integer multisample attachment. If you have some multisample integer buffer (such as with deferred rendering), the only useful resolve you're going to get is one you write yourself.
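For illustration, a hand-written resolve can fetch each sample explicitly with texelFetch on a usampler2DMS and combine them however makes sense for the data. A minimal fragment-shader sketch (the names and the choice of max as the combine rule are my assumptions, not something the spec prescribes):

// Hypothetical manual resolve pass: draw a fullscreen triangle into a
// single-sampled GL_R32UI target with this fragment shader (GLSL 4.30).
const char* resolveFrag = R"(
#version 430 core
layout(binding = 0) uniform usampler2DMS msaaTex; // the multisampled integer texture
uniform int sampleCount;                          // e.g. 4
out uint resolved;

void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    uint result = 0u;
    for (int i = 0; i < sampleCount; ++i)
        result = max(result, texelFetch(msaaTex, coord, i).r); // any combine rule you like
    resolved = result;
}
)";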
Related
I want to count the number of fragments on each pixel (with depth test disabled). I have tried enabling blending and setting glBlendFunc(GL_ONE, GL_ONE) to accumulate them. This works just fine with a float32 format texture bound to an FBO, but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task. However, I can't get the expected behavior: it seems each fragment just overwrites the texture. I just wonder if there are other methods to do the accumulation on integer format textures.
However, I can't get the expected behavior. It seems each fragment just overwrites the texture.
That's because the blending stage is not available on pure integer framebuffer formats.
but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task.
Well, is it? What does "intuitive" even mean here? First of all, a GL_R16F format is probably enough for a reasonable amount of overdraw, and it would reduce bandwidth demands a lot (which seems to be the limiting factor for such a pass).
I just wonder if there's other methods to do the accumulation on integer format textures.
I can see two ways. I doubt that either of them is really more "intuitive", but if you absolutely need the result as an integer, you could try these:
Don't use a framebuffer at all, but use image load/store on an unsigned integer texture in the fragment shader. Use atomic operations, in particular imageAtomicAdd, to count the number of fragments at each fragment location (see the sketch after this list). Note that if you go that route, you're outside of the GL's automatic synchronization paths, and you'll have to add an explicit glMemoryBarrier call after that render pass.
You could also just use a standard normalized integer format like GL_R8 (or GL_R16), use blending as before, but have the fragment shader output 1.0/255.0 (or 1.0/65535.0, respectively). The data which ends up in the framebuffer will be integer in the end. If you need this data on the CPU, you can directly read it back; if you need it on the GPU, you can use glTextureView to reinterpret the data as an unnormalized integer texture without a copy/conversion step.
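A minimal sketch of the image load/store route (names like counterTex are hypothetical; assumes GL 4.2+):

// Fragment shader of the counting pass (GLSL):
const char* countFrag = R"(
#version 420 core
layout(binding = 0, r32ui) uniform uimage2D counter; // GL_R32UI texture bound as image

void main() {
    // One atomic increment per fragment, at the fragment's pixel.
    imageAtomicAdd(counter, ivec2(gl_FragCoord.xy), 1u);
}
)";

// Before the pass: bind the GL_R32UI texture to image unit 0.
glBindImageTexture(0, counterTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);

// ... draw the scene ...

// Image stores bypass the GL's automatic synchronization, hence the barrier:
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT); // before sampling the counts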
I have 6 textures.
Some were initialized via :-
void glTexImage2DMultisample( GLenum target,
GLsizei samples,
GLenum internalformat,
GLsizei width,
GLsizei height,
GLboolean fixedsamplelocations);
Some (e.g. for stroke / deferred shading / special effects) were initialized via :-
glTexImage2D( ... );
(Main question) Is it possible to render to all of them at once? I.e. how can I mix MSAA types of render targets?
I have tried that, but I got error 36182 (GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE).
From Multi-sampling Frame Render Objects and Depth Buffers, the error means :-
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE is also returned if the value of
GL_TEXTURE_FIXED_SAMPLE_LOCATIONS is not the same for all attached
textures; or, if the attached images are a mix of renderbuffers and
textures, the value of GL_TEXTURE_FIXED_SAMPLE_LOCATIONS is not
GL_TRUE for all attached textures.
Perhaps such an operation is not allowed? I have searched for some explanation :-
From https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2DMultisample.xhtml :-
fixedsamplelocations = Specifies whether the image will use identical sample locations and the same number of samples for all texels in the image, and the sample locations will not depend on the internal format or size of the image.
https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing says that :-
If the last argument (fixedsamplelocations) is set to GL_TRUE, the image will use identical
sample locations and the same number of subsamples for each texel.
^ Nevertheless, there are a few technical terms that I don't know.
What is a sample location? Do the sample locations mean the location qualifier in a GLSL shader?
I guess it is related to GLsizei samples, e.g. mostly 1 or 4. If it is 4, 1 pixel = 4 samples.
Does the internal format mean something like GL_[components][size][type]?
Why are the sample locations related to the internal format?
Thus I don't understand what fixedsamplelocations actually means.
Many examples use GL_TRUE. When should it be GL_FALSE?
In a single FBO, images are either all multisampled or all not multisampled. To do otherwise (even if the MS images have a sample count of 1) will lead to a framebuffer completeness error. Not only must all images either be multisampled or non-multisampled, all images must also have the same sample count (the sample count of non-multisampled images is 0).
So what you want is not permitted.
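A common workaround, consistent with this rule, is to render everything into multisampled attachments and then resolve into the single-sampled textures with a blit. A rough sketch (the FBO names are placeholders):

// Resolve pass: downsample the MSAA framebuffer into a single-sampled one.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);    // all attachments multisampled
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); // all attachments single-sampled
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

With multiple color attachments, you select each source/destination pair with glReadBuffer/glDrawBuffer and blit them one at a time.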
The stuff on fixed-sample locations is a completely separate question.
Multisample images of a particular sample count have the same number of samples in each pixel. However, when rendering for anti-aliasing purposes, it is a good idea to vary where those samples are in terms of the pixel's area of the screen. This improves the quality of the anti-aliasing produced by multisampling, as it minimizes any coherency between adjacent sample patterns.
If you turn on fixed sample locations, you're forcing the implementation to not do this. So you will get reduced anti-aliasing quality. You would only need to turn this on if you have some shader computation that relies on a consistent location for the samples. This would typically be because you're doing the multisample resolve manually (rather than with a blit).
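For reference, if you do need to know where the samples are, the positions for the currently bound framebuffer can be queried; a small sketch (variable names are mine):

// Query where each sample lies within a pixel (coordinates in [0, 1] per axis).
GLint sampleCount = 0;
glGetIntegerv(GL_SAMPLES, &sampleCount);
for (GLint i = 0; i < sampleCount; ++i) {
    GLfloat pos[2];
    glGetMultisamplefv(GL_SAMPLE_POSITION, i, pos);
    // pos[0], pos[1] are the sample's offsets within the pixel
}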
When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?
GL_TEXTURE_WRAP_S
GL_TEXTURE_WRAP_T
GL_TEXTURE_MIN_FILTER
GL_TEXTURE_MAG_FILTER
It's also redundant to generate mipmaps, right? (Might be a stupid question, but I'm just making sure!)
What about data types (the type parameter)?
Does type have to be GL_FLOAT? If not, what's the difference between specifying type as GL_FLOAT and GL_UNSIGNED_BYTE?
Also, every doc I find on the web regarding glTexImage2D (e.g. https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml) is missing some info (namely the GL_DEPTH_COMPONENT16/24/32 and GL_RGB16 flags; other sources miss a lot more).
Is there a source for complete info on these stuff? (preferably specialized for the render-to-texture technique)
When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?
No. While those parameters can be set in a way that breaks Texture Completeness, they do not affect Framebuffer Object Completeness. No texture or sampling parameters can affect framebuffer completeness.
What about data types (the type parameter)? Does type have to be GL_FLOAT? If not, what's the difference between specifying type as GL_FLOAT and GL_UNSIGNED_BYTE?
Even if you are transferring no bytes of data (ie: passing nullptr), you must provide pixel transfer parameters that are legal values. If you don't, then your call to glTexImage2D will fail with an error.
For example, from the OpenGL Specification version 4.5:
An INVALID_OPERATION error is generated if one of the base internal
format and format is DEPTH_COMPONENT or DEPTH_STENCIL, and the other
is neither of these values.
So your internal format and pixel transfer format must at least be able to talk to one another, even if they're never actually used. So if you're making a depth/stencil texture, you must use GL_DEPTH_STENCIL as the pixel transfer format.
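For example, allocating a depth/stencil texture with no initial data still needs a compatible transfer format and type (sizes here are placeholders):

// Valid: internal format and pixel transfer format are both depth/stencil.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);

// Invalid: a color transfer format with a depth/stencil internal format
// generates GL_INVALID_OPERATION, even though no data is uploaded.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);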
Or you can just stop screwing around with bad APIs and use glTexStorage.
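With immutable storage there are no pixel transfer parameters to get wrong; a sketch of the equivalent allocation (GL 4.2+ or ARB_texture_storage):

// Allocates all storage up front; no format/type arguments to mismatch.
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);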
I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I work on a project where one of the steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: I mean that instead of reading integers, I would like to read float numbers. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it in my C++ file? Is there a way of doing it?
I thought of calling the glReadPixels() function just after having rendered my scene in a buffer, by setting the format argument to GL_RGBA and the data type of the pixel data to GL_FLOAT. But I have the feeling that the values associated with the pixels we read have already been cast to integers in the meanwhile, because the float numbers that I finally get correspond to the interval [0, 255] mapped to [0, 1], without any gain in precision. A closer look at the OpenGL specifications strengthens this idea: I think there is indeed a conversion somewhere between rendering my scene and calling glReadPixels().
Do you have any idea about the way I can reach my objective ?
An ordinary GL_RGBA (i.e. GL_RGBA8) color buffer stores each pixel component as an 8-bit integer, so the shader's float output is quantized when it is written. You should use a floating-point internal format for the attachment, such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component.
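A sketch of the idea: allocate the FBO's color attachment with a float internal format, then read it back as floats (colorTex, width and height are placeholders):

// Color attachment that keeps full float precision instead of 8-bit values.
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// ... render the scene ...

// Now glReadPixels returns the shader's float output unquantized.
std::vector<float> pixels(width * height * 4); // assumes <vector> is included
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());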
For standard OpenGL textures, the filtering state is part of the texture, and must be defined when the texture is created. This leads to code like:
glGenTextures(1,&_texture_id);
glBindTexture(GL_TEXTURE_2D,_texture_id);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexImage2D(...);
This works perfectly. I am trying to make a multisampled texture (for use in an FBO). The code is very similar:
glGenTextures(1,&_texture_id);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,_texture_id);
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D_MULTISAMPLE,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexImage2DMultisample(...);
I am using a debug context, and with this code the first glTexParameterf(...) call causes:
GL_INVALID_ENUM error generated. multisample texture targets doesn't support sampler state
I don't know what this is supposed to mean. Note that multisampled textures only support nearest filtering, and that is exactly what I am specifying. I noticed that for some of the calls (in particular glTexParameterf(...)), GL_TEXTURE_2D_MULTISAMPLE is not a listed input in the documentation (which would indeed explain the invalid enum error, if it's actually invalid rather than just forgotten). However, if it is not accepted, then how am I supposed to set nearest filtering?
You do not need to set nearest filtering because multisample textures are not filtered at all. The specification (section 8.10) does list GL_TEXTURE_2D_MULTISAMPLE as a valid target for glTexParameteri (which you should use instead of glTexParameterf for integer parameters), but lists among possible errors:
An INVALID_ENUM error is generated if target is either TEXTURE_2D_MULTISAMPLE
or TEXTURE_2D_MULTISAMPLE_ARRAY, and pname is any sampler
state from table 23.18.
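Since there is no filtering, a multisample texture is read in the shader by addressing texel/sample pairs explicitly with texelFetch; a minimal sketch (averaging the samples is my choice, any resolve rule works):

// Hypothetical fragment shader reading a multisample color texture.
const char* msaaFetchFrag = R"(
#version 330 core
uniform sampler2DMS msTex;
uniform int sampleCount;
out vec4 color;

void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < sampleCount; ++i)
        sum += texelFetch(msTex, coord, i); // no filtering: pick samples yourself
    color = sum / float(sampleCount);
}
)";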