I have 6 textures.
Some were initialized via :-
void glTexImage2DMultisample( GLenum target,
GLsizei samples,
GLenum internalformat,
GLsizei width,
GLsizei height,
GLboolean fixedsamplelocations);
Some (e.g. for stroke / deferred shading / special effects) were initialized via :-
glTexImage2D( ... );
(Main question) Is it possible to render to all of them at once, i.e. how do I mix MSAA types of render targets?
I have tried that, but I got error 36182.
From Multi-sampling Frame Render Objects and Depth Buffers, the error means :-
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE is also returned if the value of
GL_TEXTURE_FIXED_SAMPLE_LOCATIONS is not the same for all attached
textures; or, if the attached images are a mix of renderbuffers and
textures, the value of GL_TEXTURE_FIXED_SAMPLE_LOCATIONS is not
GL_TRUE for all attached textures.
Perhaps such an operation is not allowed? I have searched for some explanation :-
From https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2DMultisample.xhtml :-
fixedsamplelocations = Specifies whether the image will use identical sample locations and the same number of samples for all texels in the image, and the sample locations will not depend on the internal format or size of the image.
https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing says that :-
If the last argument (fixedsamplelocations) is set to GL_TRUE, the image will use identical
sample locations and the same number of subsamples for each texel.
Nevertheless, there are a few technical terms that I don't know.
What is a sample location? Do the sample locations mean the location qualifier in a GLSL shader?
I guess it is related to GLsizei samples, e.g. usually 1 or 4. If it is 4, 1 pixel = 4 samples.
Does the internal format mean something like GL_[components][size][type]?
Why are the sample locations related to the internal format?
Thus, I don't understand what fixedsamplelocations actually means.
Many examples use GL_TRUE. When should it be GL_FALSE?
In a single FBO, images are either all multisampled or all not multisampled. To do otherwise (even if the MS images have a sample count of 1) will lead to a framebuffer completeness error. Not only must all images either be multisampled or non-multisampled, all images must also have the same sample count (the sample count of non-multisampled images is 0).
So what you want is not permitted.
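For illustration, here is a minimal sketch (sample count, size, and formats are placeholders) of an FBO whose attachments are all multisampled with the same sample count and the same fixedsamplelocations value, which is what completeness requires:
// Sketch: every attachment shares the same sample count and
// fixedsamplelocations value; mixing GL_TEXTURE_2D and
// GL_TEXTURE_2D_MULTISAMPLE attachments would make the FBO incomplete.
const GLsizei samples = 4, width = 1280, height = 720;

GLuint colorTex, depthTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, colorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                        width, height, GL_TRUE);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, depthTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_DEPTH_COMPONENT24,
                        width, height, GL_TRUE);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D_MULTISAMPLE, depthTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // 36182 (0x8D56) is GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE
}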
The stuff on fixed-sample locations is a completely separate question.
Multisample images of a particular sample count have the same number of samples in each pixel. However, when rendering for anti-aliasing purposes, it is a good idea to vary where those samples are in terms of the pixel's area of the screen. This improves the quality of the anti-aliasing produced by multisampling, as it minimizes any coherency between adjacent sample patterns.
If you turn on fixed sample locations, you're forcing the implementation to not do this. So you will get reduced anti-aliasing quality. You would only need to turn this on if you have some shader computation that relies on a consistent location for the samples. This would typically be because you're doing the multisample resolve manually (rather than with a blit).
What I want to do
I want to have a set of triangles bleed through, or rather ignore the depth buffer, for another set of triangles, but only if they have the same number.
Problem (optional reading)
I do not know how to do this without introducing a ton of bubbles into the pipeline. Right now I have very high throughput because I can throw my geometry onto the GPU, tell it to render, and forget about it. However, if I have to keep toggling the state when drawing, I'm worried I'm going to tank my performance. Other people who have done what I've just said (doing a ton of draw calls and state changes) have much worse performance than me. This performance hit is also significantly worse on older hardware, where we are talking on order of 50 - 100+ times performance loss by doing it the state-change way.
Unfortunately this triangle bleeding scenario happens many thousands of times, so the state machine will be getting flooded with "draw triangles, depth off, draw triangles that bleed through, depth on, ...", except N times, where N can get large (N >= 1000).
A good way of imagining this is having a set of triangles T_i, and a set of triangles that bleed through B_i where B_i only bleeds through T_i, and i ranges from 0...1000+. Note that if we are drawing B_100, then it should only bleed through T_100, not T_99 or T_101.
My next thought is to draw all the triangles into one framebuffer (along with their integer), then draw the bleed-through triangles into another framebuffer (also with their integer), and then merge these framebuffers together. I figure they will have the color, depth, and the integer, so I can hopefully merge them in the fragment shader.
Problem is, I have no idea how to write an integer alongside the out vec4 fragColor in the fragment shader.
Questions (and in short)
This leaves me with two questions:
How do I write an integer into a framebuffer? Do I need to write to 4 separate texture framebuffers? (like one color/depth framebuffer texture, another integer framebuffer texture, and then double this so I can merge the pairs of framebuffers together at some point?)
To make this more clear, the algorithm would look like
1. Render all the 'could be bled from' triangles, described above as set T_i; write colors and depth info into FB1, and write integers into FB2.
2. Render all the 'bleeding' triangles, described above as set B_i; write colors and depth into FB3, and write integers into FB4.
3. Bind the textures for FB1, FB2, FB3, FB4.
4. Render each pixel by sampling the RGBA, depth, and integers from the appropriate texture, and write those out into the final framebuffer.
I would need to access the color and depth from the textures in the shader. I would also need to access the integer from the other texture. Then I can do the comparison and choose which pixel to write to the default framebuffer.
Is this idea possible? I assume if (1) is, then the answer is yes. Maybe another question could be whether there's a better way. I tried thinking of doing this with the stencil buffer but had no luck.
What you want is theoretically possible, but I can't speak as to its performance. You'll be reading and writing a whole lot of texels in a lot of textures for every program iteration.
Anyway, to answer your questions:
A framebuffer can have multiple color attachments by using glFramebufferTexture2D with GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, etc. Each texture can then have its own internal format; in your example you probably want a regular RGB texture for your color output and a second, integer-only texture.
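As a rough sketch (texture names, sizes, and formats are placeholders), attaching a color texture and a separate integer texture to one FBO and routing two fragment outputs to them could look like this:
GLuint fbo, colorTex, idTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color output: regular RGBA texture on attachment 0.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Integer output: single-channel integer texture on attachment 1.
glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, width, height, 0,
             GL_RED_INTEGER, GL_INT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // integer textures must not be filtered
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, idTex, 0);

// Route fragment shader outputs 0 and 1 to the two attachments.
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

// Matching fragment shader outputs (GLSL):
//   layout(location = 0) out vec4 fragColor;
//   layout(location = 1) out int  fragId;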
Your depth buffer is complicated, because you don't want to let OpenGL handle it as normal. If you want to take over the depth buffer, you probably want to attach it as yet another float texture that you can check your screen-space fragments against (or not).
If you have doubts about your shader, remember that you can bind any number of textures as input samplers in code, and each color attachment gets its own output value (your shader runs per fragment, so you output one value at a time). Make sure the format of your output is correct, i.e. vec3/vec4 for the color buffer, int for your integer buffer, and float for the float buffer.
And stencil buffers won't help you turn depth checking on or off in a single (possibly indirect) draw call. I can't visualize what your bleeding thing means, but it can probably help with that? Maybe? But definitely not conditional depth checking.
I have two 2D textures. The first, an MSAA texture, uses a target of GL_TEXTURE_2D_MULTISAMPLE. The second, non MSAA texture, uses a target of GL_TEXTURE_2D.
According to OpenGL's spec on ARB_texture_multisample, only GL_NEAREST is a valid filtering option when the MSAA texture is being drawn to.
In this case, both of these textures are attached to GL_COLOR_ATTACHMENT0 via their individual Framebuffer objects. Their resolutions are also the same size (to my knowledge this is necessary when blitting an MSAA to non-MSAA).
So, given the current constraints, if I blit the MSAA holding FBO to the non-MSAA holding FBO, do I still need to use GL_NEAREST as the filtering option, or is GL_LINEAR valid, since both textures have already been rendered to?
The filtering options only come into play when you sample from the textures. They play no role while you render to the texture.
When sampling from multisample textures, GL_NEAREST is indeed the only supported filter option. You also need to use a special sampler type (sampler2DMS) in the GLSL code, with corresponding sampling instructions.
I actually can't find anything in the spec saying that setting the filter to GL_LINEAR for multisample textures is an error. But the filter is not used at all. From the OpenGL 4.5 spec (emphasis added):
When a multisample texture is accessed in a shader, the access takes one vector of integers describing which texel to fetch and an integer corresponding to the sample numbers described in section 14.3.1 determining which sample within the texel to fetch. No standard sampling instructions are allowed on the multisample texture targets, and no filtering is performed by the fetch.
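As an illustration only (uniform names and the fullscreen-pass setup are assumptions), a manual resolve therefore fetches every sample of a texel explicitly with texelFetch and averages them, since no filtering is available:
// Sketch of a manual resolve pass: a multisample texture can only be
// read with texelFetch (no filtering), one sample at a time.
const char* resolveFragmentShader = R"(
    #version 330 core
    uniform sampler2DMS msColor;   // the multisample color texture
    uniform int sampleCount;       // must match the texture's sample count
    out vec4 fragColor;
    void main() {
        ivec2 texel = ivec2(gl_FragCoord.xy);    // fullscreen pass, same size as the texture
        vec4 sum = vec4(0.0);
        for (int i = 0; i < sampleCount; ++i)
            sum += texelFetch(msColor, texel, i); // third argument selects the sample
        fragColor = sum / float(sampleCount);
    }
)";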
For blitting between multisample and non-multisample textures with glBlitFramebuffer(), the filter argument can be either GL_LINEAR or GL_NEAREST, but it is ignored in this case. From the 4.5 spec:
If the read framebuffer is multisampled (its effective value of SAMPLE_BUFFERS is one) and the draw framebuffer is not (its value of SAMPLE_BUFFERS is zero), the samples corresponding to each pixel location in the source are converted to a single sample before being written to the destination. filter is ignored.
This makes sense because there is a restriction in this case that the source and destination rectangle need to be the same size:
An INVALID_OPERATION error is generated if either the read or draw framebuffer is multisampled, and the dimensions of the source and destination rectangles provided to BlitFramebuffer are not identical.
Since the filter is only applied when the image is stretched, it does not matter in this case.
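For example, a typical resolve blit (FBO names and dimensions are placeholders) might look like this; the rectangles have to be the same size, so the filter argument has no effect:
// Resolve the multisampled FBO into the single-sample FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
// Rectangles must match in size when either framebuffer is multisampled;
// GL_NEAREST or GL_LINEAR both work here, and the filter is ignored.
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);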
I want to determine the size (width, height) of a framebuffer object.
I created a framebuffer object via
// create the FBO.
glGenFramebuffers(1, &fboId);
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
Is this possible, or do I have to store the size of the color attachment in an external variable to know the size of the FBO later?
Your question is somewhat confused, as you ask for two different things.
Here's the easy question:
How can I get the size of the first color attachment given only the framebuffer object id (fboId)?
That's simple: get the texture/renderbuffer attached to that attachment, get what mipmap level and array layer is attached, then query the texture/renderbuffer for how big it is.
The first two steps are done with glGetFramebufferAttachmentParameter (note the key word "Attachment") for GL_COLOR_ATTACHMENT0. You query the GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE to get whether it's a renderbuffer or a texture. You can get the renderbuffer/texture name with GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME.
If the object is a renderbuffer, you can then bind the renderbuffer and use glGetRenderbufferParameter to fetch the renderbuffer's GL_RENDERBUFFER_WIDTH and GL_RENDERBUFFER_HEIGHT.
If the object is a texture, you'll need to do more work. You need to query the attachment parameter GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL to get the mipmap level.
Of course, now you need to know how to bind it. If you're using OpenGL versions before 4.4 or without certain extensions, then this is complicated. See, you need to know which texture target type to use. Silly and annoying as this may seem, the only way to determine the target from just the texture object name is to... try everything. Go through each target and check glGetError. The one for which GL_INVALID_OPERATION isn't returned is the right one.
If you have GL 4.4 or ARB_multi_bind available, you can just use glBindTextures (note the "s"), which does not require that you specify the target. And if you have 4.5 or ARB_direct_state_access, you don't need to bind the texture at all. The DSA-style functions don't need the texture target, and DSA also provides glBindTextureUnit, which binds a texture to its natural internal target.
Once you have the texture bound and its mipmap level, you use glGetTexLevelParameter to query the GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT for that level.
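Putting those steps together, here is a sketch for the simple case (assuming the attachment is a plain GL_TEXTURE_2D, so the target-guessing dance described above is skipped):
GLint objType = GL_NONE, objName = 0, level = 0, width = 0, height = 0;
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &objType);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &objName);

if (objType == GL_RENDERBUFFER) {
    glBindRenderbuffer(GL_RENDERBUFFER, objName);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH,  &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
} else if (objType == GL_TEXTURE) {
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL, &level);
    glBindTexture(GL_TEXTURE_2D, objName);   // assumption: the target is GL_TEXTURE_2D
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH,  &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &height);
}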
Now, that's the easy problem. The hard problem is what your title asks:
I want to determine the size (width, height) of a framebuffer object.
The size of the renderable area of an FBO is not the same as the size of GL_COLOR_ATTACHMENT0. The renderable area of an FBO is the intersection of all of the sizes of all of the images attached to the FBO.
Unless you have special knowledge of this FBO, you can't assume that the FBO contains only one image or that all of the images have the same size (and if you have special knowledge of the FBO, then quite frankly you should also have special knowledge of how big it is). So you'll need to repeat the above procedure for every attachment (if the type is GL_NONE, then nothing is attached). Then take the intersection of the returned values (ie: the smallest width and height).
In general, you shouldn't have to ask an FBO that you created how big it is. Just as you don't have to ask textures how big they are. You made them; by definition, you know how big they are. You put them in the FBO, so again you know how big it is.
I have trouble using textures in OpenGL (2D only).
I want to show a level (size 15x15), and every field can have a different value/picture.
I have about 100 different field types, and according to the level design I have to display a different image for every type.
If I use a single TGA image for every possible field (100 files), everything works fine,
but now I have put all the images together in one file and, depending on the field type, I display a different part of the image.
The problem: sometimes thin lines in the images aren't displayed, and between the different sprites there are often black or gray lines, which makes the whole graphic look ugly.
Is there a way to solve this problem easily?
(Maybe I have to load the TGA image into one GLuint and then split it up into 100 different GLuints? Or is there a setting to improve the rendering? Or do I have to change the resolution of the TGA image file itself?)
Here's a part of my image file; every element has a resolution of 220x220 pixels, so the whole picture is 2200x2200 pixels.
And this is the OpenGL output:
I really don't want to have hundreds of image files, especially because loading all these files takes a lot of time, and I'm sure there's a solution out there.
EDIT:
I'm using the following settings:
// Specify filtering and edge actions
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
EDIT2:
Now I have changed GL_LINEAR to GL_NEAREST and added a half texel to the texture coordinates; there are no lines between the images anymore :)
But I still have the other problem:
Small elements (lines) in the image aren't displayed correctly.
This is how it should look:
"Keep in mind, that OpenGL samples textures at the texel centers"
Have a look at the answer here:
OpenGL ES Texture Coordinates Slightly Off
You need to add padding pixels to your textures to avoid lines along their edges, so that when OpenGL samples pixels from the texture being drawn (including mipmap levels if you render it at a smaller size than it is in your files), no wrongly colored pixels are used.
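As a sketch of the half-texel approach mentioned in the edit (the helper function and the 10x10 atlas layout are assumptions based on the question), the tile coordinates can be inset by half a texel so that filtering never samples across a tile border:
// Atlas: 10 x 10 tiles of 220 x 220 pixels in a 2200 x 2200 texture.
// Insetting each tile by half a texel keeps samples away from its neighbours.
void tileUV(int tileIndex, float* u0, float* v0, float* u1, float* v1)
{
    const float atlasSize = 2200.0f;
    const float tileSize  = 220.0f;
    const float half      = 0.5f / atlasSize;   // half a texel in UV space

    int col = tileIndex % 10;
    int row = tileIndex / 10;

    *u0 = (col * tileSize) / atlasSize + half;
    *v0 = (row * tileSize) / atlasSize + half;
    *u1 = ((col + 1) * tileSize) / atlasSize - half;
    *v1 = ((row + 1) * tileSize) / atlasSize - half;
}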
I am very interested in understanding how multisampling works. I have found plenty of literature on how to enable or use it, but very little information concerning what it really does in order to achieve an antialiased rendering. What I have found, in many places, is conflicting information that only confused me more.
Please note that I know how to enable and use multisampling (I actually already use it), what I don't know is what kind of data really gets into the multisampled renderbuffers/textures, and how this data is used in the rendering pipeline.
I can understand very well how supersampling works, but multisampling still has some obscure areas that I would like to understand.
Here is what the specs say (OpenGL 4.2):
Pixel sample values, including color, depth, and stencil values, are stored in this
buffer (the multisample buffer). Samples contain separate color values for each fragment color.
...
During multisample rendering the contents of a pixel fragment are changed
in two ways. First, each fragment includes a coverage value with SAMPLES bits.
...
Second, each fragment includes SAMPLES depth values and sets of associated
data, instead of the single depth value and set of associated data that is maintained
in single-sample rendering mode.
So, each sample contains a distinct color, coverage bit, and depth. What's the difference from normal supersampling? It seems like a "weighted" supersampling to me, where each final pixel value is determined by the coverage value of its samples instead of a simple average, but I am very unsure about this. And what about texture coordinates at sample level?
If I store, say, normals in a RGBF multisampled texture, will I read them back "antialiased" (that is, approaching 0) on the edges of a polygon?
A fragment shader is called once per fragment, unless it uses gl_SampleID or gl_SamplePosition, or has a 'sample' storage qualifier. How can a fragment shader be invoked once per fragment and still produce an antialiased rendering?
OpenGL on Silicon Graphics Systems:
http://www-f9.ijs.si/~matevz/docs/007-2392-003/sgi_html/ch09.html#LE68984-PARENT
mentions: When you use multisampling and read back color, you get the resolved color value (that is, the average of the samples). When you read back stencil or depth, you typically get back a single sample value rather than the average. This sample value is typically the one closest to the center of the pixel.
And there's this technical spec (1994) from the OpenGL site. It explains in full detail what is done if MULTISAMPLE_SGIS is enabled: http://opengl.org/registry/specs/SGIS/multisample.txt
See also this related question: How are depth values resolved in OpenGL textures when multisampling?
And the answers to this question, where GL_MULTISAMPLE_ARB is recommended: where is GL_MULTISAMPLE defined?. The specs for GL_MULTISAMPLE_ARB (2002) are here: http://www.opengl.org/registry/specs/ARB/multisample.txt