I'm learning about framebuffers right now, and I just don't understand what a color attachment does. I understand framebuffers themselves.
What is the point of the second parameter in:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorBuffer, 0);
Why doesn't anything draw to my framebuffer when I change it to GL_COLOR_ATTACHMENT1?
How could I draw to the framebuffer with the texture attached to color attachment 1?
Why would using multiple color attachments be useful?
Is it similar to the glActiveTexture(GL_TEXTUREi) concept of drawing multiple textures at once?
I'm just trying to understand more OpenGL. Thanks.
Yes, a framebuffer can have multiple color attachments, and the second parameter to glFramebufferTexture2D() controls which of them you're setting. The maximum number of supported color attachments can be queried with:
GLint maxAtt = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAtt);
The minimum number supported by all implementations is 8.
To select which buffer(s) you want to render to, you use glDrawBuffer() or glDrawBuffers(). The only difference between these two calls is that the first one allows you to specify only one buffer, while the second one supports multiple buffers.
In the example you tried, you can render to GL_COLOR_ATTACHMENT1 by using either:
glDrawBuffer(GL_COLOR_ATTACHMENT1);
or:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
The default is GL_COLOR_ATTACHMENT0, which explains why you had success rendering to attachment 0 without ever using this call. The list of draw buffers is part of the FBO state.
To produce output for multiple color buffers, you define multiple outputs in the fragment shader.
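For example, a minimal fragment shader writing to two color buffers might look like this (the output names are just placeholders). Output location 0 is written to the first buffer listed in glDrawBuffers(), location 1 to the second:
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;

void main()
{
    outColor0 = vec4(1.0, 0.0, 0.0, 1.0);   // written to the first draw buffer
    outColor1 = vec4(0.0, 1.0, 0.0, 1.0);   // written to the second draw buffer
}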
One popular use for multiple color buffers (often referred to by the acronym MRT = Multiple Render Targets) is deferred shading, where the interpolated vertex attributes used for the shading calculation are stored in a number of buffers in the initial rendering pass. The actual shading calculation is then performed only for the visible pixels in a second pass, using the attribute values from the buffers produced in the first pass. See for example http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html for a more complete explanation of how this works.
I'm developing a 3D program with OpenGL 3.3. So far I have managed to render many cubes and some spheres. I need to texture all the faces of all the cubes with a common texture, except one face, which should have a different texture. I tried with a single texture and everything worked fine, but when I try to add another one the program seems to behave randomly.
My questions:
is there a suitable way of passing multiple textures to the shaders?
how am I supposed to keep track of faces in order to render the right texture?
Googling I found out that it could be useful to define vertices twice, but I don't really get why.
Is there a suitable way of passing multiple textures to the shaders?
You'd use glUniform1i() along with glActiveTexture(). So, given that your fragment shader has multiple sampler2D uniforms:
uniform sampler2D tex1;
uniform sampler2D tex2;
Then as you're setting up your shader, you set the sampler uniforms to the texture units you want them associated with:
glUniform1i(glGetUniformLocation(program, "tex1"), 0)
glUniform1i(glGetUniformLocation(program, "tex2"), 1)
You then set the active texture to either GL_TEXTURE0 or GL_TEXTURE1 and bind a texture.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
How am I supposed to keep track of faces in order to render the right texture?
It depends on what you want.
You could decide which texture to use based on the normal (as is done in tri-planar texture mapping, for example).
You could also have another attribute that decides how much to crossfade between the two textures.
color = mix(texture(tex1, texCoord), texture(tex2, texCoord), 0.2);
As before, you can equally use a uniform float for the transition factor instead of a constant. That lets you fade between the two textures over time, a bit like a slide transition in PowerPoint.
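For instance, with a hypothetical uniform named transition that you update each frame:
uniform float transition;   // 0.0 = only tex1, 1.0 = only tex2

color = mix(texture(tex1, texCoord), texture(tex2, texCoord), transition);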
Try reading LearnOpenGL's Textures tutorial.
Lastly, the spec guarantees a minimum number of combined texture image units: 48 in OpenGL 3.3 and 80 in OpenGL 4.x. You can check how many you actually have available by querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
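For example:
GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);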
You can use index buffers. Define the vertices once, and then use one index buffer to draw the portion of the mesh with the first texture, then use the second index buffer to draw the portion that needs the second texture.
Here's the general formula:
Setup the vertex buffer
Setup the shader
Setup the first texture
Setup and draw the index buffer for the part of the mesh that should use the first texture
Setup the second texture
Setup and draw the index buffer for the part of the mesh that should use the second texture.
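A rough sketch of that sequence, assuming the two element buffers (ebo1, ebo2), the index counts, and the texture names are placeholders you have already created:
glBindVertexArray(vao);
glUseProgram(program);

// draw the part of the mesh that uses the first texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo1);
glDrawElements(GL_TRIANGLES, indexCount1, GL_UNSIGNED_INT, 0);

// draw the part of the mesh that uses the second texture
glBindTexture(GL_TEXTURE_2D, texture2);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo2);
glDrawElements(GL_TRIANGLES, indexCount2, GL_UNSIGNED_INT, 0);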
I'd like to access the framebuffer to get the RGB values of each pixel and change them. glReadPixels and glDrawPixels are too slow for this, so I should use shaders instead.
So far I have written code and can successfully display a three-dimensional model using GLSL shaders.
I drew two cubes as follows.
....
glDrawArrays(GL_TRIANGLES, 0, 12*6);
....
and my fragment shader is:
varying vec3 fragmentColor;
void main()
{
gl_FragColor = vec4(fragmentColor, 1);
}
Now, how can I access the RGB values and change them?
For example, if the pixel values at (u1, v1) and (u2, v2) on the window are (0, 0, 255), then I want to change them to (255, 0, 0).
With the exception of an OpenGL ES-only extension, fragment shaders cannot just read from the current framebuffer. Otherwise, we wouldn't need blending.
You also can't just render to the image you're reading from in a shader. So if you need to do some sort of post-processing, then that is best done by rendering to a separate image. That is, you do your rendering to image 1, then bind that as a texture and change the FBO so that you're rendering to image 2.
Alternatively, if you have access to OpenGL 4.5/ARB/NV_texture_barrier, then you can use texture barriers to handle this. This permits you a single read/modify/write pass, if you bind the current framebuffer's image as a texture. You'd issue the barrier before doing your read/modify/write, then bind that texture to a sampler while still rendering to that framebuffer.
Also, this requires that the FS read from the exact texel that it would write to. Assuming a viewport anchored at 0,0, the code for this would be texelFetch(sampler, ivec2(gl_FragCoord.xy), 0). You can't read from someone else's texel and modify it.
Obviously you must be rendering to a texture; you cannot use the default framebuffer for this.
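A rough GLSL sketch of that single-pass read/modify/write, matching the blue-to-red example from the question (fbTex is just a placeholder name for the framebuffer's color texture, bound as a regular sampler):
uniform sampler2D fbTex;   // the same texture that is attached as the color buffer

out vec4 fragColor;

void main()
{
    // a texture barrier (glTextureBarrier) must be issued before the draw call that runs this
    vec4 current = texelFetch(fbTex, ivec2(gl_FragCoord.xy), 0);
    // turn pure blue pixels red, leave everything else unchanged
    fragColor = (current.rgb == vec3(0.0, 0.0, 1.0)) ? vec4(1.0, 0.0, 0.0, 1.0) : current;
}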
Texture barrier could be used for cases where you read from different texels than you write to. But that would require doing something similar to the first case of switching bound images. Though you wouldn't need to change the FBO exactly; you could change the region of the FBO that you render to. That is, so long as you're reading from a different area than you're rendering to, and you use barriers appropriately when switching between those regions, everything is fine.
I have two 2D textures. The first, an MSAA texture, uses a target of GL_TEXTURE_2D_MULTISAMPLE. The second, non MSAA texture, uses a target of GL_TEXTURE_2D.
According to OpenGL's spec on ARB_texture_multisample, only GL_NEAREST is a valid filtering option when the MSAA texture is being drawn to.
In this case, both of these textures are attached to GL_COLOR_ATTACHMENT0 via their individual Framebuffer objects. Their resolutions are also the same size (to my knowledge this is necessary when blitting an MSAA to non-MSAA).
So, given the current constraints, if I blit the MSAA holding FBO to the non-MSAA holding FBO, do I still need to use GL_NEAREST as the filtering option, or is GL_LINEAR valid, since both textures have already been rendered to?
The filtering options only come into play when you sample from the textures. They play no role while you render to the texture.
When sampling from multisample textures, GL_NEAREST is indeed the only supported filter option. You also need to use a special sampler type (sampler2DMS) in the GLSL code, with corresponding sampling instructions.
I actually can't find anything in the spec saying that setting the filter to GL_LINEAR for multisample textures is an error. But the filter is not used at all. From the OpenGL 4.5 spec (emphasis added):
When a multisample texture is accessed in a shader, the access takes one vector of integers describing which texel to fetch and an integer corresponding to the sample numbers described in section 14.3.1 determining which sample within the texel to fetch. No standard sampling instructions are allowed on the multisample texture targets, and no filtering is performed by the fetch.
For blitting between multisample and non-multisample textures with glBlitFramebuffer(), the filter argument can be either GL_LINEAR or GL_NEAREST, but it is ignored in this case. From the 4.5 spec:
If the read framebuffer is multisampled (its effective value of SAMPLE_BUFFERS is one) and the draw framebuffer is not (its value of SAMPLE_BUFFERS is zero), the samples corresponding to each pixel location in the source are converted to a single sample before being written to the destination. filter is ignored.
This makes sense because there is a restriction in this case that the source and destination rectangle need to be the same size:
An INVALID_OPERATION error is generated if either the read or draw framebuffer is multisampled, and the dimensions of the source and destination rectangles provided to BlitFramebuffer are not identical.
Since the filter is only applied when the image is stretched, it does not matter in this case.
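As a quick sketch (msaaFbo, resolveFbo, width, and height are placeholder names), the resolve blit looks like:
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);   // GL_LINEAR is accepted too; the filter is ignored here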
In order to implement "depth-peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing to do is to enable GL_MULTISAMPLE and use a GL_TEXTURE_2D_MULTISAMPLE texture instead of GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or clear example putting all these together.
While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
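A minimal sketch of that setup (samples, width, and height are placeholders):
GLuint msaaColorTex = 0;
glGenTextures(1, &msaaColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaColorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, msaaColorTex, 0);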
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Yes. If you attach a depth buffer to the same FBO, you need to use a multisampled renderbuffer, with the same number of samples as the color buffer. So you create your depth renderbuffer with glRenderbufferStorageMultisample(), passing in the same sample count you used for the color buffer.
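Continuing the sketch, the matching multisampled depth attachment could be created like this:
GLuint depthRbo = 0;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);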
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:
You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. For resolving the multisample texture, you can use glBlitFramebuffer(), where the multisample texture is attached to the GL_READ_FRAMEBUFFER, and the regular texture to the GL_DRAW_FRAMEBUFFER.
You can use the multisample texture for your subsequent rendering. You will need to use the sampler2DMS type for the samplers in your shader code, with the corresponding sampling functions.
For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
How to render a texture with alpha?
I have a texture and need to render it with different alpha values at different locations. Is there any way to do so? (My texture is GL_RGBA.)
If it's not possible to change the alpha value on the fly, do I have to create different textures for different alpha levels?
First, make sure that your texture has an alpha channel. You mention you are loading an RGBA format, but it's always good to check the original file in an image editing program. Then make sure your texture is ready for rendering in OpenGL. A common mistake is to forget to set up the texture's filtering mode through glTexParameter*. The default minification filter requires mipmaps, so I find it's easiest to start with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Secondly, you will need to set up OpenGL for blending. This involves a glEnable() call with GL_BLEND and a glBlendFunc() call. Most of the time you will want glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), as most other combinations of tokens will give you effects you are not after (see the glBlendFunc spec page for more info).
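In code, that is:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);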
Finally, ensure you are sampling your texture at different points. If you are using immediate mode (drawing with glVertex*), you will need to either use glTexGen* or manually specify texture coordinates with glTexCoord* before your calls to glVertex*. If you are using array data to draw your scene, make sure you have enabled the texture coordinate array with glEnableClientState(GL_TEXTURE_COORD_ARRAY) and set glTexCoordPointer.
Your texture is GL_RGBA so it has a different alpha value for each texel.
If you want to change the alpha value used for rendering, I can think of the following methods:
Change the texture alpha values (though I'm not sure whether you're saying you don't want to do that).
Use glColor4f to change the alpha value of the vertices. It will multiply the texture values. You may need to use glEnable(GL_COLOR_MATERIAL) and/or glColorMaterial().
Use a vertex shader to change the vertex alpha values. It will multiply the texture values.
Use a fragment shader to change the sampled texture values on the fly.
Use two texture stages and multiply them. The second one will have the modified alpha values (see glActiveTexture() and friends).
Use a fragment shader and two (or more) texture stages. This is the coolest!
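As a rough sketch of the fragment-shader approach (tex, alphaScale, and texCoord are placeholder names), you could scale the sampled alpha on the fly:
uniform sampler2D tex;
uniform float alphaScale;   // 1.0 = original alpha, 0.0 = fully transparent
varying vec2 texCoord;

void main()
{
    vec4 c = texture2D(tex, texCoord);
    gl_FragColor = vec4(c.rgb, c.a * alphaScale);
}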