render to color attachment with DSA - opengl

I want to render into a framebuffer with DSA (direct state access). However, I only got it working by manually binding the framebuffer. Is there a way to do this without binding it?
This is how I thought it would work:
glNamedFramebufferDrawBuffer(m_fbo, GL_COLOR_ATTACHMENT0);
drawCall();
This only works if I call glBindFramebuffer(GL_FRAMEBUFFER, m_fbo); beforehand. How do I do it correctly?
Also, what is the equivalent of glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); for framebuffers with DSA? Again, I can only clear if I have previously bound the framebuffer.

The array of draw buffers is not a global state, but rather it is stored per-framebuffer. You are probably familiar with the mechanics of Vertex Array Objects, which maintain separate sets of vertex attribute pointers; draw buffers are analogous to attribute pointers in this situation.
When you make a call to glNamedFramebufferDrawBuffer (m_fbo, ...), you are modifying the state of m_fbo's array without first having to bind m_fbo. You are not actually telling OpenGL to source its color buffer from m_fbo's GL_COLOR_ATTACHMENT0 - that only happens when you bind m_fbo.
In fact, if you think about this critically, this is the only logical way it can work. If you could arbitrarily source buffers from different framebuffer objects, you would bypass framebuffer completeness validation. For instance, FBO0 might have a multisampled color attachment with 4 samples while FBO1 has a single-sampled depth attachment. Those two buffers are incompatible, but the only time that is validated is when you try to attach both images to the same FBO.
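To address the glClear part of the question: GL 4.5 DSA also provides glClearNamedFramebufferfv/fi, which clear attachments of a named framebuffer without binding it. Rendering, however, always goes through the currently bound draw framebuffer. A minimal sketch, reusing m_fbo and drawCall() from the question (the clear values are arbitrary):
// Per-framebuffer state can be set and cleared with DSA, no bind needed:
const GLfloat clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
glNamedFramebufferDrawBuffer(m_fbo, GL_COLOR_ATTACHMENT0);
glClearNamedFramebufferfv(m_fbo, GL_COLOR, 0, clearColor);      // clears draw buffer 0 (GL_COLOR_ATTACHMENT0)
glClearNamedFramebufferfi(m_fbo, GL_DEPTH_STENCIL, 0, 1.0f, 0); // clears depth and stencil together
// Actually rendering into the FBO still requires binding it as the draw framebuffer:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
drawCall();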

QOpenGLFramebufferObject bind texture

This is totally driving me nuts!
I've created my FBO with:
QOpenGLFramebufferObjectFormat format;
format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
format.setMipmap(false);
format.setSamples(4);
format.setTextureTarget(GL_TEXTURE_2D);
format.setInternalTextureFormat(GL_RGBA32F_ARB);
qfbo = new QOpenGLFramebufferObject(QSize(w, h), format);
OK, it seems to be working, and Qt doesn't complain about it.
The problem is that I need that QOpenGLFramebufferObject for both reading and writing. Writing to it seems easy enough: qfbo->bind() does the job. The problem is that I cannot bind its texture; if I ask for its GLuint handle, it always returns 0 either way:
qDebug()<<qfbo->texture();
qDebug()<<qfbo->takeTexture();
Obviously my intention is to bind the texture myself, like:
glBindTexture(GL_TEXTURE_1D, fbo->texture());
Any tips? I've been googling for a long time without luck :(
I forgot to say that if I don't use setSamples(4), I get a non-zero GLuint, but the texture is completely black.
If you use multisampling, there's no texture allocated; instead, a multisampled renderbuffer gets used.
You need to create another, non-multisampled FBO of matching dimensions and use blitFramebuffer to resolve the multisampled data into a single-sampled one.
Then you'll have a texture you can bind (to the 2D target, not the 1D one!). If this texture is still completely black, run apitrace on your code and see what you're doing wrong.
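As a rough sketch of that resolve step with Qt's API (resolvedFbo is a placeholder name; the format settings mirror the question, minus the sample count):
// Create a non-multisampled FBO of matching size to resolve into.
QOpenGLFramebufferObjectFormat resolvedFormat;
resolvedFormat.setTextureTarget(GL_TEXTURE_2D);
resolvedFormat.setInternalTextureFormat(GL_RGBA32F_ARB);
QOpenGLFramebufferObject *resolvedFbo = new QOpenGLFramebufferObject(QSize(w, h), resolvedFormat);
// Resolve the multisampled contents into the single-sampled FBO.
QOpenGLFramebufferObject::blitFramebuffer(resolvedFbo, qfbo, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// resolvedFbo->texture() now returns a usable, non-zero texture name.
glBindTexture(GL_TEXTURE_2D, resolvedFbo->texture());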

How to render color and depth in multisample texture?

In order to implement "depth peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing to do is to enable GL_MULTISAMPLE and use a GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or clear example putting all these together.
While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Yes. If you attach a depth buffer to the same FBO, you need to use a multisampled renderbuffer, with the same number of samples as the color buffer. So you create your depth renderbuffer with glRenderbufferStorageMultisample(), passing in the same sample count you used for the color buffer.
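Putting both attachments together, a rough sketch of the setup might look like this (the sample count, the width/height variables, and the internal formats here are assumptions you would adapt):
const int samples = 4;
// Multisampled color texture
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, colorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8, width, height, GL_TRUE);
// Multisampled depth renderbuffer with the same sample count
GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);
// Attach both to the FBO
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, colorTex, 0); // level must be 0
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}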
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:
You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. For resolving the multisample texture, you can use glBlitFramebuffer(), where the multisample texture is attached to the GL_READ_FRAMEBUFFER, and the regular texture to the GL_DRAW_FRAMEBUFFER.
You can use the multisample texture for your subsequent rendering. You will need to use the sampler2DMS type for the samplers in your shader code, with the corresponding sampling functions.
For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
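For completeness, a sketch of the resolve blit described in option 1 (msFbo, resolveFbo, width and height are placeholder names; resolveFbo is assumed to be a single-sampled FBO of matching size with a regular GL_TEXTURE_2D color attachment):
// Resolve the multisampled color buffer into the single-sampled FBO's texture.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height,   // source rectangle
                  0, 0, width, height,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// resolveFbo's color texture can now be sampled as an ordinary sampler2D.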

Process of setting up a VAO in OpenGL

Can I get a more overall/general description of this?
I've been trying to research these things all week, but I only come across super technical explanations or examples.
Could somebody explain the overall process or goal of these VAO's? Maybe outline a bit of a "step-by-step" flowchart for setting up a VAO in OpenGL?
I've never been someone who learns by examples... so all the things I've found online have been really unhelpful so far.
Your Vertex Buffer Objects (VBOs) contain the vertex data. You set up attribute pointers which specify how that data should be accessed and interpreted in order to draw a vertex: how many components each vertex has, how they are stored (GL_FLOAT, for example), and so on.
The Vertex Array Object (VAO) saves all of this attribute state. Its purpose is to let you set up these attributes once and then restore them with a single state change whenever you need them.
The advantage of this is that you can easily set up multiple states and switch between them: if I want to draw a cube and a triangle, those are two different states, so I create a VAO for each, and each VAO saves its own state. When I want to draw the cube, I restore the cube's state only.
Setup
Setup has four steps:
Create the VAO
Bind the VAO
Set up the state
Unbind the VAO
Creating a VAO is very simple:
GLuint vao_name;
glGenVertexArrays(1, &vao_name);
This will generate one vertex array object and store its name in vao_name.
Once your VAO is created you need to bind it:
glBindVertexArray(vao_name);
Now, any changes you make to the attribute pointers or to the element array buffer binding will be saved. That means a call to any of the following:
glVertexAttribPointer(...)
glEnableVertexAttribArray(...)
glDisableVertexAttribArray(...)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...)
Whenever you make one of these calls, the corresponding state is saved in the currently bound VAO. And when you later bind the VAO again, the state is restored.
Once you have set up the state you can unbind the VAO. Do this by binding VAO 0:
glBindVertexArray(0);
You could also bind a different VAO and set up a different state for it. Once another VAO (or VAO 0) is bound, it is safe to make changes to the attribute pointers without disrupting your previous state.
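Putting the four steps together, a minimal sketch of the whole setup might look like this (the names cube_vao and cube_vbo, the cubeVertices array, the attribute index 0, and the 3-float position layout are assumptions for illustration):
GLuint cube_vao, cube_vbo;
// 1. Create and 2. bind the VAO
glGenVertexArrays(1, &cube_vao);
glBindVertexArray(cube_vao);
// 3. Set up the state the VAO will remember
glGenBuffers(1, &cube_vbo);
glBindBuffer(GL_ARRAY_BUFFER, cube_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);
// attribute 0: 3 floats per vertex, tightly packed, starting at offset 0.
// The GL_ARRAY_BUFFER binding itself is not stored in the VAO, but
// glVertexAttribPointer records which buffer was bound when it was called.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
// 4. Unbind the VAO so later calls don't disturb this state
glBindVertexArray(0);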
Drawing
When you want to draw something all you have to do is restore the VAO:
glBindVertexArray(cube_vao);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(triangle_vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0); //Unbind the VAO
Edit:
These are some answers that helped me when I was trying to understand VAOs:
glVertexAttribPointer overwrite, glVertexAttribPointer clarification, When should glVertexAttribPointer be called?
Also, the OpenGL specification explains it well once you've got your head around the basics: https://www.opengl.org/registry/
The VAO holds all information about how to read the attributes from the VBOs.
This includes, for each attribute: which VBO it is in, the offset, the stride, how it is encoded (int, float, normalized or not), and how many components there are per attribute. In other words, the parameters of glVertexAttribPointer (and the buffer bound to GL_ARRAY_BUFFER at the time of the call).
It also holds the element array buffer binding, which holds the indices.
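As a small sketch of that last point, continuing the setup example above (ebo and cubeIndices are placeholder names): while the VAO is bound, the GL_ELEMENT_ARRAY_BUFFER binding is recorded in it, so a later indexed draw only needs the VAO.
// While the VAO is bound, bind the index buffer; the VAO records this binding.
glBindVertexArray(cube_vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);
glBindVertexArray(0);
// Later, binding the VAO alone is enough for an indexed draw.
glBindVertexArray(cube_vao);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);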
For what VAOs are, see #ratchet-freak's answer.
As for the goal of VAOs, it's a matter of improving performance by allowing a single state change to set up all vertex attribute bindings, which not only means fewer function calls but also more opportunity for the OpenGL implementation to optimize this state as a whole.
For VAOs, an understanding of VBOs and shaders is a prerequisite. VAOs provide a way to "pre-define" vertex data and its attributes.
One way to understand them is to draw with VBOs and then draw the same thing with VAOs. You will notice that it is similar to drawing with VBOs, except that the setup is first "saved" in a VAO for later use. Subsequently, the VAO is made active to actually draw the saved state.
See VBO, Shader, VAO for a fairly good example.

Can a texture be bound to more than one fbo?

Can the same texture be bound to more than one framebuffer object?
I need to write to some textures in a multi-target rendering pass with a certain FBO, and later add some blending to just one of those textures, so I need a second framebuffer object with that texture bound to it.
I have no idea why you would think that you can't attach a texture to multiple FBOs. So yes, you can.
However, you shouldn't need to for your purposes. You don't have to write to all of the images attached to an FBO. You control what images get written to with glDrawBuffers. You can even selectively enable and disable blending to certain draw buffers, if you need to write to multiple buffers but only blend with certain ones.
So yes you can, but you shouldn't bother. Just switch your draw buffers, unless you need a new depth buffer or something.
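A rough sketch of that approach (the two-attachment layout and the blend setup are illustrative assumptions; glEnablei/glDisablei require GL 3.0+):
// The FBO has two color attachments; write to both in the first pass.
GLenum bothBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bothBuffers);
// ... multi-target rendering pass ...
// Later pass: write (and blend) into attachment 1 only, using the same FBO.
GLenum onlySecond[] = { GL_NONE, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, onlySecond);
glEnablei(GL_BLEND, 1); // enable blending for draw buffer 1 only
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... blending pass ...
glDisablei(GL_BLEND, 1);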

Using a framebuffer as a vertex buffer without moving the data to the CPU

In OpenGL, is there a way to use framebuffer data as vertex data without moving the data through the CPU? Ideally, a framebuffer object could be recast as a vertex buffer object directly on the GPU. I'd like to use the fragment shader to generate a mesh and then render that mesh.
There are a couple of ways you could go about this. The first has already been mentioned by spudd86 (except that you need to use GL_PIXEL_PACK_BUFFER; that's the one glReadPixels writes to).
The other is to use a framebuffer object and then read from its texture in your vertex shader, mapping from a vertex ID (which you would have to manage) to a texture location. If this is a one-time operation, though, I'd go with copying it into a PBO, binding that to GL_ARRAY_BUFFER, and just using it as a VBO.
Just use the functions to do the copy and let the driver figure out how to do what you want; chances are, as long as you copy directly into the vertex buffer, it won't actually perform a copy but will just make your VBO a reference to the data.
The main thing to be careful of is that some drivers may not like you using something you told them was for vertex data with an operation meant for pixel data...
Edit: probably something like the following may or may not work... (IIRC the spec says it should)
GLuint vbo;
glGenBuffersARB(1, &vbo);
// allocate storage for the pixel data (use appropriate pixel formats and size)
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, vbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW_ARB);
// read the framebuffer directly into the buffer object
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
// reinterpret the same buffer object as vertex data
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glVertexPointer(4, GL_FLOAT, 0, 0);
// draw stuff
Edited to correct the buffer bindings, thanks Phineas.
The specification for GL_pixel_buffer_object gives an example demonstrating how to render to a vertex array under "Usage Examples".
The following extensions are helpful for solving this problem:
GL_texture_float - floating point internal formats to use for the color buffer attachment
GL_color_buffer_float - disable automatic clamping for fragment colors and glReadPixels
GL_pixel_buffer_object - operations for transferring pixel data to buffer objects
If you can do your work in a vertex/geometry shader, you can use transform feedback to write directly into a buffer object. This also gives you the option of skipping the rasterizer and fragment shading.
Transform feedback is available as EXT_transform_feedback, as a core feature since GL 3.0, and as the ARB equivalent.
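A minimal sketch of that transform-feedback path (the program object, the varying name outPosition, and the names meshVbo and vertexCount are placeholders; assumes GL 3.0+):
// Tell the linker which vertex shader output to capture, then (re)link.
const char *varyings[] = { "outPosition" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);
// Bind a buffer object to receive the captured vertices.
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, meshVbo);
// Capture without running the rasterizer or fragment shader.
glEnable(GL_RASTERIZER_DISCARD);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
// meshVbo now contains the generated vertices and can be bound as GL_ARRAY_BUFFER.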