How to read pixels from a framebuffer object - OpenGL

I want to read the depth component of a scene rendered to a framebuffer object. I initially used glReadPixels() but found that it could only read pixels from the default framebuffer.
The answers to some related questions on this site suggest using a PBO, but I haven't tried it yet. It seems that PBO reads are asynchronous, so which command can I use to synchronize the read at the end?

A PBO won't help you here, because it is just a different kind of buffer to read into (memory owned by the OpenGL implementation instead of memory on the host).
The usual way to make a depth component readable back in OpenGL is to use a depth texture attached to the FBO's depth attachment, and to retrieve the data after rendering with glGetTexImage.
For a normal color attachment you can use glReadPixels, with a prior call to glReadBuffer to select the GL_COLOR_ATTACHMENTi of the bound FBO.
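For illustration, here is a minimal sketch of the depth-texture approach; the 512x512 size and the variable names are placeholders, it assumes a current OpenGL context with FBO support, and completeness checks are omitted:

/* Create a depth texture and attach it to an FBO (size is an example). */
GLuint depthTex, fbo;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
/* ... attach a color buffer if needed, check completeness, render the scene ... */

/* Read the depth values back after rendering. */
static GLfloat depth[512 * 512];   /* one float per pixel */
glBindTexture(GL_TEXTURE_2D, depthTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth);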

Related

Render to window framebuffer and FBO to save full scale texture image

I would like to save the output of my image processing OpenGL shader program to an image file and also display the result on the screen. I know how to save the window framebuffer using glReadPixels(). However, the resolution of the screen is smaller than the dimensions of the image.
If I render to an FBO, do I need to call glDrawArrays() again after saving and unbinding the FBO to see the results on the screen? Or is it possible to tell the window framebuffer to render from the FBO without having to run the shader program a second time?
To save the image rendered into the FBO, you can read the pixels directly after selecting which buffer OpenGL will read from by calling glReadBuffer. In your particular case, setting the read buffer to GL_COLOR_ATTACHMENTi should do the trick. See the glReadBuffer man page for details.
In order to display the image in the FBO: yes, you will need an additional rendering pass to copy the FBO's image into the default framebuffer. You can either bind the FBO's texture attachment and render geometry, as you suggest, to get the image on the screen, or you may be able to use glBlitFramebuffer to simplify the copying and image filtering.
If I render to an FBO, do I need to call glDrawArrays() again after saving and unbinding the FBO to see the results on the screen?
You should use glBlitFramebuffer (...); the purpose of this function is to copy from one framebuffer (the read buffer) to another (the draw buffer). Provided you are not doing something unusual like drawing into an integer texture attachment, your FBO's draw buffer should be compatible with your default framebuffer (window).
There are some additional caveats related to the filter method and the type of image you are copying (e.g. depth buffers cannot use linear interpolation), but since you are discussing "full scale" here, I imagine you are interested in GL_NEAREST anyway.
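As an illustration, a minimal sketch of that blit; fbo, fboWidth/fboHeight and winWidth/winHeight are placeholder names, and a 3.0+ context where glBlitFramebuffer is available is assumed:

/* Copy the FBO's color attachment into the default framebuffer (the window). */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);        /* 0 = default framebuffer */
glDrawBuffer(GL_BACK);
glBlitFramebuffer(0, 0, fboWidth, fboHeight,      /* source rectangle      */
                  0, 0, winWidth, winHeight,      /* destination rectangle */
                  GL_COLOR_BUFFER_BIT,
                  GL_LINEAR);                     /* use GL_NEAREST for a 1:1 copy */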

Pass stream hint to existing texture?

I have a texture that was created by another part of my code (with QT5's bindTexture, but this isn't relevant).
How can I set an OpenGL hint that this texture will be frequently updated?
glBindTexture(GL_TEXTURE_2D, textures[0]);
//Tell opengl that I plan on streaming this texture
glBindTexture(GL_TEXTURE_2D, 0);
There is no mechanism for indicating that a texture will be updated repeatedly; usage hints of that kind only exist for buffer objects (e.g., VBOs) through the usage parameter. However, there are two possibilities:
Attach your texture to a framebuffer object and update it that way. That's probably the most efficient way to do what you're asking: the memory associated with the texture remains resident on the GPU, and you can update it at rendering speeds.
Try using a pixel buffer object (commonly called a PBO, bound to the GL_PIXEL_UNPACK_BUFFER target) as the buffer that Qt writes its generated texture into, and create that buffer with the GL_DYNAMIC_DRAW usage hint. You'll still need to call glTexImage*D() with the buffer offset of the PBO (i.e., probably zero) for each update, but that approach may be more efficient than just blasting texels to the pipe directly through glTexImage*D().
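A rough sketch of that PBO path, using glTexSubImage2D for the per-frame update rather than a full glTexImage2D; width, height and the RGBA8 format are placeholder assumptions, and textures[0] is the texture from the question, assumed to be already allocated:

/* Allocate a PBO big enough for one frame of texel data. */
GLuint pbo;
GLsizeiptr size = width * height * 4;              /* RGBA, one byte per channel */
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_DYNAMIC_DRAW);

/* Each update: write the new texels into the PBO, then let GL pull from it. */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
/* ... fill dst with width*height*4 bytes of image data ... */
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);  /* offset into the PBO */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);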
There is no such hint. OpenGL defines functionality, not performance. Just upload to it whenever you need to.

How to render Framebuffer Objects on multi-sampled textures?

I currently have a rendering engine using multiple passes in which various parts of the image are rendered on textures, and then combined using shaders. It works, and now I would like to activate multi-sampling.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE_2D_MULTISAMPLE texture to a framebuffer object.
It seems one way to use multi-sampling and still have access to the result as a texture is to use a multi-sampled render buffer, and then copy the result into a multisample texture.
My question is: what would be the best way to go forward?
Is it possible to render in a render buffer and use the output in my shader, without copying into a texture?
Should I indeed copy the content of the buffer into a texture, and then use it?
Is there another, better, solution?
Thanks.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE_2D_MULTISAMPLE texture to a framebuffer object.
Read it again. It says nothing about GL_TEXTURE_2D_MULTISAMPLE textures. Actually, I take that back: don't read that page again. If you want good FBO info, read the page on Framebuffer Objects that explains 3.x behavior. The page you linked to is old.
Back in the EXT days, all you had were multisampled renderbuffers, because multisample textures didn't exist. You could create multisampled buffers, but you couldn't texture with them. You could only blit them.
In OpenGL 3.2 and later, you can create multisampled textures. And you can attach them, just like any other texture, to an FBO.
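For illustration, a minimal sketch of that, assuming a 3.2+ context; width, height and the sample count of 4 are placeholders:

/* Create a 4x multisampled texture and attach it to an FBO. */
GLuint msTex, msFbo;
glGenTextures(1, &msTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        width, height, GL_TRUE);   /* fixed sample locations */

glGenFramebuffers(1, &msFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);
/* In a shader, sample it through a sampler2DMS with texelFetch(). */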

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html )
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame plus some other buffers that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones) then you can attach a texture as a render target within a framebuffer object. So the rendering code is exactly as it would be normally, but ends up writing the results to a texture that you can then use as a source for drawing.
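As a rough sketch of that render-to-texture setup (the names colorTex and fbo and the 512x512 size are placeholders; error and completeness checks are omitted):

/* Create a color texture and make it the render target of an FBO. */
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* Render the scene as usual; the results end up in colorTex. */
glViewport(0, 0, 512, 512);
/* ... draw calls ... */

/* Back to the window, with the scene now available as a texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, colorTex);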
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
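To make the fisheye idea a little more concrete, here is a minimal, untested GLSL sketch embedded as a C string; the names sceneTex and texCoord and the crude distortion formula are placeholder assumptions, not the cubemap technique from the linked Quake page:

static const char *fisheyeFragSrc =
    "#version 330 core\n"
    "uniform sampler2D sceneTex;              /* the FBO's color texture */\n"
    "in  vec2 texCoord;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    vec2 p = texCoord * 2.0 - 1.0;       /* center the coordinates  */\n"
    "    float r = length(p);\n"
    "    vec2 warped = p * r;                 /* crude barrel distortion */\n"
    "    fragColor = texture(sceneTex, warped * 0.5 + 0.5);\n"
    "}\n";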
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can point shaders at the textures attached to an FBO. The link above gives an overview of the procedure.

What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?

What is the difference between FBO and PBO? Which one should I use for off-screen rendering?
What is the difference between FBO and PBO?
A better question is how are they similar. The only thing that is similar about them is their names.
A Framebuffer Object (note the capitalization: framebuffer is one word, not two) is an object that contains multiple images which can be used as render targets.
A Pixel Buffer Object is:
A Buffer Object. FBOs are not buffer objects. Again: framebuffer is one word.
A buffer object that is used for asynchronous uploading/downloading of pixel data to/from images.
If you want to render to a texture or just a non-screen framebuffer, then you use FBOs. If you're trying to read pixel data back to your application asynchronously, or you're trying to transfer pixel data to OpenGL images asynchronously, then you use PBOs.
They're nothing alike.
An FBO (Framebuffer Object) is a target you can render to other than the default framebuffer or the screen.
A PBO (Pixel Buffer Object) allows asynchronous transfers of pixel data to and from the device. This can help improve overall rendering performance if you have other work that can be done while waiting for the pixel transfer.
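For the asynchronous readback case described above, here is a hedged sketch; pbo, width and height are placeholders, and the implicit synchronization happens when the buffer is mapped:

/* Start an asynchronous read of the current read buffer into a PBO. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
/* typically returns without waiting; the transfer happens in the background */

/* ... do other CPU work here ... */

/* Mapping the buffer waits for the transfer to finish, then exposes the data. */
const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ... use pixels ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);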
I would read VBOs, PBOs and FBOs:
Apple has posted two very nice bits of sample code demonstrating PBOs and FBOs. Even though these are Mac-specific, as sample code they're good on any platform because PBOs and FBOs are OpenGL extensions, not windowing system extensions.
So what are all these objects? Here's the situation:
I want to highlight something.
An FBO is not a block of memory. Think of it as a struct of pointers (attachment points). You must attach a texture to the FBO before you can use it. Once a texture is attached, you can draw into it for offscreen rendering or for a second-pass effect.
struct FBO {
    AttachColor0 *ptr0;
    AttachColor1 *ptr1;
    AttachColor2 *ptr2;
    AttachDepth  *ptr3;
};
A PBO, on the other hand, is a block of memory. Think of it as a malloc of x bytes: you can then copy data from it into a texture or FBO, or copy data into it.
Why use a PBO?
It gives you an intermediate buffer that interfaces with host memory, so a texture can be uploaded to or downloaded from the host without stalling OpenGL's drawing.