Antialiasing is lost when retrieving bitmap with glReadPixels - opengl

I am writing a program using OpenGL. It does something simple: it draws a teapot. To make the teapot look nice on screen, I enabled multisample antialiasing, and it works, as the following screenshot shows:
But when I save the result as a BMP image, the antialiasing is gone. I use an FBO and a PBO for the readback. Here is the relevant part of my code:
// Create the FBO with a color renderbuffer and a depth renderbuffer
glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glGenRenderbuffers(1, &m_renderBufferColor);
glBindRenderbuffer(GL_RENDERBUFFER, m_renderBufferColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB,
                      m_subImageWidth, m_subImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, m_renderBufferColor);
glGenRenderbuffers(1, &m_renderBufferDepth);
glBindRenderbuffer(GL_RENDERBUFFER, m_renderBufferDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
                      m_subImageWidth, m_subImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, m_renderBufferDepth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Create the PBO used for the readback
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glGenBuffers(1, &m_subImageBuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_subImageBuffer);
glBufferData(GL_PIXEL_PACK_BUFFER, m_bufferSize,
             NULL, GL_STREAM_READ);

// Read the pixels into the PBO and write them to a BMP file
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_subImageBuffer);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// Note: read the pixels in BGR order
glReadPixels(0, 0, m_subImageWidth, m_subImageHeight,
             GL_BGR, GL_UNSIGNED_BYTE, bufferOffset(0));
GLUtils::checkForOpenGLError(__FILE__, __LINE__);
m_subPixels[i] = static_cast<GLubyte*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));
gltGenBMP(subImageFile, GLT_BGR, m_subImageWidth, m_subImageHeight, m_subPixels[i]);
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);   // unmap while the buffer is still bound
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
I am very curious: why are the results different between rendering to the default framebuffer and saving to a BMP file?
Actually, what I want to do is render 9 small bitmaps from 9 slightly different viewing angles and then composite them into one bitmap for display on a stereoscopic 3D screen. But the composited bitmap looks bad.
Could someone tell me why?

Just because you enable multisampling on your default framebuffer does not mean your FBO gets it too. You need to use glRenderbufferStorageMultisample when creating the renderbuffers for your FBO.
See: FBO Blitting is not working
And: http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
This is also relevant: glReadPixels from FBO fails with multisampling
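Here is a minimal sketch of the fix (identifier names like msaaFBO and resolveFBO are illustrative, not from the question's code): allocate multisampled renderbuffers with glRenderbufferStorageMultisample, render into them, and then resolve into a single-sample FBO with glBlitFramebuffer before calling glReadPixels, because multisampled attachments cannot be read directly.

GLuint msaaFBO, msaaColor, msaaDepth, resolveFBO, resolveColor;
const GLsizei samples = 4;  // must not exceed GL_MAX_SAMPLES

// Multisampled FBO: all rendering happens here
glGenFramebuffers(1, &msaaFBO);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);
glGenRenderbuffers(1, &msaaColor);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGB8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColor);
glGenRenderbuffers(1, &msaaDepth);
glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, msaaDepth);

// Single-sample FBO: the resolve target that glReadPixels can read from
glGenFramebuffers(1, &resolveFBO);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFBO);
glGenRenderbuffers(1, &resolveColor);
glBindRenderbuffer(GL_RENDERBUFFER, resolveColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, resolveColor);

// ... render the teapot into msaaFBO ...

// Resolve the samples, then read back the antialiased image
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFBO);
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels);

Note that when the source is multisampled, the resolve blit must use GL_NEAREST and identical source and destination rectangles.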

Related

giving preexisting texture to fbo to draw on it

I wanted to know if it is possible to give a non-NULL texture (one that already contains image data) to a framebuffer object and render onto it, so that the existing content becomes the background of the final texture.
From what I have tried, the FBO just keeps the texture I give it and renders it as-is; nothing is drawn onto it (as if my draw calls had no effect).
If I attach a NULL texture, the drawing works.
So is this possible, or am I just doing it wrong?
All the FBO examples I have seen pass only NULL texture data.
What you're trying to do is not as common as the use case where content in an FBO attachment is rendered from scratch. That's why you won't find as many examples.
It's still perfectly legal, though, and should work. The only real difference is that you don't clear the color buffer after attaching the texture to the FBO and before you start rendering.
One case where you have to be careful is if you use depth buffering for the rendering you do on top of the original texture content. You will then need a depth attachment in your FBO as usual (typically a renderbuffer), and you must clear the depth buffer, but not the color buffer, before starting to render:
glClear(GL_DEPTH_BUFFER_BIT);
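Putting it together, a sketch of the idea (names like sceneTexture and depthRbo are illustrative):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, sceneTexture, 0);  // non-NULL texture with existing content
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

glClear(GL_DEPTH_BUFFER_BIT);  // clear depth only; the color buffer keeps the old image

// ... draw calls here composite on top of the existing texture content ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);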

Render to window framebuffer and FBO to save full scale texture image

I would like to save the output of my image processing OpenGL shader program to an image file and also display the result on the screen. I know how to save the window framebuffer using glReadPixels(). However, the resolution of the screen is smaller than the dimensions of the image.
If I render to an FBO, do I need to call glDrawArrays() again after saving and unbinding the FBO to see the results on the screen? Or is it possible to tell the window framebuffer to render from the FBO without having to run the shader program a second time?
To save the image rendered into the RBO, you can read the pixels directly after selecting which buffer OpenGL reads from by calling glReadBuffer. In your particular case, setting the read buffer to GL_COLOR_ATTACHMENT<i> should do the trick. See the glReadBuffer man page for details.
In order to display the image in the FBO: yes, you will need an additional rendering pass to copy the FBO's image into the default framebuffer. You can either bind the FBO's texture and render geometry, as you suggest, to get the image on screen, or you may be able to use glBlitFramebuffer to simplify the copying and filtering.
If I render to an FBO, do I need to call glDrawArrays() again after saving and unbinding the FBO to see the results on the screen?
You should use glBlitFramebuffer (...); the purpose of this function is to copy one framebuffer (the read buffer) to another (the draw buffer). Provided you are not doing something unusual like drawing into an integer texture attachment, your FBO's draw buffer should be compatible with your default framebuffer (the window).
There are some additional caveats related to the filter method and the type of image being copied (e.g. depth buffers cannot use linear interpolation), but since you are after a "full scale" copy, I imagine you are interested in GL_NEAREST anyway.
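A sketch of the blit pass (the dimension names are illustrative; the scaling happens inside the blit itself):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);        // source: your full-resolution FBO
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);          // destination: the window
glBlitFramebuffer(0, 0, imageWidth,  imageHeight,   // source rectangle
                  0, 0, windowWidth, windowHeight,  // destination rectangle (scaled)
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);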

'Render to Texture' and multipass rendering

I'm implementing an algorithm about pencil rendering. First, I should render the model using Phong shading to determine the intensity. Then I should map the texture to the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me I should try 'render to texture', but I don't know how to use this method to get the effect I want. As I understand it, I first render the mesh with this method, which gives me a 2D texture of the whole scene. Now that the content has been drawn into that framebuffer, the next step is to render to the screen, right? But how do I use the rendered texture and do post-processing on it? Can anybody show me some code or links about it?
I made this tutorial, it might help you : http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful ? http://www.ozone3d.net/demos_projects/toon-snow.php
Render to a texture with Phong shading.
Then draw that texture to the screen again on a full-screen textured quad, applying a shader that performs your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. The best way to do this is with a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From there, you have a 2D picture of your scene (you should verify, by saving it to an image file, that it actually is what you want).
Once you have the image, you set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader) that transforms the brightness of the image into a greyscale value. You can output this directly, or you can use it as an offset into another, "pencil" texture.
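A sketch of the two passes (GLSL-style setup in place of Cg; names like intensityTex and pencilProgram are illustrative):

// Pass 1: render the Phong-shaded model into a texture via an FBO
glBindFramebuffer(GL_FRAMEBUFFER, fbo);            // fbo has intensityTex as its color attachment
glViewport(0, 0, texWidth, texHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawModelWithPhongShader();                        // your existing Phong pass

// Pass 2: draw a full-screen quad that samples the texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);              // back to the window
glViewport(0, 0, windowWidth, windowHeight);
glUseProgram(pencilProgram);                       // shader that maps intensity to pencil strokes
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, intensityTex);
glUniform1i(glGetUniformLocation(pencilProgram, "uIntensity"), 0);
drawFullScreenQuad();                              // two triangles covering the viewport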

How to render Framebuffer Objects on multi-sampled textures?

I currently have a rendering engine using multiple passes in which various parts of the image are rendered on textures, and then combined using shaders. It works, and now I would like to activate multi-sampling.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE2D_MULTISAMPLE to a framebuffer object.
It seems one way to use multi-sampling and still have access to the result as texture is to use a multi-sampled render buffer, and then copy the result into a multisample texture.
My question is: what would be the best way to go forward?
Is it possible to render in a render buffer and use the output in my shader, without copying into a texture?
Should I indeed copy the content of the buffer into a texture, and then use it?
Is there another, better, solution?
Thanks.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE2D_MULTISAMPLE to a framebuffer object.
Read it again. It says nothing about GL_TEXTURE_2D_MULTISAMPLE textures. Actually, I take that back: don't read that page again. If you want good FBO information, read the Framebuffer Object wiki page, which explains the 3.x behavior. The page you linked to is old.
Back in the EXT days, all you had were multisampled renderbuffers, because multisample textures didn't exist. You could create multisampled buffers, but you couldn't texture with them. You could only blit them.
In OpenGL 3.2 and later, you can create multisampled textures, and you can attach them to an FBO just like any other texture.
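A minimal sketch (the sample count and names are illustrative):

GLuint msTex;
glGenTextures(1, &msTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        width, height, GL_TRUE);   // 4 samples, fixed sample locations
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);

In the shader, such an attachment is then read with a sampler2DMS and texelFetch, which fetches individual samples rather than filtered texels.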

Difference between glBitmap and glTexImage2D

I need to display an image in an OpenGL window.
The image changes on every timer tick.
I've searched Google for how to do this, and as far as I can see it can be done using either glBitmap or glTexImage2D.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, the current raster color is written to the framebuffer. If the bit is 0, the pixel in the framebuffer is left untouched.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
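For the direct route, the call is essentially this (the raster position and pixel format are illustrative):

glRasterPos2i(0, 0);   // where the lower-left corner of the image lands
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, newPixels);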
That said, you'll be better off with glTexImage2D (sketched after this list) if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
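If you go the texture route, a sketch of the per-tick update (names like imageTex and newPixels are illustrative):

// One-time setup: allocate the texture storage
glBindTexture(GL_TEXTURE_2D, imageTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Every timer tick: overwrite the pixels, then draw a textured quad
glBindTexture(GL_TEXTURE_2D, imageTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, newPixels);
drawTexturedQuad();    // e.g. two triangles covering the target screen area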