Can I have a default Framebuffer without alpha and depth? - opengl

I am looking to save some video card memory by not allocating what I do not use. I am far from running out of memory, but it would feel 'cleaner' to me.
I can't really think of a reason to have an alpha value in the default framebuffer since my window is not going to alpha-blend with my desktop anyway. I was wondering if I could save a few bytes or have more color depth by removing its alpha.
Likewise, I am doing some deferred lighting and all my depth calculations occur in a framebuffer that is not the default one. Then I simply render a quad (two tris) to the default frame buffer with the texture in which I rendered my scene as well as a few GUI elements, none of which requires depth-testing. I call glDisable(GL_DEPTH_TEST) when rendering the default framebuffer, but I wouldn't mind not having a depth buffer at all instead of a depth buffer that I don't use.
Can I do that within OpenGL? Or within SDL, with which I create my OpenGL context?
I try to create my OpenGL context with the following SDL attributes:
sdl.GL_SetAttribute(sdl.GL_DOUBLEBUFFER, 1)
sdl.GL_SetAttribute(sdl.GL_DEPTH_SIZE, 0)
sdl.GL_SetAttribute(sdl.GL_ALPHA_SIZE, 0)
info := sdl.GetVideoInfo()
bpp := int(info.Vfmt.BitsPerPixel)
if screen := sdl.SetVideoMode(640, 480, bpp, sdl.OPENGL); screen == nil {
    panic("Could not open SDL window: " + sdl.GetError())
}
if err := gl.Init(); err != nil {
    panic(err)
}
Unfortunately, the SDL binding for Go lacks the SDL_GL_GetAttribute function that would allow me to check whether my wishes are granted.
As I said, there is no emergency. I am mostly curious.

I can't really think of a reason to have an alpha value in the default framebuffer since my window is not going to alpha-blend with my desktop anyway.
That's good, because the default framebuffer having an alpha channel wouldn't actually do that (on Windows anyway).
The framebuffer alpha is there for use in blending operations. It is sometimes useful to do blending that is in some way based on a destination alpha color. For example, I once used the destination alpha as a "reflectivity" value for a reflective surface, when drawing the reflected objects after having drawn that reflective surface. It was necessary to do it in that order, because the reflective surface had to be drawn in multiple passes.
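As an illustration of that kind of destination-alpha blending (a minimal sketch; the blend factors and the two draw helpers are placeholders, not the exact setup from that project):
// Pass 1: draw the reflective surface; its shader writes reflectivity into destination alpha.
glDisable(GL_BLEND);
drawReflectiveSurface();             // hypothetical helper
// Pass 2: draw the reflected objects, scaled by the stored destination alpha.
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE);   // result = src * dstAlpha + dst
drawReflectedObjects();              // hypothetical helper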
In any case, the low level WGL/GLX/etc APIs for creating OpenGL contexts do allow you to ask to not have alpha or depth. Note that if you ask for 0 alpha bits, that will almost certainly save you 0 memory, since it's more efficient to render to a 32-bit framebuffer than a 24-bit one. So you may as well keep it.
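On Windows, for example, a pixel format without alpha and depth can be requested roughly like this (a sketch; hdc is assumed to be your window's device context, and ChoosePixelFormat only returns the closest match, so the driver may still give you those bits anyway):
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;  // RGB only
pfd.cAlphaBits = 0;   // no alpha requested
pfd.cDepthBits = 0;   // no depth buffer requested
pfd.cStencilBits = 0;
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);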
However, since you're using SDL, and the Go binding of SDL, that's up to SDL and its Go binding. The SDL_GL_SetAttribute function should work, assuming SDL implements it correctly. If you want to verify this, you can just ask the framebuffer through OpenGL:
glBindFramebuffer(GL_FRAMEBUFFER, 0); //Use the default framebuffer.
GLint depthBits, alphaBits;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_ALPHA_SIZE, &alphaBits);

Related

OpenGL blend two FBOs

In a game I'm writing, I have a level, which is properly rendered to the on-screen render buffer provided to me by the OS. I can also render this to a framebuffer, then render this framebuffer onto the output render buffer.
To add a background, I want to render a different scene, an effect, or whatever to a second framebuffer, then have this "show through" wherever the framebuffer containing the level has no pixel set, i.e. the alpha value is 0. I think this is called alpha blending.
How would I go about doing this with OpenGL? I think glBlendFunc could be used to achieve this, but I am not sure how I can couple this with the framebuffer drawing routines to properly achieve the result I want.
glBlendFunc allows the application to blend (merge) the output of all your current draw operations (say, X) with the current "display" framebuffer (say, Y) that already exists.
i.e.,
New display output = X (blend) Y
You can control the blend function, for example as the snippet below shows:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Full usage is shown here:
https://github.com/prabindh/sgxperf/blob/master/sgxperf_test4.cpp
Note that the concepts of "showing through" and "blending" are a little different; you might just want to stick with "per-pixel alpha blending" terminology.
FBOs are just containers and are not storage. What you need to do is attach a texture target to each FBO and render your output to that texture. Once you have done this, you can use your output textures on a fullscreen quad and do whatever you want with your blending.
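A minimal sketch of that, assuming levelFBO, levelTex, backgroundTex and a drawFullscreenQuad helper that samples the given texture (all placeholder names):
// Give the FBO a texture as its color storage, then render the level into it.
glBindFramebuffer(GL_FRAMEBUFFER, levelFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, levelTex, 0);
// ... render the level here ...
// Composite into the default framebuffer: background first, then the level with alpha blending.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawFullscreenQuad(backgroundTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(levelTex);   // the background shows through where the level's alpha is 0
glDisable(GL_BLEND);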

How to use glReadPixels() to return resized image from FBO?

In short: I need a quick way to resize the buffer image and then return the pixels so I can save them to a file, etc.
Currently I first use glReadPixels(), and then I go through the pixels myself to resize them with my own resize function.
Is there any way to speed up this resizing, for example make OpenGL do the work for me? I think I could use glGetTexImage() with miplevel and mipmapping enabled, but as I noticed earlier, that function is bugged on my GFX card, so I can't use it.
I only need one miplevel, which could be anything from 1 to 4, but not all of them, to conserve some GPU memory. So is it possible to generate only one miplevel of the wanted size?
Note: I don't think I can use multisampling, because I need pixel-precise rendering for stencil tests. If I rendered with multisampling, it would make blurry pixels that would fail the stencil test and masking, and the result would be incorrect (AFAIK). Edit: I only want to scale the color (RGBA) buffer!
If you have OpenGL 3.0 or alternatively EXT_framebuffer_blit available (very likely -- all nVidia cards since around 2005, all ATI cards since around 2008 have it, and even Intel HD graphics claims to support it), then you can glBlitFramebuffer[EXT] into a smaller framebuffer (with a respectively smaller rectangle) and have the graphics card do the work.
Note that you cannot ever safely rescale inside the same framebuffer, even if you were to say "I don't need the original", because overlapped blits are undefined (allowed, but undefined).
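A sketch of such a downscaling blit between two FBOs (srcFBO, dstFBO and the sizes are placeholders):
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFBO);    // full-size source
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFBO);    // smaller destination
glBlitFramebuffer(0, 0, srcWidth, srcHeight,       // source rectangle
                  0, 0, dstWidth, dstHeight,       // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR); // GL_LINEAR is only valid for the color buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, dstFBO);
glReadPixels(0, 0, dstWidth, dstHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);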
Or, you can of course just draw a fullscreen quad with a simple downscaling pixel shader (aniso decimation, if you want).
In fact, since you mention stencil in your last paragraph: if it is stencil (or depth) that you want to rescale, then you most definitely want to draw a fullscreen quad with a shader, because it will very likely not give the desired result otherwise. Usually, one will choose a max filter rather than interpolation in such a case (e.g. what reasonable, meaningful result could interpolating a stencil value of 0 and a value of 10 give? Something else is needed, such as "any nonzero" or "max value in sample area").
Create a framebuffer of the desired target size and draw your source image with a textured quad that fills that resized buffer. Then read the resized framebuffer contents using glReadPixels.
Pseudocode:
unbind_texture(OriginalSizeFBOattachmentColorTex);
glBindFramebuffer(OriginalSizeFBO);
render_picture();
glBindFramebuffer(TargetSizeFBO); // TargetSizeFBO used a Renderbuffer color attachment
glBindTexture(OriginalSizeFBOattachmentColorTex);
glViewport(TargetSize);
render_full_viewport_quad_with_texture();
glReadPixels(...);

Blend FBO onto default framebuffer

To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've gotten to the blitting part fine, but I can't see how to blend my 2D FBO onto it? Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad using the image as a texture, with the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit contents of the framebuffer object to the application framebuffer (or to any other). Although, as the spec states, it is not possible to use blending:
The pixel copy bypasses the fragment pipeline. The only fragment operations which affect the blit are the pixel ownership test and the scissor test.
So any blending means using a fragment shader, as suggested. One fullscreen pass with blending should be pretty cheap; I believe there is nothing to worry about.
Use a shader to read back from the framebuffer. This is an OpenGL ES extension, not supported by all hardware:
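Assuming the 2D FBO has a texture color attachment (overlayTex here) and a drawFullscreenTexturedQuad helper that samples it (both placeholder names), the compositing step could look like:
// 3D layer: a plain blit, no blending needed.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo3D);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// 2D layer: draw its color texture over the default framebuffer with blending.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, overlayTex);
drawFullscreenTexturedQuad();
glDisable(GL_BLEND);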
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt

Order of operations for multisampling in DirectX 10

I'm confused on the process that needs to be done for anti-aliasing in DirectX 10. I haven't done it at all before, so it may just be that I'm confused on the procedure in general.
Here's what I know so far (or think I know): you need to enable multisampling for the RasterizerState object that you use in the shader, and the SampleDesc in the swap chain description (DXGI_SWAP_CHAIN_DESC) needs to be set to a supported value for Count (and Quality should stay 0, since other values are hardware-specific, though I don't know what for). Between the calls that prepare for more drawing (ClearRenderTargetView and IASetInputLayout) and the call to Present, the back buffer should be downsampled (via ResolveSubresource) to an otherwise equal-sized texture. Then, (and this is the part I can't find anything on) somehow present the downsampled texture.
Do I have something messed up along the way? Or am I just mistaken on the last step? I saw a few resources refer to doing the resolve during one last draw to a full-screen quad with a multisampled shader texture (Texture2DMS<>), but I can't figure out what that would entail, or why you would do it instead of just resolving it with the device call.
Any attempt I've made at this doesn't produce any increase in image quality.
EDIT: I'm using DX11; for DX10 you would just use D3D10_ instead of D3D11_. Thanks, DeadMG.
You don't need to do any downsampling.
Find out what kind of quality modes and sample count your GPU supports.
You said that the quality level should be 0; that is wrong. Use this to get the supported quality level for a given sample count and format:
UINT GetMSAAQuality(UINT numSamples,DXGI_FORMAT format)
{
UINT q=-1;
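// CheckMultisampleQualityLevels returns the number of quality levels;
// the highest valid Quality value is therefore that number minus one.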
HR(m_device->CheckMultisampleQualityLevels(format,numSamples,&q));
return q-1;
}
// Use it like this:
UINT sampleCount=4; // You set this
UINT sampleQuality=GetMSAAQuality(sampleCount,DXGI_FORMAT_CHOOSE_ONE);
// For swap chain
DXGI_SWAP_CHAIN_DESC d;
d.SampleDesc.Count=sampleCount;
d.SampleDesc.Quality=sampleQuality;
// Now for all the textures that you create for your
// render targets and also for your depth buffer
D3D11_TEXTURE2D_DESC dtex;
dtex.SampleDesc.Quality=sampleQuality;
dtex.SampleDesc.Count=sampleCount;
// Now for all your render target views you must set the correct dimension type
// based on if you use multisampling or not
D3D11_RENDER_TARGET_VIEW_DESC drtv;
drtv.ViewDimension=sampleCount>1?D3D11_RTV_DIMENSION_TEXTURE2DMS:D3D11_RTV_DIMENSION_TEXTURE2D; // multisampling is determined by the sample count
// For depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC ddsv;
ddsv.ViewDimension=sampleCount>1?D3D11_DSV_DIMENSION_TEXTURE2DMS:D3D11_DSV_DIMENSION_TEXTURE2D;
If you want to read a multisampled texture in your shader (if you render a fullscreen quad, for example), you must declare the texture as Texture2DMS<>.

Is it possible to attach the default renderbuffer to a FBO?

I'm considering refactoring a large part of my rendering code and one question popped to mind:
Is it possible to render to both the screen and to a texture using multiple color attachments in a Frame Buffer Object? I cannot find any information on whether this is possible or not, even though it would have many useful applications. I guess it should be enough to bind my texture as color attachment 0 and renderbuffer 0 as attachment 1?
For example, I want to make an interactive application where you can "draw" on a 3D model. I resolve where the user draws by rendering the UV coordinates to a texture, so I can look up at the mouse coordinates where to modify the texture. In my case it would be fastest to have a shader that both draws the UVs to the texture and the actual texture to the screen in one pass.
Are there better ways to do this or am I on the right track?
There is no such thing as a "default renderbuffer" in OpenGL. There is the window-system-provided default framebuffer with the reserved name zero, but that basically means "no FBO enabled". So no, unfortunately plain OpenGL provides no method to use its color buffer as a color attachment of any other FBO. I'm not aware of any extensions that could possibly provide this feature.
With renderbuffers there is also the reserved name zero, but it is only a special "none" name that allows unbinding renderbuffers.
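If the goal is still to produce the screen image and the UV texture in one pass, the usual fallback is to render both into a single FBO with two color attachments and then blit the screen image to the default framebuffer; a rough sketch with placeholder names:
// One FBO, two color attachments: the screen image and the UV map.
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, uvTex, 0);
glDrawBuffers(2, bufs);
// ... render the model once; the fragment shader writes both outputs ...
// Copy the screen image to the window.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);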