GLX double buffering only working after two swaps

I have a program using OpenGL and GLX. At startup I choose a framebuffer configuration with the following attributes:
const int attributes[] = {GLX_RENDER_TYPE, GLX_RGBA_BIT, GLX_DOUBLEBUFFER, True, None};
fb_configs = glXChooseFBConfig(display, screen_index, attributes, &fb_configs_count);
When I have to re-render the window, I clear the screen, render the content and then swap the buffers:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
if_render();
glXSwapBuffers(display, drawable);
After I swap the buffers, instead of the contents of my window, I see what is behind the window. My guess is that the swapped buffer is in some initial state and nothing was actually rendered into it. If I trigger the buffer swap a second time, the content of my window displays properly.
Am I missing something?
EDIT
This happens with fluxbox 1.3.7 and mesa 11.0.6.
I use a direct rendering context.
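The context itself is created the standard way from the chosen configuration (simplified here; error checking and the fb_configs_count check are omitted):
GLXContext context = glXCreateNewContext(display, fb_configs[0], GLX_RGBA_TYPE, NULL, True /* direct */);
glXMakeCurrent(display, drawable, context);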

Related

How to render in the title bar with D3D11 in Windows 7?

I have a program that renders to a D3D11 swap chain created with IDXGIFactory2::CreateSwapChainForHwnd. It clears the swap chain to a particular color (specifically green) and draws a textured rectangle. Following the guide at https://learn.microsoft.com/en-us/windows/desktop/dwm/customframe, I extended the window caption downward using DwmExtendFrameIntoClientArea, and I extended the client area to the entire window by handling the WM_NCCALCSIZE message. When the swap chain is presented, the contents of the swap chain buffer are drawn on top of the window, completely covering the DWM-drawn glass frame and caption buttons. How can I leave regions of the glass frame and caption buttons to be drawn by DWM while still drawing in the window frame with D3D11?
I have already tried clearing the RenderTargetView to a color of {0.0f, 0.0f, 0.0f, 0.0f} with ID3D11DeviceContext::ClearRenderTargetView, but the alpha component appears to be ignored. I've tried specifying DXGI_ALPHA_MODE_STRAIGHT and DXGI_ALPHA_MODE_PREMULTIPLIED in the AlphaMode member of the DXGI_SWAP_CHAIN_DESC1 used to create the swap chain, but it crashes because alpha-blended swap chains must be created with CreateSwapChainForComposition or CreateSwapChainForCoreWindow. Those functions are not viable since I would like to support Windows 7.
Another thing I've tried is creating a blend state and making parts of the texture transparent. All this does is blend the transparent parts of the texture with the clear color. Nothing is rendered by DWM, though.
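For reference, the frame extension follows the pattern from that guide (a minimal sketch; the 31-pixel caption height is illustrative):
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
    {
        // Extend the glass frame 31 px down into the client area.
        MARGINS margins = { 0, 0, 31, 0 }; // left, right, top, bottom
        DwmExtendFrameIntoClientArea(hWnd, &margins);
        return 0;
    }
    case WM_NCCALCSIZE:
        // Returning 0 when wParam is TRUE extends the client area to the whole window.
        if (wParam == TRUE)
            return 0;
        break;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}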

How can I create an OpenGL context using a Windows memory DC (C++)?

In my Windows MFC application, in its View class, I created an OpenGL context using the View's DC:
HDC hdc = GetDC()->m_hDC;
int nPixelFormat;
static PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR), // Size of this structure
    1,                             // Version of this structure
    PFD_DRAW_TO_WINDOW |           // Draw to window (not bitmap)
    PFD_SUPPORT_OPENGL |           // Support OpenGL calls
    PFD_DOUBLEBUFFER,              // Double-buffered mode
    PFD_TYPE_RGBA,                 // RGBA color mode
    24,                            // Want 24-bit color
    0,0,0,0,0,0,                   // Not used to select mode
    0,0,                           // Not used to select mode
    0,0,0,0,0,                     // Not used to select mode
    32,                            // Size of depth buffer
    0,                             // Not used to select mode
    0,                             // Not used to select mode
    PFD_MAIN_PLANE,                // Draw in main plane
    0,                             // Not used to select mode
    0,0,0 };                       // Not used to select mode
// Choose a pixel format that best matches that described in pfd
nPixelFormat = ChoosePixelFormat(hdc, &pfd);
// Set the pixel format for the device context
BOOL ok = SetPixelFormat(hdc, nPixelFormat, &pfd);
assert(ok);
HGLRC m_hrc = wglCreateContext(hdc);
assert(m_hrc);
wglMakeCurrent(hdc, m_hrc);
All the code above works all right, and I can do OpenGL drawing as expected.
But what I need now is to use a memory DC instead of the window DC. To be exact, how can I use the hmemDC below to create an OpenGL context, the way I did above with the window DC:
CRect rct;
GetClientRect(&rct);
HDC hmemDC = CreateCompatibleDC(pDC->m_hDC);
HBITMAP hBmp = CreateCompatibleBitmap(pDC->m_hDC,rct.Width(),rct.Height());
With the same pixel format constructed above, I get an "invalid pixel format" error when calling wglCreateContext() and cannot obtain a valid OpenGL context.
I googled a lot and tried changing some of the pixel format values, but the result was the same.
Is it possible to create an OpenGL context with a Windows memory DC, and if so, how should I do it?
Edit:
This is why I need a bitmap (or memory DC): I created a 2D map rendering library which uses OpenGL. The client wants to use this library to render a background map and draw their own symbols on top of it. But they prefer to use Windows GDI rather than OpenGL to draw their symbols. So I thought that if I can provide them with a bitmap or a memory DC, they could do what they want. Any better solutions? Am I in the right direction? Or is it a totally bad idea to provide such a 2D library with an OpenGL backend?
This can't be done in a useful way.
You can in principle render to a bitmap by using the PFD_DRAW_TO_BITMAP flag instead of PFD_DRAW_TO_WINDOW in the PIXELFORMATDESCRIPTOR.
However, doing so will disable all hardware-accelerated rendering, falling back to Microsoft's default OpenGL 1.1 software implementation.
If you want hardware acceleration and/or modern GL, you either need a window or some offscreen buffer like a pbuffer, which is available via the WGL_ARB_pbuffer extension. However, with modern GL you are probably better off creating a window which is simply never shown and using a Framebuffer Object as the offscreen render target.
In either case, you will have to copy the data back to the CPU if you need it in a bitmap there.
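To illustrate the FBO route, a minimal sketch (assumes a GL 3.0+ context is already current on a hidden window; width and height are placeholders):
// Render target texture.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Attach it to a framebuffer object and render into it.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glViewport(0, 0, width, height);
// ... draw the map here ...
// Read the result back for GDI consumption.
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());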
Put in a few words: you can't create arbitrary OpenGL contexts for memory DCs. At least not any kind of OpenGL context you'd actually want to use.
If your goal is off-screen rendering, either create a pbuffer DC (which requires creating an OpenGL context first, which in turn requires creating a window and setting its pixel format), or just create a window with an OpenGL context for it and use a framebuffer object.
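A rough sketch of the pbuffer route (assumes a dummy window and context are already current so wglGetProcAddress works, and that <GL/wglext.h> from the Khronos registry is available; strictly, a pixel format supporting WGL_DRAW_TO_PBUFFER_ARB should be chosen with wglChoosePixelFormatARB, which is glossed over here):
#include <GL/wglext.h>

PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
    (PFNWGLCREATEPBUFFERARBPROC)wglGetProcAddress("wglCreatePbufferARB");
PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
    (PFNWGLGETPBUFFERDCARBPROC)wglGetProcAddress("wglGetPbufferDCARB");

const int attribs[] = { 0 }; // terminator only; no special attributes
HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, nPixelFormat, 512, 512, attribs);
HDC pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
wglMakeCurrent(pbufferDC, pbufferRC);
// ... render off-screen, read back with glReadPixels ...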

glClear() not obeying scissor region [closed]

Closed 8 years ago. This question needs debugging details and is not currently accepting answers.
I'm drawing OpenGL content (direct Win32, not using GLUT, FreeGLUT, GLFW, etc.) with double buffering in an arbitrary Windows 7 window which is already open, for example a Windows Notepad window. I have the window handle and can draw the content I want as expected, but I am seeing strange behavior with the glClear() function.
It is my understanding that the glClear() function should only affect pixels on the screen which are INSIDE the region defined by the glScissor() function. I have defined the scissor region with glScissor() and then enabled the scissor test using glEnable(GL_SCISSOR_TEST). glClearColor is set to white (1,1,1,1). I'm clearing both color and depth buffers with the glClear() command.
When the SwapBuffers() command is executed in order to render on the screen, my selected clear color of white is painted inside the scissor region as I requested, but the rest of the window OUTSIDE the scissor region is painted black, rather than leaving these pixels untouched as I expected.
The scissor region (white) and the object (a 3D cube) are drawn correctly, but the rest of the Notepad window's pixels are set to black, and anything previously painted in that Notepad window is covered over.
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white background
glViewport(0, 0, 300, 300);
glScissor(0, 0, 250, 400);
glEnable(GL_SCISSOR_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//... draw cube inside glBegin()/glEnd()...
SwapBuffers(hDC);
If I get your description correctly, glClear works as intended.
You must not assume that just because you see something on the screen, it is also present in the back buffer. The contents of the Notepad window that you see are either the front buffer, or a copy of the front buffer that was blitted into DWM's own private render buffer (depending on whether you have compositing or not). Or something else entirely, such as a GDI buffer that was blitted into DWM's buffer. Most likely the latter, since Notepad renders with GDI.
When you flip buffers, the back buffer is displayed over anything that's on-screen in that region, and what you get is an all-black buffer (actually uninitialized, but presumably the driver was kind enough to zero the memory) except for the area that you cleared to white.
Which is exactly what you should expect: your glClear affected only a subregion, and the rest is undefined; it just happened to be zero (black).
Incidentally, if no compositing is enabled, what you see on-screen can be copied from the front buffer to the back buffer on most graphics cards, so you could still show the original contents of the Notepad window if you wished. You will, however, never get the contents of a GDI window into your back buffer magically (nor will this work with DWM, nor is it guaranteed to work at all; it merely works incidentally most of the time).
The clean solution, if you want the window's original contents, would be to BitBlt from the DC to memory, create a texture, and draw (or blit) that one into the back buffer.
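A rough sketch of that approach (hWnd, width, and height are placeholders; error handling omitted):
// Capture the window contents into a memory bitmap.
HDC winDC = GetDC(hWnd);
HDC memDC = CreateCompatibleDC(winDC);
HBITMAP bmp = CreateCompatibleBitmap(winDC, width, height);
HGDIOBJ oldBmp = SelectObject(memDC, bmp);
BitBlt(memDC, 0, 0, width, height, winDC, 0, 0, SRCCOPY);
SelectObject(memDC, oldBmp); // deselect before GetDIBits

// Pull the pixels out as a 32-bit bottom-up DIB (matches OpenGL's row order).
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = height;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
std::vector<unsigned char> pixels(width * height * 4);
GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

// Upload as a texture; draw it as a fullscreen quad into the back buffer.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());

DeleteObject(bmp);
DeleteDC(memDC);
ReleaseDC(hWnd, winDC);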

QGLWidget ignores glClear

I'm stumped. I have a widget inside the main window of a Qt 4.6 application which has been configured as an OpenGL widget. It draws just fine, except I am unable to clear the background between frames. I get a black background when the window opens and it never clears after that, so I get a jumbled mess of rendered objects as time goes along. I'm also having to call swapBuffers() at the end of the paintGL() function to get the widget to show the most recent frame, which has me puzzled, as I was under the impression that swapBuffers() was called automatically. I am set up for double buffering and the widget is not shared. Here is the relevant code:
void GLWidget::paintGL ( )
{
m_Input.Draw();
QGLWidget::swapBuffers();
}
void GLWidget::initializeGL ( )
{
qglClearColor(QColor(0,0,255,128));
glClear(GL_COLOR_BUFFER_BIT);
}
It does seem there's something not right with the double buffering. Clearing the screen to a background color is pretty basic. But it's driving me nuts as to why it's not working. The remainder of the drawing code is working fine. Any ideas? (This is on a Linux system.)
glClear is a drawing operation. It doesn't belong in initializeGL but in paintGL. The clearing color should be set right before calling glClear as well, so move that [q]glClearColor along with it.
Update
The paintGL method should look like this:
void GLWidget::paintGL()
{
qglClearColor(QColor(0,0,255,128));
glClear(GL_COLOR_BUFFER_BIT);
// you probably also want to clear the depth and stencil buffers
// glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
m_Input.Draw();
QGLWidget::swapBuffers();
}
Of course you must make sure that m_Input.Draw() doesn't mess things up again. You didn't show the code for that, so I'm in the dark here.
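As an aside on the manual swapBuffers() call: QGLWidget swaps automatically after paintGL() when auto buffer swapping is on, which it is by default. If you really need the explicit call, check whether autoBufferSwap was disabled somewhere:
// Auto swapping is on by default; making the setting explicit in the constructor:
GLWidget::GLWidget(QWidget *parent) : QGLWidget(parent)
{
    setAutoBufferSwap(true); // QGLWidget then calls swapBuffers() after paintGL()
}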

OpenGL with MFC

I am trying to make 4 OpenGL viewports inside a CSplitterWnd, but am having some problems.
At first I had flickering and drawing issues until I added the flag PFD_SUPPORT_GDI to the pixel format, which made everything work nicely together. But when I use PFD_SUPPORT_GDI, I am only able to get an OpenGL 1.1 context.
Is it possible to use PFD_SUPPORT_GDI with a version of OpenGL higher than 1.1 so that I can use VBOs? Or is there another way to get OpenGL to work properly without PFD_SUPPORT_GDI?
The biggest problem with not having PFD_SUPPORT_GDI is that the splitter window separator wipes the viewport contents away when you drag it over them, which does not happen while using the PFD_SUPPORT_GDI flag.
PFD_SUPPORT_GDI means you want to be able to draw using GDI calls, which will force you into using the software renderer.
Most of the time, flicker issues, especially with MFC, are due to improperly chosen WNDCLASS(EX) parameters. Most importantly, the CS_OWNDC flag should be set and the background brush should be NULL. You should also override the OnEraseBkgnd handler and implement an OnPaint handler that validates the painted rect, as in the sketch below.
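A minimal sketch of that advice in MFC terms (class and handler names are illustrative):
BOOL CMyGLView::PreCreateWindow(CREATESTRUCT& cs)
{
    // CS_OWNDC and a NULL background brush, as described above.
    cs.lpszClass = AfxRegisterWndClass(CS_OWNDC | CS_HREDRAW | CS_VREDRAW,
        ::LoadCursor(NULL, IDC_ARROW), NULL /* no background brush */, NULL);
    return CView::PreCreateWindow(cs);
}

BOOL CMyGLView::OnEraseBkgnd(CDC* /*pDC*/)
{
    return TRUE; // suppress the GDI erase to avoid flicker
}

void CMyGLView::OnPaint()
{
    CPaintDC dc(this); // validates the update rect
    // ... OpenGL rendering + SwapBuffers(dc.m_hDC) here ...
}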
PFD_SUPPORT_GDI means that you can do GDI drawing to the window. This forces a software OpenGL implementation, because you cannot use GDI drawing (which is software) with hardware-based OpenGL drawing.
So no, you cannot have both hardware OpenGL (or D3D) acceleration and GDI support for the same window. If you're having issues with what happens to the contents of such windows, that is something you should resolve in some other way. Perhaps you could simply redraw the view when its size is changed or something.
I decided the best way to do this was to use a framebuffer. Handling OnEraseBkgnd() helped with the flicker, but MFC still just doesn't want to play nice with OpenGL, so I had to go with a GDI solution.
Each viewport first gets drawn to its own framebuffer and then blitted to the appropriate window.
void FrameBuffer::Blit(HDC hDC, int width, int height)
{
    // blitBuffer and blitInfo are presumably class members set up elsewhere.
    // glReadPixels returns rows bottom-up, which matches the bottom-up DIB
    // layout that SetDIBitsToDevice expects here.
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, blitBuffer);
    SetDIBitsToDevice(hDC, 0, 0, width, height, 0, 0, 0, height, blitBuffer, &blitInfo, DIB_RGB_COLORS);
}
This solution doesn't seem to have any visible impact on performance.