OpenGL with MFC

I am trying to make 4 OpenGL viewports inside a CSplitterWnd, but am having some problems.
At first, I had flickering and drawing issues until I added the PFD_SUPPORT_GDI flag to the pixel format, which made everything work nicely together. But when I use PFD_SUPPORT_GDI, I am only able to get a 1.1 OpenGL context.
Is it possible to use PFD_SUPPORT_GDI with a version of OpenGL higher than 1.1 so that I can use VBOs? Or is there another way to get OpenGL to work properly without PFD_SUPPORT_GDI?
The biggest problem with not having PFD_SUPPORT_GDI is that the splitter window separator wipes the viewport contents away when you drag it across a viewport, which does not happen while using the PFD_SUPPORT_GDI flag.

PFD_SUPPORT_GDI means you want to be able to draw using GDI calls, which forces you into the software renderer.
Most of the time, flicker issues, especially with MFC, are due to improperly chosen WNDCLASS(EX) parameters. Most importantly, the CS_OWNDC flag should be set and the background brush should be NULL. You should also override the OnEraseBkgnd handler and implement an OnPaint handler that reports a validated rect.
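A minimal sketch of that setup in an MFC view class (CGLView is a hypothetical CView-derived class hosting the GL context; message-map entries are omitted):

BOOL CGLView::PreCreateWindow(CREATESTRUCT& cs)
{
    if (!CView::PreCreateWindow(cs))
        return FALSE;
    // Own DC, no background brush: Windows never erases behind the GL viewport.
    cs.lpszClass = AfxRegisterWndClass(CS_OWNDC | CS_HREDRAW | CS_VREDRAW,
                                       ::LoadCursor(NULL, IDC_ARROW),
                                       NULL /* no background brush */);
    return TRUE;
}

BOOL CGLView::OnEraseBkgnd(CDC* /*pDC*/)
{
    return TRUE;   // claim the background is already erased; OpenGL overdraws it anyway
}

void CGLView::OnPaint()
{
    CPaintDC dc(this);  // constructing CPaintDC validates the update region
    RenderScene();      // hypothetical: your OpenGL drawing + SwapBuffers
}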

PFD_SUPPORT_GDI means that you can do GDI drawing to the window. This forces a software OpenGL implementation, because you cannot use GDI drawing (which is software) with hardware-based OpenGL drawing.
So no, you cannot have both hardware OpenGL (or D3D) acceleration and GDI support for the same window. If you're having issues with what happens to the contents of such windows, that is something you should resolve in some other way. Perhaps you could simply redraw the view when its size is changed or something.

I decided the best way to do this was to use a frame buffer. Handling OnEraseBkgnd() helped with the flicker, but MFC still just doesn't want to play nicely with OpenGL, so I had to go with a GDI solution.
Each viewport first gets drawn to its own frame buffer, and is then blitted to the appropriate window.
void FrameBuffer::Blit(HDC hDC, int width, int height)
{
    // Read the rendered frame back from the current framebuffer...
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, blitBuffer);
    // ...and copy it to the window DC as a 32-bit DIB.
    SetDIBitsToDevice(hDC, 0, 0, width, height, 0, 0, 0, height, blitBuffer, &blitInfo, DIB_RGB_COLORS);
}
This solution doesn't seem to be making any visible impact on performance.
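For reference, a sketch of how the blitInfo and blitBuffer members used above might be set up; the exact types (BITMAPINFO and a raw BYTE buffer) and the helper name are assumptions. A bottom-up 32-bit DIB matches the GL_BGRA read-back row order:

void FrameBuffer::CreateBlitResources(int width, int height)   // hypothetical helper
{
    ZeroMemory(&blitInfo, sizeof(blitInfo));
    blitInfo.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    blitInfo.bmiHeader.biWidth       = width;
    blitInfo.bmiHeader.biHeight      = height;   // positive height = bottom-up, like glReadPixels
    blitInfo.bmiHeader.biPlanes      = 1;
    blitInfo.bmiHeader.biBitCount    = 32;       // matches GL_BGRA / GL_UNSIGNED_BYTE
    blitInfo.bmiHeader.biCompression = BI_RGB;

    blitBuffer = new BYTE[width * height * 4];   // 4 bytes per pixel
}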

Related

Draw OpenGL to an offscreen bitmap

I've inherited a project which renders a 3D scene directly to the window using OpenGL. The code works fine, but we're now drawing an icon onto the 3D view to "Exit 3D view mode". This also works fine, but results in a lot of flickering as the view is rapidly rotated.
I'd like to be able to draw to an off-screen bitmap (i.e. without an HWND), then draw my icon onto the bitmap, then finally StretchBlt the bitmap to the window using double-buffering. We do this in other contexts (such as zooming into an image which does not need OpenGL) and it works great. My problem is that I am an OpenGL novice, and all attempts at starting with the DC of the off-screen bitmap and creating an HWND from this DC fail, usually because of selecting a pixel format for the DC.
There are a few questions asking similar things here on StackOverflow (e.g. this question without an accepted answer). Is this possible? If so, is there a relatively straightforward tutorial describing the procedure? If the process is extremely complex and requires detailed OpenGL knowledge, then I may just have to leave it and live with the flickering, because it is a rarely used mode in our software.
Just draw the icon with OpenGL, using a textured quad.
All this drawing to a bitmap, copying to a DC and StretchBlt-ing involves several round trips to and from graphics memory (which wastes bandwidth), and StretchBlt will likely not be GPU accelerated. All in all, what you want to do is inefficient and may even reduce quality.
I presume you have the icon stored in your executable as a resource. The simplest way to go about it is to create a memory DC (CreateCompatibleDC) with a DIBSECTION (CreateDIBSection), draw the icon to that, and load the DIBSECTION data into an OpenGL texture. Then, to draw the icon, use glViewport to select the destination rectangle in window coordinates and an identity transform to draw a rectangle covering the whole viewport (position values (-1,-1)→(1,1) and texture coordinate values (0,0)→(1,1) give the right outcome).
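A rough sketch of that with the fixed-function pipeline; iconTexture and the destination rectangle are placeholders for whatever your program already has:

void DrawIconOverlay(GLuint iconTexture, int x, int y, int w, int h)
{
    glViewport(x, y, w, h);          // destination rectangle in window coordinates

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                // identity transforms: vertices are already in NDC
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, iconTexture);

    glBegin(GL_QUADS);               // quad covering the whole viewport, (-1,-1) to (1,1)
        glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
        glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
        glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
        glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
}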
Important side fix: in case your program does silly things like setting the viewport and the fixed-function GL_PROJECTION matrix in a window-resize handler, clean up that anti-pattern and move this to where it belongs: the drawing code.

SDL resetting glViewport

I'm testing supporting multiple resolutions in an application using SDL2 with OpenGL. To create my "letterbox" functionality I set my glViewport to an appropriate value and everything works perfectly.
However, if I create my window with the SDL_WINDOW_ALLOW_HIGHDPI flag set, whenever I move my window (after receiving the SDL_WINDOWEVENT_MOVED event) SDL modifies the viewport to the full size of the window, which can be verified by calling SDL_GL_GetDrawableSize during the event.
If I do not set SDL_WINDOW_ALLOW_HIGHDPI when creating the window, the viewport is not reset. I believe this to be a bug, but cannot find anything in the SDL Bugzilla, so I thought to ask whether anyone has seen similar behavior.
You may need to have a retina MacBook Pro to experience this behavior.
Just do what you should be doing anyway: always (re)set the viewport at the beginning of drawing each frame. As soon as you want to implement a HUD, use framebuffer objects, or do similar things, you'll have to set the viewport (several times) while drawing each frame anyway.
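A minimal sketch of that habit with SDL2; ComputeLetterbox stands in for whatever aspect-ratio math you already have:

void DrawFrame(SDL_Window* window)
{
    int drawableW = 0, drawableH = 0;
    SDL_GL_GetDrawableSize(window, &drawableW, &drawableH);  // pixels, high-DPI aware

    SDL_Rect vp = ComputeLetterbox(drawableW, drawableH);    // placeholder for your own math
    glViewport(vp.x, vp.y, vp.w, vp.h);                      // reset every frame, unconditionally

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw scene ... */
    SDL_GL_SwapWindow(window);
}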

Is it possible to vsync SDL_SetVideoMode?

I am trying to change the size of the window of my app with:
mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
Although I am using vsync swapbuffers (in driver xorg-video-ati), I can see flickering when the window size changes (I guess one or more black frames):
void Video::draw()
{
    if (videoChanged) {
        mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
        scene->init();   // update glFrustum & glViewport
    }
    scene->draw();
    SDL_GL_SwapBuffers();
}
So please, does anyone know:
Is SDL_SetVideoMode not vsync'ed the way SDL_GL_SwapBuffers() is?
Or is it destroying the window and creating another one, with the buffer black in the meantime?
Does anyone know working code to do this? Maybe in FreeGLUT?
In SDL-1, when you're using a windowed video mode, the window is completely torn down and a new one created when changing the video mode. Of course there's some undefined data in between, which is perceived as flicker. This issue has been addressed in SDL-2; either use that, or use a different OpenGL framework that resizes windows without going through a full window recreation.
If you're using a FULLSCREEN video mode then something different happens additionally:
A change of the video mode actually changes the video signal timings going from the graphics card to the display. After such a change the display has to find synchronization with the new settings, which takes some time. This of course comes with some flickering, as the display may try to show a frame using the old settings until it detects that the timings no longer match. It's a physical effect and there's nothing you can do in software to fix it, other than not changing the video mode at all.
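For comparison, a minimal SDL2 sketch of a windowed resize: the window and GL context survive the size change, so there is no undefined frame in between (newWidth/newHeight are assumptions):

SDL_Window* window = SDL_CreateWindow("app",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
SDL_GLContext ctx = SDL_GL_CreateContext(window);
SDL_GL_SetSwapInterval(1);                        // vsync'ed swaps

/* later, when the size needs to change: */
SDL_SetWindowSize(window, newWidth, newHeight);   // no teardown, no black frames
/* update glViewport/projection, then draw and SDL_GL_SwapWindow(window) as usual */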

C++ SDL How to undo a BlitSurface

I have used SDL to display an image:
SDL_BlitSurface(sprite, NULL, screen, NULL);
My question is: Is it possible to remove the image from the window?
Generally speaking, no. SDL_BlitSurface() overwrites (a subset of) the contents of the destination surface, essentially the same as an assignment to an array of pixel data. One solution is to redraw the entire screen every frame, first clearing with:
SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, r, g, b));
For better performance, you could keep track of which regions on the screen are “dirty” and need to be repainted each frame, and only repaint those regions. SDL offers some functions for doing that (SDL_UpdateRect() and SDL_UpdateRects()) but I wouldn’t bother unless rendering speed becomes a serious issue. Most SDL applications seem to be able to do 30–50 frames per second; beyond that, you’ll want to look at OpenGL.
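A minimal sketch of the redraw-everything approach; sprite visibility and position stand in for your own program state:

void RedrawFrame(SDL_Surface* screen, SDL_Surface* sprite, bool spriteVisible, int x, int y)
{
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0));  // clear the whole screen

    if (spriteVisible)                      // simply skip the blit to "remove" the image
    {
        SDL_Rect dst = { (Sint16)x, (Sint16)y, 0, 0 };
        SDL_BlitSurface(sprite, NULL, screen, &dst);
    }

    SDL_Flip(screen);                       // present the finished frame
}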

Improving window resize behaviour, possibly by manually setting bigger framebuffer size

I was considering using GLFW in my application while developing on a Mac.
After successfully writing a very simple program to render a triangle on a colored background, I noticed that when resizing the window it takes quite some time to re-render the scene, which I suspect is due to the framebuffer resize.
This is not the case when I repeat the experiment with NSOpenGLView. Is there a way to hint to GLFW to use a bigger framebuffer size on start, to avoid expensive resizes?
I am using GLFW 3.
Could you also help me with enabling high DPI for a Retina display? I couldn't find anything in the docs on that, but it is supported in version 3.
Obtaining a larger framebuffer
Try to obtain a large initial framebuffer by calling glfwCreateWindow() with large values for width and height, and immediately switch to displaying a smaller window by calling glfwSetWindowSize() with the actual initial window size desired.
Alternately, register your own framebuffer size callback function using glfwSetFramebufferSizeCallback() and set the framebuffer to a large size according to your requirements, as follows:
void custom_fbsize_callback(GLFWwindow* window, int width, int height)
{
    /* use system width, height */
    /* glViewport(0, 0, width, height); */

    /* use custom width, height */
    glViewport(0, 0, <CUSTOM_WIDTH>, <CUSTOM_HEIGHT>);
}
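Registering the callback is then a one-liner right after window creation, along these lines:

glfwSetFramebufferSizeCallback(window, custom_fbsize_callback);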
UPDATE:
The render-pipeline stall seen during the window resize (and window drag) operation is due to the blocking behavior implemented in the window manager.
To mitigate this in your app, you need to install handler functions for the window messages and run the render pipeline in a separate thread, independent of the main app (GUI) thread.
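A stripped-down sketch of that structure with GLFW and std::thread; the event loop (and the resize callback) stay on the main thread while the worker owns the GL context. The framebuffer size is passed through atomics because most GLFW calls are main-thread only:

#include <GLFW/glfw3.h>
#include <atomic>
#include <thread>

std::atomic<bool> running{true};
std::atomic<int>  fbWidth{800}, fbHeight{600};

void fbsize_callback(GLFWwindow*, int w, int h)    // runs on the main thread
{
    fbWidth  = w;
    fbHeight = h;
}

void renderThread(GLFWwindow* window)
{
    glfwMakeContextCurrent(window);                // context is current on this thread only
    while (running)
    {
        glViewport(0, 0, fbWidth, fbHeight);
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw scene ... */
        glfwSwapBuffers(window);                   // swapping is allowed from any thread
    }
}

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(800, 600, "demo", NULL, NULL);
    glfwSetFramebufferSizeCallback(window, fbsize_callback);

    std::thread renderer(renderThread, window);
    while (!glfwWindowShouldClose(window))
        glfwWaitEvents();                          // GUI/event loop stays on the main thread

    running = false;
    renderer.join();
    glfwTerminate();
}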
High DPI support
The GLFW documentation says :
GLFW now supports high-DPI monitors on both Windows and OS X, giving windows full resolution framebuffers where other UI elements are scaled up. To achieve this, glfwGetFramebufferSize() and glfwSetFramebufferSizeCallback() have been added. These work with pixels, while the rest of the GLFW API works with screen coordinates.
AFAIK, that seems to be pretty much everything about high DPI in the documentation.
Going through the code we can see that on Windows, GLFW calls SetProcessDPIAware() during platform init. Currently I am not able to find any similar code for high-DPI support on Mac.
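In practice that means using the framebuffer size (in pixels) for glViewport and the window size (in screen coordinates) for everything UI-related, along these lines:

int winW, winH, fbW, fbH;
glfwGetWindowSize(window, &winW, &winH);        // screen coordinates, e.g. 800 x 600
glfwGetFramebufferSize(window, &fbW, &fbH);     // pixels, e.g. 1600 x 1200 on a 2x Retina display
glViewport(0, 0, fbW, fbH);                     // GL always wants pixel sizes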