Screen Tearing in GDI - c++

I have a C/C++ MFC application that draws an image with StretchDIBits in a window and moves it from left to right, and screen tearing happens in that window.
Please understand I am not talking about flickering but screen tearing: flickering is more like the whole window blinking, while tearing means some parts of the image are out of sync as the image moves right or left, so a moving vertical line briefly looks broken for a very short period of time.
At first I thought StretchDIBits was the cause, but changing it to SetDIBitsToDevice didn't help at all. So I now suspect GDI itself, and googling about it supported that suspicion.
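For context, the paint path is essentially just this one GDI call; a simplified sketch (the member names and dimensions are made up, and the real code computes the moving offset elsewhere):

void CMyView::OnPaint()
{
    CPaintDC dc(this);
    CRect rc;
    GetClientRect(&rc);
    // m_bmi / m_bits describe the DIB that scrolls horizontally (placeholders).
    ::StretchDIBits(dc.GetSafeHdc(),
                    0, 0, rc.Width(), rc.Height(),                 // destination
                    0, 0, m_bmi.bmiHeader.biWidth,
                    abs(m_bmi.bmiHeader.biHeight),                 // source
                    m_bits, &m_bmi, DIB_RGB_COLORS, SRCCOPY);
}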
I saw this article to fix the problem but it'd be cleaner to directly use DirectX in my opinion.
So at this point I am thinking about using DirectX or OpenGL to prevent the tearing, but I am not sure whether that approach will work or whether there is a better one. So my questions are:
Would going with OpenGL or DirectX solve this problem?
Is there any better approach than going with OpenGL and DirectX?
Any clue will be appreciated.
To save time, please don't recommend not to use MFC as this is one of the requirements.

Related

How to properly sync frames in freeglut?

How do I enable something like v-sync in a freeglut program? Is there a way to reduce screen tearing, such as properly syncing the frames pushed out of the program with the monitor refresh rate, or possibly a library that already takes care of this? I have found tutorials for "wglSwapIntervalEXT", but I have not found any simple tutorial in its bare form. My freeglut program runs fullscreen, and there seems to be screen tearing that I can't figure out how to fix.
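For what it's worth, the bare form of wglSwapIntervalEXT is only a few lines; a minimal sketch, assuming a freeglut window has already been created so an OpenGL context is current (the helper name is made up, error handling omitted):

#include <windows.h>      // wglGetProcAddress
#include <GL/freeglut.h>

// Call once after glutCreateWindow(), while the GL context is current.
void enableVSync()
{
    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = wait for one vertical retrace per buffer swap
}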

C++ Allegro visual glitch

I am practising with the Allegro library in C++, but I am running into an issue: while using large images for parallax backgrounds I get a constant sort of load/glitch scrolling down the screen, making all my images flicker for a bit. Is there a way to load backgrounds without this issue? The flicker does not appear when I print-screen.
Thanks
The flickering is most likely a result of you redrawing your scene, and the monitor refreshing partway through.
The cure for this is to use double buffering. Read this:
http://wiki.allegro.cc/index.php?title=Double_buffering
There is another artifact called 'tearing', which is caused by blitting your buffer during a refresh cycle. This is generally solved by waiting for a vertical sync (retrace) and then drawing, but that's a little oldschool now that most of us use libraries such as OpenGL or DirectX to talk to our graphics hardware.
Nevertheless, Allegro provides a function that waits for the vertical retrace to begin, which is the time at which you can safely blit your buffer without worrying about tearing. See here:
https://www.allegro.cc/manual/4/api/graphics-modes/vsync
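In old-school Allegro 4 terms the whole pattern is short; a minimal sketch, assuming allegro_init() and set_gfx_mode() have already run (the function name is made up):

#include <allegro.h>

BITMAP *buffer = create_bitmap(SCREEN_W, SCREEN_H);

void render_frame()
{
    clear_bitmap(buffer);
    // ... draw the whole scene into 'buffer' here ...
    vsync();                                              // wait for the retrace
    blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H); // then copy it in one go
}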
I cannot promise that this is the solution, but looking at your code, I don't understand why you are creating multiple buffers.
bufDisplay = al_create_bitmap(WIDTH, HEIGHT);
buffer = al_create_bitmap(WIDTH, HEIGHT);
Unless you are doing some type of special effect that requires buffers, they are unnecessary. Allegro 5 already provides a double buffer with default settings.
Just draw everything to the default target bitmap (the display's back buffer), and then al_flip_display().
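That loop, in minimal Allegro 5 form (display creation and the event loop trimmed; names are made up):

#include <allegro5/allegro.h>

// Allegro 5 already double-buffers the display: draw to the default target
// bitmap (the back buffer) and flip once per frame.
void render_frame(ALLEGRO_BITMAP *background, ALLEGRO_BITMAP *sprite, float x)
{
    al_clear_to_color(al_map_rgb(0, 0, 0));
    al_draw_bitmap(background, 0, 0, 0);
    al_draw_bitmap(sprite, x, 100, 0);
    al_flip_display();   // request vsync beforehand with
                         // al_set_new_display_option(ALLEGRO_VSYNC, 1, ALLEGRO_SUGGEST)
}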
If you want to center or scale your output to a different sized window, you can usually just use transformations.
I don't know why you are calling Sleep(8).
If using Windows, you could switch to using OpenGL (set the ALLEGRO_OPENGL display flag).
You should try other Allegro games and demos (plenty come with the source) to see if it's a problem on all of them.

OpenGL flickering/damaged with window resize and DWM active

I have a wxWidgets application that has a number of child opengl windows. I'm using my own GL canvas class, not the wx one. The windows share their OpenGL context.
I don't think the fact it is wxwidgets is really relevant here.
The OpenGL windows are children of windows that are siblings of one another, contained within a tab control. It's kind of an MDI-style interface, but it is not an MDI window. Each one can be individually resized. All works lovely unless Aero is enabled and the DWM is active.
Resizing any window (not even the OpenGL ones) occasionally causes all of the OpenGL windows to flicker with a stale backing-store view containing whatever non-OpenGL rubbish happened to be on the screen at that point. This ONLY happens with Aero enabled.
I'm pretty certain that this is the DWM not actually having the opengl contents on its drawing surface backing store and the window not being repainted at the right moment.
I've tried so many things to get around this. I do have a solution, but it is not very nice: it involves reading the framebuffer with glReadPixels into a DIB and then blitting that to the paint DC in my onPaint routine. This workaround is only enabled when DWM is active, but I'd rather not have to do it at all as it hurts performance slightly (though not too badly on a capable system - the scenes are relatively simple 3D graphs). Also, mixing GDI and OpenGL is not recommended, but this approach works, surprisingly. I can live with it for now, but I'd rather not. I still have to do this in WM_PRINT if I want to take a screenshot of the child window anyway; I don't see a way around that.
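Stripped down, the workaround looks roughly like this (the function name and details are placeholders, the real code differs; needs <windows.h>, <vector> and the GL headers):

// Rough sketch of the glReadPixels -> DIB -> paint DC fallback.
void GLCanvas::blitGLToDC(HDC hdc, int width, int height)
{
    std::vector<BYTE> pixels(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;     // bottom-up DIB matches GL's origin
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    ::SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                        pixels.data(), &bmi, DIB_RGB_COLORS);
}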
Does anyone know of a better solution to this?
Before anyone asks I definitely do the following:
Window class has CS_OWNDC
WM_ERASEBACKGROUND does nothing and returns TRUE.
Double Buffering is enabled.
Windows have the WS_CLIPSIBLINGS and WS_CLIPCHILDREN window styles.
In my resize event handler I immediately repaint the window.
I've tried:
Setting PFD_SUPPORT_COMPOSITION in the pixel format descriptor.
Not using a wxPaintDC in the paint handler and calling ::ValidateRect(hwnd, NULL) instead.
Handling WM_NCPAINT and excluding the client area
Disabling NC paint via the DWM API
Excluding the client area in the paint event
Calling glFlush and/or glFinish before and after the buffer swap.
Invalidating the window at every paint event (as a test!) - still flickers!
Not using a shared GL context.
Disabling double buffering.
Writing to GL_FRONT_AND_BACK
Disabling DWM is not an option.
And as far as I am aware this is even a problem if you are using Direct3D instead of OpenGL, though I have not tested this as it represents a lot of work.
This is a longshot, but I just solved exactly this same problem myself.
The longshot part comes in because we're doing owner draw of the outline of a captionless group box that surrounds our OpenGL window (i.e., to make a nice little border), and that may not describe your case.
What we found caused the problem was this:
We had been using a RoundRect() call (with a HOLLOW_BRUSH) to draw the outline of the group box. Changing it to MoveToEx() and LineTo() calls, so that JUST the lines are drawn and nothing gets done inside the group box, kept GDI from trying to unexpectedly repaint the whole content of the control. It's possible there's a difference in invalidation logic (or we somehow had a bug in loading the intended hollow brush). We're still investigating.
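For illustration, the change was roughly the following (coordinates and rect name are placeholders):

// Old: RoundRect() with a hollow brush for the group-box outline.
HGDIOBJ oldBrush = SelectObject(hdc, GetStockObject(HOLLOW_BRUSH));
RoundRect(hdc, rc.left, rc.top, rc.right, rc.bottom, 6, 6);
SelectObject(hdc, oldBrush);

// New: draw only the four edges so nothing inside the box is touched.
MoveToEx(hdc, rc.left, rc.top, NULL);
LineTo(hdc, rc.right - 1, rc.top);          // top
LineTo(hdc, rc.right - 1, rc.bottom - 1);   // right
LineTo(hdc, rc.left, rc.bottom - 1);        // bottom
LineTo(hdc, rc.left, rc.top);               // left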
-Noel
My app has only a single OpenGL window (the main window) but I ran into some nasty DWM tearing issues on window resize and I wonder if one of the solutions may work for you.
First of all, I found that during window resize there are at least two different bad guys who want to "help" you by modifying your client area before you have a chance to update the window yourself, creating flicker.
The first bad guy dates back to XP/Vista/7: a BitBlt inside the SetWindowPos() that Windows does internally during window resize. It can be eliminated with a trick involving intercepting WM_NCCALCSIZE, or another trick involving intercepting WM_WINDOWPOSCHANGING.
In Windows 8/10 we still have that problem but we have a new bad guy, the Aero DWM.exe window manager, who will do his own different kind of BitBlt when he thinks you are "behind" updating the screen.
I suspect that the rubbish pixels you are seeing might actually be an intentional and very very poor attempt by DWM to fill in something "acceptable" while it waits for you to draw. I discovered that DWM extends the edge pixels of old client area data when it blits the new client area, which is insane.
Unfortunately, I don't know of any 100% solution to prevent DWM from doing this, but I do have a timing hack that greatly reduces the frequency of it.
For source code to the WM_NCCALCSIZE/WM_WINDOWPOSCHANGING hack as well as the DWM timing hack, please see:
How to smooth ugly jitter/flicker/jumping when resizing windows, especially dragging left/top border (Win 7-10; bg, bitblt and DWM)?
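For a flavour of it, one common form of the WM_WINDOWPOSCHANGING trick is simply to force SWP_NOCOPYBITS so Windows never copies the stale client pixels during its internal SetWindowPos(); a minimal window-procedure sketch (see the linked question for the full treatment):

case WM_WINDOWPOSCHANGING:
    // Stop SetWindowPos() from BitBlt-ing the old client-area pixels into the
    // resized window; the whole client area gets repainted anyway.
    reinterpret_cast<WINDOWPOS *>(lParam)->flags |= SWP_NOCOPYBITS;
    return DefWindowProc(hwnd, msg, wParam, lParam);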
Hmm, maybe you have run into the same issue: if you are using the "new" MFC, it will create an application with tabs and a window splitter.
The splitter has some logic (I am guessing somewhere around the transparent window and drawing XOR lines for the split) that causes this behavior. Remove the splitter to confirm it resolves your issue. If you need split functionality, put in a different splitter.
Also, the tabs allow docking and again splitting the windows, which has the same issue -- remove/replace.
Good luck,
Igor

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, it is painted only when its state has changed, etc.
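In code, the WM_PAINT step is roughly the following (simplified sketch; the member names are placeholders):

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    // Compose into the offscreen bitmap first (offscreenBitmap is a Gdiplus::Bitmap*).
    Gdiplus::Graphics backGfx(offscreenBitmap);
    backGfx.Clear(Gdiplus::Color(255, 240, 240, 240));
    // ... draw only the controls whose state has changed ...

    // Then copy the finished frame to the window in a single call.
    Gdiplus::Graphics screenGfx(hdc);
    screenGfx.DrawImage(offscreenBitmap, 0, 0);

    EndPaint(hwnd, &ps);
    return 0;
}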
The system is working fine with one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is only half done another image drawing starts.
Now you may think this is the common artifact seen in many other applications, caused by the fact that resizing the backbuffer is not always as fast as resizing the window, but in other applications I noticed that although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (usually just a white background shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes fully before another starts, but strangely even that didn't work for me!
I have heard that GDI is not hardware accelerated on Vista; could that be the problem?
Also, I wonder how frameworks such as Qt render a windowless GUI so smoothly: even if you resize a complex Qt GUI window very fast, only negligible artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a second option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but then I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user-interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing - nothing seems to give that level of control, and I don't know a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
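If it is the erase message, the usual fix is to claim the background is already handled and do all painting, background included, from the offscreen bitmap in WM_PAINT; a minimal sketch:

case WM_ERASEBKGND:
    // Non-zero return tells Windows the background has been erased, so it
    // won't clear the client area before WM_PAINT redraws it from the buffer.
    return 1;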
Also, if you need fast response from your GUI I would advise against GDI+.

How to sync page-flips with vertical retrace in a windowed SDL application?

I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. IOW, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working in Windows XP ATM, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to using SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring when the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15-puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. I'm just not quite certain yet.
On Windows, as I said before, synchronising page-flips to the vertical sync in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that's not exactly wrong, but it is misleading. I dug out some old source code using WinG, and there was a timer triggering the page-blits. WinG will run at ludicrous speed, just as I was surprised to see SDL doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to the SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and be sure, perhaps, they are applied to the same frame). So maybe (on platforms where vertical retrace stuff is possible) it is being done in SDL, similarly to in WinG - no waiting, but no tearing either.
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is uncertain. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the library's docs, the flow of your application will be like this:
#include "SDL_framerate.h"                // framerate control lives in SDL_gfx

// initialization code
FPSmanager fpsManager;                    // note: the SDL_gfx type is FPSmanager
SDL_initFramerate(&fpsManager);           // defaults to 30 FPS
SDL_setFramerate(&fpsManager, 60);        // desired FPS

// in the render loop, after drawing and flipping:
SDL_framerateDelay(&fpsManager);          // sleeps just long enough to hold the rate
Also, you may look at the source code to create your own framerate control.