OpenGL: will it keep the image on screen by itself?

Taking Windows as an example: when drawing graphics via GDI, one must redraw the scene (and validate the window, etc.) each time WM_PAINT arrives. This requirement is hard to miss, because skipping it corrupts the graphics pretty easily.
However, with OpenGL it seems that once the scene is displayed via swapping buffers, it persists regardless of what is done to the window. That could be a useful feature.
The question: is this behavior cross-platform and reliable? Or is this just a common but not mandatory driver characteristic that cannot be relied upon?

However, with OpenGL it seems that once the scene is displayed via swapping buffers, it persists regardless of what is done to the window.
That's definitely not the case. After swapping the buffers, the contents of the back buffer are undefined, and the contents of the front buffer are subject to the same damage as anything drawn by other means.
The question: is this behavior cross-platform and reliable?
I don't know what you mean, because the behavior you describe does not exist.
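To make that concrete, here is a minimal Win32 sketch (RenderScene is a placeholder for your own GL drawing code, and hdc is assumed to be the window's own device context, e.g. obtained once under CS_OWNDC): the scene has to be re-rendered on every WM_PAINT, exactly as with GDI, because the front buffer can be damaged at any moment.

    case WM_PAINT: {
        PAINTSTRUCT ps;
        BeginPaint(hwnd, &ps);  // validates the dirty region
        RenderScene();          // re-issue all GL draw calls (placeholder)
        SwapBuffers(hdc);       // present; the back buffer is undefined afterwards
        EndPaint(hwnd, &ps);
        return 0;
    }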

Related

Is the glutSwapBuffers command required to get a background color?

I don't properly understand the functionality of glutSwapBuffers. In my code, if I don't call glutSwapBuffers, no background color appears in the window; it remains transparent, showing whatever is behind it. I thought the background color was assigned by glClearColor, so how come I don't get any background color without calling glutSwapBuffers?
This question comes up over and over. I think what you are describing is what happens when you draw exclusively into the front buffer under a compositing window manager.
Without swapping buffers, the window is not drawn correctly, so it appears transparent. Double buffering is required by compositing window managers, and it also seems to be required by many hybrid integrated/discrete GPU implementations (e.g. NVIDIA Optimus). In short, there is no real reason to use single-buffered rendering on a desktop platform these days.
To be certain, does your situation resemble this? This screenshot shows what happens when a window that only uses single-buffering is moved in a compositing window manager.
If so, a more thorough explanation can be found here.
OpenGL is usually configured to use double buffering.
You first draw into one buffer, then swap it with the second to present it on the screen.
Without calling glutSwapBuffers you will not see anything, and that is the correct behavior.
about double (and more) buffering in opengl
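A minimal GLUT sketch of that flow; note that glClear touches only the back buffer, so nothing reaches the screen until glutSwapBuffers is called:

    #include <GL/glut.h>

    void display(void) {
        glClear(GL_COLOR_BUFFER_BIT); // clears the *back* buffer only
        // ... draw the scene here ...
        glutSwapBuffers();            // without this, the cleared frame is never shown
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // request a double-buffered visual
        glutCreateWindow("clear demo");
        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);        // the background color in question
        glutDisplayFunc(display);
        glutMainLoop();
    }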

C++ Allegro visual glitch

I am practicing with the Allegro library in C++, but I am running into an issue: while using large images for parallax backgrounds, I get a constant sort of load/glitch scrolling down the screen, making all my images flicker for a bit. Is there a way to load backgrounds without having such an issue? The flicker does not appear when I capture the screen.
Thanks
The flickering is most likely a result of your scene being redrawn while the monitor is refreshing partway through.
The cure for this is to use double buffering. Read this:
http://wiki.allegro.cc/index.php?title=Double_buffering
There is another artifact called 'tearing', which is caused by blitting your buffer during a refresh cycle. This is generally solved by waiting for the vertical sync (retrace) and then drawing, but that's a little old-school now that most of us use libraries such as OpenGL or DirectX to talk to our graphics hardware.
Nevertheless, Allegro provides a function that waits for the vertical retrace to begin, which is the time at which you can safely blit your buffer without worrying about tearing. See here:
https://www.allegro.cc/manual/4/api/graphics-modes/vsync
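A rough sketch of that retrace-then-blit pattern, assuming the Allegro 4 API that those pages document (the buffer bitmap here is a placeholder name):

    // Draw the whole frame into a memory bitmap, wait for the retrace,
    // then copy it to the screen in a single blit.
    BITMAP* buffer = create_bitmap(SCREEN_W, SCREEN_H);
    clear_to_color(buffer, makecol(0, 0, 0));
    // ... draw the parallax layers into buffer ...
    vsync();                                              // wait for the vertical retrace
    blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H); // one copy, no partial frames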
I cannot promise that this is the solution, but looking at your code, I don't understand why you are creating multiple buffers.
bufDisplay = al_create_bitmap(WIDTH, HEIGHT);
buffer = al_create_bitmap(WIDTH, HEIGHT);
Unless you are doing some type of special effect that requires extra buffers, they are unnecessary. Allegro 5 already provides a double buffer with the default settings.
Just draw everything to the default target bitmap (the display's back buffer), and then al_flip_display().
If you want to center or scale your output to a different sized window, you can usually just use transformations.
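Roughly, a frame then reduces to this sketch (display is assumed to be your ALLEGRO_DISPLAY*):

    // The display's backbuffer is already the default render target,
    // so just draw and flip; al_flip_display() is the buffer swap.
    al_set_target_backbuffer(display);
    al_clear_to_color(al_map_rgb(0, 0, 0));
    // ... al_draw_bitmap(...) calls for the background layers and sprites ...
    al_flip_display(); // present the completed frame; no manual buffers needed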
I don't know why you are calling Sleep(8).
If using Windows, you could switch to using OpenGL (set the ALLEGRO_OPENGL display flag).
You should try other Allegro games and demos (plenty come with the source) to see if it's a problem on all of them.

OpenGL flickering/damaged with window resize and DWM active

I have a wxWidgets application that has a number of child OpenGL windows. I'm using my own GL canvas class, not the wx one. The windows share their OpenGL context.
I don't think the fact that it is wxWidgets is really relevant here.
The OpenGL windows are children of windows that are siblings of one another, contained within a tab control. It is kind of an MDI-style interface, but it is not an MDI window. Each one can be individually resized. All works lovely unless Aero is enabled and the DWM is active.
Resizing any window (not even the OpenGL ones) causes all of the OpenGL windows to occasionally flicker with a stale backing-store view containing whatever non-OpenGL rubbish happened to be on the screen at that point. This ONLY happens with Aero enabled.
I'm pretty certain this is the DWM not actually having the OpenGL contents in the backing store for its drawing surface, and the window not being repainted at the right moment.
I've tried so many things to get around this. I do have a solution, but it is not very nice: it involves reading the framebuffer with glReadPixels into a DIB and then blitting that to the paint DC in my onPaint routine. This workaround is only enabled when the DWM is active, but I'd rather not have to do it at all, as it hurts performance slightly (though not too badly on a capable system; the scenes are relatively simple 3D graphs). Also, mixing GDI and OpenGL is not recommended, but this approach works, surprisingly. I can live with it for now, but I'd rather not have to. I still have to do this in WM_PRINT if I want to take a screenshot of the child window anyway; I don't see a way around that.
Does anyone know of a better solution to this?
Before anyone asks I definitely do the following:
Window class has CS_OWNDC
WM_ERASEBKGND does nothing and returns TRUE.
Double Buffering is enabled.
Windows have the WS_CLIPSIBLINGS and WS_CLIPCHILDREN window styles.
In my resize event handler I immediately repaint the window.
I've tried:
Setting PFD_SUPPORT_COMPOSITION in the pixel format descriptor.
Not using a wxPaintDC in the paint handler and calling ::ValidateRect(hwnd, NULL) instead.
Handling WM_NCPAINT and excluding the client area
Disabling NC paint via the DWM API
Excluding the client area in the paint event
Calling glFlush and/or glFinish before and after the buffer swap.
Invalidating the window at every paint event (as a test!) - still flickers!
Not using a shared GL context.
Disabling double buffering.
Writing to GL_FRONT_AND_BACK
Disabling DWM is not an option.
And as far as I am aware this is a problem even if you are using Direct3D instead of OpenGL, though I have not tested this, as it represents a lot of work.
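For reference, the PFD_SUPPORT_COMPOSITION attempt from the list above amounts to a pixel format along these lines (a sketch; note that the flag is documented as incompatible with PFD_SUPPORT_GDI, so that flag must be left out):

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_SUPPORT_COMPOSITION; // no PFD_SUPPORT_GDI
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);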
This is a longshot, but I just solved exactly this same problem myself.
The longshot part comes in because we're doing owner draw of the outline of a captionless group box that surrounds our OpenGL window (i.e., to make a nice little border), and that may not describe your case.
What we found caused the problem was this:
We had been using a RoundRect() call (with a HOLLOW_BRUSH) to draw the outline of the group box. Changing it to MoveToEx() and LineTo() calls, to ensure that JUST the lines are drawn and nothing is done inside the group box, kept GDI from unexpectedly repainting the whole content of the control. It's possible there's a difference in invalidation logic (or we somehow had a bug in loading the intended hollow brush). We're still investigating.
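In other words, the change was roughly this (a sketch; rc stands for the group box rectangle):

    // Before: RoundRect() with a HOLLOW_BRUSH still ended up invalidating
    // the interior of the group box (see the caveats above):
    //   RoundRect(hdc, rc.left, rc.top, rc.right, rc.bottom, 8, 8);
    // After: draw only the four border lines, leaving the interior untouched:
    MoveToEx(hdc, rc.left, rc.top, NULL);
    LineTo(hdc, rc.right - 1, rc.top);
    LineTo(hdc, rc.right - 1, rc.bottom - 1);
    LineTo(hdc, rc.left, rc.bottom - 1);
    LineTo(hdc, rc.left, rc.top);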
-Noel
My app has only a single OpenGL window (the main window) but I ran into some nasty DWM tearing issues on window resize and I wonder if one of the solutions may work for you.
First of all, I found that during window resize there are at least two different bad guys who want to "help" you by modifying your client area before you have a chance to update the window yourself, creating flicker.
The first bad guy dates back to XP/Vista/7: a BitBlt inside the SetWindowPos() that Windows performs internally during window resize. It can be eliminated with a trick involving intercepting WM_NCCALCSIZE, or with another trick involving intercepting WM_WINDOWPOSCHANGING.
In Windows 8/10 we still have that problem, but there is a new bad guy: the Aero DWM.exe window manager, which does its own, different kind of BitBlt when it thinks you are "behind" in updating the screen.
I suspect that the rubbish pixels you are seeing might actually be an intentional and very, very poor attempt by DWM to fill in something "acceptable" while it waits for you to draw. I discovered that DWM extends the edge pixels of the old client-area data when it blits the new client area, which is insane.
Unfortunately, I don't know of any 100% solution to prevent DWM from doing this, but I do have a timing hack that greatly reduces the frequency of it.
For source code to the WM_NCCALCSIZE/WM_WINDOWPOSCHANGING hack as well as the DWM timing hack, please see:
How to smooth ugly jitter/flicker/jumping when resizing windows, especially dragging left/top border (Win 7-10; bg, bitblt and DWM)?
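One concrete form of the WM_WINDOWPOSCHANGING trick is to force SWP_NOCOPYBITS, which makes the internal SetWindowPos() discard the old client pixels instead of blitting them (a sketch of one variant, not the full linked answer):

    case WM_WINDOWPOSCHANGING:
        // SWP_NOCOPYBITS skips the BitBlt of the stale client area during resize,
        // removing one source of rubbish pixels.
        reinterpret_cast<WINDOWPOS*>(lParam)->flags |= SWP_NOCOPYBITS;
        return DefWindowProc(hwnd, msg, wParam, lParam);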
Hmm, maybe you have run into the same issue: if you are using the "new" MFC, it will create an application with tabs and a window splitter.
The splitter has some logic (I am guessing somewhere around the transparent window and the XOR lines drawn for the split) that causes this behavior. Remove the splitter to confirm that it resolves your issue. If you need split functionality, put in a different splitter.
Also, tabs allow docking and, again, splitting the windows, which has the same issue; remove or replace them.
Good luck,
Igor

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system; for that I am using GDI+. I cannot post the code here because it has gotten huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, it is painted only when its state has changed.
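As a sketch of the WM_PAINT step above (assuming the offscreen bitmap is a Gdiplus::Bitmap* named offscreen, and DrawBackground/DrawControls stand in for the control painting just described):

    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        {
            Gdiplus::Graphics back(offscreen); // render into the offscreen bitmap
            DrawBackground(back);              // placeholder
            DrawControls(back);                // placeholder
            Gdiplus::Graphics front(hdc);
            front.DrawImage(offscreen, 0, 0);  // single copy to the window
        }
        EndPaint(hwnd, &ps);
        return 0;
    }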
The system is working fine, with only one exception: when the window is being resized, a sort of tearing effect appears. What I mean by tearing effect I shall try to explain...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think this is a common artifact seen in many applications, since resizing the backbuffer is not always as fast as resizing the window; but I noticed that in other applications, although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (usually just a white background shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, fullscreen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that did not work for me!
I have heard that GDI is not hardware accelerated on Vista; could that be the problem?
Also, I wonder how frameworks such as Qt render a windowless GUI so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user-interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control. I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
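The WM_ERASEBKGND suppression the answer suggests is a one-liner in the window procedure; returning non-zero tells Windows the background is already handled, which removes the flash between the erase and the paint:

    case WM_ERASEBKGND:
        return 1; // the offscreen pass in WM_PAINT covers everything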

Many OpenGL drawing areas swapping buffers slowdown problem

I'm having a slowdown problem when using OpenGL with GTK (through gtkglext) and doing animations.
Essentially I have an application that does certain displays using OpenGL in a GTK app. Many windows can be open at once (and certain windows can have multiple drawing areas), so it's possible to have, say, 20-30 OpenGL drawing areas on the screen at once. None of the drawing is too heavy, and OpenGL handles it very fast.
My problem comes when all these displays are animating: it really slows down the application. After much research into the problem, I have determined that it is the OpenGL swap-buffer call that is causing my problems. When drawing in GTK you must do all your drawing in the widget's expose event. So when you want to draw, you call gtk_widget_queue_draw on the drawing-area widget, and then, when GTK processes its events, it calls the expose event serially on all the widgets that need drawing. The problem comes in after the drawing is done: I need to call swap buffers to get the actual OpenGL output onto the screen (because of double buffering). This call seems to block (because vsync is on) until the monitor refreshes. That isn't a problem with, say, 3 drawing areas on the screen, but when there are a ton of them there are a ton of swap-buffer calls, all blocking and really slowing down the app, because each swap is made in its own expose event and none of them are in sync.
My question, then, is: is there some way to synchronize all the swap-buffer calls so there isn't so much blocking? Turning off vsync (ugly in itself, because it's OS/OpenGL-implementation specific) fixes the speed problem, but then there are tearing issues. I'm not sure how multiple threads would help, because I have to call swap buffers in the GTK expose event so the drawing stays in sync with GTK, unless there is something I'm not thinking of.
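For context, each expose handler under gtkglext looks roughly like this (a sketch); the swap at the end is the call that blocks:

    #include <gtk/gtk.h>
    #include <gtk/gtkgl.h>

    static gboolean on_expose(GtkWidget* widget, GdkEventExpose* event, gpointer data) {
        GdkGLContext*  glcontext  = gtk_widget_get_gl_context(widget);
        GdkGLDrawable* gldrawable = gtk_widget_get_gl_drawable(widget);
        if (!gdk_gl_drawable_gl_begin(gldrawable, glcontext))
            return FALSE;
        // ... GL drawing for this widget ...
        gdk_gl_drawable_swap_buffers(gldrawable); // blocks until the retrace when vsync is on
        gdk_gl_drawable_gl_end(gldrawable);
        return TRUE;
    }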
Any help would be appreciated!
If you have 20+ windows, what do you expect? Each one is vsynced to the same timing: the screen refresh. Each one has to do a bunch of memory operations during that interval, all at the same time. Of course there's going to be slowdown; unless you have 20+ processors, they have to line up one behind the other.
Really, there's not much you can do besides limit the number of GL windows shown to the user.
The typical approach to tackle this problem would be to use a separate thread for each OpenGL context that has to swap.
However, OpenGL implementors could (and, I'd say, should) introduce an extension for "coordinated swaps" or something like that. There are some synchronization extensions, most notably http://www.opengl.org/registry/specs/OML/glx_sync_control.txt
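Short of such an extension, one workaround is to leave vsync enabled on a single "pacing" window and disable it on all the others, so only one swap per frame actually blocks. A sketch, assuming the GLX_SGI_swap_control extension is available (check the extension string in real code); the unsynchronized windows can still tear, so this is a trade-off rather than a fix:

    #include <GL/glx.h>

    typedef int (*PFNGLXSWAPINTERVALSGIPROC)(int);
    PFNGLXSWAPINTERVALSGIPROC pglXSwapIntervalSGI =
        (PFNGLXSWAPINTERVALSGIPROC)glXGetProcAddress(
            (const GLubyte*)"glXSwapIntervalSGI");

    // With the pacing window's context current:
    pglXSwapIntervalSGI(1); // this swap waits for the vertical retrace
    // With every other window's context current:
    pglXSwapIntervalSGI(0); // these swaps return immediately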