I am learning to use the Allegro library with C++, but I am running into an issue: when using large images for parallax backgrounds I get a constant sort of load/glitch scrolling down the screen, making all my images flicker for a bit. Is there a way to load backgrounds without this issue? The flicker does not appear when I capture the screen with Print Screen.
Thanks
The flickering is most likely a result of you redrawing your scene while the monitor is partway through a refresh.
The cure for this is to use double buffering. Read this:
http://wiki.allegro.cc/index.php?title=Double_buffering
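For reference, here is a minimal sketch of that approach, assuming the classic Allegro 4 API that the linked wiki page covers (draw_scene is a placeholder for your own drawing code, not something from your program):

#include <allegro.h>

int main()
{
    allegro_init();
    install_keyboard();
    set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);

    // One memory bitmap the size of the screen acts as the back buffer.
    BITMAP *buffer = create_bitmap(SCREEN_W, SCREEN_H);

    while (!key[KEY_ESC])
    {
        // Draw the whole frame into the memory bitmap, never directly to 'screen'.
        clear_bitmap(buffer);
        // draw_scene(buffer);   // placeholder: parallax layers, sprites, ...

        // Copy the finished frame to the screen in one operation.
        blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
    }

    destroy_bitmap(buffer);
    return 0;
}
END_OF_MAIN()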
There is another artifact called 'tearing', which is caused by blitting your buffer during a refresh cycle. This is generally solved by waiting for a vertical sync (retrace) and then drawing, but that's a little oldschool now that most of us use libraries such as OpenGL or DirectX to talk to our graphics hardware.
Nevertheless, Allegro provides a function that waits for the vertical retrace to begin, which is the time at which you can safely blit your buffer without worrying about tearing. See here:
https://www.allegro.cc/manual/4/api/graphics-modes/vsync
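With the classic API that boils down to something like this (reusing the buffer bitmap from the sketch above):

vsync();                                               // wait for the retrace to begin
blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);  // then copy the back buffer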
I cannot promise that this is the solution, but looking at your code, I don't understand why you are creating multiple buffers.
bufDisplay = al_create_bitmap(WIDTH, HEIGHT);
buffer = al_create_bitmap(WIDTH, HEIGHT);
Unless you are doing some type of special effect that requires buffers, they are unnecessary. Allegro 5 already provides a double buffer with default settings.
Just draw everything to the default target bitmap (the display's back buffer), and then al_flip_display().
If you want to center or scale your output to a different sized window, you can usually just use transformations.
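For illustration, a minimal sketch of that flow in Allegro 5 (the display, background bitmap and scale factors are placeholders, not taken from your code):

// Draw straight to the display's back buffer, then flip - no extra buffers needed.
al_set_target_backbuffer(display);

// Optional: scale/center the output with a transformation instead of another bitmap.
ALLEGRO_TRANSFORM t;
al_identity_transform(&t);
al_scale_transform(&t, scale_x, scale_y);   // placeholder scale factors
al_use_transform(&t);

al_clear_to_color(al_map_rgb(0, 0, 0));
al_draw_bitmap(background, 0, 0, 0);        // placeholder bitmap
// ... draw the rest of the scene ...

al_flip_display();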
I don't know why you are calling Sleep(8).
If using Windows, you could switch to using OpenGL (set the ALLEGRO_OPENGL display flag).
You should try other Allegro games and demos (plenty come with the source) to see if it's a problem on all of them.
Related
I don't properly understand the functionality of glutSwapBuffers. In my code, if I don't use glutSwapBuffers then no background color appears in the window and it remains transparent, capturing whatever is behind it. I thought the background color is actually assigned by glClearColor, so how come I don't get any background color without using glutSwapBuffers?
This question comes up over and over; I think what you are describing is actually what happens when you draw exclusively into the front buffer under a compositing window manager.
Without swapping buffers, the compositor does not draw your window correctly, so the window appears transparent. Double buffering is required for compositing window managers, and it also seems to be required for many hybrid integrated/discrete GPU implementations (e.g. NVIDIA Optimus). In short, there is no real reason to use single-buffered rendering on a desktop platform these days.
To be certain, does your situation resemble this? This screenshot shows what happens when a window that only uses single-buffering is moved in a compositing window manager.
If so, a more thorough explanation can be found here.
OpenGL is usually configured to use double buffering.
You first draw to one buffer, then swap it with the second one to present it on the screen.
Without calling glutSwapBuffers you will not see anything, and that is correct behavior.
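A minimal sketch of the usual GLUT setup, for reference (the clear color is arbitrary):

#include <GL/glut.h>

void display(void)
{
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);  // background color
    glClear(GL_COLOR_BUFFER_BIT);          // clears the *back* buffer
    // ... draw the scene into the back buffer ...
    glutSwapBuffers();                     // only now does the frame reach the screen
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  // ask for a double-buffered visual
    glutCreateWindow("double buffering");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}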
About double (and more) buffering in OpenGL
I've written an OpenGL application in Linux through GLX. It uses double buffering with glXSwapBuffers and Sync to VBlank set via NVIDIA X Server Settings. I'm using Compiz and have smooth moving of windows and no tearing (Sync to VBlank enabled in Compiz settings).
But when I try to move or resize the OpenGL window, or move other windows through the area occupied by the OpenGL window, the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is smooth as always.
Moreover, the problem only arises if the OpenGL application is in its loop of producing frames of animation and therefore swapping the buffers.
If the content is static and the application is not swapping the buffers, there are no problems; moving the various windows is smooth.
Could it be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a Compiz SNAFU very well. Every window resize triggers the recreation of a texture that will receive the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately the Compiz developers don't seem to be the brightest, because they did not realize there's an obvious solution to this problem: windows in X11 can be reparented without much cost (every window manager does this several times; it's called stacking), and Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around, reparent windows that are about to be resized into it, take its constant-sized window texture from there, and only after the resize operation finishes reparent the window back into its decoration frame?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.
I have an app that mixes OpenGL with Motif. The big main window that has OpenGL in it redraws fine. But, the sub windows sitting on top of it all go black. Specifically, just the parts of those subwindows that are right on top of the main window. Those subwindows all have just Motif code in them (except for one).
The app doesn't freeze up or dump core. Data is still flowing, and as text fields, etcetera of various subwindows get updated, those parts redraw. Dragging windows across each other or minimizing/unminimizing also trigger redraws. The timing of the "blackout" is random. I run the same 1-hour dataset every time and sometimes the blackout happens 5 minutes into the run and sometimes 30 minutes in, etc.
I went through the process of turning off sections of code until the problem stopped. I narrowed it down more and more and found it had to do with the use of the depth buffer. In other words, when I comment out the glEnable(GL_DEPTH_TEST) call, the problem goes away. So the problem seems to have something to do with the use of the depth buffer.
As far as I can tell, the depth buffer is being cleared before redrawing is done, as it should be. There are if-statements wrapped around the glClear calls, so I put messages in there and confirmed that the glClear of the depth buffer is indeed happening even when the blackout happens. Also, glGetError didn't return anything.
UPDATE 6/30/2014
Looks like there's still at least one person looking at this (thanks, UltraJoe). If I remember correctly, it turned out that it was sometimes swapping buffers without first defining the back buffer and drawing anything to it. It wasn't obvious to me before because it's such a long routine. There were some other minor things I had to clean-up, but I think that was the main cause.
How did you create the OpenGL window/context? Did you just get the X11 Window handle of your Motif main window and create the OpenGL context on that one? Or did you create your own sub-window within that Motif window for OpenGL?
You should not use any window managed by a toolkit directly, unless it is a widget meant for exclusive OpenGL use. The reason is that most toolkits don't create their own sub-window for each and every element, and they also reuse parts of their graphics resources.
Thus you should create your own sub-window for OpenGL, and maybe a further GLX window on top of it using glXCreateWindow as well.
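Roughly, and leaving out all error checking, the dedicated OpenGL sub-window could be created like this (a sketch; dpy, parent, width and height are assumed to come from your Motif code, e.g. via XtDisplay/XtWindow):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

// Pick a double-buffered, depth-buffered FBConfig.
static const int attribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_DOUBLEBUFFER,  True,
    GLX_DEPTH_SIZE,    24,
    None
};

GLXContext create_gl_subwindow(Display *dpy, Window parent, int width, int height)
{
    int n = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
    XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, configs[0]);

    // A dedicated X sub-window with a matching visual and colormap.
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, parent, vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window glwin = XCreateWindow(dpy, parent, 0, 0, width, height, 0,
                                 vi->depth, InputOutput, vi->visual,
                                 CWColormap | CWBorderPixel, &swa);
    XMapWindow(dpy, glwin);

    // Wrap it in a GLXWindow and make a context current on it.
    GLXWindow glxwin = glXCreateWindow(dpy, configs[0], glwin, NULL);
    GLXContext ctx   = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
    glXMakeContextCurrent(dpy, glxwin, glxwin, ctx);

    XFree(vi);
    XFree(configs);
    return ctx;
}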
This is an old question, I know, but the answer may help someone else.
This sounds like you're selecting a bad visual for your OpenGL window, or you're creating a new colormap that's overriding the default. If at all possible, choose a DirectColor 24-plane visual for everything in your application. DirectColor visuals use read-only color cells, but 24 planes will allow every supported color to be available to every window without having to overwrite color cells.
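For example, something along these lines (just a sketch; whether a DirectColor visual is actually available depends on the X server):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

// Returns a 24-plane DirectColor visual, or NULL if the server has none.
Visual *pick_visual(Display *dpy)
{
    XVisualInfo vinfo;
    if (XMatchVisualInfo(dpy, DefaultScreen(dpy), 24, DirectColor, &vinfo))
        return vinfo.visual;
    return NULL;
}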
I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting, in other words only when its state has changed, etc.
The system is working fine, but with one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts.
Now you may think this is the common artifact seen in many other applications, given that resizing the backbuffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (usually just a white background shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that one flip finishes completely before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
Also, I wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligible artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems nice for resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please provide the links.
Thanks!
I have always wanted to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
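If so, the usual fix is to claim WM_ERASEBKGND yourself and do all painting through the offscreen bitmap; a rough sketch of the relevant window-procedure cases (backWidth/backHeight and the drawing calls are placeholders, not your actual code):

case WM_ERASEBKGND:
    return 1;  // tell Windows the background is handled; WM_PAINT repaints everything

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    // Compose the whole frame offscreen first...
    Gdiplus::Bitmap back(backWidth, backHeight);    // placeholder size variables
    Gdiplus::Graphics mem(&back);
    mem.Clear(Gdiplus::Color(255, 240, 240, 240));  // background
    // ... draw the controls into 'mem' here ...

    // ...then copy it to the window in a single blit.
    Gdiplus::Graphics screen(hdc);
    screen.DrawImage(&back, 0, 0);

    EndPaint(hwnd, &ps);
    return 0;
}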
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. IOW, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working in Windows XP ATM, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to using SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring when the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15-puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. I'm just not quite certain yet.
I said before that on Windows, synchronising page-flips to vertical syncs in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that that's not exactly wrong, but it is misleading. I dug out some old source code using WinG, and there was a timer triggering the page-blits. WinG will run at ludicrous speed, just as I was surprised to find SDL doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to the SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and be sure, perhaps, they are applied to the same frame). So maybe (on platforms where vertical retrace stuff is possible) it is being done in SDL, similarly to in WinG - no waiting, but no tearing either.
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is uncertain. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the library's docs, the flow of your application will be like this:
#include <SDL/SDL_framerate.h>  // SDL_gfx header

// initialization code
FPSmanager fpsManager;
SDL_initFramerate(&fpsManager);
SDL_setFramerate(&fpsManager, 60 /* desired FPS */);

// in the render loop
SDL_framerateDelay(&fpsManager);
Also, you may look at the source code to create your own framerate control.
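For example, a minimal hand-rolled cap using nothing but SDL_GetTicks/SDL_Delay (a sketch assuming SDL 1.2; it only paces the loop, it does not lock you to the actual retrace):

#include <SDL/SDL.h>

// Call once per frame, right before SDL_Flip/SDL_UpdateRect.
void cap_framerate(Uint32 frame_ms)  // e.g. 1000 / 60
{
    static Uint32 next_tick = 0;
    Uint32 now = SDL_GetTicks();

    if (next_tick > now)
        SDL_Delay(next_tick - now);  // sleep off the remainder of this frame
    else
        next_tick = now;             // we are running late; reset the schedule

    next_tick += frame_ms;
}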