My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black-- even if I never call glutSwapBuffers. My background color is not black by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, it's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power you have to disable the idle loop (or have it skip all work and immediately yield back to the OS scheduler) and avoid having timers generate events.
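For a GLUT program like the one in the question, that boils down to registering no idle or timer callbacks and doing a full redraw only inside the display callback. A minimal sketch, assuming your draw_scene routine paints the two textured triangles (texture and geometry setup omitted):

```cpp
// Redraw-on-demand sketch for GLUT. draw_scene() stands in for the
// questioner's routine; real texture/vertex setup is not shown.
#include <GL/glut.h>

void draw_scene()
{
    /* the questioner's 2 textured triangles go here */
}

static void display()
{
    // Always redraw the full frame when GLUT asks for it: at this point
    // the window contents are undefined.
    glClear(GL_COLOR_BUFFER_BIT);
    draw_scene();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("two triangles");
    glutDisplayFunc(display);
    // Key point: no idle function and no timer are registered, so the
    // process sleeps in the event loop. Call glutPostRedisplay() only
    // when your data actually changes.
    glutMainLoop();
    return 0;
}
```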
Related
I am making a C++ console application with lots of wingdi graphics, mainly revolving around Rectangle() and FillRect(), but as it is wingdi, the graphics are not permanent. The graphics get reset when I minimize the console, enlarge it, scroll down, and so on. I've seen in some threads that there is no predefined solution, so you have to make one of your own.
One thing I tried was drawing the rectangle once and then attaching a thread with an infinite loop that checks the first pixel of the rectangle in every iteration; if its color is black, it draws the whole rectangle again. As silly as it sounds, that's all I could think of. I know it's utterly inefficient. Is there any other solution for this?
Although you've been able to use GDI to draw on your application's console window (presumably by calling GetConsoleWindow and then GetDC), it isn't really designed for that. The system has code for the console window that tries to redraw the window itself whenever it needs to update. It's not aware of anything your program does through GDI, so it has no way to preserve that.
If you just need to draw colorful rectangles on a console window, you can do those kinds of things with the Console API. You can set the text colors as needed and draw blocks of spaces or block characters.
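For example, a filled rectangle is just a run of character cells with a background attribute. A rough sketch using the Console API (the coordinates and color are arbitrary, purely for illustration):

```cpp
// Sketch: fill a rectangle of console cells with a colored background
// using the Console API instead of GDI.
#include <windows.h>

void FillConsoleRect(SHORT left, SHORT top, SHORT width, SHORT height, WORD attr)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    for (SHORT y = top; y < top + height; ++y)
    {
        COORD pos = { left, y };
        DWORD written = 0;
        // Paint the background color and overwrite the characters with spaces.
        FillConsoleOutputAttribute(out, attr, width, pos, &written);
        FillConsoleOutputCharacterA(out, ' ', width, pos, &written);
    }
}

int main()
{
    // A red block, 20 cells wide and 5 rows tall, starting at column 5, row 2.
    FillConsoleRect(5, 2, 20, 5, BACKGROUND_RED | BACKGROUND_INTENSITY);
    return 0;
}
```

Because those cells live in the console's own screen buffer, the console repaints them itself after minimizing or resizing, unlike pixels painted onto the window with GDI.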
If you want to do more general graphics, your program will have to create a (non-console) window, and then you can draw whatever you want whenever your window receives a WM_PAINT message.
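A bare-bones sketch of that approach (the class name, window size, and rectangle are placeholders):

```cpp
// Sketch: a plain Win32 window that does all of its drawing in WM_PAINT,
// so the system can always ask it to repaint itself.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        RECT r = { 20, 20, 200, 120 };                   // placeholder rectangle
        HBRUSH brush = CreateSolidBrush(RGB(200, 0, 0));
        FillRect(dc, &r, brush);
        DeleteObject(brush);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(nullptr, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = L"SketchWindow";                  // placeholder class name
    RegisterClassW(&wc);

    HWND hwnd = CreateWindowW(L"SketchWindow", L"Graphics window",
                              WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                              640, 480, nullptr, nullptr, hInst, nullptr);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```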
In: C++ / Win32 application (not in fullscreen) / DX9
How can I redraw the window content during resize fast and nicely enough? (Resize == the user dragging the window border.)
Different approaches:
Reset the device on each WM_SIZE/WM_PAINT. Adequate resolution, but black stripes appear on fast upscaling.
Reset the device on WM_EXITSIZEMOVE and pause rendering on WM_ENTERSIZEMOVE. Best speed, but too ugly: black stripes during resize.
I can't figure out how to use DX9's swap chain in this case.
Keep rendering and swapping buffers during resize; reset on WM_EXITSIZEMOVE. This is exactly what the official demos from the 2010 SDK do. Looks fast and acceptably nice. But, suddenly, on a slow computer the black stripes reappeared. Thin and rare, but real. That is actually very strange: synchronous rendering and Present(...) on every WM_SIZE and/or WM_PAINT should rule that out at first glance. And at second glance too. Perhaps I don't understand something important?
So, when the last method failed, I decided to ask here. Is it possible to somehow keep the DX9 surface securely glued to the window border, with no black stripes? I'd prefer to slow the scaling down, but keep it somehow in sync with DX.
I allocate a backbuffer large enough for the maximum client size of the window, and set a viewport that matches the current window client size on WM_SIZE. Then I render to, and present from, that viewport only. This way no device reset is required.
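A rough sketch of that scheme (g_device, the maximum size, and the helper names are assumptions for illustration; note that presenting a sub-rectangle requires D3DSWAPEFFECT_COPY):

```cpp
// Sketch: allocate the backbuffer once at the maximum expected client size,
// then on WM_SIZE only adjust the viewport and present the matching sub-rect.
// g_device and the size constants are assumptions; device creation is elsewhere.
#include <d3d9.h>

IDirect3DDevice9* g_device = nullptr;   // created elsewhere with a large backbuffer
const UINT MAX_W = 2560, MAX_H = 1600;  // assumed upper bound (e.g. desktop size)
// At device creation (elsewhere): d3dpp.BackBufferWidth  = MAX_W;
//                                 d3dpp.BackBufferHeight = MAX_H;
//                                 d3dpp.SwapEffect = D3DSWAPEFFECT_COPY; // needed for sub-rect Present

void OnSize(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);

    D3DVIEWPORT9 vp = {};
    vp.X = 0;
    vp.Y = 0;
    vp.Width  = rc.right;
    vp.Height = rc.bottom;
    vp.MinZ = 0.0f;
    vp.MaxZ = 1.0f;
    g_device->SetViewport(&vp);          // render only into the visible portion
}

void PresentFrame(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    // Present only the part of the (oversized) backbuffer we rendered into,
    // so no device reset is needed while the user drags the border.
    g_device->Present(&rc, &rc, nullptr, nullptr);
}
```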
I have an app that mixes OpenGL with Motif. The big main window that has OpenGL in it redraws fine. But the subwindows sitting on top of it all go black. Specifically, just the parts of those subwindows that are right on top of the main window. Those subwindows all have just Motif code in them (except for one).
The app doesn't freeze up or dump core. Data is still flowing, and as text fields, etcetera of various subwindows get updated, those parts redraw. Dragging windows across each other or minimizing/unminimizing also trigger redraws. The timing of the "blackout" is random. I run the same 1-hour dataset every time and sometimes the blackout happens 5 minutes into the run and sometimes 30 minutes in, etc.
I went through the process of turning off sections of code until the problem stopped. Narrowed it down more and more and found it had to do with the use of the depth buffer. In other words, when I comment out the glEnable(GL_DEPTH_TEST), the problem goes away. So the problem seems to have something to do with the use of the depth buffer.
As far as I can tell, the depth buffer is being cleared before redrawing is done, as it should be. There are if-statements wrapped around the glClear calls, so I put messages in there and confirmed that the glClear of the depth buffer is indeed happening even when the blackout happens. Also, glGetError didn't return anything.
UPDATE 6/30/2014
Looks like there's still at least one person looking at this (thanks, UltraJoe). If I remember correctly, it turned out that it was sometimes swapping buffers without first defining the back buffer and drawing anything to it. It wasn't obvious to me before because it's such a long routine. There were some other minor things I had to clean up, but I think that was the main cause.
How did you create the OpenGL window/context? Did you just get the X11 Window handle of your Motif main window and create the OpenGL context on that one? Or did you create your own subwindow within that Motif window for OpenGL?
You should not use any window managed by a toolkit directly, unless it is a widget meant for exclusive OpenGL use. The reason is that most toolkits don't create their own sub-window for each and every element, and they also reuse parts of their graphics resources.
Thus you should create your own sub-window for OpenGL, and maybe a further subwindow using glXCreateWindow as well.
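A sketch of what that can look like with GLX (here dpy and xparent are assumed to come from the Motif side, e.g. via XtDisplay()/XtWindow(); error handling is omitted):

```cpp
// Sketch: create a dedicated X11 subwindow + GLXWindow for OpenGL inside
// a toolkit-owned parent window, so the toolkit's own windows and
// graphics resources are never touched by OpenGL.
#include <GL/glx.h>
#include <X11/Xlib.h>

GLXContext createGLSubwindow(Display* dpy, Window xparent,
                             int width, int height, GLXWindow* outGlxWin)
{
    const int attribs[] = {
        GLX_X_RENDERABLE,  True,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER,  True,
        GLX_DEPTH_SIZE,    24,
        None
    };

    int n = 0;
    GLXFBConfig* cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
    GLXFBConfig cfg = cfgs[0];                      // no error handling in this sketch

    XVisualInfo* vi = glXGetVisualFromFBConfig(dpy, cfg);

    // Own sub-window with its own visual and colormap. border_pixel must be
    // set when the visual differs from the parent's, or XCreateWindow fails.
    XSetWindowAttributes swa = {};
    swa.colormap     = XCreateColormap(dpy, xparent, vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window sub = XCreateWindow(dpy, xparent, 0, 0, width, height, 0,
                               vi->depth, InputOutput, vi->visual,
                               CWColormap | CWBorderPixel, &swa);
    XMapWindow(dpy, sub);

    *outGlxWin = glXCreateWindow(dpy, cfg, sub, nullptr);
    GLXContext ctx = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, nullptr, True);
    glXMakeContextCurrent(dpy, *outGlxWin, *outGlxWin, ctx);

    XFree(vi);
    XFree(cfgs);
    return ctx;
}
```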
This is an old question, I know, but the answer may help someone else.
This sounds like you're selecting a bad visual for your OpenGL window, or you're creating a new colormap that's overriding the default. If at all possible, choose a DirectColor 24-plane visual for everything in your application. DirectColor visuals use read-only color cells, but 24 planes will allow every supported color to be available to every window without having to overwrite color cells.
I want to create my own tiny windowless GUI system, for which I am using GDI+. I cannot post code here because it got huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, only when the state of a control has changed is it painted (a rough sketch of this scheme follows below).
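For reference, a stripped-down sketch of the scheme described in the steps above (names like g_backBuffer and DrawControls are placeholders, since the real code was not posted; GDI+ is assumed to have been initialised with GdiplusStartup elsewhere):

```cpp
// Sketch of the described scheme: one offscreen GDI+ bitmap, recreated on
// resize, composed in WM_PAINT and copied to the window in one DrawImage call.
#include <windows.h>
#include <gdiplus.h>

Gdiplus::Bitmap* g_backBuffer = nullptr;

void DrawControls(Gdiplus::Graphics& g)
{
    /* the questioner's per-control painting (not posted) */
}

void OnSize(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    delete g_backBuffer;                                   // drop the old buffer
    g_backBuffer = new Gdiplus::Bitmap(rc.right, rc.bottom,
                                       Gdiplus::PixelFormat32bppARGB);
}

void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    Gdiplus::Graphics off(g_backBuffer);                   // draw offscreen first
    off.Clear(Gdiplus::Color(255, 240, 240, 240));         // background
    DrawControls(off);                                     // then the (dirty) controls

    Gdiplus::Graphics screen(hdc);
    screen.DrawImage(g_backBuffer, 0, 0);                  // single copy to the window
    EndPaint(hwnd, &ps);
}
```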
The system is working fine, with only one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think that this is a common artifact that happens in many other applications, because resizing the backbuffer is not always as fast as resizing the window. But I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (it's usually just white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, fullscreen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that the flipping is done completely before another flip starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX, then real-time resizing is even harder. OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework, etc.)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control. I don't know a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
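If it is, suppressing the default background erase is usually enough; a minimal sketch:

```cpp
// Sketch: suppress the default background erase so Windows does not clear
// the client area (typically to white) before every WM_PAINT during sizing.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1;          // non-zero: "background already erased", skip the clear
    // ... WM_PAINT / WM_SIZE handling as described in the question ...
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```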
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
I'm thinking of creating a drawing program with layers, and using GDI+ to display them. I want to use GDI+ because it supports transparency.
The thing is, drawing lines to the DC is very fast, but drawing directly to a bitmap is very slow. It is only fast if you lock the bits and start setting pixels. Can I draw to multiple DCs in my WM_PAINT event, then just do DrawBitmap for each layer to the MemDC? What's the best way of going about this?
Thanks
GDI+ is certainly fast enough for a drawing program. I use it (from C#) for high-speed animation (>30 fps).
It appears that you want to be able to manipulate individual pixels. This is very fast with LockBits, and although it's slightly clunky to use in C# (requiring a pointer and the unsafe tag), it doesn't seem like it would be as difficult in C++.
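A rough C++ sketch of that LockBits pattern (the per-pixel operation here is arbitrary, just to show the access pattern):

```cpp
// Sketch: direct pixel access to a GDI+ bitmap via LockBits, which is far
// faster than calling SetPixel in a loop. The per-pixel operation is arbitrary.
#include <windows.h>
#include <gdiplus.h>

void TintRed(Gdiplus::Bitmap& bmp)
{
    Gdiplus::Rect rect(0, 0, bmp.GetWidth(), bmp.GetHeight());
    Gdiplus::BitmapData data;
    bmp.LockBits(&rect, Gdiplus::ImageLockModeWrite,
                 Gdiplus::PixelFormat32bppARGB, &data);

    for (UINT y = 0; y < bmp.GetHeight(); ++y)
    {
        // Stride may include padding, so step row by row through raw bytes.
        BYTE* row = static_cast<BYTE*>(data.Scan0) + y * data.Stride;
        for (UINT x = 0; x < bmp.GetWidth(); ++x)
            row[x * 4 + 2] = 255;      // BGRA layout: byte 2 is the red channel
    }

    bmp.UnlockBits(&data);
}
```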
You probably don't want to copy from multiple layers directly to your control surface inside a paint event. Instead, this rendering should be done to an off-screen buffer (B1). After B1 is drawn with all copying/drawing operations completed, copy it to a second off-screen buffer (B2) and then invalidate/refresh your control surface. In the control's paint event, you copy from B2 to the visible surface.
You don't want to draw directly on the visible surface with a multi-step drawing operation, since a form of flickering will result (sometimes the screen will repaint itself while your code is only partway through a multi-step operation, so the user sees an occasional half-finished frame).
You can render to a single off-screen buffer, and copy from it to the visible surface in the paint event. The main complication here is that you have to deal in some way with "stray" paint events, i.e. events not caused by your intentionally invalidating the control but by something else (like a user dragging another window over yours). If you copy from the off-screen buffer to the surface and the buffer is only halfway-drawn, you'll get flicker. If you block in the paint event until the buffer drawing is completed, you'll see "form trails" on your control, which looks even worse.
The solution is the double-buffered approach described above. Stray paint events (or non-stray ones, when you invalidate) will copy from B2, which is always fully rendered and up to date, so there is no flicker. Double-buffering does use more memory, but in a drawing program that has multiple layers anyway, this isn't a big deal.
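For what it's worth, a sketch of that B1/B2 scheme with GDI+ in C++ (g_layers, B1, B2, and their creation/sizing are assumed to be managed elsewhere):

```cpp
// Sketch of the B1/B2 scheme described above, using GDI+.
// g_layers, B1, B2 and the window handle are assumptions for illustration.
#include <windows.h>
#include <gdiplus.h>
#include <vector>

std::vector<Gdiplus::Bitmap*> g_layers;   // the drawing's layers, bottom to top
Gdiplus::Bitmap* B1 = nullptr;            // composition buffer (may be half-drawn)
Gdiplus::Bitmap* B2 = nullptr;            // presentation buffer (always complete)

void Recompose(HWND hwnd)
{
    // 1. Compose all layers into B1; transparency is preserved by DrawImage.
    Gdiplus::Graphics g1(B1);
    g1.Clear(Gdiplus::Color(255, 255, 255, 255));
    for (Gdiplus::Bitmap* layer : g_layers)
        g1.DrawImage(layer, 0, 0);

    // 2. Only when B1 is finished, copy it to B2 in one step.
    Gdiplus::Graphics g2(B2);
    g2.DrawImage(B1, 0, 0);

    // 3. Ask for a repaint; WM_PAINT only ever reads from B2.
    InvalidateRect(hwnd, nullptr, FALSE);
}

void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    Gdiplus::Graphics screen(hdc);
    screen.DrawImage(B2, 0, 0);           // stray paint events also land here: no flicker
    EndPaint(hwnd, &ps);
}
```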