OpenGL: How do I control the rendering to make it idle? - c++

Say my application has a 3D window rendering a tin model of millions of triangles using OpenGL.
Goal: For some user operations there is no need to update the 3D window. The 3D view can just stay idle with the previously rendered content, without repeatedly calculating the rotation/translation/scaling/texture work. I assume this will save a lot of CPU and GPU time.
Current design: I have a rendering while loop running all the time. If I stop the while loop, the content is rendered once and then disappears.
Question: Is there a way to achieve this goal? Can anyone point me in a direction?

Instead of running a continuous rendering loop, you can render with OpenGL only when the system sends you an event asking you to repaint the window. Additionally, you invalidate the window yourself whenever you know its contents have changed (e.g. in reaction to a mouse click). In fact, this is the proper way to draw into a window for anything other than latency-sensitive applications.
The exact specifics highly depend on the API you use to create your window. For example,
With WinAPI you render on WM_PAINT and invalidate with InvalidateRect.
With Xlib you render on Expose and invalidate by generating an Expose event yourself, e.g. with XClearArea(dpy, win, 0, 0, 0, 0, True).
With Qt you render in QOpenGLWindow::paintGL and invalidate with update().
With GLUT you render in glutDisplayFunc and invalidate with glutPostRedisplay (sketched below).
... and so on. This is not OpenGL specific in any way.
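For a concrete picture, here is a minimal GLUT sketch of that pattern (hedged: the scene state g_angle and the 'r' key binding are made up for illustration). Note that no idle callback is registered, so the process sleeps between events:

```cpp
#include <GL/glut.h>

static float g_angle = 0.0f;   // hypothetical scene state

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(g_angle, 0.0f, 1.0f, 0.0f);
    // ... draw the triangle model here ...
    glutSwapBuffers();
}

void onKey(unsigned char key, int, int) {
    if (key == 'r') {              // the scene actually changed,
        g_angle += 5.0f;
        glutPostRedisplay();       // so invalidate the window once
    }
    // any other key: nothing changed, so no redraw and no GPU work
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("event-driven rendering");
    glutDisplayFunc(display);      // called only when a repaint is needed
    glutKeyboardFunc(onKey);
    glutMainLoop();                // no glutIdleFunc: blocks between events
    return 0;
}
```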

Related

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power you have to disable the idle loop (or have it skip all work and immediately yield back to the OS scheduler) and not have timers create events.
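For instance, with GLUT the "skip all work and yield" variant might look like this sketch (g_animating is a hypothetical application flag):

```cpp
#include <GL/glut.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

bool g_animating = false;       // hypothetical: true only while animating

void idle() {
    if (!g_animating) {
#ifdef _WIN32
        Sleep(10);              // hand the timeslice back to the scheduler
#else
        usleep(10 * 1000);
#endif
        return;                 // no glutPostRedisplay: nothing is redrawn
    }
    // ... advance animation state ...
    glutPostRedisplay();        // redraw only while actually animating
}
```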

How to display smooth video in FireMonkey (FMX, FM3)?

Has anyone figured out how to display smooth video (i.e. a series of bitmaps) in a FireMonkey application, HD or 3D? In the VCL you could write to a canvas from a thread and this would work perfectly, but it does not work in FMX. To make things worse, apparently the only reliable way is to use TImage, and that seems to be updated from the main thread (open a menu and the video freezes temporarily). All the EMB examples I could find either write to TImage from the main thread or use Synchronize(). These limitations make FMX unusable for decent video display, so I am looking for a hack or possibly a bypass of FMX. I use XE5/C++ but welcome any suggestions. Target OS is both Windows 7+ & OS X. Thanks!
How about putting a TPaintBox on your form to hold the video? In its OnPaint method you simply draw the next frame to the paint box's canvas. Now put a TTimer on the form and set the interval to the required frame rate. In the timer's OnTimer event, just call paintbox1.repaint.
This should give you regular frames no matter what else the program is doing.
For extra safety, you could increment a frame number in the OnTimer event. Now in the paintbox paint method you know which frame to paint. This means you won't jump frames if something else calls the paint method as well as the timer - you will just end up repainting the same frame for the extra call to OnPaint.
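A hedged C++Builder sketch of that scheme (FFrameNumber, FrameForNumber and the component names are hypothetical, and the event handlers are assumed to be wired up in the form designer):

```cpp
// The timer fires at the video frame rate and merely requests a repaint.
void __fastcall TForm1::Timer1Timer(TObject *Sender)
{
    ++FFrameNumber;             // advance the frame counter
    PaintBox1->Repaint();       // FMX calls OnPaint on the main thread
}

// All drawing happens here, so extra paint requests just repeat the frame.
void __fastcall TForm1::PaintBox1Paint(TObject *Sender, TCanvas *Canvas)
{
    TBitmap *frame = FrameForNumber(FFrameNumber);  // hypothetical lookup
    TRectF src(0, 0, frame->Width, frame->Height);
    Canvas->DrawBitmap(frame, src, PaintBox1->LocalRect, 1.0f);
}
```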
I use this for marching-ants selections, although I go one step further and use an overlaid canvas so I can draw the selection independently of the underlying paintbox canvas, removing the need to repaint the main canvas when the selection changes. That requires API calls, but I guess you won't need it unless you are doing videos with a transparent colour.
Further research, including some talks with the Itinerant developer, has unfortunately made it clear that, due to concurrency restrictions, FM has been designed so that all GPU access goes through the main thread and therefore painting will always be limited. As a result I have decided FM is not suitable for my needs and I am re-evaluating my options.

Hide GLUT window

Is it possible to hide an OpenGL window while the rendering keeps running? I use glutHideWindow, which then never triggers the display function.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run an OpenGL program, but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program, but it's always annoying to toggle between the two programs (they both have windows).
Is it possible to hide an OpenGL window while the rendering keeps running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is Framebuffer Objects (FBOs), which you can create on a regular OpenGL context - one that might even belong to a hidden window. FBOs take drawable attachments (renderbuffers, textures) and let OpenGL draw to these instead of to the window.
An older method is PBuffers, also widely supported, but not as easy to use as FBOs.
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. own the VT, so that the GPU actually processes the commands. So you can't just start an X server "in the background" while another X server is using the display device.
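A hedged sketch of the FBO route (OpenGL 3.0+ with a loader such as GLEW or glad assumed, and a valid context current; error handling trimmed):

```cpp
// Create an off-screen render target: a color texture plus a depth
// renderbuffer, both attached to a framebuffer object.
GLuint CreateOffscreenTarget(int w, int h, GLuint *outColorTex)
{
    GLuint fbo, colorTex, depthRb;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;               // incomplete: caller must handle this

    *outColorTex = colorTex;
    return fbo;                 // render with this bound; the window is
}                               // never touched, hidden or not
```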
After creating the window, you can use glutHideWindow() to go off-screen. Then you still render as normal and use glReadPixels to read the buffer back for later use.
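A hedged sketch of that readback, assuming a GL context is current and an fbo like the one created above (bind 0 instead to read the window's back buffer):

```cpp
#include <vector>

std::vector<unsigned char> ReadBackFrame(GLuint fbo, int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);   // source of the readback
    glPixelStorei(GL_PACK_ALIGNMENT, 1);           // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                 pixels.data());
    return pixels;  // hand this to the other program (shared memory, etc.)
}
```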

Lazy rendering of Qt on OpenGL

I came across this problem and knew it could be done better.
The problem:
When overlaying a QGLWidget (Qt OpenGL context view) with Qt widgets, Qt redraws those widgets after every Qt frame.
Qt isn't built to constantly redraw entire windows at more than 60 fps, so that's enormously slow.
My idea:
Make Qt draw onto something else: a transparent texture. Have OpenGL draw this texture on top of everything else whenever it redraws, and have Qt redirect all interaction with the OpenGL context view to the widgets drawn onto the texture.
The advantage would be that Qt only has to redraw whenever it has to (e.g. a widget is hovered or clicked, or the text cursor in a text field blinks), and can do partial redraws which are faster.
My Question:
How do I approach this? How can I tell Qt to draw to a texture? How can I redirect interaction with one widget to another (e.g. if I move the mouse over the region of the context view where a checkbox sits in the drawn-to-texture widget, Qt should deliver this event to the checkbox and repaint it to reflect its hovered state)?
I separate my 2D and 3D rendering out in my CAD-like app for the very same reasons you have, although in my case the 2D stuff is not widgets - but that shouldn't make a difference. This is how I would approach the problem:
When your widget changes, render it onto a QGLFramebufferObject; do this by using the FBO as the QPaintDevice for a QPainter in your QGLWidget::paintEvent(..) and calling myWidget->render( myQPainter, ...), as sketched after these steps. Repeat this for however many widgets you have, but only onto the same FBO - don't create an FBO for each one... Remember to clear it first, like a 'normal' framebuffer.
When your current OpenGL background changes, render it onto another QGLFramebufferObject using standard OpenGL calls, in the same way.
Create a pass-through vertex shader (the 'camera' will just be a unit cube), and a very simple fragment shader that can layer the two textures on top of each other.
At the end of QGLWidget::paintEvent(..), activate your shader program, bind your framebuffers as textures for it (myFBO->texture() gets the handle), and render a unit quad. Because your camera is a unit square and the viewport size defined the FBO size, it will fill the viewport pixel-perfectly.
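A hedged sketch of the first step (MyGLView, m_widgetFbo and m_overlayWidget are hypothetical names for the QGLWidget subclass and its members):

```cpp
#include <QGLFramebufferObject>
#include <QPainter>
#include <QWidget>

// Paint the overlay widget into the FBO. QGLFramebufferObject is a
// QPaintDevice, so QPainter can target it directly.
void MyGLView::renderWidgetLayer()
{
    QPainter p(m_widgetFbo);
    p.setCompositionMode(QPainter::CompositionMode_Source);
    p.fillRect(QRect(QPoint(0, 0), m_widgetFbo->size()),
               Qt::transparent);                       // clear to transparent
    p.setCompositionMode(QPainter::CompositionMode_SourceOver);
    m_overlayWidget->render(&p);                       // draw the widget
}
```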
However, that's the easy part... The hard part is the widget interaction. Because you are essentially rendering a 'proxy', you're going to have to relay the interaction between the 'real' and 'proxy' widgets whilst keeping the 'real' widget invisible. Here's how I would start:
Some operating systems are a bit weird about rendering widgets without ever showing them, so you may have to show and then hide the widget after instantiation - because of the clever painting queue in Qt, it's unlikely to actually make it to the screen.
Catch all mouse events in the viewport, work out which 'proxy' widget the cursor is over (if any), and then offset it to get the relative position for the 'real' hidden widget - this value will depend on what parent object the 'real' widget has, if any. Then pass the event onto the 'real' widget before redrawing the widget framebuffer.
I should state that I also had to create a 'flagging' system to handle redraws nicely. You don't want every widget event to trigger a widget FBO redraw, because there could be many simultaneous events (don't just think about the mouse) - but you would only want one redraw. So I created a system where, if anything in the application could change anything in the viewport visually, it would flag the viewport as 'dirty'. Then set up a QTimer for however many fps you are aiming for (in my situation the scene could get very heavy, so I also timed how long a frame took and then used that value +10% as the timer delay for the next check; this way the system isn't bombarded when rendering gets laggy). Then check the dirty status: if it's dirty, redraw; otherwise don't. I found life got easier with two dirty flags, one for the 3D stuff and one for the 2D - but if you need to maintain a constant draw rate for the OpenGL drawing there's probably no need for two. A sketch of this scheme follows.
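A hedged sketch of that flagging system (Qt 5 connect syntax; all names hypothetical):

```cpp
#include <QGLWidget>
#include <QTimer>

class Viewport : public QGLWidget {
public:
    explicit Viewport(QWidget *parent = 0) : QGLWidget(parent) {
        connect(&m_timer, &QTimer::timeout, this, &Viewport::maybeRedraw);
        m_timer.start(16);                 // check roughly 60 times a second
    }
    void markDirty2D() { m_dirty2D = true; }   // call from any event that
    void markDirty3D() { m_dirty3D = true; }   // could change the viewport

private:
    void maybeRedraw() {
        if (m_dirty2D || m_dirty3D) {      // one redraw, however many events
            m_dirty2D = m_dirty3D = false; // fired since the last check
            updateGL();                    // schedules paintGL()
        }                                  // otherwise: no work this tick
    }
    bool m_dirty2D = false;
    bool m_dirty3D = false;
    QTimer m_timer;
};
```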
I imagine what I did wasn't the easiest way to do it, but it provides plenty of scope for tuning and profiling - which makes life easier in the long run. All the answers are definitely not in this post, but hopefully it will get you on the way to a strategy.

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call (see the sketch after these steps).
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, a control is painted only when its state has changed.
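A hedged sketch of the WM_PAINT step (g_backBuffer, DrawBackground and DrawDirtyControls are hypothetical stand-ins for the machinery described above):

```cpp
#include <windows.h>
#include <gdiplus.h>

// Hypothetical globals/helpers from the steps above.
extern Gdiplus::Bitmap *g_backBuffer;        // recreated on WM_SIZE
void DrawBackground(Gdiplus::Graphics &g);
void DrawDirtyControls(Gdiplus::Graphics &g);

// Called from the window procedure on WM_PAINT.
void HandlePaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    {
        Gdiplus::Graphics offscreen(g_backBuffer); // paint into the bitmap
        DrawBackground(offscreen);
        DrawDirtyControls(offscreen);              // only controls that changed
        Gdiplus::Graphics screen(hdc);             // then one blit to the
        screen.DrawImage(g_backBuffer, 0, 0);      // front buffer
    }
    EndPaint(hwnd, &ps);
}
```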
The system is working fine with only one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by this tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, given that resizing the backbuffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (usually it's just the white background that shows up as thin uniform strips along the border).
Also, the dynamic controls that move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, fullscreen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligible artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user-interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework, etc.)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control. I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it? A sketch of how to suppress it follows below.
See this question: GDI+ double buffering in C++
Also, if you need a fast response from your GUI, I would advise against GDI+.
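A hedged sketch of that suggestion: return nonzero from WM_ERASEBKGND to claim the erase was already done, so Windows never paints the background itself. This is a common source of resize flicker when WM_PAINT already covers every pixel:

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_ERASEBKGND:
        return 1;   // nonzero = "background already erased": no flicker
    // ... WM_PAINT, WM_SIZE, etc. as described in the question ...
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```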