I came across this problem and knew it could be done better.
The problem:
When overlaying a QGLWidget (Qt's OpenGL context view) with Qt widgets, Qt redraws those widgets after every frame.
Qt isn't built to redraw entire windows constantly at more than 60 fps, so that's enormously slow.
My idea:
Make Qt draw onto something else: a transparent texture. Have OpenGL draw this texture on top of everything else whenever it redraws. Make Qt redirect all interaction with the OpenGL context view to the widgets drawn onto the texture.
The advantage would be that Qt only has to redraw when it actually needs to (e.g. a widget is hovered or clicked, or the text cursor in a text field blinks), and it can do partial redraws, which are faster.
My Question:
How should I approach this? How can I tell Qt to draw to a texture? And how can I redirect interaction with a widget to another one (e.g. if I move the mouse over the region of the context view where a checkbox sits in the drawn-to-texture widget, Qt should deliver this event to the checkbox and repaint to reflect its hovered state)?
I separate my 2D and 3D rendering out in my CAD-like app for the very same reasons you have, although in my case the 2D stuff is not widgets - but that shouldn't make a difference. This is how I would approach the problem:
When your widget changes, render it onto a QGLFramebufferObject. Do this by using the FBO as the QPaintDevice for a QPainter in your QGLWidget::paintEvent(..) and calling myWidget->render( myQPainter, ...). Repeat this for however many widgets you have, but only onto the same FBO - don't create an FBO for each one. Remember to clear it first, like a 'normal' framebuffer.
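A rough sketch of that step (m_widgetFbo and m_overlayWidget are just placeholder names; the painting must happen while the GL context is current):

    void MyGLView::renderWidgetsToFbo()                  // called from paintEvent()
    {
        QPainter painter(m_widgetFbo);                   // the FBO is a QPaintDevice
        painter.setCompositionMode(QPainter::CompositionMode_Source);
        painter.fillRect(QRect(QPoint(0, 0), m_widgetFbo->size()), Qt::transparent); // clear first
        painter.setCompositionMode(QPainter::CompositionMode_SourceOver);
        m_overlayWidget->render(&painter, m_overlayWidget->pos());   // widget -> texture
        // repeat render() for any other overlay widgets, into the same FBO
    }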
When your current OpenGL background changes, render it onto another QGLFramebufferObject using standard OpenGL calls, in the same way.
Create a pass-through vertex shader (the 'camera' will just be a unit cube), and a very simple fragment shader that can layer the two textures on top of each other.
At the end of QGLWidget::paintEvent(..), activate your shader program, bind your framebuffers as textures for it (myFBO->texture() gets the handle), and render a unit quad. Because your camera is a unit cube and the FBO size matches the viewport size, it will fill the viewport pixel-perfectly.
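For illustration, the composite pass could look roughly like this, assuming m_program is a QGLShaderProgram built from the two GLSL sources below and m_backgroundFbo / m_widgetFbo are the two framebuffer objects (on some platforms glActiveTexture has to come from QGLFunctions or GLEW):

    static const char *vertSrc =
        "varying vec2 uv;\n"
        "void main() { uv = gl_MultiTexCoord0.xy; gl_Position = gl_Vertex; }\n";
    static const char *fragSrc =
        "uniform sampler2D background, widgets; varying vec2 uv;\n"
        "void main() {\n"
        "    vec4 bg = texture2D(background, uv);\n"
        "    vec4 ui = texture2D(widgets, uv);\n"
        "    gl_FragColor = mix(bg, ui, ui.a);\n"        // widgets layered over the scene by alpha
        "}\n";

    void MyGLView::compositeLayers()                      // end of paintEvent()
    {
        m_program->bind();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, m_backgroundFbo->texture());
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, m_widgetFbo->texture());
        m_program->setUniformValue("background", 0);      // texture unit 0
        m_program->setUniformValue("widgets", 1);         // texture unit 1

        glBegin(GL_QUADS);                                // unit quad in clip space
        glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
        glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
        glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
        glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
        glEnd();
        m_program->release();
    }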
However, that's the easy part... The hard part is the widget interaction. Because you are essentially rendering a 'proxy', you're going to have to relay the interaction between the 'real' and 'proxy' widget, whilst keeping the 'real' widget invisible. Here's how I would start:
Some operating systems are a bit weird about rendering widgets without ever showing them, so you may have to show and then hide the widget after instantiation - because of the clever painting queue in Qt, it's unlikely to actually make it to the screen.
Catch all mouse events in the viewport, work out which 'proxy' widget the cursor is over (if any), and then offset it to get the relative position for the 'real' hidden widget - this value will depend on what parent object the 'real' widget has, if any. Then pass the event onto the 'real' widget before redrawing the widget framebuffer.
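Something along these lines, for example (names are placeholders, and this assumes the hidden widget is positioned in viewport coordinates via its pos()):

    void MyGLView::mouseMoveEvent(QMouseEvent *event)
    {
        const QPoint viewportPos = event->pos();
        if (m_overlayWidget->geometry().contains(viewportPos)) {
            // Offset into the hidden widget's own coordinate system.
            const QPoint localPos = viewportPos - m_overlayWidget->pos();
            QMouseEvent proxyEvent(event->type(), localPos, event->globalPos(),
                                   event->button(), event->buttons(), event->modifiers());
            QApplication::sendEvent(m_overlayWidget, &proxyEvent);   // relay to the 'real' widget
            m_widgetLayerDirty = true;                               // redraw the widget FBO later
            update();
        } else {
            QGLWidget::mouseMoveEvent(event);                        // normal 3D interaction
        }
    }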
I should also say that I had to create a 'flagging' system to handle redraws nicely. You don't want every widget event to trigger a widget FBO redraw, because there could be many simultaneous events (don't just think about the mouse), but you only want one redraw. So I created a system where anything in the application that could visually change the viewport flags it as 'dirty'. Then set up a QTimer for however many fps you are aiming for (in my situation the scene could get very heavy, so I also timed how long a frame took and used that value +10% as the timer delay for the next check; that way the system isn't bombarded when rendering gets laggy). Then check the dirty status: if it's dirty, redraw; otherwise don't. I found life got easier with two dirty flags, one for the 3D stuff and one for the 2D - but if you need to maintain a constant draw rate for the OpenGL drawing there's probably no need for two.
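Roughly sketched, that scheme could look like this (MyGLView::redraw() is a made-up entry point, and the timings are just the ones described above):

    class ViewportScheduler : public QObject
    {
        Q_OBJECT
    public:
        explicit ViewportScheduler(MyGLView *view, QObject *parent = 0)
            : QObject(parent), m_view(view), m_scene3dDirty(true), m_widgets2dDirty(true)
        {
            connect(&m_timer, SIGNAL(timeout()), this, SLOT(checkDirty()));
            m_timer.start(1000 / 60);                         // aim for ~60 checks per second
        }
        void markSceneDirty()   { m_scene3dDirty = true; }    // anything changing the 3D view calls this
        void markWidgetsDirty() { m_widgets2dDirty = true; }  // anything changing the 2D layer calls this

    private slots:
        void checkDirty()
        {
            if (!m_scene3dDirty && !m_widgets2dDirty)
                return;                                       // nothing changed, skip this frame
            QElapsedTimer frameTimer;
            frameTimer.start();
            m_view->redraw(m_scene3dDirty, m_widgets2dDirty); // hypothetical redraw entry point
            m_scene3dDirty = m_widgets2dDirty = false;
            // Back off when rendering gets heavy: next check no sooner than frame time + 10%.
            m_timer.start(qMax(1000 / 60, int(frameTimer.elapsed() * 1.1)));
        }

    private:
        MyGLView *m_view;
        QTimer m_timer;
        bool m_scene3dDirty, m_widgets2dDirty;
    };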
I imagine what I did wasn't the easiest way to do it, but it provides plenty of scope for tuning and profiling - which makes life easier in the long run. This post definitely doesn't contain all the answers, but hopefully it will get you started on a strategy.
Related
Say, my application has a 3D window rendering a tin model of millions of triangles using OpenGL.
Goal: for some user operations there is no need to update the 3D window. The 3D view can simply stay idle with its previously rendered content, without repeatedly recalculating the rotation/translation/scaling/texturing. I assume this will save a lot of CPU and GPU time.
Current design: I have a rendering while loop running all the time. If I stop the while loop, the content is rendered once and then disappears.
Question: is there a way to achieve this goal? Can anyone point me in a direction?
Instead of having a continuous rendering loop, you can use OpenGL to render your window only when the system sends you an event to repaint it. Additionally, you invalidate your own window when you know its contents have changed (e.g. in reaction to a mouse click). In fact, this is the proper way to draw in your window for anything other than latency-sensitive applications.
The exact specifics highly depend on the API you use to create your window. For example,
With WinAPI you render on WM_PAINT and invalidate with InvalidateRect.
With Xlib you render on Expose and invalidate by generating an Expose event yourself (for example via XClearArea with exposures set to True).
With Qt you render in QOpenGLWindow::paintGL and invalidate with update().
With GLUT you render in glutDisplayFunc and invalidate with glutPostRedisplay.
... and so on. This is not OpenGL specific in any way.
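Taking the Qt case from the list as an example, a minimal event-driven setup could look like this (class and member names are made up for illustration):

    #include <QGuiApplication>
    #include <QOpenGLWindow>
    #include <QOpenGLFunctions>

    // No render loop: paintGL() only runs when the window system asks for a repaint
    // or when update() is called because something actually changed.
    class ModelWindow : public QOpenGLWindow, protected QOpenGLFunctions
    {
    public:
        void rotateBy(float degrees)      // e.g. called from input handling
        {
            m_angle += degrees;
            update();                     // invalidate: schedules exactly one repaint
        }

    protected:
        void initializeGL() override { initializeOpenGLFunctions(); }
        void paintGL() override
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the model using m_angle ...
        }

    private:
        float m_angle = 0.0f;
    };

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);
        ModelWindow window;
        window.show();
        return app.exec();                // the event loop sleeps between repaints
    }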
I am working on updating an application for a client.
They use Qt and currently use a QGLWidget to display a full-screen view of 1 of 4 possible cameras selected by clicking the appropriate radio button. They then use OpenGL to draw on the image being displayed. This works great, but they want to update the UI to include a quad-split view of all 4 cameras.
My first thought on how to accomplish this was to keep the one QGLWidget for the full-screen display, and have 4 small QGLWidgets for the quad-split. From the documentation I found that you can't overlap QGLWidgets or QOpenGLWidgets because they don't handle z-order appropriately, but that this can be accomplished by using QOpenGLWindows and QWidget::createWindowContainer.
So, I coded up an application that uses a QOpenGLWidget (trying to bring them up to date) for the full-screen view, and 4 smaller QOpenGLWindows using QWidget::createWindowContainer, but this isn't working either.
The widgets built from QOpenGLWindows are always on top, even if I use lower() to try to get them behind the full-screen QOpenGLWidget. I've also tried using hide() on the widgets built from QOpenGLWindows, but this has had no effect.
Do this at a lower level. Keep the one QGLWidget -- in fact don't touch your Qt objects. Instead, change the lower-level rendering so that it makes 4 calls to glViewport.
After each call to glViewport, update the modelview and projection matrices according to the camera of interest, then draw the 3D scene.
This is simple and performant, because the driver only needs to deal with a single OpenGL context. You might have some extra work to adjust mouse input, but I think it'll be worthwhile.
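A sketch of that pass inside the existing widget's paint routine (drawScene() and cameras stand in for your own scene and camera code):

    void MyGLWidget::paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        const int w = width() / 2;
        const int h = height() / 2;
        const QRect viewports[4] = {
            QRect(0, h, w, h), QRect(w, h, w, h),   // top-left, top-right
            QRect(0, 0, w, h), QRect(w, 0, w, h)    // bottom-left, bottom-right
        };

        for (int i = 0; i < 4; ++i) {
            const QRect &vp = viewports[i];
            glViewport(vp.x(), vp.y(), vp.width(), vp.height());
            drawScene(cameras[i]);                  // set matrices for this camera, then draw
        }

        glViewport(0, 0, width(), height());        // restore the full viewport
    }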
Is it possible to hide an OpenGL window while the rendering keeps running? I use glutHideWindow, which means the display function is never triggered.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run an OpenGL program, but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program, but it's always annoying to toggle between the two programs (they both have a window).
Is it possible to hide an OpenGL window while the rendering keeps running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is the Framebuffer Object (FBO), which you can create on a regular OpenGL context - which might even work on a hidden window. FBOs take drawable attachments (renderbuffers, textures) and let OpenGL draw to those instead of to the window.
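For reference, creating such an FBO with plain OpenGL looks roughly like this (width and height are the desired off-screen size; on platforms that need it, the entry points must be loaded, e.g. via GLEW):

    GLuint fbo = 0, colorTex = 0, depthRb = 0;

    glGenTextures(1, &colorTex);                        // colour attachment
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);                    // depth attachment
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, width, height);
        // ... draw the scene; it now ends up in colorTex instead of the (hidden) window ...
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);               // back to the default framebuffer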
An older method is PBuffers, which are also widely supported, but not as easy to use as FBOs.
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. it must own the VT, so that the GPU actually processes the commands. So you can't just start an X server "in the background" while another X server is using the display device.
After creating the window, you can use glutHideWindow() to go off-screen. Then you still render as normal and use glReadPixels to read the buffer back for later use.
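A small sketch of the read-back, assuming an RGBA back buffer of winWidth x winHeight:

    std::vector<unsigned char> pixels(winWidth * winHeight * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);                 // tightly packed rows
    glReadBuffer(GL_BACK);                               // what was just rendered
    glReadPixels(0, 0, winWidth, winHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    // "pixels" now holds the frame (bottom row first) for the other program to consume.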
I want to create my own tiny windowless GUI system, for which I am using GDI+. I cannot post code here because it has become huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is painted only when its state has changed and it actually needs repainting.
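Roughly sketched, the paint step looks like this inside the window procedure (simplified; g_backBuffer and DrawControls are placeholders for the actual offscreen bitmap and control-painting code):

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        // Draw the whole UI into the offscreen bitmap first...
        Gdiplus::Graphics memGfx(g_backBuffer);
        memGfx.Clear(Gdiplus::Color(255, 240, 240, 240));   // background
        DrawControls(memGfx);                               // only repaints controls marked dirty

        // ...then copy the finished image to the window in one blit.
        Gdiplus::Graphics screenGfx(hdc);
        screenGfx.DrawImage(g_backBuffer, 0, 0);

        EndPaint(hwnd, &ps);
        return 0;
    }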
The system is working fine, with one exception... while the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is only half done, another image drawing starts.
You may think this is the common artifact seen in many other applications, caused by the backbuffer not resizing as fast as the window does. But I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (usually it's just white background showing up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes fully before another starts, but strangely even that didn't work for me!
I have heard that GDI is not hardware accelerated on Vista; could that be the problem?
Also, I wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, hardly any artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is only a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems fine for resizing without any problems, but then I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please share the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing - nothing seems to give you that level of control, and I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
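If it is, a quick test is to claim the background is already erased, so that nothing but your WM_PAINT handler touches the window:

    case WM_ERASEBKGND:
        return 1;    // non-zero = "background handled"; stops the flash of freshly cleared pixels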
See this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI, I would advise against GDI+.
I have a top-level Qt widget with the FramelessWindowHint flag and the WA_TranslucentBackground attribute set. It has several children, each of which draws an image on it. They are not in a layout. Instead, I simply move them around when something changes (it is not user-resizable).
There are two states to the window - a big state and a small state. When I switch between them, I resize the window and reposition the children. The problem is that as the window resizes, a black box is briefly flashed on the top-level window before the images are painted over it.
The problem goes away if I disable Aero. I found a brief mention of this problem being fixed in an article describing a new Qt release (that release is long past), but it still doesn't work.
Any ideas why?
Thanks!
I don't have experience with Qt specifically, but I have worked with other windowing toolkits. Typically you see this kind of flashing when you are drawing updates directly to the screen. The fix is to use double buffering instead, which basically means that you render your updates into an offscreen buffer (a bitmap of some sort, in the purest sense of the word) and then copy the entire updated image to the screen in a single, fast operation.
The reason you only see the flickering sometimes is simply an artifact of how quickly your screen refreshes versus how quickly the updates are drawn. If you get "lucky" then all the updates occur between screen refreshes and you may not see any flicker.