I have to display an image with a text overlay. The overlay contains many strings, but only one of them changes from frame to frame. I want to avoid redrawing the entire overlay and only update what has changed.
I tried wglCreateLayerContext, but my GPU does not seem to support it (the bReserved field of PIXELFORMATDESCRIPTOR is 0).
What is the most efficient way to redraw only part of the text overlay?
Redrawing the whole framebuffer is the canonical way in OpenGL. You can use Framebuffer Objects (FBOs) to create several off-screen drawing surfaces, to which you render the individual layers. Then you composite the layers into a final image presented on screen.
but only one changes from frame to frame. I want to avoid redraw of the entire overlay
Why? Figuring out which parts need a redraw, masking them, doing only a partial clear, updating them, and so on takes more time and effort than simply redrawing the whole text overlay.
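For illustration, here is a minimal sketch of that layered-FBO idea in raw OpenGL; the state and helpers (imageTex, textFBO, textTex, the textChanged flag, renderTextOverlay(), drawFullscreenQuad()) are hypothetical stand-ins for your own code:

    #include <GL/glew.h>   // or whichever GL loader you use

    // Hypothetical state assumed to live elsewhere in your program:
    extern GLuint imageTex, textFBO, textTex;
    extern bool textChanged;
    void renderTextOverlay();              // your text drawing
    void drawFullscreenQuad(GLuint tex);   // draws a textured quad

    // One FBO per layer, with a color texture as the render target.
    // Call once per layer at startup.
    GLuint makeLayerFBO(int w, int h, GLuint *texOut)
    {
        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        *texOut = tex;
        return fbo;
    }

    void drawFrame()
    {
        // Re-render only the overlay that actually changed.
        if (textChanged) {
            glBindFramebuffer(GL_FRAMEBUFFER, textFBO);
            glClearColor(0, 0, 0, 0);        // clear to transparent
            glClear(GL_COLOR_BUFFER_BIT);
            renderTextOverlay();
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            textChanged = false;
        }
        // Composite: image layer first, then the text layer with blending.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawFullscreenQuad(imageTex);
        drawFullscreenQuad(textTex);
    }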
I came across this problem and knew it could be done better.
The problem:
When overlaying a QGLWidget (Qt OpenGL context view) with Qt widgets, Qt redraws those widgets after every Qt frame.
Qt isn't built to constantly redraw entire windows at more than 60 fps, so that's enormously slow.
My idea:
Make Qt draw onto something else: a transparent texture. Make OpenGL use this texture whenever it redraws and draw it on top of everything else. Make Qt redirect all interaction with the OpenGL context view to the widgets drawn onto the texture.
The advantage would be that Qt only has to redraw when it actually has to (e.g. a widget is hovered or clicked, or the text cursor in a text field blinks), and it can do partial redraws, which are faster.
My Question:
How do I approach this? How can I tell Qt to draw to a texture? How can I redirect interaction with a widget to another one (e.g. if I move the mouse over the region in the context view where a checkbox sits in the drawn-to-texture widget, Qt should deliver this event to the checkbox and repaint to reflect its hovered state)?
I separate my 2D and 3D rendering out in my CAD-like app for the very same reasons you have, although in my case the 2D elements are not widgets - but that shouldn't make a difference. This is how I would approach the problem:
When your widget changes, render it onto a QGLFramebufferObject; do this by using the FBO as the QPaintDevice for a QPainter in your QGLWidget::paintEvent(..) and calling myWidget->render( myQPainter, ...). Repeat this for however many widgets you have, but only onto the same FBO - don't create an FBO for each one. Remember to clear it first, like a 'normal' framebuffer.
When your current OpenGL background changes, render it onto another QGLFramebufferObject using standard OpenGL calls, in the same way.
Create a pass-through vertex shader (the 'camera' will just be a unit cube), and a very simple fragment shader that can layer the two textures on top of each other.
At the end of QGLWidget::paintEvent(..), activate your shader program, bind your framebuffers as textures for it (myFBO->texture() gets the handle), and render a unit quad. Because your camera is a unit square and the viewport size defines the FBO size, it will fill the viewport pixel-perfectly.
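Put together, the paintEvent could look roughly like the following; this is only a sketch against the Qt 4 QGLWidget API, and the member names (m_widgetFBO, m_sceneFBO, m_program, the dirty flags) and helpers (drawScene(), drawUnitQuad()) are hypothetical:

    // Sketch of QGLWidget::paintEvent for the steps above (Qt 4 API).
    void MyGLView::paintEvent(QPaintEvent *)
    {
        makeCurrent();

        // 1. Render the overlay widget(s) into one shared FBO.
        if (m_widgetsDirty) {
            QPainter painter(m_widgetFBO);          // FBO as QPaintDevice
            painter.setCompositionMode(QPainter::CompositionMode_Source);
            painter.fillRect(rect(), Qt::transparent);   // clear it first
            painter.setCompositionMode(QPainter::CompositionMode_SourceOver);
            myWidget->render(&painter, myWidget->pos());
            painter.end();
            m_widgetsDirty = false;
        }

        // 2. Render the 3D scene into a second FBO with plain GL calls.
        if (m_sceneDirty) {
            m_sceneFBO->bind();
            drawScene();                            // your GL drawing
            m_sceneFBO->release();
            m_sceneDirty = false;
        }

        // 3./4. Layer the two textures with the shader and a unit quad.
        m_program->bind();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, m_sceneFBO->texture());
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, m_widgetFBO->texture());
        m_program->setUniformValue("scene", 0);
        m_program->setUniformValue("overlay", 1);
        drawUnitQuad();                             // fills the viewport
        m_program->release();

        swapBuffers();
    }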
However, that's the easy part... The hard part is the widget interaction. Because you are essentially rendering a 'proxy', you're going to have to relay the interaction between the 'real' and 'proxy' widget, whilst keeping the 'real' widget invisible. Here's how I would start:
Some operating systems are a bit weird about rendering widgets without ever showing them, so you may have to show and then hide the widget after instantiation - because of the clever painting queue in Qt, it's unlikely to actually make it to the screen.
Catch all mouse events in the viewport, work out which 'proxy' widget the cursor is over (if any), and then offset the position to get the relative position for the 'real' hidden widget - this value will depend on which parent object the 'real' widget has, if any. Then pass the event on to the 'real' widget before redrawing the widget framebuffer.
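A sketch of that relaying for mouse presses; m_proxyWidget (the hidden 'real' widget) and m_widgetsDirty are hypothetical members:

    #include <QApplication>
    #include <QMouseEvent>

    // Relay a mouse press from the GL view to the hidden 'real' widget.
    void MyGLView::mousePressEvent(QMouseEvent *event)
    {
        if (m_proxyWidget->geometry().contains(event->pos())) {
            // Offset into the hidden widget's local coordinates.
            QPoint local = event->pos() - m_proxyWidget->pos();
            QMouseEvent forwarded(event->type(), local, event->globalPos(),
                                  event->button(), event->buttons(),
                                  event->modifiers());
            QApplication::sendEvent(m_proxyWidget, &forwarded);
            m_widgetsDirty = true;   // flag the widget FBO for redraw
            update();
        } else {
            QGLWidget::mousePressEvent(event);   // normal 3D interaction
        }
    }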
I should state that I also had to create a 'flagging' system to handle redraws nicely. You don't want every widget event to trigger a widget FBO redraw, because there could be many simultaneous events (don't just think about the mouse) - but you would only want one redraw. So I created a system where, if anything in the application could change anything in the viewport visually, it would flag the viewport as 'dirty'. Then set up a QTimer for however many fps you are aiming for (in my situation the scene could get very heavy, so I also timed how long a frame took and then used that value +10% as the timer delay for the next check; this way the system isn't bombarded when rendering gets laggy). And then check the dirty status: if it's dirty, redraw; otherwise don't. I found life got easier with two dirty flags, one for the 3D stuff and one for the 2D - but if you need to maintain a constant draw rate for the OpenGL drawing there's probably no need for two.
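A sketch of such a flagging system; the class layout and names are hypothetical, and QElapsedTimer requires Qt 4.7 or later:

    #include <QGLWidget>
    #include <QTimer>
    #include <QElapsedTimer>

    class MyGLView : public QGLWidget
    {
        Q_OBJECT
    public:
        void markSceneDirty()   { m_sceneDirty = true; }
        void markWidgetsDirty() { m_widgetsDirty = true; }

        void startRedrawLoop(int targetFps)
        {
            m_baseDelay = 1000 / targetFps;
            connect(&m_timer, SIGNAL(timeout()), this, SLOT(checkDirty()));
            m_timer.start(m_baseDelay);
        }

    private slots:
        void checkDirty()
        {
            if (!m_sceneDirty && !m_widgetsDirty)
                return;                    // nothing changed, skip this tick
            m_frameClock.start();
            updateGL();                    // triggers paintEvent(), which
                                           // clears the dirty flags
            // Back off when rendering gets laggy: last frame time +10%.
            int delay = int(m_frameClock.elapsed() * 1.1);
            m_timer.setInterval(qMax(m_baseDelay, delay));
        }

    private:
        bool m_sceneDirty;                 // 3D stuff
        bool m_widgetsDirty;               // 2D widgets
        int m_baseDelay;
        QTimer m_timer;
        QElapsedTimer m_frameClock;
    };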
I imagine what I did wasn't the easiest way to do it, but it provides plenty of scope for tuning and profiling - which makes life easier in the long run. Not all the answers are in this post, but hopefully it will get you on the way to a strategy.
I am having trouble understanding the concept of the UpdateLayeredWindow API, how it works, and how to implement it. Say, for example, I want to override CFrameWnd and draw a custom, alpha-blended frame with UpdateLayeredWindow. As I understand it, the only way to draw child controls is to either blend them into the frame's bitmap buffer (created with CreateCompatibleBitmap) and redraw the whole frame, or create another window that sits on top of the layered frame and draws child controls regularly (which defeats the whole idea of layered windows, because the window region wouldn't update anyway).
If I use the first method, the whole frame is redrawn - surely this is impractical for a large application? Or is the frame constantly updated anyway, so that modifying the bitmap buffer wouldn't cause extra redrawing?
An example of a window similar to what I would like to achieve is the Skype notification box/incoming call box: a translucent frame/window with child controls sitting on top, that you can move around the screen.
In a practical, commercial world, how do I do it? Please don't refer me to the documentation, I know what it says; I need someone to explain practical methods of the infrastructure I should use to implement this.
Thanks.
It is very unclear exactly what aspect of layered windows gives you a problem, so I'll just noodle on about how they are implemented and explain their limitations from that.
Layered windows are implemented by using a hardware feature of the video adapter called "layers". The adapter has the basic ability to combine the pixels from distinct chunks of video memory, mixing them before sending them to the monitor. An obvious example is the mouse cursor: it gets superimposed on the pixels of the desktop frame buffer, so it doesn't take a lot of effort to animate it when you move the mouse. Or the overlay used to display a video: the video stream decoder writes the video pixels directly to a separate frame buffer. Or the shadow cast by the frame of a toplevel window on the windows behind it.
The video adapter allows a few simple logical operations on the two pixel values when combining them. The first one is an obvious one: the mixing operation that lets some of the pixel value overlap the background pixel. That effect provides opacity; you can see the background partially behind the window.
The second one is color-keying, the kind of effect you see used when the weather man on TV stands in front of a weather map. He actually stands in front of a green screen, the camera mixing panel filters out the green and replaces it with the pixels from the weather map. That effect provides pure transparency.
You see this back in the arguments passed to UpdateLayeredWindow(), the function you must call in your code to set up the layered window. The dwFlags argument selects the basic operations supported by the video hardware: the ULW_ALPHA flag enables the opacity effect, the ULW_COLORKEY flag enables the transparency effect. The transparency effect requires the color key, which is specified with the crKey argument. The opacity effect is controlled with the pblend argument. This one is built for future expansion, expansion that hasn't happened yet. The only interesting field in the BLENDFUNCTION struct is SourceConstantAlpha, which controls the amount of opacity.
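For concreteness, a minimal sketch of the opacity case; it assumes hdcMem already has a 32-bpp DIB selected into it, drawn with premultiplied alpha:

    #include <windows.h>

    // Push a premultiplied-alpha ARGB frame to a WS_EX_LAYERED window.
    void PushLayeredFrame(HWND hwnd, HDC hdcMem, SIZE size)
    {
        HDC hdcScreen = GetDC(NULL);
        POINT ptSrc = { 0, 0 };

        BLENDFUNCTION blend = { 0 };
        blend.BlendOp = AC_SRC_OVER;
        blend.SourceConstantAlpha = 255;   // overall opacity, 0..255
        blend.AlphaFormat = AC_SRC_ALPHA;  // honor per-pixel alpha too

        // NULL position keeps the window where it is; ULW_ALPHA selects
        // the opacity (mixing) effect, so crKey is passed as 0.
        UpdateLayeredWindow(hwnd, hdcScreen, NULL, &size,
                            hdcMem, &ptSrc, 0, &blend, ULW_ALPHA);

        ReleaseDC(NULL, hdcScreen);
    }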
So one basic effect available for a layered window is opacity: overlapping the background windows and leaving them partially visible. One restriction is that the entire window is partially opaque, including the border and the title bar. That doesn't look good, so you typically want to create a borderless window and take on the burden of creating your own window frame. Requires a bunch of code, btw.
And the other basic effect is transparency, completely hiding parts of a window. You often want to combine the two effects, and that requires two layered windows: one that provides the partial opacity, and another on top, owned by the bottom one, that displays the parts of the window that are opaque, like the controls, using the color key to make its background transparent and the bottom window visible.
Beyond this, another important feature for custom windows is enabled by SetWindowRgn(). It lets you give the window a shape other than a rectangle. Again, it is important to omit the border and title bar; they don't work on a shaped window. The programming effort is to combine these features in a tasteful way that isn't too grossly different from the look-and-feel of windows created by other applications, and to write the code that paints the replacement window parts while still keeping the window functional like a regular one. Things like resizing and moving the window, for example; you typically enable those by custom-handling the WM_NCHITTEST message.
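A bare-bones sketch of those two pieces, shaping the window and handling WM_NCHITTEST so a borderless window can still be dragged:

    #include <windows.h>

    // Give the window a rounded shape; the system owns the region
    // after SetWindowRgn, so the caller must not delete it.
    void ApplyRoundedShape(HWND hwnd, int w, int h)
    {
        HRGN rgn = CreateRoundRectRgn(0, 0, w, h, 20, 20);
        SetWindowRgn(hwnd, rgn, TRUE);
    }

    // In the window procedure: report the client area as the caption,
    // so the borderless window can be dragged from anywhere.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_NCHITTEST) {
            LRESULT hit = DefWindowProc(hwnd, msg, wp, lp);
            return (hit == HTCLIENT) ? HTCAPTION : hit;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }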
In my application I take snapshots of a QGLWidget's contents for two purposes:
Not redrawing the scene all the time when only an overlay changes, using a cached pixmap instead
Letting the user take screenshots of particular plots (the 3D scene)
The first thing I tried is grabFrameBuffer(). It is natural to use this function: for the first purpose, what is currently visible in the widget is exactly what I want to cache.
PROBLEM: On some hardware (e.g. Intel integrated graphics, Mac OS X with GeForce graphics), the image obtained does not contain the current screen content, but the content before that. So if the scene is drawn twice, on the screen you see the second drawing, but in the image you see the first one (which should be the content of the back buffer?).
The second thing I tried is renderPixmap(). This renders using paintGL(), but not using paintEvent(). I have all my stuff in paintEvent(), as I use Qt's painting functionality and only a small piece of the code uses native GL (beginNativePainting(), endNativePainting()).
I also tried the regular QWidget snapshot capability (QPixmap::grabWidget(), or whatever it is called), but there the GL framebuffer comes out black.
Any ideas on how to resolve the issue and get a reliable depiction of the currently drawn scene?
Render the current scene to a framebuffer object and save the data from the framebuffer to a file. Or grab the current back buffer after glFlush(). Anything else may include artifacts or an incomplete scene.
It seems that QGLWidget::grabFrameBuffer() internally calls OpenGL's glReadPixels(). On double-buffered configurations this initially reads from the back buffer (GL_BACK); switch to the front buffer with glReadBuffer(GL_FRONT) before calling QGLWidget::grabFrameBuffer() to grab the image that is currently displayed on screen.
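A small sketch of that as a free function; grabFrameBuffer()'s bool argument controls whether the alpha channel is kept:

    #include <QGLWidget>

    // Grab what is actually displayed, not the back buffer.
    QImage grabFront(QGLWidget *w)
    {
        w->makeCurrent();
        glReadBuffer(GL_FRONT);                // read the visible buffer
        QImage img = w->grabFrameBuffer(true); // true = keep alpha
        glReadBuffer(GL_BACK);                 // restore the default
        return img;
    }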
The result of QGLWidget::grabFrameBuffer(), like that of every other OpenGL call, depends on the video driver. Qt merely forwards your call to the driver and grabs the image it returns, but the content of the image is not under Qt's control. Other than making sure you have installed the latest driver for your video card, there is not much you can do except report a bug to your video card manufacturer and pray.
I call paintGL() and glFlush() before using grabFrameBuffer(). Calling paintGL() draws the current frame again right before grabbing the framebuffer, which yields an exact copy of what is currently showing.
I'm thinking of creating a drawing program with layers, and using GDI+ to display them. I want to use GDI+ because it supports transparency.
The thing is, drawing lines to the DC is very fast, but drawing directly to a bitmap is very slow; it is only fast if you lock the bits and set pixels directly. Can I draw to multiple DCs in my WM_PAINT handler, then just draw each layer's bitmap to the memory DC? What's the best way of going about this?
Thanks
GDI+ is certainly fast enough for a drawing program. I use it (from C#) for high-speed animation (>30 fps).
It appears that you want to be able to manipulate individual pixels. This is very fast with LockBits, and although it's slightly clunky to use in C# (requiring a pointer and the unsafe keyword), it doesn't seem like it would be as difficult in C++.
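In C++ GDI+ the LockBits access pattern looks roughly like this sketch (filling every pixel with opaque red just to show the mechanics):

    #include <windows.h>
    #include <gdiplus.h>

    // Fill a bitmap with opaque red via direct pixel access.
    void FillRed(Gdiplus::Bitmap& bmp)
    {
        Gdiplus::BitmapData data;
        Gdiplus::Rect rect(0, 0, bmp.GetWidth(), bmp.GetHeight());

        bmp.LockBits(&rect, Gdiplus::ImageLockModeWrite,
                     PixelFormat32bppARGB, &data);

        for (UINT y = 0; y < bmp.GetHeight(); ++y) {
            // Rows may be padded, so step by the stride, not the width.
            BYTE* row = (BYTE*)data.Scan0 + y * data.Stride;
            for (UINT x = 0; x < bmp.GetWidth(); ++x)
                ((DWORD*)row)[x] = 0xFFFF0000;   // ARGB: opaque red
        }
        bmp.UnlockBits(&data);
    }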
You probably don't want to copy from multiple layers directly to your control surface inside a paint event. Instead, this rendering should be done to an off-screen buffer (B1). After B1 is drawn with all copying/drawing operations completed, copy it to a second off-screen buffer (B2) and then invalidate/refresh your control surface. In the control's paint event, you copy from B2 to the visible surface.
You don't want to draw directly on the visible surface with a multi-step drawing operation, since a form of flickering will result (sometimes the screen will repaint itself while your code is only partway through a multi-step operation, so the user sees an occasional half-finished frame).
You can render to a single off-screen buffer, and copy from it to the visible surface in the paint event. The main complication here is that you have to deal in some way with "stray" paint events, i.e. events not caused by your intentionally invalidating the control but by something else (like a user dragging another window over yours). If you copy from the off-screen buffer to the surface and the buffer is only halfway-drawn, you'll get flicker. If you block in the paint event until the buffer drawing is completed, you'll see "form trails" on your control, which looks even worse.
The solution is the double-buffered approach described above. Stray (or non-stray ones when you invalidate) paint events will copy from B2, which is always fully-rendered and up-to-date - thus no flicker. Double-buffering does use more memory, but in a drawing program that has multiple layers anyway, this isn't a big deal.
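A sketch of the B1/B2 scheme in C++ GDI+; the globals and the layer list are hypothetical stand-ins for your own layer management:

    #include <windows.h>
    #include <gdiplus.h>
    #include <vector>

    // Off-screen buffers created at client-area size (hypothetical globals).
    Gdiplus::Bitmap *g_b1, *g_b2;

    void RenderScene(const std::vector<Gdiplus::Bitmap*>& layers, HWND hwnd)
    {
        // Compose all layers into B1.
        Gdiplus::Graphics g1(g_b1);
        g1.Clear(Gdiplus::Color(255, 255, 255, 255));
        for (size_t i = 0; i < layers.size(); ++i)
            g1.DrawImage(layers[i], 0, 0);   // alpha-blends each layer

        // Only once B1 is complete, copy it to B2 and request a repaint.
        Gdiplus::Graphics g2(g_b2);
        g2.DrawImage(g_b1, 0, 0);
        InvalidateRect(hwnd, NULL, FALSE);
    }

    void OnPaint(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        // The paint handler only ever copies the fully-rendered B2,
        // so stray paint events never expose a half-drawn frame.
        Gdiplus::Graphics g(hdc);
        g.DrawImage(g_b2, 0, 0);
        EndPaint(hwnd, &ps);
    }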
I'm using SetPixel to draw on my window, which is the easiest way for me because I only want to set one pixel at a time. SetPixel is great, but I need to clear the colors every time I update: I could overwrite each color with black, but that's a really big waste of time. Is there some way I can overwrite all of the colors with black at once (something faster than resetting them all individually)? I create a window and then color it with SetPixel (there are other ways to draw on the window, but I only want to set one pixel/color at a time).
You should typically create a bitmap, lock it, and set and unset its pixels directly - possibly via direct memory access rather than API calls if there are a lot of updates - then unlock it and invalidate the window so that your paint handler can blit the bitmap later.
If you want to restore pixels, you can keep two bitmaps and store the values to restore in one bitmap.
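For example, a sketch using a 32-bpp DIB section: you get a raw pointer to the pixels, so clearing everything to black becomes a single memset instead of one SetPixel call per pixel. The names here are hypothetical:

    #include <windows.h>
    #include <cstring>

    HBITMAP g_dib;
    DWORD*  g_bits;     // direct pointer into the bitmap's pixel memory
    int     g_w, g_h;

    void CreateBackBuffer(int w, int h)
    {
        BITMAPINFO bmi = { 0 };
        bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth = w;
        bmi.bmiHeader.biHeight = -h;       // negative height = top-down rows
        bmi.bmiHeader.biPlanes = 1;
        bmi.bmiHeader.biBitCount = 32;
        bmi.bmiHeader.biCompression = BI_RGB;
        g_dib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS,
                                 (void**)&g_bits, NULL, 0);
        g_w = w; g_h = h;
    }

    void ClearToBlack()
    {
        memset(g_bits, 0, g_w * g_h * sizeof(DWORD));
    }

    void SetPixelFast(int x, int y, DWORD color)   // 0x00RRGGBB
    {
        g_bits[y * g_w + x] = color;       // no GDI call per pixel
    }

    void OnPaint(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        HDC mem = CreateCompatibleDC(hdc);
        HBITMAP old = (HBITMAP)SelectObject(mem, g_dib);
        BitBlt(hdc, 0, 0, g_w, g_h, mem, 0, 0, SRCCOPY);
        SelectObject(mem, old);
        DeleteDC(mem);
        EndPaint(hwnd, &ps);
    }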