Draw GDI+ graphics objects using multi-threading - C++

I have an application in which I am drawing thousands of rectangles of different sizes. When the user selects a rectangle, I draw a rotating border on it (a marching-ants animation on rectangle selection).
If the user selects only a few rectangles this is not a problem, but once the user selects many (or all) of them at once, the redrawing produces a flickering effect that looks bad and is not acceptable.
I want to parallelize the drawing so I can gain some performance from it.

I suggest you use double buffering: create a memory DC, draw on it, and then BitBlt the result to the real DC. You can find a lot of examples of this technique on the Internet.
You may also refer to this MSDN article: Flicker-Free Displays Using an Off-Screen DC
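For illustration, here is a minimal sketch of the technique inside a window procedure. It assumes plain Win32; in real code you would cache the memory DC and bitmap across paints instead of recreating them on every WM_PAINT:

```cpp
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);

    // Create an off-screen DC and a bitmap matching the client area.
    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP memBmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HBITMAP oldBmp = (HBITMAP)SelectObject(memDC, memBmp);

    // Draw everything to the memory DC first.
    FillRect(memDC, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    // ... draw the rectangles and their marching-ants borders here ...

    // Copy the finished frame to the real DC in one operation.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(memBmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
    return 0;
}
```

Because the screen only ever receives a finished frame, no intermediate drawing is visible, which is what eliminates the flicker.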

Draw OpenGL to an offscreen bitmap

I've inherited a project which renders a 3D scene directly to the window using OpenGL. The code works fine, but we're now drawing an icon onto the 3D view to "Exit 3D view mode". This also works fine, but results in a lot of flickering as the view is rapidly rotated.
I'd like to be able to draw to an off-screen bitmap (i.e. without an HWND), then draw my icon to the bitmap, then finally StretchBlt the bitmap to the window using double-buffering. We do this in other contexts (such as zooming into an image which does not need OpenGL) and it works great. My problem is that I am an OpenGL novice, and all attempts at starting with the DC of the off-screen bitmap and creating a HWND from this DC fail, usually because of selecting a pixel format for the DC.
There are a few questions asking similar things here on StackOverflow (e.g. this question, which has no accepted answer). Is this possible? If so, is there a relatively straightforward tutorial describing the procedure? If the process is extremely complex and requires detailed OpenGL knowledge, then I may just have to leave it and live with the flickering, because it is a rarely used mode in our software.
Just draw the icon with OpenGL, using a textured quad.
All this draw-to-a-bitmap, copy-to-DC, StretchBlt business involves several round trips to and from graphics memory (which wastes bandwidth), and StretchBlt will likely not be GPU accelerated. All in all, what you want to do is inefficient and may even reduce quality.
I presume you have the icon stored in your executable as a resource. The simplest way to go about it is to create a memory DC (CreateCompatibleDC) with a DIBSECTION (CreateDIBSection), draw the icon to that, and load the DIBSECTION data into an OpenGL texture. Then, to draw the icon, use glViewport to select the destination rectangle in window coordinates and use an identity transform to draw a rectangle covering the whole viewport (position values (-1,-1)→(1,1) with texture coordinate values (0,0)→(1,1) give you the right outcome).
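A sketch of that drawing step, assuming legacy fixed-function OpenGL and that iconTex was already created from the DIB section data (the function name and parameters are illustrative, not from the original post):

```cpp
// Draw 'iconTex' into the rectangle (x, y, w, h), given in OpenGL window
// coordinates (origin at the bottom-left corner of the window).
void DrawIconOverlay(GLuint iconTex, int x, int y, int w, int h)
{
    glViewport(x, y, w, h);            // destination rectangle

    // Identity transforms: clip space (-1,-1)..(1,1) maps onto the viewport.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, iconTex);
    glEnable(GL_BLEND);                // honor the icon's alpha channel
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);

    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
}
```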
Important side fix: in case your program does silly things like setting the viewport and the fixed-function-pipeline GL_PROJECTION matrix in a window resize handler, you should clean up that anti-pattern and move it to where it belongs: the drawing code.

winapi - How to use LayeredWindows properly

I am having trouble understanding the concept of the UpdateLayeredWindow API, how it works, and how to implement it. Say, for example, I want to override CFrameWnd and draw a custom, alpha-blended frame with UpdateLayeredWindow. As I understand it, the only way to draw child controls is to either blend them into the frame's bitmap buffer (created with CreateCompatibleBitmap) and redraw the whole frame, or create another window that sits on top of the layered frame and draws child controls regularly (which defeats the whole idea of layered windows, because the window region wouldn't update anyway).
If I use the first method, the whole frame is redrawn - surely this is impractical for a large application? Or is it that the frame is constantly updated anyway, so modifying the bitmap buffer wouldn't cause extra redrawing?
An example of a window similar to what I would like to achieve is the Skype notification box/incoming call box: a translucent frame/window with child controls sitting on top, which you can move around the screen.
In a practical, commercial world, how do I do it? Please don't refer me to the documentation, I know what it says; I need someone to explain practical methods of the infrastructure I should use to implement this.
Thanks.
It is very unclear exactly what aspect of layered windows gives you a problem, so I'll just noodle on about how they are implemented and explain their limitations from that.
Layered windows are implemented using a hardware feature of the video adapter called "layers". The adapter has the basic ability to combine the pixels from distinct chunks of video memory, mixing them before sending them to the monitor. An obvious example is the mouse cursor: it gets superimposed on the pixels of the desktop frame buffer, so it doesn't take a lot of effort to animate it when you move the mouse. Another is the overlay used to display a video: the video stream decoder writes the video pixels directly to a separate frame buffer. Or the shadow cast by the frame of a toplevel window on top of the windows behind it.
The video adapter allows a few simple logical operations on the two pixel values when combining their values. The first one is an obvious one, the mixing operation that lets some of the pixel value overlap the background pixel. That effect provides opacity, you can see the background partially behind the window.
The second one is color-keying, the kind of effect you see used when the weather man on TV stands in front of a weather map. He actually stands in front of a green screen, the camera mixing panel filters out the green and replaces it with the pixels from the weather map. That effect provides pure transparency.
You see this reflected in the arguments passed to UpdateLayeredWindow(), the function you must call in your code to set up the layered window. The dwFlags argument selects the basic operations supported by the video hardware: the ULW_ALPHA flag enables the opacity effect, the ULW_COLORKEY flag enables the transparency effect. The transparency effect requires the color key, which is specified with the crKey argument. The opacity effect is controlled with the pblend argument. This one is built for future expansion, expansion that hasn't happened yet. The only interesting field in the BLENDFUNCTION struct is SourceConstantAlpha, which controls the amount of opacity.
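As a hedged sketch, a per-pixel-alpha call might look like this; it assumes a WS_EX_LAYERED window and a memory DC holding a 32-bit premultiplied-ARGB DIB section already drawn with the window contents (the position and size variables are placeholders):

```cpp
BLENDFUNCTION blend = {};
blend.BlendOp = AC_SRC_OVER;
blend.SourceConstantAlpha = 255;     // overall opacity, 0..255
blend.AlphaFormat = AC_SRC_ALPHA;    // also use the per-pixel alpha channel

POINT ptSrc = { 0, 0 };
POINT ptDst = { windowX, windowY };  // desired window position on screen
SIZE  size  = { width, height };     // window size, matching the DIB section

UpdateLayeredWindow(hwnd, NULL, &ptDst, &size,
                    memDC, &ptSrc, 0, &blend, ULW_ALPHA);
```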
So one basic effect available for a layered window is opacity: overlapping the background windows and leaving them partially visible. One restriction is that the entire window is partially opaque, including the border and the title bar. That doesn't look good, so you typically want to create a borderless window and take on the burden of creating your own window frame. That requires a bunch of code, by the way.
The other basic effect is transparency, completely hiding parts of a window. You often want to combine the two effects, and that requires two layered windows: one that provides the partial opacity, and another on top, owned by the bottom one, that displays the parts of the window that are opaque, like the controls, using the color key to make its background transparent and the bottom window visible.
Beyond this, another important feature for custom windows is enabled by SetWindowRgn(). It lets you give the window a shape other than a rectangle. Again, it is important to omit the border and title bar; they don't work on a shaped window. The programming effort is to combine these features in a tasteful way that isn't too grossly different from the look-and-feel of windows created by other applications, and to write the code that paints the replacement window parts while still keeping the window functional like a regular one. Things like resizing and moving the window, for example, which you typically do by custom handling of the WM_NCHITTEST message.
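A small sketch of those two pieces, under the assumption of a borderless top-level window (the region size and corner radius are arbitrary):

```cpp
// Give the window rounded corners; after SetWindowRgn succeeds, the
// system owns the region, so don't delete it yourself.
HRGN rgn = CreateRoundRectRgn(0, 0, width, height, 20, 20);
SetWindowRgn(hwnd, rgn, TRUE);

// In the window procedure: report the client area as the caption so the
// default handler implements dragging of the frameless window for us.
case WM_NCHITTEST:
{
    LRESULT hit = DefWindowProc(hwnd, msg, wParam, lParam);
    return (hit == HTCLIENT) ? HTCAPTION : hit;
}
```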

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following:
Create a bitmap equal in size to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is painted only when it needs repainting; in other words, a control is painted only when its state has changed, etc.
The system is working fine, with one exception: while the window is being resized, a sort of tearing effect appears. I shall try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done, another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, because resizing the backbuffer is not always as fast as resizing the window; but I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (usually just a white background shows up as thin, uniform strips along the border).
Also, the dynamic controls, which move with window resizing, act jerky during sizing.
At first it seemed to me that using a constant, fullscreen-sized offscreen surface might minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip would finish completely before another started, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
Also, I wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX, then real-time resizing is even harder; OpenGL, on the other hand, seems nice for resizing without any problems, but then I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers that I should consider for custom user interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know of a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
See this question: GDI+ double buffering in C++
Also, if you need a fast response from your GUI, I would advise against GDI+.
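If WM_ERASEBKGND is the culprit, a common fix is to claim the erase was handled and let WM_PAINT draw everything, e.g. in the window procedure:

```cpp
case WM_ERASEBKGND:
    return 1;   // non-zero: background "erased"; nothing gets cleared twice
```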

Layers with GDI+

I'm thinking of creating a drawing program with layers, and using GDI+ to display them. I want to use GDI+ because it supports transparency.
The thing is, drawing lines to the DC is very fast, but drawing directly to a bitmap is very slow. It is only fast if you lock the bits and start setting pixels. Can I draw to multiple DCs in my WM_PAINT event, then just do DrawBitmap for each layer to the MemDC? What's the best way of going about this?
Thanks
GDI+ is certainly fast enough for a drawing program. I use it (from C#) for high-speed animation (>30 fps).
It appears that you want to be able to manipulate individual pixels. This is very fast with LockBits, and although it's slightly clunky to use in C# (requiring a pointer and the unsafe tag), it doesn't seem like it would be as difficult in C++.
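In C++ with GDI+ it looks roughly like this; a sketch assuming a 32bpp ARGB Gdiplus::Bitmap named bmp:

```cpp
Gdiplus::BitmapData data;
Gdiplus::Rect rect(0, 0, bmp.GetWidth(), bmp.GetHeight());
bmp.LockBits(&rect, Gdiplus::ImageLockModeWrite,
             PixelFormat32bppARGB, &data);

for (UINT y = 0; y < bmp.GetHeight(); ++y)
{
    // Rows may be padded, so step through them using the stride.
    UINT* row = (UINT*)((BYTE*)data.Scan0 + y * data.Stride);
    for (UINT x = 0; x < bmp.GetWidth(); ++x)
        row[x] = 0xFF0000FF;   // opaque blue, in ARGB order
}

bmp.UnlockBits(&data);
```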
You probably don't want to copy from multiple layers directly to your control surface inside a paint event. Instead, this rendering should be done to an off-screen buffer (B1). After B1 is drawn with all copying/drawing operations completed, copy it to a second off-screen buffer (B2) and then invalidate/refresh your control surface. In the control's paint event, you copy from B2 to the visible surface.
You don't want to draw directly on the visible surface with a multi-step drawing operation, since a form of flickering will result (sometimes the screen will repaint itself while your code is only partway through a multi-step operation, so the user sees an occasional half-finished frame).
You can render to a single off-screen buffer, and copy from it to the visible surface in the paint event. The main complication here is that you have to deal in some way with "stray" paint events, i.e. events not caused by your intentionally invalidating the control but by something else (like a user dragging another window over yours). If you copy from the off-screen buffer to the surface and the buffer is only halfway-drawn, you'll get flicker. If you block in the paint event until the buffer drawing is completed, you'll see "form trails" on your control, which looks even worse.
The solution is the double-buffered approach described above. Stray paint events (or non-stray ones, when you invalidate) will copy from B2, which is always fully rendered and up to date - thus no flicker. Double-buffering does use more memory, but in a drawing program that has multiple layers anyway, this isn't a big deal.
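A minimal sketch of this B1/B2 scheme, assuming GDI+ in C++ with member Gdiplus::Bitmap pointers b1 and b2 kept at client-area size (all names here are placeholders):

```cpp
void Render()                     // call whenever the scene changes
{
    Gdiplus::Graphics g1(b1);
    // ... compose the background and every layer into B1 here ...

    Gdiplus::Graphics g2(b2);
    g2.DrawImage(b1, 0, 0);       // B2 now holds a finished frame
    InvalidateRect(hwnd, NULL, FALSE);
}

void OnPaint(HDC hdc)             // WM_PAINT: only ever copy from B2
{
    Gdiplus::Graphics g(hdc);
    g.DrawImage(b2, 0, 0);        // stray paints also get a complete image
}
```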

How do I do print preview in Win32 C++?

I have a drawing function that just takes an HDC.
But I need to show an EXACT scaled version of what will print.
So currently, I use
CreateCompatibleDC() with a printer HDC and
CreateCompatibleBitmap() with the printer's HDC.
I figure this way the DC will have the printer's exact width and height.
And when I select fonts into this HDC, the text will be scaled exactly as the printer would.
Unfortunately, I can't do a StretchBlt() to copy this HDC's pixels to the control's HDC, since they're of different HDC types, I guess.
If I create the "memory canvas" from a window HDC with the same width and height as the printer's page,
the fonts come out WAY teeny, since they're scaled for the screen, not the page...
Should I CreateCompatibleDC() from the window's DC and
CreateCompatibleBitmap() from the printer's DC, or something?
If somebody could explain the RIGHT way to do this
(and still have something that looks EXACTLY as it would on the printer),
well, I'd appreciate it!
...Steve
Depending on how accurate you want to be, this can get difficult.
There are many approaches. It sounds like you're trying to draw to a printer-sized bitmap and then shrink it down. The steps to do that are as follows (a condensed code sketch appears after the list):
Create a DC (or better yet, an IC--Information Context) for the printer.
Query the printer DC to find out the resolution, page size, physical offsets, etc.
Create a DC for the window/screen.
Create a compatible DC (the memory DC).
Create a compatible bitmap for the window/screen, but the size should be the pixel size of the printer page. (The problem with this approach is that this is a HUGE bitmap and it can fail.)
Select the compatible bitmap into the memory DC.
Draw to the memory DC, using the same coordinates you would use if drawing to the actual printer. (When you select fonts, make sure you scale them to the printer's logical inch, not the screen's logical inch.)
StretchBlt the memory DC to the window, which will scale down the entire image. You might want to experiment with the stretch mode to see what works best for the kind of image you're going to display.
Release all the resources.
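Here is that condensed sketch; error handling is omitted, and printerName, hwndPreview, previewW/previewH, and DrawPage are placeholders, with DrawPage standing in for your existing drawing function (here also given the printer DPI, so fonts can be scaled to the printer's logical inch):

```cpp
HDC printerIC = CreateIC(TEXT("WINSPOOL"), printerName, NULL, NULL);
int pageW   = GetDeviceCaps(printerIC, HORZRES);    // page size in printer pixels
int pageH   = GetDeviceCaps(printerIC, VERTRES);
int prnDpiY = GetDeviceCaps(printerIC, LOGPIXELSY); // printer logical inch

HDC screenDC = GetDC(hwndPreview);
HDC memDC    = CreateCompatibleDC(screenDC);
// Screen-compatible bitmap at the printer's pixel size (can be HUGE).
HBITMAP bmp = CreateCompatibleBitmap(screenDC, pageW, pageH);
HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);

DrawPage(memDC, prnDpiY);   // draw using printer coordinates and DPI

// Scale the whole page down into the preview window.
SetStretchBltMode(screenDC, HALFTONE);
StretchBlt(screenDC, 0, 0, previewW, previewH,
           memDC, 0, 0, pageW, pageH, SRCCOPY);

SelectObject(memDC, old);
DeleteObject(bmp);
DeleteDC(memDC);
ReleaseDC(hwndPreview, screenDC);
DeleteDC(printerIC);
```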
But before you head in that direction, consider the alternatives. This approach involves allocating a HUGE off-screen bitmap. This can fail on resource-poor computers. Even if it doesn't, you might be starving other apps.
The metafile approach given in another answer is a good choice for many applications. I'd start with this.
Another approach is to figure out all the sizes in some fictional high-resolution unit. For example, assume everything is in 1000ths of an inch. Then your drawing routines would scale this imaginary unit to the actual dpi used by the target device.
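For instance, a sketch of the fictional-unit idea with everything laid out in 1000ths of an inch and converted at draw time (the helper name is made up):

```cpp
// Convert a length in 1000ths of an inch to pixels on the target device.
inline int ToDevice(int milliInches, int deviceDpi)
{
    return MulDiv(milliInches, deviceDpi, 1000);
}

// Example: a 1-inch square at (0.5", 0.5") on whatever device 'hdc' is.
int dpi = GetDeviceCaps(hdc, LOGPIXELSX);
Rectangle(hdc, ToDevice(500, dpi),  ToDevice(500, dpi),
               ToDevice(1500, dpi), ToDevice(1500, dpi));
```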
The problem with this last approach (and possibly the metafile one) is that GDI fonts don't scale perfectly linearly. The widths of individual characters are tweaked depending on the target resolution. On a high-resolution device (like a 300+ dpi laser printer), this tweaking is minimal. But on a 96-dpi screen, the tweaks can add up to a significant error over the length of a line. So text in your preview window might appear out-of-proportion (typically wider) than it does on the printed page.
Thus the hardcore approach is to measure text in the printer context, and measure again in the screen context, and adjust for the discrepancy. For example (using made-up numbers), you might measure the width of some text in the printer context, and it comes out to 900 printer pixels. Suppose the ratio of printer pixels to screen pixels is 3:1. You'd expect the same text on the screen to be 300 screen pixels wide. But you measure in the screen context and you get a value like 325 screen pixels. When you draw to the screen, you'll have to somehow make the text 25 pixels narrower. You can ram the characters closer together, or choose a slightly smaller font and then stretch them out.
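A sketch of that measurement step, using the made-up numbers above (the DCs, string, and DPI values are placeholders):

```cpp
SIZE prnExt, scrExt;
GetTextExtentPoint32(printerDC, text, len, &prnExt);  // e.g. 900 printer px
GetTextExtentPoint32(screenDC,  text, len, &scrExt);  // e.g. 325 screen px

// Expected screen width if the scaling were perfectly linear (3:1 -> 300 px).
int expected = MulDiv(prnExt.cx, screenDpi, printerDpi);

// The discrepancy (25 px here) must be absorbed when drawing to the screen,
// e.g. with SetTextCharacterExtra or a slightly smaller font.
int error = scrExt.cx - expected;
```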
The hardcore approach involves more complexity. You might, for example, try to detect font substitutions made by the printer driver and match them as closely as you can with the available screen fonts.
I've had good luck with a hybrid of the big-bitmap and the hardcore approaches. Instead of making a giant bitmap for the whole page, I make one large enough for a line of text. Then I draw at printer size to the offscreen bitmap and StretchBlt it down to screen size. This eliminates dealing with the size discrepancy at a slight degradation of font quality. It's suitable for actual print preview, but you wouldn't want to build a WYSIWYG editor like that. The one-line bitmap is small enough to make this practical.
The good news is only text is hard. All other drawing is a simple scaling of coordinates and sizes.
I've not used GDI+ much, but I think it did away with non-linear font scaling. So if you're using GDI+, you should just have to scale your coordinates. The drawback is that I don't think the font quality on GDI+ is as good.
And finally, if you're a native app on Vista or later, make sure you've marked your process as "DPI-aware". Otherwise, if the user is on a high-DPI screen, Windows will lie to you, claim that the resolution is only 96 dpi, and then do a fuzzy up-scaling of whatever you draw. This degrades the visual quality and can make debugging your print preview even more complicated. Since so many programs don't adapt well to high-DPI screens, Microsoft added this "high DPI scaling" by default starting in Vista.
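For a native C++ app, one quick way to opt in (a manifest entry is the cleaner route) is a single call made before any windows are created; this is a sketch assuming Vista or later:

```cpp
#include <windows.h>

// Must run before any window is created, e.g. first thing in WinMain.
SetProcessDPIAware();
```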
Edited to Add
Another caveat: if you select an HFONT into the memory DC with the printer-sized bitmap, it's possible that you get a different font than you would get when selecting that same HFONT into the actual printer DC. That's because some printer drivers will substitute common fonts with internal ones. For example, some PostScript printers will substitute an internal PostScript font for certain common TrueType fonts.
You can first select the HFONT into the printer IC, then use GDI functions like GetTextFace, GetTextMetrics, and maybe GetOutlineTextMetrics to find out about the actual font selected. Then you can create a new LOGFONT to try to more closely match what the printer would use, turn that into an HFONT, and select that into your memory DC. This is the mark of a really good implementation.
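A sketch of that interrogation, assuming printerIC, hFont, screenDpi, and printerDpi already exist:

```cpp
HFONT oldFont = (HFONT)SelectObject(printerIC, hFont);

TCHAR face[LF_FACESIZE];
GetTextFace(printerIC, LF_FACESIZE, face);  // face the driver actually used

TEXTMETRIC tm;
GetTextMetrics(printerIC, &tm);

LOGFONT lf = {};
lstrcpyn(lf.lfFaceName, face, LF_FACESIZE);
// Convert the printer-pixel character height to screen pixels.
lf.lfHeight = -MulDiv(tm.tmHeight - tm.tmInternalLeading,
                      screenDpi, printerDpi);
lf.lfWeight = tm.tmWeight;
lf.lfItalic = tm.tmItalic;

HFONT screenFont = CreateFontIndirect(&lf);  // closer match for the preview
SelectObject(printerIC, oldFont);
```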
Another Edit
I've recently written new code that uses enhanced metafiles, and that works really well, at least for TrueType and OpenType fonts when there's no font substitution. This eliminates all the work I described above trying to create a screen font that is a scaled match for the printer font. You can just run through your normal printing code and print to an enhanced metafile DC as though it were the printer DC.
One thing that might be worth trying is to create an enhanced metafile DC, draw to it as normal, and then scale the metafile using printer metrics. This is the approach used by the WTL BmpView sample. I don't know how accurate this will be, but it might be worth looking at (it should be easy to port the relevant classes to Win32, but WTL is a great replacement for raw Win32 programming, so it might be worth utilizing).
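A sketch of the metafile route (printerName, previewDC, previewW/previewH, and DrawPage are placeholders; DrawPage is the same code path as real printing):

```cpp
// Record the page into an enhanced metafile that uses printer metrics.
HDC printerIC = CreateIC(TEXT("WINSPOOL"), printerName, NULL, NULL);
HDC metaDC = CreateEnhMetaFile(printerIC, NULL, NULL, NULL);

DrawPage(metaDC);                          // normal printing code

HENHMETAFILE emf = CloseEnhMetaFile(metaDC);

// Play it back into the preview rectangle; GDI scales the records.
RECT preview = { 0, 0, previewW, previewH };
PlayEnhMetaFile(previewDC, emf, &preview);

DeleteEnhMetaFile(emf);
DeleteDC(printerIC);
```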
Well, it won't look the same, because you have a higher resolution in the printer DC, so you'll have to write a conversion function of sorts. I'd go with the method that you got working but where the text came out too small, and just multiply every position/font size by the printer page width and divide by the source window width.