Limiting rendered image size in CView::OnDraw() - MFC

In an MFC SDI application containing a single CView, I pass the output device context pDC->m_hDC to a mapping library to render the map within the CMyView::OnDraw() method.
I would like the rendered image to appear in the centre of the CView, surrounded by a black background, i.e. the image would be smaller than the CView client rect. I have experimented with CDC::SetViewportOrg() and with setting the device size in the mapping library; unfortunately, the mapping library draws outside of the device size I set.
What is the best way of limiting the image to the desired size? Should I be looking at clipping functions? Or do I have to manually draw over the undesired parts of the image?

Well, you can do it two ways.
1) You could clip the DC to the boundaries you want (e.g. with IntersectClipRect); note that SetBoundsRect only tracks drawing bounds and does not clip.
2) You could render off-screen and just BitBlt the section of the image you want into the view's DC.
Method 2 would be my preferred method, as there is no extra logic: only the part you blit ever reaches the screen :)
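For method 2, a minimal sketch inside OnDraw (the fixed image size and the RenderMap call standing in for the mapping library are assumptions):

    void CMyView::OnDraw(CDC* pDC)
    {
        CRect rcClient;
        GetClientRect(&rcClient);
        const CSize imgSize(800, 600);  // desired image size (assumed)

        // Render into an off-screen surface exactly the size of the image,
        // so the library cannot draw outside it.
        CDC memDC;
        memDC.CreateCompatibleDC(pDC);
        CBitmap bmp;
        bmp.CreateCompatibleBitmap(pDC, imgSize.cx, imgSize.cy);
        CBitmap* pOldBmp = memDC.SelectObject(&bmp);
        RenderMap(memDC.m_hDC, imgSize.cx, imgSize.cy);  // hypothetical library call

        // Black background, then blit only the image-sized region, centred.
        pDC->FillSolidRect(rcClient, RGB(0, 0, 0));
        const int x = (rcClient.Width()  - imgSize.cx) / 2;
        const int y = (rcClient.Height() - imgSize.cy) / 2;
        pDC->BitBlt(x, y, imgSize.cx, imgSize.cy, &memDC, 0, 0, SRCCOPY);

        memDC.SelectObject(pOldBmp);
    }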

Related

Direct2D/C++ - Offscreen Rendering using Bitmap

I have implemented a Direct2D Windows desktop application in C++ that shows graphical results (points, lines, and ellipses) while a simulation runs. I keep a buffer storing the simulation values for as long as the simulation is running, and at every time interval I simply plot the values. Right now I draw directly to the HWND (via an ID2D1HwndRenderTarget), like this:
    pRenderTarget->BeginDraw();
    for (each simulation result)
        pRenderTarget->DrawLine(...);
    pRenderTarget->EndDraw();
Now I want to use offscreen rendering/drawing to a bitmap, because I need to store the result as an image file on disk (the equivalent of capturing a screenshot of the simulation results). How should I proceed (with or without WIC's IWICImagingFactory for the later capture)? My plan so far:
Create an ID2D1HwndRenderTarget pHwndRenderTarget - using pD2DFactory->CreateHwndRenderTarget().
Create an ID2D1BitmapRenderTarget pBitmapRT - using pHwndRenderTarget->CreateCompatibleRenderTarget().
Create an empty ID2D1Bitmap pBmp - using pBitmapRT->CreateBitmap().
Should I draw the lines on this bitmap? If not, where should I draw them?
And in the end, between whose BeginDraw() and EndDraw() should the drawing go?
Later, at some point, I'd capture a screenshot from this bitmap. Can I achieve that without IWICImagingFactory? Any code samples would be appreciated.
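For what it's worth, a hedged sketch of the compatible-render-target pattern the steps above are reaching for (the Segment type and function names are hypothetical; error handling omitted). You draw between the bitmap target's own BeginDraw()/EndDraw(), then present its bitmap between the HWND target's pair:

    #include <d2d1.h>
    #include <vector>

    struct Segment { D2D1_POINT_2F a, b; };  // hypothetical simulation sample

    void RenderOffscreenAndPresent(ID2D1HwndRenderTarget* pRenderTarget,
                                   const std::vector<Segment>& results)
    {
        // Off-screen target compatible with the HWND target (same size/format).
        ID2D1BitmapRenderTarget* pBmpRT = nullptr;
        pRenderTarget->CreateCompatibleRenderTarget(&pBmpRT);

        ID2D1SolidColorBrush* pBrush = nullptr;
        pBmpRT->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Black), &pBrush);

        // Draw between the BITMAP target's BeginDraw()/EndDraw().
        pBmpRT->BeginDraw();
        pBmpRT->Clear(D2D1::ColorF(D2D1::ColorF::White));
        for (const Segment& s : results)
            pBmpRT->DrawLine(s.a, s.b, pBrush, 1.0f);
        pBmpRT->EndDraw();

        // The finished drawing, as a device bitmap.
        ID2D1Bitmap* pBmp = nullptr;
        pBmpRT->GetBitmap(&pBmp);

        // Present it between the HWND target's BeginDraw()/EndDraw().
        D2D1_SIZE_F sz = pRenderTarget->GetSize();
        pRenderTarget->BeginDraw();
        pRenderTarget->DrawBitmap(pBmp, D2D1::RectF(0, 0, sz.width, sz.height));
        pRenderTarget->EndDraw();

        pBmp->Release();
        pBrush->Release();
        pBmpRT->Release();
    }

As for the screenshot: an ID2D1Bitmap can't be encoded to a file directly, so WIC is hard to avoid. The usual route is to render the same content into an IWICBitmap (via ID2D1Factory::CreateWicBitmapRenderTarget) and encode that with an IWICBitmapEncoder.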

overlay support in windows/opengl environment

I have to display an image with a text overlay, where the overlay contains many strings but only one changes from frame to frame. I want to avoid redrawing the entire overlay and only update what has changed.
I tried wglCreateLayerContext, but my GPU does not seem to support it (the PIXELFORMATDESCRIPTOR's bReserved field is 0).
What is the most efficient way to redraw only part of text overlay?
Redrawing the whole framebuffer is the canonical way in OpenGL. You can use Framebuffer Objects (FBOs) to create several off-screen drawing surfaces, render the individual layers into them, and then composite the layers into the final image presented on screen.
"but only one changes from frame to frame. I want to avoid redraw of the entire overlay"
Why? Figuring out which parts need a redraw, masking them, doing only a partial clear, updating, and so on takes more time and effort than simply redrawing the whole text overlay.
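To make the FBO approach concrete, here is a minimal sketch assuming an OpenGL 3.x context (the Draw* helpers are hypothetical stand-ins for your text and scene rendering):

    GLuint overlayTex = 0, overlayFbo = 0;

    void CreateOverlayLayer(int width, int height)
    {
        // Texture that will hold the rendered text overlay.
        glGenTextures(1, &overlayTex);
        glBindTexture(GL_TEXTURE_2D, overlayTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Off-screen drawing surface backed by that texture.
        glGenFramebuffers(1, &overlayFbo);
        glBindFramebuffer(GL_FRAMEBUFFER, overlayFbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, overlayTex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void UpdateOverlay()  // call only when a string actually changes
    {
        glBindFramebuffer(GL_FRAMEBUFFER, overlayFbo);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // transparent background
        glClear(GL_COLOR_BUFFER_BIT);
        DrawAllTextStrings();                   // hypothetical text renderer
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void ComposeFrame()  // every frame: scene, then cached overlay on top
    {
        DrawScene();                            // hypothetical
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        DrawFullScreenQuad(overlayTex);         // hypothetical textured quad
        glDisable(GL_BLEND);
    }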

winapi - How to use LayeredWindows properly

I am having trouble understanding the concept of the UpdateLayeredWindow API, how it works, and how to implement it. Say, for example, I want to override CFrameWnd and draw a custom, alpha-blended frame with UpdateLayeredWindow. As I understand it, the only ways to draw child controls are to either blend them into the frame's bitmap buffer (created with CreateCompatibleBitmap) and redraw the whole frame, or to create another window that sits on top of the layered frame and draws the child controls regularly (which defeats the whole idea of layered windows, because the window region wouldn't update anyway).
If I use the first method, the whole frame is redrawn - surely this is impractical for a large application? Or is the frame constantly updated anyway, so that modifying the bitmap buffer wouldn't cause extra redrawing?
An example of a window similar to what I would like to achieve is the Skype notification box/incoming call box: a translucent frame/window with child controls sitting on top, which you can move around the screen.
In a practical, commercial world, how do I do it? Please don't refer me to the documentation; I know what it says. I need someone to explain practical methods and the infrastructure I should use to implement this.
Thanks.
It is very unclear exactly what aspect of layered windows gives you a problem, so I'll just noodle on about how they are implemented and explain their limitations from that.
Layered windows are implemented using a hardware feature of the video adapter called "layers". The adapter has the basic ability to combine the pixels from distinct chunks of video memory, mixing them before sending them to the monitor. Obvious examples are the mouse cursor, which gets superimposed on the pixels of the desktop frame buffer so that animating it when you move the mouse doesn't take a lot of effort; the overlay used to display video, where the stream decoder writes the video pixels directly to a separate frame buffer; and the shadow cast by the frame of a toplevel window on the windows behind it.
The video adapter allows a few simple logical operations on the two pixel values when combining them. The first is an obvious one: the mixing operation that blends the pixel value with the background pixel. That effect provides opacity; you can partially see the background behind the window.
The second one is color-keying, the kind of effect you see used when the weather man on TV stands in front of a weather map. He actually stands in front of a green screen, the camera mixing panel filters out the green and replaces it with the pixels from the weather map. That effect provides pure transparency.
You see this reflected in the arguments passed to UpdateLayeredWindow(), the function you must call to set up the layered window. The dwFlags argument selects the basic operations supported by the video hardware: the ULW_ALPHA flag enables the opacity effect, and the ULW_COLORKEY flag enables the transparency effect. The transparency effect requires the color key, specified with the crKey argument. The opacity effect is controlled with the pblend argument. This one is built for future expansion that hasn't happened yet; the only interesting field in the BLENDFUNCTION struct is SourceConstantAlpha, which controls the amount of opacity.
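To make that concrete, a hedged sketch of the ULW_ALPHA path (the helper name is hypothetical; it assumes the window was created with WS_EX_LAYERED and that hBmp is a 32-bit DIB section with premultiplied alpha):

    #include <windows.h>

    void PushLayeredFrame(HWND hwnd, HBITMAP hBmp, SIZE size)
    {
        HDC hdcScreen = GetDC(nullptr);
        HDC hdcMem    = CreateCompatibleDC(hdcScreen);
        HGDIOBJ old   = SelectObject(hdcMem, hBmp);

        BLENDFUNCTION blend       = {};
        blend.BlendOp             = AC_SRC_OVER;
        blend.SourceConstantAlpha = 255;           // use per-pixel alpha as-is
        blend.AlphaFormat         = AC_SRC_ALPHA;  // source carries an alpha channel

        POINT ptSrc = { 0, 0 };
        UpdateLayeredWindow(hwnd, hdcScreen, nullptr, &size,
                            hdcMem, &ptSrc, 0, &blend, ULW_ALPHA);

        SelectObject(hdcMem, old);
        DeleteDC(hdcMem);
        ReleaseDC(nullptr, hdcScreen);
    }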
So one basic effect available for a layered window is opacity: overlapping the background windows and leaving them partially visible. One restriction is that the entire window is partially opaque, including the border and the title bar. That doesn't look good, so you typically want to create a borderless window and take on the burden of drawing your own window frame. That requires a bunch of code, by the way.
The other basic effect is transparency, completely hiding parts of a window. You often want to combine the two effects, and that requires two layered windows: one that provides the partial opacity, and another on top, owned by the bottom one, that displays the opaque parts of the window, like the controls, using the color key to make its own background transparent and let the bottom window show through.
Beyond this, another important feature for custom windows is enabled by SetWindowRgn(). It lets you give the window a shape other than a rectangle. Again, it is important to omit the border and title bar; they don't work on a shaped window. The programming effort is to combine these features in a tasteful way that isn't too grossly different from the look-and-feel of windows created by other applications, and to write the code that paints the replacement window parts while still keeping the window functional like a regular one. Resizing and moving the window, for example: you typically support those by custom handling of the WM_NCHITTEST message, as in the sketch below.
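For the moving part, the usual trick is a few lines in the window procedure (a generic sketch, not specific to layered windows):

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_NCHITTEST:
        {
            // Treat the client area as the caption so the borderless,
            // shaped window can be dragged anywhere on its body.
            LRESULT hit = DefWindowProc(hwnd, msg, wParam, lParam);
            return (hit == HTCLIENT) ? HTCAPTION : hit;
        }
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }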

How to combine two HICONs into one

I have one HICON which I want to use as an overlay on another HICON, to create a resulting HICON. The result will then be used in an "owner draw" control (note: it doesn't use image lists). The overlay icon has the transparency color RGB(0, 255, 0).
How do I go about doing this in native C++? (I've only found sources that show how to do it with managed code.)
I generally agree with peterchen's answer, with some notes:
There's no reason to work with DIBs (unless you're synthesizing the image directly by altering its bits, rather than drawing using GDI functions).
Keep in mind that GetIconInfo actually creates copies of the icon's bitmaps within your process. It's your responsibility to delete them when they are no longer needed.
Unless you're going to pass the resulting HICON to a standard control or another process, there's just no need to create one. It's better to work with a bitmap (and, possibly, a mask) instead.
It's important to understand the difference between icon and bitmap.
Bitmap is a GDI object. It's valid within your process.
Icon is a User object, and its scope is not limited to your process. It wraps a bitmap and, optionally, a mask.
There are several icon types:
The simplest, consisting of a single bitmap, which is drawn as-is.
Bitmap + mask, where the mask marks solid/transparent pixels.
32-bit bitmap with an alpha channel.
Monochrome bitmap + mask, which together define the so-called AND-XOR operation performed on the target surface.
So after you get the contents of the icon (via GetIconInfo), you should determine the actual icon type, because each of these cases requires different handling.
(1) Overlay icons
Overlay icons are supported in many places in the Windows API (e.g. ListView and TreeView with the help of the ImageList, and also in the shell).
(2) As Hans says
The most straightforward way (see the sketch below) would be to:
create a memory DC on a bitmap
Draw the two icons on top of each other
create an icon from the bitmap
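A hedged sketch of those three steps (it assumes both icons share the same cx-by-cy size and produces a fully opaque result; per-pixel-alpha icons would need a 32-bit DIB section instead, and error handling is omitted):

    #include <windows.h>
    #include <vector>

    HICON CombineIcons(HICON hBase, HICON hOverlay, int cx, int cy)
    {
        HDC hdcScreen = GetDC(nullptr);
        HDC hdcMem    = CreateCompatibleDC(hdcScreen);

        // Color surface: draw both icons on top of each other. DrawIconEx
        // honours each icon's own mask, so the overlay's transparent parts
        // leave the base icon visible.
        HBITMAP hColor = CreateCompatibleBitmap(hdcScreen, cx, cy);
        HGDIOBJ old    = SelectObject(hdcMem, hColor);
        PatBlt(hdcMem, 0, 0, cx, cy, BLACKNESS);  // defined background
        DrawIconEx(hdcMem, 0, 0, hBase,    cx, cy, 0, nullptr, DI_NORMAL);
        DrawIconEx(hdcMem, 0, 0, hOverlay, cx, cy, 0, nullptr, DI_NORMAL);
        SelectObject(hdcMem, old);

        // All-zero monochrome AND mask = every pixel opaque.
        std::vector<BYTE> maskBits(((cx + 15) / 16) * 2 * cy, 0x00);
        HBITMAP hMask = CreateBitmap(cx, cy, 1, 1, maskBits.data());

        ICONINFO ii = {};
        ii.fIcon    = TRUE;
        ii.hbmColor = hColor;
        ii.hbmMask  = hMask;
        HICON hResult = CreateIconIndirect(&ii);  // copies the bitmaps

        DeleteObject(hColor);
        DeleteObject(hMask);
        DeleteDC(hdcMem);
        ReleaseDC(nullptr, hdcScreen);
        return hResult;
    }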
(3) If you insist
If you insist on doing it manually (though I see no reason to):
GetIconInfo to get the underlying bitmaps. Note that b&w icons need special treatment.
GetObject to get the BITMAP for an HBITMAP. If you don't also insist on handling various bitmap formats, you should convert them into DIB sections.
Do your magic

how do I do print preview in win32 c++?

I have a drawing function that just takes an HDC.
But I need to show an EXACT scaled version of what will print.
So currently, I use
CreateCompatibleDC() with a printer HDC and
CreateCompatibleBitmap() with the printer's HDC.
I figure this way the DC will have the printer's exact width and height.
And when I select fonts into this HDC, the text will be scaled exactly as it would be on the printer.
Unfortunately, I can't do a StretchBlt() to copy this HDC's pixels to the control's HDC, since they're of different HDC types, I guess.
If I create the "memory canvas" from a window HDC with the same width and height as the printer's page, the fonts come out WAY teeny, since they're scaled for the screen, not the page...
Should I CreateCompatibleDC() from the window's DC and CreateCompatibleBitmap() from the printer's DC, or something?
If somebody could explain the RIGHT way to do this (and still have something that looks EXACTLY as it would on the printer), well, I'd appreciate it!
...Steve
Depending on how accurate you want to be, this can get difficult.
There are many approaches. It sounds like you're trying to draw to a printer-sized bitmap and then shrink it down. The steps to do that are listed below (a sketch follows the list):
Create a DC (or better yet, an IC, an information context) for the printer.
Query the printer DC to find out the resolution, page size, physical offsets, etc.
Create a DC for the window/screen.
Create a compatible DC (the memory DC).
Create a compatible bitmap for the window/screen, but the size should be the pixel size of the printer page. (The problem with this approach is that this is a HUGE bitmap and it can fail.)
Select the compatible bitmap into the memory DC.
Draw to the memory DC, using the same coordinates you would use if drawing to the actual printer. (When you select fonts, make sure you scale them to the printer's logical inch, not the screen's logical inch.)
StretchBlt the memory DC to the window, which will scale down the entire image. You might want to experiment with the stretch mode to see what works best for the kind of image you're going to display.
Release all the resources.
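A hedged sketch of those steps (DrawPage stands in for your printer-coordinate drawing routine and is hypothetical; error handling and aspect-ratio correction are omitted):

    #include <windows.h>

    void DrawPage(HDC hdc);  // hypothetical: draws in printer coordinates

    void RenderPreview(HWND hwndPreview, const wchar_t* printerName)
    {
        // Steps 1-2: information context for the printer; query the page
        // size in printer pixels.
        HDC hdcPrinter = CreateIC(L"WINSPOOL", printerName, nullptr, nullptr);
        const int pageW = GetDeviceCaps(hdcPrinter, HORZRES);
        const int pageH = GetDeviceCaps(hdcPrinter, VERTRES);

        // Steps 3-6: screen DC, memory DC, and a screen-compatible bitmap
        // at the printer's pixel size (a potentially huge allocation).
        HDC hdcScreen = GetDC(hwndPreview);
        HDC hdcMem    = CreateCompatibleDC(hdcScreen);
        HBITMAP hBmp  = CreateCompatibleBitmap(hdcScreen, pageW, pageH);
        HGDIOBJ old   = SelectObject(hdcMem, hBmp);

        // Step 7: draw in printer coordinates; fonts must be sized against
        // GetDeviceCaps(hdcPrinter, LOGPIXELSY), not the screen's.
        PatBlt(hdcMem, 0, 0, pageW, pageH, WHITENESS);
        DrawPage(hdcMem);

        // Step 8: scale the whole page down into the preview window.
        RECT rc;
        GetClientRect(hwndPreview, &rc);
        SetStretchBltMode(hdcScreen, HALFTONE);
        SetBrushOrgEx(hdcScreen, 0, 0, nullptr);
        StretchBlt(hdcScreen, 0, 0, rc.right, rc.bottom,
                   hdcMem, 0, 0, pageW, pageH, SRCCOPY);

        // Step 9: release everything.
        SelectObject(hdcMem, old);
        DeleteObject(hBmp);
        DeleteDC(hdcMem);
        ReleaseDC(hwndPreview, hdcScreen);
        DeleteDC(hdcPrinter);
    }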
But before you head in that direction, consider the alternatives. This approach involves allocating a HUGE off-screen bitmap. This can fail on resource-poor computers. Even if it doesn't, you might be starving other apps.
The metafile approach given in another answer is a good choice for many applications. I'd start with this.
Another approach is to figure out all the sizes in some fictional high-resolution unit. For example, assume everything is in 1000ths of an inch. Then your drawing routines would scale this imaginary unit to the actual dpi used by the target device.
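For instance, a tiny helper for the 1000ths-of-an-inch scheme might look like this (sketch):

    // Convert a coordinate held in 1000ths of an inch to device pixels.
    inline int ToDevice(int milliInches, int dpi)
    {
        return MulDiv(milliInches, dpi, 1000);
    }
    // Usage: MoveToEx(hdc, ToDevice(x, GetDeviceCaps(hdc, LOGPIXELSX)),
    //                 ToDevice(y, GetDeviceCaps(hdc, LOGPIXELSY)), nullptr);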
The problem with this last approach (and possibly the metafile one) is that GDI fonts don't scale perfectly linearly. The widths of individual characters are tweaked depending on the target resolution. On a high-resolution device (like a 300+ dpi laser printer), this tweaking is minimal. But on a 96-dpi screen, the tweaks can add up to a significant error over the length of a line. So text in your preview window might appear out-of-proportion (typically wider) than it does on the printed page.
Thus the hardcore approach is to measure text in the printer context, and measure again in the screen context, and adjust for the discrepancy. For example (using made-up numbers), you might measure the width of some text in the printer context, and it comes out to 900 printer pixels. Suppose the ratio of printer pixels to screen pixels is 3:1. You'd expect the same text on the screen to be 300 screen pixels wide. But you measure in the screen context and you get a value like 325 screen pixels. When you draw to the screen, you'll have to somehow make the text 25 pixels narrower. You can ram the characters closer together, or choose a slightly smaller font and then stretch them out.
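A hedged sketch of that adjustment using per-character extra spacing (it assumes appropriately scaled fonts are already selected into both DCs; the function name is hypothetical):

    #include <windows.h>

    void DrawAdjustedLine(HDC hdcScreen, HDC hdcPrinter,
                          const wchar_t* text, int len, int x, int y)
    {
        SIZE szPrn, szScr;
        GetTextExtentPoint32W(hdcPrinter, text, len, &szPrn);
        GetTextExtentPoint32W(hdcScreen,  text, len, &szScr);

        // Expected screen width of the printer-measured text.
        const int dpiPrn = GetDeviceCaps(hdcPrinter, LOGPIXELSX);
        const int dpiScr = GetDeviceCaps(hdcScreen,  LOGPIXELSX);
        const int expected = MulDiv(szPrn.cx, dpiScr, dpiPrn);

        // Spread the discrepancy across the gaps between characters.
        const int error = expected - szScr.cx;
        SetTextCharacterExtra(hdcScreen, len > 1 ? error / (len - 1) : 0);
        TextOutW(hdcScreen, x, y, text, len);
        SetTextCharacterExtra(hdcScreen, 0);
    }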
The hardcore approach involves more complexity. You might, for example, try to detect font substitutions made by the printer driver and match them as closely as you can with the available screen fonts.
I've had good luck with a hybrid of the big-bitmap and the hardcore approaches. Instead of making a giant bitmap for the whole page, I make one large enough for a line of text. Then I draw at printer size to the offscreen bitmap and StretchBlt it down to screen size. This eliminates dealing with the size discrepancy at a slight degradation of font quality. It's suitable for actual print preview, but you wouldn't want to build a WYSIWYG editor like that. The one-line bitmap is small enough to make this practical.
The good news is only text is hard. All other drawing is a simple scaling of coordinates and sizes.
I've not used GDI+ much, but I think it did away with non-linear font scaling, so if you're using GDI+, you should just have to scale your coordinates. The drawback is that I don't think the font quality in GDI+ is as good.
And finally, if you're a native app on Vista or later, make sure you've marked your process as "DPI-aware". Otherwise, if the user is on a high-DPI screen, Windows will lie to you, claim that the resolution is only 96 dpi, and then do a fuzzy up-scaling of whatever you draw. This degrades the visual quality and can make debugging your print preview even more complicated. Since so many programs don't adapt well to higher-DPI screens, Microsoft added this "high DPI scaling" by default starting in Vista.
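Declaring this in the application manifest is the usual route, but programmatically it's one call at startup (a sketch; SetProcessDPIAware is the Vista-era API, since superseded by the per-monitor DPI APIs):

    #include <windows.h>

    int APIENTRY wWinMain(HINSTANCE, HINSTANCE, LPWSTR, int)
    {
        // Must run before any window is created; the manifest's
        // <dpiAware>true</dpiAware> setting is the preferred equivalent.
        SetProcessDPIAware();
        // ... create windows, run the message loop ...
        return 0;
    }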
Edited to Add
Another caveat: if you select an HFONT into the memory DC with the printer-sized bitmap, it's possible that you get a different font than you would get when selecting that same HFONT into the actual printer DC. That's because some printer drivers substitute common fonts with device-resident ones. For example, some PostScript printers will substitute an internal PostScript font for certain common TrueType fonts.
You can first select the HFONT into the printer IC, then use GDI functions like GetTextFace, GetTextMetrics, and maybe GetOutlineTextMetrics to find out about the actual font selected. Then you can create a new LOGFONT that tries to match what the printer would use more closely, turn that into an HFONT, and select that into your memory DC. This is the mark of a really good implementation.
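A hedged sketch of that matching step (simplified; a real implementation would compare more metrics):

    #include <windows.h>

    HFONT MatchPrinterFont(HDC hdcPrinterIC, HDC hdcScreen, HFONT hRequested)
    {
        // Discover what the printer actually selected for this HFONT.
        HGDIOBJ old = SelectObject(hdcPrinterIC, hRequested);
        TEXTMETRICW tm;
        GetTextMetricsW(hdcPrinterIC, &tm);
        wchar_t face[LF_FACESIZE];
        GetTextFaceW(hdcPrinterIC, LF_FACESIZE, face);
        SelectObject(hdcPrinterIC, old);

        // Rescale the printer's cell height to the screen's logical inch.
        LOGFONTW lf = {};
        lf.lfHeight = -MulDiv(tm.tmHeight - tm.tmInternalLeading,
                              GetDeviceCaps(hdcScreen,    LOGPIXELSY),
                              GetDeviceCaps(hdcPrinterIC, LOGPIXELSY));
        lf.lfWeight = tm.tmWeight;
        lf.lfItalic = tm.tmItalic;
        lstrcpynW(lf.lfFaceName, face, LF_FACESIZE);
        return CreateFontIndirectW(&lf);
    }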
Another Edit
I've recently written new code that uses enhanced metafiles, and that works really well, at least for TrueType and OpenType fonts when there's no font substitution. This eliminates all the work I described above to create a screen font that is a scaled match for the printer font. You can just run through your normal printing code and print to an enhanced metafile DC as though it were the printer DC.
One thing that might be worth trying is to create an enhanced metafile DC, draw to it as normal, and then scale the metafile using printer metrics. This is the approach used by the WTL BmpView sample. I don't know how accurate it will be, but it might be worth looking at (it should be easy to port the relevant classes to Win32, though WTL is a great addition to Win32 programming, so it might be worth using directly). A sketch follows.
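A hedged sketch of the metafile route (DrawPage is again a hypothetical stand-in for your normal drawing code):

    #include <windows.h>

    void DrawPage(HDC hdc);  // hypothetical: your normal drawing routine

    void PreviewViaMetafile(HWND hwndPreview, HDC hdcPrinter)
    {
        // Record with the printer as reference DC, so text and coordinates
        // are shaped for the printer, not the screen.
        HDC hdcMeta = CreateEnhMetaFile(hdcPrinter, nullptr, nullptr, nullptr);
        DrawPage(hdcMeta);  // same code path as printing
        HENHMETAFILE hEmf = CloseEnhMetaFile(hdcMeta);

        // Replay scaled into the preview client area.
        HDC hdcScreen = GetDC(hwndPreview);
        RECT rc;
        GetClientRect(hwndPreview, &rc);
        PlayEnhMetaFile(hdcScreen, hEmf, &rc);  // scales to the target rect

        DeleteEnhMetaFile(hEmf);
        ReleaseDC(hwndPreview, hdcScreen);
    }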
Well, it won't look the same, because you have a higher resolution in the printer DC, so you'll have to write a conversion function of sorts. I'd go with the method that you got working but where the text came out too small, and just multiply every position/font size by the printer page width (in pixels) and divide by the source window width.