How to use accurate GDI font size? - c++

A general question: in GDI the font size is an int, so it is not accurate when zooming text drawn by GDI in a window in or out.
Is there a simple way to use a floating-point font size in GDI so the font size stays accurate?
Thanks a lot for your kind help!

GDI text does not scale linearly, and not just because it uses only integer sizes, but also because of hinting which tries to make text look better when rendered at a resolution on the order of its stroke width. If you double the height of a GDI font, the width may not exactly double. GDI text was once hardware accelerated but no longer is on modern versions of Windows. That doesn't matter much for performance because it's relatively simple and efficient, and hardware is fast.
GDI+ text will scale linearly in the sense that doubling the height will double the width (well, it's really close). But text may look "fuzzier" because it uses grayscale antialiasing instead of ClearType subpixel rendering. GDI+ text tends to be slower than GDI because more of the work is done in software.
DirectWrite (typically used with Direct2D) scales linearly and generally looks very good. It's harder to write efficient Direct2D/DirectWrite code, and, depending on your requirements, you might have to drop back to GDI if you also need to print. If you try to write DPI-aware programs, you may find yourself doing a lot of conversions between DirectWrite's device-independent coordinates for graphics and mouse coordinates that are still device-dependent. DirectWrite is hardware accelerated, so it's fast if you use it efficiently by caching lots of intermediate data structures.

With CreateFont (and CreateFontIndirect) you specify the font size in pixels, so it remains accurate to the pixel regardless of zooming (within the constraints of the sizes available in the font that's selected--if you use a bitmapped font, scaling may be limited or nonexistent).
If you're using CreatePointFont to create the font, you specify the font size in tenths of a point, which is usually a finer granularity than a pixel, so the height gets rounded to the nearest pixel. If you really want to be sure you're specifying the height to the exact pixel, use CreateFont/CreateFontIndirect instead of CreatePointFont.
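For illustration, a minimal sketch of creating a zoomed font with a pixel-exact height via CreateFontIndirect might look like this (the face name, base size, and zoom factor are just assumptions):

    #include <windows.h>

    // Keep the size as a float and round only once, when filling in the LOGFONT.
    double zoom = 1.5;                      // current zoom factor (illustrative)
    double basePixelHeight = 20.0;          // unzoomed character height in pixels (illustrative)
    int pixelHeight = (int)(basePixelHeight * zoom + 0.5);

    LOGFONTW lf = {};
    lf.lfHeight = -pixelHeight;             // negative lfHeight = character height in pixels (MM_TEXT)
    lf.lfWeight = FW_NORMAL;
    lf.lfCharSet = DEFAULT_CHARSET;
    wcscpy_s(lf.lfFaceName, L"Segoe UI");
    HFONT hFont = CreateFontIndirectW(&lf);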

Related

Real time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to my bitmap bytes so I can manipulate them without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
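For reference, a minimal sketch of that kind of setup (the 32 bpp top-down format and variable names are assumptions, not the asker's actual code):

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;      // negative height: rows stored top-down
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = nullptr;
    HBITMAP dib = CreateDIBSection(hdcWindow, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);
    HDC hdcMem = CreateCompatibleDC(hdcWindow);
    SelectObject(hdcMem, dib);

    // ... software renderer writes the frame into 'pixels' ...
    BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);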
This seems a bit slow, so I'm looking for optimizations.
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap which are faster than my current implementation?
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, some slow color conversions are done.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB format happens to match the current mode of your device: for example, if you're using a 32 bpp DIB and your display is in that same mode, then no conversion is necessary. But if you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB, and you give up the nice properties it provides for predictable pixel layout and format.
Are there methods of drawing a bitmap which are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second (roughly 1.5 GB/s pushed to the adapter), that's actually pretty impressive.
I've also read something about DrawDibDraw(), can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

Rounding errors when scaling the rendered output of the rich edit control via EM_FORMATRANGE

I am using the EM_FORMATRANGE message to render the output of a rich text control to an arbitrary device context. However, when rendering to a bitmap, the dots-per-inch of the bitmap's device context is the same as the display device's DPI, which is 96 dots-per-inch. This is much lower than what I would like to render to. I'd rather render at a much higher DPI so that the user can zoom in, and perhaps print on a high-DPI printer later.
I suspect what happens is that the RTF control calls GetDeviceCaps with LOGPIXELSX and LOGPIXELSY to get the number of pixels per inch of the device. It then renders the document using this DPI value at a 100% zoom level. Windows display devices always return a value of 96 DPI, unless large fonts are being used on the system (as set in Control Panel) and the application is DPI-aware.
Many examples on the Internet propose scaling the output of EM_FORMATRANGE. This is so that any arbitrary DPI resolution can be achieved. Most examples generally involve using SetMapMode, SetWindowExtEx, and SetViewportExtEx (e.g. see http://social.msdn.microsoft.com/Forums/en-us/netfxbcl/thread/37fd1bfb-f07b-421d-9b5e-5f4492ffbbc3). These functions can be used to scale the rich text control's rendered output: for example, if I specify 400% scaling, then if the rich text control rendered something that was 5 pixels wide, it would actually become 20 pixels wide.
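(For concreteness, the scaling setup those examples use typically looks something like this; the 400% factor is just an example:)

    SetMapMode(hdc, MM_ANISOTROPIC);
    SetWindowExtEx(hdc, 1, 1, nullptr);    // logical (window) extent
    SetViewportExtEx(hdc, 4, 4, nullptr);  // device (viewport) extent -> 4x = 400%
    // Everything drawn afterwards, including EM_FORMATRANGE output, is scaled 4x,
    // but only after coordinates have already been rounded to whole logical units.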
Unfortunately, the old GDI functions use integers instead of floating point numbers. For example, suppose the RTF control decided that an element should be drawn at (12.7, 15.3) pixels. This would be rounded to a position of (13, 15). These rounded coordinates are passed to GDI, which then scales up the image using scaling specified by SetMapMode: for the example of 400%, it would be (13*4, 15*4), or (52, 60). But this is not accurate: the element would have better been placed at (12.7*4, 15.3*4), or (51, 61). The worst part is that for some cases, the error becomes cumulative.
I believe this is the underlying cause of a very noticeable error when scaling some simple text (shown in the original screenshots, not reproduced here):
The first screenshot was 8 point Segoe UI, scaled to 400% using EM_FORMATRANGE and SetMapMode on a 96 DPI display device context. The text has effectively become 32 point, but the space between the characters is too wide and looks unnatural.
The second screenshot was created in WordPad by entering the text as 8 point Segoe UI and then using the zoom control to set a 400% zoom level. The space between the characters looks normal. The exact same result is achieved with a 32 point font at a 100% zoom level.
To work around this issue, I have tried the following. In each case the result was equally unsatisfactory when scaled to 400%.
Using a scaling transform set using SetWorldTransform instead of the scaling done with SetMapMode and SetWindowExtEx etc.
Passing the device context for a metafile to EM_FORMATRANGE, and then scaling the metafile later.
Using SetMapMode to scale in conjunction with rendering to a metafile, and then showing the metafile later without scaling.
I believe the results are always unsatisfactory because the problem boils down to the fact that the rich edit control is rounding to the nearest integer and rendering to what it thinks is a 96 DPI device - ignoring the transforms in place. I looked into the metafile format and what I discovered is that the individual character positions are actually stored in the metafile at pixel-level resolution - that's why scaling the metafile obviously didn't work since the rounding has already happened by that point.
I can think of two real solutions that would work around this issue:
Use a device context with a higher user-specified dots per inch, such that GetDeviceCaps returns different values. (Note: some examples propose using a printer device context since printers generally have higher DPI, but I want my code to work on systems that don't have a printer and to render to an off-screen buffer).
Some way to tell the rich edit control to assume the device context has a different dots per inch than reported by GetDeviceCaps.
Anything else seems like it would still be subject to these rounding errors.
Does anyone (1) have an idea of how to implement either of the solutions I have proposed, or (2) have an alternate idea of how to achieve my goal of getting an accurate high-DPI output into a buffer?
I'm having the exact same problem.
A quick solution is to draw the text into a 100% scale bitmap, and then just scale the bitmap.
It's not the best solution, but it might work for you.
Did you find any better solutions? If so, please share them here.
Also note that this problem also occurs when you draw the text to a 100% metafile and then scale the metafile to the screen - I believe this has something to do with the GDI text drawing functions not working well with scaling.
Roey
You could multiply the point size of all the text in the control by a factor of 4 and render the control to a bitmap that's 4 times larger.
If you're populating the control yourself this would be quite straightforward. If you support arbitrary content entered by the user it would be a lot more work and would require extra effort to handle anything that wasn't text (e.g. embedded bitmaps).
I just spent two weeks on a similar problem. I needed a rich edit that was scalable for WYSIWYG editing. As we've found, the Windows rich edit control does not support scaling correctly with EM_FORMATRANGE: inter-character spacing does not change between zoom levels, and font sizes only scale in discrete steps.
Since I did not need large differences in scale, the solution I settled on was to use the windowless text edit interfaces from ITextServices to render to an internal bitmap at a fixed resolution. Then I used GDI+ to resample the internal bitmap to the needed screen size with trilinear filtering. The results emulated a scalable rich edit well enough as long as the scale difference was not too large; it was good enough for my needs.
After trying many different options I am convinced you cannot get precise scaling with the Windows rich edit control. You can write your own control that renders text, but you would need a separate draw call for every piece of text with a different style, and you would need to handle all the niceties the rich edit handles for you, like highlighting text, placing the cursor, handling mouse and keyboard input, parsing RTF text, et cetera. It would probably be best just to buy a third-party component in this case (I could not find any suitable free open source components). In case someone wants to attempt it, I will point out the relevant starting points for text rendering with the different APIs.
GDI - TextOut does not set inter-character spacing correctly. You need GetCharacterPlacement and ExtTextOut, and you need to calculate the scaling yourself (see the sketch after this list). You probably don't want to use GDI.
GDI+ - DrawString handles scaling correctly. GDI+ is a reasonable option
DirectWrite - If you are willing to limit yourself to Vista Platform Update or later, DirectWrite is the newest text API from Microsoft.
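Referring back to the GDI item above, a rough sketch of calculating scaled inter-character spacing yourself might look like this (the helper name and the rounding strategy are my own assumptions; the font itself would also have to be created at the scaled size):

    #include <windows.h>
    #include <string>
    #include <vector>

    // Draw 'text' at (x, y), scaling each character advance by 'scale' while
    // accumulating the position in floating point so rounding error does not build up.
    void DrawTextScaled(HDC hdc, int x, int y, const std::wstring& text, double scale)
    {
        std::vector<int> dx(text.size());
        GCP_RESULTSW gcp = {};
        gcp.lStructSize = sizeof(gcp);
        gcp.lpDx = dx.data();
        gcp.nGlyphs = (UINT)text.size();
        GetCharacterPlacementW(hdc, text.c_str(), (int)text.size(), 0, &gcp, GCP_USEKERNING);

        double pos = 0.0;
        int prev = 0;
        for (size_t i = 0; i < text.size(); ++i) {
            pos += dx[i] * scale;
            int rounded = (int)(pos + 0.5);
            dx[i] = rounded - prev;      // per-character advance after scaling
            prev = rounded;
        }
        ExtTextOutW(hdc, x, y, 0, nullptr, text.c_str(), (UINT)text.size(), dx.data());
    }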
Also, here is a link describing how text rendering differs between GDI and GDI+:
http://windowsclient.net/articles/gdiptext.aspx
Try using the EM_SETZOOM message to let the rich edit control scale the output itself.
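A quick sketch of that, assuming a 400% zoom and a window handle named hwndRichEdit (wParam is the numerator, lParam the denominator); note that this zooms the control's own display rather than the EM_FORMATRANGE output:

    SendMessage(hwndRichEdit, EM_SETZOOM, (WPARAM)4, (LPARAM)1);   // 4/1 = 400% zoom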

Draw scaled images using CImageList

If you have images stored in a CImageList, is there an easy way to render them (with proper transparency) scaled to fit a given target rectangle? CImageList::DrawEx takes size information but I don't believe it does scaling, only cropping?
I guess you could render them to an offscreen bitmap, then StretchBlt() them to either your device or another offscreen bitmap, letting StretchBlt() do the scaling... Getting the transparency to carry over correctly will require some fiddling, though; depending on your circumstances you may need to use AlphaBlend() instead.
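A rough sketch of that offscreen-then-StretchBlt() approach in MFC, assuming the destination background is a solid color (per-pixel alpha would need the AlphaBlend() route mentioned above); the helper name and parameters are illustrative:

    #include <afxwin.h>
    #include <afxcmn.h>

    // Draws image list entry nImage scaled into rcDest on pDC.
    void DrawScaledImageListEntry(CDC* pDC, CImageList& il, int nImage,
                                  const CRect& rcDest, COLORREF bgColor)
    {
        int cx = 0, cy = 0;
        ImageList_GetIconSize(il.GetSafeHandle(), &cx, &cy);

        CDC memDC;
        memDC.CreateCompatibleDC(pDC);
        CBitmap bmp;
        bmp.CreateCompatibleBitmap(pDC, cx, cy);
        CBitmap* pOld = memDC.SelectObject(&bmp);

        // Fill with the background the image will eventually sit on, then let the
        // image list composite its transparency against it.
        memDC.FillSolidRect(0, 0, cx, cy, bgColor);
        il.Draw(&memDC, nImage, CPoint(0, 0), ILD_TRANSPARENT);

        // Scale while copying to the target rectangle.
        pDC->SetStretchBltMode(HALFTONE);
        pDC->StretchBlt(rcDest.left, rcDest.top, rcDest.Width(), rcDest.Height(),
                        &memDC, 0, 0, cx, cy, SRCCOPY);

        memDC.SelectObject(pOld);
    }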
My opinion is that most of the Win32 image handling code, and therefore by extension the MFC equivalents like CImageList, CIcon, CImage, CBitmap, ... are inadequate for today's graphics needs. In particular, handling per-pixel transparency hardly ever works consistently. I usually store my images in a CImage and use ::AlphaBlend() everywhere to get them onto a DC, or I use GetDIBits()/SetDIBits() and directly manipulate the RGBA entries (not very practical for scaling and similar operations, I admit). On the other hand, I understand what it's like having to maintain code that already uses these things and wanting to give it a bit of a modern look...

How do I enlarge a picture so that it is 300 DPI?

The accepted answer to the question C++ Library for image recognition: images containing words to string recommended that you:
Upsize/Downsize your input image to 300 DPI.
How would I do this... I was under the impression that DPI was for monitors, not image formats.
I think the more accurate term here is resampling. You want a pixel resolution high enough to support accurate OCR. Font size (e.g. in points) is typically measured in units of length, not pixels. Since 72 points = 1 inch, we need 300/72 pixels-per-point for a resolution of 300 dpi ("pixels-per-inch"). That means a typical 12-point font has a height (or more accurately, base-line to base-line distance in single-spaced text) of 50 pixels.
Ideally, your source documents should be scanned at an appropriate resolution for the given font size, so that the font in the image is about 50 pixels high. If the resolution is too high/low, you can easily resample the image using a graphics program (e.g. GIMP). You can also do this programmatically through a graphics library, such as ImageMagick which has interfaces for many programming languages.
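As a hedged illustration, resampling with Magick++ (ImageMagick's C++ API) might look like this; the file names and the measured text height are assumptions:

    #include <Magick++.h>

    int main()
    {
        Magick::InitializeMagick(nullptr);
        Magick::Image img("scan.png");

        // Suppose the 12-point text in the scan measures about 25 px high;
        // scaling by 2x gets close to the ~50 px that 300 dpi implies.
        double scale = 50.0 / 25.0;
        Magick::Geometry newSize((size_t)(img.columns() * scale),
                                 (size_t)(img.rows() * scale));
        newSize.aspect(true);             // use the exact size, ignore aspect-ratio fitting
        img.resize(newSize);
        img.write("scan_300dpi.png");
        return 0;
    }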
DPI makes sense whenever you're relating an image in pixels to a physical device with a picture size. In the case of OCR, it usually means the resolution of the scan, i.e. how many pixels will you get for each inch of your scan. A 12-point font is meant to be printed at 12/72 inches per line, and an upper-case character might fill about 80% of that; thus it would be approximately 40 pixels tall when scanned at 300 DPI.
Many image formats have a DPI recorded in them. If the image was scanned, this should be the exact setting from the scanner. If it came from a digital camera, it always says 72 DPI, which is a default value mandated by the EXIF specification; this is because a camera can't know the original size of the image. When you create an image with an imaging program, you might have the opportunity to set the DPI to any arbitrary value. This is a convenience for you to specify how you want the final image to be used, and has no bearing on the detail contained in the image.
Here's a previous question that asks the details of resizing an image:
How do I do high quality scaling of a image?
OCR software is typically designed to work with "normal" font sizes. From an image point of view, this means that it will be looking for letters perhaps around the 30 to 100 pixel height range. Images of much higher resolution would produce letters that appear much too large for the OCR software to process efficiently. Similarly, images of lower resolution would not provide enough pixels for the software to recognise letters.
"How would I do this... I was under the impression that dpi was for monitors, not image formats."
DPI stands for dots per inch. What does it have to do with monitors? Well, we have a pixel made of three RGB subpixels. The higher the DPI, the more details you cram into that space.
DPI is a useful measurement for displays and prints, but it means nothing for image formats themselves.
The reason DPI is tagged inside some formats is to instruct devices to display at that resolution, but from what I understand, virtually all devices ignore that instruction and do their best to optimize the image for a particular output.
You can change 72 dpi to 1 dpi or 6000 dpi in an image format and it won't make a difference whatsoever on a monitor. "Upsize/downsize to 300 dpi" makes no sense. Resampling does not change DPI either. Try it in Photoshop, uncheck "Resample" when changing the DPI and you'll see no difference whatsoever. It will NOT get bigger or smaller.
DPI is totally meaningless for image formats, IMO.
If your goal is OCR, DPI makes sense as the number of dots in your image for each inch in the original scanned document. If your DPI is too low, the information is gone forever, and even bicubic interpolation is not going to do a brilliant job recovering it. If your DPI is too high, it's easy to throw away bits.
To get the job done, I'm a big fan of the netpbm/pbmplus toolset; the tool to start with is pnmscale, although if you've got a bitmap you may want to consider related tools such as pbmreduce.

Draw array of bits (RGB) in Windows

I have an array of raw RGB data.
I would like to know how I can draw these pixels on the screen in Windows.
Right now I use the API function DrawDIBits, but I have to flip my image data upside down first.
I always use SetDIBitsToDevice, but DrawDIBits could be okay as well (haven't checked).
As for the upside-down nature of the windows blit functions:
There is a workaround. If you pass a BITMAPINFOHEADER or BITMAPINFO structure to the function, just negate the value in the biHeight member. This will tell GDI to do the blit as if the height were positive, but to interpret the data as being stored in top-down order.
You may get a nice speed improvement by this "hack" as well.
If you want to shuffle the byte order of the pixels (e.g. turn ARGB into BGRA or so), you can use the BITMAPV4HEADER structure and tell GDI how your pixel data is organized. That's functionality that is rarely used, but it has worked since Win98. I'd say it's safe to use these days.
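A small sketch of that, assuming a 32 bpp top-down buffer (the variable names are illustrative):

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;      // negative: first row of the buffer is the top row
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    SetDIBitsToDevice(hdc,
                      0, 0, width, height, // destination rectangle
                      0, 0,                // source origin
                      0, height,           // first scan line, number of scan lines
                      pixelData, &bmi, DIB_RGB_COLORS);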
If you mean drawing it without reversing the (R,G,B) into (B,G,R), I don't know an automatic way to do that.
If you mean drawing it without padding each line to a multiple of 4 bytes, you can do it by drawing each line one at a time. It will be slow, though.