I see parameters on CFdocument like height, width, and unit, but NO DPI. How do you set DPI?
I don't think it's possible.
A PDF is a vector format (descended from PostScript), so any text will scale nicely; if your image quality is bad for print, just embed a higher-resolution image.
This has worked for me previously.
A general question: in GDI the font size is an int, so text drawn by GDI doesn't stay accurate when you zoom the window in or out.
Is there a simple way to use a floating-point font size in GDI so that the size stays accurate?
Thanks a lot for your kind help!
GDI text does not scale linearly, and not just because it uses only integer sizes, but also because of hinting which tries to make text look better when rendered at a resolution on the order of its stroke width. If you double the height of a GDI font, the width may not exactly double. GDI text was once hardware accelerated but no longer is on modern versions of Windows. That doesn't matter much for performance because it's relatively simple and efficient, and hardware is fast.
GDI+ text will scale linearly in the sense that doubling the height will double the width (well, it's really close). But text may look "fuzzier" because it uses grayscale antialiasing instead of ClearType subpixel rendering. GDI+ text tends to be slower than GDI because more of the work is done in software.
DirectWrite (which works with Direct2D) scales linearly and generally looks very good. It's harder to write efficient Direct2D/DirectWrite code, and, depending on your requirements, you might have to drop back to GDI if you also need to print. If you try to write DPI-aware programs, you may find yourself doing a lot of conversions between DirectWrite's device-independent coordinates for graphics and mouse coordinates that are still device-dependent. DirectWrite is hardware accelerated, so it's fast if you use it efficiently by caching lots of intermediate data structures.
With CreateFont (and CreateFontIndirect) you specify the font height in logical units, which are pixels under the default MM_TEXT mapping mode, so it remains accurate to the pixel regardless of zooming (within the constraints of the sizes available in the selected font; if you use a bitmapped font, scaling may be limited or nonexistent).
If you're using CreatePointFont to create the font, you specify the size in tenths of a point, which usually works out to less than a pixel, so it gets rounded to the nearest pixel. If you really want to control the height to the nearest pixel, use CreateFont/CreateFontIndirect instead of CreatePointFont.
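If it helps, here is a rough sketch of that approach (my own illustrative helper, not a drop-in; it assumes the default MM_TEXT mapping mode, and the 10.5-point size in the usage line is just an example):

    // Sketch: convert a fractional point size to a pixel height for CreateFont.
    // Assumes MM_TEXT, so logical units are pixels; error checking omitted.
    #include <windows.h>
    #include <cmath>

    HFONT CreateFontFromPoints(HDC hdc, double pointSize, const wchar_t* faceName)
    {
        // Logical pixels per inch on this device (typically 96 for the screen).
        int dpiY = GetDeviceCaps(hdc, LOGPIXELSY);

        // 72 points = 1 inch; a negative height requests the character height
        // (excluding internal leading), which is how point sizes are usually mapped.
        int height = -static_cast<int>(std::lround(pointSize * dpiY / 72.0));

        return CreateFontW(height, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                           DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                           CLEARTYPE_QUALITY, DEFAULT_PITCH | FF_DONTCARE, faceName);
    }

    // Usage: HFONT hFont = CreateFontFromPoints(hdc, 10.5, L"Segoe UI");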
I am using the EM_FORMATRANGE message to render the output of a rich text control to an arbitrary device context. However, when rendering to a bitmap, the dots-per-inch of the bitmap's device context is the same as the display device's DPI, which is 96 dots-per-inch. This is much lower than what I would like to render to. I'd rather render at a much higher DPI so that the user can zoom in, and perhaps print on a high-DPI printer later.
I suspect what happens is that the RTF control calls GetDeviceCaps with LOGPIXELSX and LOGPIXELSY to get the number of pixels per inch of the device. It then renders the document using this DPI value at a 100% zoom level. Windows display devices always return a value of 96 DPI, unless large fonts are being used on the system (as set in Control Panel) and the application is DPI-aware.
Many examples on the Internet propose scaling the output of EM_FORMATRANGE. This is so that any arbitrary DPI resolution can be achieved. Most examples generally involve using SetMapMode, SetWindowExtEx, and SetViewportExtEx (e.g. see http://social.msdn.microsoft.com/Forums/en-us/netfxbcl/thread/37fd1bfb-f07b-421d-9b5e-5f4492ffbbc3). These functions can be used to scale the rich text control's rendered output: for example, if I specify 400% scaling, then if the rich text control rendered something that was 5 pixels wide, it would actually become 20 pixels wide.
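For reference, the scaling setup those examples describe boils down to something like this (illustrative values for a 400% scale-up; the actual examples vary in the extents they pick):

    // Sketch of the anisotropic mapping those examples use (400% scale-up).
    SetMapMode(hdc, MM_ANISOTROPIC);
    SetWindowExtEx(hdc, 100, 100, NULL);    // logical units...
    SetViewportExtEx(hdc, 400, 400, NULL);  // ...mapped onto 4x as many device pixels
    // Everything drawn on hdc is now scaled 4:1 by GDI, including positions the
    // rich edit control has already rounded to whole logical pixels.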
Unfortunately, the old GDI functions use integers instead of floating point numbers. For example, suppose the RTF control decided that an element should be drawn at (12.7, 15.3) pixels. This would be rounded to a position of (13, 15). These rounded coordinates are passed to GDI, which then scales up the image using scaling specified by SetMapMode: for the example of 400%, it would be (13*4, 15*4), or (52, 60). But this is not accurate: the element would have better been placed at (12.7*4, 15.3*4), or (51, 61). The worst part is that for some cases, the error becomes cumulative.
I believe this is the underlying cause of this very noticeable error when scaling some simple text:
The first example is 8 point Segoe UI, scaled to 400% using EM_FORMATRANGE and SetMapMode on a 96 DPI display device context. The text has now become 32 point in size, but the space between each character is too large and looks unnatural.
The second example was created in WordPad by entering the text as 8 point Segoe UI and then using the zoom control to set a 400% zoom level. The space between each character looks normal. The exact same result is achieved with a 32 point font at a 100% zoom level.
To work around this issue, I have tried the following. For each thing tried, the result has been identically unsatisfactory when scaled to 400%.
Using a scaling transform set using SetWorldTransform instead of the scaling done with SetMapMode and SetWindowExtEx etc.
Passing the device context for a metafile to EM_FORMATRANGE, and then scaling the metafile later.
Using SetMapMode to scale in conjunction with rendering to a metafile, and then showing the metafile later without scaling.
I believe the results are always unsatisfactory because the problem boils down to the fact that the rich edit control is rounding to the nearest integer and rendering to what it thinks is a 96 DPI device - ignoring the transforms in place. I looked into the metafile format and what I discovered is that the individual character positions are actually stored in the metafile at pixel-level resolution - that's why scaling the metafile obviously didn't work since the rounding has already happened by that point.
I can think of two real solutions that would work around this issue:
Use a device context with a higher user-specified dots per inch, such that GetDeviceCaps returns different values. (Note: some examples propose using the printer device since they generally have higher DPI, but I want my code to work on systems that don't have a printer and be able to render to an off-screen buffer).
Some way to tell the rich edit control to assume the device context has a different dots per inch than reported by GetDeviceCaps.
Anything else seems like it would still be subject to these rounding errors.
Does anyone (1) have an idea of how to implement either of the solutions I have proposed, or (2) have an alternate idea of how to achieve my goal of getting an accurate high-DPI output into a buffer?
I'm having the exact same problem.
A quick solution is to draw the text into 100% scale bitmap, and then just scale the bitmap.
It's not the best solution, but it might work for you.
Did you find any better solutions? If so, please share them here.
Also note that this problem also occurs when you draw the text to a 100% metafile and then scale the metafile to the screen - I believe this has something to do with the GDI text drawing functions not working well with scaling.
Roey
You could multiply the point size of all the text in the control by a factor of 4 and render the control to a bitmap that's 4 times larger.
If you're populating the control yourself this would be quite straightforward. If you support arbitrary content entered by the user it would be a lot more work and would require extra effort to handle anything that wasn't text (e.g. embedded bitmaps).
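If the formatting is uniform, the scaling step might look roughly like this (a sketch with a hypothetical helper name; text with mixed formatting would need per-run iteration instead of a single SCF_SELECTION call):

    // Sketch: multiply every character's point size by 4 before rendering the
    // control to a 4x larger bitmap. Assumes one uniform character format.
    #include <windows.h>
    #include <richedit.h>

    void ScaleRichEditFontBy4(HWND hwndRichEdit)
    {
        CHARFORMAT2W cf = { 0 };
        cf.cbSize = sizeof(cf);

        SendMessageW(hwndRichEdit, EM_SETSEL, 0, -1);   // select all text
        SendMessageW(hwndRichEdit, EM_GETCHARFORMAT, SCF_SELECTION, (LPARAM)&cf);

        if (cf.dwMask & CFM_SIZE)
        {
            cf.yHeight *= 4;          // yHeight is in twips (1/20 of a point)
            cf.dwMask = CFM_SIZE;
            SendMessageW(hwndRichEdit, EM_SETCHARFORMAT, SCF_SELECTION, (LPARAM)&cf);
        }
    }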
I just spent two weeks on a similar problem. I needed a rich edit that was scalable for WYSIWYG editing. As we've found, the Windows rich edit control does not support scaling correctly with EM_FORMATRANGE: inter-character spacing does not change between zoom levels, and font sizes only scale in discrete steps.
Since I did not need large differences in scale the solution I settled on was to use the windowless text edit interfaces from ITextServices to render to an internal bitmap at a fixed resolution. Then I used GDI+ to resample the internal bitmap to the needed screen size with trilinear filtering. The results emulated a scalable rich edit well enough as long as scale difference were not too large, it was good enough for my needs.
After trying many different options I am convinced you cannot get precise scaling with the Windows rich edit control. You can write your own control that renders text. However, you would need a separate draw call for every piece of text with a different style. You would also need to handle all the niceties rich edit handles for you, like highlighting text, placing the cursor, handling mouse and keyboard input, parsing RTF text, et cetera. It would probably be best just to buy a third-party component in this case (I could not find any suitable free open source components). In case someone wants to attempt it, I will point out the relevant starting points for text rendering with the different APIs.
GDI - TextOut does not set inter-character spacing correctly. You need GetCharacterPlacement and ExtTextOut, and you need to calculate the scaling yourself (see the sketch after this list). You probably don't want to use GDI.
GDI+ - DrawString handles scaling correctly. GDI+ is a reasonable option.
DirectWrite - If you are willing to limit yourself to Vista Platform Update or later, DirectWrite is the newest text API from Microsoft.
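To illustrate the GDI route, here is a rough sketch of measuring advances with GetCharacterPlacement and drawing them at a scaled size with ExtTextOut (a hypothetical helper; it assumes the caller created both fonts, and error checking is omitted):

    // Sketch: GDI text with explicit, scale-accurate inter-character spacing.
    // hFontBase is the font at the reference size, hFontScaled the same face
    // at 'scale' times that size.
    #include <windows.h>
    #include <cmath>
    #include <string>
    #include <vector>

    void DrawScaledText(HDC hdc, HFONT hFontBase, HFONT hFontScaled,
                        int x, int y, const std::wstring& text, double scale)
    {
        int n = (int)text.length();
        std::vector<int> dx(n);

        GCP_RESULTSW results = { 0 };
        results.lStructSize = sizeof(results);
        results.lpDx = dx.data();
        results.nGlyphs = n;

        // Measure the natural advance of each character at the reference size.
        HGDIOBJ old = SelectObject(hdc, hFontBase);
        GetCharacterPlacementW(hdc, text.c_str(), n, 0, &results, 0);

        // Scale the advances in floating point and round as late as possible,
        // so rounding error does not accumulate across the string.
        std::vector<int> scaledDx(n);
        double acc = 0.0;
        int prev = 0;
        for (int i = 0; i < n; ++i)
        {
            acc += dx[i] * scale;
            int rounded = (int)std::lround(acc);
            scaledDx[i] = rounded - prev;
            prev = rounded;
        }

        // Draw the glyphs at the scaled size, but with our own advances.
        SelectObject(hdc, hFontScaled);
        ExtTextOutW(hdc, x, y, 0, NULL, text.c_str(), n, scaledDx.data());
        SelectObject(hdc, old);
    }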
Also, here is a link describing how text rendering differs between GDI and GDI+:
http://windowsclient.net/articles/gdiptext.aspx
Try using the EM_SETZOOM message to let the rich edit control scale the output itself.
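For example, a 400% zoom would be requested along these lines (a short sketch; the 4:1 ratio is just the value from the question, and (0, 0) restores 100%):

    // Sketch: ask the rich edit control itself to scale its output to 400%.
    SendMessageW(hwndRichEdit, EM_SETZOOM, 4, 1);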
Summary
Using Windows GDI to convert 24-bit color to indexed color, it seems GDI chooses colors which are "close enough" even though there are exact matches in the supplied palette.
Can anyone confirm this as a GDI issue or am I making a mistake somewhere?
Maybe there's a "please check the whole palette for color matches" flag which I've failed to find?
Note: This is not about quantizing. The source is 24-bit but contains 256 or fewer colors so an exact palette is trivial to calculate. The problem is GDI doesn't use the full palette.
Workaround
I've worked around the problem by mapping the colors myself but I'd prefer to use GDI as it should be better optimized. Problem is, it seems to be "fast but wrong."
Detailed description
My source image is 24-bit but uses 256 (or fewer) colors. I generate an exact palette for it and ask GDI to transfer the image into an indexed bitmap using that palette. For some pixels GDI chooses similar, but not exact, colors even though there are exact colors elsewhere in the palette. This ruins smooth gradients.
This problem happens with:
SetDIBitsToDevice
StretchDIBits
BitBlt
StretchBlt
The problem does not happen with:
SetPixel or SetPixelV in a loop (incredibly slow!)
Using my own code to do the mapping
I've tested this on:
Windows 7 (NVidia hardware/drivers)
Windows Vista (ATI hardware/drivers)
Windows 2000 (VMware hardware/drivers)
In every test I get the same results. (Not just the wrong colours but always the same wrong colors.)
I don't think the issue is color management (ICM/ICC profiles/etc.) as most of the APIs say they don't use it; I've tried explicitly turning it off on the GDI DC as well as via the V5 bitmap header, and I don't think it would apply within my vanilla Win2k VM.
Test Project
Code for a simple Win32/GDI/VS2008 test project can be found here:
http://www.pretentiousname.com/data/GdiIndexColor.zip
The Test1 function within Win32UI.cpp is the actual test. It has two arrays of RGBQUADs, one the source image and the other the exact palette for it. It verifies that the palette really is exact and then asks GDI to convert the image using the APIs mentioned above, testing the result each time. For each test it'll tell you the first incorrect pixel's before & after colors, or tell you that all pixels are correct if it worked.
Thanks!
Thanks for reading my question! Sorry if it's the result of me doing something really dumb! :-)
I ran into this exact same problem, eventually contacted Microsoft and provided them with a test case. In the test case I provided a gradient image that had 128 colors in a 24bit DIB, I then converted that to an 8bit DIB that was created with a color table containing all 128 colors from the 24bit image. After conversion, the 8 bit image had only used 65 of the 128 colors.
To sum up their response:
This is not a bug; GDI does use a "close enough" calculation when down-converting the color depth of an image. This is not really documented anywhere, and the only way to ensure all of the original colors convert exactly is to manipulate the pixels yourself.
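For what it's worth, the manual conversion they point you toward can be fairly small (a sketch with illustrative names; it assumes the 24-bit source really does contain only colors that appear in the palette):

    // Sketch: exact 24-bit -> indexed mapping done by hand, so GDI's
    // "close enough" matcher is never involved.
    #include <windows.h>
    #include <map>
    #include <vector>

    std::vector<BYTE> MapToPalette(const RGBQUAD* pixels, size_t count,
                                   const RGBQUAD* palette, size_t paletteSize)
    {
        // Build an exact-match lookup keyed on the packed RGB value.
        std::map<DWORD, BYTE> lookup;
        for (size_t i = 0; i < paletteSize; ++i)
        {
            DWORD key = (palette[i].rgbRed << 16) |
                        (palette[i].rgbGreen << 8) |
                         palette[i].rgbBlue;
            lookup[key] = (BYTE)i;
        }

        std::vector<BYTE> indices(count);
        for (size_t i = 0; i < count; ++i)
        {
            DWORD key = (pixels[i].rgbRed << 16) |
                        (pixels[i].rgbGreen << 8) |
                         pixels[i].rgbBlue;
            indices[i] = lookup.at(key);  // throws if a color is missing from the palette
        }
        return indices;
    }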
Are you using SetDIBColorTable()? This article seems to imply that, when drawing to a DIB, it is not sufficient to call SelectPalette() but that SetDIBColorTable() also needs to be called to set the palette for the DIB:
However, if the application is using a DIB section, you create a logical palette from the DIB colour table as usual and then also pass the DIB colour table to the DIB section with a call to SetDIBColorTable(). Despite what the "Platform SDK" documentation of RealizePalette() appears to imply, RealizePalette() does not adjust the colour table of the DIB section.
The article contains some more information on drawing into palettized DIBs that may be relevant (see the section "Palettes and DIB sections").
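In practice that call is a one-liner (a sketch; it assumes hdcMem already has your 8-bit DIB section selected and that palette points to the 256 RGBQUAD entries you built):

    // Sketch: hand the 8-bit DIB section the exact palette, rather than
    // relying on SelectPalette/RealizePalette alone.
    UINT entriesSet = SetDIBColorTable(hdcMem, 0, 256, palette);
    // entriesSet should come back as 256 on success.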
I vaguely remember that you also need to call RealizePalette(hdc) after a palette is selected into a DC. We ditched our palette code so long ago that the code isn't even in our source tree anymore. I see from your code that you already tried this, but I suggest that you might want to play with that some more.
I do remember that the palette code was pretty fragile, and we stopped using it as soon as we could.
Some older AVI files would have 8 bit palettized video with a palette embedded in the file, so playback code for those files would need to load and realize a palette. I remember that realizing didn't do anything unless you were the foreground app, but that SHOULD only apply to screen DCs and not memory DCs.
If you searched around for sample source code that could play palettized AVI's you might find something that shows the magic formula for getting palettes to work.
Sorry I can't be more help.
The accepted answer to the question C++ Library for image recognition: images containing words to string recommended that you:
Upsize/Downsize your input image to 300 DPI.
How would I do this... I was under the impression that DPI was for monitors, not image formats.
I think the more accurate term here is resampling. You want a pixel resolution high enough to support accurate OCR. Font size (e.g. in points) is typically measured in units of length, not pixels. Since 72 points = 1 inch, we need 300/72 pixels-per-point for a resolution of 300 dpi ("pixels-per-inch"). That means a typical 12-point font has a height (or more accurately, base-line to base-line distance in single-spaced text) of 50 pixels.
Ideally, your source documents should be scanned at an appropriate resolution for the given font size, so that the font in the image is about 50 pixels high. If the resolution is too high/low, you can easily resample the image using a graphics program (e.g. GIMP). You can also do this programmatically through a graphics library, such as ImageMagick which has interfaces for many programming languages.
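If you happen to be doing this from Win32 code like elsewhere on this page, plain GDI can also do a basic resample. Note this is a different tool than the GIMP/ImageMagick suggestion above and only a rough sketch (hbmSource, hbmResampled and the dimensions are placeholder names):

    // Sketch: basic resampling of an HBITMAP with GDI's StretchBlt.
    HDC hdcSrc = CreateCompatibleDC(NULL);
    HDC hdcDst = CreateCompatibleDC(NULL);
    SelectObject(hdcSrc, hbmSource);
    SelectObject(hdcDst, hbmResampled);

    SetStretchBltMode(hdcDst, HALFTONE);   // average source pixels instead of picking one
    SetBrushOrgEx(hdcDst, 0, 0, NULL);     // recommended after setting HALFTONE
    StretchBlt(hdcDst, 0, 0, dstW, dstH,
               hdcSrc, 0, 0, srcW, srcH, SRCCOPY);

    DeleteDC(hdcSrc);
    DeleteDC(hdcDst);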
DPI makes sense whenever you're relating an image in pixels to a physical device with a picture size. In the case of OCR, it usually means the resolution of the scan, i.e. how many pixels will you get for each inch of your scan. A 12-point font is meant to be printed at 12/72 inches per line, and an upper-case character might fill about 80% of that; thus it would be approximately 40 pixels tall when scanned at 300 DPI.
Many image formats have a DPI recorded in them. If the image was scanned, this should be the exact setting from the scanner. If it came from a digital camera, it always says 72 DPI, which is a default value mandated by the EXIF specification; this is because a camera can't know the original size of the image. When you create an image with an imaging program, you might have the opportunity to set the DPI to any arbitrary value. This is a convenience for you to specify how you want the final image to be used, and has no bearing on the detail contained in the image.
Here's a previous question that asks the details of resizing an image:
How do I do high quality scaling of a image?
OCR software is typically designed to work with "normal" font sizes. From an image point of view, this means that it will be looking for letters perhaps around the 30 to 100 pixel height range. Images of much higher resolution would produce letters that appear much too large for the OCR software to process efficiently. Similarly, images of lower resolution would not provide enough pixels for the software to recognise letters.
"How would I do this... I was under the impression that dpi was for monitors, not image formats."
DPI stands for dots per inch. What does it have to do with monitors? Well, a monitor's pixel is made of three RGB subpixels, and the higher the DPI, the more detail you cram into a given physical area.
DPI is a useful measurement for displays and prints, but it means nothing for the image data itself.
The reason a DPI value is tagged inside some formats is to instruct devices to display the image at that resolution, but from what I understand, virtually all of them ignore that instruction and do their best to optimize the image for a particular output.
You can change 72 dpi to 1 dpi or 6000 dpi in an image file and it won't make any difference whatsoever on a monitor. "Upsize/downsize to 300 dpi" makes no sense, and resampling does not change the DPI value either. Try it in Photoshop: uncheck "Resample" when changing the DPI and you'll see no difference whatsoever. The image will NOT get bigger or smaller.
DPI is totally meaningless for image formats, IMO.
If your goal is OCR, DPI makes sense as the number of dots in your image for each inch of the original scanned document. If your dpi is too low, the information is gone forever, and even bicubic interpolation is not going to do a brilliant job recovering it. If your dpi is too high, it's easy to throw away bits.
To get the job done, I'm a big fan of the netpbm/pbmplus toolset; the tool to start with is pnmscale, although if you've got a bitmap you'll want to consider related tools such as pbmreduce.
Hopefully someone has an answer, and it's not TOO complex. I'm working on a C++ dll (no C# or .Net, fully static DLL).
Anyhow, it's working on building monochrome bitmaps. I have all of it working EXCEPT the resolution. I get the device context, get a compatible device context, build the bitmap, draw what I need (as black/white), and can save. All this works fine. However, I can't figure out how to set the resolution of the bitmap.
While doing some testing from another utility under C#, I can create a bitmap and set the resolution. In doing so, I ran a routine to generate the same file content with a resolution parameter from 1 to 300. Each image came out exactly the same EXCEPT for the values in the "biCompression" DWORD property. The default is the screen resolution of 96x96, but it obviously needs to change for printers at 300x300, and even some at 203x203.
Are you absolutely sure? The description of the behavior you observe sounds fishy to me and I would suspect the code that you're using to write your bitmaps or your code that reads them back in.
Are you sure you don't want to set biXPelsPerMeter and biYPelsPerMeter? Those two fields tell you how many pixels per meter in X and Y, which you can use to set the DPI. biCompression only deals with the compression type of the bitmap, e.g., RLE, JPG, PNG, etc.
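As a rough sketch of what that looks like (illustrative; 300 DPI is just the printer value mentioned in the question):

    // Sketch: store a 300 DPI resolution in a BITMAPINFOHEADER.
    // 1 inch = 25.4 mm, so pixels-per-meter = dpi * 10000 / 254.
    BITMAPINFOHEADER bih = { 0 };
    bih.biSize = sizeof(bih);
    // ... width/height/planes/bit count set up as before ...
    int dpi = 300;
    bih.biXPelsPerMeter = MulDiv(dpi, 10000, 254);   // 11811 for 300 DPI
    bih.biYPelsPerMeter = MulDiv(dpi, 10000, 254);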
Thanks for the input; I'll look into biXPelsPerMeter and biYPelsPerMeter too. I'll double check the format, and what was set... if so, you may have hit it with a second set of eyes (minds) on my issue.
Thanks