How to check whether aspect ratio auto adjustment is enabled on a monitor - C++

The game application is written in C++ and uses DirectX 8.
I get the monitor's maximum resolution and use it to calculate the aspect ratio.
I then use this value to correct the game's rendering (scaling and setting clipping so that a normal 4:3 image with black borders appears on widescreen monitors).
How can I check whether the monitor currently has aspect ratio auto adjustment enabled?
When it does, my scaling combined with the monitor's own scaling makes the resulting image over-scaled.
Thanks
EDIT:
I have seen the casual game "Royal Envoy" handle different monitor resolutions correctly both with and without aspect ratio auto adjustment, but I don't know how they do it.

Often this is implemented in the monitor itself, so the application has no way to detect it.
Note that you also need to handle the case where the monitor resolution is set to 1280x1024, a 5:4 resolution with non-square pixels (the display itself is still physically 4:3). Assuming your pixels are square is not always correct. (There is probably a way to query the OS or the window manager for this information.)
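This won't tell you whether the monitor's scaler is stretching the image, but as a rough sketch (assuming a Win32 build), you can at least compare the pixel aspect ratio with the physical dimensions that GetDeviceCaps reports, which catches the 1280x1024-on-a-4:3-panel case; HORZSIZE/VERTSIZE are only as reliable as the monitor's reported data:

#include <windows.h>
#include <cmath>
#include <cstdio>

int main()
{
    HDC screen = GetDC(nullptr);
    int pxW = GetDeviceCaps(screen, HORZRES);    // current resolution, pixels
    int pxH = GetDeviceCaps(screen, VERTRES);
    int mmW = GetDeviceCaps(screen, HORZSIZE);   // reported physical size, millimetres
    int mmH = GetDeviceCaps(screen, VERTSIZE);
    ReleaseDC(nullptr, screen);

    double pixelAspect    = double(pxW) / pxH;   // 1.25 for 1280x1024
    double physicalAspect = double(mmW) / mmH;   // ~1.33 for a 4:3 panel
    bool squarePixels = std::fabs(pixelAspect - physicalAspect) < 0.02;
    std::printf("pixel %.3f, physical %.3f, square pixels: %s\n",
                pixelAspect, physicalAspect, squarePixels ? "yes" : "no");
    return 0;
}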

Related

How do I change the resolution of a window in SDL? Everything is too large

I am making a game in SDL and had an oversight regarding the resolution. Everything (the sprites) is too large, and I want to know if I can bump up the resolution so everything is smaller without changing every sprite or modifying how the surfaces are created. Here is what I am dealing with:
Since SDL 2.0.0, you can use the function SDL_RenderSetLogicalSize to set a device independent resolution for rendering.
This way, you can set up your renderer with the desired target resolution and draw everything as if you are working with that resolution.
To quote the official documentation:
This is nice in that you can change the logical rendering size to achieve various effects, but the primary use is this: instead of trying to make the system work with your rendering size, we can now make your rendering size work with the system. On my 1920x1200 monitor, this app thinks it's talking to a 640x480 resolution now, but SDL is using the GPU to scale it up to use all those pixels. Note that 640x480 and 1920x1200 aren't the same aspect ratio: SDL takes care of that, too, scaling as much as possible and letterboxing the difference.
Please note that you can control how the scaling happens with a number of hints, for example SDL_HINT_RENDER_SCALE_QUALITY.
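A minimal sketch of that approach (the window size and hint value are example choices, not requirements):

#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   window   = SDL_CreateWindow("Game",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1920, 1200, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

    // Ask for linear filtering before textures are created (optional).
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");

    // Draw as if the screen were 640x480; SDL scales and letterboxes for us.
    SDL_RenderSetLogicalSize(renderer, 640, 480);

    // ... render loop using 640x480 coordinates ...

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}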
Yes, you can bump the resolution up. It would make your sprites look smaller if you're using fullscreen mode, just like when you adjust your monitor's resolution. In windowed mode, the window would get bigger but the sprites would look the same.
You can also resort to a virtual resolution, as @skypjack mentioned. One approach is to use SDL_RenderSetLogicalSize(), another is to render to a render target.
You can also resize the sprites, either at runtime (specify a target rect that is smaller than the source - see the sketch after this answer) or offline (using a batch image processing tool like ACDSee or Photoshop).
In the end you'd have more space to fill.
My advice is, when you're working with graphics, have a target resolution in mind, for example 1280x720 or 800x600, and design all the assets around that resolution. You will then have assets that fit well together without resorting to programming tricks. If you later run into devices with a different resolution, you can use the 'virtual resolution' I mentioned above to fix it. There are a few tricks like safe frames and letterboxing that would be needed too, but let's leave those as an advanced topic :-).
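To illustrate the runtime-resizing option above, a rough sketch; the renderer, the texture, and the sizes are assumed, not prescribed:

#include <SDL.h>

// Sketch: draw a 256x256 sprite into a 128x128 destination rect so it shows
// up at half its native size.
void DrawHalfSize(SDL_Renderer* renderer, SDL_Texture* texture)
{
    SDL_Rect src = { 0, 0, 256, 256 };
    SDL_Rect dst = { 100, 100, 128, 128 };
    SDL_RenderCopy(renderer, texture, &src, &dst);
}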

Does "Design Resolution Size" affect how sprites created using OpenGL renders them on screen?

In cocos2d-x, there is the concept of a "Design Resolution Size", which lets you pick the appropriate asset depending on the size of the screen and apply the appropriate content scaling factor.
Here is the problem:
I draw a 2D sine curve by passing in a set of vertices. These vertices are computed for a 480x320 screen.
What happens when I run it on a device with a resolution of 1920x1200, even though the design resolution is set to 480x320? Do I have to recompute the vertices so that the same number of crests/troughs is seen on the higher-resolution device, or is there some way to do this without extra computation?
I don't have any more devices to test this on, so I don't know how to figure it out.
EDIT: I now use cocos2d-x v3.
Anything drawn directly/solely with OpenGL will bypass/ignore any and all cocos2d internal settings and code paths, such as design resolution.
You can always use a simulator to test your resolution-specific code. For iOS the simulator comes with Xcode, for Android use the Emulator.
After purchasing and testing this on a number of devices with cocos2d-x v3, I find that scaling is handled automatically. Regardless of the resolution of the actual device, my OpenGL draw commands seem to produce the same output everywhere. It appears that cocos2d-x does something internally so that things look the same on all devices.
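For reference, the usual cocos2d-x v3 way to declare the design resolution looks roughly like this; it lives in the project's AppDelegate, and SHOW_ALL is just one possible policy (others crop or stretch instead of letterboxing):

#include "cocos2d.h"
USING_NS_CC;

// Sketch of the standard startup; the rest of applicationDidFinishLaunching
// (creating and running the first scene) is omitted.
bool AppDelegate::applicationDidFinishLaunching()
{
    auto director = Director::getInstance();
    auto glview = director->getOpenGLView();
    if (!glview) {
        glview = GLViewImpl::create("My Game");
        director->setOpenGLView(glview);
    }
    // Author everything against 480x320; cocos2d-x scales to the real device,
    // adding letterbox bars with SHOW_ALL when the aspect ratios differ.
    glview->setDesignResolutionSize(480, 320, ResolutionPolicy::SHOW_ALL);
    return true;
}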

Rounding errors when scaling the rendered output of the rich edit control via EM_FORMATRANGE

I am using the EM_FORMATRANGE message to render the output of a rich text control to an arbitrary device context. However, when rendering to a bitmap, the dots-per-inch of the bitmap's device context is the same as the display device's DPI, which is 96 dots-per-inch. This is much lower than what I would like to render to. I'd rather render at a much higher DPI so that the user can zoom in, and perhaps print on a high-DPI printer later.
I suspect what happens is that the RTF control calls GetDeviceCaps with LOGPIXELSX and LOGPIXELSY to get the number of pixels per inch of the device. It then renders the document using this DPI value at a 100% zoom level. Windows display devices always return a value of 96 DPI, unless large fonts are being used on the system (as set in Control Panel) and the application is DPI-aware.
Many examples on the Internet propose scaling the output of EM_FORMATRANGE. This is so that any arbitrary DPI resolution can be achieved. Most examples generally involve using SetMapMode, SetWindowExtEx, and SetViewportExtEx (e.g. see http://social.msdn.microsoft.com/Forums/en-us/netfxbcl/thread/37fd1bfb-f07b-421d-9b5e-5f4492ffbbc3). These functions can be used to scale the rich text control's rendered output: for example, if I specify 400% scaling, then if the rich text control rendered something that was 5 pixels wide, it would actually become 20 pixels wide.
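For concreteness, here is a minimal sketch of the scaling setup those examples describe, assuming a 400% target; hwndRichEdit is the rich edit window and hdc is the bitmap (or other) device context being rendered into:

#include <windows.h>
#include <richedit.h>

// Sketch only: render the whole document at 400% by letting GDI map
// 100 logical units onto 400 device units.
void RenderAt400Percent(HWND hwndRichEdit, HDC hdc, RECT rcTwips)
{
    SetMapMode(hdc, MM_ANISOTROPIC);
    SetWindowExtEx(hdc, 100, 100, nullptr);    // logical extent
    SetViewportExtEx(hdc, 400, 400, nullptr);  // device extent: 4x scale

    FORMATRANGE fr = {};
    fr.hdc = fr.hdcTarget = hdc;
    fr.rc = fr.rcPage = rcTwips;               // rectangles are in twips (1/1440 inch)
    fr.chrg.cpMin = 0;
    fr.chrg.cpMax = -1;                        // format all of the text

    SendMessage(hwndRichEdit, EM_FORMATRANGE, TRUE, (LPARAM)&fr);
    SendMessage(hwndRichEdit, EM_FORMATRANGE, FALSE, 0);  // free cached information
}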
Unfortunately, the old GDI functions use integers instead of floating point numbers. For example, suppose the RTF control decided that an element should be drawn at (12.7, 15.3) pixels. This would be rounded to a position of (13, 15). These rounded coordinates are passed to GDI, which then scales the image up using the scaling specified by SetMapMode: for the example of 400%, it would be (13*4, 15*4), or (52, 60). But this is not accurate: the element would have been better placed at (12.7*4, 15.3*4), or (51, 61). The worst part is that in some cases the error becomes cumulative.
I believe this is the underlying cause of this very noticeable error when scaling some simple text:
The first example (a screenshot of my output) is 8 point Segoe UI, scaled to 400% using EM_FORMATRANGE and SetMapMode on a 96 DPI display device context. The text has effectively become 32 point, but the space between characters is too wide and looks unnatural.
The second example was created in WordPad by entering the text as 8 point Segoe UI and then using the zoom control to set a 400% zoom level. The space between characters looks normal. The exact same result is achieved with a 32 point font at a 100% zoom level.
To work around this issue, I have tried the following. In each case, the result has been identically unsatisfactory when scaled to 400%:
* Using a scaling transform set via SetWorldTransform instead of the scaling done with SetMapMode, SetWindowExtEx, etc.
* Passing the device context of a metafile to EM_FORMATRANGE, and then scaling the metafile later.
* Using SetMapMode to scale in conjunction with rendering to a metafile, and then showing the metafile later without scaling.
I believe the results are always unsatisfactory because the problem boils down to the fact that the rich edit control is rounding to the nearest integer and rendering to what it thinks is a 96 DPI device - ignoring the transforms in place. I looked into the metafile format and what I discovered is that the individual character positions are actually stored in the metafile at pixel-level resolution - that's why scaling the metafile obviously didn't work since the rounding has already happened by that point.
I can think of two real solutions that would work around this issue:
* Use a device context with a higher user-specified dots per inch, so that GetDeviceCaps returns different values. (Note: some examples propose using a printer device since printers generally have a higher DPI, but I want my code to work on systems that don't have a printer and to be able to render to an off-screen buffer.)
* Find some way to tell the rich edit control to assume the device context has a different dots per inch than GetDeviceCaps reports.
Anything else seems like it would still be subject to these rounding errors.
Does anyone (1) have an idea of how to implement either of the solutions I have proposed, or (2) have an alternate idea of how to achieve my goal of getting an accurate high-DPI output into a buffer?
I'm having the exact same problem.
A quick solution is to draw the text into a 100% scale bitmap and then just scale the bitmap.
It's not the best solution, but it might work for you.
Did you find any better solutions? If so, please share them here.
Also note that this problem also occurs when you draw the text to a 100% metafile and then scale the metafile to the screen - I believe this has something to do with the GDI text drawing functions not working well with scaling.
Roey
You could multiply the point size of all the text in the control by a factor of 4 and render the control to a bitmap that's 4 times larger.
If you're populating the control yourself, this would be quite straightforward. If you support arbitrary content entered by the user, it would be a lot more work and would require extra effort to handle anything that isn't text (e.g. embedded bitmaps).
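For the simple case where the whole control uses a single known font size, a rough sketch of that idea (assuming 8 pt text and a 4x factor; mixed sizes would have to be walked and scaled run by run):

#include <windows.h>
#include <richedit.h>

// Sketch: set every character to 32 pt (8 pt * 4) before rendering to the
// 4x-sized bitmap. yHeight is in twips: 20 twips = 1 point.
void ScaleSingleFontSizeBy4(HWND hwndRichEdit)
{
    CHARFORMAT2 cf = {};
    cf.cbSize = sizeof(cf);
    cf.dwMask = CFM_SIZE;
    cf.yHeight = 8 * 20 * 4;   // 8 pt, times 4
    SendMessage(hwndRichEdit, EM_SETCHARFORMAT, SCF_ALL, (LPARAM)&cf);
}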
I just spent two weeks on a similar problem. I needed a rich edit that was scalable for WYSIWYG editing. As we've found, the Windows rich edit control does not support scaling correctly with EM_FORMATRANGE: inter-character spacing does not change between zoom levels, and font sizes only scale in discrete steps.
Since I did not need large differences in scale, the solution I settled on was to use the windowless text edit interfaces (ITextServices) to render to an internal bitmap at a fixed resolution. I then used GDI+ to resample the internal bitmap to the needed screen size with trilinear filtering. The result emulated a scalable rich edit well enough as long as the scale difference was not too large; it was good enough for my needs.
After trying many different options, I am convinced you cannot get precise scaling with the Windows rich edit control. You could write your own control that renders text, but you would need a separate draw call for every piece of text with a different style, and you would need to handle all the niceties the rich edit handles for you: highlighting text, placing the cursor, handling mouse and keyboard input, parsing RTF, et cetera. It would probably be best just to buy a third-party component in this case (I could not find any suitable free open-source components). In case someone wants to attempt it, here are the relevant starting points for text rendering with the different APIs:
* GDI - TextOut does not set inter-character spacing correctly; you need GetCharacterPlacement and ExtTextOut, and you have to calculate the scaling yourself. You probably don't want to use GDI.
* GDI+ - DrawString handles scaling correctly. GDI+ is a reasonable option.
* DirectWrite - If you are willing to require the Vista Platform Update or later, DirectWrite is the newest text API from Microsoft.
Also, here is a link describing how text rendering differs between GDI and GDI+:
http://windowsclient.net/articles/gdiptext.aspx
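As a quick illustration of the GDI+ route mentioned above (this draws text directly, bypassing the rich edit control entirely; it assumes GDI+ has already been initialised with GdiplusStartup):

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

// Sketch: 8 pt Segoe UI drawn at 400% via a world transform; GDI+ applies the
// transform to glyph positions with sub-pixel precision.
void DrawScaledText(HDC hdc, const wchar_t* text)
{
    Graphics g(hdc);
    g.SetTextRenderingHint(TextRenderingHintAntiAlias);
    g.ScaleTransform(4.0f, 4.0f);
    Font font(L"Segoe UI", 8.0f);
    SolidBrush brush(Color(255, 0, 0, 0));
    g.DrawString(text, -1, &font, PointF(0.0f, 0.0f), &brush);
}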
Try using the EM_SETZOOM message to let the rich edit control scale the output itself.
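For example, a 4:1 numerator/denominator pair gives 400% (the ratio must lie between 1/64 and 64):

SendMessage(hwndRichEdit, EM_SETZOOM, 4, 1);   // zoom the control to 400%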

The legacy device context is too coarse

I have a Process Control system. It has a huge 2D workspace where all the logic is laid out.
The 2D workspace is a coordinate system.
You usually do not see the whole workspace at once, but rather some in-zoomed part of it focusing on some part of the controlled process. Such subsystem views are bookmarked into predefined named images (Power Generator1, Diesel Generator, Main lubrication pump etc).
This workspace interacts with many legacy MFC software components that individually contribute graphics onto the workspace (the device context is passed around to all contributors).
Now, one of the software components renders AutoCAD drawings onto the surface. However, the resolution of the device context is not sufficient for the details of this job. The device context logical resolution is unfortunately dictated by our own coordinate system, which at high zoom levels is quite different from the device units (pixels).
For example, a line drawn using
DC.MoveTo(1,1);
DC.LineTo(1,2);
.... will actually, even though it is drawn directly onto the device context with an increment of just one logical unit, cover quite some distance on the screen. The width of the line, however, would still be only one device pixel. A circle looks high-res, but its data (center point and radius) can only be given in coarse increments.
I have considered the following options:
* When a predefined image is loaded and displayed, create a device context with a better-suited resolution. The problem would then be that the other graphics providers interact with it using the old logical units, which, used against the new DC, would produce graphical elements that are far too small and displaced.
* Create some DC wrapper that accepts both kinds of coordinates through different APIs and translates them into high-res coordinates internally.
* Is it possible to have two DCs with different logical/device unit ratios and render them both to the screen?
I mentioned that a circle is rendered beautifully with a one-pixel-wide outline even though its placement and radius are restricted. Vertical lines are also rendered beautifully, even though the end points can only be given in coarse coordinates. This leads me to believe that it is technically possible to draw in an area that, in DC logical coordinates, could only be described with decimals.
Does anybody have any idea about what to do?
You need to scale your model, not the device context.
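A rough sketch of what that can look like in practice (the zoom factor, origin, and helper name are placeholders): keep the model in floating point and convert to device pixels yourself, so the DC stays at a 1:1 mapping and nothing is rounded to coarse logical units until the final pixel:

#include <afxwin.h>   // MFC CDC, assuming the existing MFC code base

// Hypothetical helper: model units -> device pixels at the current zoom.
static POINT ToDevice(double x, double y, double zoom, POINT origin)
{
    POINT p;
    p.x = origin.x + LONG(x * zoom + 0.5);
    p.y = origin.y + LONG(y * zoom + 0.5);
    return p;
}

void DrawModelLine(CDC& dc, double zoom, POINT origin)
{
    POINT a = ToDevice(1.0, 1.0, zoom, origin);  // the same (1,1)-(1,2) line as above,
    POINT b = ToDevice(1.0, 2.0, zoom, origin);  // but positioned with sub-unit precision
    dc.MoveTo(a.x, a.y);
    dc.LineTo(b.x, b.y);
}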
You could draw the high-definition image to another DC in a new window and place that window over your low-res drawing. Of course, you would have to handle clipping yourself.

How do I enlarge a picture so that it is 300 DPI?

The accepted answer to the question C++ Library for image recognition: images containing words to string recommended that you:
Upsize/Downsize your input image to 300 DPI.
How would I do this... I was under the impression that DPI was for monitors, not image formats.
I think the more accurate term here is resampling. You want a pixel resolution high enough to support accurate OCR. Font size (e.g. in points) is typically measured in units of length, not pixels. Since 72 points = 1 inch, we need 300/72 pixels-per-point for a resolution of 300 dpi ("pixels-per-inch"). That means a typical 12-point font has a height (or more accurately, base-line to base-line distance in single-spaced text) of 50 pixels.
Ideally, your source documents should be scanned at an appropriate resolution for the given font size, so that the font in the image is about 50 pixels high. If the resolution is too high/low, you can easily resample the image using a graphics program (e.g. GIMP). You can also do this programmatically through a graphics library, such as ImageMagick which has interfaces for many programming languages.
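To make the arithmetic above concrete, a tiny sketch (plain C++, no imaging library assumed):

#include <cstdio>

// pixels = points * dpi / 72, since there are 72 points per inch.
static int PixelHeightAtDpi(double points, double dpi)
{
    return static_cast<int>(points * dpi / 72.0 + 0.5);
}

int main()
{
    std::printf("%d\n", PixelHeightAtDpi(12.0, 300.0));  // prints 50
    return 0;
}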
DPI makes sense whenever you're relating an image in pixels to a physical device with a picture size. In the case of OCR, it usually means the resolution of the scan, i.e. how many pixels will you get for each inch of your scan. A 12-point font is meant to be printed at 12/72 inches per line, and an upper-case character might fill about 80% of that; thus it would be approximately 40 pixels tall when scanned at 300 DPI.
Many image formats have a DPI recorded in them. If the image was scanned, this should be the exact setting from the scanner. If it came from a digital camera, it always says 72 DPI, which is a default value mandated by the EXIF specification; this is because a camera can't know the original size of the image. When you create an image with an imaging program, you might have the opportunity to set the DPI to any arbitrary value. This is a convenience for you to specify how you want the final image to be used, and has no bearing on the detail contained in the image.
Here's a previous question that asks the details of resizing an image:
How do I do high quality scaling of a image?
OCR software is typically designed to work with "normal" font sizes. From an image point of view, this means that it will be looking for letters perhaps around the 30 to 100 pixel height range. Images of much higher resolution would produce letters that appear much too large for the OCR software to process efficiently. Similarly, images of lower resolution would not provide enough pixels for the software to recognise letters.
"How would I do this... I was under the impression that dpi was for monitors, not image formats."
DPI stands for dots per inch. What does it have to do with monitors? Well, we have a pixel made of three RGB subpixels. The higher the DPI, the more details you cram into that space.
DPI is a useful measurement for displays and prints, but it means nothing for the image formats themselves.
The reason DPI is tagged inside some formats is to instruct devices to display the image at that resolution, but from what I understand, virtually all of them ignore that instruction and do their best to optimize the image for a particular output.
You can change 72 dpi to 1 dpi or 6000 dpi in an image file and it won't make any difference whatsoever on a monitor. "Upsize/downsize to 300 dpi" makes no sense. Changing the DPI value does not resample the image either: try it in Photoshop, uncheck "Resample" when changing the DPI, and you'll see no difference whatsoever - the image will NOT get bigger or smaller.
DPI is totally meaningless for image formats, IMO.
If your goal is OCR, DPI makes sense as the number of dots in your image for each inch of the original scanned document. If your DPI is too low, the information is gone forever, and even bicubic interpolation is not going to do a brilliant job of recovering it. If your DPI is too high, it's easy to throw away bits.
To get the job done, I'm a big fan of the netpbm/pbmplus toolset; the tool to start with is pnmscale, although if you've got a bitmap you'll want to consider related tools such as pbmreduce.