Coordinate error using GetWindowRect at different DPI - C++

I want to capture the coordinates of the components in my MFC program.
I can do this perfectly well using GetWindowRect.
However, when I set my Windows DPI scaling to 150%, I get different coordinates from GetWindowRect.
Therefore, I looked for a method to convert the new coordinates to the ones at the default DPI (96 DPI).
I found there is some error when I try:
Rect.top = Rect.top * (DEFAULT_DPIY / CURRENT_DPIY);
Rect.left = Rect.left * (DEFAULT_DPIX / CURRENT_DPIX);
The converted value is very close, but not equal.
Is there any method to convert it without error?

Your program is subject to DPI virtualization. The right way to deal with this is to make your program high-DPI aware, but that may well involve more changes than you are prepared to attempt.
If being high-DPI aware is not something you wish to tackle, then you can at least make your arithmetic better. Your code uses an integer divide, and performing the division before the multiplication loses precision. To minimise that inaccuracy, perform the division after the multiplication:
Rect.top = (Rect.top * DEFAULT_DPIY) / CURRENT_DPIY;
Rect.left = (Rect.left * DEFAULT_DPIX) / CURRENT_DPIX;
Of course, the parentheses could be omitted here without changing the meaning, but I think it's nice to be explicit about the ordering of the operations in this case.
Another option would be to use MulDiv, which rounds the result to the nearest integer instead of truncating:
Rect.top = MulDiv(Rect.top, DEFAULT_DPIY, CURRENT_DPIY);
Rect.left = MulDiv(Rect.left, DEFAULT_DPIX, CURRENT_DPIX);
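If you do later decide to make the program DPI aware, here is a minimal sketch of the programmatic opt-in (a manifest declaration is the usual route; this assumes Vista or later, and the call must run before any window is created):
#include <windows.h>

int APIENTRY wWinMain(HINSTANCE, HINSTANCE, LPWSTR, int)
{
    // Opt in before any window is created; afterwards GetWindowRect
    // reports physical pixels and the system stops virtualizing them.
    if (!SetProcessDPIAware())
    {
        // Awareness was already set (e.g. via the manifest) or the call failed.
    }
    // ... create windows and run the message loop as usual ...
    return 0;
}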

Related

Incorrect metrics and sizes of font created by CreateFont()

I'm trying to render a font into a bitmap using WinAPI, but I can't get the font sizes I need.
Here's how the font is initialized:
HDC dc = ::CreateCompatibleDC(NULL);
::SetMapMode(dc, MM_TEXT);
::SetTextAlign(dc, TA_LEFT | TA_TOP | TA_UPDATECP);
int size_in_pixels = 18;
HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");
::SelectObject(dc, font);
::TEXTMETRICW tm = { 0 };
GetTextMetricsW(dc, &tm);
But after that I get incorrect values from both GetGlyphOutlineW and GetTextMetricsW; it's not the size I passed as a parameter.
I know that it expects a value in logical units, but in MM_TEXT 1 unit should be 1 pixel, shouldn't it?
I expected that CreateFontA accepts a point size when I pass a negative value (like here: https://i.stack.imgur.com/tEt8J.png), but in fact that's wrong.
I tried brute-forcing values and found the proper parameter for a few sizes:
18px = -19; 36px = -39; 73px = -78;
Also I tried the formula provided by Microsoft:
nHeight = -MulDiv(PointSize, GetDeviceCaps(hDC, LOGPIXELSY), 72);
But it also gives me a wrong result: the rendered text (measured with GetGlyphOutlineW) is larger than it should be (for example, the height of 'j' should be exactly the size I passed).
The metrics from GetTextMetricsW are also wrong, for example tmAscent. I know that on Windows it includes internal leading, but even if I subtract tmInternalLeading from tmAscent it's still incorrect.
By the way, the values from GetCharABCWidthsW are correct, so a+b+c is the width of the glyph in pixels (while the documentation says it should be in logical units).
I should also mention DPI: I usually use 125% scaling in the Windows 10 settings, but I tried 100% as well. Interestingly, ::GetDeviceCaps(dc, LOGPIXELSY) does not change with the scale I use; it's always 96.
Here's an example of CreateFontA(-128, ...) with the final atlas and metrics:
[rendered atlas]
Question #1: What should I do to pass wanted point size in pixels and receive glyphs in proper size with correct metrics in pixels?
Question #2: What the strange units all these functions are using?
When you use ::SetMapMode(dc, MM_TEXT);, the font size is specified in device pixels. A negative value excludes the internal leading, so for the same absolute value negative heights produce visually bigger fonts. If you want to get the same height from GetTextExtentPoint32 for different fonts, use positive values.
In your example with -128 as the height, you are requesting a font whose height, after excluding internal leading, is 128 pixels. The font mapper selects 143, which is correct for an internal leading of 15 pixels (128 + 15 = 143). tmAscent + tmDescent is also correct (115 + 28 = 143). You get what you specified.
You should take into account that the values in the text metrics don't state hard bounds. A designer can design a font so that its glyphs sometimes go beyond the guide lines or don't reach them.
for example height of 'j' should have exact size that I passed
The dot over 'j' can go beyond the top line, or not reach it, if the designer finds it visually plausible to design it that way.
interesting that ::GetDeviceCaps(dc, LOGPIXELSY) not changing with scale I using, it's always 96
Unless you log off and log back in, the system DPI doesn't change. For a per-monitor-DPI-aware application you have to get the DPI from the monitor parameters or cache the value delivered by WM_DPICHANGED.
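For illustration, a minimal sketch of handling that message inside a window procedure (g_dpiY is a hypothetical cached value; the suggested new window rectangle arrives in lParam):
case WM_DPICHANGED:
{
    g_dpiY = HIWORD(wParam);   // new DPI for the monitor the window is on
    const RECT* suggested = (const RECT*) lParam;
    // Move/resize to the rectangle the system suggests for the new DPI.
    SetWindowPos(hWnd, NULL,
                 suggested->left,
                 suggested->top,
                 suggested->right - suggested->left,
                 suggested->bottom - suggested->top,
                 SWP_NOZORDER | SWP_NOACTIVATE);
    return 0;
}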
Question #1: What should I do to pass wanted point size in pixels and receive glyphs in proper size with correct metrics in pixels?
I think you want a specific distance between the top and bottom lines, and that is exactly what you get from HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");. The problem lies in your assumption that the font design lines are hard boundaries for each glyph; the font's designer doesn't have to strictly align glyphs to these lines. If you want glyphs strictly aligned, there is probably no way to get it. Maybe check a different font.
Question #2: What the strange units all these functions are using?
When the mapping mode is set to MM_TEXT, raw device pixels are used. A positive height specifies the height including tmInternalLeading; a negative one excludes it.
For a positive value:
tmAscent + tmDescent = requestedHeight
For a negative value:
tmAscent + tmDescent - tmInternalLeading = requestedHeight
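For illustration, a minimal sketch that checks this relation for the -128 case (memory DC and Arial assumed; error handling omitted):
#include <windows.h>
#include <cstdio>

int main()
{
    HDC dc = CreateCompatibleDC(NULL);   // memory DC
    SetMapMode(dc, MM_TEXT);             // one logical unit == one device pixel

    const int requested = 128;           // negative height => excludes internal leading
    HFONT font = CreateFontW(-requested, 0, 0, 0, FW_NORMAL,
                             FALSE, FALSE, FALSE, DEFAULT_CHARSET,
                             OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                             DEFAULT_QUALITY, DEFAULT_PITCH, L"Arial");
    HGDIOBJ old = SelectObject(dc, font);

    TEXTMETRICW tm = {};
    GetTextMetricsW(dc, &tm);

    // Per the relation above: tmAscent + tmDescent - tmInternalLeading == requested
    printf("ascent=%ld descent=%ld internalLeading=%ld -> %ld (requested %d)\n",
           tm.tmAscent, tm.tmDescent, tm.tmInternalLeading,
           tm.tmAscent + tm.tmDescent - tm.tmInternalLeading, requested);

    SelectObject(dc, old);
    DeleteObject(font);
    DeleteDC(dc);
    return 0;
}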
Below I have pasted screenshots with different fonts, showing that depending on the selected font, glyphs may be designed so they don't reach the top line or go beyond it; the bottom line in most cases also isn't reached.
It seems that for your requirements Arial Unicode MS would be a better fit (but 'j' still doesn't reach where you want it).
[screenshots: Arial, Arial Unicode MS, Input Mono, Trebuchet MS]

pygame window resize without changing anything in the content

I am wondering if I can change the window size and scale everything inside it by about 50%. I would love to keep all the logic behind the program still valid (for example, where pygame would register a click at 1000 px, after the change it would be at 500 px). I don't think there is an easy answer...
There are two different ways you could go about this. If you want to pick a screen size and have it stay the same until you decide to change it on the backend, you can go to each call of pygame.blit(foo) or pygame.draw.shape(foobar) and scale the dimensions of foo and foobar by 50%. That's the simple way, but it has some drawbacks. For example, if you ever want to change the screen size again, you'll have to do it all over again. Certainly not ideal.
If you want the screen to be resizable by the user, or if you want greater flexibility in choosing your own screen size, you need formulas that determine where to draw something based on the screen size. For example, let's say you're making a chess game. You could have something like this:
SCREEN_WIDTH = 500
TILE_WIDTH = SCREEN_WIDTH // 8
Then your drawBoard function could look like:
for i in range(8):
    for j in range(8):
        ....
        # color = black or white depending on which tile we're drawing
        pygame.draw.rect(mainWindow, color, (i * TILE_WIDTH, j * TILE_WIDTH, TILE_WIDTH, TILE_WIDTH))
Now, if you want a larger screen, your tiles will scale up with it. Or, to make it a little more complicated, let's say you want a buffer between the chessboard and the window edge. Then you could do:
SCREEN_WIDTH = 500
BUFFER = 20
TILE_WIDTH = (SCREEN_WIDTH - BUFFER * 2) // 8
And your drawBoard function could look like:
for i in range(8):
    for j in range(8):
        ....
        # color = black or white depending on which tile we're drawing
        pygame.draw.rect(mainWindow, color, (BUFFER + i * TILE_WIDTH, BUFFER + j * TILE_WIDTH, TILE_WIDTH, TILE_WIDTH))
This way is harder in the short term, but totally worth it in the long run. You'll also have to decide whether you want everything to resize, whether there's a min/max size, and a couple of other things, but I'm sure you can figure that out.

FreeType 2: Is there a way to change the size of the font in pixels after you have loaded it?

So I'm playing around with creating a simple game engine in C++. I needed to render some text, so I used this tutorial (http://learnopengl.com/#!In-Practice/Text-Rendering) for guidance. It uses the FreeType 2 library.
Everything works great and the text renders as it should. But now that I'm fleshing out the UI and creating labels, I would like to be able to change the size of the text. I can do so by scaling the text, but I would prefer to specify the size in pixels.
Here you can see the scaling in action:
GLfloat xpos = x + ch.Bearing.x * scale;
GLfloat ypos = y + linegap + (font.Characters['H'].Bearing.y - ch.Bearing.y) * scale;
GLfloat w = ch.Size.x * scale;
GLfloat h = ch.Size.y * scale;
So in my renderText method I just pass a scale variable and it scales the text. But I would prefer to use pixels, as that is more user friendly. Is there any way I could do this in FreeType 2, or am I stuck with a scale variable?
Assuming you don't want to regenerate the glyphs at a different resolution, but instead want to specify the scale in pixels rather than as a ratio (i.e. you want to say scale = 14 pixels instead of scale = 29%), you can do the following: save the height value you passed to FT_Set_Pixel_Sizes (48 in the tutorial). Now if you want a 14-pixel render, just divide 14 by that number, i.e. scale = 14.0f / 48.0f. That gives you the scaling needed to render at a 14-pixel size from a font that was originally generated with a 48-pixel height.
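As a sketch, you could wrap the conversion in a small helper (the name is illustrative, not part of FreeType):
// Illustrative helper: convert a desired pixel height into the scale
// factor the tutorial's render function expects.
float ScaleForPixelHeight(float desiredPixels, float loadedPixelHeight)
{
    // E.g. glyphs loaded with FT_Set_Pixel_Sizes(face, 0, 48):
    // ScaleForPixelHeight(14.0f, 48.0f) == ~0.29f, i.e. a 14-pixel render.
    return desiredPixels / loadedPixelHeight;
}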
You might also want to play with your OpenGL texture filters or mipmapping when you do this, to improve the results. Additionally, fonts sometimes carry low-resolution pixel hinting, which helps them render clearly at small sizes; unfortunately that hinting information is lost (not used) when you generate a high-resolution texture and then scale it down to a smaller render size, so the result might not look as crisp as you'd like.

Linear movement stutter

I have created simple, frame-rate-independent, variable-time-step, linear movement in Direct3D 9 using ID3DXSprite. Most users can't notice it, but on some computers (including mine) it happens often, and sometimes it stutters a lot.
The stuttering occurs with VSync enabled and disabled.
I found that the same happens with an OpenGL renderer.
It's not a floating-point problem.
The problem seems to exist only in Aero transparent-glass windowed mode (it's fine, or at least much less noticeable, in full screen, in a borderless full-screen window, or with Aero disabled), and it's even worse when the window loses focus.
EDIT:
The frame delta time doesn't leave the 16-17 ms range even when the stuttering occurs.
It seems my frame-delta-time logging code was bugged. I have fixed it now.
Normally, with VSync enabled, a frame takes 17 ms to render, but sometimes (probably when the stuttering happens) it jumps to 25-30 ms.
(I dump the log only once, at application exit, not while running and rendering, so it does not affect performance.)
device->Clear(0, 0, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 255, 255, 255), 0, 0);
device->BeginScene();
sprite->Begin(D3DXSPRITE_ALPHABLEND);
QueryPerformanceCounter(&counter);
float time = counter.QuadPart / (float) frequency.QuadPart;
float deltaTime = time - currentTime;
currentTime = time;
position.x += velocity * deltaTime;
if (position.x > 640)
velocity = -250;
else if (position.x < 0)
velocity = 250;
position.x = (int) position.x;
sprite->Draw(texture, 0, 0, &position, D3DCOLOR_ARGB(255, 255, 255, 255));
sprite->End();
device->EndScene();
device->Present(0, 0, 0, 0);
Fixed the timer thanks to Eduard Wirch and Ben Voigt (although it doesn't fix the initial problem):
float time()
{
    static LARGE_INTEGER start = {0};
    static LARGE_INTEGER frequency;
    if (start.QuadPart == 0)
    {
        QueryPerformanceFrequency(&frequency);
        QueryPerformanceCounter(&start);
    }
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);
    return (float) ((counter.QuadPart - start.QuadPart) / (double) frequency.QuadPart);
}
EDIT #2:
So far I have tried three update methods:
1) Variable time step:
x += velocity * deltaTime;
2) Fixed time step:
x += 4;
3) Fixed time step + interpolation:
accumulator += deltaTime;
float updateTime = 0.001f;
while (accumulator > updateTime)
{
    previousX = x;
    x += velocity * updateTime;
    accumulator -= updateTime;
}
float alpha = accumulator / updateTime;
float interpolatedX = x * alpha + previousX * (1 - alpha);
All three methods behave pretty much the same. The fixed time step looks better, but depending on the frame rate is not really an option, and it doesn't solve the problem completely (it still jumps/stutters from time to time, though rarely).
So far, disabling Aero transparent glass or going full screen is the only change with a significant positive effect.
I am using the latest NVIDIA driver (GeForce 332.21) on Windows 7 x64 Ultimate.
Part of the solution was a simple data-type precision problem. Replace the speed calculation with a constant, and you'll see extremely smooth movement. Analysing the calculation showed that you're storing the result of QueryPerformanceCounter() in a float. QueryPerformanceCounter() returns a number that looks like this on my computer: 724032629776. That number requires at least 5 bytes to store, but a float uses 4 bytes (and only 24 bits for the actual mantissa). So precision is lost when you convert the result of QueryPerformanceCounter() to float, and sometimes this leads to a deltaTime of zero, causing stuttering.
This partly explains why some users do not experience the problem: it all depends on whether the result of QueryPerformanceCounter() fits into a float.
The solution for this part of the problem is: use double (or, as Ben Voigt suggested, store the initial performance counter and subtract it from new values before converting to float. That at least gives you more headroom, but you might eventually hit the float resolution limit again when the application runs for a long time, depending on how fast the performance counter grows).
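A tiny sketch demonstrating the loss (the counter value is the example from above):
#include <cstdio>

int main()
{
    // 724032629776 needs about 40 significant bits; a float's 24-bit
    // mantissa cannot represent it exactly, a double's 53-bit one can.
    long long counter = 724032629776LL;
    float f = (float) counter;
    double d = (double) counter;
    printf("float : %.0f\n", f);   // prints a rounded value
    printf("double: %.0f\n", d);   // prints 724032629776 exactly
    return 0;
}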
After fixing this, the stuttering was much reduced but did not disappear completely. Analysing the runtime behaviour showed that a frame is skipped now and then. The application's GPU command buffer is flushed by Present, but the present command remains in the application context queue until the next vsync (even though Present was invoked long before vsync, about 14 ms earlier). Further analysis showed that a background process (f.lux) told the system to set the gamma ramp once in a while. This command required the complete GPU queue to run dry before it was executed, probably to avoid side effects. That GPU flush was started just before the present command was moved to the GPU queue, and the system blocked video scheduling until the GPU ran dry, which took until the next vsync. So the present packet was not moved to the GPU queue until the next frame. The visible effect: stutter.
It's unlikely that you're running f.lux on your computer too, but you're probably experiencing a similar background intervention. You'll need to look for the source of the problem on your own system. I've written a blog post about how to do that: Diagnose frame skips and stutter in DirectX applications. You'll also find the whole story of diagnosing f.lux as the culprit there.
But even if you find the source of your frame skips, I doubt you'll achieve a stable 60 fps while DWM window composition is enabled. The reason is that you're not drawing to the screen directly; instead, you draw to a shared surface owned by DWM. Since it's a shared resource, it can be locked by others for an arbitrary amount of time, making it impossible to keep your application's frame rate stable. If you really need a stable frame rate, go full screen, or disable window composition (on Windows 7; Windows 8 does not allow disabling window composition):
#include <dwmapi.h>
...
HRESULT hr = DwmEnableComposition(DWM_EC_DISABLECOMPOSITION);
if (!SUCCEEDED(hr)) {
    // log message or react in a different way
}
I took a look at your source code and noticed that you only process one window message per frame. This caused stuttering for me in the past.
I would recommend looping on PeekMessage until it returns zero, indicating that the message queue is exhausted, and only rendering a frame after that.
So change:
if (PeekMessageW(&message, 0, 0, 0, PM_REMOVE))
to
while (PeekMessageW(&message, 0, 0, 0, PM_REMOVE))
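For illustration, a minimal sketch of such a pump (RenderFrame is a hypothetical stand-in for your per-frame drawing code):
MSG message;
bool running = true;
while (running)
{
    // Drain every pending message before drawing...
    while (PeekMessageW(&message, 0, 0, 0, PM_REMOVE))
    {
        if (message.message == WM_QUIT)
            running = false;
        TranslateMessage(&message);
        DispatchMessageW(&message);
    }
    // ...then render exactly one frame.
    RenderFrame();   // hypothetical per-frame render function
}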
Edit:
I compiled and ran your code (with another texture) and it displayed the movement smoothly for me. I don't have Aero though (Windows 8).
One thing I noticed: you set D3DCREATE_SOFTWARE_VERTEXPROCESSING. Have you tried D3DCREATE_HARDWARE_VERTEXPROCESSING instead?

Set DPI for QImage

I'm drawing text using QPainter on a QImage, and then saving it to TIFF.
I need to increase the DPI to 300, which should make the text bigger in terms of pixels (for the same point size).
You can try using QImage::setDotsPerMeterX() and QImage::setDotsPerMeterY(). DPI means "dots per inch", and 1 inch equals 0.0254 meters, so you can convert to dots per meter (dpm):
int dpm = 300 / 0.0254; // ~300 DPI
image.setDotsPerMeterX(dpm);
image.setDotsPerMeterY(dpm);
It's not going to be exactly 300 DPI (it's actually 299.9994), since the functions only accept integral values. But for all intents and purposes it's good enough (299.9994 vs 300 is quite close, I'd say).
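Putting it together, a minimal sketch under these assumptions (canvas size, font, and file name are illustrative; TIFF output depends on Qt's image format plugins):
#include <QImage>
#include <QPainter>
#include <QFont>

int main()
{
    // Canvas size is illustrative (roughly A4 at 300 DPI).
    QImage image(2480, 3508, QImage::Format_ARGB32);

    // Set the density *before* painting so point-sized fonts
    // rasterize at ~300 DPI instead of the default.
    const int dpm = qRound(300 / 0.0254);   // dots per meter for ~300 DPI
    image.setDotsPerMeterX(dpm);
    image.setDotsPerMeterY(dpm);
    image.fill(Qt::white);

    QPainter painter(&image);
    painter.setFont(QFont("Arial", 12));    // 12 pt is ~50 px tall at 300 DPI
    painter.drawText(100, 300, "Hello at 300 DPI");
    painter.end();

    image.save("out.tiff");                 // needs Qt's imageformats plugin
    return 0;
}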
There are 39.37 inches in a meter. So:
Setting:
qimage.setDotsPerMeterX(xdpi * 39.37);
qimage.setDotsPerMeterY(ydpi * 39.37);
Getting:
xdpi = qimage.dotsPerMeterX() / 39.37;
ydpi = qimage.dotsPerMeterY() / 39.37;