GetSystemMetrics(SM_CYVIRTUALSCREEN) returning incorrect height? - c++

I'm attempting to get the width and height of the primary monitor via GetSystemMetrics. However, calling:
GetSystemMetrics(SM_CYVIRTUALSCREEN)
is returning a value of 1018, rather than the actual vertical resolution, which is 1080.
Now, I thought maybe I misunderstood the docs, so I tried calling
SystemParametersInfo(SPI_GETWORKAREA)
to see if maybe that was actually the one that gave the full screen. But it does exactly what it describes and returns the working area of the screen (total_height - taskbar_height), which in my case is 1040 pixels (1080 - 40 for the taskbar).
So, I'm a bit stumped. Where is 1018 coming from? What's causing it to be off by 62 pixels?

GetSystemMetrics(SM_CYSCREEN) should do the job.
As per MSDN, this is equal to GetDeviceCaps(hdcPrimaryMonitor, VERTRES), which might be what you really want.
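A minimal sketch (a console program, assuming a single monitor and a DPI-aware process) showing both of the suggested calls side by side:

#include <windows.h>
#include <cstdio>

int main()
{
    // Primary monitor size in pixels (not the virtual desktop, not the work area).
    int cx = GetSystemMetrics(SM_CXSCREEN);
    int cy = GetSystemMetrics(SM_CYSCREEN);

    // Equivalent query through the primary display's device context.
    HDC hdc = GetDC(NULL);
    int horzRes = GetDeviceCaps(hdc, HORZRES);
    int vertRes = GetDeviceCaps(hdc, VERTRES);
    ReleaseDC(NULL, hdc);

    printf("SM_CXSCREEN x SM_CYSCREEN: %d x %d\n", cx, cy);
    printf("HORZRES x VERTRES:         %d x %d\n", horzRes, vertRes);
    return 0;
}

Both calls report the primary monitor's full resolution, independent of the taskbar.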

Related

Incorrect metrics and sizes of font created by CreateFont()

I'm trying to render a font into a bitmap using WinAPI, but I can't get the font sizes I need.
Here's how the font is initialized:
HDC dc = ::CreateCompatibleDC(NULL);
::SetMapMode(dc, MM_TEXT);
::SetTextAlign(dc, TA_LEFT | TA_TOP | TA_UPDATECP);
int size_in_pixels = 18;
HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");
::SelectObject(dc, font);
::TEXTMETRICW tm = { 0 };
GetTextMetricsW(dc, &tm);
But after that I'm getting incorrect values from both GetGlyphOutlineW and GetTextMetricsW; it's not the size I passed as a parameter.
I know that it expects the value in logical units, but in MM_TEXT 1 unit should be 1 pixel, shouldn't it?
I expected CreateFontA to accept a point size when I pass a negative value (like here https://i.stack.imgur.com/tEt8J.png), but in fact that's wrong.
I tried brute-forcing values and found the proper parameter for a few sizes:
18px = -19; 36px = -39; 73px = -78;
Also, I tried the formula provided by Microsoft:
nHeight = -MulDiv(PointSize, GetDeviceCaps(hDC, LOGPIXELSY), 72);
But it also gives me a wrong result: the rendered text (using GetGlyphOutlineW) is larger if I measure it (for example, the height of 'j' should be exactly the size I passed).
The metrics from GetTextMetricsW are also wrong, for example tmAscent. I know that on Windows it includes internal leading, but even if I subtract tmInternalLeading from tmAscent it's still incorrect.
By the way, the values from GetCharABCWidthsW are correct, so a+b+c is the width of a glyph in pixels (while the documentation says it should be in logical units).
I should also mention DPI: usually I use a 125% scale in the Windows 10 settings, but I tried with 100% as well. Interestingly, ::GetDeviceCaps(dc, LOGPIXELSY) does not change with the scale I use; it's always 96.
Here's an example of CreateFontA(-128, ...) with the final atlas and metrics:
rendered atlas
Question #1: What should I do to pass wanted point size in pixels and receive glyphs in proper size with correct metrics in pixels?
Question #2: What strange units are all these functions using?
When you use ::SetMapMode(dc, MM_TEXT); the font size is specified in device pixels. A negative value excludes internal leading, so for the same absolute value the negative one produces a visually bigger font. If you want to get the same height from GetTextExtentPoint32 for different fonts, use positive values.
In your example with height -128 you are requesting a font whose height, after internal leading is excluded, is 128 pixels. The font mapper selects 143, which is correct for an internal leading of 15 pixels (128 + 15 = 143). tmAscent + tmDescent is also correct (115 + 28 = 143). You get what you specified.
You should take into account that the values in the text metrics don't state hard bounds. A designer can design a font so that its glyphs sometimes go beyond the guiding lines or don't reach them.
for example, the height of 'j' should be exactly the size I passed
The dot over 'j' can go beyond or not reach the top line if the designer finds it visually pleasing to design it that way.
Interestingly, ::GetDeviceCaps(dc, LOGPIXELSY) does not change with the scale I use; it's always 96.
Unless you log off and log back in, the system DPI doesn't change. For a per-monitor DPI aware application you have to get the DPI from the monitor parameters or cache the value given by WM_DPICHANGED.
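As a hedged sketch (assuming Windows 10 1607+ so GetDpiForWindow is available, and a window marked Per-Monitor DPI aware), caching the per-monitor DPI could look roughly like this:

#include <windows.h>

static UINT g_dpi = 96;  // cached DPI of the monitor the window currently sits on

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        g_dpi = GetDpiForWindow(hwnd);  // per-monitor DPI, not the (fixed) system DPI
        return 0;
    case WM_DPICHANGED:
        g_dpi = LOWORD(wParam);         // new DPI when the window moves to another monitor or the scale changes
        {
            // Resize to the rectangle Windows suggests for the new DPI.
            const RECT* rc = (const RECT*)lParam;
            SetWindowPos(hwnd, NULL, rc->left, rc->top,
                         rc->right - rc->left, rc->bottom - rc->top,
                         SWP_NOZORDER | SWP_NOACTIVATE);
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}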
Question #1: What should I do to pass wanted point size in pixels and receive glyphs in proper size with correct metrics in pixels?
I think you want a specific distance between the top and bottom lines, and this is exactly how you create the font: HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");. The problem lies in your assumption that the font's design lines are hard boundaries for each glyph, but the font's designer doesn't have to strictly align glyphs to these lines. If you want glyphs strictly aligned, there is probably no way to get that. Maybe check a different font.
Question #2: What strange units are all these functions using?
When the mapping mode is set to MM_TEXT, raw device pixels are used. A positive height specifies the height including tmInternalLeading; a negative height excludes it.
For positive value:
tmAscent + tmDescent = requestedHeight
For negative value:
tmAscent + tmDescent - tmInternalLeading = requestedHeight
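A small sketch (console program; Arial assumed to be installed) that prints the metrics for a positive and a negative height so both relations can be checked directly:

#include <windows.h>
#include <cstdio>

static void DumpMetrics(int height)
{
    HDC dc = CreateCompatibleDC(NULL);
    SetMapMode(dc, MM_TEXT);  // 1 logical unit = 1 device pixel

    HFONT font = CreateFontA(height, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                             DEFAULT_QUALITY, DEFAULT_PITCH | FF_DONTCARE, "Arial");
    HGDIOBJ old = SelectObject(dc, font);

    TEXTMETRICA tm = { 0 };
    GetTextMetricsA(dc, &tm);
    printf("height %4d: ascent=%ld descent=%ld internal leading=%ld\n",
           height, tm.tmAscent, tm.tmDescent, tm.tmInternalLeading);
    // Positive height: tmAscent + tmDescent == height
    // Negative height: tmAscent + tmDescent - tmInternalLeading == -height

    SelectObject(dc, old);
    DeleteObject(font);
    DeleteDC(dc);
}

int main()
{
    DumpMetrics(128);   // cell height (includes internal leading)
    DumpMetrics(-128);  // character height (excludes internal leading)
    return 0;
}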
Below I have pasted screenshots of different fonts showing that, depending on the selected font, glyphs may be designed so that they don't reach the top line or go beyond it, and in most cases the bottom line also isn't reached.
It seems that for your requirements Arial Unicode MS would be a better fit (but 'j' still doesn't reach where you want it).
Arial:
Arial Unicode MS
Input Mono
Trebuchet MS

Determine the size of a checkbox at arbitrary DPI on Windows 10

See here: How to get size of check and gap in check box?
It does not seem to quite answer the question as it applies to DPI.
I have tried several methods, but none yield the results of actual drawn checkboxes at various scale choices in Windows 10. The closest is
12 * GetDeviceCaps (LOGPIXELSX) / 96 + 1
This yields 22 pixels at 168 DPI, but Windows draws a 20-pixel checkbox.
Is there a reliable way to determine this? Below is a grid of results I captured, with greens being those that match the "on screen" values.
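Not an answer from the linked question, but one approach worth sketching (under the assumption that the themed check box part is what Windows actually draws; requires Windows 10 1703+ for OpenThemeDataForDpi, visual styles enabled, and linking uxtheme.lib) is to ask the theme engine for the true part size instead of scaling the classic 13-pixel metric by hand:

#include <windows.h>
#include <uxtheme.h>
#include <vsstyle.h>
#include <cstdio>
#pragma comment(lib, "uxtheme.lib")

SIZE GetCheckBoxSizeForDpi(UINT dpi)
{
    SIZE size = { 0, 0 };
    HTHEME theme = OpenThemeDataForDpi(NULL, L"Button", dpi);
    if (theme)
    {
        // TS_TRUE asks for the exact size of the drawn part, in pixels.
        GetThemePartSize(theme, NULL, BP_CHECKBOX, CBS_UNCHECKEDNORMAL,
                         NULL, TS_TRUE, &size);
        CloseThemeData(theme);
    }
    return size;
}

int main()
{
    for (UINT dpi = 96; dpi <= 192; dpi += 24)
    {
        SIZE s = GetCheckBoxSizeForDpi(dpi);
        printf("%u DPI -> %ld x %ld px\n", dpi, s.cx, s.cy);
    }
    return 0;
}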

OpenCV - Getting a part of an image

I want to get a part of one image into another image. There are several easy ways to do that, for example cv::Mat OutImage = Image(cv::Rect(7,47,1912,980)), but the resulting image is too large. For example:
I have an image with 1920 x 1024 pixels. I want to cut a cv::Rect(7,47,1912,980) from it. I would expect the resulting image to have the size (1912 - 7 = 1905) x (980 - 47 = 933) pixels, but it has 1912 x 980. It seems that OpenCV just cuts at the lower right side and keeps the upper left area.
The dimensions of the image are important, because in the next step I'd like to perform a subtraction, which is only valid if the Mat objects have the same dimensions. I also don't want to use a hand-written loop, because performance is very important.
Any ideas?
Regards,
Jan
It is actually cv::Rect(x,y,width,height), so you should set the last two parameters to your desired output width and height. Mind the range you set, or it will cause errors.
I have also dealt with this issue; I will just give my example here, it works well for me. You may also try this one.
Rect const box(100, 295, 400, 185); // the top-left corner is
                                    // (x, y) = (100, 295)
                                    // and the bottom-right corner is
                                    // (x + width, y + height) = (100 + 400, 295 + 185)
Mat ROI = frame(box);
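For completeness, a minimal sketch of the corner-to-width/height conversion described above (the file names are just placeholders), cropping the same region from two images so that a subtraction is valid:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat imageA = cv::imread("a.png");  // placeholder input files
    cv::Mat imageB = cv::imread("b.png");

    int x1 = 7, y1 = 47, x2 = 1912, y2 = 980;
    cv::Rect roi(x1, y1, x2 - x1, y2 - y1);  // width = 1905, height = 933

    cv::Mat partA = imageA(roi);  // no copy, just a view into imageA
    cv::Mat partB = imageB(roi);

    cv::Mat diff;
    cv::absdiff(partA, partB, diff);  // valid because both parts share roi.size()
    return 0;
}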

Marker and figure size in matplotlib : not sure how it works

I want to make a figure in which the markers' size depends on the size of the figure. That way, using square markers, no matter what resolution or figure size you choose, all the markers will touch each other, masking the background without overlapping. Here is where I am at:
The marker size is specified in pt^2, with 1 pt = 1/72 inch, the resolution in pixels per inch, and the figure size in pixels (plus the proportion the main subplot represents of the main figure size: 0.8). So, if my graph's limits are lim_min and lim_max, I should be able to get the corresponding marker size using:
marker_size = ((fig_size*0.8*72/Resolution)/(lim_max-lim_min))**2
because (fig_size*0.8*72/Resolution) is the size of the figure in points, and (lim_max-lim_min) is the number of markers I want to fill a line.
And that should do the trick!... Well, it doesn't... At all... The markers are so small they are invisible without zooming in. And I don't get why.
I understand this may not be the best way, and not the way you would do it, but I see no reason why it wouldn't work, so I want to understand where I am wrong.
PS : both my main figure and my subplot are squares
Edit :
Okay, so I found the reason for the problem, but not the solution. The problem is the confusion between ppi and dpi. Matplotlib sets the resolution in dpi, which is defined as a unit specific to a scanner or printer depending on the model (?!?).
Needless to say, I am extremely confused about the actual meaning of the resolution in matplotlib. It simply makes absolutely no sense to me. Please, someone help. How do I convert this to a meaningful unit? The matplotlib website seems to be completely silent on the matter.
If you specify the figure size in inches and matplotlib uses a resolution of 72 points per inch (ppi), then for a given number of markers the width of each marker should be size_in_inches * points_per_inch / number_of_markers points (assuming for now that the subplot uses the entire figure). As I see it, dpi is only used to display or save the figure at a size of size_in_inches * dpi pixels.
If I understand your goal correctly, the code below should reproduce the required behavior:
import numpy as np
import matplotlib.pyplot as pl

# Figure settings
fig_size_inch = 3
fig_ppi = 72
margin = 0.12
subplot_fraction = 1 - 2*margin
# Plot settings
lim_max = 10
lim_min = 2
n_markers = lim_max - lim_min
# Centers of each marker
xy = np.arange(lim_min + 0.5, lim_max, 1)
# Size of the marker, in points^2
marker_size = (subplot_fraction * fig_size_inch * fig_ppi / n_markers)**2
fig = pl.figure(figsize=(fig_size_inch, fig_size_inch))
fig.subplots_adjust(margin, margin, 1-margin, 1-margin, 0, 0)
# Create n_markers^2 colors
cc = pl.cm.Paired(np.linspace(0, 1, n_markers*n_markers))
# Plot each marker (I could/should have left out the loops...)
for i in range(n_markers):
    for j in range(n_markers):
        ij = i + j*n_markers
        pl.scatter(xy[i], xy[j], s=marker_size, marker='s', color=cc[ij])
pl.xlim(lim_min, lim_max)
pl.ylim(lim_min, lim_max)
This is more or less the same as you wrote (in the calculation of marker_size), except the division by Resolution has been left out.
Result:
Or when setting fig_ppi incorrectly to 60:

Can Silverlight Pivotviewer handle 3 levels of semantic zoom?

I can get 2 levels of PivotViewerItemTemplate to work just fine, but not three.
If I set one template at MaxWidth=130, the next at MaxWidth=400, and then a third with no MaxWidth, the second level starts transitioning into the third at about 170 pixels and is no longer visible at all at only 280 pixels. I expect to see the second level until it is 400 pixels wide.
Any tips on what I'm doing wrong here?
TIA
Your MaxWidths need to be powers of two: 32, 64, 128, etc. You can then have as many levels as there are powers :)
I've written a post explaining it in more detail here: http://www.rogernoble.com/2012/04/02/picking-maxwidth-for-pivotviewer-semantic-zoom