What units does QSvgGenerator operate in? - c++

In the official documentation for QSvgGenerator there are two properties that relate to the size of the output. One is size and the other is resolution.
Resolution is set in DPI (Dots Per Inch), but which unit is size in?
I have tested multiple values now and none make sense to me when inspecting the output file.

size is in pixels. If you treat size as pixels, the width and height attributes in the output SVG come out as approximately the right number of millimetres, because the generator converts pixels to mm using the resolution (DPI) setting.
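For concreteness, a minimal sketch (assuming the QtSvg module; the file name is made up) showing how the two properties interact: size is given in pixels, resolution in DPI, and together they determine the physical width/height written into the SVG:

// Minimal sketch: size in pixels, resolution in DPI. With 960 px at 96 DPI
// the generator should write a physical width of 10 in (254 mm) into the SVG.
#include <QGuiApplication>
#include <QPainter>
#include <QSvgGenerator>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QSvgGenerator generator;
    generator.setFileName("out.svg");              // hypothetical output path
    generator.setSize(QSize(960, 540));            // pixels
    generator.setViewBox(QRect(0, 0, 960, 540));   // user units for painting
    generator.setResolution(96);                   // DPI used for the px -> mm conversion

    QPainter painter(&generator);                  // painting writes the SVG
    painter.drawRect(10, 10, 100, 100);
    painter.end();
    return 0;
}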

Related

How do I get the logical to device ratio from an EMF (Enhanced MetaFile)?

I have closely studied the MS documentation on EMF files and from the definitions for the 3 header types I can't see how to convert from logical coords (which the graphics records coords are stored as) to device coords. The header has a Frame part that specifies the page size surrounding (but not necessarily bounding) the composite image in 0.01mm units; and a Bounds part that specifies the actual bounds of the composite image in logical units. And finally there are the Device and Millimeters parts that specify the size of the recording device.
From these, there seems to be no way to calculate the ratio needed to convert from logical coords to device coords.
I must be missing something simple :-)
Think I sussed it: you use these records (the logical-to-device mapping they imply is sketched after the list):
EMR_SETVIEWPORTEXTEX - device units
EMR_SETVIEWPORTORGEX - (ditto)
EMR_SETWINDOWEXTEX - logical units
EMR_SETWINDOWORGEX - (ditto)
EMR_SETWORLDTRANSFORM
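A hedged sketch of the mapping those records imply, assuming the standard GDI window-to-viewport transform (the world transform from EMR_SETWORLDTRANSFORM, when present, is applied on top of this in advanced graphics mode):

#include <windows.h>

// Logical -> device mapping built from the window/viewport records above.
POINT LogicalToDevice(POINT p,
                      POINT windowOrg,   SIZE windowExt,    // EMR_SETWINDOWORGEX / EMR_SETWINDOWEXTEX
                      POINT viewportOrg, SIZE viewportExt)  // EMR_SETVIEWPORTORGEX / EMR_SETVIEWPORTEXTEX
{
    POINT d;
    d.x = MulDiv(p.x - windowOrg.x, viewportExt.cx, windowExt.cx) + viewportOrg.x;
    d.y = MulDiv(p.y - windowOrg.y, viewportExt.cy, windowExt.cy) + viewportOrg.y;
    return d;
}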
Yes, the Bounds header property is specified as the actual bounds of the composite image (in logical units), but on investigating EMFs created by Inkscape and Adobe Illustrator, I find that they do not adhere to this.
After creating your DC (CreateDC), use GetDeviceCaps to get the total number of dots (raster lines) available for your DC: HORZRES for width, VERTRES for height. The dots aren't square. Then, after reading your EMF file with GetEnhMetaFile, use GetEnhMetaFileHeader to get the header record and look at either rclBounds or rclFrame in it; the second rectangle is a multiple of the first. For EMFs created by PowerPoint, the top and left are zero in my experience, so you focus on the bottom and right. The ratio of the two is your aspect ratio.
You use that ratio to calculate a rectangle in DC units that has the same aspect ratio as rclBounds, but likely adds margins all around so your image doesn't go right to the edge of your device. That rectangle, with units that fall within the range provided by VERTRES and HORZRES, is the third argument to the PlayEnhMetaFile call, which is where you finish up. In sum, you convert from the EMF logical units to the DC logical units by using VERTRES and HORZRES (from your DC) combined with the aspect ratio you calculate (from your EMF).
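A sketch of that procedure under the same assumptions (the 5% margin is an arbitrary choice, and error handling is omitted):

#include <windows.h>

void PlayEmfScaled(HDC hdc, const wchar_t *emfPath)
{
    HENHMETAFILE emf = GetEnhMetaFileW(emfPath);
    ENHMETAHEADER header = {};
    GetEnhMetaFileHeader(emf, sizeof(header), &header);

    // Device raster size in pixels.
    int devW = GetDeviceCaps(hdc, HORZRES);
    int devH = GetDeviceCaps(hdc, VERTRES);

    // Aspect ratio of the picture from its bounds (logical units).
    double emfW = double(header.rclBounds.right  - header.rclBounds.left);
    double emfH = double(header.rclBounds.bottom - header.rclBounds.top);
    double aspect = emfW / emfH;

    // Fit the picture into the device, keeping the aspect ratio,
    // with a 5% margin on every side.
    int margin = devW / 20;
    int availW = devW - 2 * margin;
    int availH = devH - 2 * margin;
    int outW = availW;
    int outH = int(availW / aspect);
    if (outH > availH) { outH = availH; outW = int(availH * aspect); }

    RECT target;
    target.left   = (devW - outW) / 2;
    target.top    = (devH - outH) / 2;
    target.right  = target.left + outW;
    target.bottom = target.top + outH;

    PlayEnhMetaFile(hdc, emf, &target);
    DeleteEnhMetaFile(emf);
}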

Incorrect metrics and sizes of font created by CreateFont()

I'm trying to render a font into a bitmap using WinAPI, but I can't get the font sizes I need.
Here's how the font is initialized:
HDC dc = ::CreateCompatibleDC(NULL);
::SetMapMode(dc, MM_TEXT);
::SetTextAlign(dc, TA_LEFT | TA_TOP | TA_UPDATECP);

int size_in_pixels = 18;
HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");
::SelectObject(dc, font);

TEXTMETRICW tm = { 0 };
::GetTextMetricsW(dc, &tm);
But after that I get incorrect values from both GetGlyphOutlineW and GetTextMetricsW; it's not the size I passed as a parameter.
I know that it expects a value in logical units, but in MM_TEXT one unit should be one pixel, shouldn't it?
I expected CreateFontA to accept a point size when passing a negative value (like here: https://i.stack.imgur.com/tEt8J.png), but in fact that's wrong.
I tried brute-forcing values and found the proper parameter for a few sizes:
18px = -19; 36px = -39; 73px = -78;
I also tried the formula provided by Microsoft:
nHeight = -MulDiv(PointSize, GetDeviceCaps(hDC, LOGPIXELSY), 72);
But it also gives me a wrong result: the rendered text (using GetGlyphOutlineW) is larger if I measure it (for example, the height of 'j' should have exactly the size that I passed).
The metrics from GetTextMetricsW are also wrong, for example tmAscent. I know that on Windows it includes internal leading, but even if I subtract tmInternalLeading from tmAscent it's still incorrect.
By the way, the values from GetCharABCWidthsW are correct, so a + b + c is the width of the glyph in pixels (while the documentation says it should be in logical units).
I should also say something about DPI: I usually use the 125% scale in the Windows 10 settings, but I tried even with 100%. Interestingly, ::GetDeviceCaps(dc, LOGPIXELSY) doesn't change with the scale I use; it's always 96.
Here's an example of CreateFontA(-128, ...) with the final atlas and metrics:
(screenshot: rendered atlas)
Question #1: What should I do to pass the wanted size in pixels and get glyphs of the proper size with correct metrics in pixels?
Question #2: What strange units are all these functions using?
When you use ::SetMapMode(dc, MM_TEXT);, the font size is specified in device pixels. A negative value excludes internal leading, so for the same absolute value the negative ones produce visually bigger fonts. If you want to get the same height from GetTextExtentPoint32 for different fonts, use positive values.
In your example with -128 height, you are requesting a font for which, after excluding internal leading, the height is 128 pixels. The font mapper selects 143, which is correct for an internal leading of 15 pixels (128 + 15 = 143). tmAscent + tmDescent is also correct (115 + 28 = 143). You get what you specified.
You should take into account that the values in the text metrics don't state hard bounds. A designer can design a font so that its glyphs sometimes go beyond the guiding lines or don't reach them.
for example, the height of 'j' should have exactly the size that I passed
The dot over 'j' can go beyond or not reach the top line if the designer finds it visually plausible to design it that way.
Interestingly, ::GetDeviceCaps(dc, LOGPIXELSY) doesn't change with the scale I use; it's always 96
Unless you log off and log back in, the system DPI doesn't change. For a per-monitor-DPI-aware application you have to get the DPI from the monitor parameters or cache the value given by WM_DPICHANGED.
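A hedged sketch of that caching, assuming a per-monitor-DPI-aware process on Windows 10 1607 or later (GetDpiForWindow doesn't exist on older systems):

#include <windows.h>

static UINT g_dpi = 96;   // cached DPI; refreshed on WM_DPICHANGED

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CREATE:
        g_dpi = GetDpiForWindow(hwnd);   // DPI of the monitor the window is on
        return 0;
    case WM_DPICHANGED:
        g_dpi = HIWORD(wParam);          // new DPI for this window
        // lParam also carries a suggested new window rectangle to apply here.
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}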
Question #1: What should I do to pass the wanted size in pixels and get glyphs of the proper size with correct metrics in pixels?
I think you want to get a specific distance between the top and bottom lines, and that is exactly how you create the font: HFONT font = ::CreateFontA(-size_in_pixels, ..., "Arial");. The problem lies in your assumption that the font design lines are hard boundaries for each glyph, but the font's designer doesn't have to strictly align glyphs to these lines. If you want the glyphs strictly aligned, there is probably no way to get it. Maybe check a different font.
Question #2: What strange units are all these functions using?
When the mapping mode is set to MM_TEXT, raw device pixels are used. A positive height specifies the height including tmInternalLeading; a negative one excludes it.
For positive value:
tmAscent + tmDescent = requestedHeight
For negative value:
tmAscent + tmDescent - tmInternalLeading = requestedHeight
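A small sketch illustrating those two formulas: create the same face once with a positive and once with a negative height and print the resulting text metrics (the exact numbers depend on which size the font mapper picks):

#include <windows.h>
#include <cstdio>

static void ReportMetrics(HDC dc, int height)
{
    HFONT font = CreateFontA(height, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS,
                             CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY,
                             DEFAULT_PITCH | FF_DONTCARE, "Arial");
    HGDIOBJ old = SelectObject(dc, font);

    TEXTMETRICA tm = {};
    GetTextMetricsA(dc, &tm);
    printf("height %4d -> ascent %ld, descent %ld, internal leading %ld, "
           "ascent+descent %ld\n",
           height, tm.tmAscent, tm.tmDescent, tm.tmInternalLeading,
           tm.tmAscent + tm.tmDescent);

    SelectObject(dc, old);
    DeleteObject(font);
}

int main()
{
    HDC dc = CreateCompatibleDC(NULL);
    SetMapMode(dc, MM_TEXT);
    ReportMetrics(dc, 128);    // expect tmAscent + tmDescent == 128
    ReportMetrics(dc, -128);   // expect tmAscent + tmDescent - tmInternalLeading == 128
    DeleteDC(dc);
    return 0;
}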
Below I have pasted screenshots of different fonts showing that, depending on the selected font, glyphs may be designed so that they don't reach the top line or go beyond it, and in most cases the bottom line also isn't reached.
It seems that for your requirements Arial Unicode MS would be a better fit (but 'j' still doesn't reach where you want it).
Screenshots: Arial, Arial Unicode MS, Input Mono, Trebuchet MS.

Why does srcset resize image?

I'm seeing some weird behaviour using srcset and I'm having a hard time understanding it. I've made a CodePen: http://codepen.io/anon/pen/dYBvNM
I have a set of images (that Shopify generates) of various sizes: 240px, 480px, 600px and 1024px. The problem is that those are the maximum sizes. This means that if a merchant uploads a smaller image (let's say 600px), the 1024px version will be 600px, not 1024px. I cannot know that in advance, so I'm forced to simply add all the sizes as a "best case":
<img
  src="my_1024x1024.jpg"
  srcset="my_240px.jpg 240w, my_480px.jpg 480w, my_600px.jpg 600w, my_1024px.jpg 1024w"
  sizes="(max-width: 35em) 100vh, 610px"
>
The weirdness happens when the image is indeed smaller than the expected max size. When that's the case, the browser correctly selects the appropriate image (here, it would select the 1024 version on a 15" Retina), but since the image is actually smaller than 1024px (the size I've indicated), the browser ends up resizing the image to be smaller than its native resolution.
You can compare in the CodePen http://codepen.io/anon/pen/dYBvNM that both images are the 1024px version, but the one using srcset renders smaller than the one using src only. I would have expected it to leave the image at its native resolution.
Could you please explain why it does that?
Thanks!
The way it works is that 'w' descriptors are calculated into 'x' descriptors by dividing the given value with the effective size from the sizes attribute. So for instance, if 1024w is picked and the size is 610px, then 1024/610 = 1.67868852459016x, and that is the pixel density of the image that the browser will apply. If the image is then not in fact 1024 pixels wide, the browser will still apply this same density, which will "shrink" the image, because that's the right thing to do in the valid case when the image width and 'w' descriptor match.
You have to make the descriptors match the resource width. When the user uploads an image, you can check its width and use that as the biggest descriptor in srcset (if it's smaller than 1024), and remove the descriptors that are bigger than the given image width.
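For concreteness, here is the same arithmetic as a tiny standalone sketch (not browser code); the numbers are the ones from the question: a 1024w descriptor, a 610px slot, and an upload that is actually only 600px wide:

// Illustrates why the image "shrinks": the density is derived from the
// descriptor, not from the image's real width.
#include <cstdio>

int main()
{
    const double descriptor     = 1024.0; // "1024w" in srcset
    const double effectiveSize  = 610.0;  // chosen from the sizes attribute
    const double realImageWidth = 600.0;  // what the merchant actually uploaded

    const double density = descriptor / effectiveSize;          // ~1.679x
    const double displayedWidth = realImageWidth / density;     // ~357 CSS px

    printf("applied density: %.3fx\n", density);
    printf("displayed width: %.1f CSS px (instead of the expected 610)\n",
           displayedWidth);
    return 0;
}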

Poppler: render with a target resolution

I am writing a PDF viewer in Qt and C++ using Poppler. How can I render a PDF page to fit my widget size? Poppler provides a method named renderToImage which takes a DPI value and returns a QImage whose size varies with that DPI. How do I calculate the right DPI?
pageSizeF() returns the page size in points, which divided by 72 gives you the page size in inches.
Each component of your widget size in pixels divided by each component of the size in inches gives you 2 dpi values (1 for each axis).
If you want to keep the page aspect ratio, you should pass the smaller of these two dpi values to renderToImage for both xres and yres parameters.
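A minimal sketch of that calculation, assuming the poppler-qt5 bindings (the header name differs for Qt6) and a Poppler::Page obtained elsewhere in the viewer:

#include <poppler-qt5.h>
#include <QImage>
#include <QSize>
#include <algorithm>

QImage renderPageToFit(Poppler::Page *page, const QSize &widgetSize)
{
    const QSizeF pts = page->pageSizeF();            // page size in points (1/72 inch)
    const double widthInches  = pts.width()  / 72.0;
    const double heightInches = pts.height() / 72.0;

    // One candidate DPI per axis.
    const double dpiX = widgetSize.width()  / widthInches;
    const double dpiY = widgetSize.height() / heightInches;

    // Use the smaller DPI on both axes so the whole page fits
    // and the aspect ratio is preserved.
    const double dpi = std::min(dpiX, dpiY);
    return page->renderToImage(dpi, dpi);
}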

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK, and let's say the camera has this function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
// An array of an arbitrarily large size; I'm not sure what the exact size
// needs to be, so I made it large.
BYTE* data = new BYTE[10000000];

CameraGetImageData(data);

// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million, if you've got the memory, at least initially. A 10-megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like storing 16 bits per colour, it could take up to 60 or 80 million bytes.
You could fill this big array with data before passing it. For example fill it with '01234567' repeated. Then it's really obvious what bytes have been written and what bytes haven't, so you can work out the real size of what's returned.
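A hedged sketch of that trick; CameraGetImageData and its signature are the assumptions from the question, and the check can in principle false-positive if the real data happens to end with the same bytes as the pattern:

#include <windows.h>   // for BYTE, as used in the question
#include <cstddef>

// Assumed prototype, taken from the question (the SDK is undocumented).
int CameraGetImageData(BYTE* data);

// Pre-fill the buffer with a known pattern, let the SDK overwrite it, then
// scan backwards for the last byte that no longer matches the pattern.
size_t EstimateImageSize(BYTE *data, size_t capacity)
{
    const char pattern[] = "01234567";

    for (size_t i = 0; i < capacity; ++i)
        data[i] = static_cast<BYTE>(pattern[i % 8]);

    CameraGetImageData(data);

    for (size_t i = capacity; i > 0; --i)
        if (data[i - 1] != static_cast<BYTE>(pattern[(i - 1) % 8]))
            return i;              // bytes [0, i) were (probably) written
    return 0;                      // nothing changed
}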
I don't think there is a standard, but you can try to identify which values are what by putting some solid-color images in front of the camera, so all pixels would be approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green and blue images.
But also consider finding a better SDK that has documentation, because just allocating a big array is really bad design.
You should check the documentation of your camera SDK, since there's no "standard" or "common" way of outputting data. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle the most common formats and pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with the camera.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 is for 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try to dump your byte array into a file and load the result either in Python or Matlab and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get anything out of it.
Good luck!
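A hedged sketch for that raw-monochrome case: interpret the buffer as width x height little-endian 16-bit samples (12 significant bits) and dump them as an 8-bit PGM that any image viewer can open. The dimensions, endianness, and bit depth are assumptions you have to confirm for your own camera:

#include <cstdint>
#include <cstdio>

void DumpRaw16AsPgm(const uint8_t *data, int width, int height, const char *path)
{
    FILE *f = std::fopen(path, "wb");
    std::fprintf(f, "P5\n%d %d\n255\n", width, height);     // binary 8-bit PGM header

    for (int i = 0; i < width * height; ++i) {
        // One little-endian 16-bit sample per pixel, 12 significant bits.
        uint16_t sample = static_cast<uint16_t>(data[2 * i]) |
                          (static_cast<uint16_t>(data[2 * i + 1]) << 8);
        std::fputc(static_cast<uint8_t>(sample >> 4), f);   // 12 bit -> 8 bit
    }
    std::fclose(f);
}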
I hope this question's solution will help you: https://stackoverflow.com/a/3340944/291372
Actually, you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is to determine the width and height; after that you can try to restore a bitmap image from your byte array.