Get size of Excel cell in pixels - C++

I am trying to programmatically (C++, but VBA explanations are OK) get the size of an Excel cell in pixels. The Excel application GUI shows the size of the cell as:
Width: 8.28 (160 pixels) Height: 24.6 (41 pixels), Font is Arial 20 pt.
Using an Excel Range I can get:
ColumnWidth: 8.3, RowHeight: 24.6
Range Width: 96, Range Height: 24.6
I tried using PointsToScreenPixelsX and PointsToScreenPixelsY on all of the above values, but they returned values that didn't match what the Excel GUI reported (396 for the row/cell height, and 136 and 224 for the column width).
Any ideas?

The conversion from points to pixels depends on your DPI setting. There are 72 points to an inch, so if you have 96 points that's 4/3 of an inch. If your DPI (in Display Properties) is 120 then that works out to 160 pixels.
In other words, pixels = points * DPI / 72.
However, this doesn't take zoom into account. ActiveWindow.Zoom in Excel is a percentage, so for instance 200 is twice normal size. Note that the UI still shows unzoomed pixels.
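As a quick illustration, here is a minimal C++ sketch of that conversion (treating the zoom as a simple scale factor on the result, mirroring the ActiveWindow.Zoom percentage described above, is an assumption of this sketch):

#include <cstdio>

// Convert typographic points to screen pixels at a given DPI.
// zoomPercent mirrors Excel's ActiveWindow.Zoom (100 = no zoom).
double pointsToPixels(double points, double dpi, double zoomPercent = 100.0)
{
    return points * dpi / 72.0 * zoomPercent / 100.0;
}

int main()
{
    // 96 points at 120 DPI -> 160 pixels, matching the numbers above.
    std::printf("%.0f pixels\n", pointsToPixels(96.0, 120.0));
    return 0;
}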

The OP stated:
The excel application gui shows the size of the cell as:
Width: 8.28 (160 pixels) Height: 24.6 (41 pixels), Font is Arial 20 pt.
First, let me clarify: the application GUI shows column widths and row heights in both a decimal measurement and a pixel measurement, regardless of font size, screen size, zoom, etc. Whatever those factors are, if the Excel column width is 8.43, it will always be reported as 64 pixels. Second, I am a little confused, because in my version of Excel (currently 2010) and every prior version I can remember, the standard column width of 8.43 equals 64 pixels; likewise, the standard row height of 15 equals 20 pixels, which does not seem to match the OP's example.
Having established that, one poster asked "Why?" One reason: if you're adjusting column widths or row heights, Excel only allows that in discrete units that, unfortunately, they decided to name pixels. Maybe they were related to pixels in some early version, but they seem just as arbitrary as the other units used - 8.43 is what, inches, picas? Not twips, that's for sure! Here, I'll call it a decimal unit.
Anyway, for all column widths over 1.00, that discrete pixel unit is 1/7th of a decimal unit. Bizarrely, column widths under 1.00 are divided into 12 units. Therefore, the discrete widths up to 2.00 decimal units are as follows:
0.08, 0.17, 0.25, 0.33, 0.42, 0.5, 0.58, 0.67, 0.75, 0.83, 0.92, 1.00,
1.14, 1.29, 1.43, 1.57, 1.71, 1.86, 2.00
with 2.00 equaling 19 pixels. Yes, I'll pause while you shake your head in disbelief, but that's how they made it.
Fortunately, row heights appear to be more uniform, with 1 pixel equaling 0.75 decimal units; 10 pixels equaling 7.50; standard row height of 20 pixels equaling 15.00; and so on. Just in case you should ever need to convert between these randomly discrete units, here are a couple of VBA functions to do so:
Function ColumnWidthToPixels(ByVal ColWidth As Single) As Integer
    Select Case Round(ColWidth, 4) ' Adjust for floating point errors
        Case Is < 0
            ColumnWidthToPixels = ColumnWidthToPixels(ActiveSheet.StandardWidth)
        Case Is < 1
            ColumnWidthToPixels = Round(ColWidth * 12, 0)
        Case Is <= 255
            ColumnWidthToPixels = Round(12 + ((Int(ColWidth) - 1) * 7) _
                + Round((ColWidth - Int(ColWidth)) * 7, 0), 0)
        Case Else
            ColumnWidthToPixels = ColumnWidthToPixels(ActiveSheet.StandardWidth)
    End Select
End Function

Function PixelsToColumnWidth(ByVal Pixels As Integer) As Single
    Select Case Pixels
        Case Is < 0
            PixelsToColumnWidth = ActiveSheet.StandardWidth
        Case Is < 12
            PixelsToColumnWidth = Round(Pixels / 12, 2)
        Case Is <= 1790
            PixelsToColumnWidth = Round(1 + ((Pixels - 12) / 7), 2)
        Case Else
            PixelsToColumnWidth = ActiveSheet.StandardWidth
    End Select
End Function
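The row-height relationship stated above (1 pixel = 0.75 decimal units) makes those conversions one-liners. Since the OP is working in C++, here is a rough sketch of the same arithmetic (the function names are my own):

#include <cmath>

// Row heights: 1 pixel = 0.75 of Excel's row-height units,
// so the standard height of 15.00 maps to 20 pixels.
int rowHeightToPixels(double rowHeight)
{
    return static_cast<int>(std::round(rowHeight / 0.75));
}

double pixelsToRowHeight(int pixels)
{
    return pixels * 0.75;
}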

Example
This example determines the height and width (in pixels) of the selected cells in the active window and returns the values in the lWinWidth and lWinHeight variables.
Dim lWinWidth As Long, lWinHeight As Long
With ActiveWindow
    lWinWidth = .PointsToScreenPixelsX(.Selection.Width)
    lWinHeight = .PointsToScreenPixelsY(.Selection.Height)
End With

Related

Map a pixel color found in OpenCV to a pre-determined list of colors

I have a scenario where I have obtained one or more colors from an image, but now I need to determine which one of my existing color options it is closest to.
For example, I may have red(255,0,0), green(0,255,0) and blue(0,0,255) as my three choices, but the image may contain orange(255,165,0).
What I need then is a way to determine which one of those three values I should choose as my output color to replace orange.
One approach I have considered is to measure the distance to each of those three values, find the smallest one, and select that color.
Example:
orange -> red
abs(255 - 255) = 0, abs(165 - 0) = 165, abs(0 - 0) = 0
0 + 165 + 0 = 165
orange -> green
abs(255 - 0) = 255, abs(165 - 255) = 90, abs(0 - 0) = 0
255 + 90 + 0 = 345
orange -> blue
abs(255 - 0) = 255, abs(165 - 0) = 165, abs(0 - 255) = 255
255 + 165 + 255 = 675
Under this approach, I would pick red.
However, I am not sure if this is the best, or even a particularly valid, approach, so I was wondering if there is something out there that is more accurate and would scale better to an increased color palette.
Update: The reduction answer linked here does not help, as it reduces things across the board. I need the ability to map a broad range of colors to several specific options.
I think you should represent and compare the colors in a different color space, one that models human color perception. The L*a*b* color space is best suited for this:
https://en.wikipedia.org/wiki/Lab_color_space/
Color distances in that coordinate space are represented by a delta E value. You can find the different delta E standards here:
https://en.wikipedia.org/wiki/Color_difference#CIELAB_Delta_E.2A/
In order to change the color space, you can use the cv::cvtColor() method. Color conversion for a single pixel is described here:
https://stackoverflow.com/a/35737319/8682088/
After calculating the pixel's coordinates in L*a*b* space, you can easily calculate delta E against each reference color and pick the one with the smallest error.
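To make that concrete, here is a minimal C++/OpenCV sketch of the whole approach (the helper names and the CIE76 variant of delta E are my choices; note that for 8-bit input cv::cvtColor scales L to 0-255 and offsets a/b by 128, which is fine as long as every color goes through the same conversion):

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

// Convert a single 8-bit BGR pixel to L*a*b* with cv::cvtColor.
static cv::Vec3b bgrToLab(const cv::Vec3b &bgr)
{
    cv::Mat src(1, 1, CV_8UC3, cv::Scalar(bgr[0], bgr[1], bgr[2]));
    cv::Mat dst;
    cv::cvtColor(src, dst, cv::COLOR_BGR2Lab);
    return dst.at<cv::Vec3b>(0, 0);
}

// CIE76 delta E: plain Euclidean distance in L*a*b* space.
static double deltaE76(const cv::Vec3b &a, const cv::Vec3b &b)
{
    double dL = double(a[0]) - b[0];
    double da = double(a[1]) - b[1];
    double db = double(a[2]) - b[2];
    return std::sqrt(dL * dL + da * da + db * db);
}

int main()
{
    // Reference palette in BGR order (OpenCV's default): blue, green, red.
    std::vector<cv::Vec3b> palette = {
        cv::Vec3b(255, 0, 0), cv::Vec3b(0, 255, 0), cv::Vec3b(0, 0, 255)
    };
    cv::Vec3b orange(0, 165, 255); // BGR for RGB(255, 165, 0)

    cv::Vec3b orangeLab = bgrToLab(orange);
    std::size_t best = 0;
    double bestDist = deltaE76(orangeLab, bgrToLab(palette[0]));
    for (std::size_t i = 1; i < palette.size(); ++i)
    {
        double d = deltaE76(orangeLab, bgrToLab(palette[i]));
        if (d < bestDist) { bestDist = d; best = i; }
    }
    std::printf("closest palette index: %zu (deltaE %.1f)\n", best, bestDist);
    return 0;
}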

textureGather() behavior at texel center coordinates

Suppose I have a 100x100 texture and I do the following:
vec4 texelQuad = textureGather(sampler, vec2(50.5)/vec2(100.0));
The coordinate I am requesting is exactly at the center of texel (50, 50). So, will I get a quad of texels bounded by (49, 49) and (50, 50) or the one bounded by (50, 50) and (51, 51). The spec is evasive on the subject. It merely states the following:
The rules for the LINEAR minification filter are applied to
identify the four selected texels.
The relevant section of the spec, 8.14.2 Coordinate Wrapping and Texel Selection, is not terribly clear either. My best hypothesis would be the following:
ivec2 lowerBoundTexelCoord = ivec2(floor(textureCoord * textureSize - 0.5));
Does that hypothesis hold in practice? No, it doesn't. In fact no other hypothesis would hold either, since different hardware returns different results for this particular case:
textureSize: 100x100
textureCoord: vec2(50.5)/vec2(100.0)
Hypothesis: (49, 49) to (50, 50)
GeForce 1050 Ti: (49, 49) to (50, 50)
Intel HD Graphics 630: (50, 50) to (51, 51)
another case:
textureSize: 100x100
textureCoord: vec2(49.5)/vec2(100.0)
Hypothesis: (48, 48) to (49, 49)
GeForce 1050 Ti: (49, 49) to (50, 50)
Intel HD Graphics 630: (48, 48) to (49, 49)
Does that make textureGather() useless due to the unpredictable behavior at texel center coordinates? Not at all! While you may not be able to predict which 4 texels it will return in some particular cases, you can still force it to return the ones you want by giving it a coordinate between those 4 texels. That is, if I want the texels bounded by (49, 49) and (50, 50), I would call:
textureGather(sampler, vec2(50.0, 50.0)/textureSize);
Since the coordinate I am requesting this time is the point where those 4 texels meet, any implementation will surely return those 4 texels.
Now, the question: Is my analysis correct? Does everyone who uses textureGather() force it to return a particular quad of texels, rather than figuring out which ones it would return by itself? If so, it's a shame this isn't reflected in any documentation.
EDIT
It was pointed out that OpenGL doesn't guarantee the same result when dividing identical floating-point numbers on different hardware. Therefore, it becomes necessary to mention that in my actual code I had vec2(50.5)/vec2(textureSize(sampler, 0)) rather than vec2(50.5)/vec2(100.0). That's important, since the presence of textureSize() prevents that division from being carried out at shader compilation time.
Let me also rephrase the question:
Suppose you've got a normalized texture coordinate from a black box. That coordinate is then passed to textureGather():
vec2 textureCoord = takeFromBlackBox();
vec4 texelQuad = textureGather(sampler, textureCoord);
Can anyone produce GLSL code that would return the integer pair of coordinates of the texel returned in texelQuad[3], which is the lower-bound corner of a 2x2 box? The obvious solution below doesn't work in all cases:
vec2 textureDims = textureSize(sampler, 0);
ivec2 lowerBoundTexelCoord = ivec2(floor(textureCoord * textureDims - 0.5));
Examples of tricky cases where the above approach may fail are:
vec2 textureCoord = vec2(49.5)/vec2(textureSize(sampler, 0));
vec2 textureCoord = vec2(50.5)/vec2(textureSize(sampler, 0));
where textureSize(sampler, 0) returns ivec2(100, 100).
Recall that the texel locations for GL_LINEAR ([OpenGL 4.6 (Core) §8.14 Texture Minification]) are selected by the following formulas:
i0 = wrap(⌊u′ - 1/2⌋)
j0 = wrap(⌊v′ - 1/2⌋)
...
The value of (u′,v′) in this case is equal to
(vec2(50.5) / vec2(100)) * vec2(100)
However, note that this is not guaranteed to be equal to vec2(50.5). See The OpenGL Shading Language 4.60, §4.7.1 Range and Precision:
a / b, 1.0 / b: 2.5 ULP for b in the range [2⁻¹²⁶, 2¹²⁶].
So the value of u′ might be slightly larger than 50.5, slightly smaller than 50.5, or it might be 50.5 exactly. No promises! Well, the spec promises no more than 2.5 ULP of error, but that's nothing to write home about. You can see that when you subtract 0.5 and take the floor, you are going to get either 49 or 50, depending on which way the division was rounded.
i0 = wrap(⌊(50.5 / 100) * 100 - 1/2⌋)
i0 = wrap(⌊(.505 ± error) * 100 - 1/2⌋)
i0 = wrap(⌊50.5 ± error - 1/2⌋)
i0 = wrap(⌊50 ± error⌋)
i0 = 50 (if error >= 0) or 49 (if error < 0)
So in fact it is not textureGather() that is behaving unpredictably. The unpredictable part is the rounding error when you divide by 100, which is explained in the GLSL spec and in the classic article, What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Or in other words, textureGather() always gives you the same result, but 50.5/100.0 does not.
Note that you could get exact results if your texture size were a power of two, since you could use 50.5 * 0.0078125 to compute 50.5 / 128, and the result would be exactly correct, since multiplication is correctly rounded and 0.0078125 is a power of two.
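The knife-edge behavior is easy to reproduce on the CPU in single precision. The following small C++ sketch (my own illustration, not part of the original answer) shows that one ULP either way around u′ = 50.5 flips the floor between 49 and 50:

#include <cmath>
#include <cstdio>

int main()
{
    // u' = (50.5 / 100.0) * 100.0, computed in single precision the way
    // a GLSL implementation might. The quotient is not exactly 0.505,
    // so u' may land slightly above or below 50.5.
    float coord = 50.5f / 100.0f;
    float uPrime = coord * 100.0f;
    std::printf("u' = %.9g -> i0 = %d\n",
                uPrime, int(std::floor(uPrime - 0.5f)));

    // One ULP either way around 50.5 flips the result between 49 and 50.
    float up = std::nextafterf(uPrime, 100.0f);
    float down = std::nextafterf(uPrime, 0.0f);
    std::printf("+1 ulp -> i0 = %d, -1 ulp -> i0 = %d\n",
                int(std::floor(up - 0.5f)), int(std::floor(down - 0.5f)));
    return 0;
}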

Creating a diverging color palette with a "midrange" instead of a "midpoint"

I am using the Python seaborn package to generate a diverging color palette (seaborn.diverging_palette).
I can choose my two extremity colors, and define whether the center is light (white) or dark (black) with the center parameter. But what I would like is to extend this center color (white in my case) over a given range of values.
For example, my values run from 0 to 20, so my midpoint is 10. Hence, only 10 is white, and the color becomes more green/more blue when going towards 0/20. I would like to keep the color white from 7 to 13 (3 before/after the midpoint), and only then start to move to green/blue.
I found the sep parameter, which extends or reduces this central white part. But I can't find any explanation of what its value means, in order to find which value of sep would correspond to 3 on each side of the midpoint, for example.
Does anybody know the relationship between sep and the value scale?
Or whether another parameter could produce the expected behaviour?
It seems the sep parameter can take any integer between 1 and 254. The fraction of the colourmap that will be covered by the midpoint colour will be equal to sep/256.
Perhaps an easy way to visualise this is to use the seaborn.palplot, with n=256 to split the palette up into 256 colours.
Here is a palette with sep = 1:
sns.palplot(sns.diverging_palette(0, 255, sep=1, n=256))
And here is a palette with sep = 8
sns.palplot(sns.diverging_palette(0, 255, sep=8, n=256))
Here is sep = 64 (i.e. one quarter of the palette is the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=64, n=256))
Here is sep = 128 (i.e. one half is the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=128, n=256))
And here is sep = 254 (i.e. all but the colours on the very edge of the palette are the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=254, n=256))
Your specific palette
So, for your case where you have a range of 0 to 20 but a midpoint range of 7 to 13, you would want the fraction of the palette covered by the midpoint colour to be 6/20. To convert that to sep, we need to multiply by 256, which gives sep = 256 * 6 / 20 = 76.8. However, sep must be an integer, so let's use 77.
Here is a script to make a diverging palette, and plot a colorbar to show that using sep = 77 leaves the correct midpoint colour between 7 and 13:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# Create your palette
cmap = sns.diverging_palette(0, 255, sep=77, as_cmap=True)
# Some data with a range of 0 to 20
x = np.linspace(0, 20, 20).reshape(4, 5)
# Plot a heatmap (I turned off the cbar here,
# so I can create it later with ticks spaced every integer)
ax = sns.heatmap(x, cmap=cmap, vmin=0, vmax=20, cbar=False)
# Grab the heatmap from the axes
hmap = ax.collections[0]
# make a colorbar with ticks spaced every integer
cbar = plt.gcf().colorbar(hmap)
cbar.set_ticks(range(21))
plt.show()

Getting the equivalent pixel between two different screen resolutions

I want to get the equivalent pixel location across two different screen resolutions.
Here is an example.
In a 1366x768 resolution, the desired pixel is located at the point (120, 300).
I want to convert to a lower resolution and get the equivalent of the (120, 300) point from the original in the converted one.
Use percentages.
E.g. 120/1366 = 60/683 = x ≈ 0.0878 and 300/768 = 25/64 = y ≈ 0.3906. Now simply multiply these fractions by your desired resolution.
For example, if you have the resolution 800x600 and want this position, just multiply:
x = 800 * 0.0878 = 70.24
y = 600 * 0.3906 = 234.36
This works because the position gets kind of 'normalized', so that it lies between 0 and 1. Whatever you multiply it by will have the same proportions. E.g. assume we want the position (400, 300) from an 800x600 screen on another screen, so that it has the same ratios. We can do the same as in your problem above:
x = 400 / 800 = 0.5
y = 300 / 600 = 0.5
To get the position on any other screen, we multiply the result by that screen's resolution.
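For completeness, here is a minimal C++ sketch of the mapping (the function name is my own): normalize each coordinate against the source resolution, then scale by the target:

#include <cstdio>

struct Point { double x, y; };

// Map a point from one resolution to another: normalize each coordinate
// to [0, 1] against the source size, then scale by the target size.
// Assumes both coordinate systems share the same origin corner.
Point mapToResolution(Point p, double srcW, double srcH,
                      double dstW, double dstH)
{
    return { p.x / srcW * dstW, p.y / srcH * dstH };
}

int main()
{
    // (120, 300) on a 1366x768 screen, mapped to 800x600.
    Point q = mapToResolution({120, 300}, 1366, 768, 800, 600);
    // Prints 70.28, 234.38 (the hand-rounded fractions above gave
    // 70.24 and 234.36).
    std::printf("%.2f, %.2f\n", q.x, q.y);
    return 0;
}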

Finding nearest RGB colour

I was told to use the distance formula to find out whether one color matches another, so I have:
struct RGB_SPACE
{
    float R, G, B;
};

RGB_SPACE p = {255, 164, 32};  // pre-defined
RGB_SPACE u = {192, 35, 111};  // user-defined

long distance = static_cast<long>(pow(u.R - p.R, 2) + pow(u.G - p.G, 2) + pow(u.B - p.B, 2));
This gives just a distance, but how would I know if the color matches the user-defined one by at least 25%?
I'm not quite sure, but I have an idea: check each color value to see if the difference is within 25%. For example:
float R = u.R/p.R * 100;
float G = u.G/p.G * 100;
float B = u.B/p.B * 100;

if (R <= 25 && G <= 25 && B <= 25)
{
    // color matches with pre-defined color.
}
I would suggest not checking in RGB space. If you have (0,0,0) and (100,0,0), they are similar according to cababunga's formula (as well as according to casablanca's, which considers too many colors similar). However, they LOOK pretty different.
The HSL and HSV color models are based on human interpretation of colors and you can then easily specify a distance for hue, saturation and brightness independently of each other (depending on what "similar" means in your case).
"Matches by at least 25%" is not a well-defined problem. Matches by at least 25% of what, and according to what metric? There's tons of possible choices. If you compare RGB colors, the obvious ones are distance metrics derived from vector norms. The three most important ones are:
1-norm or "Manhattan distance": distance = abs(r1-r2) + abs(g1-g2) + abs(b1-b2)
2-norm or Euclidean distance: distance = sqrt(pow(r1-r2, 2) + pow(g1-g2, 2) + pow(b1-b2, 2)) (you compute the square of this, which is fine - you can avoid the sqrt if you're just checking against a threshold, by squaring the threshold too)
Infinity-norm: distance = max(abs(r1-r2), abs(g1-g2), abs(b1-b2))
There are lots of other possibilities, of course. You can check whether two colors are within some distance of each other: if you want to allow up to 25% difference (over the range of possible RGB values) in each color channel, the thresholds to use for the three metrics above are 3/4 * 255, sqrt(3)/4 * 255, and 255/4, respectively. This is a very coarse metric, though.
A better way to measure distances between colors is to convert your colors to a perceptually uniform color space like CIELAB and do the comparison there; there's a fairly good Wikipedia article on the subject, too. That might be overkill depending on your intended application, but those are the color spaces where measured distances have the best correlation with distances perceived by the human visual system.
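For reference, here is a short C++ sketch of the three metrics and the 25%-per-channel thresholds given above (a sketch only, using a struct like the question's RGB_SPACE):

#include <algorithm>
#include <cmath>
#include <cstdio>

struct RGB { float R, G, B; };

// 1-norm ("Manhattan") distance.
float dist1(RGB a, RGB b)
{
    return std::fabs(a.R - b.R) + std::fabs(a.G - b.G) + std::fabs(a.B - b.B);
}

// 2-norm (Euclidean) distance.
float dist2(RGB a, RGB b)
{
    float dr = a.R - b.R, dg = a.G - b.G, db = a.B - b.B;
    return std::sqrt(dr * dr + dg * dg + db * db);
}

// Infinity-norm: the largest per-channel difference.
float distInf(RGB a, RGB b)
{
    return std::max({std::fabs(a.R - b.R), std::fabs(a.G - b.G),
                     std::fabs(a.B - b.B)});
}

int main()
{
    RGB p{255, 164, 32}, u{192, 35, 111};
    std::printf("1-norm:   %.1f (threshold %.2f)\n", dist1(p, u), 3.0 * 255 / 4);
    std::printf("2-norm:   %.1f (threshold %.2f)\n", dist2(p, u),
                std::sqrt(3.0) * 255 / 4);
    std::printf("inf-norm: %.1f (threshold %.2f)\n", distInf(p, u), 255.0 / 4);
    return 0;
}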
Note that the maximum possible distance is between (255, 255, 255) and (0, 0, 0), which are at a squared distance of 3 * 255^2 (your formula computes the square of the Euclidean distance). Obviously these two colours match the least (a 0% match), and they are 100% apart. Then at least a 25% match means a distance less than 75% of the maximum, i.e. 3 / 4 * 3 * 255^2 = 9 / 4 * 255 * 255. So you could just check whether:
distance <= 9 / 4 * 255 * 255
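In code, that check might look like the following sketch (sticking with squared distances throughout, as the question's snippet does):

#include <cstdio>

// Squared Euclidean distance in RGB, as in the question's code.
long rgbDistanceSq(int r1, int g1, int b1, int r2, int g2, int b2)
{
    long dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return dr * dr + dg * dg + db * db;
}

int main()
{
    const long maxDistSq = 3L * 255 * 255;  // (0,0,0) vs (255,255,255)
    long d = rgbDistanceSq(255, 164, 32, 192, 35, 111);

    // "At least a 25% match" = within 75% of the maximum squared distance.
    bool matches = d <= 9L * 255 * 255 / 4;
    std::printf("distSq = %ld of %ld max -> %s\n",
                d, maxDistSq, matches ? "match" : "no match");
    return 0;
}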