Given a list of screen sizes, how do I detect which ones are in a 4:3 or 16:9 aspect ratio?
I can use width / height to get the ratio, but for 16:9 sizes I sometimes get 1.778 and sometimes 1.777778 due to rounding errors.
Check if 4 * height == 3 * width or 16 * height == 9 * width.
Remember the definition of a rational number: It is an equivalence class of pairs of integers (m, n) subject to the equivalence (m, n) ≡ (m', n') if and only if n' m = n m'.
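For instance, a minimal sketch of that integer cross-multiplication test (the helper name and sample sizes are mine):

#include <iostream>

// w/h equals m/n exactly when n * w == m * h; no floating point involved.
bool has_aspect_ratio(int width, int height, int m, int n)
{
    return n * width == m * height;
}

int main()
{
    std::cout << has_aspect_ratio(1920, 1080, 16, 9) << "\n"; // 1
    std::cout << has_aspect_ratio(1024, 768, 4, 3) << "\n";   // 1
    std::cout << has_aspect_ratio(1280, 1024, 4, 3) << "\n";  // 0 (it is 5:4)
}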
You can force the rounding to always be the same, and then compare the values:
int ratio = (int)((width / (float)height) * 100);
You will always get 177 for 16:9 and 133 for 4:3.
good luck
Compare with some epsilon proximity. It should be something like:
double epsilon = 0.01;
if (std::fabs((double)screen1.height / screen1.width - (double)screen2.height / screen2.width) < epsilon)
{
    // equal ratios
}
You must use an epsilon value for the comparison.
You can have a look at: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
I am trying to compute integer array bounds that will include floating point limits divided by a scale. For example, if my origin is 0, my floating point maximum is 10 and my scale is 10, then my integer array bound needs to be 2. The obvious formula is to divide my maximum by the scale, giving the incorrect result of 1.
I need to divide the inclusive maximum values by the scale and add one if the division is an exact multiple.
I am running into a mismatch between the normal way to define and use integer array indexes and my desired way to use real value coordinates. I am trying to map inclusive real value coordinates into integer array indexes, using a scaling term.
(I am actually working with two dimensional maps, but the problem can be expressed more simply in one dimension.)
This is wrong:
int get_array_size(double scale, double maximum)
{
    return std::ceil(maximum / scale); // Fails on exact multiples
}
This is wasteful:
int get_array_size(double scale, double maximum)
{
    return 1 + std::ceil(maximum / scale); // Allocates extra array memory
}
This is ugly and I am not sure if it is correct:
int get_array_size(double scale, double maximum)
{
    if (std::fmod(maximum, scale) == 0.0) // I am not sure if this is correct
        return 1 + std::ceil(maximum / scale);
    else
        return std::ceil(maximum / scale); // Maybe I can eliminate the call to std::ceil?
}
I am trying to get the value maximum / scale on every open-ended interval ending at a multiple of scale, and 1 + maximum / scale on every interval starting at (>=) a multiple of scale and ending before (<) that multiple plus one. I am not sure how to correctly express this in mathematical terms or how to implement it in C++. I would be grateful if someone could clarify my understanding and point me in the right direction.
Mathematically, I think I am trying to define f(x, s) = y such that if s * n <= x < s * (n + 1), then y = n + 1. I want to implement this efficiently and respect the difference between the <= and < comparisons.
The way I interpret this question, I think maximum and scale don't actually matter - what you are really asking about is how to correctly map from floats to ints with specific boundary conditions. For example [0.0, 1.0) to 0, [1.0, 2.0) to 1, etc. So the question becomes a bit simpler if we just consider maximum / scale to be a single quantity; I'll call it t.
I believe you actually want to use std::floor instead of std::ceil:
int scaled_coord_to_index(float t) {
    return static_cast<int>(std::floor(t)); // needs <cmath>
}
And the size of your array should always be the maximum scaled coordinate + 1 (with negative values normalized to start at 0).
int array_size(float min_t, float max_t) {
// NOTE: This will "anchor" your coords based on the most negative value.
// e.g. if that value is 1.6, then your bins will be [1.6, 2.6), [2.6, 3.6), etc.
// To change that behavior you could use std::floor(min_t) instead.
return scaled_coord_to_index(max_t - min_t) + 1;
}
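A quick check against the numbers from the question, reusing the two functions above (main is just for illustration):

#include <cmath>
#include <iostream>

// scaled_coord_to_index and array_size as defined above
int main()
{
    std::cout << array_size(0.0f, 1.0f) << "\n"; // exact multiple (t = 1.0): prints 2
    std::cout << array_size(0.0f, 0.9f) << "\n"; // t = 0.9: prints 1
}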
I've got a problem and need your help:
I'm working on a program that shows n videos in tiling mode (aka a video wall: c columns and r rows). The n is arbitrary, the videos have the same size (W x H), we are given the W / H ratio, and the size of the wall is fixed. How can I get the best set of c, r, W and H when n changes? The best set is defined as: W and H are at their maximum values and the videos fill the maximum area of the wall.
I have taken a look at the Packing Problem but still can't solve my problem above. Can someone help me with this? Thank you very much!
As far as I understand, you want to place n rectangles with a fixed ratio C = W/H on a wall with given Width and Height.
Let the rectangle height be h (unknown yet); the width is then w = C * h.
Every row of the grid contains
nr = Floor(Width / (C * h)) // rounding down
and every column contains
nc = Floor(Height / h)
Write the inequality
n <= nc * nr
n <= Floor(Width / (C * h)) * Floor(Height / h)
and solve it for the unknown h (find the maximal possible h value).
For real values of the parameters, h can be found by taking an initial approximate value
h0 = Ceil(Sqrt(Width * Height / (n * C)))
and decrementing h until the inequality becomes true.
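A minimal C++ sketch of that decrement search (the function name is mine, and h is searched in whole units):

#include <cmath>

int find_max_tile_height(double Width, double Height, int n, double C)
{
    // Initial estimate: h0 = Ceil(Sqrt(Width * Height / (n * C)))
    int h = (int)std::ceil(std::sqrt(Width * Height / (n * C)));
    while (h > 0)
    {
        int nr = (int)(Width / (C * h)); // tiles per row, rounded down
        int nc = (int)(Height / h);      // tiles per column, rounded down
        if (nr * nc >= n)
            return h;                    // largest h satisfying n <= nr * nc
        --h;
    }
    return 0; // wall too small to fit n tiles at any integer height
}

For example, on a 1920 x 1080 wall with n = 4 and C = 16.0 / 9.0 this returns h = 540, i.e. a 2 x 2 grid of 960 x 540 tiles.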
I'm working on a platform that has only integer arithmetic. The application uses geographic information, and I'm representing points by (x, y) coordinates where x and y are distances measured in meters.
As an approximation, I want to compute the Euclidean distance between two points. But to do this I have to square distances, and with 32-bit integers, the largest distance I can represent is 32 kilometers. Not good.
My needs are more on the order of 1000 kilometers. But I'd like to be able to resolve distances on a scale smaller than 30 meters.
Hence my question: how can I compute Euclidean distance, using only integer arithmetic, without overflow, on distances whose squares don't fit in a single word?
ETA: I would like to be able to compute distances, but I might settle for being able to compare them.
Perhaps comparing the octagonal distance approximation would be sufficient?
Slightly more up to date is this article on fast approximate distance functions.
I would recommend using fixed point calculation with integers; the distance approximation is then not too complicated.
See Fast Approximate Distance Functions by Rafael Baptista.
The first step is to choose a fixed point representation for our needs:
For example, if we need a number range of 1000 km with 1 m resolution, we can use 20 bits (2^20 = 1,048,576), which leaves around 10 bits for the fraction.
Then we need to implement the approximation we chose:
For example, if we select the following approximation:
h ≈ b (1 + 0.337 (a/b)) = b + 0.337 a, assuming 0 ≤ a ≤ b
We implement it as follows:
#include <cstdint>

int32_t dx = (x1 > x2 ? x1 - x2 : x2 - x1); /* |x1 - x2| */
int32_t dy = (y1 > y2 ? y1 - y2 : y2 - y1); /* |y1 - y2| */
int32_t a = dx > dy ? dy : dx;              /* min(dx, dy) */
int32_t b = dx > dy ? dx : dy;              /* max(dx, dy) */
int32_t h = b + (345 * a >> 10);            /* 345.088 = 0.337 * 2^10 */
About overflow:
Adding two <+20.0> positive numbers results in at most a <+21.0> number. That is OK.
The multiplication is also safe as long as we use numbers in the range -1..1; the result then remains in the same range. In our case <+20.0> * <+0.10> results in <+20.10> numbers, which we convert back to <+20.0>.
There is one step here we need to pay attention to: during the multiplication we temporarily get a <+20.10> number, which is already near our 32-bit limit.
Exact calculation
We can also calculate the exact distance using the following consideration:
h = b sqrt(1 + (a/b)^2), assuming 0 < b ≤ a
In this case we also need to calculate the square root.
In case a/b is still significantly larger than one, or too large to calculate the square of, we can simplify the calculation to:
h = a
See the implementation here
I would leave the square root out of play and only approximate the Euclidean distance by its square. When comparing distances, this approach gives you 100% accuracy, since the comparison outcome is the same as if you had compared the actual distances.
I am pretty sure about that, since I had used that approach when searching for nearest neighbours in high dimensional spaces. You can check my code and the theory in kd-GeRaF.
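A sketch of that comparison trick in integer arithmetic; widening to 64 bits before squaring is my own assumption here, to sidestep the 32-bit overflow from the question:

#include <cstdint>

// Is point a closer to p than point b is? Compares squared distances,
// so no square root is needed and the ordering is exact.
bool closer(int32_t px, int32_t py,
            int32_t ax, int32_t ay,
            int32_t bx, int32_t by)
{
    int64_t dax = (int64_t)ax - px, day = (int64_t)ay - py;
    int64_t dbx = (int64_t)bx - px, dby = (int64_t)by - py;
    return dax * dax + day * day < dbx * dbx + dby * dby;
}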
I was told to use the distance formula to find whether one color matches another, so I have:
#include <cmath>

struct RGB_SPACE
{
    float R, G, B;
};

RGB_SPACE p = {255, 164, 32}; // pre-defined
RGB_SPACE u = {192, 35, 111}; // user-defined

long distance = static_cast<long>(std::pow(u.R - p.R, 2) + std::pow(u.G - p.G, 2) + std::pow(u.B - p.B, 2));
This gives just a distance, but how would I know if the color matches the user-defined one by at least 25%?
I'm not quite sure, but I have an idea: check each color value to see if the difference is within 25%. For example:
float R = u.R/p.R * 100;
float G = u.G/p.G * 100;
float B = u.B/p.B * 100;
if (R <= 25 && G <= 25 && B <= 25)
{
//color matches with pre-defined color.
}
I would suggest not checking in RGB space. If you have (0,0,0) and (100,0,0), they are similar according to cababunga's formula (as well as according to casablanca's, which considers too many colors similar). However, they LOOK pretty different.
The HSL and HSV color models are based on human interpretation of colors and you can then easily specify a distance for hue, saturation and brightness independently of each other (depending on what "similar" means in your case).
"Matches by at least 25%" is not a well-defined problem. Matches by at least 25% of what, and according to what metric? There's tons of possible choices. If you compare RGB colors, the obvious ones are distance metrics derived from vector norms. The three most important ones are:
1-norm or "Manhattan distance": distance = abs(r1-r2) + abs(g1-g2) + abs(b1-b2)
2-norm or Euclidean distance: distance = sqrt(pow(r1-r2, 2) + pow(g1-g2, 2) + pow(b1-b2, 2)) (you compute the square of this, which is fine - you can avoid the sqrt if you're just checking against a threshold, by squaring the threshold too)
Infinity-norm: distance = max(abs(r1-r2), abs(g1-g2), abs(b1-b2))
There are lots of other possibilities, of course. You can check whether the colors are within some distance of each other: if you want to allow up to 25% difference (over the range of possible RGB values) in one color channel, the thresholds to use for the three methods are 3/4 * 255, sqrt(3)/4 * 255 and 255/4, respectively. This is a very coarse metric, though.
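For concreteness, a sketch of the three metrics for 8-bit integer channels (the function names are mine):

#include <algorithm>
#include <cstdlib>

int manhattan(int r1, int g1, int b1, int r2, int g2, int b2)
{
    return std::abs(r1 - r2) + std::abs(g1 - g2) + std::abs(b1 - b2);
}

int euclidean_squared(int r1, int g1, int b1, int r2, int g2, int b2)
{
    int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return dr * dr + dg * dg + db * db; // compare against a squared threshold
}

int chebyshev(int r1, int g1, int b1, int r2, int g2, int b2)
{
    return std::max({std::abs(r1 - r2), std::abs(g1 - g2), std::abs(b1 - b2)});
}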
A better way to measure distances between colors is to convert your colors to a perceptually uniform color space like CIELAB and do the comparison there; there's a fairly good Wikipedia article on the subject, too. That might be overkill depending on your intended application, but those are the color spaces where measured distances have the best correlation with distances perceived by the human visual system.
Note that the maximum possible squared distance is between (255, 255, 255) and (0, 0, 0), which are at a squared distance of 3 * 255^2. Obviously these two colours match the least (0% match) and they are 100% apart. Then at least a 25% match means a squared distance less than 75% of the maximum, i.e. 3/4 * 3 * 255^2 = 9/4 * 255 * 255. So you could just check if:
distance <= 9 * 255 * 255 / 4
How can I rewrite the following pseudocode in C++?
real array sine_table[-1000..1000]
for x from -1000 to 1000
sine_table[x] := sine(pi * x / 1000)
I need to create a sine_table lookup table.
You can reduce the size of your table to 25% of the original by only storing values for the first quadrant, i.e. for x in [0,pi/2].
To do that your lookup routine just needs to map all values of x to the first quadrant using simple trig identities:
sin(x) = - sin(-x), to map from quadrant IV to I
sin(x) = sin(pi - x), to map from quadrant II to I
To map from quadrant III to I, apply both identities, i.e. sin(x) = - sin (pi + x)
Whether this strategy helps depends on how much memory usage matters in your case. But it seems wasteful to store four times as many values as you need just to avoid a comparison and subtraction or two during lookup.
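As a sketch, a lookup using those identities; the table layout (501 entries covering [0, pi/2] at steps of pi/1000) and the assumption that the input is already in [-pi, pi] are mine:

#include <cmath>

double sine_lookup(double x, const double table[501])
{
    const double pi = 3.14159265358979323846;
    double sign = 1.0;
    if (x < 0.0) { x = -x; sign = -1.0; } // sin(x) = -sin(-x): quadrants III/IV -> I/II
    if (x > pi / 2) x = pi - x;           // sin(x) = sin(pi - x): quadrant II -> I
    int i = (int)(x / pi * 1000.0 + 0.5); // nearest table entry, 0..500
    return sign * table[i];
}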
I second Jeremy's recommendation to measure whether building a table is better than just using std::sin(). Even with the original large table, you'll have to spend cycles during each table lookup to convert the argument to the closest increment of pi/1000, and you'll lose some accuracy in the process.
If you're really trying to trade accuracy for speed, you might try approximating the sin() function using just the first few terms of the Taylor series expansion.
sin(x) = x - x^3/3! + x^5/5! ..., where ^ represents raising to a power and ! represents the factorial.
Of course, for efficiency, you should precompute the factorials and make use of the lower powers of x to compute higher ones, e.g. use x^3 when computing x^5.
One final point: the truncated Taylor series above is more accurate for values closer to zero, so it's still worthwhile to map to the first or fourth quadrant before computing the approximate sine.
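A minimal sketch of that truncated series with the reciprocal factorials precomputed and lower powers reused (the cutoff at x^7 is my choice):

double taylor_sine(double x)
{
    const double inv_3f = 1.0 / 6.0;    // 1/3!
    const double inv_5f = 1.0 / 120.0;  // 1/5!
    const double inv_7f = 1.0 / 5040.0; // 1/7!
    double x2 = x * x;
    double x3 = x2 * x;
    double x5 = x3 * x2; // reuse x^3 to get x^5
    double x7 = x5 * x2; // reuse x^5 to get x^7
    return x - x3 * inv_3f + x5 * inv_5f - x7 * inv_7f;
}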
Addendum:
Yet one more potential improvement based on two observations:
1. You can compute any trig function if you can compute both the sine and cosine in the first octant [0,pi/4]
2. The Taylor series expansion centered at zero is more accurate near zero
So if you decide to use a truncated Taylor series, you can improve accuracy (or use fewer terms for similar accuracy) by mapping to either the sine or the cosine to get the angle into the range [0, pi/4], using identities like sin(x) = cos(pi/2 - x) and cos(x) = sin(pi/2 - x) in addition to the ones above (for example, if x > pi/4 once you've mapped to the first quadrant).
Or if you decide to use a table lookup for both the sine and cosine, you could get by with two smaller tables that only covered the range [0,pi/4] at the expense of another possible comparison and subtraction on lookup to map to the smaller range. Then you could either use less memory for the tables, or use the same memory but provide finer granularity and accuracy.
#include <cmath>

const double PI = 3.14159265358979323846;

long double sine_table[2001];
for (int index = 0; index < 2001; index++)
{
    sine_table[index] = std::sin(PI * (index - 1000) / 1000.0);
}
One more point: calling trigonometric functions is pricey. If you want to prepare the lookup table for sine with a constant step, you may save calculation time, at the expense of some potential precision loss.
Consider your minimal step to be "a". That is, you need sin(a), sin(2a), sin(3a), ...
Then you may do the following trick: first calculate sin(a) and cos(a), then for every consecutive step use the following trigonometric identities:
sin([n+1] * a) = sin(n*a) * cos(a) + cos(n*a) * sin(a)
cos([n+1] * a) = cos(n*a) * cos(a) - sin(n*a) * sin(a)
The drawback of this method is that the round-off error accumulates during the procedure.
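A sketch of that incremental scheme, costing one std::sin and one std::cos in total (the function and variable names are mine):

#include <cmath>

void build_sine_table(double a, double* table, int n)
{
    const double sin_a = std::sin(a), cos_a = std::cos(a);
    double s = 0.0, c = 1.0; // sin(0), cos(0)
    for (int i = 0; i < n; ++i)
    {
        table[i] = s;        // table[i] = sin(i * a)
        double s_next = s * cos_a + c * sin_a; // sin((i+1) * a)
        double c_next = c * cos_a - s * sin_a; // cos((i+1) * a)
        s = s_next;
        c = c_next;
    }
}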
#include <cmath>

const double PI = 3.14159265358979323846;
double sine_table[1000] = {0};

void initSineTable()
{
    for (int i = 1; i <= 1000; i++)
    {
        sine_table[i - 1] = std::sin(PI * i / 1000.0);
    }
}

double getSineValue(int multipleOfPi)
{
    if (multipleOfPi == 0) return 0.0;
    int sign = 1;
    if (multipleOfPi < 0) {
        sign = -1;
    }
    return sign * sine_table[sign * multipleOfPi - 1];
}
You can reduce the array length to 500 with the trick sin(pi/2 ± angle) = cos(angle).
So store sin and cos from 0 to pi/4.
I don't remember the numbers off the top of my head, but it increased the speed of my program.
You'll want the std::sin() function from <cmath>.
Another approximation, from a book or something (the snippets below appear to be SynthMaker stream code rather than C++):
streamin ramp;
streamout sine;
float x,rect,k,i,j;
x = ramp -0.5;
rect = x * (1 - x < 0 & 2);
k = (rect + 0.42493299) *(rect -0.5) * (rect - 0.92493302) ;
i = 0.436501 + (rect * (rect + 1.05802));
j = 1.21551 + (rect * (rect - 2.0580201));
sine = i*j*k*60.252201*x;
full discussion here:
http://synthmaker.co.uk/forum/viewtopic.php?f=4&t=6457&st=0&sk=t&sd=a
I presume you know that using a division is a lot slower than multiplying by the reciprocal: /5 is always slower than *0.2.
It's just an approximation.
also:
streamin ramp;
streamin x; // 1.5 = Saw 3.142 = Sin 4.5 = SawSin
streamout sine;
float saw,saw2;
saw = (ramp * 2 - 1) * x;
saw2 = saw * saw;
sine = -0.166667 + saw2 * (0.00833333 + saw2 * (-0.000198409 + saw2 * (2.7526e-006+saw2 * -2.39e-008)));
sine = saw * (1+ saw2 * sine);