Find size of rectangles to fill area - c++

I'm having trouble and need your help:
I'm working on a program that shows n videos in tiling mode (a video wall with c columns and r rows). The n is arbitrary, all videos have the same size (W x H), the W / H ratio is given, and the size of the wall is fixed. How can I get the best set of c, r, W and H when n changes? The best set is defined as: W and H are maximal and the videos fill the maximum area of the wall.
I have taken a look at the packing problem but still can't solve the problem above. Can someone help me with this? Thank you very much!

As far as I understand, you want to place n rectangles with a fixed ratio C = W/H on a wall with given Width and Height.
Let the rectangle height be h (unknown yet), so its width is w = C * h.
Every row of the grid contains
nr = Floor(Width / (C * h)) // rounding down
and every column contains
nc = Floor(Height / h)
Write the inequality
n <= nc * nr
n <= Floor(Width / (C * h)) * Floor(Height / h)
and solve it for the unknown h (find the maximal possible h value).
For real-valued parameters, h can be found by computing an initial approximation (from the area bound, ignoring the rounding):
h0 = Ceil(Sqrt(Width * Height / (n * C)))
and decrementing h until the inequality becomes true.


Calculating the area of overlap of two functions

I have two functions. I am giving here the basic structure only, as they have quite a few parameters each to adjust their exact shape.
For example, y = sin(.1*pi*x)^2 and y = e^-(x-5)^2.
The question is how much area of the sine is captured by the e function:
I tried to be clever and recursively find the points of intersection, but that turned out to be a lot more work than was necessary.
As n.m. pointed out, you want the integral from a to b of min(f, g). Since you're integrating by approximation, you're already stepping through the interval, meaning you can check at each step which function is greater and compute the area of the current trapezoid.
Simple implementation in C:
#include <math.h> /* for fmin */

#define SLICES 10000

double trapezoid(double a, double b, double height);

/*
 * Computes the integral of min(f, g) on [a, b].
 *
 * Intended use is for when f and g are both non-negative, real-valued
 * functions of one variable.
 *
 * That is, f: R -> R and g: R -> R.
 *
 * Assumes b ≥ a.
 *
 * @param a left boundary of interval to integrate over
 * @param b right boundary of interval to integrate over
 * @param f function accepting one double argument which returns a double
 * @param g function accepting one double argument which returns a double
 * @return integral of min(f, g) on [a, b]
 */
double minIntegrate(double a, double b, double (*f)(double), double (*g)(double)) {
    double area = 0.0;
    // the height of each trapezoid
    double deltaX = (b - a) / SLICES;
    /*
     * We are integrating by partitioning the interval into SLICES pieces, then
     * adding the areas of the trapezoids formed to our running total.
     * To save a computation, we can cache the last side encountered.
     * That is, let lastSide be the minimum of f(x) and g(x), where x was the
     * previous "fence post" (side of the trapezoid) encountered.
     * Initialize lastSide with the minimum of f and g at the left boundary.
     */
    double lastSide = fmin(f(a), g(a));
    // The loop starts at 1 since we already have the last (trapezoid) side
    // for the 0th fencepost.
    for (int i = 1; i <= SLICES; i++) {
        double fencePost = a + (i * deltaX);
        double currentSide = fmin(f(fencePost), g(fencePost));
        area += trapezoid(lastSide, currentSide, deltaX);
        lastSide = currentSide;
    }
    return area;
}

/*
 * Computes the area of a trapezoid with bases `a` and `b` and height `height`.
 */
double trapezoid(double a, double b, double height) {
    return height * (a + b) / 2.0;
}
If you're looking for something really, really simple, why don't you do Monte Carlo Integration?
Use the fact that the functions are easy to calculate to sample a large number of points. For each point, check whether it's below 0, 1, or 2 of the curves.
You might have some fiddling to find the boundaries for the sampling, but this method will work for a variety of curves.
https://en.wikipedia.org/wiki/Monte_Carlo_integration
I'm guessing your exponential is actually of the form e^-(x-5)^2 so that the exponential decays to zero at plus/minus infinity.
Given that, your integral would be most quickly and accurately calculated by something called Gaussian quadrature. There are a few types of common integrals which have very simple solutions using different polynomials (Hermite, Legendre, etc.). Yours specifically looks like it could be solved using Gauss-Hermite quadrature.
Hope this helps.

Converting Cartesian image to polar, appearance differences

I'm trying to do a polar transform on the first image below and end up with the second. However my result is the third image. I have a feeling it has to do with what location I choose as my "origin" but am unsure.
radius = sqrt(width**2 + height**2)
nheight = int(ceil(radius)/2)
nwidth = int(ceil(radius/2))
for y in range(0, height):
    for x in range(0, width):
        t = int(atan(y/x))
        r = int(sqrt(x**2+y**2)/2)
        color = getColor(getPixel(pic, x, y))
        setColor(getPixel(radial, r, t), color)
There are a few differences / errors:
They use the centre of the image as the origin
They scale the axes appropriately. In your example, you're plotting your angle (between 0 and, in your case, pi) directly instead of utilising the full height of the image.
You're using the wrong atan function (atan2 works a lot better in this situation :))
Not amazingly important, but you're rounding unnecessarily quite a lot, which throws off accuracy a little and can slow things down.
This is the code combining my suggested improvements. It's not massively efficient, but it should hopefully work :)
maxradius = sqrt(width**2 + height**2)/2
rscale = width / maxradius
tscale = height / (2*math.pi)
for y in range(0, height):
    dy = y - height/2
    for x in range(0, width):
        dx = x - width/2
        t = atan2(dy, dx) % (2*math.pi)
        r = sqrt(dx**2 + dy**2)
        color = getColor(getPixel(pic, x, y))
        setColor(getPixel(radial, int(r*rscale), int(t*tscale)), color)
In particular, it fixes the above problems in the following ways:
We use dx = x - width / 2 as a measure of distance from the centre, and similarly with dy. We then use these in place of x, y throughout the computation.
We will have our r satisfying 0 <= r <= sqrt( (width/2)^2 + (height/2)^2 ), and our t eventually satisfying 0 <= t < 2 pi, so I create the appropriate scale factors to put r and t along the x and y axes respectively.
Normal atan can only distinguish based on gradients, and is computationally unstable near vertical lines... Instead, atan2 (see http://en.wikipedia.org/wiki/Atan2) solves both problems, and accepts (y, x) pairs to give an angle. atan2 returns an angle -pi < t <= pi, so we take the result modulo 2 * math.pi to get it into the range 0 <= t < 2 pi, ready for scaling.
I've only rounded at the end, when the new pixels get set.
Any questions, just ask!

Compare aspect ratios given width and height

Given a list of screen sizes, how do I detect which ones are in 4:3 or 16:9 aspect ratio?
I can use width / height to get it, but for 16:9 sizes I sometimes get 1.778 and sometimes 1.777778 due to rounding errors.
Check if 4 * height == 3 * width or 16 * height == 9 * width.
Remember the definition of a rational number: It is an equivalence class of pairs of integers (m, n) sub­ject to the equivalence (m, n) ≡ (m', n') if and only if n' m = n m'.
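Both suggestions above boil down to comparing the ratios exactly with integer cross-multiplication, which involves no floating point at all. A tiny sketch (the helper name is mine):

```c
/*
 * Returns 1 if width:height equals the aspect ratio num:den, else 0.
 * Exact for integer inputs -- no rounding error possible.
 */
int hasAspect(int width, int height, int num, int den) {
    return width * den == height * num;
}
```

For example, hasAspect(1920, 1080, 16, 9) holds but hasAspect(1920, 1080, 4, 3) does not; any pair of screens can be compared the same way with screen1.width * screen2.height == screen2.width * screen1.height.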
You can force the rounding to be always the same, and then you can compare the values:
int ratio = (int)(((float)width / height) * 100);
You will always get 177 for 16:9 and 133 for 4:3
good luck
compare with some epsilon proximity.
should be something like:
double epsilon = 0.01;
if (fabs((double)screen1.height / screen1.width - (double)screen2.height / screen2.width) < epsilon)
{
    // equal ratios
}
you must use an epsilon value for the comparison.
You can have a look at: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm

Probability density function from a paper, implemented using C++, not working as intended

So I'm implementing a heuristic algorithm, and I've come across this function.
I have an array of 1 to n (0 to n-1 in C, whatever). I want to choose a number of elements I'll copy to another array. Given a parameter y (0 < y <= 1), I want to have a distribution of numbers whose average is y * n. That means that whenever I call this function, it gives me a number between 0 and n, and the average of these numbers is y*n.
According to the author, "l" is a random number: 0 < l < n. My test code currently generates 0 <= l <= n. I had the right code, but I've been messing with this for hours now, and I'm too lazy to code it back.
So I coded the first part of the function, for y <= 0.5.
I set y to 0.2, and n to 100. That means it had to return a number between 0 and 99, with average 20.
But the results aren't between 0 and n; they're small floats, and the bigger n is, the smaller these floats are.
This is the C test code. "x" is the "l" parameter.
int n = 100;
float y = 0.2;
float n_copy;
for (int i = 0; i < 20; i++)
{
    float x = (float) (rand()/(float)RAND_MAX); // 0 <= x <= 1
    x = x * n;                                  // 0 <= x <= n
    float p1 = (1 - y) / (n*y);
    float p2 = (1 - ( x / n ));
    float exp = (1 - (2*y)) / y;
    p2 = pow(p2, exp);
    n_copy = p1 * p2;
    printf("%.5f\n", n_copy);
}
And here are some results (5 decimals truncated):
0.03354
0.00484
0.00003
0.00029
0.00020
0.00028
0.00263
0.01619
0.00032
0.00000
0.03598
0.03975
0.00704
0.00176
0.00001
0.01333
0.03396
0.02795
0.00005
0.00860
The article is:
http://www.scribd.com/doc/3097936/cAS-The-Cunning-Ant-System
pages 6 and 7.
or search "cAS: cunning ant system" on google.
So what am I doing wrong? I don't believe the author is wrong, because there are more than 5 papers describing this same function.
All my internets to whoever helps me. This is important to my work.
Thanks :)
You may misunderstand what is expected of you.
Given a (properly normalized) PDF, and wanting to throw a random distribution consistent with it, you form the Cumulative Probability Distribution (CDF) by integrating the PDF, then invert the CDF, and use a uniform random predicate as the argument of the inverted function.
A little more detail.
f_s(l) is the PDF, and has been normalized on [0,n).
Now you integrate it to form the CDF
g_s(l') = \int_0^{l'} dl f_s(l)
Note that this is a definite integral to an unspecified endpoint which I have called l'. The CDF is accordingly a function of l'. Assuming we have the normalization right, g_s(n) = 1.0. If this is not so, we apply a simple coefficient to fix it.
Next invert the CDF and call the result G^{-1}(z). For this you'll probably want to choose a particular value of gamma.
Then throw a uniform random number z on [0,1), and use it as the argument to G^{-1}. The result will lie in [0,n), and will be distributed according to f_s.
Like Justin said, you can use a computer algebra system for the math.
dmckee is actually correct, but I thought that I would elaborate more and try to explain away some of the confusion here. I could definitely fail. f_s(l), the function you have in your pretty formula above, is the probability distribution function. It tells you, for a given input l between 0 and n, the probability that l is the segment length. The sum (integral) for all values between 0 and n should be equal to 1.
The graph at the top of page 7 confuses this point. It plots l vs. f_s(l), but you have to watch out for the stray factors printed beside the axes. You'll notice that the values along the bottom go from 0 to 1, but there is a factor of ×n beside them, which means the l values actually go from 0 to n. Similarly, the y-axis has a ×1/n, which means those values don't actually go up to about 3; they go up to 3/n.
So what do you do now? Well, you need to solve for the cumulative distribution function by integrating the probability distribution function over l, which actually turns out to be not too bad (I did it with the Wolfram Mathematica Online Integrator by using x for l and using only the equation for y <= .5). That however was using an indefinite integral, and you are really integrating along x from 0 to l. If we set the resulting equation equal to some variable (z for instance), the goal now is to solve for l as a function of z. z here is a random number between 0 and 1. You can try using a symbolic solver for this part if you would like (I would). Then you have not only achieved your goal of being able to pick random ls from this distribution, you have also achieved nirvana.
A little more work done
I'll help a little bit more. I tried doing what I said above for y <= .5, but the symbolic algebra system I was using wasn't able to do the inversion (some other system might be able to). However, I then decided to try using the equation for .5 < y <= 1. This turns out to be much easier. If I change l to x in f_s(l) I get
y / n / (1 - y) * (x / n)^((2 * y - 1) / (1 - y))
Integrating this over x from 0 to l I got (using Mathematica's Online Integrator):
(l / n)^(y / (1 - y))
It doesn't get much nicer than that with this sort of thing. If I set this equal to z and solve for l I get:
l = n * z^(1 / y - 1) for .5 < y <= 1
One quick check is for y = 1. In this case, we get l = n no matter what z is. So far so good. Now, you just generate z (a random number between 0 and 1) and you get an l that is distributed as you desired for .5 < y <= 1. But wait, looking at the graph on page 7 you notice that the probability distribution function is symmetric. That means that we can use the above result to find the value for 0 < y <= .5. We just change l -> n-l and y -> 1-y and get
n - l = n * z^(1 / (1 - y) - 1)
l = n * (1 - z^(1 / (1 - y) - 1)) for 0 < y <= .5
Anyway, that should solve your problem unless I made some error somewhere. Good luck.
Given that for any values l, y, n as described, the terms you call p1 and p2 are both in [0,1) and exp is non-negative (for y <= .5), making pow(p2, exp) also in [0,1]. Thus I don't see how you'd ever get an output in the range [0,n) — you're evaluating the density f_s(l), not drawing a sample from it.

When drawing an ellipse or circle with OpenGL, how many vertices should we use?

Should we just blindly use 360 vertices? 720 seems to work better, but where do we stop?
It depends on how much error you can tolerate (i.e. the visual quality) and the size of the circle (ellipse). A bigger circle will need more points to achieve the same quality. You can work out exactly how many points you need for a given error with a bit of maths.
If you consider the circle represented by a series of line segments, the end points of the line segments lie exactly on the circle (ignoring the pixel grid). The biggest deviation between the real circle and our line segment representation occurs in the center of each line segment, and this error is the same for all of the line segments.
Looking at the first segment from the x-axis going anti-clockwise, its two endpoints are:
A = (r, 0)
B = (r . cos(th), r . sin(th))
where r is the radius of the circle and th is the angle covered by each line segment (e.g. if we have 720 points then each line segment covers 0.5 degree so th would be 0.5 degrees).
The midpoint of this line segment is at
M = A + (B - A) / 2
= (r, 0) + (r (cos(th) - 1) / 2, r . sin(th) / 2)
= (r / 2) . (1 + cos(th), sin(th))
and the distance from the origin to the point is
l = (r / 2) . sqrt((1 + cos(th))^2 + (sin(th))^2)
= (r / 2) . sqrt(2) . sqrt(1 + cos(th))
If our line segment representation were perfect then this length should be equal to the radius (the midpoint of the line segment should fall on the circle). Normally there'll be some error and this point will be slightly less than the radius. The error is
e = r - l
= r . (1 - sqrt(2) . sqrt(1 + cos(th)) / 2)
Rearranging so we have th in terms of e and r
2 . e / r = 2 - sqrt(2) . sqrt(1 + cos(th))
sqrt(2) . sqrt(1 + cos(th)) = 2 . (1 - e / r)
1 + cos(th) = 2 . (1 - e / r)^2
th = arccos(2 . (1 - e / r)^2 - 1)
This lets us calculate the maximum angle we can have between each point to achieve a certain error. For example, say we're drawing a circle with a radius of 100 pixels and we want a maximum error of 0.5 pixels. We can calculate
th = arccos(2 . (1 - 0.5 / 100)^2 - 1)
= 11.46 degrees
This corresponds to ceil(360 / 11.46) = 32 points. So if we draw a circle of radius 100 using 32 points our worst pixel will be off by less than a half which should mean every pixel we draw will be in the correct place (ignoring aliasing).
This kind of analysis can be performed for ellipses too, but in the spirit of all good maths that is left as an exercise for the reader ;) (the only difference is determining where the maximum error occurs).
As many as the resolution you are using requires, or as many as an accurate visual representation requires. It's difficult to say, and it mostly depends on what you want to achieve. In a CAD program, having a circle look like an octagon could be annoying. On the other hand, if you are programming a game on the iPhone and the wheel of a car looks like an octagon, it's not a big deal.
A possible strategy is to evaluate the length of each segment with respect to the resolution of the current view and, if it is longer than, say, 3 pixels, increase the number of vertices you use, but only for the visible segments. This way you increase the resolution when zooming in, but you don't have to describe vertices you are not going to draw.