Basically, the problem simulates the following:
There is an urn with 50 green balls and 50 red balls.
I am allowed to pick balls from the urn, without replacement, with the following rules: for every red ball picked, I lose a dollar; for every green ball picked, I gain a dollar.
I can stop picking whenever I want. Worst case scenario is I pick all 100, and net 0.
The question is to come up with an optimal stopping strategy, and create a program to compute the expected value of the strategy.
My strategy is to keep picking balls while the expected value of picking another ball is positive.
That is, the stopping rule is DYNAMIC.
Here's the recursive formula in LaTeX (originally an image, http://i.stack.imgur.com/fnzYk.jpg): with $p = \frac{g}{g+r}$ and $q = 1 - p$,

$$E(g, r) = \max\bigl(p \, E(g-1, r) + q \, E(g, r-1),\; r - g\bigr), \qquad E(0, r) = r, \qquad E(g, 0) = 0.$$
#include <stdio.h>

double ExpectedValue(double, double);
double max(double, double);

int main(void) {
    double g = 50;
    double r = 50;
    double EV = ExpectedValue(g, r);
    printf("%f\n\n", EV);
    return 0;
}

double ExpectedValue(double g, double r) {
    if (g == 0)
        return r;
    if (r == 0)
        return 0;
    double p = g / (g + r);   /* probability the next ball is green */
    double q = 1 - p;
    double E_gr = max(p * ExpectedValue(g - 1, r) + q * ExpectedValue(g, r - 1), r - g);
    return E_gr;
}

double max(double a, double b) {
    if (a > b)
        return a;
    else
        return b;
}
I let it run for 30 minutes, and it was still running.
For small values of g and r, a solution is computed very quickly. What am I doing wrong?
Any help is much appreciated!
Your algorithm is fine, but you are throwing away information. For a given pair (g, r) you calculate its ExpectedValue and then discard it. With recursive algorithms, remembering previously calculated values (memoization) can often speed things up a LOT.
The following code runs in the blink of an eye. For example, for g = r = 5000 it calculates 36.900218 in about one second. It remembers previous calculations of ExpectedValue(g, r) to prevent unnecessary recursion and recalculation.
#include <stdio.h>
#include <stdlib.h>

double ExpectedValue(int g, int r, double ***expectedvalues);
inline double max(double, double);

int main(int argc, char *argv[]) {
    int g = 50;
    int r = 50;
    int i, j;

    double **expectedvalues = malloc(sizeof(double *) * (g + 1));
    // initialise: -1.0 marks entries not calculated yet
    for (i = 0; i < (g + 1); i++) {
        expectedvalues[i] = malloc(sizeof(double) * (r + 1));
        for (j = 0; j < (r + 1); j++) {
            expectedvalues[i][j] = -1.0;
        }
    }

    double EV = ExpectedValue(g, r, &expectedvalues);
    printf("%f\n\n", EV);

    // free memory
    for (i = 0; i < (g + 1); i++) free(expectedvalues[i]);
    free(expectedvalues);
    return 0;
}

double ExpectedValue(int g, int r, double ***expectedvalues) {
    if (g == 0) return r;
    if (r == 0) return 0;
    // did we calculate this before? If yes, then return that value
    if ((*expectedvalues)[g][r] != -1.0) return (*expectedvalues)[g][r];
    double p = (double) g / (g + r);
    double E_gr = max(p * ExpectedValue(g - 1, r, expectedvalues)
                      + (1.0 - p) * ExpectedValue(g, r - 1, expectedvalues),
                      (double) (r - g));
    // store value for later lookup
    (*expectedvalues)[g][r] = E_gr;
    return E_gr;
}

double max(double a, double b) {
    if (a > b) return a;
    else return b;
}
Roughly speaking, adding one ball to the urn doubles the number of calls you will have to make to ExpectedValue (let's not quibble about boundary conditions). This is O(2^n) growth, and it can bring the most powerful computer on Earth to its knees.
The problem is that you are calculating the same values over and over again. Keep a table of ExpectedValue(g, r) and fill it in as you go, so that you never have to calculate the same value more than once. Then you'll be working in O(n^2), which is a heck of a lot faster; a bottom-up sketch of the table idea follows.
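Here is a minimal bottom-up sketch of that table idea (my illustration, not code from the question): fill E[g][r] in increasing order of g, so every entry is computed exactly once, in O(g * r) time and space.

#include <cstdio>
#include <vector>
#include <algorithm>

int main() {
    const int G = 50, R = 50;
    // E[g][r] = expected value with g green and r red balls left
    std::vector<std::vector<double>> E(G + 1, std::vector<double>(R + 1));
    for (int g = 0; g <= G; ++g) {
        for (int r = 0; r <= R; ++r) {
            if (g == 0)
                E[g][r] = r;                 // same base cases as the recursion
            else if (r == 0)
                E[g][r] = 0;
            else {
                double p = (double)g / (g + r);
                E[g][r] = std::max(p * E[g - 1][r] + (1 - p) * E[g][r - 1],
                                   (double)(r - g));
            }
        }
    }
    std::printf("%f\n", E[G][R]);            // same answer as the memoized version
}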
The other answer is, in my opinion, correct, but rather straightforward.
Here's what else you can do:
Eliminate the recursion!
Eliminate recalculations of ExpectedValue
Parallelize your code
Read this [lecture notes]. It will definitely be useful.
I could provide some code samples, but that wouldn't be fair.
Related
Hey, I am making a small C++ program to calculate the value of sin(x) to 7 decimal places, but when I calculate sin(PI/2) with it, it gives me 0.9999997 rather than 1.0000000. How can I solve this error?
I know a little bit about why I'm getting this value as output; the question is what my approach should be to solve this logical error.
Here is my code for reference:
#include <iostream>
#include <iomanip>
#define PI 3.1415926535897932384626433832795
using namespace std;

double sin(double x);
int factorial(int n);
double Pow(double a, int b);

int main()
{
    double x = PI / 2;
    cout << setprecision(7) << sin(x);
    return 0;
}

double sin(double x)
{
    int n = 1;      // counter for odd powers
    double Sum = 0; // to store every individual expression
    double t = 1;   // temp variable to store an individual expression
    for (n = 1; t > 10e-7; Sum += t, n = n + 2)
    {
        // here I have calculated two terms at a time, because the addition
        // of two consecutive terms is always less than 1
        t = (Pow(-1.00, n + 1) * Pow(x, (2 * n) - 1) / factorial((2 * n) - 1))
            +
            (Pow(-1.00, n + 2) * Pow(x, (2 * (n + 1)) - 1) / factorial((2 * (n + 1)) - 1));
    }
    return Sum;
}

int factorial(int n)
{
    if (n < 2)
        return 1;
    else
        return n * factorial(n - 1);
}

double Pow(double a, int b)
{
    if (b == 1)
        return a;
    else
        return a * Pow(a, b - 1);
}
sin(PI/2) ... it gives me 0.9999997 rather than 1.0000000
For values outside [-pi/4 ... +pi/4] the Taylor sin/cos series converges slowly and suffers from cancellation between terms and from overflow of int factorial(int n)**. Stay in the sweet range.
Consider using trig properties like sin(x + pi/2) = cos(x), sin(x + pi) = -sin(x), etc. to bring x into the [-pi/4 ... +pi/4] range.
The code below uses remquo (ref2) to find the remainder and part of the quotient.
// Bring x into the -pi/4 ... pi/4 range (i.e. +/- 45 degrees)
// and then call one's own sin/cos function.
// Assumes a constant `pi` and narrow-range functions sin_sweet_range() /
// cos_sweet_range(), which the OP still has to supply.
#include <math.h>

double my_wide_range_sin(double x) {
    if (x < 0.0) {
        return -my_wide_range_sin(-x);
    }
    int quo;
    double x90 = remquo(fabs(x), pi / 2, &quo);
    switch (quo % 4) {
        case 0:
            return sin_sweet_range(x90);
        case 1:
            return cos_sweet_range(x90);
        case 2:
            return sin_sweet_range(-x90);
        case 3:
            return -cos_sweet_range(x90);
    }
    return 0.0;
}
This implies the OP needs to code up a cos() function too.
** Using long long instead of int would marginally extend the useful range of int factorial(int n) (exact up to 20! instead of 12!), but that only buys a few extra terms. double could also be used.
A better approach would not use factorial() at all, but scale each successive term by 1.0/(n * (n+1)) or the like, as sketched below.
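For example, a minimal sketch of that term-scaling idea (my illustration; it assumes x has already been reduced to the [-pi/4 ... +pi/4] range):

#include <cmath>

// Taylor series for sin() on a narrow range with no factorial() at all:
// each term is derived from the previous one, so nothing overflows.
double sin_sweet_range(double x) {
    double term = x;   // first term: x^1 / 1!
    double sum = x;
    for (int n = 3; std::fabs(term) > 1e-16; n += 2) {
        term *= -x * x / ((n - 1) * n);   // x^n/n! from x^(n-2)/(n-2)!
        sum += term;
    }
    return sum;
}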
I see three bugs:
1. 10e-7 is 10 * 10^(-7), which is 10 times larger than you want. I think you wanted 1e-7.
2. Your test t > 10e-7 will become false, and exit the loop, if t is still large but negative. You want fabs(t) > 1e-7.
3. To get the desired accuracy, you need to get up to n = 7, which has you computing factorial(13), which overflows a 32-bit int. (If using gcc you can catch this with -fsanitize=undefined or -ftrapv.) You can gain some breathing room by using long long int, which is at least 64 bits, or int64_t.
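Putting the three fixes together, the loop might look like this (a sketch that keeps the original structure; the factorial type is the only other change):

#include <cmath>

// 64-bit factorial: exact up to 20!, which covers factorial(13)
long long factorial(int n) {
    return n < 2 ? 1LL : n * factorial(n - 1);
}

double sin_taylor(double x) {
    double Sum = 0, t = 1;
    for (int n = 1; std::fabs(t) > 1e-7; n += 2) {   // fabs() and 1e-7
        t = std::pow(-1.0, n + 1) * std::pow(x, 2 * n - 1) / factorial(2 * n - 1)
          + std::pow(-1.0, n + 2) * std::pow(x, 2 * (n + 1) - 1) / factorial(2 * (n + 1) - 1);
        Sum += t;
    }
    return Sum;
}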
I've been trying to write a function to approximate the value of an integral using the composite Simpson's rule.
template <typename func_type>
double simp_rule(double a, double b, int n, func_type f) {
    int i = 1;
    double area = 0;
    double n2 = n;
    double h = (b - a) / (n2 - 1), x = a;
    while (i <= n) {
        area = area + f(x) * pow(2, i % 2 + 1) * h / 3;
        x += h;
        i++;
    }
    area -= (f(a) * h / 3);
    area -= (f(b) * h / 3);
    return area;
}
What I do is multiply each value of the function by either 2 or 4 (and by h/3), using pow(2, i%2 + 1), and then subtract off the edges, as these should only have a weight of 1.
At first I thought it worked just fine; however, when I compared it to my trapezoidal-method function it was far more inaccurate, which shouldn't be the case.
This is a simpler version of code I previously wrote which had the same problem. I thought that if I cleaned it up a little the problem would go away, but alas. From another post I get the idea that there's something going on with the types and the operations I'm doing on them which results in loss of precision, but I just don't see it.
Edit:
For completeness, I was running it for e^x from zero to one:
#include <iostream>
#include <cmath>
using namespace std;

// function to be approximated
double f(double x) { double a = exp(x); return a; }

int main() {
    int n = 11; // this method works best for odd values of n
    double e = exp(1);
    double exact = e - 1; // value of the integral of e^x from 0 to 1
    cout << simp_rule(0, 1, n, f) - exact;
    return 0;
}
The composite Simpson's rule uses this approximation to estimate a definite integral:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right]$$

where

$$h = \frac{b - a}{n} \qquad \text{and} \qquad x_i = a + i\,h,$$

so that there are n + 1 equally spaced sample points x_i.
In the posted code, the parameter n passed to the function appears to be the number of points where the function is sampled (while in the previous formula n is the number of intervals; that's not a problem).
The (constant) distance between the points is calculated correctly:
double h = (b - a) / (n - 1);
The while loop used to sum the weighted contributions of all the points iterates from x = a up to a point with an abscissa close to b, but probably not exactly b, due to rounding errors. This implies that the last calculated value of f, f(x_n), may be slightly different from the expected f(b).
This is nothing, though, compared to the error caused by the fact that those end points are summed inside the loop with the starting weight of 4 and then subtracted after the loop with weight 1, while all the inner points have their weights switched. As a matter of fact, this is what the code calculates:

$$\frac{h}{3}\left[3f(x_0) + 2f(x_1) + 4f(x_2) + 2f(x_3) + \cdots + 4f(x_{n-2}) + 2f(x_{n-1}) + 3f(x_n)\right]$$
Also, using pow(2, i%2 + 1) to generate the sequence 4, 2, 4, 2, ..., 4 is a waste in terms of efficiency, and may add (depending on the implementation) other unnecessary rounding errors.
The following algorithm shows how to obtain the same (fixed) result, without a call to that library function.
template <typename func_type>
double simpson_rule(double a, double b,
                    int n, // Number of intervals
                    func_type f)
{
    double h = (b - a) / n;

    // Internal sample points, there should be n - 1 of them
    double sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(a + i * h);
    }
    double sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(a + i * h);
    }

    return (f(a) + f(b) + 2 * sum_evens + 4 * sum_odds) * h / 3;
}
Note that this function requires the number of intervals to be passed (e.g. use 10 instead of 11 to obtain the same results as the OP's function), not the number of points; a usage sketch follows.
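For instance, a quick check against the integral from the question (my sketch; it assumes the simpson_rule above and a C++11 lambda):

#include <cmath>
#include <cstdio>

int main() {
    double exact = std::exp(1.0) - 1.0;          // integral of e^x over [0, 1]
    double approx = simpson_rule(0.0, 1.0, 10,   // 10 intervals = 11 points
                                 [](double x) { return std::exp(x); });
    std::printf("error = %g\n", approx - exact);
}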
The above excellent and accepted solution could benefit from liberal use of std::fma() and from being templatized on the floating-point type:
https://en.cppreference.com/w/cpp/numeric/math/fma
#include <cmath>

template <typename fptype, typename func_type>
fptype simpson_rule(fptype a, fptype b,
                    int n, // Number of intervals
                    func_type f)
{
    fptype h = (b - a) / n;

    // Internal sample points, there should be n - 1 of them
    fptype sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(std::fma(i, h, a)); // i * h + a with a single rounding
    }
    fptype sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(std::fma(i, h, a));
    }

    return (std::fma(2, sum_evens, f(a)) +
            std::fma(4, sum_odds, f(b))) * h / 3;
}
I'm trying to solve this problem:
Given an a×b rectangle, your task is to cut it into squares. On each move you can select a rectangle and cut it into two rectangles in such a way that all side lengths remain integers. What is the minimum possible number of moves?
My logic is that the minimum number of cuts gives the minimum number of squares; I don't know if that's the correct approach.
I look at which side is smaller. I know I need bigSide / smallSide cuts to get squares with side smallSide; I'm then left with a smallSide by (bigSide % smallSide) rectangle, and I repeat until either side is 0 or both are equal.
#include <iostream>

int main() {
    int a, b;
    std::cin >> a >> b; // sides of the rectangle
    int res = 0;
    while (a != 0 && b != 0) {
        if (a > b) {
            if (a % b == 0)
                res += a / b - 1;
            else
                res += a / b;
            a = a % b;
        } else if (b > a) {
            if (b % a == 0)
                res += b / a - 1;
            else
                res += b / a;
            b = b % a;
        } else {
            break;
        }
    }
    std::cout << res;
    return 0;
}
When the input is 404 288, my code gives 18, but the right answer is actually 10.
What am I doing wrong?
It seems clear that the problem defines each move as cutting a rectangle into two rectangles along integer lines, and then asks for the minimum number of such cuts. There is a clear recursive structure: once you cut a rectangle into two parts, you can recurse on each part, cut it into squares with the minimum number of moves, and sum the answers. The problem is that naive recursion leads to exponential time complexity, which leads us directly to dynamic programming: you have to use memoization to solve it efficiently (worst-case time O(a*b*(a+b))). Here is what I'd suggest doing:
#include <iostream>
#include <vector>
using std::vector;

int min_cuts(int a, int b, vector<vector<int>> &mem) {
    int min = mem[a][b];
    // if already computed, just return the value
    if (min > 0)
        return min;
    // if one side is divisible by the other,
    // store min-cuts in 'min'
    if (a % b == 0)
        min = a / b - 1;
    else if (b % a == 0)
        min = b / a - 1;
    // if there's no obvious solution, recurse
    else {
        // recurse on height (note: <= a/2, otherwise the middle cut is missed)
        for (int i = 1; i <= a / 2; i++) {
            int m = min_cuts(i, b, mem);
            int n = min_cuts(a - i, b, mem);
            if (min < 0 or m + n + 1 < min)
                min = m + n + 1;
        }
        // recurse on width
        for (int j = 1; j <= b / 2; j++) {
            int m = min_cuts(a, j, mem);
            int n = min_cuts(a, b - j, mem);
            if (min < 0 or m + n + 1 < min)
                min = m + n + 1;
        }
    }
    mem[a][b] = min;
    return min;
}

int main() {
    int a, b;
    std::cin >> a >> b; // sides of the rectangle
    // -1 means the subproblem is not solved yet
    vector<vector<int>> mem(a + 1, vector<int>(b + 1, -1));
    int res = min_cuts(a, b, mem);
    std::cout << res << std::endl;
    return 0;
}
The reason the for loops go up until a/2 and b/2 is that cutting a paper is symmetric: cutting along the vertical line i is the same as cutting along the line a-i if you flip the paper vertically. This is a little optimization hack that roughly halves the work in each loop.
Another little hack: knowing that the result is the same if you transpose the paper, meaning min_cuts(a,b) == min_cuts(b,a), you could potentially cut the computation in half again. But any major further improvement, say a greedy algorithm, would take more thinking (if one exists at all).
The current answer is a good start, especially the suggestion to use memoization or dynamic programming, and is potentially efficient enough.
However, the answers so far pair memoization with a sub-par data structure. A vector-of-vectors carries considerable space and performance overhead; a (strict) lower-triangular matrix stored in a flat array is much more efficient.
Using the maximum value as a sentinel (easier with unsigned) would also reduce complexity.
Finally, let's move to dynamic programming instead of memoization to simplify and get even more efficient:
#include <algorithm>
#include <memory>
#include <utility>

constexpr unsigned min_cuts(unsigned a, unsigned b) {
    if (a < b)
        std::swap(a, b);
    if (a == b || !b)
        return 0;
    const auto triangle = [](std::size_t n) { return n * (n - 1) / 2; };
    const auto p = std::make_unique_for_overwrite<unsigned[]>(triangle(a));
    /* const! */ unsigned zero = 0;
    const auto f = [&](auto a, auto b) -> auto& {
        if (a < b)
            std::swap(a, b);
        return a == b ? zero : p[triangle(a - 1) + b - 1];
    };
    for (auto i = 1u; i <= a; ++i) {
        for (auto j = 1u; j < i; ++j) {
            auto r = -1u;
            for (auto k = i / 2; k; --k)
                r = std::min(r, f(k, j) + f(i - k, j));
            for (auto k = j / 2; k; --k)
                r = std::min(r, f(k, i) + f(j - k, i));
            f(i, j) = ++r;
        }
    }
    return f(a, b);
}
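A quick sanity check against the failing case from the question (my addition; it assumes the min_cuts above):

#include <cstdio>

int main() {
    std::printf("%u\n", min_cuts(404, 288)); // expected output: 10
}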
I've written some code to find the reciprocal of a number without using division, approximating it with the bisection method instead. I have a few doubts. First, what should I use for the lower and upper limits of x? Second, is there a faster way to converge from the limits to the required value (the reciprocal of the input) than just averaging them? And the main problem: when I run it, the program just stops after receiving the input number from the user. Any hints on solving that?
Here is the code:
#include <stdio.h>
#include <cstdlib>
#define epsilon 0.0001

float f(float x, float &m)
{
    if (m == 0)
    {
        printf("Reciprocal not defined");
        return 0.0;
    }
    else
    {
        return x + 1 / m;
    }
}

int main(void)
{
    float m, g1, x, g2, c;
    printf("Enter a number:\n");
    scanf("%f", f(x, m));
    g1 = epsilon;
    g2 = m;
    while (abs(f(g1, m) - f(g2, m)) > epsilon)
    {
        c = (g1 + g2) / 2;
        if (f(g1, m) * f(c, m) < 0)
        {
            g2 = c;
        }
        else if (f(c, m) * f(g2, m) < 0)
        {
            g1 = c;
        }
    }
    printf("The reciprocal is approximately %f", c);
}
The code is expected to work as follows:
Enter a number: 5
The reciprocal is approximately 0.2
Enter a number: 0
Reciprocal is not defined
Instead of this, it shows:
Enter a number:
Reciprocal is not defined
(without accepting any input)
Your overall code is far too convoluted, and your usage of scanf doesn't make sense.
You probably want something like this:
#include <stdio.h>
#include <cstdlib>
#include <cmath>
#define epsilon 0.0001f

int main(void)
{
    float m, g1, g2, c, diff;
    printf("Enter a number:\n");
    scanf("%f", &m);
    g1 = epsilon;
    g2 = m;
    do
    {
        c = (g1 + g2) / 2;
        diff = c * m - 1.0f;
        if (diff > 0)
            g2 = c;
        else
            g1 = c;
    } while (fabs(diff) > epsilon);
    printf("The reciprocal is approximately %f", c);
}
For your question "Also is there a faster method to converge from the limits to the required value (reciprocal of the input) rather than just averaging it?", I have no idea, but searching by bisection is generally rather fast.
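That said, one well-known division-free alternative is the Newton-Raphson iteration x_{k+1} = x_k * (2 - m * x_k), which roughly doubles the number of correct digits per step, while bisection gains only one bit per step. A minimal sketch (my addition, not part of the original answer; it assumes m > 0 and a starting guess inside (0, 2/m)):

#include <stdio.h>
#include <cmath>

float reciprocal_newton(float m)
{
    float x = 0.1f; // initial guess; must lie in (0, 2/m) to converge
    for (int i = 0; i < 50 && fabs(x * m - 1.0f) > 0.0001f; i++)
        x = x * (2.0f - m * x); // no division anywhere
    return x;
}

int main(void)
{
    printf("%f\n", reciprocal_newton(5.0f)); // prints approximately 0.2
}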
Test program that tests for a range of values:
#include <stdio.h>
#include <cstdlib>
#include <cmath>
#define epsilon 0.0001f

int main(void)
{
    float g1, g2, c, diff;
    for (float m = 1; m < 20; m += 1)
    {
        g1 = epsilon;
        g2 = m;
        do
        {
            c = (g1 + g2) / 2;
            diff = c * m - 1.0f;
            if (diff > 0)
                g2 = c;
            else
                g1 = c;
        } while (fabs(diff) > epsilon);
        printf("The reciprocal of %f is approximately %f vs. %f, diff %f\n", m, c, 1 / m, c - 1 / m);
    }
}
First of all, sorry for my English.
I am solving this problem:
Tom wants to shoot at Jerry from a cannon, and he would like to have as many pieces of ammunition as possible; they must all be the same size and as big as possible. He only has n cannonballs at his disposal, but he can cut them into smaller pieces, and he would like to end up with k + 1 pieces to shoot at Jerry. He knows the radius of every cannonball. What is the biggest possible volume of one piece? The output is rounded with printf("%.3f\n", answer). The first two input numbers are n and k; the next n numbers are the radii of the cannonballs.
Possible input:
3 50
1 2 3
Output: 2.900
Here is my solution. The volume of every piece can only be smaller than or equal to the volume of the smallest cannonball, because parts from different cannonballs cannot be joined together. So I binary-search from 0.0 up to that minimal volume, using as the predicate the function numberOfPieces, which counts how many pieces of a given volume (the midpoint of the binary search) can be carved out of all the cannonballs. I compare that count to k + 1; if it is bigger or equal I use the midpoint as the new low, otherwise as the new high. My solution works for this test input.
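In symbols: a candidate piece volume $V$ is feasible exactly when

$$\sum_{i=1}^{n} \left\lfloor \frac{\tfrac{4}{3}\pi r_i^3}{V} \right\rfloor \ge k + 1,$$

and the left-hand side only grows as $V$ shrinks; that monotonicity is what makes the binary search on $V$ valid.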
The problem is that I get WA (wrong answer) and I cannot see the test inputs. Can you please look at my code and check whether I did something wrong? It may be a numerical-accuracy issue, but I use a small EPS, so that should be fine. Thanks in advance for any ideas.
Here is my code:
#include <vector>
#include <iostream>
#include <algorithm>
#include <stdio.h>
#define PI 3.14159265358979323846
#define VC ((4.0 / 3.0) * PI) // constant for volume calculation
#define EPS 1E-12
using namespace std;

// return the number of pieces depending on the volume
int numberOfPieces(int v[], int n, double volume)
{
    int ans = 0;
    for (int i = 0; i < n; i++)
        ans += (int)(v[i] * VC / volume);
    return ans;
}

double binarySearch(double a, double b, int k, int n, int v[])
{
    double low = a, high = b;
    while (abs(low - high) > EPS)
    {
        double mid = low + (high - low) / 2.0;
        if (numberOfPieces(v, n, mid) >= k)
            low = mid;
        else
            high = mid;
    }
    return (low + high) / 2.0;
}

int main()
{
    int n, k, x;  // n - number of cannonballs, k - number of wanted pieces, x - variable for input
    int v[10001]; // radiuses ^ 3 of the cannonballs
    scanf("%d%d", &n, &k);
    int minVolume = 9999999;
    for (int i = 0; i < n; i++) {
        scanf("%d", &x);
        minVolume = min(minVolume, x);
        v[i] = x * x * x;
    }
    printf("%.3f\n", binarySearch(0.0, minVolume * minVolume * minVolume * VC, k + 1, n, v));
    return 0;
}
The problem was that I was using the minimal volume as the high end of the binary search, when I should have used the maximal volume. The second problem was that I was not passing the maximal radius ^ 3 to the binary search function. Thanks for the help.
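In code, the fix amounts to tracking the largest ball instead of the smallest and passing its volume as the upper bound; the input loop in main above becomes (my sketch of the fix just described):

int maxR3 = 0; // maximal radius ^ 3
for (int i = 0; i < n; i++) {
    scanf("%d", &x);
    v[i] = x * x * x;
    maxR3 = max(maxR3, v[i]); // a piece can be as big as the biggest ball
}
printf("%.3f\n", binarySearch(0.0, maxR3 * VC, k + 1, n, v));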