Method of the golden ratio - C++

I need to find the maximum of a function on a specific interval. The code below shows the "method of the golden ratio" (golden-section search), which should find the maximum of the function. The problem is that when I use the exp() function on [0., 10.], the result is about 10, but it should be about 22,000 (e^10 ≈ 22026). Do you know where the problem is? Do you have other methods for finding the maximum of a function?
#include <iostream>
#include <cmath>
using namespace std;

const double EPSILON = 1e-9; // stopping tolerance (value assumed; not shown in the post)

double goldenRatioMethodMax(double (*p_pFunction)(double), double a, double b)
{
    double k = (sqrt(5.) - 1.) / 2.;
    double xL = b - k * (b - a);
    double xR = a + k * (b - a);
    while (b - a > EPSILON)
    {
        if (p_pFunction(xL) > p_pFunction(xR))
        {
            b = xR;
            xR = xL;
            xL = b - k * (b - a);
        }
        else
        {
            a = xL;
            xL = xR;
            xR = a + k * (b - a);
        }
    }
    return (a + b) / 2.;
}

int main(int argc, char **argv)
{
    cout << goldenRatioMethodMax(exp, 0., 10.); // prints about 10, but it should be about 22k (e^10)
    return 0;
}

The problem is that you return the point at which the maximum is found, not the maximum itself. Just change the last line of the function to return p_pFunction((a + b) / 2.); and it will produce the expected output.
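For completeness, a minimal corrected sketch (with EPSILON pinned to 1e-9 here, since the constant is not shown in the post):

#include <cmath>
#include <iostream>

double goldenRatioMethodMax(double (*f)(double), double a, double b)
{
    const double EPSILON = 1e-9; // illustrative tolerance
    const double k = (std::sqrt(5.) - 1.) / 2.;
    double xL = b - k * (b - a);
    double xR = a + k * (b - a);
    while (b - a > EPSILON)
    {
        if (f(xL) > f(xR)) { b = xR; xR = xL; xL = b - k * (b - a); }
        else               { a = xL; xL = xR; xR = a + k * (b - a); }
    }
    return f((a + b) / 2.); // the maximum value, not its location
}

int main()
{
    std::cout << goldenRatioMethodMax(exp, 0., 10.) << std::endl; // ~22026, i.e. e^10
}

Golden-section search assumes the function is unimodal on [a, b]; for a monotonic function like exp it simply converges to the right endpoint.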

Related

Rational approximation of double using int numerator and denominator in C++

A real-world third-party API takes a parameter of type fraction, which is a struct of an int numerator and an int denominator. The value that I need to pass is known to me as a decimal string that is converted to a double.
The range of possible values is, let's say, 10K to 300M, but if there is a fraction part after the decimal point, it's significant.
I have here code for two approximation approaches: one uses the extended Euclidean algorithm, while the other is brute force. Both methods find a rational approximation using int types for a given double.
The brute-force method is of course the more accurate of the two, and is actually faster when the converted numbers are large. My question is: can I say anything clever about the quality of the approximation produced by the Euclidean algorithm?
More formally, can I put a bound on the Euclidean algorithm's approximation vs. the brute-force approximation (which I believe to be optimal)?
An example of a bound:
If the error of the optimal approximation is r, then the Euclidean algorithm would produce an error that is less than 2*r.
(I'm not claiming this is the bound, and I certainly can't prove it; it's just an example of what a good bound might look like.)
Here's the code and a test program:
#include <iostream>
#include <iomanip>
#include <cmath>
#include <cstdint>  // uint64_t
#include <cstdlib>  // lldiv
#include <limits>
#include <chrono>
#include <random>

// extended Euclidean algorithm
// finds the coefficients that produce the gcd
// in u, we store m,n: the coefficients that produce m*a - n*b == gcd.
// in v, we store m,n: the coefficients that produce m*a - n*b == 0.
// breaks early if the coefficients become larger than INT_MAX
int gcd_e(uint64_t a, int b, int u[2], int v[2])
{
    auto w = lldiv(a, b);
    // u[0] * a' - u[1] * b' == a
    // v[0] * a' - v[1] * b' == b
    // a - w.quot * b == w.rem
    // (u[0] * a' - u[1] * b') - w.quot * (v[0] * a' - v[1] * b') == w.rem
    // (u[0] - w.quot * v[0]) * a' - u[1] * b' + w.quot * v[1] * b' == w.rem
    // (u[0] - w.quot * v[0]) * a' + (w.quot * v[1] - u[1]) * b' == w.rem
    // (u[0] - w.quot * v[0]) * a' - (u[1] - w.quot * v[1]) * b' == w.rem
    auto m = u[0] - w.quot * v[0];
    auto n = u[1] - w.quot * v[1];
    u[0] = v[0];
    u[1] = v[1];
    constexpr auto L = std::numeric_limits<int>::max();
    if (m > L || n > L)
        throw 0; // break early
    if (m < -L || n < -L)
        throw 0; // break early
    v[0] = int(m);
    v[1] = int(n);
    if (w.rem == 0)
        return b;
    return gcd_e(b, int(w.rem), u, v);
}
inline double helper_pre(double d, bool* negative, bool* inverse)
{
    bool v = (d < 0);
    *negative = v;
    if (v)
        d = -d;
    v = (d < 1);
    *inverse = v;
    if (v)
        d = 1 / d;
    return d;
}

inline void helper_post(int* m, int* n, bool negative, bool inverse)
{
    if (inverse)
        std::swap(*n, *m);
    if (negative)
        *n = -(*n);
}
// gets a rational approximation for double d
// numerator is stored in n
// denominator is stored in m
void approx(double d, int* n, int* m)
{
    int u[] = { 1, 0 };  // 1*a - 0*b == a
    int v[] = { 0, -1 }; // 0*a - (-1)*b == b
    bool negative, inverse;
    d = helper_pre(d, &negative, &inverse);
    constexpr int q = 1 << 30;
    auto round_d = std::round(d);
    if (d == round_d)
    {
        // nothing to do, it's an integer.
        v[1] = int(d);
        v[0] = 1;
    }
    else try
    {
        uint64_t k = uint64_t(std::round(d * q));
        gcd_e(k, q, u, v);
    }
    catch (...)
    {
        // OK if we got here.
        // int limits
    }
    // get the approximate numerator and denominator
    auto nn = v[1];
    auto mm = v[0];
    // make them positive
    if (mm < 0)
    {
        mm = -mm;
        nn = -nn;
    }
    helper_post(&mm, &nn, negative, inverse);
    *m = mm;
    *n = nn;
}
// helper to test a denominator
// returns the magnitude of the error
double helper_rattest(double x, int tryDenom, int* numerator)
{
    double r = x * tryDenom;
    double rr = std::round(r);
    auto num = int(rr);
    auto err = std::abs(r - rr) / tryDenom;
    *numerator = num;
    return err;
}

// helper to reduce the rational number
int gcd(int a, int b)
{
    auto c = a % b;
    if (c == 0)
        return b;
    return gcd(b, int(c));
}
// gets a rational approximation for double d
// numerator is stored in n
// denominator is stored in m
// uses brute force by scanning the denominator range
void approx_brute(double d, int* n, int* m)
{
    bool negative, inverse;
    d = helper_pre(d, &negative, &inverse);
    int upto = int(std::numeric_limits<int>::max() / d);
    int bestNumerator;
    int bestDenominator = 1;
    auto bestErr = helper_rattest(d, 1, &bestNumerator);
    for (int kk = 2; kk < upto; ++kk)
    {
        int n;
        auto e = helper_rattest(d, kk, &n);
        if (e < bestErr)
        {
            bestErr = e;
            bestNumerator = n;
            bestDenominator = kk;
        }
        if (bestErr == 0)
            break;
    }
    // reduce, just in case
    auto g = gcd(bestNumerator, bestDenominator);
    bestNumerator /= g;
    bestDenominator /= g;
    helper_post(&bestDenominator, &bestNumerator, negative, inverse);
    *n = bestNumerator;
    *m = bestDenominator;
}
int main()
{
    int n, m;
    auto re = std::default_random_engine();
    std::random_device rd;
    re.seed(rd());
    for (auto u : { // take a copy: list elements are const, and operator() is non-const
        std::uniform_real_distribution<double>(10000, 15000),
        std::uniform_real_distribution<double>(100000, 150000),
        std::uniform_real_distribution<double>(200000, 250000),
        std::uniform_real_distribution<double>(400000, 450000),
        std::uniform_real_distribution<double>(800000, 850000),
        std::uniform_real_distribution<double>(1000000, 1500000),
        std::uniform_real_distribution<double>(2000000, 2500000),
        std::uniform_real_distribution<double>(4000000, 4500000),
        std::uniform_real_distribution<double>(8000000, 8500000),
        std::uniform_real_distribution<double>(10000000, 15000000)
    })
    {
        auto dd = u(re);
        std::cout << "approx: " << std::setprecision(14) << dd << std::endl;
        auto before = std::chrono::steady_clock::now();
        approx_brute(dd, &n, &m);
        auto after = std::chrono::steady_clock::now();
        std::cout << n << " / " << m << " dur: " << (after - before).count() << std::endl;
        before = std::chrono::steady_clock::now();
        approx(dd, &n, &m);
        after = std::chrono::steady_clock::now();
        std::cout << n << " / " << m << " dur: " << (after - before).count()
                  << std::endl
                  << std::endl;
    }
}
Here's some sample output:
approx: 13581.807792679
374722077 / 27590 dur: 3131300
374722077 / 27590 dur: 15000
approx: 103190.31976517
263651267 / 2555 dur: 418700
263651267 / 2555 dur: 6300
approx: 223753.78683426
1726707973 / 7717 dur: 190100
1726707973 / 7717 dur: 5800
approx: 416934.79214075
1941665327 / 4657 dur: 102100
403175944 / 967 dur: 5700
approx: 824300.61241502
1088901109 / 1321 dur: 51900
1088901109 / 1321 dur: 5900
approx: 1077460.29557
1483662827 / 1377 dur: 39600
1483662827 / 1377 dur: 5600
approx: 2414781.364653
1079407270 / 447 dur: 17900
1079407270 / 447 dur: 7300
approx: 4189869.294816
1776504581 / 424 dur: 10600
1051657193 / 251 dur: 9900
approx: 8330270.2432111
308219999 / 37 dur: 5400
308219999 / 37 dur: 10300
approx: 11809264.006453
1830435921 / 155 dur: 4000
1830435921 / 155 dur: 10500
Thanks to all who commented and drew my attention to the concept of continued fractions.
According to this paper by William F. Hammond, there is an equivalence between the Euclidean algorithm and the continued-fractions method.
The sub-optimal results are due to the fact that the numerator is constrained as well as the denominator, so if the non-brute-force algorithm only produces "convergents", it neglects the range of denominators between the first convergent to violate the constraints and the one just before it.
The denominators after the returned convergent (and before the one that follows) may approximate the target almost as well as that next convergent, and the difference between subsequent convergents p_k/q_k and p_{k+1}/q_{k+1} can be shown to be exactly 1/(q_k * q_{k+1}).
So I suppose this would be the bound on the difference between the brute-force and Euclidean-algorithm results. The ratio of the errors between them, however, can be practically anything (one can easily find examples with error ratios of more than 100).
I hope I read everything correctly. I'm no authority on this.
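To make the convergent picture concrete, here is a minimal sketch (my illustration, not the code above) that prints the continued-fraction convergents h/k of a double while the denominator stays within a limit; each convergent is the best approximation among all fractions with a denominator no larger than its own:

#include <cmath>
#include <cstdint>
#include <iostream>
#include <limits>

// Print the continued-fraction convergents h/k of x whose denominator does
// not exceed maxDen, via the standard recurrence
//   h_j = a_j * h_{j-1} + h_{j-2},   k_j = a_j * k_{j-1} + k_{j-2}.
void convergents(double x, int64_t maxDen)
{
    int64_t hPrev = 0, kPrev = 1; // h_{-2}, k_{-2}
    int64_t h = 1, k = 0;         // h_{-1}, k_{-1}
    double t = x;
    for (int j = 0; j < 64; ++j)
    {
        int64_t a = (int64_t)std::floor(t); // next continued-fraction digit
        int64_t hNext = a * h + hPrev;
        int64_t kNext = a * k + kPrev;
        if (kNext > maxDen)
            break; // the next convergent would violate the denominator constraint
        hPrev = h; kPrev = k;
        h = hNext; k = kNext;
        std::cout << h << " / " << k << "\n";
        double frac = t - std::floor(t);
        if (frac < 1e-9)
            break; // x is rational to within double precision; done
        t = 1.0 / frac;
    }
}

int main()
{
    convergents(13581.807792679, std::numeric_limits<int>::max());
}

Note that for the fraction-struct use case the numerator would need the same INT_MAX check as the denominator; this sketch bounds only the denominator.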

Karatsuba Implementation C++

So I've decided to take a stab at implementing Karatsuba's algorithm in C++ (I haven't used this language since my second coding class a lifetime ago, so I'm very, very rusty). Anyhow, I believe I've followed the pseudocode line by line, but my algorithm keeps producing the wrong answer.
x = 1234, y = 5678
Actual Answer: x*y ==> 7006652
Program output: x*y ==> 12272852
*Note: I'm running on a Mac and building the executable with c++ -std=c++11 -stdlib=libc++ karatsuba.cpp
Anywho, here's the code I've drafted; feel free to call out what I'm doing wrong or how to improve my C++.
Thanks!
Code:
#include <iostream>
#include <tuple>
#include <cmath>
#include <math.h>
using namespace std;

/** Method signatures **/
tuple<int, int> splitHalves(int x);
int karatsuba(int x, int y, int n);

int main()
{
    int x = 5678;
    int y = 1234;
    int xy = karatsuba(x, y, 4);
    cout << xy << endl;
    return 0;
}

int karatsuba(int x, int y, int n)
{
    if (n == 1)
    {
        return x * y;
    }
    else
    {
        int a, b, c, d;
        tie(a, b) = splitHalves(x);
        tie(c, d) = splitHalves(y);
        int p = a + b;
        int q = b + c;
        int ac = karatsuba(a, c, round(n / 2));
        int bd = karatsuba(b, d, round(n / 2));
        int pq = karatsuba(p, q, round(n / 2));
        int acbd = pq - bd - ac;
        return pow(10, n) * ac + pow(10, round(n / 2)) * acbd + bd;
    }
}

/**
 * Method taken from https://stackoverflow.com/questions/32016815/split-integer-into-two-separate-integers#answer-32017073
 */
tuple<int, int> splitHalves(int x)
{
    const unsigned int Base = 10;
    unsigned int divisor = Base;
    while (x / divisor > divisor)
        divisor *= Base;
    return make_tuple(round(x / divisor), x % divisor);
}
There are a lot of problems in your code...
First, you have a wrong coefficient here:
int q = b + c;
It has to be:
int q = c + d;
Next, the implementation of splitHalves doesn't do the job. Try this:
tuple<int, int> splitHalves(int x, int power)
{
    int divisor = pow(10, power);
    return make_tuple(x / divisor, x % divisor);
}
That would give you the "correct" answer for your input, but... that is not the Karatsuba method.
First, keep in mind that you don't need to "split in halves". Consider 12 * 3456: splitting the first number into halves should give a = 0, b = 12, while your implementation gives a = 1, b = 2.
Overall, Karatsuba works with arrays, not integers.
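For what it's worth, here is a sketch of an integer-only version with the coefficient fixed and the split recomputed per level (illustrative only; as noted above, a real Karatsuba works on digit arrays, and this one assumes non-negative inputs that fit in int64_t):

#include <cstdint>
#include <iostream>

// Illustrative integer-only Karatsuba.
int64_t karatsuba(int64_t x, int64_t y)
{
    if (x < 10 || y < 10)
        return x * y; // base case: a single-digit factor
    // count the digits of the larger operand
    int n = 0;
    for (int64_t t = (x > y ? x : y); t > 0; t /= 10)
        ++n;
    // split both numbers around 10^m, where m = n/2
    int64_t m10 = 1;
    for (int i = 0; i < n / 2; ++i)
        m10 *= 10;
    int64_t a = x / m10, b = x % m10; // x = a*10^m + b
    int64_t c = y / m10, d = y % m10; // y = c*10^m + d
    int64_t ac = karatsuba(a, c);
    int64_t bd = karatsuba(b, d);
    int64_t adbc = karatsuba(a + b, c + d) - ac - bd; // cross term: ad + bc
    return ac * m10 * m10 + adbc * m10 + bd;
}

int main()
{
    std::cout << karatsuba(1234, 5678) << std::endl; // prints 7006652
}

The key differences from the posted code: the cross term is (a + b)(c + d) - ac - bd, and the combine step multiplies by 10^(2m) and 10^m, where m is the split position actually used, rather than by 10^n.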

How to convert integer to double implicitly?

int a{5},b{2},c{9};
double d = (double)a / (double)b + (double)c;
Or I can use static_cast. Either way is verbose, especially when the formula is long. Is there a better solution?
You can multiply by 1.0:
int a{5}, b{2}, c{9};
double d = 1.0 * a / b + 1.0 * c;
And when you work with sums you can add to 0.0:
double d = 0.0 + a - b + c;
Most compilers optimize this, so the meaningless operation is not evaluated; only the type conversion is done.
Remember that you only need to cast the first operand in each division/multiplication group, and you can do so in any manner that seems reasonable; plain additions and subtractions (with no multiplications or divisions involved) are converted too, since the usual arithmetic conversions guarantee it. So your example:
double d = (double)a / (double)b + (double)c;
Really may be rewritten like this:
double d = (double)a / b + c;
double d = 1.0 * a / b + c;
double d = static_cast<double>(a) / b + c;
Some more examples:
double x = (double)a / b + (double)c / d + e;
double x = 1.0 * a / b + 1.0 * c / d + e;
double x = static_cast<double>(a) / b + static_cast<double>(c) / d + e;
This works, but all you need is a single 1.0 * in front of a:
int a{5},b{2},c{9};
double d = (double)a / (double)b + (double)c;
int a{5},b{2},c{9};
double d = 1.0*a / b + c;
The rules of precedence and implicit conversion will cause all the variables to be converted to doubles.
One thing to be careful of is grouped variables, which will need their own 1.0 * or 0.0 + as appropriate:
int a{5},b{2},c{9};
double d = a / (0.0 + b + c);
int a{5},b{2},c{9};
double d = a / (1.0 * b * c);
Alternatively, one can use a static_cast on the associated variable. I prefer the shorter version, as the 1.0 * or 0.0 + both scream out "implicit conversion to double":
int a{5},b{2},c{9};
double d = a / (static_cast<double>(b) * c);
Is there a better solution?
Yes. Express intent through functions.
Marvel as the optimiser emits perfectly efficient assembler. Enjoy the accolades of your colleagues who gaze in wonder at your awesomely readable and maintainable code:
#include <iostream>

auto a_over_b_plus_c(double a, double b, double c)
{
    double d = a / b + c;
    return d;
}

int main()
{
    int a = 5, b = 2, c = 9;
    std::cout << a_over_b_plus_c(a, b, c) << std::endl;
}
For fun, here's a solution based on tuples & lambdas:
#include <iostream>
#include <tuple>

template<class T, class... Args>
auto to(Args&&... args)
{
    return std::make_tuple(T(std::forward<Args>(args))...);
}

int main()
{
    int a = 5, b = 2, c = 9;
    auto calc = [](auto&& vals) {
        auto& a = std::get<0>(vals);
        auto& b = std::get<1>(vals);
        auto& c = std::get<2>(vals);
        return a / b + c;
    };
    auto result = calc(to<double>(a, b, c));
    std::cout << result << std::endl;
}
... and something perhaps more readable...
#include <iostream>
#include <tuple>
#include <complex>

template<class T, class F, class... Args>
auto with(F f, Args&&... args)
{
    return f(T(std::forward<Args>(args))...);
}

int main()
{
    int a = 5, b = 2, c = 9;
    auto calc = [](auto&& a, auto&& b, auto&& c) {
        return a / b + c;
    };
    auto result = with<double>(calc, a, b, c);
    auto result2 = with<float>(calc, a, b, c);
    auto result3 = with<std::complex<double>>(calc, a, b, c);
    auto result4 = with<std::complex<float>>(calc, a, b, c);
    std::cout << result << std::endl;
    std::cout << result2 << std::endl;
    std::cout << result3 << std::endl;
    std::cout << result4 << std::endl;
}

Monte Carlo Sims - Please check my algorithm

Basically, the problem simulates the following:
There is an urn with 50 green balls and 50 red balls.
I am allowed to pick balls from the urn, without replacement, with the following rules: For every red ball picked, I lose a dollar, for every green ball picked, I gain a dollar.
I can stop picking whenever I want. Worst case scenario is I pick all 100, and net 0.
The question is to come up with an optimal stopping strategy, and create a program to compute the expected value of the strategy.
My strategy is to continue picking balls, while the expected value of picking another ball is positive.
That is, the stopping rule is DYNAMIC.
Here's the recursive formula (the LaTeX rendering is in an image): http://i.stack.imgur.com/fnzYk.jpg
As implemented below, it reads E(g, r) = max( p * E(g-1, r) + q * E(g, r-1), r - g ), where p = g / (g + r) and q = 1 - p, with base cases E(0, r) = r and E(g, 0) = 0.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

double ExpectedValue(double, double);
double max(double, double);

int main() {
    double g = 50;
    double r = 50;
    double EV = ExpectedValue(g, r);
    printf("%f\n\n", EV);
    system("PAUSE");
    return 0;
}

double ExpectedValue(double g, double r) {
    double p = (g / (g + r));
    double q = 1 - p;
    if (g == 0)
        return r;
    if (r == 0)
        return 0;
    double E_gr = max((p * ExpectedValue(g - 1, r)) + (q * ExpectedValue(g, r - 1)), (r - g));
    return E_gr;
}

double max(double a, double b) {
    if (a > b)
        return a;
    else
        return b;
}
I let it run for 30 minutes, and it was still working.
For small values of g and r, a solution is computed very quickly. What am I doing wrong?
Any help is much appreciated!
Your algorithm is fine, but you are wasting information. For a certain pair (g, r) you calculate its ExpectedValue and then you throw that information away. With recursive algorithms, remembering previously calculated values can often speed things up a LOT.
The following code runs in the blink of an eye; for example, for g = r = 5000 it calculates 36.900218 in about 1 second. It remembers previous calculations of ExpectedValue(g, r) to prevent unnecessary recursion and recalculation.
#include <stdio.h>
#include <stdlib.h>

double ExpectedValue(int g, int r, double ***expectedvalues);
double max(double, double);

int main(int argc, char *argv[]) {
    int g = 50;
    int r = 50;
    int i, j;
    double **expectedvalues = malloc(sizeof(double*) * (g+1));
    // initialise
    for (i = 0; i < (g+1); i++) {
        expectedvalues[i] = malloc(sizeof(double) * (r+1));
        for (j = 0; j < (r+1); j++) {
            expectedvalues[i][j] = -1.0;
        }
    }
    double EV = ExpectedValue(g, r, &expectedvalues);
    printf("%f\n\n", EV);
    // free memory
    for (i = 0; i < (g+1); i++) free(expectedvalues[i]);
    free(expectedvalues);
    return 0;
}

double ExpectedValue(int g, int r, double ***expectedvalues) {
    if (g == 0) return r;
    if (r == 0) return 0;
    // did we calculate this before? If yes, then return that value
    if ((*expectedvalues)[g][r] != -1.0) return (*expectedvalues)[g][r];
    double p = (double) g / (g + r);
    double E_gr = max(p * ExpectedValue(g-1, r, expectedvalues) + (1.0-p) * ExpectedValue(g, r-1, expectedvalues), (double) (r-g));
    // store value for later lookup
    (*expectedvalues)[g][r] = E_gr;
    return E_gr;
}

double max(double a, double b) {
    if (a > b) return a;
    else return b;
}
Roughly speaking, adding one ball to the urn doubles the number of calls you will have to make to ExpectedValue (let's not quibble about boundary conditions). That is O(2^n) growth, and it can bring the most powerful computer on Earth to its knees.
The problem is that you are calculating the same values over and over again. Keep a table of ExpectedValue(r, g) and fill it in as you go, so that you never have to calculate the same value more than once. Then you'll be working in O(n^2), which is a heck of a lot faster.
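For illustration (my sketch, not the answerer's code), the same table can be filled bottom-up in C++, which removes the recursion entirely:

#include <algorithm>
#include <iostream>
#include <vector>

// E[g][r]: expected final profit with g green and r red balls left,
// under the optimal stopping rule. Mirrors the recursion above:
// the stop value is r - g; the continue value averages over the next draw.
double expectedValue(int G, int R)
{
    std::vector<std::vector<double>> E(G + 1, std::vector<double>(R + 1, 0.0));
    for (int r = 0; r <= R; ++r) E[0][r] = r; // only red left: stop, profit r
    // E[g][0] = 0 already: only green left, draw them all and break even
    for (int g = 1; g <= G; ++g)
        for (int r = 1; r <= R; ++r)
        {
            double p = (double)g / (g + r);
            double cont = p * E[g - 1][r] + (1.0 - p) * E[g][r - 1];
            E[g][r] = std::max(cont, (double)(r - g));
        }
    return E[G][R];
}

int main()
{
    std::cout << expectedValue(50, 50) << "\n"; // same value as the memoized version
}

Each cell only looks at E[g-1][r] and E[g][r-1], so the fill runs in O(g*r) time and memory.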
In my opinion your solution is correct, but rather straightforward. Here's what you can do:
Eliminate the recursion!
Eliminate recalculations of ExpectedValue
Parallelize your code
Read this [lecture notes]. It will definitely be useful.
I can provide some code samples, but it'd not be fair.

How can I format a decimal as a fraction with a limit on the denominator

Hi all, I am trying to format a decimal A into a fraction B + C/D, where a certain limit is imposed on D; say, D could be one among [2...9] or [2...19], etc. B, C, and D are integers.
The goal is to get the formatted fraction as close to the decimal as possible.
Is there an existing algorithm/theory for this?
Or is there an API I can call in the Mac SDK?
// Not tested or even compiled :-). Assumes you are handling sign.
// in:  a     - the decimal to convert
//      limit - the largest denominator you will allow
// out: outN  - numerator
//      outD  - denominator
#include <math.h>

void d2f(double a, int limit, int& outN, int& outD) {
    double z;
    int dPrev, d, n;
    a = fabs(a);
    z = a;
    d = 1;
    n = (int)a;
    dPrev = 0;
    // note: cast n before dividing, or n/d truncates to an int
    while (a - (double)n / d != 0 && z != floor(z)) {
        z = 1 / (z - floor(z));
        int tmp = d;
        d = d * (int)floor(z) + dPrev;
        if (d > limit) {
            d = tmp;
            break;
        }
        dPrev = tmp;
        n = (int)floor(a * d + 0.5);
    }
    outN = n;
    outD = d;
}
Hope that helps/works :-)
Look into continued fractions.
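Since the denominator limits mentioned here are tiny ([2...9], [2...19]), a brute-force scan over the allowed denominators is also perfectly adequate. A minimal sketch (mine, not from the answers above; assumes sign handling as in d2f):

#include <cmath>
#include <iostream>

// Find B + C/D closest to a, with 2 <= D <= maxDen (D = 1 is used when
// the best C is 0). Brute force over denominators; fine for small limits.
void d2mixed(double a, int maxDen, int& B, int& C, int& D)
{
    B = (int)std::floor(a);
    double frac = a - std::floor(a); // fractional part in [0, 1)
    C = 0; D = 1;
    double bestErr = frac; // error of approximating frac by 0/1
    for (int d = 2; d <= maxDen; ++d)
    {
        int c = (int)std::floor(frac * d + 0.5); // nearest numerator for this d
        double err = std::fabs(frac - (double)c / d);
        if (err < bestErr) { bestErr = err; C = c; D = d; }
    }
    if (C == D) { ++B; C = 0; D = 1; } // frac rounded up to a whole unit
}

int main()
{
    int B, C, D;
    d2mixed(3.14159, 9, B, C, D);
    std::cout << B << " + " << C << "/" << D << "\n"; // prints 3 + 1/7
}

For large denominator limits, the continued-fraction loop in d2f above (or a Stern-Brocot search) reaches a good fraction without scanning every denominator.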