Discrete binary search - C++

Can someone please explain discrete binary search with an example?
I read about it on the above link, and got a basic idea about what it is, but I still don't understand the code part and how it is practically implemented.

Basically, assume that
You have a function f(x) which is monotonically increasing
(decreasing) on the interval [a, b].
f(a) < C < f(b)
You want to find x such that f(x) = C.
Then you can use binary search to find x: essentially, you halve the interval of possible values for x each time.
To implement it, do something along the lines of:
#include <cmath>   // for std::fabs

#define EPS 1E-9

// Some monotonically increasing function on [a, b], for example f(x) = x^3:
double f(double x)
{
    return x*x*x;
}

double binarySearch(double C, double a, double b)
{
    double low = a, high = b;
    double mid = low;
    while (std::fabs(high - low) > EPS)
    {
        mid = low + (high - low) / 2;
        if (f(mid) < C)
            low = mid;
        else
            high = mid;
    }
    return mid;
}
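For example (just an illustration using the f above), searching for C = 8 on [0, 3] should converge to x ≈ 2, since 2^3 = 8:

#include <iostream>

int main()
{
    // f(2) = 8, so the result should be within EPS of 2.
    std::cout << binarySearch(8.0, 0.0, 3.0) << std::endl;
    return 0;
}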

Related

How to accurately summate Clenshaw algorithm

I have the following code to sum the Chebyshev expansion of a function using the Clenshaw algorithm:
long double summate_chebyshev(long double x, long double* c, int count) {
    long double bn, bn_1 = 0, bn_2 = 0;
    // Clenshaw recurrence, run backwards over the coefficients:
    for (int i = count - 1; i > 0; i--) {
        bn = c[i] + 2.0*x*bn_1 - bn_2;
        bn_2 = bn_1;
        bn_1 = bn;
    }
    // The c[0] term is handled separately, as usual for Chebyshev series:
    bn = 2.0l*c[0] + 2.0*x*bn_1 - bn_2;
    return (bn - bn_2)/2.0l;
}
It gives me pretty good precision, but the Chebyshev coefficients converge to 0 quickly (c[17] is already 4.34e-20 < LDBL_EPSILON, so it is effectively ignored). I wish to increase the accuracy of the summation by adding more terms that are individually tiny but together make a difference. Is there any way to achieve this (improved versions of Clenshaw summation, or any other method to evaluate Chebyshev polynomials accurately)?

Fast quadratic minimizer

Given a quadratic function, that is f(x) = ax^2 + bx + c, what is the fastest way to find x in [-1, 1] which minimizes f(x)?
So far this is the function I've come up with:
double QuadraticMinimizer(double a, double b, double c) {
    double x = 1 - 2*(b > 0);   // endpoint -1 or +1, used when a <= 0
    if (a > 0) {
        x = -b/(2*a);           // vertex of the parabola
        if (fabs(x) > 1)
            x = Sign(x);        // clamp to the nearest endpoint
    }
    return x;
}
Is it possible to do better?
There is no "fastest way" because the running time depends on the particular machine and the particular distribution of the input parameters. Also, there is not much that you can remove from the initial code.
If the location of the extremum -b/(2a) frequently falls outside the interval [-1, 1], you can avoid the division in those cases.
If you allow hacking the sign bit of the floating-point representation to implement fast abs, sgn and setsgn functions, you can use something like:
a *= -2;
if (hack_abs(b) >= hack_abs(a))
    return hack_setsgn(1, hack_sgn(a) ^ hack_sgn(b));
return b / a;
You can also try with the more portable copysign function.
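For instance, a portable sketch using std::copysign and std::fabs (QuadraticMinimizerPortable is an illustrative name; it follows the same case analysis as the original function):

#include <cmath>

double QuadraticMinimizerPortable(double a, double b, double /*c*/)
{
    // c only shifts f(x) vertically; it never moves the minimizer.
    if (a <= 0)
        return std::copysign(1.0, -b);  // concave or linear: minimum at an endpoint
    if (std::fabs(b) >= 2*a)            // |-b/(2a)| >= 1: vertex outside [-1, 1]
        return std::copysign(1.0, -b);  // clamp to the nearest endpoint, no division
    return -b / (2*a);                  // vertex inside the interval
}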

How to calculate (n!)%1000000009

I need to find n! % 1000000009.
n is of the form 2^k for k in the range 1 to 20.
The function I'm using is:
#define llu unsigned long long
#define MOD 1000000009

llu mulmod(llu a, llu b) // calculates (a*b)%MOD, taking care of overflow
{
    llu x = 0, y = a % MOD;
    while (b > 0)
    {
        if (b % 2 == 1)
        {
            x = (x + y) % MOD;
        }
        y = (y * 2) % MOD;
        b /= 2;
    }
    return (x % MOD);
}

llu fun(int n) // returns the answer to my query, i.e. n!%MOD
{
    llu ans = 1;
    for (int j = 1; j <= n; j++)
    {
        ans = mulmod(ans, j);
    }
    return ans;
}
My problem requires calling the function fun about n/2 times. My code runs too slowly for values of k around 15. Is there a way to go faster?
EDIT:
In fact, I'm calculating 2*[(i-1)C(2^(k-1)-1)]*[((2^(k-1))!)^2] for all i in the range 2^(k-1) to 2^k. My program needs (nCr)%MOD while avoiding overflow.
EDIT: I need an efficient way to find nCr%MOD for large n.
The mulmod routine can be sped up by a large constant factor:
1) '%' is overkill, since a and b are both already less than MOD; it's enough to evaluate c = a + b; if (c >= MOD) c -= MOD;
2) Multiple bits can be processed at once; see the optimizations to the "Russian peasant" multiplication algorithm.
3) a * b is actually small enough to fit in a 64-bit unsigned long long without overflow (see the sketch after this list).
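A minimal sketch of point 3 (fast_mulmod is an illustrative name, not part of the original code): since MOD < 2^30, the product of two reduced operands is below 2^60 and cannot overflow, so the whole shift-and-add loop collapses into one multiplication.

#define llu unsigned long long
#define MOD 1000000009

// (a % MOD) and (b % MOD) are both < 2^30, so their product is < 2^60,
// which fits comfortably in an unsigned 64-bit integer.
llu fast_mulmod(llu a, llu b)
{
    return (a % MOD) * (b % MOD) % MOD;
}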
Since the actual problem is about nCr mod M, the high-level optimization uses the recurrence
(n+1)Cr mod M = (n+1) * nCr / (n+1-r) mod M.
Because (nCr mod M)*(n+1) is generally not divisible by (n+1-r), the division has to be implemented as multiplication by the modular inverse (n+1-r)^(-1). By Fermat's little theorem, the modular inverse b^(-1) is b^(M-2) mod M when M is prime. (Otherwise it's b^(phi(M)-1), where phi is Euler's totient function.)
Modular exponentiation is most commonly implemented with repeated squaring, which in this case requires roughly 45 modular multiplications per divisor.
If you can instead use the recurrence
nC(r+1) mod M = nCr * (n-r) / (r+1) mod M,
it's only necessary to calculate (r+1)^(M-2) mod M once.
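As a concrete illustration of the inverse computation described above, here is a minimal sketch of repeated squaring plus a Fermat inverse (powmod and invmod are illustrative names):

#define llu unsigned long long

// Repeated squaring: computes base^exp % mod. Since mod < 2^30 here,
// the 64-bit products below never overflow.
llu powmod(llu base, llu exp, llu mod)
{
    llu result = 1;
    base %= mod;
    while (exp > 0)
    {
        if (exp & 1)                  // low bit set: fold base into the result
            result = result * base % mod;
        base = base * base % mod;     // square for the next bit
        exp >>= 1;
    }
    return result;
}

// Fermat's little theorem: for prime mod, b^(-1) = b^(mod-2) % mod.
llu invmod(llu b, llu mod)
{
    return powmod(b, mod - 2, mod);
}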
Since you are looking for nCr for multiple sequential values of n you can make use of the following:
(n+1)Cr = (n+1)! / ((r!)*(n+1-r)!)
(n+1)Cr = n!*(n+1) / ((r!)*(n-r)!*(n+1-r))
(n+1)Cr = n! / ((r!)*(n-r)!) * (n+1)/(n+1-r)
(n+1)Cr = nCr * (n+1)/(n+1-r)
This saves you from explicitly calling the factorial function for each i.
Furthermore, to save that first call to nCr you can use:
nC(n-1) = n //where n in your case is 2^(k-1).
EDIT:
As Aki Suihkonen pointed out, (a/b) % m != (a%m) / (b%m), so the method above won't work right out of the box. There are two different solutions to this:
Since 1000000009 is prime, a/b % m == a*c % m, where c is the inverse of b modulo m. You can find an explanation of how to calculate it here; follow the link to the Extended Euclidean Algorithm for more on how to compute it.
The other option, which might be easier, is to recognize that since nCr * (n+1)/(n+1-r) must give an integer, it must be possible to write n+1-r == a*b, where a | nCr and b | n+1 (the | here means "divides"; you can rewrite that as nCr % a == 0 if you like). Without loss of generality, let a = gcd(n+1-r, nCr) and then let b = (n+1-r) / a. This gives (n+1)Cr == (nCr / a) * ((n+1) / b) % MOD. Now your divisions are guaranteed to be exact, so you just calculate them and then proceed with the multiplication as before. EDIT: As per the comments, I don't believe this method will work.
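For the first option, a hedged sketch of one (n+1)Cr step using the modular inverse (next_binom is an illustrative name; powmod/invmod as in the sketch above, and r <= n is assumed):

// One step of the recurrence (n+1)Cr = nCr * (n+1) / (n+1-r), with the
// division done as multiplication by a Fermat inverse (MOD is prime).
llu next_binom(llu nCr, llu n, llu r)
{
    llu numerator = (n + 1) % MOD;
    llu inv = invmod((n + 1 - r) % MOD, MOD);  // (n+1-r)^(MOD-2) % MOD
    return nCr % MOD * numerator % MOD * inv % MOD;
}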
Another thing I might try is adding a fast path to your llu mulmod(llu a, llu b):
llu mulmod(llu a, llu b)
{
    llu q = a * b;
    if (a != 0 && q / a != b) // the multiplication overflowed: fall back to the slow path
    {
        llu x = 0, y = a % MOD;
        while (b > 0)
        {
            if (b % 2 == 1)
            {
                x = (x + y) % MOD;
            }
            y = (y * 2) % MOD;
            b /= 2;
        }
        return (x % MOD);
    }
    else
    {
        return q % MOD;
    }
}
That could also save some precious time.
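Putting several of these ideas together, a common pattern for answering many nCr queries (a sketch, not taken verbatim from the answers above; fact and nCr_mod are illustrative names, and invmod is assumed from the earlier sketch) is to precompute all factorials once and answer each query with two lookups and one inverse:

#include <vector>

std::vector<llu> fact;

void init_factorials(int n)
{
    fact.resize(n + 1);
    fact[0] = 1;
    for (int j = 1; j <= n; j++)
        fact[j] = fact[j - 1] * j % MOD;  // single multiply; no shift-and-add loop needed
}

llu nCr_mod(int n, int r)
{
    if (r < 0 || r > n) return 0;
    llu denom = fact[r] * fact[n - r] % MOD;
    return fact[n] * invmod(denom, MOD) % MOD;  // invmod from the earlier sketch
}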

Recursion on Integration Approximation Function

I am attempting to approximate integrals using an adaptive Trapezoidal Rule.
I have a coarse integral approximation:
//Approximates the integral of f across the interval [a,b]
double coarse_app(double(*f)(double x), double a, double b) {
    return (b - a) * (f(a) + f(b)) / 2.0;
}
I have a fine integral approximation:
//Approximates the integral of f across the interval [a,b]
double fine_app(double(*f)(double x), double a, double b) {
    double m = (a + b) / 2.0;
    return (b - a) / 4.0 * (f(a) + 2.0 * f(m) + f(b));
}
This is made adaptive by summing the approximations over successively smaller portions of the given interval until either the recursion level is too high or the coarse and fine approximations are very close to one another:
//Adaptively approximates the integral of f across the interval [a,b] with
// tolerance tol.
double trap(double(*f)(double x), double a, double b, double tol) {
    double q = fine_app(f, a, b);
    double r = coarse_app(f, a, b);
    if ((currentLevel >= minLevel) && (abs(q - r) <= 3.0 * tol)) {
        return q;
    } else if (currentLevel >= maxLevel) {
        return q;
    } else {
        ++currentLevel;
        return (trap(f, a, b / 2.0, tol / 2.0) + trap(f, a + (b / 2.0), b, tol / 2.0));
    }
}
If I manually calculate an integral by breaking it up into sections and using fine_app on it, I get a very good approximation. However, when I use the trap function, which should do this for me, all of my results are far too small.
For example, trap(square, 0, 2.0, 1.0e-2) gives the output 0.0424107, where the square function is defined as x^2. However, the output should be around 2.667. This is far worse than doing a single run of fine_app on the entire interval, which gives a value of 3.
Conceptually, I believe I have it implemented correctly, but there is something about C++ recursion which is not doing what I expect it to.
First time programming in C++, so all improvements are welcome.
I'm assuming you have currentLevel defined somewhere else. You don't want to do that. You also calculate your midpoints incorrectly.
Take a = 3, b = 5:
[a, b / 2.0] = [3, 2.5]
[a + b / 2.0, b] = [5.5, 5]
The correct subintervals should be [3, 4] and [4, 5].
The code should look like this:
double trap(double(*f)(double x), double a, double b, double tol, int currentLevel) {
    double q = fine_app(f, a, b);
    double r = coarse_app(f, a, b);
    if ((currentLevel >= minLevel) && (std::fabs(q - r) <= 3.0 * tol)) {
        return q;
    } else if (currentLevel >= maxLevel) {
        return q;
    } else {
        ++currentLevel;
        return (trap(f, a, (a + b) / 2.0, tol / 2, currentLevel) +
                trap(f, (a + b) / 2.0, b, tol / 2, currentLevel));
    }
}
You can add a helper function so you don't have to specify currentLevel:
double integrate(double (*f)(double x), double a, double b, double tol)
{
    return trap(f, a, b, tol, 1);
}
If I call this as integrate(square, 0, 2, 0.01), I get the answer 2.6875, which means you need an even lower tolerance to converge to the correct result of 8/3 ≈ 2.66667. You can check the exact error bound by using the error terms for Simpson's method.
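For completeness, a minimal harness for the corrected version (square and the tolerance are just the example values from the question; minLevel/maxLevel are assumed to be defined as before):

#include <cstdio>

// The integrand from the question: f(x) = x^2.
double square(double x) { return x * x; }

int main()
{
    // The exact value is 8/3; a tighter tolerance gets closer to 2.66667.
    printf("%f\n", integrate(square, 0.0, 2.0, 1.0e-4));
    return 0;
}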

finding cube root in C++?

Strange things happen when I try to find the cube root of a number.
The following code returns an undefined result (in cmd: -1.#IND):
cout << pow((double)(20.0*(-3.2) + 30.0), (double)1/3);
while this one works perfectly fine (in cmd: 4.93242414866094):
cout << pow((double)(20.0*4.5 + 30.0), (double)1/3);
Mathematically it should work, since we can take the cube root of a negative number.
pow is from the Visual C++ 2010 math.h library. Any ideas?
pow(x, y) from <cmath> does NOT work if x is negative and y is non-integral.
This is a limitation of std::pow, as documented in the C standard and on cppreference:
Error handling
Errors are reported as specified in math_errhandling
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
If base is zero and exp is zero, a domain error may occur.
If base is zero and exp is negative, a domain error or a pole error may occur.
There are a couple ways around this limitation:
Cube-rooting is the same as taking something to the 1/3 power, so you could do std::pow(x, 1/3.).
In C++11, you can use std::cbrt. C++11 introduced both square-root and cube-root functions, but no generic n-th root function that overcomes the limitations of std::pow.
The power 1/3 is a special case. In general, non-integral powers of negative numbers are complex. It wouldn't be practical for pow to check for special cases like integer roots, and besides, 1/3 as a double is not exactly 1/3!
I don't know about the visual C++ pow, but my man page says under errors:
EDOM The argument x is negative and y is not an integral value. This would result in a complex number.
You'll have to use a more specialized cube root function if you want cube roots of negative numbers - or cut corners and take absolute value, then take cube root, then multiply the sign back on.
Note that, depending on context, a negative number x to the 1/3 power is not necessarily the negative cube root you're expecting. It could just as easily be the first complex root, |x|^(1/3) * e^(i*pi/3). This is the convention Mathematica uses; it's also reasonable to just say it's undefined.
While (-1)^3 = -1, you can't simply take a rational power of a negative number and expect a real response. This is because there are other solutions to this rational exponent that are imaginary in nature.
http://www.wolframalpha.com/input/?i=x^(1/3),+x+from+-5+to+0
Similarly, plot x^x. For x = -1/3 this should have a solution, but the function is deemed undefined in R for x < 0.
Therefore, don't expect math.h to do magic that would make it inefficient; just change the sign yourself.
Guess you gotta take the negative out and put it in afterwards. You can have a wrapper do this for you if you really want to.
double yourPow(double x, double y)
{
    if (x < 0)
        return -1.0 * pow(-1.0*x, y);
    else
        return pow(x, y);
}
Don't cast to double by using (double), use a double numeric constant instead:
double thingToCubeRoot = -20.*3.2 + 30;
cout << thingToCubeRoot/fabs(thingToCubeRoot) * pow(fabs(thingToCubeRoot), 1./3.);
Should do the trick!
Also: don't include <math.h> in C++ projects, but use <cmath> instead.
Alternatively, use pow from the <complex> header for the reasons stated by buddhabrot
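For example, a minimal sketch with the question's negative value (note that this prints the principal complex root, not the real cube root):

#include <complex>
#include <iostream>

int main()
{
    std::complex<double> base(20.0 * (-3.2) + 30.0, 0.0);  // -34 + 0i
    // Principal root: |x|^(1/3) * e^(i*pi/3), roughly (1.62, 2.81i) here.
    std::cout << std::pow(base, 1.0 / 3.0) << std::endl;
    return 0;
}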
pow( x, y ) is the same as (i.e. equivalent to) exp( y * log( x ) )
if log(x) is invalid then pow(x,y) is also.
Similarly, you cannot compute 0 to the power of anything this way, since log(0) is undefined, even though mathematically 0 to a positive power should be 0.
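A quick illustration of the identity for a positive base (just a demonstration; this is not necessarily how any particular library implements pow):

#include <cmath>
#include <cstdio>

int main()
{
    double x = 20.0*4.5 + 30.0;  // 120, the working case from the question
    double y = 1.0 / 3.0;
    printf("%.12f\n", pow(x, y));        // library pow
    printf("%.12f\n", exp(y * log(x)));  // same value for x > 0
    return 0;
}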
C++11 has the cbrt function (see for example http://en.cppreference.com/w/cpp/numeric/math/cbrt) so you can write something like
#include <iostream>
#include <cmath>

int main(int argc, char* argv[])
{
    const double arg = 20.0*(-3.2) + 30.0;
    std::cout << cbrt(arg) << "\n";
    std::cout << cbrt(-arg) << "\n";
    return 0;
}
I do not have access to the C++ standard, so I do not know how the negative argument is handled, but a test on ideone (http://ideone.com/bFlXYs) seems to confirm that C++ (gcc 4.8.1) extends the cube root with the rule cbrt(x) = -cbrt(-x) for x < 0; for this extension, see http://mathworld.wolfram.com/CubeRoot.html
I was looking for the cube root, found this thread, and it occurs to me that the following code might work:
#include <cmath>
#include <stdexcept>
using namespace std;

double nth_root(double x, int n)
{
    if (n % 2 == 0 && x < 0)
        throw domain_error("even root from negative is fail");
    bool sign = (x >= 0);
    x = exp(log(fabs(x)) / n);
    return sign ? x : -x;
}
I think you should not confuse exponentiation with the nth-root of a number. See the good old Wikipedia
Because 1/3 will always evaluate to 0, as it is integer division.
Try 1.0/3.0 instead, and do not forget to declare the variables holding 1.0 and 3.0 as double.
Here's a little function I knocked up.
#include <cmath>
#include <cfloat>
#include <cstdlib>

#define uniform() (rand()/(1.0 + RAND_MAX))

double CBRT(double Z)
{
    double guess = Z;
    double x, dx;
    int loopbreaker;

retry:
    x = guess * guess * guess;
    loopbreaker = 0;
    while (fabs(x - Z) > FLT_EPSILON)
    {
        dx = 3 * guess * guess;            // derivative of guess^3
        loopbreaker++;
        if (fabs(dx) < DBL_EPSILON || loopbreaker > 53)
        {
            guess += uniform() * 2 - 1.0;  // perturb the guess and restart
            goto retry;
        }
        guess -= (x - Z) / dx;             // Newton-Raphson step
        x = guess * guess * guess;
    }
    return guess;
}
It uses Newton-Raphson to find a cube root. Sometimes Newton-Raphson gets stuck: if the root is very close to 0, the derivative becomes tiny, so the step can blow up and the iteration can oscillate. So I've clamped it and forced a restart if that happens. If you need more accuracy, you can change the FLT_EPSILONs.
If you ever have no math library, you can use this way to compute the cube root:
Cube root
double curt(double x) {
    if (x == 0) {
        // would otherwise return something like 4.257959840008151e-109
        return 0;
    }
    double b = 1;           // use any value except 0
    double last_b_1 = 0;
    double last_b_2 = 0;
    while (last_b_1 != b && last_b_2 != b) {
        last_b_1 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
        last_b_2 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
    }
    return b;
}
It is derived from the sqrt algorithm below. The idea is that b and x / b / b lie on opposite sides of the cube root of x, so their average lies closer to the cube root of x.
Square Root And Cubic Root (in Python)
def sqrt_2(a):
    if a == 0:
        return 0
    b = 1
    last_b = 0
    while last_b != b:
        last_b = b
        b = (b + a / b) / 2
    return b

def curt_2(a):
    if a == 0:
        return 0
    b = a
    last_b_1 = 0
    last_b_2 = 0
    while last_b_1 != b and last_b_2 != b:
        last_b_1 = b
        b = (b + a / b / b) / 2
        last_b_2 = b
        b = (b + a / b / b) / 2
    return b
In contrast to the square root, last_b_1 and last_b_2 are both required in the cube root because b oscillates between two values. You can modify these algorithms to compute the fourth root, fifth root, and so on.
Thanks to my math teacher Herr Brenner in 11th grade who told me this algorithm for sqrt.
Performance
I tested them on an Arduino with a 16 MHz clock frequency:
0.3525 ms for yourPow
0.3853 ms for nth_root
2.3426 ms for curt