How can I find out the contents of the exp() built-in function of the C numerics library <cmath> - C++

I recently decided to build a simple calculator programme, but when it came to exponents I was lost. OK, you can use <cmath>, but I'd rather know how it solves the problem inside that function, other than with an impossible amount of if statements, e.g.
if (y == 2) {
    x = x*x;
}
else if (y == 3) {
    x = x*x*x;
}
And so on... So, how does <cmath>'s exp() do it, and how can I find out?

From An algorithm for calculating exp(x) or e^x:
This algorithm makes it possible for exp(x) or e^x to be calculated
using only the operations of addition, subtraction, multiplication and
division. The basic idea is to use a polynomial approximation in
step 3 to calculate e^x. But because this approximation is only
accurate for small arguments x we must take steps 1 and 2 to reduce x
to a smaller value.
Step 1. Split up x: Write x = n + r, where n is the nearest integer to x and r is a real number between −½ and +½. Then e^x = e^n · e^r.
Step 2. Evaluate e^n: Multiply the number e by itself n times. To 14 digits, e = 2.7182818284590. The multiplication can be done quite efficiently. For example, e^8 can be evaluated with just 3 multiplications if it is written as ((e^2)^2)^2. To further increase efficiency, various integer powers of e can be calculated once and stored in a lookup table.
Step 3. Evaluate e^r using the polynomial: EXP(r) = e^r = 1 + r + (r^2)/2 + (r^3)/6 + (r^4)/24 + (r^5)/120.
For r between −½ and +½ this polynomial is accurate to within ±0.00003.
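For illustration, here is a minimal C++ sketch of those three steps. The function name, the use of round() for step 1, and the handling of negative n are my own choices, not part of the quoted article:
#include <cmath>  // only for round(); the approximation itself uses +, -, *, /

// Sketch of the quoted algorithm: x = n + r, e^x = e^n * e^r,
// with e^r taken from the degree-5 polynomial in step 3.
double my_exp(double x) {
    const double e = 2.7182818284590;            // e to 14 digits, as quoted
    int n = (int)std::round(x);                  // step 1: nearest integer
    double r = x - n;                            // remainder in [-1/2, +1/2]

    double en = 1.0;                             // step 2: e^|n| by repeated
    for (int i = 0; i < (n < 0 ? -n : n); ++i)   // multiplication (a lookup
        en *= e;                                 // table would be faster)
    if (n < 0) en = 1.0 / en;

    double er = 1.0 + r + r*r/2 + r*r*r/6        // step 3: the polynomial
              + r*r*r*r/24 + r*r*r*r*r/120;
    return en * er;
}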
EDIT:
If you are interested in the original implementation in the GNU libc library then you can download the sources from here.

Related

How to write this floating point code in a portable way?

I am working on a cryptocurrency and there is a calculation that nodes must make:
average /= total;
double ratio = average/DESIRED_BLOCK_TIME_SEC;
int delta = -round(log2(ratio));
It is required that every node has the exact same result no matter what architecture or stdlib is being used by the system. My understanding is that log2 might have different implementations that yield very slightly different results, or that flags like -ffast-math could impact the output.
Is there a simple way to convert the above calculation to something that is verifiably portable across different architectures (fixed point?), or am I overthinking the precision that is needed (given that I round the answer at the end)?
EDIT: Average is a long and total is an int... so average ends up rounded to the closest second.
DESIRED_BLOCK_TIME_SEC = 30.0 (it's a float) and is #defined.
For this kind of calculation to be exact, one must either calculate all the divisions and logarithms exactly -- or one can work backwards.
-round(log2(x)) == round(log2(1/x)), meaning that one of the divisions can be turned around to get (1/x) >= 1.
round(log2(x)) == floor(log2(x * sqrt(2))) == binary_log((int)(x*sqrt(2))).
One minor detail here is whether (double)sqrt(2) rounds down or up. If it rounds up, then there might exist one or more values x * sqrt(2) == 2^n + epsilon (after rounding), whereas if it rounds down, we would get 2^n - epsilon. One would give the integer value n, the other n-1. Which is correct?
Naturally the correct one is the one whose ratio to the theoretical midpoint x * sqrt(2) is smaller:
x * sqrt(2) / 2^(n-1) < 2^n / (x * sqrt(2)) -- multiply by x*sqrt(2)
x^2 * 2 / 2^(n-1) < 2^n -- multiply by 2^(n-1)
x^2 * 2 < 2^(2*n-1)
In order for this comparison to be exact, x^2 or pow(x,2) must be exact as well on the boundary, and it matters what range the original values are in. A similar analysis can and should be done while expanding x = a/b, so that the inexactness of the division can be mitigated at the cost of possible overflow in the multiplication...
Then again, I wonder how all the other similar applications handle the corner cases, which may not even exist -- and those could be brute force searched assuming that average and total are small enough integers.
EDIT
Because average is an integer, it makes sense to tabulate those exact integer values, which are on the boundaries of -round(log2(average)).
From octave: d = -round(log2((1:1000000)/30.0)); [1 find(d(2:end) ~= d(1:end-1)) + 1]
1 2 3 6 11 22 43 85 170 340 679 1358 2716
5431 10862 21723 43445 86890 173779 347558 695115
All the averages in [1, 2) -> 5
All the averages in [2, 3) -> 4
All the averages in [3, 6) -> 3
...
All the averages in [43445, 86890) -> -11
int a = find_lower_bound(average, table); // linear or binary search
return 5 - a;
No floating point arithmetic needed.
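A minimal C++ sketch of that lookup, assuming average >= 1 (delta_from_average is a made-up name; the bounds are copied from the octave output above and only cover averages up to 10^6):
#include <algorithm>
#include <cstdint>

int delta_from_average(int64_t average) {
    static const int64_t bounds[] = {
        1, 2, 3, 6, 11, 22, 43, 85, 170, 340, 679, 1358, 2716,
        5431, 10862, 21723, 43445, 86890, 173779, 347558, 695115
    };
    // index of the last tabulated boundary <= average (binary search)
    int a = (int)(std::upper_bound(std::begin(bounds), std::end(bounds),
                                   average) - std::begin(bounds)) - 1;
    return 5 - a;
}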

Fast integer solution of x(x-1)/2 = c

Given a non-negative integer c, I need an efficient algorithm to find the largest integer x such that
x*(x-1)/2 <= c
Equivalently, I need an efficient and reliably accurate algorithm to compute:
x = floor((1 + sqrt(1 + 8*c))/2) (1)
For the sake of definiteness I tagged this question C++, so the answer should be a function written in that language. You can assume that c is an unsigned 32-bit int.
Also, if you can prove that (1) (or an equivalent expression involving floating-point arithmetic) always gives the right result, that's a valid answer too, since floating-point on modern processors can be faster than integer algorithms.
If you're willing to assume IEEE doubles with correct rounding for all operations including square root, then the expression that you wrote (plus a cast to double) gives the right answer on all inputs.
Here's an informal proof. Since c is a 32-bit unsigned integer being converted to a floating-point type with a 53-bit significand, 1 + 8*(double)c is exact, and sqrt(1 + 8*(double)c) is correctly rounded. 1 + sqrt(1 + 8*(double)c) is accurate to within one ulp: the sqrt term is less than 2**((32 + 3)/2) = 2**17.5, so the unit in the last place of the sum is less than 1. Thus (1 + sqrt(1 + 8*(double)c))/2 is accurate to within one ulp, since division by 2 is exact.
The last piece of business is the floor. The problem cases here are when (1 + sqrt(1 + 8*(double)c))/2 is rounded up to an integer. This happens if and only if sqrt(...) rounds up to an odd integer. Since the argument of sqrt is an integer, the worst cases look like sqrt(z**2 - 1) for positive odd integers z, and we bound
z - sqrt(z**2 - 1) = z * (1 - sqrt(1 - 1/z**2)) >= 1/(2*z)
by Taylor expansion. Since z is less than 2**17.5, the gap to the nearest integer is at least 1/2**18.5 on a result of magnitude less than 2**17.5, which means that this error cannot result from a correctly rounded sqrt.
Adopting Yakk's simplification, we can write
(uint32_t)(0.5 + sqrt(0.25 + 2.0*c))
without further checking.
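Wrapped up as a self-contained function (the name is mine), under the stated assumption of IEEE doubles with correctly rounded sqrt:
#include <cmath>
#include <cstdint>

// Largest x with x*(x-1)/2 <= c, per the proof above.
uint32_t triangular_root(uint32_t c) {
    return (uint32_t)(0.5 + std::sqrt(0.25 + 2.0 * (double)c));
}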
If we start with the quadratic formula, we quickly reach x = 1/2 + sqrt(1/4 + 2c); taking the floor of that amounts to rounding sqrt(1/4 + 2c) to the nearest integer, rounding up at 1/2 or higher.
Now, if you do that calculation in floating point, there can be inaccuracies.
There are two approaches to deal with these inaccuracies. The first would be to carefully determine how big they are, determine if the calculated value is close enough to a half for them to be important. If they aren't important, simply return the value. If they are, we can still bound the answer to being one of two values. Test those two values in integer math, and return.
However, we can do away with that careful bit, and note that sqrt(1/4 + 2c) is going to have an error less than 0.5 if the values are 32 bits, and we use doubles. (We cannot make this guarantee with floats, as by 2^31 the float cannot handle +0.5 without rounding).
In essence, we use the quadratic formula to reduce it to two possibilities, and then test those two.
#include <cassert>
#include <cmath>
#include <cstdint>

uint64_t eval(uint64_t x) {
    return x * (x - 1) / 2;
}

unsigned solve(unsigned c) {
    double test = std::sqrt(0.25 + 2. * c);
    if (eval((uint64_t)test + 1) <= c)
        return (unsigned)test + 1;
    assert(eval((uint64_t)test) <= c);
    return (unsigned)test;
}
Note that converting a positive double to an integral type rounds towards 0. You can insert floors if you want.
This may be a bit tangential to your question. But what caught my attention is the specific formula. You are trying to find the triangular root of T(n-1) (where T(n) is the nth triangular number).
I.e.:
T(n) = n * (n + 1) / 2
and
T(n) - n = T(n-1) = n * (n - 1) / 2
From the nifty trick described here, for T(n) we have:
n = int(sqrt(2 * c))
Looking for n such that T(n-1) ≤ c in this case doesn't change the definition of n, for the same reason as in the original question.
Computationally, this saves a few operations, so it's theoretically faster than the exact solution (1). In reality, it's probably about the same.
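As code, the trick is a one-liner (the name is mine); as the plots below show, it can disagree with the exact answer near boundaries:
#include <cmath>
#include <cstdint>

uint32_t triangular_root_approx(uint32_t c) {
    return (uint32_t)std::sqrt(2.0 * (double)c);
}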
Neither this solution nor the one presented by David is as "exact" as your (1), though.
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(2 * c)) (red) vs exact (white line)]
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(0.25 + 2 * c) + 0.5) (red) vs exact (white line)]
My real point is that triangular numbers are a fun set of numbers that are connected to squares, Pascal's triangle, Fibonacci numbers, et al.
As such there are loads of identities around them which might be used to rearrange the problem in a way that didn't require a square root.
Of particular interest may be that T(n) + T(n-1) = n^2.
I'm assuming you know that you're working with a triangular number, but if you didn't realize that, searching for triangular roots yields a few questions such as this one which are along the same topic.

Fast Exponentiation when only k digits are required - continued

Where I need help...
What I want to do now is translate this solution, which calculates the mantissa of a number, to C++:
n^m = exp10(m log10(n)) = exp(q (m log(n)/q)) where q = log(10)
Finding the first n digits from the result can be done like this:
"the first K digits of exp10(x) = the first K digits of exp10(frac(x))
where frac(x) = the fractional part of x = x - floor(x)."
My attempts (sparked by the math and this code) failed...:
unsigned long long getPrefix(long double pow /*exponent*/, long double length /*length of prefix*/)
{
    long double dummy; // unused but necessary for modf
    long double q = log(10);
    unsigned long long temp = floor(powl(10.0, exp(q * modf(pow * log(2) / q, &dummy) + length - 1)));
    return temp;
}
If anyone out there can correctly implement this solution, I need your help!!
EDIT
Example output from my attempts:
n: 2
m: 0
n^m: 1
Calculated mantissa: 1.16334
n: 2
m: 1
n^m: 2
Calculated mantissa: 2.32667
n: 2
m: 2
n^m: 4
Calculated mantissa: 4.65335
n: 2
m: 98
n^m: 3.16913e+29
Calculated mantissa: 8.0022
n: 2
m: 99
n^m: 6.33825e+29
Calculated mantissa: 2.16596
I'd avoid pow for this. It's notoriously hard to implement correctly. There are lots of SO questions where people got burned by a bad pow implementation in their standard library.
You can also save yourself a good deal of pain by working in the natural base instead of base 10. You'll get code that looks like this:
long double foo = m * logl(n);
foo = fmodl(foo, logl(10.0)) + some_epsilon;
sprintf(some_string, "%.9Lf", expl(foo));
/* boring string parsing code here */
to compute the appropriate analogue of m log(n). Notice that the largest m * logl(n) that can arise is just a little bigger than 2e10. When you divide that by 2^64 and round up to the nearest power of two, you see that an ulp of foo is 2^-29 at worst. This means, in particular, that you cannot get more than 8 digits out of this method using long doubles, even with a perfect implementation.
some_epsilon will be the smallest long double that makes expl(foo) always exceed the mathematically correct result; I haven't computed it exactly, but it should be on the order of 1e-9.
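Fleshing out the skeleton above into a runnable sketch (firstDigits is a hypothetical name, and some_epsilon is set to the rough 1e-9 estimate, unverified):
#include <cmath>
#include <cstdio>

// First k digits of n^m via frac(m*ln(n)) reduced mod ln(10); k <= 8, per
// the precision limit discussed above.
unsigned long long firstDigits(unsigned n, unsigned m, int k) {
    const long double some_epsilon = 1e-9L;        // assumed value, see text
    long double foo = m * logl((long double)n);    // m * ln(n)
    foo = fmodl(foo, logl(10.0L)) + some_epsilon;  // reduce mod ln(10)
    char buf[32];
    snprintf(buf, sizeof buf, "%.9Lf", expl(foo)); // mantissa in [1, 10)
    unsigned long long result = 0;                 // the boring string
    int taken = 0;                                 // parsing part
    for (int i = 0; buf[i] && taken < k; ++i) {
        if (buf[i] == '.') continue;
        result = result * 10 + (unsigned)(buf[i] - '0');
        ++taken;
    }
    return result;
}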
In light of the precision difficulties here, I might suggest using a library like MPFR instead of long doubles. You may also be able to get something to work using a double-double trick and quad-precision exp, log, and fmod.

To find the power of a decimal number with a floating-point exponent, without using the math library [duplicate]

This question already has answers here:
How can I write a power function myself?
#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

int main()
{
    double x, y, z;
    cin >> x >> y;
    z = exp(y * log(x));
    cout << z;
    system("pause");
    return 0;
}
This is code to find the power of a number whose exponent is a floating point number, i.e. 2.3^2.3. If we do it using logs and antilogs we can get the answer easily, but my interview question was to find the power without using any math library in C++. I googled it and was not able to understand some of the references from Google.
You can always implement exp() and log() yourself.
And it's actually easier to implement 2^x and log2(x) for the purpose, and to use them in the same way as exp() and log().
2^x = 2^(integer_part(x) + fractional_part(x)) = 2^integer_part(x) * 2^fractional_part(x)
2^fractional_part(x) can be calculated for -1 <= x <= +1 using a Taylor series expansion.
And then multiplying by 2^integer_part(x) amounts to adjusting the exponent part of the floating point number by integer_part(x), or you can indeed raise 2 to the integer power integer_part(x) and multiply by that.
Similarly, log2(x) = log2(x * 2^N) - N,
where N is an integer chosen such that 0.5 <= x * 2^N <= 1 (or, alternatively, between 1 and 2).
After choosing N, again, we can use a Taylor series expansion to calculate log2(x * 2^N).
And that's all, just a little bit of math.
EDIT: It's also possible to use approximating polynomials instead of Taylor series; they are more efficient. Thanks to Eric Postpischil for the reminder. But you'd probably need a math reference to find or construct those.
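A minimal sketch along these lines, with fixed term counts chosen purely for illustration (not tuned, and without the polynomial refinements from the EDIT):
// 2^x via 2^integer_part(x) * 2^fractional_part(x), and log2(x) via scaling
// into (0.5, 1] followed by an atanh-style series for the natural log.
const double LN2 = 0.69314718055994530942;

double pow2(double x) {
    long n = (long)x;                       // integer part (toward zero)
    double f = x - n;                       // fractional part in (-1, +1)
    double term = 1.0, sum = 1.0, y = f * LN2;
    for (int k = 1; k < 30; ++k) {          // Taylor series of e^(f*ln 2)
        term *= y / k;
        sum += term;
    }
    double p = 1.0, base = (n >= 0) ? 2.0 : 0.5;
    for (long i = (n >= 0 ? n : -n); i > 0; --i)
        p *= base;                          // multiply by 2^integer_part(x)
    return sum * p;
}

double log2_approx(double x) {              // assumes x > 0
    int n = 0;
    while (x > 1.0)  { x *= 0.5; ++n; }     // scale into (0.5, 1]
    while (x <= 0.5) { x *= 2.0; --n; }
    double t = (x - 1.0) / (x + 1.0), t2 = t * t;
    double term = t, sum = 0.0;
    for (int k = 0; k < 30; ++k) {          // ln(x) = 2 * sum t^(2k+1)/(2k+1)
        sum += term / (2 * k + 1);
        term *= t2;
    }
    return (2.0 * sum) / LN2 + n;
}

double mypow(double x, double y) {          // x^y for x > 0
    return pow2(y * log2_approx(x));
}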
You could use Taylor series expansions for ln(x) and e^x:
ln(x) = 2 * sum[ ((x-1)/(x+1))^(2n-1) / (2n-1), n=1..inf ]
= 2 [ (x-1)/(x+1) + (1/3)( (x-1)/(x+1) )^3 + (1/5)( (x-1)/(x+1) )^5 + (1/7) ( (x-1)/(x+1) )^7 + ... ]
e^x = sum( x^n / n!, n = 0 .. inf )
= 1/1 + x/1 + x^2 / 2 + x^3 / 6 + ...
Here you could implement the integral powers as a for-loop and continue the expansion for the desired approximation (a sketch follows below). Then plug in your values, and badda-bing, badda-boom. Note that the convergence regions for the above are x > 0 for ln(x) and all values for e^x.
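A direct transcription of those two series as for-loops (the term counts are arbitrary illustration, not tuned):
// ln(x) for x > 0 and e^x, straight from the expansions above.
double ln_series(double x) {
    double t = (x - 1.0) / (x + 1.0), t2 = t * t;
    double term = t, sum = 0.0;
    for (int n = 1; n <= 59; n += 2) {  // n runs over the odd indices 2k-1
        sum += term / n;
        term *= t2;
    }
    return 2.0 * sum;
}

double exp_series(double x) {
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < 40; ++n) {
        term *= x / n;                  // builds x^n / n! incrementally
        sum += term;
    }
    return sum;
}
// Then x^y = exp_series(y * ln_series(x)) for x > 0.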

Optimising code for modular arithmetic

I am trying to calculate the expression below for large numbers:
N!/((N/2)!^2)
Since the value of this expression will be very large, I just need its value modulo some prime number. Suppose the value of this expression is x and I choose the prime number 1000000007; I'm looking for x % 1000000007.
Here is my code.
#include <iostream>
#define MOD 1000000007
using namespace std;

int main()
{
    unsigned long long A[1001];
    A[2] = 2;
    for (int i = 4; i <= 1000; i += 2)
    {
        A[i] = ((4 * A[i-2]) / i) % MOD;
        A[i] = (A[i] * (i-1)) % MOD;
    }
    while (1)
    {
        int N;
        cin >> N;
        cout << A[N];
    }
}
But even this much optimisation is failing for large values of N. For example, if N is 50, the correct output is 605552882, but this gives me 132924730. How can I optimise it further to get the correct output?
Note : I am only considering N as even.
When you do modular arithmetic, there is no such operation as division. Instead, you take the modular inverse of the denominator and multiply. The modular inverse is computed using the extended Euclidean algorithm, discovered by Étienne Bézout in 1779:
# return y such that x * y == 1 (mod m)
function inverse(x, m)
    a, b, u := 0, m, 1
    while x > 0
        q, r := divide(b, x)
        x, a, b, u := b % x, u, x, a - q * u
    if b == 1 return a % m
    error "must be coprime"
The divide function returns both quotient and remainder. All of the assignment operators given above are simultaneous assignment, where all of the right hand sides are computed first, then all of the left hand sides are assigned simultaneously. You can see more about modular arithmetic at my blog.
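Here is a C++ rendering of that pseudocode, plus a sketch of how it applies to the question's expression (binom_center is a made-up helper; it recomputes the factorials from scratch rather than patching the question's recurrence):
#include <cstdint>
#include <stdexcept>

// Returns y such that (x * y) % m == 1; x and m must be coprime.
int64_t inverse(int64_t x, int64_t m) {
    int64_t a = 0, b = m, u = 1;
    while (x > 0) {
        int64_t q = b / x, r = b % x;
        int64_t na = u, nu = a - q * u;   // the simultaneous assignment, unrolled
        b = x; x = r; a = na; u = nu;
    }
    if (b == 1) return ((a % m) + m) % m; // normalize into [0, m)
    throw std::runtime_error("must be coprime");
}

// N!/((N/2)!^2) mod MOD for even N: multiply by N!, then by the modular
// inverse of (N/2)! twice, never dividing.
int64_t binom_center(int64_t N) {
    const int64_t MOD = 1000000007;
    int64_t num = 1, half = 1;
    for (int64_t i = 1; i <= N; ++i) {
        num = num * (i % MOD) % MOD;
        if (i <= N / 2) half = half * (i % MOD) % MOD;
    }
    int64_t inv = inverse(half, MOD);
    return num * inv % MOD * inv % MOD;
}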
For starters, no modular division is needed at all; your formula can be rewritten as follows:
N!/((N/2)!^2)
= (1·2·3...N)/((1·2·3...N/2)·(1·2·3...N/2))
= ((N/2+1)...N)/(1·2·3...N/2)
OK, now you are dividing a bigger number by a smaller one,
so you can interleave the multiplications of dividend and divisor
so that both sub-results have similar magnitude.
Any time both numbers are divisible by 2, shift them right;
this will ensure that they do not overflow.
If you are at the end of (N/2)!, then continue the multiplication only for the rest.
Any time both sub-results are divisible by anything, divide them by it,
until you are left with a division by 1.
After this you can multiply with modulo arithmetic till the end normally.
for more advanced approach see this.
N! and (N/2)! are decomposable much further than it seems at first look.
I solved this some time ago...
Here is what I found: Fast exact bigint factorial.
In short, your terms N! and ((N/2)!)^2 will disappear completely.
Only a simple prime decomposition + a 4N <-> 1N correction will remain.
solution:
I. (4N)! = ((2N)!)^2 · mul(i = all primes <= 4N) of [ i ^ sum(j = 1,2,3,... while i^j <= 4N) of [ (4N/(i^j)) % 2 ] ]
II. (4N)!/((4N/2)!^2) = (4N)!/((2N)!^2)
----------------------------------------
I.=II. (4N)!/((2N)!^2) = mul(i = all primes <= 4N) of [ i ^ sum(j = 1,2,3,... while i^j <= 4N) of [ (4N/(i^j)) % 2 ] ]
The only thing is that N must be divisible by 4 ... therefore 4N in all terms.
If you have N % 4 != 0, then solve for N - N%4 and correct the result by the missing 1-3 numbers (a C++ sketch follows below).
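A sketch of formula I.=II. in C++, written for a general even M; note the generalization past multiples of 4 is my assumption (the integer divisions appear to absorb the rounding), not the author's claim:
#include <cstdint>
#include <vector>

// M!/((M/2)!^2) mod MOD for even, moderate M: for each prime i <= M the
// exponent is the sum over j of (M / i^j) % 2, using integer division.
int64_t central_binom_mod(int64_t M, int64_t MOD) {
    std::vector<bool> composite(M + 1, false);  // simple sieve of primes
    int64_t result = 1;
    for (int64_t i = 2; i <= M; ++i) {
        if (composite[i]) continue;
        for (int64_t q = i * i; q <= M; q += i) composite[q] = true;
        int64_t e = 0;
        for (int64_t ij = i; ij <= M; ij *= i) // i^j while i^j <= M
            e += (M / ij) % 2;
        while (e-- > 0) result = result * i % MOD;
    }
    return result;
}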
hope it helps