Calculation and data limits - C++

I have a task:
On grid paper (a standard squared notebook for math) a rectangle of size NxM (integers) is drawn. How many different rectangles does this rectangle contain?
The maximum value of N and M is 10^9 (1 000 000 000).
If the result is >= 10^9 + 7, output the result mod (10^9 + 7).
I know the formula:
M*(M+1) * N*(N+1) / 4
and implemented it in C++:
#include <iostream>
#include <cmath>
#include <iomanip>
int main()
{
    long double n, m;
    std::cin >> n >> m;
    long double n1 = (n * (n + 1) / 2);
    long double m1 = (m * (m + 1) / 2);
    long double count = std::fmod((n1 * m1), 1000000007);
    std::cout << std::fixed << std::setprecision(0) << count;
    return 0;
}
But when I tested with 1000000000 x 1000000000, my program displayed 499881764, while the Windows calculator and other calculators displayed 441. =_=
What did I do wrong? I would be very grateful if someone could show a code example of a correct solution.

You're losing precision in the long double type: the fact that your observed output is a multiple of a power of 2 is a telltale sign of this effect.
Since you're using Windows, my money is on long double being a 64-bit IEEE 754 double-precision floating-point type (i.e. the same as double on that platform), which gives you 53 bits of precision.
You could switch to an arbitrary-precision library, or Google "Schrage's algorithm" for a clever way of computing the modulus of a product.
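For this particular task, though, full bignum support isn't necessary: since N and M are at most 10^9, every intermediate product fits in 64 bits if you reduce modulo 10^9 + 7 along the way. A minimal sketch of the integer-only approach (variable names are just illustrative):

#include <cstdint>
#include <iostream>

int main()
{
    const std::uint64_t MOD = 1000000007ULL;
    std::uint64_t n, m;
    std::cin >> n >> m;
    // n*(n+1) <= ~1e18 for n <= 1e9, so it fits in 64 bits,
    // and it is always even, so the division by 2 is exact.
    std::uint64_t t1 = (n * (n + 1) / 2) % MOD;
    std::uint64_t t2 = (m * (m + 1) / 2) % MOD;
    // t1 and t2 are both < 1e9+7, so t1*t2 < ~1e18 -- still no overflow.
    std::cout << (t1 * t2) % MOD << '\n';
    return 0;
}

For 1000000000 x 1000000000 this stays exact the whole way through and prints the expected 441.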

Related

Efficient way of checking the length of a double in C++

Say I have a number, 100000. I can use some simple maths to check its size, i.e. log(100000) -> 5 (base-10 logarithm). There's also another way of doing this, which is quite slow: std::string num = std::to_string(100000); num.size(). Is there a way to mathematically determine the length of a number? (Not just 100000, but for things like 2313455, 123876132, etc.)
Why not use ceil? It rounds up to the nearest whole number, so you can just wrap it around your log function, and add a check afterwards to catch the fact that an exact power of 10 would return 1 less than expected (see the sketch below).
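A sketch of that idea (digitCount is just an illustrative name; assumes a positive input):

#include <cmath>

// Number of decimal digits of a positive value, per the ceil/log10 suggestion.
int digitCount(double n) {
    int d = static_cast<int>(std::ceil(std::log10(n)));
    if (std::pow(10.0, d) == n) ++d; // exact powers of 10 come out one short
    return d;
}

For example, digitCount(2313455) computes ceil(6.364...) = 7, while digitCount(100000) hits the power-of-10 check and returns 6.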
Here is a solution to the problem using single precision floating point numbers in O(1):
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>

int main(){
    float x = 500; // to be converted
    uint32_t f;
    std::memcpy(&f, &x, sizeof(uint32_t)); // reinterpret the float as a manageable int
    uint8_t exp = (f & (0b11111111 << 23)) >> 23; // extract the 8-bit exponent field
    exp -= 127; // remove the floating point bias
    exp /= 3.32; // divide by log2(10) ~= 3.32; this rounds, but for this case it should be fine
    std::cout << std::to_string(exp) << std::endl;
}
For a number in scientific notation a*10^e, this will return e (when 1 <= a < 10), so the length of the number (if its absolute value is larger than 1) will be exp + 1.
For double precision this works too, but you have to adapt it: the bias is 1023, and the bit layout is different.
This only works for floating-point numbers, though, so it's probably not very useful in this case. The efficiency relative to the logarithm will also be determined by the speed of the int -> float conversion.
Edit:
I just realised the question was about double. The modified result is:
#include <cstdint>
#include <cstring>

int16_t getLength(double a){
    uint64_t bits;
    std::memcpy(&bits, &a, sizeof(uint64_t));
    int16_t exp = (bits >> 52) & 0b11111111111; // 11-bit exponent field; there is no 11-bit int type, so this has to do
    exp -= 1023; // remove the double-precision bias
    exp /= 3.32; // divide by log2(10)
    return exp + 1;
}
There are some changes so that it behaves better (and also does less shifting).
You can also use frexp() to get the exponent without bias.
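For example, a tiny sketch using frexp (approxLog10 is just an illustrative name; the scale factor log10(2) ~= 0.30103 converts the base-2 exponent into a base-10 estimate):

#include <cmath>

// Rough base-10 exponent of x: frexp decomposes x = mantissa * 2^exp, 0.5 <= mantissa < 1.
int approxLog10(double x) {
    int exp;
    std::frexp(x, &exp);
    return static_cast<int>(exp * 0.30103); // multiply by log10(2) ~= 0.30103
}

For x = 500, frexp yields exp = 9, and 9 * 0.30103 truncates to 2 (log10(500) is about 2.70).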
If the number is whole, keep dividing by 10 until you reach 0; you'd have to divide 100000 six times, for example. For the fractional part, keep multiplying by 10 until trunc(f) == f.
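In code, the whole-part count might look like this (wholePartLength is a hypothetical helper; it assumes the value fits in a long long):

#include <cmath>

// Count digits of the whole part by repeated division by 10.
int wholePartLength(double f) {
    long long w = static_cast<long long>(std::trunc(f));
    if (w < 0) w = -w;
    int count = 0;
    do { w /= 10; ++count; } while (w != 0); // 100000 takes six divisions
    return count;
}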

C++: Calculating with large numbers (> 1 billion) and percentages

I want to calculate a price for an element using a percentage of the current value. For example: an apple costs $4 right now, and for every apple you buy, the price increases by 7%. So the first apple would cost $4, the next $4.28 (price + 7% of $4), the next $4.58 (price + 7% of $4.28), and so on.
My script sort of works, but as the numbers grow larger, I struggle to find the right data type for this.
Since I have to round to two decimal places, I'm doing something like this:
int64_t c = 0;
double ergebnis = 0;
double komma1 = 0;
komma1 = komma1 + ((komma1 / 100) * 7);
c = komma1 * 100;
ergebnis = c / 100.0;
The problem is that if I bought 50000 apples, the numbers would grow so large that the result would just become a negative number. I'm using Visual Studio 2017 Community Edition.
#include "stdafx.h"
#include "stdint.h"
#include "iostream"
#include "Windows.h"
using namespace std;
int main()
{
    cout.precision(40);
    int price = 4;
    long double cost = 0;
    long double b = 4;
    int d = 0;
    while (d != 50000)
    {
        cost = price + (price / 100 * 7);
        price = cost * 100.00;
        cost = price / 100.00;
        d = d++;
    }
    cout << "Number is: " << cost << endl;
    Sleep(5000);
    return 0;
}
This pretty much describes my problem.
I already used Google and found the suggestion to use int64_t as the integer type, but I think the real problem here is the double, right?
So, tl;dr:
Any way to use pretty large numbers in C++?
Any way to solve my problem using another approach? Like splitting the numbers up or something? For example:
if (number == 1000000)
{
    million = million++;
    number = 4;
}
In your edited code:
First, this:
d = d++;
may be undefined (you should get a compilation warning), like this:
warning: operation on 'd' may be undefined [-Wsequence-point]
d = d++;
~~^~~~~
Replace this line of code with:
d++;
which is equivalent to:
d = d + 1;
price is an int. You then perform the integer division price / 100, which gives an int as its result! So 4/100 in integer division yields 0. Cast one operand to a floating-point value to get a division that yields a floating-point result, e.g. (long double)price / 100.
However, what you describe as your algorithm doesn't match your expected results. You say that you want to charge from 4$, to 4.28$, then to 4.58$. The formula to achieve this is not the one you describe, but this:
cost = cost + cost * 7%
Moreover, these two lines of code make no sense:
price = cost * 100.00;
cost = price / 100.00;
just discard them.
Furthermore, since you know how many times you want the iteration to execute, use a for loop, instead of a while loop.
Putting everything together, you get:
#include <iostream>
using namespace std;

int main()
{
    cout.precision(40);
    long double cost = 4;
    for (int i = 0; i < 50000; ++i)
    {
        cost = cost + ((long double)cost / 100 * 7);
    }
    cout << "Cost is: " << cost << endl;
    return 0;
}
Your attempt to calculate compound interest went horribly wrong. The program doesn't do anything remotely similar to what your description says it should do.
Here's what your loop should look like:
long double price = 4.0;
for (int i = 0; i < 50000; ++i)
{
    price = price * 1.07; // == (100.0 + 7.0) / 100.0
}
The regular double doesn't have a range to represent your price, but long double is OK. If you want to go bigger, you need a third party large numbers library. Note that the nearest integer to your answer has ~1500 decimal digits, so calculations with such numbers can be rather slow.
The same result can be obtained without looping:
std::cout << "The number is " << 4 * std::pow(1.07L, 50000) << "\n";
Try long double instead of double; see "Difference between long double and double in C and C++".
Beware of floating-point precision and of conversions when performing calculations that are sensitive to precision.
Any way to use pretty large numbers in C++?
Use a library, like Gnu Multiple Precision (GMP):
"a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface."
However, these libraries are for advanced usage. Make sure you really need such large numbers (processing them is time-consuming).
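For illustration, a minimal sketch of the apple-price loop with GMP's C++ interface (gmpxx.h; link with -lgmpxx -lgmp). One caveat: the literal 1.07 is itself a binary double and therefore very slightly inexact:

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpf_class price(4, 512);     // 4, with 512 bits of working precision
    for (int i = 0; i < 50000; ++i)
        price *= 1.07;           // caveat: 1.07 as a double is slightly inexact
    std::cout << price << '\n';  // magnitude ~1e1470, printed at the stream's precision
    return 0;
}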

distribute two integers according to ratio

Say I have some integer n and would like to subdivide it into two other integers according to some ratio. I have an approach, but I'm not sure whether it always works.
For example: 20 with ratio 70% should be subdivided into 14 and 6.
The obvious solution would be:
int n = 20;
double ratio = .7;
int n1 = static_cast<int>(n * ratio);
int n2 = static_cast<int>(n * (1 - ratio));
Since the cast always truncates, however, I usually underestimate my result. If I use std::round, there are still cases that don't work; for example, if the first decimal place is a 5, both numbers will be rounded up.
Some colleagues suggested: ceil the first number and floor the second one. In most of my tests this works, however:
1) Does it really always work, taking into account the rounding errors that naturally occur when multiplying floating-point numbers? What I have in mind: 20*.7 could be 14, while 20*.3 could be 5.999999, so my sum might be 14 + 5 = 19. This is just my guess; I do not know whether these kinds of results can actually occur (otherwise the answer would simply be that this kind of rounding proposition does not work).
2) Even if it does work... why?
(I have in mind that I could just calculate number 1 by n * ratio and calculate number 2 by n - n * ratio, but I would still be interested in the answer to this question)
How about this?
int n = 20;
double ratio = .7;
int n1 = static_cast<int>(n * ratio);
int n2 = n - n1;
Here is an example that confirms your suspicion and shows that the ceil+floor method doesn't always work. It is caused by the finite precision of floating-point numbers:
#include <iostream>
#include <cmath>
int main() {
    int n = 10;
    double ratio = 0.7;
    int n1 = static_cast<int>(std::floor(n * ratio));
    int n2 = static_cast<int>(std::ceil(n * (1.0 - ratio)));
    std::cout << n1 << " " << n2 << std::endl;
}
Output:
7 4
7 + 4 is 11, so it's wrong.
Your solution doesn't always work either: take a ratio of 77% and you'll get 15 and 4 (see it on coliru).
Welcome to the domain of numerical analysis.
First, your computer can't store most floating-point numbers exactly. As you can see in the example, .77 is stored as 0.77000000000000001776 (it is an approximation of the number by a sum of powers of 2).
When doing floating-point calculations, you will always have some loss of precision. You can query the machine precision with std::numeric_limits<double>::epsilon().
Moreover, you lose still more precision when converting from a floating-point number to an integer, and in your case the difference is big enough to give you an inconsistent result.
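You can see the stored value for yourself by printing with extra digits:

#include <iostream>
#include <iomanip>

int main() {
    double ratio = 0.77;
    // Prints the binary double nearest to 0.77.
    std::cout << std::setprecision(20) << ratio << '\n'; // 0.77000000000000001776
    return 0;
}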
The solution provided by @ToniBig and your last sentence have the advantage of "hiding" this loss and keeping the data consistent.

long double vs long int

I'm writing a program that calculates lottery probabilities.
The specification is: choose 5 numbers out of 47, and 1 out of 27.
So I did the following:
#include <iostream>
long int choose(unsigned n, unsigned k);
long int factorial(unsigned n);
int main(){
    using namespace std;
    long int regularProb, megaProb;
    regularProb = choose(47, 5);
    megaProb = choose(27, 1);
    cout << "The probability of the correct number is 1 out of " << (regularProb * megaProb) << endl;
    return 0;
}

long int choose(unsigned n, unsigned k){
    return factorial(n) / (factorial(k) * factorial(n-k));
}

long int factorial(unsigned n){
    long int result = 1;
    for (int i = 2; i <= n; i++) result *= i;
    return result;
}
However, the program doesn't work. It calculates for 30 seconds, then gives me Process 4 exited with code -1,073,741,676. I have to change all the long int to long double, but that loses precision. Is it because long int is too short for the big values? I thought long int was 64-bit nowadays? My compiler is g++ win32 (64-bit host).
Whether long is 64-bit or not depends on the platform's data model. Windows uses a 32-bit long. Use int64_t from <stdint.h> if you need to ensure it is 64 bits.
But even if long is 64-bit it is still too small to hold factorial(47).
47! == 2.58623242e+59
2^64 == 1.84467441e+19
although 47C5 is way smaller than that.
You should never use nCr == n!/(r! (n-r)!) directly to do the calculation, as it overflows easily. Instead, cancel the n!/(n-r)! part to get:

           47 * 46 * 45 * 44 * 43
    47C5 = ----------------------
            5 * 4 * 3 * 2 * 1

This can be handled even by a 32-bit integer.
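A sketch of that cancellation in code, multiplying and dividing incrementally so the intermediates stay small; the division at each step is exact because the running value is itself a binomial coefficient:

#include <cstdint>
#include <iostream>

std::uint64_t choose(unsigned n, unsigned k) {
    std::uint64_t result = 1;
    for (unsigned i = 1; i <= k; ++i)
        result = result * (n - k + i) / i; // result == C(n-k+i, i) after this step
    return result;
}

int main() {
    std::cout << choose(47, 5) << '\n'; // 1533939
    return 0;
}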
BTW, for @Coffee's question: a double only has 53 bits of precision, whereas 47! requires 154 bits. 47! and 42! represented as doubles would be
47! = (0b10100100110011011110001010000100011110111001100100100 << 145) ± (1 << 144)
42! = (0b11110000010101100000011101010010010001101100101001000 << 117) ± (1 << 116)
so the possible range of values of 47! / (42! × 5!) is
max = 0b101110110011111110011.000000000000000000000000000000001001111...
val = 0b101110110011111110010.111111111111111111111111111111111010100...
min = 0b101110110011111110010.111111111111111111111111111111101011010...
(the digits shown are the first 53 bits, a double's full precision; the integer part 0b101110110011111110011 = 1533939)
That's enough to get the exact value of 47C5.
To use a 64-bit integer, use long long (as mentioned here).
KennyTM has it right, you're going to overflow no matter what type you use. You need to approach the problem more smartly and factor out lots of work. If you're ok with an approximate answer, then take a look at Stirling's approximation:
Ln(n!) ~ n Ln(n) - n
So if you have
n!/(k!*(n-k)!)
You could say that's
e^(ln(n!/(k!*(n-k)!)))
which after some math (double-check to make sure I got it right) is
e^(n*ln(n) - k*ln(k) - (n-k)*ln(n-k))
And that shouldn't overflow (but it's an approximate answer).
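A sketch of that formula (chooseApprox is just an illustrative name; being the crude two-term Stirling approximation, it can be off by a sizeable factor for small n, so treat it as an order-of-magnitude estimate):

#include <cmath>

// Approximate n-choose-k using Ln(n!) ~ n*Ln(n) - n; the -n terms cancel.
double chooseApprox(double n, double k) {
    return std::exp(n * std::log(n) - k * std::log(k) - (n - k) * std::log(n - k));
}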
It's easy to calculate binomial coefficients up to 47C5 and beyond without overflow, using standard unsigned long 32-bit arithmetic. See my response to this question: https://math.stackexchange.com/questions/34518/are-there-examples-where-mathematicians-needs-to-calculate-big-combinations/34530#comment-76389

An efficient way to compute mathematical constant e

The standard representation of the constant e as the sum of an infinite series is very inefficient to compute because of the many division operations. Are there any alternative ways to compute the constant efficiently?
Since it's not possible to calculate every digit of 'e', you're going to have to pick a stopping point.
double precision: 16 decimal digits
For practical applications, "the 64-bit double precision floating point value that is as close as possible to the true value of 'e' -- approximately 16 decimal digits" is more than adequate.
As KennyTM said, that value has already been pre-calculated for you in the math library.
If you want to calculate it yourself, as Hans Passant pointed out, factorial already grows very fast.
The first 22 terms of the series are already overkill for calculating to that precision: adding further terms from the series won't change the result if it's stored in a 64-bit double-precision floating-point variable.
I think it will take you longer to blink than for your computer to do 22 divides. So I don't see any reason to optimize this further.
thousands, millions, or billions of decimal digits
As Matthieu M. pointed out, this value has already been calculated, and you can download it from Yee's web site.
If you want to calculate it yourself, that many digits won't fit in a standard double-precision floating-point number.
You need a "bignum" library.
As always, you can either use one of the many free bignum libraries already available, or reinvent the wheel by building yet another bignum library of your own, with its own special quirks.
The result -- a long file of digits -- is not terribly useful, but programs to calculate it are sometimes used as benchmarks to test the performance and accuracy of "bignum" library software, and as stress tests to check the stability and cooling capacity of new machine hardware.
One page very briefly describes the algorithms Yee uses to calculate mathematical constants.
The Wikipedia "binary splitting" article goes into much more detail.
I think the part you are looking for is the number representation:
instead of internally storing all numbers as a long series of digits before and after the decimal point (or a binary point),
Yee stores each term and each partial sum as a rational number -- as two integers, each of which is a long series of digits.
For example, say one of the worker CPUs was assigned the partial sum,
... 1/4! + 1/5! + 1/6! + ... .
Instead of doing the division first for each term, and then adding, and then returning a single million-digit fixed-point result to the manager CPU:
// extended to a million digits
1/24 + 1/120 + 1/720 => 0.0416666 + 0.0083333 + 0.00138888
that CPU can add all the terms in the series together first with rational arithmetic, and return the rational result to the manager CPU: two integers of perhaps a few hundred digits each:
// faster
1/24 + 1/120 + 1/720 => 1/24 + 840/86400 => 106560/2073600
After thousands of terms have been added together in this way, the manager CPU does the one and only division at the very end to get the decimal digits after the decimal point.
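Here is a toy illustration of that rational accumulation with machine integers (a real implementation would use bignum rationals, e.g. GMP's mpq_t; Frac and add are just illustrative names):

#include <cstdint>
#include <iostream>
#include <numeric> // std::gcd (C++17)

struct Frac { std::uint64_t num, den; };

// Exact rational addition, reduced so the 64-bit fields don't overflow too quickly.
Frac add(Frac a, Frac b) {
    Frac r{a.num * b.den + b.num * a.den, a.den * b.den};
    std::uint64_t g = std::gcd(r.num, r.den);
    return {r.num / g, r.den / g};
}

int main() {
    // 1/4! + 1/5! + 1/6!, summed exactly; the one and only division happens at the end.
    Frac sum = add(add({1, 24}, {1, 120}), {1, 720});
    std::cout << sum.num << "/" << sum.den << '\n';              // 37/720
    std::cout << static_cast<double>(sum.num) / sum.den << '\n'; // 0.0513889
    return 0;
}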
Remember to avoid PrematureOptimization, and
always ProfileBeforeOptimizing.
If you're using double or float, there is an M_E constant in math.h already.
#define M_E 2.71828182845904523536028747135266250 /* e */
There are other representations of e at http://en.wikipedia.org/wiki/Representations_of_e#As_an_infinite_series; all of them involve division.
I'm not aware of any "faster" computation than the Taylor expansion of the series, i.e.:
e = 1/0! + 1/1! + 1/2! + ...
or
1/e = 1/0! - 1/1! + 1/2! - 1/3! + ...
Considering that these were used by A. Yee, who calculated the first 500 billion digits of e, I guess there's not much optimising to be done (or rather, it could be optimised, but nobody has found a way yet, AFAIK)
EDIT
A very rough implementation
#include <iostream>
#include <iomanip>

using namespace std;

double gete(int nsteps)
{
    // Let's skip the first two terms
    double res = 2.0;
    double fact = 1;
    for (int i = 2; i < nsteps; i++)
    {
        fact *= i;
        res += 1/fact;
    }
    return res;
}

int main()
{
    cout << setprecision(50) << gete(10) << endl;
    cout << setprecision(50) << gete(50) << endl;
}
Outputs
2.71828152557319224769116772222332656383514404296875
2.71828182845904553488480814849026501178741455078125
This page has a nice rundown of different calculation methods.
This is a tiny C program from Xavier Gourdon to compute 9000 decimal digits of e on your computer. A program of the same kind exists for π and for some other constants defined by means of hypergeometric series.
[degolfed version from https://codereview.stackexchange.com/a/33019 ]
#include <stdio.h>

int main() {
    int N = 9009, a[9009], x = 0;
    for (int n = N - 1; n > 0; --n) {
        a[n] = 1;
    }
    a[1] = 2;
    a[0] = 0; // read via a[n-1] when n == 1, so it must be initialized
    while (N > 9) {
        int n = N--;
        while (--n) {
            a[n] = x % n;
            x = 10 * a[n-1] + x/n;
        }
        printf("%d", x);
    }
    return 0;
}
This program [when code-golfed] has 117 characters. It can be changed to compute more digits (change the value 9009 to more) and to be faster (change the constant 10 to another power of 10 and the printf command). A not so obvious question is to find the algorithm used.
I gave this answer at CodeReview on the question regarding computing e by its definition via Taylor series (so, other methods were not an option). The cross-post here was suggested in the comments. I've removed my remarks relevant to that other topic; those interested in further explanations might want to check the original post.
The solution in C (should be easy enough to adapt to C++):
#include <stdio.h>
#include <math.h>

int main ()
{
    long double n = 0, f = 1;
    int i;
    for (i = 28; i >= 1; i--) {
        f *= i; // f = 28*27*...*i = 28! / (i-1)!
        n += f; // n = 28 + 28*27 + ... + 28! / (i-1)!
    }           // n = 28! * (1/0! + 1/1! + ... + 1/27!), f = 28!
    n /= f;
    printf("%.64Lf\n", n);
    printf("%.64Lf\n", expl(1));
    printf("%Lg\n", n - expl(1));
    printf("%d\n", n == expl(1));
}
Output:
2.7182818284590452354281681079939403389289509505033493041992187500
2.7182818284590452354281681079939403389289509505033493041992187500
0
1
There are two important points:
This code doesn't compute 1, 1*2, 1*2*3,... which is O(n^2), but computes 1*2*3*... in one pass (which is O(n)).
It starts from smaller numbers. If we tried to compute
1/1 + 1/2 + 1/6 + ... + 1/20!
and tried to add 1/21! to it, we'd be adding
1/21! = 1/51090942171709440000 ≈ 2E-20
to 2.something, which has no effect on the result (a double holds about 16 significant digits); the small term is simply absorbed and lost to rounding.
However, when we start with the small terms, i.e., if we compute 1/28! + 1/27! + ..., they all have some impact.
This solution seems in accordance with what C computes with its expl function, on my 64-bit machine, compiled with gcc 4.7.2 20120921.
You may be able to gain some efficiency. Since each term involves the next factorial, you can gain some efficiency by remembering the last value of the factorial.
e = 1 + 1/1! + 1/2! + 1/3! ...
Expanding the equation:
e = 1 + 1/(1 * 1) + 1/(1 * 1 * 2) + 1/(1 * 2 * 3) ...
Instead of computing each factorial from scratch, the denominator is multiplied by the next increment. So keeping the denominator in a variable and multiplying it up each step produces some optimization, as in the sketch below.
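In code the running denominator looks like this (essentially what the gete() example earlier does; eApprox is an illustrative name):

// Running-denominator sketch: denom takes the values 1!, 2!, 3!, ... in turn.
double eApprox(int terms) {
    double e = 1.0, denom = 1.0;
    for (int i = 1; i <= terms; ++i) {
        denom *= i;       // i! built from (i-1)! with a single multiplication
        e += 1.0 / denom;
    }
    return e;
}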
If you're ok with an approximation up to seven digits, use
3-sqrt(5/63)
2.7182819
If you want the exact value:
e = (-1)^(1/(j*pi))
where j is the imaginary unit and pi is the well-known mathematical constant (Euler's identity)
There are several "spigot" algorithms which compute digits sequentially in an unbounded manner. This is useful because you can simply calculate the "next" digit through a constant number of basic arithmetic operations, without defining beforehand how many digits you wish to produce.
These apply a series of successive transformations such that the next digit comes to the 1's place, so that they are not affected by float rounding errors. The efficiency is high because these transformations can be formulated as matrix multiplications, which reduce to integer addition and multiplication.
In short, the Taylor series expansion
e = 1/0! + 1/1! + 1/2! + 1/3! ... + 1/n!
Can be rewritten by factoring out fractional parts of the factorials (note that to make the series regular we've moved 1 to the left side):
(e - 1) = 1 + (1/2)*(1 + (1/3)*(1 + (1/4)...))
We can define a series of functions f1(x) ... fn(x) thus:
f1(x) = 1 + (1/2)x
f2(x) = 1 + (1/3)x
f3(x) = 1 + (1/4)x
...
The value of e is found from the composition of all of these functions:
(e-1) = f1(f2(f3(...fn(x))))
We can observe that the value of x in each function is determined by the next function, and that each of these values is bounded to the range [1,2]; that is, for any of these functions, 1 <= x <= 2.
Since this is the case, we can set a lower and upper bound for e by using the values 1 and 2 for x respectively:
lower(e-1) = f1(1) = 1 + (1/2)*1 = 3/2 = 1.5
upper(e-1) = f1(2) = 1 + (1/2)*2 = 2
We can increase precision by composing the functions defined above, and when a digit matches in the lower and upper bound, we know that our computed value of e is precise to that digit:
lower(e-1) = f1(f2(f3(1))) = 1 + (1/2)*(1 + (1/3)*(1 + (1/4)*1)) = 41/24 = 1.708333
upper(e-1) = f1(f2(f3(2))) = 1 + (1/2)*(1 + (1/3)*(1 + (1/4)*2)) = 7/4 = 1.75
Since the ones and tenths digits match, we can say that an approximation of (e-1) correct to tenths is 1.7. When the first digit matches between the upper and lower bounds, we subtract it off and then multiply by 10; this way the digit in question is always in the one's place, where floating-point precision is high.
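A small floating-point sketch of this bounding process (the real algorithm keeps exact integers via the matrix formulation described below, precisely to avoid rounding; compose is an illustrative name):

#include <iostream>
#include <iomanip>

// Compose f1(f2(...fdepth(x))) where fi(x) = 1 + x/(i+1); the result approximates e-1.
double compose(int depth, double x) {
    for (int i = depth; i >= 1; --i)
        x = 1.0 + x / (i + 1);
    return x;
}

int main() {
    std::cout << std::setprecision(10);
    for (int n = 1; n <= 12; ++n)
        std::cout << 1.0 + compose(n, 1.0) << "  "  // lower bound for e
                  << 1.0 + compose(n, 2.0) << '\n'; // upper bound for e
    return 0;
}

With depth 3 this prints 2.708333 and 2.75, i.e. the 41/24 and 7/4 bounds above, shifted by 1.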
The real optimization comes from the technique in linear algebra of describing a linear function as a transformation matrix. Composing functions maps to matrix multiplication, so all of those nested functions can be reduced to simple integer multiplication and addition. The procedure of subtracting the digit and multiplying by 10 also constitutes a linear transformation, and therefore can also be accomplished by matrix multiplication.
Another explanation of the method:
http://www.hulver.com/scoop/story/2004/7/22/153549/352
The paper that describes the algorithm:
http://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/spigot.pdf
A quick intro to performing linear transformations via matrix arithmetic:
https://people.math.gatech.edu/~cain/notes/cal6.pdf
NB: this algorithm makes use of Möbius transformations, a type of linear transformation described briefly in the Gibbons paper.
From my point of view, the most efficient way to compute e up to a desired precision is to use the following representation:
e := lim (n -> inf) (1 + 1/n)^n
In particular, if you choose n = 2^x, you can compute the power with just x multiplications, since:
a^n = (a^2)^(n/2), if n % 2 = 0
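A sketch with n = 2^x (eByLimit is an illustrative name). Keep in mind that this limit converges only linearly (the error is roughly e/(2n)), so despite the cheap squarings it needs a large x for high accuracy:

#include <cmath>

// (1 + 1/n)^n with n = 2^x, computed with just x squarings.
double eByLimit(int x) {
    double n = std::ldexp(1.0, x); // 2^x
    double a = 1.0 + 1.0 / n;
    for (int i = 0; i < x; ++i)
        a *= a;                    // after x squarings: a^(2^x) = (1 + 1/n)^n
    return a;
}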
The binary splitting method lends itself nicely to a template metaprogram which produces a type representing a rational approximation of e. 13 iterations seems to be the maximum: any more produces an "integral constant overflow" error.
#include <iostream>
#include <iomanip>

template<int NUMER = 0, int DENOM = 1>
struct Rational
{
    enum {NUMERATOR = NUMER};
    enum {DENOMINATOR = DENOM};
    static double value;
};

template<int NUMER, int DENOM>
double Rational<NUMER, DENOM>::value = static_cast<double>(NUMER) / DENOM;

template<int ITERS, class APPROX = Rational<2, 1>, int I = 2>
struct CalcE
{
    typedef Rational<APPROX::NUMERATOR * I + 1, APPROX::DENOMINATOR * I> NewApprox;
    typedef typename CalcE<ITERS, NewApprox, I + 1>::Result Result;
};

template<int ITERS, class APPROX>
struct CalcE<ITERS, APPROX, ITERS>
{
    typedef APPROX Result;
};

int main (int argc, char* argv[])
{
    std::cout << std::setprecision (9);
    // ExpType is the type containing our approximation to e.
    typedef CalcE<13>::Result ExpType;
    // ExpType::value holds the double value.
    std::cout << "e ~ " << ExpType::value << std::endl;
    return 0;
}
Another (non-metaprogram) templated variation will, at compile-time, calculate a double approximating e. This one doesn't have the limit on the number of iterations.
#include <iostream>
#include <iomanip>

template<int ITERS, long long NUMERATOR = 2, long long DENOMINATOR = 1, int I = 2>
struct CalcE
{
    static double result ()
    {
        return CalcE<ITERS, NUMERATOR * I + 1, DENOMINATOR * I, I + 1>::result ();
    }
};

template<int ITERS, long long NUMERATOR, long long DENOMINATOR>
struct CalcE<ITERS, NUMERATOR, DENOMINATOR, ITERS>
{
    static double result ()
    {
        return (double)NUMERATOR / DENOMINATOR;
    }
};

int main (int argc, char* argv[])
{
    std::cout << std::setprecision (16);
    std::cout << "e ~ " << CalcE<16>::result () << std::endl;
    return 0;
}
In an optimised build the expression CalcE<16>::result () will be replaced by the actual double value.
Both are arguably quite efficient since they calculate e at compile time :-)
@nico Re:
..."faster" computation than the Taylor expansion of the series, i.e.:
e = 1/0! + 1/1! + 1/2! + ...
or
1/e = 1/0! - 1/1! + 1/2! - 1/3! + ...
Here are ways to algebraically improve the convergence of Newton’s method:
https://www.researchgate.net/publication/52005980_Improving_the_Convergence_of_Newton's_Series_Approximation_for_e
It appears to be an open question as to whether they can be used in conjunction with binary splitting to computationally speed things up. Nonetheless, here is an example from Damian Conway using Perl that illustrates the improvement in direct computational efficiency for this new approach. It’s in the section titled “𝑒 is for estimation”:
http://blogs.perl.org/users/damian_conway/2019/09/to-compute-a-constant-of-calculusa-treatise-on-multiple-ways.html
From Wikipedia's power series e^x = sum over n >= 0 of x^n/n!, replace x with 1.