I have a question: for calculating simple integer powers of a double, is the pow() function slower than plain multiplication? For example, for 2.71828^4, is pow(2.71828, double(4)) slower than multiplying in a for loop?
I have tried to compare the durations of both approaches, but the timings are not stable; sometimes pow() wins and sometimes plain multiplication wins. Can anyone give me a definitive answer?
My code is as follows:
#include <iostream>
#include <cmath>
#include <ctime>
using namespace std;

double myFunction(double a) {
    double c = 1;
    for (int i = 1; i <= 4; i++)
        c *= a;
    return c;
}

int main() {
    // Accumulate the results so the optimizer cannot drop the loops entirely
    double sink = 0;

    // Measure the time used by the pow function
    clock_t start = clock();
    for (double i = 0; i < 1000000; i = i + 0.001)
        sink += pow(i, 4);
    double durationP = double(clock() - start) / CLOCKS_PER_SEC;
    cout << "the duration for pow function is: " << durationP << "s" << endl;

    // Measure the time used by simple multiplication
    start = clock();
    for (double i = 0; i < 1000000; i = i + 0.001)
        sink += myFunction(i);
    double durationS = double(clock() - start) / CLOCKS_PER_SEC;
    cout << "the duration for simple multiplication is: " << durationS << "s" << endl;

    cout << "checksum (ignore): " << sink << endl;
}
thanks a lot!
Yes, pow is slower than multiplication, and multiplication is slower than addition. The trade-off: for a simple power like pow(x, 2), use x*x instead.
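To make the small-integer-power advice concrete, here is a minimal sketch (my illustration, not part of the answer above) of exponentiation by squaring, which needs only O(log n) multiplications for an integer exponent:

    // Sketch: raise a double to a non-negative integer power by repeated
    // squaring; uses O(log exp) multiplications instead of calling pow().
    double ipow(double base, unsigned exp) {
        double result = 1.0;
        while (exp > 0) {
            if (exp & 1)      // lowest bit set: fold the current base in
                result *= base;
            base *= base;     // square the base for the next binary digit
            exp >>= 1;
        }
        return result;
    }

For example, ipow(2.71828, 4) does a handful of multiplications instead of paying for a general-purpose pow call.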
Related
Hi, I am trying to calculate the result of the Taylor series expansion for sine to a specified number of terms, and I am running into some problems.
Your task is to implement makeSineToOrder(k)
This is templated by the type of values used in the calculation.
It must yield a function that takes a value of the specified type and
returns the sine of that value (in the specified type again)
double factorial(double long order){
    #include <iostream>
    #include <iomanip>
    #include <cmath>
    double fact = 1;
    for(int i = 1; i <= num; i++){
        fact *= i;
    }
    return fact;
}

void makeSineToOrder(long double order,long double precision = 15){
    double value = 0;
    for(int n = 0; n < precision; n++){
        value += pow(-1.0, n) * pow(num, 2*n+1) / factorial(2*n + 1);
    }
    return value;

int main()
{
    using namespace std;
    long double pi = 3.14159265358979323846264338327950288419716939937510L;
    for(int order = 1; order < 20; order++) {
        auto sine = makeSineToOrder<long double>(order);
        cout << "order(" << order << ") -> sine(pi) = " << setprecision(15) << sine(pi) << endl;
    }
    return 0;
}
I tried debugging, but I can't work out what's wrong.
Here is a version that at least compiles and gives some output:
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

double factorial(double long num) {
    double fact = 1;
    for (int i = 1; i <= num; i++) {
        fact *= i;
    }
    return fact;
}

double makeSineToOrder(double num, double precision = 15) {
    double value = 0;
    for (int n = 0; n < precision; n++) {
        value += pow(-1.0, n) * pow(num, 2 * n + 1) / factorial(2 * n + 1);
    }
    return value;
}

int main(){
    long double pi = 3.14159265358979323846264338327950288419716939937510L;
    for (int order = 1; order < 20; order++) {
        auto sine = makeSineToOrder(order);
        cout << "order(" << order << ") -> sine(pi) = " << setprecision(15) << sine << endl;
    }
    return 0;
}
I am not sure what that odd sine(pi) was supposed to be doing.
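Since the assignment actually asks for a templated factory that yields a callable (so that sine(pi) makes sense), here is one hedged sketch of that shape, assuming a C++14 compiler for the auto return type; an incremental term update replaces both pow and factorial:

    // Sketch: templated factory returning a callable that evaluates the
    // Taylor series of sine to the requested number of terms.
    template <class T>
    auto makeSineToOrder(int order) {
        return [order](T x) {
            T term = x;   // current term: x^(2n+1) / (2n+1)!
            T sum = 0;
            for (int n = 0; n < order; ++n) {
                sum += term;
                // next term = term * (-x^2) / ((2n+2)(2n+3))
                term *= -x * x / T((2 * n + 2) * (2 * n + 3));
            }
            return sum;
        };
    }

With this, the question's main compiles as written: auto sine = makeSineToOrder<long double>(order); followed by sine(pi) evaluates the approximation at pi.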
Apart from the obvious syntax errors in your code (the includes should come before your factorial function):
I see no templates in your code, which your assignment clearly states to use, so I would expect a template like:
template <class T> T mysin(T x, int n = 15) { ... }
Using pow for a generic datatype is not safe, because the built-in pow will use float or double instead of your generic type, so you can expect rounding/casting problems or even an unresolved function in case of an incompatible type. To remedy that, you can rewrite the code not to use pow: a power is just consecutive multiplication in a loop, so why compute it from scratch again and again?
Using a factorial function is wasteful: you can build it up, like the power, in the same loop, with no need to redo the already-computed multiplications. Also, not using a template for your factorial causes the same problems as using pow.
So, putting it all together using the formula
sin(x) = sum for n = 0, 1, 2, ... of (-1)^n * x^(2n+1) / (2n+1)!
along with templates, and exchanging the pow and factorial functions for consecutive iteration, I got this:
template <class T> T mysin(T x, int n = 15)
{
    int i;
    T y = 0;      // result
    T x2 = x*x;   // x^2
    T xi = x;     // x^i
    T ii = 1;     // i!
    if (n > 0) for (i = 1;;)
    {
        y += xi/ii; xi *= x2; i++; ii *= i; i++; ii *= i; n--; if (!n) break;
        y -= xi/ii; xi *= x2; i++; ii *= i; i++; ii *= i; n--; if (!n) break;
    }
    return y;
}
So the factorial ii is multiplied by i+1 and i+2 every iteration, and the power xi is multiplied by x^2 every iteration. The sign change is hard-coded, so the for loop handles two terms per pass (that is the reason for the break;).
As you can see, this does not use anything fancy, so you do not need any includes for it, not even math.
You might want to add x = fmod(x, 6.283185307179586476925286766559) at the start of mysin in order to handle more than just the first period; in that case, however, you have to ensure the fmod implementation uses T or a type compatible with it. Also, the 2*pi constant should be in the target precision or higher; see the sketch below.
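As an illustration of that range-reduction suggestion (a sketch only, assuming std::fmod's overloads cover the type T you instantiate with):

    #include <cmath>

    // Sketch: wrap mysin with argument reduction into (-2*pi, 2*pi) so the
    // truncated Taylor series stays accurate for large inputs.
    template <class T>
    T mysin_reduced(T x, int n = 15)
    {
        const T two_pi = T(6.283185307179586476925286766559L);
        x = std::fmod(x, two_pi); // only valid if fmod supports T
        return mysin(x, n);
    }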
Beware that too big an n will overflow both int and the generic type T (so you might want to limit n based on the type used, or simply choose it wisely).
Also note that with 32-bit floats you cannot get better than about 5 decimal places with this kind of computation, no matter what n is.
Btw. there are faster and more accurate methods of computing trigonometric functions, such as Chebyshev approximations and CORDIC.
Write an algorithm that computes Euler's number to a given accuracy
My professor from the Algorithms course gave me the following homework:
Write a C/C++ program that calculates the value of Euler's number (e) with a given accuracy eps > 0.
Hint: The number e = 1 + 1/1! + 1/2! + ... + 1/n! + ... = 2.71828... can be calculated as the sum of elements of the sequence x_0, x_1, x_2, ..., where x_0 = 1, x_1 = 1 + 1/1!, x_2 = 1 + 1/1! + 1/2!, ...; the summation continues as long as the condition |x_(i+1) - x_i| >= eps holds.
As he further explained, eps is the precision of the algorithm; for example, the precision could be 1/100. Here |x_(i+1) - x_i| denotes the absolute value of x_(i+1) - x_i.
Currently, my program looks the following way:
#include <iostream>
#include <cstdlib>
#include <math.h>

// Euler's number
using namespace std;

double factorial(double n)
{
    double result = 1;
    for (double i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

int main()
{
    long double euler = 2;
    long double counter = 2;
    long double epsilon = 1.0/1000;
    long double moduloDifference;
    do
    {
        euler += 1 / factorial(counter);
        counter++;
        moduloDifference = (euler + 1 / factorial(counter+1) - euler);
    } while (moduloDifference >= epsilon);
    printf("%.35Lf ", euler);
    return 0;
}
Issues:
It seems my epsilon value does not work properly. It is supposed to control the precision. For example, when I want a precision of 5 digits, I initialize it to 1.0/10000, and it outputs only 3 correct digits before the rest get truncated (2.7180).
When I use the long double data type and epsilon = 1/10000, my epsilon gets the value 0 and my program runs infinitely. Yet if I change the data type from long double to double, it works. Why does epsilon become 0 when using the long double data type?
How can I optimize the algorithm for finding Euler's number? I know I can get rid of the function and calculate the value on the fly, but after each attempt to do that, I receive other errors.
One problem with computing Euler's number this way is pretty simple: you're starting with some fairly large terms, but since the denominator in each term is n!, the amount added by each successive term shrinks very quickly. Using naive summation, you quickly reach a point where the value you're adding is small enough that it no longer affects the sum.
In the specific case of Euler's number, since the terms constantly decrease, one way we can deal with them quite a bit better is to compute and store all the terms, then add them up in reverse order.
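A minimal sketch of that store-and-reverse idea (my illustration; it assumes the terms were generated largest-first, as 1/0!, 1/1!, 1/2!, ...):

    #include <vector>

    // Sketch: add the stored terms from smallest to largest, so tiny terms
    // are not absorbed by an already-large running sum.
    long double sum_in_reverse(const std::vector<long double>& terms)
    {
        long double sum = 0.0L;
        for (auto it = terms.rbegin(); it != terms.rend(); ++it)
            sum += *it;
        return sum;
    }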
Another, more general possibility is to use the Kahan summation algorithm instead. This keeps track of a running error while doing the summation, and takes the current error into account as it adds each successive term.
For example, I've rewritten your code to use Kahan summation to compute to (approximately) the limit of precision of a typical (80-bit) long double:
#include <iostream>
#include <cstdlib>
#include <math.h>
#include <vector>
#include <iomanip>
#include <limits>
#include <iterator>

// Euler's number
using namespace std;

long double factorial(long double n)
{
    long double result = 1.0L;
    for (int i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

template <class InIt>
typename std::iterator_traits<InIt>::value_type accumulate(InIt begin, InIt end) {
    typedef typename std::iterator_traits<InIt>::value_type real;
    real sum = real();
    real running_error = real();

    for ( ; begin != end; ++begin) {
        real difference = *begin - running_error;
        real temp = sum + difference;
        running_error = (temp - sum) - difference;
        sum = temp;
    }
    return sum;
}

int main()
{
    std::vector<long double> terms;
    long double epsilon = 1e-19;
    long double term;

    for (int i = 0; (term = 1.0L/factorial(i)) >= epsilon; i++)
        terms.push_back(term);

    int width = std::numeric_limits<long double>::digits10;
    std::cout << std::setw(width) << std::setprecision(width) << accumulate(terms.begin(), terms.end()) << "\n";
}
Result: 2.71828182845904522
In fairness, I should add that I haven't actually checked what happens with your code using naive summation; it's possible the problem you're seeing comes from some other source. On the other hand, this does fit fairly well with the type of situation where Kahan summation stands at least a reasonable chance of improving results.
#include <iostream>
#include <cmath>
#include <iomanip>

#define EPSILON (1.0/10000000)
#define AMOUNT 6

using namespace std;

int main() {
    long double e = 2.0, e0;
    long double factorial = 1;
    int counter = 2;
    long double moduloDifference;
    do {
        e0 = e;
        factorial *= counter++;
        e += 1.0 / factorial;
        moduloDifference = fabs(e - e0);
    } while (moduloDifference >= EPSILON);
    cout << "Result:" << endl;
    cout << setprecision(AMOUNT) << e << endl;
    return 0;
}
This is an optimized version that does not have a separate function to calculate the factorial.
Issue 1: I am still not sure how EPSILON manages the precision.
Issue 2: I do not understand the real difference between long double and double. Regarding my code: why does long double require a decimal point (1.0/someNumber), while double doesn't (1/someNumber)?
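On Issue 2, a general C++ rule that may explain what you saw (this is standard integer-division behavior, not something specific to long double): 1/10000 divides two ints and truncates to 0 before the result is ever converted to a floating type. A minimal illustration:

    long double a = 1 / 10000;   // int / int truncates to 0, then converts to 0.0L
    long double b = 1.0 / 10000; // double / int promotes: 0.0001

Writing 1.0 (or casting one operand) forces floating-point division regardless of whether the destination is double or long double.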
I am trying to estimate pi using C++ as a fun math project. I've run into an issue where I can only get it as precise as 6 decimal places.
I have tried using a float instead of a double but found the same result.
My code works by summing the results of 1/n^2 for n = 1 through a defined limit, then multiplying this sum by 6 and taking the square root; in mathematical notation, pi ~= sqrt(6 * (1/1^2 + 1/2^2 + ... + 1/N^2)).
Here is my main function. PREC is the predefined limit. It populates the array with the results of these fractions and gets the sum. My guess is that the sqrt function is causing the issue where I cannot get more precise than 6 digits.
int main(int argc, char *argv[]) {
    // value-initialize so the unused element 0 is zero, and size PREC + 1
    // so that writing nthsums[PREC] stays in bounds
    nthsums = new float[PREC + 1]();
    for (int i = 1; i < PREC + 1; i += 1) {
        nthsums[i] = nth_fraction(i);
    }
    float array_sum = sum_array(nthsums);
    array_sum *= 6.0f;
    float result = sqrt(array_sum);

    std::string resultString = std::to_string(result);
    cout << resultString << "\n";
}
Just for the sake of it, I'll also include my sum function as I suspect that there could be something wrong with that, too.
float sum_array(float *array) {
    float returnSum = 0;
    for (int itter = 0; itter < PREC + 1; itter += 1) {
        if (array[itter] >= 0) {
            returnSum += array[itter];
        }
    }
    return returnSum;
}
I would like to get at least as precise as 10 digits. Is there any way to do this in C++?
So even with long double as the floating-point type used for this, there's some subtlety required, because adding two long doubles of substantially different orders of magnitude can cause precision loss. See here for a discussion in Java, but I believe the behavior is basically the same in C++.
Code I used:
#include <iostream>
#include <cmath>
#include <numbers>

long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    for (unsigned long long i = t_terms; i >= 1; --i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrtl(pi_squared);
}

int main() {
    const long double pi = std::numbers::pi_v<long double>;
    const unsigned long long num_terms = 10'000'000'000;
    std::cout.precision(30);

    std::cout << "Pi == " << pi << "\n\n";
    std::cout << "Pi ~= " << pSeriesApprox(num_terms) << " after " << num_terms << " terms\n";
    return 0;
}
Output:
Pi == 3.14159265358979311599796346854
Pi ~= 3.14159265349430016911469465413 after 10000000000 terms
That's 9 decimal digits of accuracy, which is about what we'd expect from a series converging at this rate: the truncated tail of the series contributes roughly 6/N to pi^2, i.e. about 6e-10 here.
But if all I do is reverse the order in which the loop in pSeriesApprox runs, adding exactly the same terms but from largest to smallest instead of smallest to largest:
long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    for (unsigned long long i = 1; i <= t_terms; ++i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrtl(pi_squared);
}
Output:
Pi == 3.14159265358979311599796346854
Pi ~= 3.14159264365071688729358356795 after 10000000000 terms
Suddenly we're down to 7 digits of accuracy, even though we used 10 billion terms. In fact, after 100 million terms or so, the approximation to pi stabilizes at this specific value. So while using sufficiently large data types to store these computations is important, some additional care is still needed when performing this kind of sum.
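When reversing the loop isn't convenient, a compensated sum in the spirit of the Kahan algorithm mentioned in the earlier answer is one alternative. A minimal sketch of that variant (same includes as the code above assumed):

    // Sketch: Kahan-compensated version of the same partial sum; the running
    // error term recovers the low-order bits lost in each addition.
    long double pSeriesApproxKahan(unsigned long long t_terms)
    {
        long double sum = 0.L, err = 0.L;
        for (unsigned long long i = 1; i <= t_terms; ++i)
        {
            long double term = 6.L * (1.L / i) * (1.L / i) - err;
            long double next = sum + term;
            err = (next - sum) - term; // the part of term that didn't make it in
            sum = next;
        }
        return std::sqrt(sum);
    }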
I have run into a problem while trying to optimize my program, which calculates Nmin values for increasing values of N and an error approximation.
I am not from a programming background and have just started to take it up.
The calculation is inefficient, as it keeps computing even after finding Nmin.
To reduce the time, I made the changes below to cut down on function calls, with no improvement:
#include <iostream>
#include <cmath>
#include <time.h>
#include <iomanip>
using namespace std;

double f(int);

int main(void)
{
    double err;
    double pi = 4.0*atan(1.0);
    cout << fixed << setprecision(7);

    clock_t start = clock();
    for (int n = 1;; n++)
    {
        if ((f(n) - pi) >= 1e-6)
        {
            cout << "n_min is " << n << "\t" << f(n) - pi << endl;
        }
        else
        {
            break;
        }
    }
    clock_t stop = clock();

    //double elapsed = (double)(stop - start) * 1000.0 / CLOCKS_PER_SEC; // in ms
    cout << "time: " << (stop-start)/double(CLOCKS_PER_SEC)*1000 << endl; // also in ms
    return 0;
}

double f(int n)
{
    double sum = 0;
    for (int i = 1; i <= n; i++)
    {
        sum += 1/(1+pow((i-0.5)/n, 2));
    }
    return (4.0/n)*sum;
}
Is there any way to reduce the time and make the second query efficient?
Any help would be greatly appreciated.
I do not see any immediate way of optimizing the algorithm itself. You could, however, reduce the time significantly by not writing to standard output on every iteration. Also, do not calculate f(n) more than once per iteration:
for (int n = 1;; n++)
{
    double val = f(n);
    double diff = val - pi;
    if (diff < 1e-6)
    {
        cout << "n_min is " << n << "\t" << diff << endl;
        break;
    }
}
Note however that this will yield a higher n_min (increased by 1 compared to the result of your version) since we changed the condition to diff < 1e-6.
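If reproducing the original program's n_min exactly matters, here is a small sketch of a variant that reports the last n still satisfying the old condition (assuming f and pi as defined in the question):

    for (int n = 1;; n++)
    {
        double diff = f(n) - pi;
        if (diff < 1e-6)
        {
            // report the previous n, i.e. the last one with diff >= 1e-6,
            // matching the original program's final output
            cout << "n_min is " << n - 1 << endl;
            break;
        }
    }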
My task is to ask the user how many decimal places of accuracy they want the summation to reach compared to the actual value of pi; so 2 decimal places would stop when the loop reaches 3.14. I have a complete program, but I am unsure whether it actually works as intended. I have checked 0 and 1 decimal places with a calculator and they seem to work, but I don't want to assume it works for all of them. Also, my code may be a little clumsy, since we are still learning the basics; we only just learned loops and nested loops. If there are any obvious mistakes or parts that could be cleaned up, I would appreciate any input.
Edit: I only needed this to work for up to five decimal places. That is why my value of pi was not precise. Sorry for the misunderstanding.
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    const double PI = 3.141592;
    int n, sign = 1;
    double sum = 0, test, m;

    cout << "This program determines how many iterations of the infinite series for\n"
            "pi is needed to get with 'n' decimal places of the true value of pi.\n"
            "How many decimal places of accuracy should there be?" << endl;
    cin >> n;

    double p = PI * pow(10.0, n);
    p = static_cast<double>(static_cast<int>(p) / pow(10, n));

    int counter = 0;
    bool stop = false;
    for (double i = 1; !stop; i = i + 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        counter++;
        test = (4 * sum) * pow(10.0, n);
        test = static_cast<double>(static_cast<int>(test) / pow(10, n));
        if (test == p)
            stop = true;
    }
    cout << "The series was iterated " << counter << " times and reached the value of pi\nwithin " << n << " decimal places." << endl;
    return 0;
}
One of the problems of the Leibniz summation is its extremely low convergence rate: it exhibits sublinear convergence. In your program you also compare a calculated estimation of pi with a given value (a 6-digit approximation), while the point of the summation should be to find out the correct figures.
You can slightly modify your code to make it terminate the calculation when the wanted digit doesn't change between iterations (I also added a check on the maximum number of iterations). Remember that you are using doubles, not unlimited-precision numbers, and sooner or later rounding errors will affect the calculation. As a matter of fact, the real limitation of this code is the number of iterations it takes (2,428,700,925 to obtain 3.141592653).
#include <iostream>
#include <cmath>
#include <iomanip>
using std::cout;

// this will take a long long time...
const unsigned long long int MAX_ITER = 100000000000;

int main() {
    int n;
    cout << "This program determines how many iterations of the infinite series for\n"
            "pi is needed to get with 'n' decimal places of the true value of pi.\n"
            "How many decimal places of accuracy should there be?\n";
    std::cin >> n;

    // precalculate some values
    double factor = pow(10.0, n);
    double inv_factor = 1.0 / factor;
    double quad_factor = 4.0 * factor;

    long long int test = 0, old_test = 0, sign = 1;
    unsigned long long int count = 0;
    double sum = 0;

    for (long long int i = 1; count < MAX_ITER; i += 2) {
        sum += 1.0 / (i * sign);
        sign = -sign;
        old_test = test;
        test = static_cast<long long int>(sum * quad_factor);
        ++count;
        // perform the test on integer values
        if (test == old_test) {
            cout << "Reached the value of Pi within " << n << " decimal places.\n";
            break;
        }
    }

    double pi_leibniz = static_cast<double>(inv_factor * test);
    cout << "Pi = " << std::setprecision(n + 1) << pi_leibniz << '\n';
    cout << "The series was iterated " << count << " times\n";
    return 0;
}
I have summarized the results of several runs in this table:
digits  Pi            iterations
--------------------------------------
   0    3                          8
   1    3.1                       26
   2    3.14                     628
   3    3.141                  2,455
   4    3.1415               136,121
   5    3.14159              376,848
   6    3.141592           2,886,751
   7    3.1415926         21,547,007
   8    3.14159265       278,609,764
   9    3.141592653    2,428,700,925
  10    3.1415926535  87,312,058,383
Your program will never terminate, because test == p will never be true. This is a comparison between two double-precision numbers that are calculated differently. Due to round-off errors they will not be identical, even if you run an infinite number of iterations and your math is correct (and right now it isn't, because the value of PI in your program is not accurate).
To help you figure out what's going on, print the value of test in each iteration, as well as the distance between test and pi, as follows:
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double pi = atan(1.0) * 4; // Make sure you have a precise value of PI
    double sign = 1.0, sum = 0.0;
    for (int i = 1; i < 1000; i += 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        double test = 4 * sum;
        cout << test << " " << fabs(test - pi) << "\n";
    }
}
After you make sure the program works well, change the stopping condition to be based on the distance between test and pi:
for (int i = 1; fabs(test - pi) > epsilon; i += 2)
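Putting that together, one minimal sketch of the full distance-based loop (epsilon is something you pick, e.g. 0.5e-2 for two decimal places; pi computed as above):

    double epsilon = 0.5e-2; // e.g. two decimal places
    double sign = 1.0, sum = 0.0, test = 0.0;
    int count = 0;
    for (int i = 1; fabs(test - pi) > epsilon; i += 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        test = 4 * sum;
        ++count;
    }
    cout << "converged after " << count << " iterations: test = " << test << "\n";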