...
cout << setprecision(100) << pow((3+sqrt(5.0)),28) << endl;
...
outputs
135565048129406451712
which isn't precise enough, but
$ bc <<< "scale = 100; (3+sqrt(5.0))^28"
outputs
135565048129406369791.9994684648068789538123313610677119237534230237579838585720347675878761558402979025019238688523799354
which is what I want. I'm setting the cout precision, so it must be that sqrt, pow, or + are losing the precision?
Setting precision on cout doesn't have any effect on how the underlying computation is done in C++. A float typically has about 7 decimal digits of precision, a double about 16; your C++ output has only the first 15 digits matching the bc output.
If you want more precision then you'll have to use another method, such as an arbitrary precision numerical library. That's how the bc program implements arbitrary precision math.
For example, using:
https://gmplib.org
#include <gmp.h>
#include <gmpxx.h>
#include <iostream>
#include <iomanip>
int main() {
    // 402 bits of working precision for mpf_class (roughly 121 decimal digits)
    mpf_set_default_prec(402);
    mpf_class a = 3_mpf + sqrt(5_mpf);
    mpf_class output;
    mpf_pow_ui(output.get_mpf_t(), a.get_mpf_t(), 28);
    // Ask the stream for all of those digits when printing
    std::cout << std::setprecision(121);
    std::cout << output << '\n';
}
This prints:
135565048129406369791.9994684648068789538123313610677119237534230237579838585720347675878761558402979528909982661363879709
Interestingly, this is different from the output of bc <<< "scale = 100; (3+sqrt(5.0))^28", but if you set the scale higher for bc you'll see that GMP's output is correct.
It looks like bc is willing to print out however many digits it has even if the operands to expressions that produced those digits didn't have enough precision to get them right. In contrast GMP appears to set the precision for results based on what's accurate given the precision of the inputs.
Related
I have the following piece of code
#include <iostream>
#include <iomanip>
int main()
{
    double x = 7033753.49999141693115234375;
    double y = 7033753.499991415999829769134521484375;
    double z = (x + y) / 2.0;
    std::cout << "y is " << std::setprecision(40) << y << "\n";
    std::cout << "x is " << std::setprecision(40) << x << "\n";
    std::cout << "z is " << std::setprecision(40) << z << "\n";
    return 0;
}
When the above code is run I get,
y is 7033753.499991415999829769134521484375
x is 7033753.49999141693115234375
z is 7033753.49999141693115234375
When I do the same in Wolfram Alpha the value of z is completely different
z = 7033753.4999914164654910564422607421875 #Wolfram answer
I am familiar with floating point precision and that large numbers away from zero can not be exactly represented. Is that what is happening here? Is there any way in C++ where I can get the same answer as Wolfram without any performance penalty?
large numbers away from zero can not be exactly represented. Is that what is happening here?
Yes.
Note that there are also infinitely many rational numbers near zero that cannot be represented. But the distance between representable values does grow exponentially in larger value ranges.
Is there anyway in c++ where I can get the same answer as Wolfram ...
You can potentially get the same answer by using long double. My system produces exactly the same result as Wolfram. Note that precision of long double varies between systems even among systems that conform to IEEE 754 standard.
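For illustration, here is a minimal sketch of the long double variant, assuming the same values as in the question. On x86 Linux, long double is typically the 80-bit extended type (about 18-19 significant digits); with MSVC it is the same as double, so the output will vary by platform:
#include <iostream>
#include <iomanip>
int main()
{
    // Same midpoint computation as the question, but in long double
    long double x = 7033753.49999141693115234375L;
    long double y = 7033753.499991415999829769134521484375L;
    long double z = (x + y) / 2.0L;
    std::cout << "z is " << std::setprecision(40) << z << "\n";
    return 0;
}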
More generally though, if you need results that are accurate to many significant digits, then don't use finite precision math.
... without any performance penalty?
No. Precision comes with a cost.
Just telling IOStreams to print 40 significant decimal figures of precision doesn't mean that the value you're outputting actually has that much precision.
A typical double takes you up to 17 significant decimal figures (ish); beyond that, what you see is completely arbitrary.
Per eerorika's answer, it looks like the Wolfram Alpha answer is also falling foul of this, albeit possibly with some different precision limit than yours.
You can try a different approach like a "bignum" library, or limit yourself to the precision afforded by the types that you've chosen.
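As a rough sketch of the "bignum" route, here is the same midpoint computed with Boost.Multiprecision's cpp_dec_float_50 (the type discussed later in this thread); constructing from strings avoids rounding the inputs through double first:
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <iomanip>
int main()
{
    using boost::multiprecision::cpp_dec_float_50;  // 50 decimal digits
    // String constructors keep all the decimal digits of the inputs
    cpp_dec_float_50 x("7033753.49999141693115234375");
    cpp_dec_float_50 y("7033753.499991415999829769134521484375");
    cpp_dec_float_50 z = (x + y) / 2;
    std::cout << std::setprecision(40) << z << "\n";
    return 0;
}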
This is my first post here so sorry if it drags a little.
I'm assisting in some research for my professor, and I'm having some trouble with precision when I'm parsing some numbers that need to be precise to the 12th decimal point. For example, here is a number that I'm parsing from a string into a float, before it's parsed:
-82.636097527336
Here is the code I'm using to parse it, which I also found on this site (thanks for that!):
std::basic_string<char> str = prelim[i];
std::stringstream s_str( str );
float val;
s_str >> val;
degrees.push_back(val);
Where 'prelim[i]' is just the current number I'm on, and 'degrees' is my new vector that holds all of the numbers after they've been parsed to a float. My issue is that, after it's parsed and stored in 'degrees', I do an 'std::cout' command comparing both values side-by-side, and it shows up like this (old value (string) on the left, new value (float) on the right):
-82.6361
Does anyone have any insight into how I could alleviate this issue and make my numbers more precise? I suppose I could go character by character and use a switch case, but I think that there's an easier way to do it with just a few lines of code.
Again, thank you in advance and any pointers would be appreciated!
(Edited for clarity regarding how I was outputting the value)
Change to a double to represent the value more accurately, and use std::setprecision(30) or more to show as much of the internal representation as is available.
Note that the internal storage isn't exact; using an Intel Core i7, I got the following values:
string: -82.636097527336
float: -82.63610076904296875
double: -82.63609752733600544161163270473480224609
So, as you can see, double correctly represents all of the digits of your original input string, but even so, it isn't quite exact, since there are a few extra digits than in your string.
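For concreteness, here is a minimal sketch of that change applied to the snippet from the question (prelim and degrees are assumed to exist as in the question, with degrees now a std::vector<double>):
std::basic_string<char> str = prelim[i];
std::stringstream s_str( str );
double val;                       // double instead of float
s_str >> val;
degrees.push_back(val);
std::cout << std::setprecision(30) << val << "\n";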
There are two problems:
A 32-bit float does not have enough precision for 14 decimal digits. From a 32-bit float you can get about 7 decimal digits, because it has a 24-bit binary mantissa (23 bits stored). A 64-bit float (double) has a 53-bit mantissa (52 bits stored), which gives you about 16 decimal digits, just enough.
Printing with cout by default prints six decimal digits.
Here is a little program to illustrate the difference:
#include <iomanip>
#include <iostream>
#include <sstream>
int main(int, const char**)
{
    float parsed_float;
    double parsed_double;
    std::stringstream input("-82.636097527336 -82.636097527336");
    input >> parsed_float;
    input >> parsed_double;
    std::cout << "float printed with default precision: "
              << parsed_float << std::endl;
    std::cout << "double printed with default precision: "
              << parsed_double << std::endl;
    std::cout << "float printed with 14 digits precision: "
              << std::setprecision(14) << parsed_float << std::endl;
    std::cout << "double printed with 14 digits precision: "
              << std::setprecision(14) << parsed_double << std::endl;
    return 0;
}
Output:
float printed with default precision: -82.6361
double printed with default precision: -82.6361
float printed with 14 digits precision: -82.636100769043
double printed with 14 digits precision: -82.636097527336
So you need to use a 64-bit float to be able to represent the input, but also remember to print with the desired precision with std::setprecision.
You cannot have precision up to the 12th decimal using a simple float. The intuitive course of action would be to use double or long double... but past their roughly 16 and 19 significant digits you are still not going to have the precision you need.
The reason is due to the representation of real numbers in memory. You have more information here.
For example, 0.02 cannot be stored exactly; what is actually stored is the nearest representable binary fraction.
You should use a dedicated library for arbitrary precision, instead.
Hope this helps.
Looking at the name and the Boost Multiprecision documentation I would expect that the cpp_dec_float_50 datatype has a precision of 50 decimal digits:
Using typedef cpp_dec_float_50 hides the complexity of multiprecision to allow us to define variables with 50 decimal digit precision just like built-in double.
(Although I don't understand the comparison with double - I mean double usually implements binary floating point arithmetic, not decimal floating point arithmetic.)
This is also matched by the output of the following code (except for the double part, but that is expected):
cout << std::numeric_limits<boost::multiprecision::cpp_dec_float_50>::digits10
<< '\n';
// -> 50
cout << std::numeric_limits<double>::digits10 << '\n';
// -> 15
But why does the following code print 74 digits then?
#include <boost/multiprecision/cpp_dec_float.hpp>
// "12" repeated 50 times, decimal point after the 10th digit
boost::multiprecision::cpp_dec_float_50 d("1212121212.121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212");
cout << d.convert_to<string>() << '\n';
// Expected output: 50 digits
// Actual output: 74 digits
// -> 1212121212.1212121212121212121212121212121212121212121212121212121212121212
The str() member function works as expected, e.g.
cout << d.str(50) << '\n';
only prints 50 digits; it is documented as:
Returns the number formatted as a string, with at least precision digits, and in scientific format if scientific is true.
What you are seeing is likely related to the guard digits used internally. The reason is that even decimal representation has limited accuracy (think ("100.0" / "3.0") * "3.0").
In order to keep rounding errors during calculations reasonable, the stored precision is more than the guaranteed precision.
In summary: always be specific about your expected precision. In your example d.str(50) would do.
(In realistic scenarios, you should want to track the precision of your inputs and deduce the precision on your outputs. Most often, people just reserve surplus precision and only print the part they're interested in)
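As a small illustration of being specific about precision (with the literal shortened here for brevity), you can ask the stream for exactly the guaranteed number of digits instead of relying on the default conversion:
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <iomanip>
#include <limits>
using namespace std;
int main()
{
    using boost::multiprecision::cpp_dec_float_50;
    cpp_dec_float_50 d("1212121212.1212121212121212121212121212121212121212");
    // digits10 == 50, so only the guaranteed digits are printed,
    // not the internal guard digits
    cout << setprecision(numeric_limits<cpp_dec_float_50>::digits10) << d << '\n';
    cout << d.str(50) << '\n';  // equivalent, via the member function
}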
I have a program and I'm trying to calculate cos(M_PI*3/2), and instead of getting 0, as I should, I get -1.83691e-016.
What am I missing here? I am in radians as I need to be.
First, M_PI is not a very portable macro and is usually good to about 15 decimal places, depending on the compiler you use - my guess is you're using Microsoft's C++ compiler.
Second, if you want a more accurate (and portable) version, use the Boost Math library:
http://www.boost.org/doc/libs/1_55_0/libs/math/doc/html/math_toolkit/tutorial/non_templ.html
Third, as Kay has pointed out, pi is an irrational number, and therefore no number of bits (or digits in base 10) would be enough to represent it exactly. So what you're actually calculating is not cos(3*pi/2), but "the cosine of 3/2 times the closest approximation of pi available in the given number of bits", whose argument is NOT 3*pi/2 and whose result therefore won't be zero.
Finally, if you want custom precision for your mathematical constants, read this: http://www.boost.org/doc/libs/1_55_0/libs/math/doc/html/math_toolkit/tutorial/user_def.html
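For instance, here is a minimal sketch using Boost.Math's pi constant. This still won't make the cosine exactly zero, since the constant is rounded to double precision, but it avoids relying on the non-standard M_PI macro:
#include <boost/math/constants/constants.hpp>
#include <cmath>
#include <iostream>
int main()
{
    // pi<double>() is pi correctly rounded to double precision
    const double pi = boost::math::constants::pi<double>();
    std::cout << std::cos(pi * 3 / 2) << '\n';  // still ~ -1.8e-16, not exactly 0
}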
The number M_PI is only an approximation of π. The cosine that you get back is also an approximation, and it's a pretty good one - it has fifteen correct digits after the decimal point.
Given the discrete nature of double values, the standard margin of error against which to test for numerical equality is numeric_limits<double>::epsilon():
#include <iostream>
#include <limits>
#include <cmath>
using namespace std;
int main()
{
    // M_PI is a non-standard macro and only an approximation of pi
    double x = cos(M_PI * 3 / 2);
    cout << "x = " << x << endl;
    cout << "numeric_limits<double>::epsilon() = "
         << numeric_limits<double>::epsilon() << endl;
    cout << "Is x sufficiently close to 0? "
         << (abs(x) < numeric_limits<double>::epsilon() ? "yes" : "no") << endl;
    return 0;
}
Output:
x = -1.83697e-16
numeric_limits<double>::epsilon() = 2.22045e-16
Is x sufficiently close to 0? yes
As you can see, the absolute value of -1.83697e-16 is within the margin of error given by epsilon 2.22045e-16.
Pi is irrational, so the computer cannot represent the number perfectly. The small error relative to the "correct" value of pi causes the error in the output. Being 1.83691 × 10⁻¹⁶ off is still pretty good.
If you want to learn more about the restrictions of actual system and the impact of little errors in the input, then refer to http://en.wikipedia.org/wiki/Numerical_stability.
double a = 2451550;
double b = .407864;
double c= a*b;
cout<<c;
I was expecting the result to be "999898.9892" but am getting "999899". I need the actual unrounded result. Please suggest.
By default, iostreams output 6 digits of precision. If you want more, you have to ask for it:
std::cout.precision(15);
It can also be done using the setprecision manipulator, as below.
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    double a = 2451550;
    double b = .407864;
    double c = a * b;
    cout << setprecision(15) << c;
}
Using the manipulator also keeps the code compact.
By default, the precision of a std::iostream controls how many significant digits are displayed in total, and that default is 6. So, since your number has six digits before the decimal place, it will not display any after the decimal.
You can change this behavior with the 'fixed' manipulator. 'precision' then changes to mean the number of digits to display after the decimal which is probably what you were expecting.
To always get four digits after the decimal you can do this:
cout << setprecision(4) << fixed << c;
However, keep in mind that this will always display four digits after the decimal even if they are zeros. There is no simple way to get 'precision' to mean at most x number of digits after the decimal place with std::iostreams.
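If you do need "at most N digits after the decimal" behaviour, one workaround (a sketch, not a standard facility; format_up_to is a hypothetical helper) is to format into a string with fixed precision and then trim trailing zeros yourself:
#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>
// Hypothetical helper: up to 'digits' digits after the decimal point
std::string format_up_to(double value, int digits)
{
    std::ostringstream os;
    os << std::fixed << std::setprecision(digits) << value;
    std::string s = os.str();
    s.erase(s.find_last_not_of('0') + 1);  // strip trailing zeros
    if (!s.empty() && s.back() == '.')
        s.pop_back();                      // strip a bare trailing '.'
    return s;
}
int main()
{
    std::cout << format_up_to(2451550.0 * 0.407864, 4) << "\n";  // 999898.9892
    std::cout << format_up_to(999899.0, 4) << "\n";              // 999899
}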