#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<std::uint64_t>::max() << "\n";
    return 0;
}
The code above outputs 18446744073709551615 on my machine, but I'm trying to multiply numbers that have at least 25 digits. How do I properly handle multiplication of two integers that are larger than uint64_t?
You need to use a library that handles big numbers. Here are some of them:
The GNU Multiple Precision Arithmetic Library
https://gmplib.org/
C++ Big Integer Library
https://mattmccutchen.net/bigint/
Boost.Multiprecision
http://www.boost.org/doc/libs/1_55_0/libs/multiprecision/doc/html/index.html
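For example, here is a minimal sketch using Boost.Multiprecision's header-only cpp_int type (the 25-digit operands below are just placeholders):
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    using boost::multiprecision::cpp_int;
    // Two 25-digit integers, well beyond the range of uint64_t
    cpp_int a("1234567890123456789012345");
    cpp_int b("9876543210987654321098765");
    std::cout << a * b << "\n";  // exact multi-precision product
}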
I encountered some queer behavior, at least in my own mind, while debugging some code involved with determining if an addition operation would underflow a double. Here is an example program demonstrating what I found.
#include <iostream>
#include <limits>

using std::cout;
using std::endl;
using std::numeric_limits;

int main()
{
    double lowest = numeric_limits<double>::lowest();
    bool truth = (lowest + 10000) == lowest;
    cout << std::boolalpha << truth << endl;  // prints "true"
}
When I execute this code, I get true as a result. Is this a bug or am I just sleep deprived?
The lowest (most negative) finite double is:
-1.7976931348623157e+308
Adding 10,000, or 1e4, to this would only have a noticeable effect if doubles had 300+ digits of precision, which they most definitely do not. Doubles can only hold 15-17 significant digits.
The difference in magnitude between these two numbers is so great that adding 10,000 does not produce a new number. In fact, the minimum double is such a huge number (so to speak) that you could add a googol to it—that's 1 followed by a hundred zeros—and it wouldn't change.
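A quick sketch demonstrating exactly that (the googol is written as 1e100):
#include <iostream>
#include <limits>

int main()
{
    double lowest = std::numeric_limits<double>::lowest();
    double googol = 1e100;  // 1 followed by 100 zeros
    // The googol is absorbed: it is far smaller than one ULP at this magnitude
    std::cout << std::boolalpha << ((lowest + googol) == lowest) << "\n";  // true
}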
I have a file.txt with hundreds of numbers.
They have many digits (max 20) after the decimal point, and I need to read them all without truncation, otherwise they introduce errors into the following computations. I generated these numbers with MATLAB, which uses very high precision, and now I must replicate that behaviour in my program.
I've done it this way:
std::fstream in;
in.open("file.txt", std::ios::in);
long double number;
in >> number;
I also tried this
in.precision(20);
in >> number;
before each ">>" operation, but in vain.
std::numeric_limits<long double>::digits10 can tell you what your target's actual precision is for long double.
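For example, a quick check on your target:
#include <iostream>
#include <limits>

int main()
{
    // Decimal digits a long double can hold without loss on this target
    std::cout << std::numeric_limits<long double>::digits10 << "\n";
}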
If you find that it's insufficient to represent your data, you probably want arbitrary precision. There are several arbitrary-precision number libraries you can use, none of which are standard in C++.
boost::multiprecision
GNU MP
MPFR
The following works fine on my system (Win7, VS2012):
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream file("test.txt");
    long double d = 0;
    file >> d;
    std::cout.precision(20);
    std::cout << d << "\n";
    return 0;
}
The text file:
2.7239385667867091
The output:
2.7239385667867091
If this doesn't work on your system, then you need to use a third-party number library.
I'm beginning to teach myself C++ until my class starts in the fall. I was wondering if you might be able to help me come up with a better way to ask the user for the number of digits they want for the number pi, and then display it. My problem is that using pi = atan(1)*4 isn't precise past around 10 decimal places. Is there a better built in number that has pi to at least 20 decimal places? Here is what I have so far, thanks!
#include <cmath>    // atan
#include <cstdlib>  // system
#include <iomanip>
#include <iostream>
#include <limits>

using namespace std;

int main()
{
    double pi = atan(1) * 4;
    int input = 0;
    while (true)
    {
        cout << "Please enter how many digits of PI you would like to see (Max 20): ";
        if (!(cin >> input))
        {
            // Recover from non-numeric input instead of looping forever
            cin.clear();
            cin.ignore(numeric_limits<streamsize>::max(), '\n');
            cout << "That's not a valid number! Try again." << endl;
            continue;
        }
        if (input > 0 && input <= 20)
        {
            break;
        }
        cout << "That's not a valid number! Try again." << endl;
    }
    cout << setprecision(input);
    cout << "Here you go: " << pi << endl;
    system("pause");
}
The easiest way to do this would probably just have a std::string containing the digits that you want ("3.14159265358979323846264338327950288419"), and then just print the first input digits beyond the decimal point.
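A minimal sketch of that approach (the digit string below is pi to 38 decimal places; input is hard-coded here for brevity):
#include <iostream>
#include <string>

int main()
{
    // Pi's digits after the decimal point, hard-coded
    const std::string pi_digits = "14159265358979323846264338327950288419";
    int input = 20;  // however many digits the user asked for
    std::cout << "3." << pi_digits.substr(0, input) << "\n";
}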
I would say that this is less of a C++ problem, and more of a math problem. There are several infinite series that converge to the true value of Pi very quickly. Have you looked at the Wikipedia article on this topic?
As for being precise to nine or ten digits, you may run into rounding issues using double (especially with certain calculation methods). I would consider looking into an arbitrary-precision math library. I'm a big fan of MPFR, but I'm sure Boost has something analogous if that's more your thing (keep in mind that Boost is a C++ library, whereas MPFR is a C library [although you can, of course, use C code from C++]).
MPFR does have a C++ wrapper, but in general I don't enjoy using it as much as the C functions, since the last time I looked at it (admittedly, a while ago), it wasn't quite as feature-complete.
It's probably worth noting also that, since your goal is to learn C++, and not to learn how to efficiently approximate Pi, it might be preferable to solve this problem by just retrieving the first n digits from a hard-coded string, as Chris said.
A double will have 53 bits of precision. Each bit gives about 0.3 of a decimal digit (log(2)/log(10), to be precise), which means that we get approximately 53 × 0.3 ≈ 16 significant digits out of a double, including the 3 at the beginning. It's also plausible that atan(1) isn't giving quite all the digits of pi/4 (because atan, like any other trigonometric function, is an approximation).
If you want many more digits than that, you will need to use a "big number" library, and there are a number of clever ways to calculate the decimals of PI.
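A quick sketch showing where the double approximation runs out of digits:
#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
    // Print atan(1)*4 with 20 significant digits; only ~16 are meaningful
    std::cout << std::setprecision(20) << std::atan(1.0) * 4 << "\n";
    std::cout << "3.1415926535897932385" << "\n";  // pi to the same width, for comparison
}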
The most common way is to take the power of 2 for each non-zero position of the binary number and then sum them up. This is not workable when the binary number is huge, say,
10000...0001 //1000000 positions
It is impossible for the computer to compute pow(2,1000000). So the traditional way is not workable.
Is there another way to do this?
Could someone describe an arithmetic method for computing this, rather than pointing to a library?
As happydave said, there are existing libraries (such as GMP) for this type of thing. If you need to roll your own for some reason, here's an outline of a reasonably efficient approach.
You'll need bigint subtraction, comparison and multiplication.
Cache values of 10^(2^n) in your binary format until the next value is bigger than your binary number. This will let you quickly generate a power of ten as follows:
1. Select the largest value in your cache smaller than your remaining number and store it in a working variable.
2. Repeat until you run out of cache: multiply the working value by the next largest value in your cache and store the result in a temporary value. If the new value is still smaller than your remaining number, set your working value to it (swapping references here rather than allocating new memory is a good idea). Keep a counter of which digit you're at; if it changes by more than one between iterations of the outer loop, you need to pad with zeros.
3. The working value is your next base-ten value in binary. Subtract it from your binary number while the binary number is at least as large; the number of times you can subtract is the decimal digit. (You can cheat a little here by comparing the most significant bits and finding a lower bound before trying subtraction.)
4. Repeat until your binary number is 0.
This is roughly O(n^4) in the number of binary digits and O(n log n) in memory. You can get that n^4 closer to n^3 by using a more sophisticated multiplication algorithm.
You could write your own class for handling arbitrarily large integers (which you can represent as an array of integers, or whatever makes the most sense), and implement the operations (*, pow, etc.) yourself. Or you could google "C++ big integer library", and find someone else who has already implemented it.
It is impossible for the computer to compute pow(2,1000000). So the traditional way is not workable.
It is not impossible. For example, Python can do the arithmetic calculation instantly, and the conversion to a decimal number in about two seconds (on my machine). Python has built-in facilities for dealing with large integers that exceed the size of a machine word.
In C++ (and C), a good choice of big integer library is GMP. It is robust, well tested, and actively maintained. It includes a C++ wrapper that uses operator overloading to provide a nice interface (except that there is no C++ operator for the pow() operation).
Here is a C++ example that uses GMP:
#include <gmpxx.h>
#include <iostream>
#include <string>

int main(int, char *[])
{
    mpz_class a, b;
    a = 2;
    // pow has no C++ operator, so call the C-level function on the wrapped values
    mpz_pow_ui(b.get_mpz_t(), a.get_mpz_t(), 1000000);
    std::string s = b.get_str();
    std::cout << "length is " << s.length() << std::endl;
    return 0;
}
The output of the above is
length is 301030
which executes on my machine in 0.18 seconds.
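(Assuming GMP and its C++ wrapper are installed, this typically builds with something like g++ example.cpp -lgmpxx -lgmp.)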
"This is roughly O(n^4) with regards to number of binary digits, and O(nlog(n)) with regards to memory". You can do O(n^(2 + epsilon)) operations (where n is the number of binary digits), and O(n) memory as follows: Let N be an enormous number of binary length n. Compute the residues mod 2 (easy; grab the low bit) and mod 5 (not easy but not terrible; break the binary string into successive strings of four bits; compute the residue mod 5 of each such 4-tuple, and add them up as with casting out 9's for decimal numbers.). By computing the residues mod 2 and 5 you can read off the low decimal digit. Subtract this; divide by 10 (the internet documents ways to do this), and repeat to get the next-lowest digit.
I calculated 2 ** 1000000 and converted it to decimal in 9.3 seconds in Smalltalk, so it's not impossible. Smalltalk has large integer libraries built in.
2 raisedToInteger: 1000000
As mentioned in another answer, you need a library that handles arbitrary precision integer numbers. Once you have that, you do MOD 10 and DIV 10 operations on it to compute the decimal digits in reverse order (least significant to most significant).
The rough idea is something like this:
LargeInteger *a;   // the number to convert, assumed already initialized
char *string;      // output buffer, assumed large enough

while (a != 0) {
    int remainder;
    LargeInteger *quotient;
    remainder = a % 10;            // next decimal digit, least significant first
    *string++ = remainder + '0';
    quotient = a / 10;
    a = quotient;                  // continue with the remaining digits
}
Many details are missing (or wrong) here concerning type conversions, memory management and allocation of objects but it's meant to demonstrate the general technique.
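To make that concrete, here is a self-contained sketch with the details filled in; the limb representation and all names here are illustrative, not from any particular library:
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// A big unsigned integer stored as base-2^32 "limbs", least significant first.
// divmod10 divides the number by 10 in place and returns the remainder,
// which is the next (least significant) decimal digit.
static uint32_t divmod10(std::vector<uint32_t>& limbs)
{
    uint64_t rem = 0;
    for (size_t i = limbs.size(); i-- > 0; )
    {
        uint64_t cur = (rem << 32) | limbs[i];  // long division, one limb at a time
        limbs[i] = static_cast<uint32_t>(cur / 10);
        rem = cur % 10;
    }
    while (!limbs.empty() && limbs.back() == 0)
        limbs.pop_back();  // drop leading zero limbs
    return static_cast<uint32_t>(rem);
}

static std::string to_decimal(std::vector<uint32_t> limbs)
{
    if (limbs.empty())
        return "0";
    std::string digits;
    while (!limbs.empty())
        digits.push_back(static_cast<char>('0' + divmod10(limbs)));
    // Digits were produced least significant first; reverse into reading order
    return std::string(digits.rbegin(), digits.rend());
}

int main()
{
    // 2^64 + 1 as three limbs: {low, middle, high}
    std::vector<uint32_t> n = {1, 0, 1};
    std::cout << to_decimal(n) << "\n";  // 18446744073709551617
}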
It's quite simple with the GNU Multiple Precision library. Unfortunately, I couldn't test this program because it seems I need to rebuild my library after a compiler upgrade, but there's not much room for error!
#include "gmpxx.h"
#include <iostream>
int main() {
mpz_class megabit( "1", 10 );
megabit <<= 1000000;
megabit += 1;
std::cout << megabit << '\n';
}