How to define and use numbers smaller than 2e-308 - c++

The smallest positive normalized double is 2.22507e-308. Is there any way I can use smaller numbers?
I found a library called gmp, but I have no idea how to use it, the documentation is not clear at all, and I'm not sure whether it works on Windows.
I don't expect you to give me full instructions, but maybe at least some piece of advice.

If you need really big precision, then give gmp a chance. I am sure it works on Windows too.
If you just need bigger precision than double, try long double. It may or may not give you more; whether it does depends on your compiler and target platform.
In my case it does give more (gcc 6, x86_64 Linux):
Test program:
#include <iostream>
#include <limits>

int main() {
    std::cout << "float:"
              << " bytes=" << sizeof(float)
              << " min=" << std::numeric_limits<float>::min()
              << std::endl;
    std::cout << "double:"
              << " bytes=" << sizeof(double)
              << " min=" << std::numeric_limits<double>::min()
              << std::endl;
    std::cout << "long double:"
              << " bytes=" << sizeof(long double)
              << " min=" << std::numeric_limits<long double>::min()
              << std::endl;
}
Output:
float: bytes=4 min=1.17549e-38
double: bytes=8 min=2.22507e-308
long double: bytes=16 min=3.3621e-4932

If your compiler/architecture allows it, you could use something like long double, which compiles to an 80-bit float (though I think it aligns to 128 bits, so there's a bit of wasted space) and has more range and precision than a typical double value. Not all compilers will do that though, and on many compilers long double is equivalent to double, at 64 bits.
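If you're unsure what your toolchain actually gives you, a quick sanity check is to print the mantissa width; a minimal sketch (the counts in the comments are the common cases, your platform may differ):
#include <iostream>
#include <limits>

int main() {
    // 53 mantissa bits means long double is the same as double;
    // 64 means x87 80-bit extended; 113 means IEEE quad.
    std::cout << "double mantissa bits:      "
              << std::numeric_limits<double>::digits << '\n';
    std::cout << "long double mantissa bits: "
              << std::numeric_limits<long double>::digits << '\n';
}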
"gmp" is one library you could use for extended precision floats. I generally recommend boost.multiprecision, which includes gmp, though personally, I'd use cpp_bin_float or cpp_dec_float for my multiprecision needs (the former is IEEE756 compliant, the latter isn't)
As for how to use them: I haven't used gmp, so I can't comment on its syntax, but cpp_bin_float is pretty easy to use:
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>

typedef boost::multiprecision::cpp_bin_float_quad quad;

int main() {
    quad a = 34;
    quad b = 17.95467;
    b += a;
    for (int i = 0; i < 10; i++) {
        b *= b;
    }
    std::cout << "This might be rather big: " << b << std::endl;
}

If you change your compiler to gcc or Intel, the long double type will be supported with bigger precision (80-bit). With the default Visual Studio compiler, I have no advice for you on what to do.

Related

double and float memory allocation on modern computers

I am learning about double and float and what the difference is. I ran the piece of code posted below to see how much memory is allocated depending on how many digits and decimal points I use, but it seems that no matter how many digits I type, I always get a size of 8 bytes for both float and double. I learned that float occupies 4 bytes, but I'm starting to think that on modern computers that's not the case; perhaps that was true back in the day, and today we can use them interchangeably without affecting the results? Am I missing something here?
// C++ program to print sizes of data types
#include <iostream>
using namespace std;

int main()
{
    cout << "Size of int : " << sizeof(11111111111111111) << " bytes" << endl;
    cout << "Size of float : " << sizeof(11111111111111111111111111111111111111111111111111111111111.1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111) << " bytes" << endl;
    cout << "Size of double : " << sizeof(.11111111111111111111111111111111111111111111111111111) << " bytes" << endl;
    return 0;
}
You print the size of double on both lines, because both floating-point literals have the type double. If you want to create a float literal, append an f to it: 1.0f. That has the type float. If you don't append an f, it will have the type double.
Or you can just simply use sizeof(float).
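To illustrate the difference the suffix makes, here is a small sketch (the byte counts in the comments are the typical values on mainstream platforms, not a guarantee):
#include <iostream>

int main() {
    std::cout << sizeof(1.0)   << '\n'; // double literal: typically 8 bytes
    std::cout << sizeof(1.0f)  << '\n'; // float literal: typically 4 bytes
    std::cout << sizeof(float) << '\n'; // or simply query the type itself
}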

std::cout with floating-point numbers

I'm using Visual Studio 2015 to print two floating-point numbers:
double d1 = 1.5;
double d2 = 123456.789;
std::cout << "value1: " << d1 << std::endl;
std::cout << "value2: " << d2 << std::endl;
std::cout << "maximum number of significant decimal digits (value1): " << -std::log10(std::nextafter(d1, std::numeric_limits<double>::max()) - d1) << std::endl;
std::cout << "maximum number of significant decimal digits (value2): " << -std::log10(std::nextafter(d2, std::numeric_limits<double>::max()) - d2) << std::endl;
This prints the following:
value1: 1.5
value2: 123457
maximum number of significant decimal digits (value1): 15.6536
maximum number of significant decimal digits (value2): 10.8371
Why is 123457 printed for the value 123456.789? Does the C++ standard allow anything to be displayed for floating-point numbers when std::cout is used without std::setprecision()?
The rounding off happens because of the stream's default precision, mandated by the C++ standard, which can be seen by writing:
std::cout << std::cout.precision();
The output will show 6, which is the default number of significant digits printed by std::cout. That is why it automatically rounds the floating-point number to 6 significant digits.
What you have pointed out is actually one of those many things that the standardization committee should reconsider regarding the standard iostream in C++. Such things work well when you write:
printf ("%f\n", d2);
but not with std::cout, where you need to use std::setprecision because its default formatting is similar to %g rather than %f in printf. So you need to write:
std::cout << std::setprecision(10) << "value2: " << d2 << std::endl;
But if you don't like this method and are using C++11 (and onwards), then you can also write:
std::cout << "value2: " << std::to_string(d2) << std::endl;
This will give you the same result as printf ("%f\n", d2);.
A much better method is to cancel the rounding that occurs in std::cout by using std::fixed:
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::fixed;
    double d = 123456.789;
    std::cout << d;
    return 0;
}
Output:
123456.789000
So I guess your problem is solved!
I think the problem here is that the C++ standard is not written to be easy to read; it is written to be precise and not repeat itself. So if you look up operator<<(double), it doesn't say anything other than "it uses num_put", because that is how cout << some_float_value is implemented.
The default behaviour is what printf("%g", value); does [table 88 in the n3337 version of the C++ standard explains the equivalence between printf and C++ formatting]. So if you want the effect of %.16g, you need to change the precision by calling setprecision(16).
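To illustrate that equivalence, a small sketch (the printed values assume a typical IEEE 754 double):
#include <cstdio>
#include <iomanip>
#include <iostream>

int main() {
    double d = 123456.789;
    std::cout << d << '\n';                          // %g-like default: 123457
    std::printf("%g\n", d);                          // same output: 123457
    std::cout << std::setprecision(16) << d << '\n'; // like %.16g: 123456.789
}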

Is there something wrong with the way I am using a long double?

I have recently become interested in learning about programming in C++, because I want to get a bit deeper understanding of the way computers work and handle instructions. I thought I would try out the data types, but I don't really understand what's happening with my output...
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    float fValue = 123.456789;
    cout << setprecision(20) << fixed << fValue << endl;
    cout << "Size of float: " << sizeof(float) << endl;

    double dValue = 123.456789;
    cout << setprecision(20) << fixed << dValue << endl;
    cout << "Size of double: " << sizeof(double) << endl;

    long double lValue = 123.456789;
    cout << setprecision(20) << fixed << lValue << endl;
    cout << "Size of long double: " << sizeof(long double) << endl;
    return 0;
}
The output I expected would be something like:
123.45678710937500000000
Size of float: 4
123.45678900000000000000
Size of double: 8
123.45678900000000000000
Size of long double: 16
This is my actual output:
123.45678710937500000000
Size of float: 4
123.45678900000000000000
Size of double: 8
-6518427077408613100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.00000000000000000000
Size of long double: 12
Any ideas on what happened would be much appreciated, thanks!
Edit:
System:
Windows 10 Pro Technical Preview
64-bit Operating System, x64-based processor
Eclipse CDT 8.5
From the patch that fixed this in earlier versions:
MinGW uses the Microsoft runtime DLL msvcrt.dll. Here lies a problem: while gcc creates 80 bits long doubles, the MS runtime accepts 64 bit long doubles only.
This bug happens to me when I use 4.8.1 revision 4 from MinGW-get (the most recent version it offers), but not when I use 4.8.1 revision 5.
So you are not using long double wrong (although you would get better accuracy with long double lValue = 123.456789L, which makes sure the literal isn't taken as a double and then cast to a long double).
The easiest way to fix this would be to simply change the version of MinGW you are using to 4.9 or 4.7, depending on what you need (you can get 4.9 here).
If you are willing to instead use printf, you could change to printf("%Lf", ...), and either:
add the flag -posix when you compile with g++, or
add #define __USE_MINGW_ANSI_STDIO 1 before #include <cstdio> (found this in the original patch).
Finally, you can even just cast to a double whenever you try to print out the long double (there is some loss of accuracy, but it shouldn't matter when just printing out numbers); see the sketch below.
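A minimal sketch combining the macro workaround and the cast fallback (this is MinGW-specific; whether the shim is needed depends on your MinGW version):
// MinGW-specific: the define must appear before any stdio include
#define __USE_MINGW_ANSI_STDIO 1
#include <cstdio>

int main() {
    long double lValue = 123.456789L;     // L suffix: a true long double literal
    std::printf("%Lf\n", lValue);         // works once the ANSI stdio shim is on
    std::printf("%f\n", (double)lValue);  // fallback: cast, small accuracy loss
    return 0;
}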
To find more details, you can also look at my blog post on this issue.
Update: If you want to continue to use MinGW 4.8, you can also just download a different distribution of MinGW, which didn't have that problem for me.

std::cout << Predicting the automatic field width displayed for an arbitrary double

I'm displaying a large number of doubles on the console, and I would like to know in advance how many decimal places std::cout will decide to display for a given double. This is basically so I can make it look pretty in the console.
e.g. (pseudo-code)
field_width = find_maximum_display_precision_that_cout_will_use( whole_set_of_doubles );
...
// Every cout statement:
std::cout << std::setw( field_width ) << double_from_the_set << std::endl;
I figure cout "guesses" a good precision to display based on the double. For example, it seems to display
std::cout << sqrt(2) << std::endl;
as 1.41421, but also
std::cout << (sqrt(0.5)*sqrt(0.5) + sqrt(1.5)*sqrt(1.5)) << std::endl;
as 2 (rather than 2.000000000000?????? or 1.99999999?????). Well, maybe this calculates to exactly 2.0, but I don't think that sqrt(2) will calculate to exactly 1.41421, so std::cout has to make some decision about how many decimal places to display at some point, right?
Is there any way to predict this, in order to formulate a find_maximum_display_precision...() function?
What you need is the fixed iomanip:
http://www.cplusplus.com/reference/iostream/manipulators/fixed/
double d = 10.0 / 3.0; // note: 10/3 would be integer division, yielding 3
std::cout << std::setprecision(5) << std::fixed << d << std::endl;
Sometimes C++ I/O bites. Making pretty output is one of those times. The C printf family is easier to control, more understandable, more terse, and isn't plagued with those truly awful ios:: global variables. If you need to use C++ output for other reasons, you can always sprintf/snprintf to a string buffer and then print that using the << stream operator. IMHO, if you don't need to use C++ output, don't. It is ugly and verbose.
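For instance, a small sketch of the snprintf-then-stream approach (the format string is just an example):
#include <cstdio>
#include <iostream>

int main() {
    double d = 123456.789;
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%10.3f", d); // full printf-style control
    std::cout << buf << '\n';                     // then hand it to the stream
}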
In your question you are mixing up precision and width, which are two different things.
Other answers concentrate on precision, but the given precision is the maximum, not the minimum, of displayed digits. It does not pad with trailing zeros unless ios::fixed or ios::scientific is set.
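A small sketch of that behaviour, assuming the default stream state:
#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(5) << 2.0 << '\n';               // "2": no trailing zeros
    std::cout << std::setprecision(5) << std::fixed << 2.0 << '\n'; // "2.00000": padded
}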
Here is a solution to determine the number of characters used for output, including sign and powers of 10:
#include <string>
#include <sstream>
#include <vector>

size_t max_width(const std::vector<double>& v)
{
    size_t max = 0;
    for (size_t i = 0; i < v.size(); ++i)
    {
        std::ostringstream out;
        // optional: set precision, width, etc. to the same as in std::cout
        out << v[i];
        size_t length = out.str().size();
        if (length > max) max = length;
    }
    return max;
}
Use std::cout.precision() to determine the current precision.
Example:
#include <iostream>
#include <iomanip>

int main(void)
{
    double x = 3.1415927;
    std::cout << "Pi is " << std::setprecision(4) << x << std::endl;
    return 0;
}
This would display:
Pi is 3.142
This link also includes an explanation of std::ios_base::precision():
http://www.cplusplus.com/reference/iostream/ios_base/precision/

Actual long double precision does not agree with std::numeric_limits

Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while...
std::numeric_limits<long double>::max_exponent10 = 4932
...as is expected, when a long double is actually set to a value with an exponent greater than 308, it becomes inf; i.e., in reality it only has 64-bit precision instead of 80-bit.
Also, sizeof() is showing long doubles to be 16 bytes, which they should be.
Finally, using <limits.h> gives the same results as <limits>.
Does anyone know where the discrepancy might be?
long double x = 1e308, y = 1e309;
cout << std::numeric_limits<long double>::max_exponent10 << endl;
cout << x << '\t' << y << endl;
cout << sizeof(x) << endl;
gives
4932
1e+308 inf
16
It's because 1e309 is a literal that gives a double. You need to use a long double literal, 1e309L.
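A minimal sketch of the fix, assuming a platform where long double really has the 80-bit extended range (as on the asker's x86_64 Mac):
#include <iostream>

int main() {
    long double x = 1e308L, y = 1e309L;       // L suffix: long double literals
    std::cout << x << '\t' << y << std::endl; // 1e+308  1e+309, no more inf
}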