Double to string conversions for calculator - c++

I am currently developing cCalc, a graphical user interface calculator that looks like the built-in Windows 10 calculator in engineering mode. My project is based on C++17, uses FLTK as the GUI toolkit, and uses long double as the main type for working with numbers. I am using MinGW 10.2.
Today I discovered a problem with converting from long double to std::string: I don't know how to choose the number of decimal places. The C++ language has built-in ways to convert a long double to std::string, but they are not suitable because:
std::to_string(long double) always uses six decimal places. If the user computes 40 + 1, he expects 41; getting 41.000000 is not quite what he expected.
std::ostringstream s with s.precision(n) is also a bad idea: any option with a fixed precision will not work, since significant digits may be discarded.
std::ostringstream s without s.precision(n) is not suitable either, since the automatic choice of the number of decimal places is not always correct. For example:
std::ostringstream s;
s << 3.14159265L;
std::cout << s.str();
This gives 3.14159; three digits carrying valuable information have been lost.
I was programming in C# a few years ago, and as far as I remember, simply calling System.Convert.ToString(double d) worked very well.
Question:
What is a good, freely licensed implementation that solves the described problem? I expect an implementation that uses scientific notation for very large (or very small) numbers.
Also, I do not want to add huge libraries, like Boost, to my project.
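For illustration only (this is a sketch, not something from the original thread): one possible approach is to keep the stream's default floating-point formatting, which drops trailing zeros and switches to scientific notation for very large or very small magnitudes, but raise the precision from the default 6 significant digits to std::numeric_limits<long double>::digits10 (18 for an x87 80-bit long double). The helper name is made up:

#include <limits>
#include <sstream>
#include <string>

// Hypothetical helper: format a long double with digits10 significant digits;
// the default float format drops trailing zeros and picks scientific notation
// automatically when the magnitude calls for it.
std::string format_long_double(long double value)
{
    std::ostringstream out;
    out.precision(std::numeric_limits<long double>::digits10);
    out << value;
    return out.str();
}

With this sketch, 41.0L comes out as "41" and 3.14159265L as "3.14159265"; using max_digits10 instead would guarantee an exact round trip, at the cost of exposing trailing binary-noise digits.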

Related

Why cents for std::put_money()?

I'm wondering why the std::put_money() function accepts cents instead of dollars. Also, looking at the definition on cppreference, it does not say what units the input number should be in.
Is it true that, whatever the currency, we have to pass a decimal number in the smallest unit of that currency (i.e. multiplied by 1.0, 100.0, or 1000.0 as the case may be)? Because that seems to require knowledge of the currency as opposed to the current locale...
The general idea is that you don't want to use floating point with currency, because values with a finite number of decimal digits can be periodic in binary, and given that floating point values have finite precision this leads to surprises when summing them; the usual example is
#include <stdio.h>

int main(void) {
    double v = 0.;
    for (int i = 0; i < 10; ++i) v += 0.1;
    printf("%0.18g\n", v - 1.0);
    return 0;
}
which prints -1.11022302462515654e-16.
A simple approach to deal with the problem is to use integral values for "the smallest non-fractional units of the currency" (thanks @Justin for the quote); this makes sure that when the user inputs $0.10 it is exactly represented and does not lead to any rounding surprises, at least as long as we are dealing with values where exact precision is expected.
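As a minimal illustration of that idea (a sketch, not from the original answer): summing ten times $0.10 stored as 64-bit integer cents gives exactly $1.00, with none of the drift shown above.

#include <cstdint>
#include <cstdio>

int main()
{
    std::int64_t total_cents = 0;                    // money kept as integral cents
    for (int i = 0; i < 10; ++i) total_cents += 10;  // ten times $0.10
    std::printf("$%lld.%02lld\n",
                static_cast<long long>(total_cents / 100),
                static_cast<long long>(total_cents % 100)); // prints $1.00
    return 0;
}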
This is fine and explains the cents, but why long double and not some integral type? Here I'm speculating, but I see two reasonable motivations:
fractional amounts of currency are something that exists, typically for unitary prices (e.g. the price per liter of gasoline); the precision there is generally less of an issue - you are going to multiply it by another floating point value anyway - but you want to be able to read such values;
but most importantly, historically floating point values had the best precision over a wide spectrum of platforms, even for integral values. long long (guaranteed to be at least 64 bit) is a recent addition to the standard, and long was generally 32 bit wide: it would have capped monetary values to a meager ~21 million dollars.
OTOH, even a plain double on most platforms has a 53-bit mantissa, which means that it can represent exactly all integral values up to 9007199254740991 - so, something like 90 trillion dollars in cents; that's good enough to represent the US public debt exactly down to cents, so it's probably precise enough for pretty much anything else. They probably chose long double as "the biggest hammer they can throw at the problem" (even if nowadays it's often no bigger than a plain double).
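A quick illustration of that limit (again a sketch, not from the original answer): every integer up to 2^53 is exactly representable in a double, but 2^53 + 1 already rounds away.

#include <cstdio>

int main()
{
    double max_exact = 9007199254740991.0;   // 2^53 - 1, exactly representable
    std::printf("%.0f\n", max_exact);        // prints 9007199254740991
    std::printf("%.0f\n", max_exact + 2.0);  // 2^53 + 1 is not representable;
                                             // the sum rounds to 9007199254740992
    return 0;
}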
Because that seems to incorporate knowledge of the currency opposed to the current locale...
Yes and no; I think that the idea was that, as long as you use the relevant locale facets both for input and for output, you simply shouldn't really care - the library should do the conversions for you, and you just work with numbers whose exact magnitude shouldn't really matter to you.
That's the theory; but as said in the comments, C and C++ locales are a badly designed piece of software, with an overly complicated design which however falls short when tested for real-world usage.
Honestly, I would never use this stuff "for real":
you can never be sure of how up to date the standard library is, how broken it is (I once had VC++ fail to round-trip Italian-localized numbers), or whether it actually supports the currencies you care about;
you do need to care about what its idea of the "smallest non-fractional unit of the currency" is if you need to talk with anything besides textual IO in the format expected by the library - say, if you have to get the price of a stock from a web service, or if you have built-in data to combine with the user input;
same for serialization in a machine readable format; you don't want to expose yourself to the vagaries of your C runtime and of OS configuration when storing the user data, especially if they are to be exchanged with other applications, especially if said applications run on a different C runtime (it may even be your own application compiled for a different operating system!) or a different language.

C++ convert floating point number to string

I am trying to convert a floating point number to a string. I know you can do it using ostringstream, sprintf, etc., but in the project I am working on I am trying to do it using my own functions only (I am creating my own string class without using any outside functions). I don't need a perfect representation; e.g. I don't mind if large or small numbers come out as 1.0420753e+4, like they do with the standard stringstream.
I know how floating point numbers work (sign, exponent, mantissa) and that they are represented differently from how they are displayed (that is why it's difficult). I know this is possible because the standard C++ library can do it - I just don't know how to do it myself.
EDIT: I have created my own integer version of this (converts int to my own CString class).
First, do not do this yourself. iOS has standard C++ features for formatting floating-point objects, and I expect Android does too.
Second, do not do this yourself. It is hard to do without rounding errors. The techniques for doing it are already known and published, and you should use good references rather than the algorithms you will generally find on Stack Overflow. The classic paper for this is Correctly Rounded Binary-Decimal and Decimal-Binary Conversions by David M. Gay, and here is code from David Gay.
Simple method: divide by 10 until the value is ≤ 1; that tells you after how many digits the '.' should be printed. Multiply the original number by 10 once for each digit you want after the '.', and round. Stringify the resulting integer and insert the '.' (see the sketch below).
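A rough sketch of that scaling-and-rounding idea (not from the original answer, not correctly rounded in all cases, and limited to values that fit after scaling); the function name and the use of std::to_string are for illustration only:

#include <cmath>
#include <cstdint>
#include <string>

// Illustrative only: scale the value so the desired number of fractional
// digits becomes integral, round, stringify the integer, then insert the '.'.
std::string naive_to_string(double value, int frac_digits)
{
    bool negative = value < 0;
    if (negative) value = -value;

    double scaled = value * std::pow(10.0, frac_digits);
    std::uint64_t as_int = static_cast<std::uint64_t>(std::llround(scaled));

    std::string digits = std::to_string(as_int);
    // Pad with leading zeros so at least one digit precedes the point.
    while (digits.size() <= static_cast<std::size_t>(frac_digits))
        digits.insert(digits.begin(), '0');

    if (frac_digits > 0)
        digits.insert(digits.end() - frac_digits, '.');
    if (negative)
        digits.insert(digits.begin(), '-');
    return digits;
}

For example, naive_to_string(10420.753, 3) yields "10420.753"; it breaks down for very large magnitudes and does not do correct rounding in the sense of the Gay paper above.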
Uhm, if you really want to reinvent your own square wheel, then probably the easiest way is to write a converter from float to int (you said you know how the bit pattern works), or maybe even to two ints - one for the fractional part, the other for the rest - and then print them, reusing code that already exists.
Use std::ostringstream:
double d = 2.7818;
std::ostringstream ss;
ss << d;
std::cout << ss.str() << std::endl;

C++: Store large numbers in a float like PHP?

In PHP, if you go above INT_MAX the value is cast to a float, allowing very large (non-fractional) numbers to be formed. Is this possible in C++, or is the way floating point/double precision numbers are stored different?
The reason is that I wish to benchmark large factorials, but something like 80! is far too large for an unsigned integer.
The language will not make the switch for you, but has the datatypes float and double, which are usually 32 bit and 64 bit IEEE floats, respectively.
A 64 bit double has enough range for 80!, but doesn't have enough precision to represent it exactly. The language doesn't have anything built in that can do that: you would need to use a big integer library, for example GMP.
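A minimal sketch of that approach (not from the original answer), using GMP's C++ interface and assuming gmpxx is installed (link with -lgmpxx -lgmp):

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpz_class factorial = 1;                      // arbitrary-precision integer
    for (int i = 2; i <= 80; ++i) factorial *= i; // exact 80!, all 119 digits
    std::cout << factorial << '\n';
    return 0;
}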
Try using the GMP library, or one of the several other big integer libraries available for C++. You may also use string manipulation to calculate large factorials.
C++ doesn't have that kind of "automatic casting" facility, even if you could build a class that mimics such behavior by having an int and a float (a double would be even better; IIRC it lets you get up to 170!) as private fields, plus some operator overloading black magic.
Anyhow, going from integers to floating point you're going to lose precision, so even if you can reach higher numbers, you aren't going to represent them exactly. Actually, if you're moving into floating point territory with factorials you could usually just use Stirling's approximation (but I understand that it does not apply in this case, since it's a benchmark).
If you want to get to arbitrarily big factorials without losing precision, the usual solution is to use some bigint library; you can find several of them easily with Google.
Use one of the bigint libraries, which allow you to create arbitrary-precision ints at the cost of performance, or you will have to write your own class to emulate PHP's hybrid float-int functionality.
Something like this:
class IntFloat {
    union {
        float fval;
        int ival;
    } val;
    bool floatUsed;

public:
    void setVal(float val)
    {
        this->val.fval = val;
        floatUsed = true;
    }
    void setVal(int val)
    {
        this->val.ival = val;
        floatUsed = false;
    }
    // additional code for getters, setters, operators etc.
};
However what PHP does isn't worthy of imitation.
You can find a list of big int libraries on Wikipedia.
PS:
"or are the way they store floating point/double precision numbers different?"
Yes, it is different. C++ stores them directly in the target machine's format, while PHP uses an intermediate representation (bytecode, or in PHP's case opcodes). Thus PHP converts the number to the machine format under the hood.
You can use __float128 (or long double) if the precision is enough and your compiler supports it.

Is there a library class to represent floating point numbers?

I am writing an application which does a lot of manipulation with decimal numbers (e.g. 57.65). As multiplications and divisions quickly erode their accuracy, I would like to store the numbers in a class which preserves their accuracy after manipulation, rather than rely on float and double.
I am talking about something like this:
class FloatingPointNumber {
private:
    long m_mantissa;
    int m_dps; // decimal places
    // so for example 57.65 would be represented as m_mantissa=5765, m_dps=2

public:
    // Overloaded operator for addition
    FloatingPointNumber operator+(FloatingPointNumber n);
    // Other operator overloads follow
};
While it is possible for me to write such a class, it feels a bit like reinventing the wheel and I am sure that there must be some library class somewhere which does this (although this does not seem to exist in STL).
Does anybody know of such a library? Many thanks.
Do you mean something like this?
#include "ttmath/ttmath.h"
#include <iostream>
int main()
{
// bigdouble consists of 1024*4 bytes for the mantissa
// and 256*4 bytes for the exponent.
// This is because my machine is 32-bit!
typedef ttmath::Big<1024, 256> bigdouble; // <Mantissa, Exponent>
bigdouble x = 5.05544;
bigdouble y = "54145454.15484854120248541841854158417";
bigdouble z = x * y * 0.01;
std::cout << z;
return 0;
}
You can specify the number of machine words in the mantissa and the exponent as you like.
I have used TTMath to solve Project Euler puzzles, and I am really pleased with it. I think it is relatively stable and the author is very kind if you have questions.
EDIT: I have also used MAPM in the past. It represents big floats in base 100, so there is no problem converting decimal numbers to it, unlike with base 2 (TTMath uses base 2 to represent its big floats). It has been stable since 2000, as the library page claims, and it has been used in many applications, as you can see on that page. It is a C library with a nice C++ wrapper.
// isPrime() is assumed to be defined elsewhere.
MAPM nextPrime() {
    static MAPM prime = 3;
    MAPM retPrime = prime;
    prime += 2;
    while (isPrime(prime) == false)
        prime += 2;
    return retPrime;
}
BTW, if you are interested in GMP and you are using VS, then you can check out MPIR, which is a GMP port for Windows ;) As for me, I find TTMath more pleasing and easier/faster than everything else I tried, because the library does stack allocations without touching the heap in any way. Basically it is not an arbitrary-precision library: you specify the precision at compile time, as shown above.
There is a list of libraries here.
I have never tried any of them so I can't recommend a single one, however this one is part of the GNU Project so it can't be half bad.
If you want to roll your own, Binary Coded Decimal is probably your best bet.
A list of decimal arithmetic packages, including Robert Klarer's decNumber++, which implements the interfaces specified in the ISO Technical Report on decimal arithmetic types in C++, ISO/IEC TR 24733: C++ Decimal Floating-Point Arithmetic Extensions.
The MPFR (Multiple Precision Floating-point with correct Rounding) library; but if I remember correctly, it is binary floating point.
I have no experience with these libraries, but just as a matter of awareness, there have been 2 major developments that I think are relevant to this question in the last few years...
"posits" - a new floating-point format that is more efficient and less "messy" than IEEE754.
C# 11 has introduced "static abstract interface members" which enables (for our purposes) implementing new numeric types while getting all the same benefits of the built-in numeric types in terms of operator overloading, etc... i.e. truly generic numeric types in C#.
I know of no implementations of "posits" in C#, nor is C# 11 released yet. But again -- these are salient developments related to the question.

C++: How to Convert From Float to String Without Rounding, Truncation or Padding? [duplicate]

This question already has answers here:
Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?
(14 answers)
Closed 6 years ago.
I am facing a problem and am unable to resolve it. Need help from gurus. Here is sample code:
float f=0.01f;
printf("%f",f);
If we check the value of the variable while debugging, f contains 0.0099999998, while the output of printf is 0.010000.
a. Is there any way to force the compiler to assign exactly the same value to a variable of float type?
b. I want to convert a float to a string/character array. How can exactly the same value, and only that value, be converted to a string/character array? I want to make sure that no zeros are padded, no unwanted values are appended, and no digits change, as in the example above.
It is impossible to accurately represent a base 10 decimal number using base 2 values, except for a very small number of values (such as 0.25). To get what you need, you have to switch from the float/double built-in types to some kind of decimal number package.
You could use boost::lexical_cast in this way:
float blah = 0.01;
string w = boost::lexical_cast<string>( blah );
The variable w will contain the text value 0.00999999978. But I can't see when you really need it.
It is preferable to use boost::format to accurately format a float as a string. The following code shows how to do it:
float blah = 0.01;
string w = str( boost::format("%d") % blah ); // w contains exactly "0.01" now
Have a look at this C++ reference. Specifically the section on precision:
float blah = 0.01;
printf ("%.2f\n", blah);
There are uncountably many real numbers.
There are only a finite number of values which the data types float, double, and long double can take.
That is, there will be uncountably many real numbers that cannot be represented exactly using those data types.
The reason that your debugger is giving you a different value is well explained in Mark Ransom's post.
Regarding printing a float without rounding or truncation and with fuller precision: you are missing the precision specifier - the default precision for printf is 6 fractional digits.
try the following to get a precision of 10 digits:
float amount = 0.0099999998;
printf("%.10f", amount);
As a side note, a more C++ way (vs. C-style) to do things is with cout:
float amount = 0.0099999998;
cout.precision(10);
cout << amount << endl;
For (b), you could do
std::ostringstream os;
os << f;
std::string s = os.str();
In truth, using the floating point unit of the chip (most are now integrated into the CPU) will never give exact decimal results, only a fairly rough accuracy. For more accurate results, you could consider defining a class "DecimalString", which uses nybbles as decimal digits and symbols, and attempt to mimic base 10 mathematics using strings. Depending on how long you make the strings, you could even do away with the exponent part altogether: a 256-character string can represent values from roughly 1x10^-254 up to 1x10^+255 in straight decimal using actual ASCII, slightly less if you want a sign, but this may prove significantly slower. You could speed this up by reversing the digit order, so that from left to right the digits read
units, tens, hundreds, thousands...
Simple example:
e.g. "0021" becomes 1200
This would need "shifting" left and right to make the decimal points line up before each operation; the best bet is to start with the ADD and SUB functions, as you will then build on them in the MUL and DIV functions. If you are on a large machine, you could make the strings theoretically as long as your heart desires!
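As a rough sketch of that reversed-digit idea (not from the original answer; the function name is illustrative), addition becomes a left-to-right walk with a carry:

#include <algorithm>
#include <cstddef>
#include <string>

// Digits are stored least-significant first, so "0021" means 1200.
std::string add_reversed(const std::string& a, const std::string& b)
{
    std::string result;
    int carry = 0;
    for (std::size_t i = 0; i < std::max(a.size(), b.size()) || carry; ++i) {
        int sum = carry;
        if (i < a.size()) sum += a[i] - '0';
        if (i < b.size()) sum += b[i] - '0';
        result.push_back(static_cast<char>('0' + sum % 10));
        carry = sum / 10;
    }
    return result;
}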
Equally, you could use the sprintf function from stdio.h, plus the ecvt and fcvt functions from stdlib.h (or at least, they should be there!).
int sprintf(char* dst,const char* fmt,...);
char *ecvt(double value, int ndig, int *dec, int *sign);
char *fcvt(double value, int ndig, int *dec, int *sign);
sprintf returns the number of characters it wrote to the string; for example:
float f = 12.00f;
char buffer[32];
sprintf(buffer, "%4.2f", f); // returns 5; on error it returns a negative value
ecvt and fcvt return pointers to static char* locations containing the null-terminated decimal representation of the number, with no decimal point and the most significant digit first. The offset of the decimal point is stored in dec, the sign in sign (1 = negative, 0 = positive), and ndig is the number of significant digits to store. If dec < 0, then you have to pad with -dec zeros before the decimal point. If you are unsure, and you are not working on a Windows 7 system (which sometimes will not run old DOS 3 programs), look for Turbo C version 2 for DOS 3; there are still one or two downloads available. It is a relatively small package from Borland, a compact DOS C/C++ editor/compiler that even comes with TASM, the 16-bit 386/486 assembler. These functions are covered in its help files, as are many other useful nuggets of information.
All three routines are declared in stdio.h or stdlib.h, or should be, though I have found that on Visual Studio 2010 they are anything but standard, often shadowed by overloads dealing with wide (WORD-sized) characters and pushing you to use its own specific functions instead... "So much for the standard library," I mutter to myself almost every time, "maybe they ought to get a better dictionary!"
You would need to consult your platform standards to determine how best to produce the correct format; you would need to display it as a*b^c, where 'a' is the significand component that holds the sign, 'b' is the implementation-defined base (likely fixed by a standard), and 'c' is the exponent used for that number.
Alternatively, you could just display it in hex, it'd mean nothing to a human, though, and it would still be binary for all practical purposes. (And just as portable!)
To answer your second question:
it IS possible to exactly and unambiguously represent floats as strings. However, this requires a hexadecimal representation. For instance, 1/16 = 0.1 and 10/16 is 0.A.
With hex floats, you can define a canonical representation. I'd personally use a fixed number of digits representing the underlying number of bits, but you could also decide to strip trailing zeroes. There's no confusion possible on which trailing digits are zero.
Since the representation is exact, the conversions are reversible: f==hexstring2float(float2hexstring(f))
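A minimal sketch of that round trip using the standard hex-float conversions ("%a" in printf, and strtof, which accepts hex-float input); the helper names mirror the ones above and are illustrative only:

#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <string>

std::string float2hexstring(float f)
{
    char buf[64];
    std::snprintf(buf, sizeof buf, "%a", f); // 0.01f -> something like "0x1.47ae14p-7"
    return std::string(buf);
}

float hexstring2float(const std::string& s)
{
    return std::strtof(s.c_str(), nullptr);
}

int main()
{
    float f = 0.01f;
    assert(f == hexstring2float(float2hexstring(f))); // exact round trip
    return 0;
}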