Why cents for std::put_money()? - c++

I'm wondering why the std::put_money() function accepts cents instead of dollars. Also, looking at the definition on cppreference, it does not say what units the input number should be in.
Is it true that, whatever the currency, we have to use a decimal number at the lowest possible decimal value of said currency? (i.e. * 1.0, * 100.0, or * 1000.0 as the case may be?) Because that seems to incorporate knowledge of the currency as opposed to the current locale...

The general idea is that you don't want to use floating point with currency, because values with a finite number of decimal digits can be periodic in binary, and given that floating point values have finite precision this leads to surprises when summing them; the usual example is
#include <stdio.h>

int main(void) {
    double v = 0.;
    for (int i = 0; i < 10; ++i) v += 0.1;
    printf("%0.18g\n", v - 1.0f);
    return 0;
}
which prints -1.11022302462515654e-16.
A simple approach to deal with the problem is to use integral values for "the smallest non-fractional units of the currency" (thanks @Justin for the quote); this makes sure that when the user inputs $ 0.10 it's exactly represented, and does not lead to any rounding surprise, at least as long as we are dealing with values where exact precision is expected.
This is fine and explains the cents, but why long double and not some integral type? Here I'm speculating, but I see two reasonable motivations:
fractional amounts of currency are something that exists, typically for unitary prices (e.g. the price per liter of gasoline); the precision there is generally less of an issue - you are going to multiply it by another floating point value anyway - but you want to be able to read such values;
but most importantly, historically floating point values had the best precision over a wide spectrum of platforms, even for integral values. long long (guaranteed to be at least 64 bit) is a recent addition to the standard, and long was generally 32 bit wide: it would have capped monetary values to a meager ~21 million dollars.
OTOH, even a plain double on most platforms has a 53-bit mantissa, which means that it can represent exactly integral values up to 9007199254740991 - so, something like 90 thousand billion dollars; that's good enough to represent exactly the US public debt down to cents, so it's probably precise enough for pretty much anything else. They probably chose long double as "the biggest hammer they can throw at the problem" (even if nowadays it's often no bigger than a plain double).
Because that seems to incorporate knowledge of the currency opposed to the current locale...
Yes and no; I think that the idea was that, as long as you use the relevant locale facets both for input and for output, you simply shouldn't really care - the library should do the conversions for you, and you just work with numbers whose exact magnitude shouldn't really matter to you.
That's the theory; but as said in the comments, C and C++ locales are a badly designed piece of software, with an overly complicated design which however falls short when tested for real-world usage.
Honestly, I would never use this stuff "for real":
you can never be sure of how up to date the standard library is, how broken it is (I once had VC++ fail to round-trip Italian-localized numbers), or whether it actually supports the currencies you care about;
you do need to care about its idea of the "smallest non-fractional unit of the currency" if you need to talk with anything besides textual IO in the format expected by the library - say, if you have to get the price of a stock from a web service, or if you have built-in data to combine with the user input;
same for serialization in a machine readable format; you don't want to expose yourself to the vagaries of your C runtime and of OS configuration when storing the user data, especially if they are to be exchanged with other applications, especially if said applications run on a different C runtime (it may even be your own application compiled for a different operating system!) or a different language.

Related

What data type, scheme, and how many bits should be used to store a FOREX price? [duplicate]

I know that a float isn't appropriate to store currency values because of rounding errors. Is there a standard way to represent money in C++?
I've looked in the boost library and found nothing about it. In java, it seems that BigInteger is the way but I couldn't find an equivalent in C++. I could write my own money class, but prefer not to do so if there is something tested.
Don't store it just as cents, since you'll accumulate errors when multiplying for taxes and interest pretty quickly. At the very least, keep an extra two significant digits: $12.45 would be stored as 124,500. If you keep it in a signed 32 bit integer, you'll have $200,000 to work with (positive or negative). If you need bigger numbers or more precision, a signed 64 bit integer will likely give you all the space you'll need for a long time.
It might be of some help to wrap this value in a class, to give you one place for creating these values, doing arithmetic on them, and formatting them for display. This would also give you a central place to carry around which currency it being stored (USD, CAD, EURO, etc).
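A minimal sketch of such a wrapper, assuming the four-decimal-place storage suggested above (the class name and interface are illustrative, not a standard API):

```cpp
#include <cstdint>
#include <string>

// Amounts stored as 1/10000 of a dollar, per the suggestion above.
class Money {
public:
    explicit Money(std::int64_t units_e4) : units_e4_(units_e4) {}

    static Money from_dollars_cents(std::int64_t dollars, std::int64_t cents) {
        return Money(dollars * 10000 + cents * 100);
    }

    Money operator+(Money other) const { return Money(units_e4_ + other.units_e4_); }
    Money operator-(Money other) const { return Money(units_e4_ - other.units_e4_); }

    // Round to whole cents (half away from zero) for display.
    std::int64_t to_cents() const {
        std::int64_t sign = units_e4_ < 0 ? -1 : 1;
        return sign * ((sign * units_e4_ + 50) / 100);
    }

    std::string to_string() const {
        std::int64_t cents = to_cents();
        std::int64_t sign = cents < 0 ? -1 : 1;
        cents *= sign;
        return (sign < 0 ? "-$" : "$") + std::to_string(cents / 100) + "." +
               (cents % 100 < 10 ? "0" : "") + std::to_string(cents % 100);
    }

private:
    std::int64_t units_e4_;  // amount in 1/10000 dollar units
};
```

With this, $12.45 is stored as 124500 internally, exactly as described, and the formatting and currency bookkeeping have a single home.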
Having dealt with this in actual financial systems, I can tell you you probably want to use a number with at least 6 decimal places of precision (assuming USD). Hopefully since you're talking about currency values you won't go way out of whack here. There are proposals for adding decimal types to C++, but I don't know of any that are actually out there yet.
The best native C++ type to use here would be long double.
The problem with other approaches that simply use an int is that you have to store more than just your cents. Often financial transactions are multiplied by non-integer values, and that's going to get you in trouble, since $100.25 translated to 10025 * 0.000123523 (e.g. APR) is going to cause problems. You're going to eventually end up in floating point land, and the conversions are going to cost you a lot.
Now the problem doesn't happen in most simple situations. I'll give you a precise example:
Given several thousand currency values, if you multiply each by a percentage and then add them up, you will end up with a different number than if you had multiplied the total by that percentage, if you do not keep enough decimal places. Now this might work in some situations, but you'll often be several pennies off pretty quickly. In my general experience, make sure you keep a precision of up to 6 decimal places (while making sure that the remaining precision is available for the whole number part).
Also understand that it doesn't matter what type you store it with if you do math in a less precise fashion. If your math is being done in single precision land, then it doesn't matter if you're storing it in double precision. Your precision will be correct to the least precise calculation.
Now that said, if you do no math other than simple addition or subtraction and then store the number then you'll be fine, but as soon as anything more complex than that shows up, you're going to be in trouble.
Look into the relatively recent Intel® Decimal Floating-Point Math Library. It's specifically for finance applications and implements some of the new standards for decimal floating-point arithmetic (IEEE 754r).
The biggest issue is rounding itself!
19% of 42,50 € = 8,075 €. Due to the German rules for rounding this is 8,08 €. The problem is, that (at least on my machine) 8,075 can't be represented as double. Even if I change the variable in the debugger to this value, I end up with 8,0749999....
And this is where my rounding function (and any other based on floating point logic that I can think of) fails, since it produces 8,07 €. The significant digit is 4, so the value is rounded down. And that is plain wrong, and you can't do anything about it unless you avoid using floating point values wherever possible.
It works great if you represent 42,50 € as Integer 42500000.
42500000 * 19 / 100 = 8075000. Now you can apply the rounding rule above, giving 8080000. This can easily be transformed to a currency value for display purposes: 8,08 €.
But I would always wrap that up in a class.
I would suggest that you keep a variable for the number of cents instead of dollars. That should remove the rounding errors. Displaying it in the standard dollars/cents format should be a view concern.
You can try decimal data type:
https://github.com/vpiotr/decimal_for_cpp
Designed to store money-oriented values (money balance, currency rate, interest rate), user-defined precision. Up to 19 digits.
It's a header-only solution for C++.
You say you've looked in the boost library and found nothing there.
But there you have multiprecision/cpp_dec_float which says:
The radix of this type is 10. As a result it can behave subtly differently from base-2 types.
So if you're already using Boost, this should work well for currency values and operations, as it's a base-10 number type with 50 or 100 digits of precision (a lot).
See:
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_dec_float.hpp>

int main()
{
    float bogus = 1.0 / 3.0;
    boost::multiprecision::cpp_dec_float_50 correct = 1.0 / 3.0;
    std::cout << std::setprecision(16) << std::fixed
              << "float: " << bogus << std::endl
              << "cpp_dec_float: " << correct << std::endl;
    return 0;
}
Output:
float: 0.3333333432674408
cpp_dec_float: 0.3333333333333333
*I'm not saying float (base 2) is bad and decimal (base 10) is good. They just behave differently...
** I know this is an old post and boost::multiprecision was introduced in 2013, so wanted to remark it here.
Know YOUR range of data.
A float is only good for 6 to 7 digits of precision, so that means a max of about +-9999.99 without rounding. It is useless for most financial applications.
A double is good for 13 digits, thus: ±99,999,999,999.99. Still, be careful when using large numbers. Recognize that subtracting two similar results strips away much of the precision (see a book on Numerical Analysis for potential problems).
32 bit integer is good to +-2Billion (scaling to pennies will drop 2 decimal places)
64 bit integer will handle any money, but again, be careful when converting, and multiplying by various rates in your app that might be floats/doubles.
The key is to understand your problem domain. What legal requirements do you have for accuracy? How will you display the values? How often will conversion take place? Do you need internationalization? Make sure you can answer these questions before you make your decision.
Whatever type you do decide on, I would recommend wrapping it up in a "typedef" so you can change it at a different time.
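For instance, with a using-alias (the modern spelling of a typedef; the name is illustrative):

```cpp
#include <cstdint>

// Hide the representation behind an alias, as suggested; if the
// requirements change later, only this one line needs to change.
using money_cents = std::int64_t;  // whole cents

money_cents add(money_cents a, money_cents b) { return a + b; }
```

So `add(1245, 55)` yields 1300 cents, i.e. $13.00, and switching to a wider type or a class later is a one-line change.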
It depends on your business requirements with regards to rounding. The safest way is to store an integer with the required precision and know when/how to apply rounding.
Store the dollar and cent amount as two separate integers.
Integers, always--store it as cents (or whatever the lowest unit of the currency you are programming for is). The problem is that no matter what you do with floating point, someday you'll find a situation where the calculation will differ if you do it in floating point. Rounding at the last minute is not the answer, as real currency calculations are rounded as they go.
You can't avoid the problem by changing the order of operations, either--this fails when you have a percentage that leaves you without a proper binary representation. Accountants will freak if you are off by a single penny.
I would recommend using a long int to store the currency in the smallest denomination (for example, American money would be cents), if a decimal based currency is being used.
Very important: be sure to name all of your currency values according to what they actually contain. (Example: account_balance_cents) This will avoid a lot of problems down the line.
(Another example where this comes up is percentages. Never name a value "XXX_percent" when it actually contains a ratio not multiplied by a hundred.)
The solution is simple: store to whatever accuracy is required, as a shifted integer. But when reading values in, convert to a double, so that calculations suffer fewer rounding errors. Then when storing to the database, multiply to whatever integer accuracy is needed; but before truncating to an integer, add +/- 1/10 to compensate for truncation errors, or +/- 51/100 to round.
Easy peasy.
The GMP library has "bignum" implementations that you can use for the arbitrary sized integer calculations needed for dealing with money. See the documentation for mpz_class (warning: the documentation is horribly incomplete, though the full range of arithmetic operators is provided).
One option is to store $10.01 as 1001, and do all calculations in pennies, dividing by 100D when you display the values.
Or, use floats, and only round at the last possible moment.
Often the problems can be mitigated by changing order of operations.
Instead of value * .10 for a 10% discount, use (value * 10)/100, which will help significantly. (remember .1 is a repeating binary)
I'd use signed long for 32-bit and signed long long for 64-bit. This will give you maximum storage capacity for the underlying quantity itself. I would then develop two custom manipulators. One that converts that quantity based on exchange rates, and one that formats that quantity into your currency of choice. You can develop more manipulators for various financial operations / and rules.
This is a very old post, but I figured I'd update it a little since it's been a while and things have changed. I have posted some code below which represents the best way I have been able to represent money using the long long integer data type in the C programming language.
#include <stdio.h>

int main()
{
    // make BIG money from cents and dollars
    signed long long int cents = 0;
    signed long long int dollars = 0;

    // get the amount of cents
    printf("Enter the amount of cents: ");
    scanf("%lld", &cents);

    // get the amount of dollars
    printf("Enter the amount of dollars: ");
    scanf("%lld", &dollars);

    // calculate the amount of dollars
    long long int totalDollars = dollars + (cents / 100);

    // calculate the amount of cents
    long long int totalCents = cents % 100;

    // print the amount of dollars and cents
    printf("The total amount is: %lld dollars and %lld cents\n", totalDollars, totalCents);
    return 0;
}
As other answers have pointed out, you should either:
Use an integer type to store whole units of your currency (ex: $1) and fractional units (ex: 10 cents) separately.
Use a base 10 decimal data type that can exactly represent real decimal numbers such as 0.1. This is important since financial calculations are based on a base 10 number system.
The choice will depend on the problem you are trying to solve. For example, if you only need to add or subtract currency values then the integer approach might be sensible. If you are building a more complex system dealing with financial securities then the decimal data type approach may be more appropriate.
As another answer points out, Boost provides a base 10 floating point number type that serves as a drop-in replacement for the native C++ floating-point types, but with much greater precision. This might be convenient to use if your project already uses other Boost libraries.
The following example shows how to properly use this decimal type:
#include <iostream>
#include <iomanip>
#include <limits>
#include <boost/multiprecision/cpp_dec_float.hpp>
using namespace boost::multiprecision;

int main() {
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10);

    double d1 = 1.0 / 10.0;
    cpp_dec_float_50 dec_incorrect = 1.0 / 10.0; // Incorrect! We are constructing our decimal data type from the binary representation of the double value of 1.0 / 10.0
    cpp_dec_float_50 dec_correct(cpp_dec_float_50(1.0) / 10.0);
    cpp_dec_float_50 dec_correct2("0.1"); // Constructing from a decimal digit string.

    std::cout << d1 << std::endl;            // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_incorrect << std::endl; // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_correct << std::endl;   // 0.1
    std::cout << dec_correct2 << std::endl;  // 0.1
    return 0;
}
Notice how even if we define a decimal data type but construct it from a binary representation of a double, then we will not obtain the precision that we expect. In the example above, both the double d1 and the cpp_dec_float_50 dec_incorrect are the same because of this. Notice how they are both "correct" to about 17 decimal places which is what we would expect of a double in a 64-bit system.
Finally, note that the boost multiprecision library can be significantly slower than the fastest high precision implementations available. This becomes evident at high digit counts (about 50+); at low digit counts the Boost implementation can be comparable to other, faster implementations.
Sources:
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/floatbuiltinctor.html
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/caveats.html
Our financial institution uses "double". Since we're a "fixed income" shop, we have lots of nasty complicated algorithms that use double anyway. The trick is to be sure that your end-user presentation does not overstep the precision of double. For example, when we have a list of trades with a total in trillions of dollars, we have to be sure that we don't print garbage due to rounding issues.
Go ahead and write your own money (http://junit.sourceforge.net/doc/testinfected/testing.htm) or currency class (depending on what you need), and test it.


why decimal float should be used in financial calculations while it has rounding error

I am currently working on a stock market related project in C++, involving a lot of float-typed values like prices and indexes.
I have read a lot of advice saying that you should use decimal floats in money-related arithmetic.
Why not use Double or Float to represent currency?
Difference between decimal, float and double in .NET?
To my understanding, the difference between float and decimal float is the base in which the exponent part is interpreted: float uses base 2 and decimal float uses base 10. When using decimal float you still get rounding errors; you still cannot express 1/3 (correct me if I am wrong). I guess it's quite possible to multiply someone's account balance by 30% and have a rounding error occur, and after a few more calculations the rounding error might propagate even further. Besides a bigger number range, why should I use decimal float in financial arithmetic?
Depending on what financial transactions you're performing, rounding errors are likely to be inevitable. If an item costs $1.50 with 7% sales tax, you aren't going to be charged $1.605; the price you pay will be either $1.60 or $1.61. (US currency units theoretically include "mils", or thousandths of a dollar, but the smallest denomination coin is $0.01, and almost all transactions are rounded to the nearest cent.)
If you're doing simple calculations (just adding and subtracting quantities and multiplying them by integers), all the results will be whole numbers of cents. If you use binary floating-point representing the number of dollars, most amounts will not be representable; a calculation that should yield $0.01 might yield $0.01000000000000000020816681711721685132943093776702880859375.
You can avoid that problem by using integers to represent the number of cents (or, equivalently, using fixed-point if the language supports it) or by using decimal floating-point that can represent 0.01 exactly.
But for more complex operations, like computing 7% sales tax, dividing a sum of money into 3 equal parts, or especially compound interest, there are still going to be results that aren't exactly representable unless you use an arbitrary-precision package like GMP.
As I understand it, there are laws and regulations that specify exactly how rounding errors are to be resolved. If you apply 7% sales tax to $1.50, you can't legally pick between $1.60 and $1.61; the law tells you exactly which one is legally correct.
If you're writing financial software to be used by other people, you need to find out exactly what the regulations say. Once you know that, you can determine what representation (integers, fixed-point, decimal floating-point, or whatever) can best be used to get the legally required results.
(Disclaimer: I do not know what these regulations actually say.)
At least in the USA, most financial-type companies are required to use decimal-based math. Mainframes since the days of the IBM 360 can perform math on variable-length strings of packed decimal. Typically some form of fixed-point numbers is used, with a set number of digits after the decimal point. High-level languages like COBOL support packed (or unpacked) decimal numbers. In the case of IBM mainframes, there's a lot of legacy assembly code to go along with the COBOL code, partly because at one time certain types of databases were accessed via macros in assembly (now called HLASM, High Level Assembler).

If float and double are not accurate, how do banks perform accurate calculations involving money?

Currently learning C++, and this has just occurred to me. I'm just curious about this, as I'm about to develop a simple bank program. I'll be using double for calculating dollars/interest rates etc., but there are some tiny differences between computer calculations and human calculations.
I imagine that those extra fractions of a penny in the real world can make all the difference!
In many cases, financial calculations are done using fixed-point arithmetic instead of floating point.
For example, the .NET Decimal type, or the VB6 Currency type. These are basically just integer types, where everyone has agreed that the units are some fraction of a cent, like $.0001.
And yes, some rounding has to occur, but it is done very systematically. Usually the rounding rules are somewhere deep in the fine print of your contract (the interest rate is x%, compounded every T, rounded up to the nearest penny, but not less than $y every statement period).
The range of an 8-byte long long is -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Do everything as thousandths of a cent/penny and you can still handle numbers up to the trillions of dollars/pounds/whatever.
It depends on the application. All calculations with decimals will require rounding when you output them as dollars and cents (or whatever the local currency is): the base price of an article may only have two digits after the decimal, but when you add on sales tax or VAT, there will be more, and if you need to calculate interest on an investment, there will be more.
Generally, using double gives the most accurate results, however... if your software is being used for some sort of bookkeeping required by law (e.g. for tax purposes), you may be required to follow standard accepted rounding practices, and these are based on decimal arithmetic, not binary, hexadecimal or octal (which are the usual bases for floating point; binary is universal on everything but mainframes). In such cases, you'll need to use some sort of Decimal class which ensures the correct rounding. For other uses (e.g. risk analysis), double is fine.
Just because a number is not an integer does not mean that it cannot be calculated exactly. Consider that a dollars-and-cents value is an integer if one counts the number of pennies (cents), so it is a simple matter for a fixed-point library using two decimals of precision to simply multiply each number by 100, perform the calculation as an integer, and then divide by 100 again.

Best way to store currency values in C++

I know that a float isn't appropriate to store currency values because of rounding errors. Is there a standard way to represent money in C++?
I've looked in the boost library and found nothing about it. In java, it seems that BigInteger is the way but I couldn't find an equivalent in C++. I could write my own money class, but prefer not to do so if there is something tested.
Don't store it just as cents, since you'll accumulate errors when multiplying for taxes and interest pretty quickly. At the very least, keep an extra two significant digits: $12.45 would be stored as 124,500. If you keep it in a signed 32 bit integer, you'll have $200,000 to work with (positive or negative). If you need bigger numbers or more precision, a signed 64 bit integer will likely give you all the space you'll need for a long time.
It might be of some help to wrap this value in a class, to give you one place for creating these values, doing arithmetic on them, and formatting them for display. This would also give you a central place to carry around which currency is being stored (USD, CAD, EUR, etc.).
Having dealt with this in actual financial systems, I can tell you you probably want to use a number with at least 6 decimal places of precision (assuming USD). Hopefully since you're talking about currency values you won't go way out of whack here. There are proposals for adding decimal types to C++, but I don't know of any that are actually out there yet.
The best native C++ type to use here would be long double.
The problem with other approaches that simply use an int is that you have to store more than just your cents. Often financial transactions are multiplied by non-integer values, and that's going to get you in trouble, since $100.25 translated to 10025 * 0.000123523 (e.g. an APR) is going to cause problems. You're going to end up in floating-point land eventually, and the conversions are going to cost you a lot.
Now the problem doesn't happen in most simple situations. I'll give you a precise example:
Given several thousand currency values, if you multiply each by a percentage and then add them up, you will end up with a different number than if you had multiplied the total by that percentage, unless you keep enough decimal places. This might work out in some situations, but you'll often be several pennies off pretty quickly. In my general experience, keeping a precision of up to 6 decimal places is enough (while making sure the remaining precision is available for the whole-number part).
Also understand that it doesn't matter what type you store it with if you do math in a less precise fashion. If your math is being done in single precision land, then it doesn't matter if you're storing it in double precision. Your precision will be correct to the least precise calculation.
Now that said, if you do no math other than simple addition or subtraction and then store the number then you'll be fine, but as soon as anything more complex than that shows up, you're going to be in trouble.
Look into the relatively recent Intel® Decimal Floating-Point Math Library. It's specifically for finance applications and implements some of the new standards for decimal floating-point arithmetic (IEEE 754-2008, formerly known as IEEE 754r).
The biggest issue is rounding itself!
19% of 42,50 € = 8,075 €. Due to the German rules for rounding, this is 8,08 €. The problem is that (at least on my machine) 8,075 can't be represented exactly as a double. Even if I change the variable in the debugger to this value, I end up with 8,0749999....
And this is where my rounding function (and any other based on floating-point logic that I can think of) fails, since it produces 8,07 €: the significant digit is 4, so the value is rounded down. That is plain wrong, and you can't do anything about it unless you avoid floating-point values wherever possible.
It works great if you represent 42,50 € as Integer 42500000.
42500000 * 19 / 100 = 8075000. Now you can apply the rounding rule above 8080000. This can easily be transformed to a currency value for display reasons. 8,08 €.
But I would always wrap that up in a class.
I would suggest that you keep a variable for the number of cents instead of dollars. That should remove the rounding errors. Displaying it in the standard dollars/cents format should be a view concern.
You can try decimal data type:
https://github.com/vpiotr/decimal_for_cpp
Designed to store money-oriented values (money balance, currency rate, interest rate), user-defined precision. Up to 19 digits.
It's a header-only solution for C++.
You say you've looked in the Boost library and found nothing there.
But there you have multiprecision/cpp_dec_float which says:
The radix of this type is 10. As a result it can behave subtly differently from base-2 types.
So if you're already using Boost, this should be good for currency values and operations, since it's a base-10 number type with 50 or 100 digits of precision (a lot).
See:
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_dec_float.hpp>

int main()
{
    float bogus = 1.0 / 3.0;
    boost::multiprecision::cpp_dec_float_50 correct = 1.0 / 3.0;
    std::cout << std::setprecision(16) << std::fixed
              << "float: " << bogus << std::endl
              << "cpp_dec_float: " << correct << std::endl;
    return 0;
}
Output:
float: 0.3333333432674408
cpp_dec_float: 0.3333333333333333
*I'm not saying float (base 2) is bad and decimal (base 10) is good; they just behave differently.
**I know this is an old post, but boost::multiprecision was introduced in 2013, so I wanted to mention it here.
Know YOUR range of data.
A float is only good for 6 to 7 digits of precision, so that means a max of about +-9999.99 without rounding. It is useless for most financial applications.
A double is good for 13 digits, thus: +-99,999,999,999.99. Still be careful when using large numbers. Recognize that subtracting two similar results strips away much of the precision (see a book on Numerical Analysis for potential problems).
32 bit integer is good to +-2Billion (scaling to pennies will drop 2 decimal places)
64 bit integer will handle any money, but again, be careful when converting, and multiplying by various rates in your app that might be floats/doubles.
The key is to understand your problem domain. What legal requirements do you have for accuracy? How will you display the values? How often will conversion take place? Do you need internationalization? Make sure you can answer these questions before you make your decision.
Whatever type you do decide on, I would recommend wrapping it up in a "typedef" so you can change it at a different time.
It depends on your business requirements with regards to rounding. The safest way is to store an integer with the required precision and know when/how to apply rounding.
Store the dollar and cent amount as two separate integers.
Integers, always--store it as cents (or whatever your lowest currency is where you are programming for.) The problem is that no matter what you do with floating point someday you'll find a situation where the calculation will differ if you do it in floating point. Rounding at the last minute is not the answer as real currency calculations are rounded as they go.
You can't avoid the problem by changing the order of operations, either--this fails when you have a percentage that leaves you without a proper binary representation. Accountants will freak if you are off by a single penny.
I would recommend using a long int to store the currency in the smallest denomination (for example, American money would be cents), if a decimal based currency is being used.
Very important: be sure to name all of your currency values according to what they actually contain. (Example: account_balance_cents) This will avoid a lot of problems down the line.
(Another example where this comes up is percentages. Never name a value "XXX_percent" when it actually contains a ratio not multiplied by a hundred.)
The solution is simple: store to whatever accuracy is required as a shifted integer. But when reading values in, convert to a double, so that calculations suffer fewer rounding errors. Then, when storing to the database, multiply back to whatever integer accuracy is needed, but before truncating to an integer add +/- 1/10 to compensate for truncation errors, or +/- 51/100 to round.
Easy peasy.
The GMP library has "bignum" implementations that you can use for the arbitrary-sized integer calculations needed for dealing with money. See the documentation for mpz_class (warning: the documentation is horribly incomplete, though the full range of arithmetic operators is provided).
One option is to store $10.01 as 1001, and do all calculations in pennies, dividing by 100D when you display the values.
Or, use floats, and only round at the last possible moment.
Often the problems can be mitigated by changing order of operations.
Instead of value * .10 for a 10% discount, use (value * 10) / 100, which will help significantly. (Remember that .1 is a repeating fraction in binary.)
I'd use signed long for 32-bit and signed long long for 64-bit. This will give you maximum storage capacity for the underlying quantity itself. I would then develop two custom manipulators. One that converts that quantity based on exchange rates, and one that formats that quantity into your currency of choice. You can develop more manipulators for various financial operations / and rules.
This is a very old post, but I figured I'd update it a little since it's been a while and things have changed. I have posted some code below which represents the best way I have found to represent money, using the long long integer data type in C.
#include <stdio.h>

int main(void)
{
    // make BIG money from cents and dollars
    signed long long int cents = 0;
    signed long long int dollars = 0;

    // get the amount of cents
    printf("Enter the amount of cents: ");
    scanf("%lld", &cents);

    // get the amount of dollars
    printf("Enter the amount of dollars: ");
    scanf("%lld", &dollars);

    // carry whole dollars out of the cents
    long long int totalDollars = dollars + (cents / 100);
    long long int totalCents = cents % 100;

    // print the amount of dollars and cents
    printf("The total amount is: %lld dollars and %lld cents\n",
           totalDollars, totalCents);
    return 0;
}
As other answers have pointed out, you should either:
Use an integer type to store whole units of your currency (ex: $1) and fractional units (ex: 10 cents) separately.
Use a base 10 decimal data type that can exactly represent real decimal numbers such as 0.1. This is important since financial calculations are based on a base 10 number system.
The choice will depend on the problem you are trying to solve. For example, if you only need to add or subtract currency values then the integer approach might be sensible. If you are building a more complex system dealing with financial securities then the decimal data type approach may be more appropriate.
As another answer points out, Boost provides a base 10 floating point number type that serves as a drop-in replacement for the native C++ floating-point types, but with much greater precision. This might be convenient to use if your project already uses other Boost libraries.
The following example shows how to properly use this decimal type:
#include <iostream>
#include <iomanip>
#include <limits>
#include <boost/multiprecision/cpp_dec_float.hpp>
using namespace std;
using namespace boost::multiprecision;

int main() {
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << std::endl;

    double d1 = 1.0 / 10.0;
    cpp_dec_float_50 dec_incorrect = 1.0 / 10.0; // Incorrect! We are constructing our decimal data type from the binary representation of the double value of 1.0 / 10.0
    cpp_dec_float_50 dec_correct(cpp_dec_float_50(1.0) / 10.0);
    cpp_dec_float_50 dec_correct2("0.1"); // Constructing from a decimal digit string.

    std::cout << d1 << std::endl;            // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_incorrect << std::endl; // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_correct << std::endl;   // 0.1
    std::cout << dec_correct2 << std::endl;  // 0.1
    return 0;
}
Notice that if we define a decimal data type but construct it from the binary representation of a double, we do not obtain the precision we expect. In the example above, both the double d1 and the cpp_dec_float_50 dec_incorrect print the same value because of this. Both are "correct" to about 17 decimal places, which is what we would expect of a double on a 64-bit system.
Finally, note that the Boost multiprecision library can be significantly slower than the fastest high-precision implementations available. This becomes evident at high digit counts (about 50+); at low digit counts the Boost implementation can be comparable to other, faster implementations.
Sources:
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/floatbuiltinctor.html
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/caveats.html
Our financial institution uses "double". Since we're a "fixed income" shop, we have lots of nasty complicated algorithms that use double anyway. The trick is to be sure that your end-user presentation does not overstep the precision of double. For example, when we have a list of trades with a total in trillions of dollars, we have to be sure that we don't print garbage due to rounding issues.
Go ahead and write your own money (http://junit.sourceforge.net/doc/testinfected/testing.htm) or currency class (depending on what you need). And test it.