Why are the digits after the decimal point all zero? - C++

I want to perform some calculations and I want the result correct up to some decimal places, say 12.
So I wrote a sample:
#include <cstdio>
#define PI 3.1415926535897932384626433832795028841971693993751
int main()
{
    double d, k, h;
    k = 999999 / (2 * PI);
    h = 999999;
    d = PI * k * k * h;
    printf("%.12f\n", d);
}
But it gives the output:
79577232813771760.000000000000
I even used setprecision(), but I get the same answer, just in exponential form.
cout<<setprecision(12)<<d<<endl;
prints
7.95772328138e+16
I also used long double, but in vain.
Now is there any way other than storing the integer part and the fractional part separately in long long int types?
If so, what can be done to get the answer precisely?

A double has only about 16 decimal digits of precision. Everything after the decimal point would be nonsense. (In fact, the last digit or two left of the point may not agree with an infinite-precision calculation.)
The precision of long double is not standardized, AFAIK. It may be that on your system it is the same as double, or no more precise. That would slightly surprise me, but it doesn't violate anything.
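If you want to see what your own system actually provides, here is a minimal sketch (not from the original answer) that queries the guaranteed decimal digits of each type; typical output on x86 is 6, 15 and 18, but it varies:
#include <iostream>
#include <limits>

int main()
{
    // Decimal digits each floating-point type can hold without loss on this system.
    std::cout << "float:       " << std::numeric_limits<float>::digits10 << " digits\n";
    std::cout << "double:      " << std::numeric_limits<double>::digits10 << " digits\n";
    std::cout << "long double: " << std::numeric_limits<long double>::digits10 << " digits\n";
    return 0;
}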

You need to read up on double-precision concepts again, more carefully.
A double gets its increased precision by using 64 bits.
The digits before the decimal point take priority over those after it.
So when you have a large integer part, the lower-order digits get truncated -- this is what the other answers here describe as rounding off.
Update:
To increase precision, you'll need to use some library or change your language.
Check this other question: Best coding language for dealing with large numbers (50000+ digits)
Yet, I'll ask you to re-check your intent once more.
Do you really need 12 decimal places for numbers that have really high values (over 10 digits in the integer part, like in your example)? Maybe you won't really have large integer parts (in which case such code should work fine). But if you are tracking a value like 10000000000.123456789, I am really interested in exactly which application you are working on (astronomy?). If the integer part of your values is some way under 10000, you should be fine here.
Update2:
If you must demonstrate that a specific formula works accurately within constrained error limits, the way to go is to arrange the processing of your formula so that the least error is introduced.
For example, if you want to compute (x * y) / z, it would be prudent to try something like max(x,y)/z * min(x,y) rather than the original form, which may overflow after (x * y), losing precision if the product does not fit in the roughly 16 decimal digits of a double.
If you had just 2-digit precision:

                2-digit     regular precision
42 * 7          290         294
(42 * 7)/2      290/2       294/2
Result ==>      145         147

But ==>  42/2 = 21
         21 * 7 = 147
This is probably the intent of your contest.
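A minimal sketch of that idea, with made-up values, showing how dividing first avoids an intermediate overflow that the naive ordering hits:
#include <algorithm>
#include <cstdio>

int main()
{
    // Made-up values: the exact result of (x * y) / z is 1e300.
    double x = 1e300, y = 1e300, z = 1e300;

    double naive     = (x * y) / z;                          // x * y overflows to infinity first
    double reordered = std::max(x, y) / z * std::min(x, y);  // divide before multiplying

    std::printf("naive:     %g\n", naive);      // inf
    std::printf("reordered: %g\n", reordered);  // 1e+300
    return 0;
}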

The double-precision binary format used by most computers can only hold about 16 digits; after that you'll get rounding. See http://en.wikipedia.org/wiki/Double-precision_floating-point_format

Floating point values hold a limited number of digits. Just because your "PI" value has six times as many digits as a double will support doesn't alter the way the hardware works.
A typical (IEEE 754) double gives approximately 15-16 decimal digits. Whether that's 0.12345678901235, 1234567.8901235, 12345678901235 or 12345678901235000000000, or some other variation.
In other words, yes, if you carried out your calculation EXACTLY, you'd get lots of decimal places, because pi never ends. On a computer, you get about 15-16 digits no matter what input values you use - all that changes is where in that sequence the decimal point sits. To get more, you need "big number" support, such as the GNU Multiple Precision (GMP) library.

You're looking for std::fixed. That tells the ostream not to use exponential form.
cout << setprecision(12) << std::fixed << d << endl;
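For completeness, a small sketch combining both manipulators with the value from the question:
#include <iostream>
#include <iomanip>

int main()
{
    double d = 79577232813771760.0;  // the value computed in the question

    std::cout << std::setprecision(12) << d << std::endl;                // 7.95772328138e+16
    std::cout << std::setprecision(12) << std::fixed << d << std::endl;  // 79577232813771760.000000000000
    return 0;
}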

Related

C++ set precision of a double (not for output)

Alright, so I am trying to truncate the actual value of a double to a given number of digits of precision (total digits before and after the decimal, or with no decimal at all), not just for output, and not by rounding. The only built-in functions I found either truncate all decimals or round to a given decimal precision.
Other solutions I have found online only work when you know the number of digits before the decimal, or the entire number.
This solution should be dynamic enough to handle any number. I whipped up some code that does the trick below, but I can't shake the feeling there is a better way to do it. Does anyone know of something more elegant? Maybe a built-in function that I don't know about?
I should mention the reason for this. There are 3 different sources of observed values. All 3 of these sources agree to some level of precision. For example, below they all agree to within 10 digits.
4659.96751751236
4659.96751721355
4659.96751764253
However I need to pull from only one of the sources. So the best approach is to use only up to the precision all 3 sources agree on. So it's not like I am manipulating numbers and then need to truncate precision; they are observed values. The desired result is
4659.967517
#include <algorithm>   // std::min
#include <cstdlib>     // atof
#include <iomanip>     // setprecision
#include <sstream>     // ostringstream
#include <string>
using namespace std;

double truncate(double num, int digits)
{
    // check valid digits
    if (digits < 0)
        return num;
    // create string stream for full precision (string conversion rounds at 10)
    ostringstream numO;
    // read in number to stream, at 17+ precision things get wonky
    numO << setprecision(16) << num;
    // convert to string, for character manipulation
    string numS = numO.str();
    // check if we have a decimal
    size_t decimalIndex = numS.find('.');
    // if we have a decimal, erase it for now, logging its position
    if (decimalIndex != string::npos)
        numS.erase(decimalIndex, 1);
    // make sure our target precision is not higher than current precision
    digits = min((int)numS.size(), digits);
    // replace unwanted precision with zeroes
    numS.replace(digits, numS.size() - digits, numS.size() - digits, '0');
    // if we had a decimal, add it back
    if (decimalIndex != string::npos)
        numS.insert(numS.begin() + decimalIndex, '.');
    return atof(numS.c_str());
}
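For what it's worth, a quick usage sketch of the function above against one of the observed values (assuming the three sources agree on 10 digits):
#include <iostream>
#include <iomanip>

// truncate() as defined above

int main()
{
    double observed = 4659.96751751236;
    double agreed   = truncate(observed, 10);  // keep only the 10 digits all sources agree on

    std::cout << std::setprecision(12) << agreed << std::endl;  // 4659.967517
    return 0;
}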
This will never work since a double is not a decimal type. Truncating what you think are a certain number of decimal digits will merely introduce a new set of joke digits at the end. It could even be pernicious: e.g. 0.125 is an exact double, but neither 0.12 nor 0.13 are.
If you want to work in decimals, then use a decimal type, or a large integral type with a convention that part of it holds a decimal portion.
I disagree with "So the best approach, is to only use up to the precision all 3 sources agree on."
If these are different measurements of a physical quantity, or represent rounding error due to different ways of calculating from measurements, you will get a better estimate of the true value by taking their mean than by forcing the digits they disagree about to any arbitrary value, including zero.
The ultimate justification for taking the mean is the Central Limit Theorem, which suggests treating your measurements as a sample from a normal distribution. If so, the sample mean is the best available estimate of the population mean. Your truncation process will tend to underestimate the actual value.
It is generally better to keep every scrap of information you have through the calculations, and then remember you have limited precision when outputting results.
As well as giving a better estimate, taking the mean of three numbers is an extremely simple calculation.
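A minimal sketch of that suggestion, using the three observed values from the question:
#include <iostream>
#include <iomanip>

int main()
{
    double a = 4659.96751751236;
    double b = 4659.96751721355;
    double c = 4659.96751764253;

    // Keep full precision internally; limit the precision only when printing.
    double mean = (a + b + c) / 3.0;
    std::cout << std::fixed << std::setprecision(8) << mean << std::endl;  // about 4659.96751746
    return 0;
}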

What data type, scheme, and how many bits should be used to store a FOREX price? [duplicate]

I know that a float isn't appropriate to store currency values because of rounding errors. Is there a standard way to represent money in C++?
I've looked in the boost library and found nothing about it. In java, it seems that BigInteger is the way but I couldn't find an equivalent in C++. I could write my own money class, but prefer not to do so if there is something tested.
Don't store it just as cents, since you'll accumulate errors when multiplying for taxes and interest pretty quickly. At the very least, keep an extra two significant digits: $12.45 would be stored as 124,500. If you keep it in a signed 32-bit integer, you'll have about $200,000 to work with (positive or negative). If you need bigger numbers or more precision, a signed 64-bit integer will likely give you all the space you'll need for a long time.
It might be of some help to wrap this value in a class, to give you one place for creating these values, doing arithmetic on them, and formatting them for display. This would also give you a central place to carry around which currency is being stored (USD, CAD, EURO, etc.).
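A minimal sketch of such a wrapper, assuming a signed 64-bit count of 1/10000ths of a unit (the two extra digits suggested above); the class name and interface here are illustrative, not a standard API:
#include <cstdint>
#include <iostream>

// Illustrative wrapper: stores an amount as a signed 64-bit count of 1/10000 currency units.
class Money {
public:
    explicit Money(std::int64_t ten_thousandths = 0) : value_(ten_thousandths) {}

    static Money fromUnitsAndCents(std::int64_t units, std::int64_t cents) {
        return Money(units * 10000 + cents * 100);
    }

    Money operator+(Money other) const { return Money(value_ + other.value_); }
    Money operator-(Money other) const { return Money(value_ - other.value_); }

    // Round to whole cents (half away from zero) only when presenting the value.
    std::int64_t toCents() const {
        return (value_ >= 0 ? value_ + 50 : value_ - 50) / 100;
    }

private:
    std::int64_t value_;  // amount in 1/10000 of the currency unit
};

int main()
{
    Money price = Money::fromUnitsAndCents(12, 45);  // $12.45 stored as 124500
    Money total = price + price;
    std::cout << total.toCents() << " cents\n";      // 2490
    return 0;
}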
Having dealt with this in actual financial systems, I can tell you you probably want to use a number with at least 6 decimal places of precision (assuming USD). Hopefully since you're talking about currency values you won't go way out of whack here. There are proposals for adding decimal types to C++, but I don't know of any that are actually out there yet.
The best native C++ type to use here would be long double.
The problem with other approaches that simply use an int is that you have to store more than just your cents. Often financial transactions are multiplied by non-integer values, and that's going to get you in trouble, since $100.25 translated to 10025 and multiplied by 0.000123523 (e.g. an APR) is going to cause problems. You're going to eventually end up in floating-point land, and the conversions are going to cost you a lot.
Now the problem doesn't happen in most simple situations. I'll give you a precise example:
Given several thousand currency values, if you multiply each by a percentage and then add them up, you will end up with a different number than if you had multiplied the total by that percentage, if you do not keep enough decimal places. Now this might work out in some situations, but you'll often be several pennies off pretty quickly. In my general experience, keeping a precision of up to 6 decimal places is enough (while making sure that the remaining precision is available for the whole-number part).
Also understand that it doesn't matter what type you store it with if you do math in a less precise fashion. If your math is being done in single precision land, then it doesn't matter if you're storing it in double precision. Your precision will be correct to the least precise calculation.
Now that said, if you do no math other than simple addition or subtraction and then store the number then you'll be fine, but as soon as anything more complex than that shows up, you're going to be in trouble.
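A small sketch of the percentage effect described above, with made-up amounts and a made-up tax rate, comparing per-line rounding against rounding the total:
#include <cmath>
#include <cstdio>

int main()
{
    // Three line items of $9.99, stored as integer cents, with a 7.25% tax rate (made up).
    long long items[] = {999, 999, 999};
    double rate = 0.0725;

    long long per_line_tax = 0, total_cents = 0;
    for (long long cents : items) {
        per_line_tax += std::llround(cents * rate);  // round each line to whole cents
        total_cents  += cents;
    }
    long long total_tax = std::llround(total_cents * rate);  // round only the total

    std::printf("tax rounded per line: %lld cents\n", per_line_tax);  // 216
    std::printf("tax rounded on total: %lld cents\n", total_tax);     // 217
    return 0;
}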
Look into the relatively recent Intel® Decimal Floating-Point Math Library. It's specifically for finance applications and implements the decimal floating-point arithmetic from the revised IEEE 754 standard (IEEE 754-2008, formerly known as 754r).
The biggest issue is rounding itself!
19% of 42,50 € = 8,075 €. Due to the German rules for rounding, this is 8,08 €. The problem is that (at least on my machine) 8,075 can't be represented exactly as a double. Even if I change the variable in the debugger to this value, I end up with 8,0749999....
And this is where my rounding function (and any other based on floating-point logic that I can think of) fails, since it produces 8,07 €. The significant digit is 4, so the value is rounded down. That is plain wrong, and you can't do anything about it unless you avoid using floating-point values wherever possible.
It works great if you represent 42,50 € as the integer 42500000.
42500000 * 19 / 100 = 8075000. Now you can apply the rounding rule above, giving 8080000. This can easily be transformed back into a currency value for display: 8,08 €.
But I would always wrap that up in a class.
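A quick sketch of that integer approach (amounts scaled by one million, rounding half up to whole cents):
#include <cstdint>
#include <iostream>

int main()
{
    // 42,50 EUR stored as an integer scaled by 1,000,000 (six decimal places).
    std::int64_t amount = 42500000;

    // 19% VAT in pure integer arithmetic: 8,075000 EUR.
    std::int64_t vat = amount * 19 / 100;                      // 8075000

    // Round half up to whole cents (one cent = 10000 at this scale): 8,08 EUR.
    std::int64_t vat_rounded = (vat + 5000) / 10000 * 10000;   // 8080000

    std::cout << vat << " -> " << vat_rounded << std::endl;    // 8075000 -> 8080000
    return 0;
}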
I would suggest that you keep a variable for the number of cents instead of dollars. That should remove the rounding errors. Displaying it in the standard dollars/cents format should be a view concern.
You can try decimal data type:
https://github.com/vpiotr/decimal_for_cpp
Designed to store money-oriented values (money balance, currency rate, interest rate), user-defined precision. Up to 19 digits.
It's a header-only solution for C++.
You say you've looked in the Boost library and found nothing about it there.
But there you have multiprecision/cpp_dec_float, which says:
The radix of this type is 10. As a result it can behave subtly differently from base-2 types.
So if you're already using Boost, this should be good for currency values and operations, as it's a base-10 number with 50 or 100 digits of precision (a lot).
See:
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_dec_float.hpp>

int main()
{
    float bogus = 1.0 / 3.0;
    boost::multiprecision::cpp_dec_float_50 correct = 1.0 / 3.0;

    std::cout << std::setprecision(16) << std::fixed
              << "float: " << bogus << std::endl
              << "cpp_dec_float: " << correct << std::endl;
    return 0;
}
Output:
float: 0.3333333432674408
cpp_dec_float: 0.3333333333333333
* I'm not saying float (base 2) is bad and decimal (base 10) is good. They just behave differently...
** I know this is an old post and boost::multiprecision was only introduced in 2013, so I wanted to note it here.
Know YOUR range of data.
A float is only good for 6 to 7 digits of precision, so that means a max of about ±9999.99 without rounding. It is useless for most financial applications.
A double is good for about 13 digits, thus ±99,999,999,999.99. Still, be careful when using large numbers; recognize that subtracting two similar values strips away much of the precision (see a book on numerical analysis for the potential problems).
A 32-bit integer is good to about ±2 billion (scaling to pennies drops 2 decimal places of that range).
A 64-bit integer will handle any amount of money, but again, be careful when converting and multiplying by various rates in your app that might be floats/doubles.
The key is to understand your problem domain. What legal requirements do you have for accuracy? How will you display the values? How often will conversion take place? Do you need internationalization? Make sure you can answer these questions before you make your decision.
Whatever type you decide on, I would recommend wrapping it up in a typedef so you can change it at a later time.
It depends on your business requirements with regards to rounding. The safest way is to store an integer with the required precision and know when/how to apply rounding.
Store the dollar and cent amount as two separate integers.
Integers, always -- store it as cents (or whatever the lowest denomination is of the currency you are programming for). The problem is that no matter what you do with floating point, someday you'll find a situation where the calculation differs if you do it in floating point. Rounding at the last minute is not the answer, as real currency calculations are rounded as they go.
You can't avoid the problem by changing the order of operations, either -- this fails when you have a percentage that leaves you without a proper binary representation. Accountants will freak out if you are off by a single penny.
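A tiny illustration of that drift, adding "ten cents" a thousand times as a double versus as integer cents:
#include <cstdio>

int main()
{
    double as_double = 0.0;
    long long as_cents = 0;

    for (int i = 0; i < 1000; ++i) {
        as_double += 0.10;  // 0.10 has no exact binary representation
        as_cents  += 10;
    }

    std::printf("double total:    %.17g\n", as_double);       // not exactly 100
    std::printf("error vs 100.00: %g\n", as_double - 100.0);  // small but nonzero
    std::printf("integer cents:   %lld (exactly 100.00)\n", as_cents);
    return 0;
}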
I would recommend using a long int to store the currency in the smallest denomination (for example, American money would be cents), if a decimal based currency is being used.
Very important: be sure to name all of your currency values according to what they actually contain. (Example: account_balance_cents) This will avoid a lot of problems down the line.
(Another example where this comes up is percentages. Never name a value "XXX_percent" when it actually contains a ratio not multiplied by a hundred.)
The solution is simple: store to whatever accuracy is required as a shifted integer. When reading it in, convert to a double, so that calculations suffer fewer rounding errors. Then, when storing it back to the database, multiply to whatever integer accuracy is needed, but before truncating to an integer add ±1/10 to compensate for truncation errors, or ±51/100 to round.
Easy peasy.
The GMP library has "bignum" implementations that you can use for the arbitrary-sized integer calculations needed for dealing with money. See the documentation for mpz_class (warning: the documentation is horribly incomplete, though the full range of arithmetic operators is provided).
One option is to store $10.01 as 1001, and do all calculations in pennies, dividing by 100.0 when you display the values.
Or, use floats and only round at the last possible moment.
Often the problems can be mitigated by changing the order of operations.
Instead of value * .10 for a 10% discount, use (value * 10) / 100, which will help significantly. (Remember that 0.1 is a repeating fraction in binary.)
I'd use signed long for 32-bit and signed long long for 64-bit. This will give you maximum storage capacity for the underlying quantity itself. I would then develop two custom manipulators: one that converts that quantity based on exchange rates, and one that formats that quantity into your currency of choice. You can develop more manipulators for various financial operations and rules.
This is a very old post, but I figured I'd update it a little since it's been a while and things have changed. I have posted some code below which represents the best way I have been able to represent money, using the long long integer data type in the C programming language.
#include <stdio.h>

int main()
{
    // make BIG money from cents and dollars
    signed long long int cents = 0;
    signed long long int dollars = 0;

    // get the amount of cents
    printf("Enter the amount of cents: ");
    scanf("%lld", &cents);

    // get the amount of dollars
    printf("Enter the amount of dollars: ");
    scanf("%lld", &dollars);

    // calculate the amount of dollars
    long long int totalDollars = dollars + (cents / 100);
    // calculate the amount of cents
    long long int totalCents = cents % 100;

    // print the amount of dollars and cents
    printf("The total amount is: %lld dollars and %lld cents\n", totalDollars, totalCents);
}
As other answers have pointed out, you should either:
Use an integer type to store whole units of your currency (ex: $1) and fractional units (ex: 10 cents) separately.
Use a base 10 decimal data type that can exactly represent real decimal numbers such as 0.1. This is important since financial calculations are based on a base 10 number system.
The choice will depend on the problem you are trying to solve. For example, if you only need to add or subtract currency values then the integer approach might be sensible. If you are building a more complex system dealing with financial securities then the decimal data type approach may be more appropriate.
As another answer points out, Boost provides a base 10 floating point number type that serves as a drop-in replacement for the native C++ floating-point types, but with much greater precision. This might be convenient to use if your project already uses other Boost libraries.
The following example shows how to properly use this decimal type:
#include <iostream>
#include <iomanip>   // std::setprecision
#include <limits>    // std::numeric_limits
#include <boost/multiprecision/cpp_dec_float.hpp>

using namespace std;
using namespace boost::multiprecision;

int main() {
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << std::endl;

    double d1 = 1.0 / 10.0;
    cpp_dec_float_50 dec_incorrect = 1.0 / 10.0; // Incorrect! We are constructing our decimal data type from the binary representation of the double value of 1.0 / 10.0
    cpp_dec_float_50 dec_correct(cpp_dec_float_50(1.0) / 10.0);
    cpp_dec_float_50 dec_correct2("0.1"); // Constructing from a decimal digit string.

    std::cout << d1 << std::endl;            // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_incorrect << std::endl; // 0.1000000000000000055511151231257827021181583404541015625
    std::cout << dec_correct << std::endl;   // 0.1
    std::cout << dec_correct2 << std::endl;  // 0.1
    return 0;
}
Notice how even if we define a decimal data type but construct it from the binary representation of a double, we will not obtain the precision we expect. In the example above, both the double d1 and the cpp_dec_float_50 dec_incorrect print the same value because of this. Notice how they are both "correct" to about 17 significant digits, which is what we would expect of a double on a 64-bit system.
Finally, note that the Boost multiprecision library can be significantly slower than the fastest high-precision implementations available. This becomes evident at high digit counts (about 50+); at low digit counts the Boost implementation can be comparable to other, faster implementations.
Sources:
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/floatbuiltinctor.html
https://www.boost.org/doc/libs/1_80_0/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/fp_eg/caveats.html
Our financial institution uses double. Since we're a "fixed income" shop, we have lots of nasty complicated algorithms that use double anyway. The trick is to be sure that your end-user presentation does not overstep the precision of double. For example, when we have a list of trades with a total in trillions of dollars, we have to be sure that we don't print garbage due to rounding issues.
Go ahead and write your own money (http://junit.sourceforge.net/doc/testinfected/testing.htm) or currency class (depending on what you need), and test it.

Float subtraction returns incorrect value

So I have a calculation in which two floats that are components of vector objects are subtracted, and the subtraction seems to return an incorrect result.
The code I'm attempting to use is:
cout << xresult.x << " " << vec1.x << endl;
float xpart1 = xresult.x - vec1.x;
cout << xpart1 << endl;
Where running this code will return
16 17
-1.00002
As you can see, printing out the values of xresult.x and vec1.x tells you that they are 16 and 17 respectively, yet the subtraction operation seems to introduce an error.
Any ideas why?
As you can see, printing out the values of xresult.x and vec1.x tells you that they are 16 and 17 respectively, yet the subtraction operation seems to introduce an error.
No, it doesn't tell us that at all. It tells us that the input values are approximately 16 and 17. The imprecision might, generally, come from two sources: the nature of floating-point representation, and the precision with which the numbers are printed.
Output streams print floating-point values to a certain level of precision. From a description of the std::setprecision function:
On the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total, counting both those before and those after the decimal point.
So, the values of xresult.x and vec1.x are 16 and 17 with 5 decimal digits of accuracy. In fact, one is slightly less than 16 and the other slightly more than 17. (Note that this has nothing to do with imprecise floating-point representation. The declarations float f = 16 and float g = 17 both assign exact values. A float can hold the exact integers 16 and 17 (although there are infinitely many other integers a float cannot hold.)) When we subtract slightly-more-than-17 from slightly-less-than-16, we get an answer of slightly-larger-than-negative-1.
To prove to yourself that this is the case, do one or both of these experiments. First, in your own code, add "cout << std::setprecision(10)" before printing those values. Second, run this test program
#include <iostream>
#include <iomanip>

int main() {
    for (int i = 0; i < 10; i++) {
        std::cout << std::setprecision(i) <<
            15.99999f << " - " << 17.00001f << " = " <<
            15.99999f - 17.00001f << "\n";
    }
}
Notice how the 7th line of output matches your case:
16 - 17 = -1.00002
P.S. All of the other advice about imprecise floating-point representation is valid; it just doesn't apply to your particular circumstance. You really should read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
This is called floating-point arithmetic. It is why numerical code is so "tricky" and filled with pitfalls. That result is expected. What's more, exactly what you see, and to what extent, can depend on the processor you're working with.
I'd like to add that each of the floating-point types -- float, double, long double -- has a different precision. That is, one may be able to represent the value of a floating-point number more accurately than another, which is evident in how these numbers are held in memory.
When you look at a float, it contains fewer significant digits than, say, a double or long double. Hence, when you perform numerics on them, you must expect that floats will suffer from larger rounding errors. When dealing with financial data, developers often use some semblance of a "decimal" type. These are much better designed to handle currency-type manipulations with better accuracy in the significant digits. It comes with a price, however.
Take a look at the IEEE 754-2008 specification.
It's because of how floating points work. http://en.wikipedia.org/wiki/Floating_point
Because you can't accurately represent all numbers using a float. Wikipedia has a good description of it: http://en.wikipedia.org/wiki/Floating_point
How much do you know about the way numbers are stored in a computer?
Also, what are xresult.x and vec1.x -- as in, are they ints or floats?
I'd be surprised if the error occurred with them all being floats, but you are converting between types, and binary is not the same as decimal.
If there was a small decimal portion on the 16 and 17 that wasn't printed out, then when the values are aligned to the same exponent for the subtraction, that could introduce extra error, especially for 32-bit types like float.
When you use floating point values, you need to be prepared within your application to deal with the fact that you won't get 100% accurate decimal results. Your results will be as accurate as possible in the internal binary representation. Addition and subtraction especially can introduce a significant amount of relative error for operands that are orders of magnitude apart and for results that should be close to 0.
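A tiny sketch of that last point, with made-up float values whose exact difference should be 1:
#include <iostream>

int main()
{
    float big    = 100000000.0f;  // 1e8: adjacent floats here are 8 apart
    float bigger = big + 1.0f;    // the +1 is smaller than half that spacing, so it is lost

    std::cout << (bigger - big) << std::endl;  // prints 0, not 1
    return 0;
}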
People keep talking about how computer representations cannot perfectly represent real numbers, and how computer operations on floating point numbers cannot be perfectly precise.
This is true, but the same is true of the real world.
Real measurements are approximations to some degree of precision. Operations on real measurements result in approximations to some degree of precision.
If I count 17 bowling balls, I have 17 bowling balls. If I remove 16 bowling balls, I have one bowling ball.
But if I have a stick that is 17 inches long, what I really have is a stick that is about 17 inches long. If I cut off 16 inches, what I'm really cutting off is about 16 inches, and what I'm left with is about 1 inch.
You have to keep track of the accuracy of your measurements, and the precision of your results. If I have 17.0, accurate to three significant digits, and subtract 16.0, also accurate to three significant digits, the result is 1.0, accurate to two significant digits. And that's what you got. Your mistake was in assuming that the extra precision provided by your results, beyond the accuracy you were given, was meaningful. It's not. It's meaningless noise.
This isn't something specific to computer floating point numbers, you have the same issue whether using a calculator or working out the problems by hand.
Keep track of your significant digits, and format your answers to suppress precision beyond what is significant.
Make your variables doubles instead of floats. You'll get more precision.
EDIT
Computers store numbers using a sequence of bits. The more bits you store, the higher the precision of the result. Floats usually have half the number of bits of doubles, so they have lower precision.
