C++ accounting application: floating point to fixed point

I have a C++ ledger application in which floating point is used for calculations. What should I do to convert it to fixed-point arithmetic (up to 4 digits past the decimal point) without introducing more bugs into the program? Is there any step-wise process I should adopt, or tips to prevent errors? Please also suggest some test cases that would be helpful.

Introduce a type Currency used in relevant computations (if not done already).
Make sure all relevant numbers are stored as Currency, not as double or float.
Define Currency with a fixed-point real type. You can use an existing implementation, such as CodeF00's numeric::Fixed. See also What's the best way to do fixed-point math? A minimal sketch of such a type follows below.
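For illustration only, here is a minimal sketch of what such a Currency type could look like, storing amounts as a scaled 64-bit integer with 4 digits past the decimal point. All names are invented for this example, and overflow checks, rounding policy, and negative-value formatting are omitted:

```cpp
#include <cstdint>
#include <iomanip>
#include <iostream>

// Illustrative fixed-point currency type: amounts are stored as an
// integer count of 1/10000ths, i.e. 4 digits past the decimal point.
// Overflow checks, rounding policy and negative-value formatting are
// omitted for brevity.
class Currency {
public:
    static constexpr std::int64_t SCALE = 10000;

    Currency() = default;
    // Build from whole units plus 1/10000 fractions; no floating point
    // is involved at any step.
    Currency(std::int64_t units, std::int64_t fraction = 0)
        : raw_(units * SCALE + fraction) {}

    Currency operator+(Currency o) const { return fromRaw(raw_ + o.raw_); }
    Currency operator-(Currency o) const { return fromRaw(raw_ - o.raw_); }
    Currency operator*(std::int64_t n) const { return fromRaw(raw_ * n); }

    std::int64_t units() const { return raw_ / SCALE; }
    std::int64_t fraction() const { return raw_ % SCALE; }

private:
    static Currency fromRaw(std::int64_t r) { Currency c; c.raw_ = r; return c; }
    std::int64_t raw_ = 0;
};

std::ostream& operator<<(std::ostream& os, Currency c) {
    return os << c.units() << '.'
              << std::setw(4) << std::setfill('0') << c.fraction();
}

int main() {
    Currency price(19, 9900);   // 19.9900
    Currency total = price * 3; // exact: 59.9700
    std::cout << total << '\n';
}
```

As for test cases: adding 0.10 (i.e. Currency(0, 1000)) ten times should compare exactly equal to 1.00, which is precisely the comparison that tends to fail with double. Also exercise sums of many small amounts, multiplication by large quantities, and balances that must come out to exactly zero.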

Related

Is it safe to use double for scientific constants in C++?

I want to do some calculations in C++ using several scientific constants like:
effective mass of electron (m): 9.109e-31 kg
charge of electron: 1.602e-19 C
Boltzmann constant (k): 1.38e-23
time: 8.92e-13
And I have calculations like sqrt((2kT)/m).
Is it safe to use double for these constants and for results?
Floating point arithmetic and accuracy is a very tricky subject. Absolutely read the floating-point-gui.de site.
Errors from many floating point operations can accumulate to the point of giving meaningless results. Several catastrophic events (loss of life, billion-dollar crashes) have happened because of this. More will happen in the future.
There are some static source analyzers dedicated to detecting such errors, for example Fluctuat (by my CEA colleagues, several now at Ecole Polytechnique, Palaiseau, France) and others. But Rice's theorem applies, so the static analysis problem is unsolvable in general.
(Static analysis of floating point accuracy can sometimes work in practice on small programs of a few thousand lines, but it does not scale well to large programs.)
There are also some programs that instrument calculations, for example CADNA from LIP6 in Paris, France.
(But instrumentation may give a huge over-approximation of the error.)
You could design your numerical algorithms to be less sensitive to floating point errors. This is very difficult, and you'll need years of work to acquire the relevant skills and expertise.
(You need numerical, mathematical, and computer science skills, at PhD level.)
You could also use arbitrary-precision arithmetic, or extended-precision arithmetic (e.g. 128-bit floats or quad precision). This slows down the computations.
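For instance, assuming Boost.Multiprecision is available (a header-only library), a quad-precision computation could look like this sketch; cpp_bin_float_quad emulates quad precision in software, roughly doubling the significant digits of double at a substantial speed cost:

```cpp
// Minimal sketch of the extended-precision option, assuming
// Boost.Multiprecision is available. cpp_bin_float_quad emulates
// quad precision in software, so it is much slower than double.
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iomanip>
#include <iostream>

int main() {
    using quad = boost::multiprecision::cpp_bin_float_quad;

    double d = 1.0 / 3.0;   // ~16 significant decimal digits
    quad   q = quad(1) / 3; // ~34 significant decimal digits

    std::cout << std::setprecision(36)
              << "double: " << d << '\n'
              << "quad:   " << q << '\n';
}
```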
An important consideration is how much effort (time and money) you can allocate to hunting floating point errors, and how much they matter to your particular problem. But there is No Silver Bullet, and the question of floating point accuracy remains a very difficult issue (you could work your entire life on it).
PS. I am not a floating point expert. I just happen to know some.
With the particular example you gave (constants and calculations): YES.
You didn't define 'safe' in your problem. I will assume that you want to keep the same number of correct significant digits.
doubles are correct to about 15 significant digits
you have constants that have 4 significant digits
the operations involved are multiplication, division, and one square root
it doesn't seem that your results approach the 'edge' cases of doubles (very small or very large exponent values, where the mantissa loses precision)
In this particular case, the result will be correct to 4 significant digits.
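For reference, here is the question's calculation written out with doubles, as a small sketch (T is taken to be the question's 8.92e-13 value):

```cpp
// The question's calculation in double: every input has ~4 significant
// digits, well within the ~15-16 digits a double carries, and no
// intermediate value approaches the exponent range limits.
#include <cmath>
#include <cstdio>

int main() {
    const double m = 9.109e-31;  // effective mass of electron, kg
    const double k = 1.38e-23;   // Boltzmann constant
    const double T = 8.92e-13;   // value as given in the question

    double v = std::sqrt((2.0 * k * T) / m);
    std::printf("%.4e\n", v);  // result is good to ~4 significant digits
}
```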
In the general case, however, you have to be careful (the answer is probably no, though this depends on your definition of 'safe', of course).
This is a large and complicated subject. In particular, your result might not be correct to the same number of significant digits if you have:
a lot more operations,
subtractions of numbers close to each other (see the sketch below),
or other problematic operations.
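A minimal demonstration of the subtraction pitfall, known as catastrophic cancellation: both operands below are stored correctly to roughly 16 significant digits, yet their difference is off by about 11% relative to the exact answer.

```cpp
// Catastrophic cancellation: subtracting two nearly equal numbers
// exposes the rounding error made when storing them.
#include <cstdio>

int main() {
    double a = 1.0 + 1e-15;  // representable only approximately
    double b = 1.0;

    // Mathematically the result is exactly 1e-15, but the subtraction
    // leaves only the rounding error of `a` behind.
    std::printf("a - b = %.17g (expected 1e-15)\n", a - b);
}
```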
Obligatory reading: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
See the good answer of @Basile Starynkevitch for other references.
Also, for complex calculations, it is relevant to have some notion of the condition number of a problem.
If you need a yes or no answer, No.

Any reason why Fortran is outputting some strange numbers for Project Euler 57?

I learned Python at the beginning of the summer and need to switch to Fortran for lab work. Could someone please help me discern why Fortran is outputting such odd numbers when doing simple addition? The photo below should be a good explanation of what I am trying to do with the program.
[Image: Fortran vs Python program]
From Python's floating-point tutorial:
almost all platforms map Python floats to IEEE-754 "double precision"
which in Fortran terms is a double precision or real(kind=REAL64) variable. Note that Python is dynamically typed; you can stuff pretty much whatever you want into a Python variable and it just sort of knows what to do with it. Fortran is strongly typed, so if you want your floating point data stored as REAL32, REAL64, or REAL128 (whatever your compiler defines in the ISO_Fortran_env module), you have to explicitly tell Fortran which specific type of float you want. By default, Fortran reals are REAL32 (so-called 'single precision'), so you shouldn't be surprised that the results don't match what Python is generating.
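The same default-precision trap is easy to reproduce in C++ (an analogous sketch, since the question itself is about Fortran): float plays the role of REAL32 and double the role of REAL64.

```cpp
// The same literal stored in single precision (float, the REAL32
// analogue) and double precision (double, the REAL64 analogue) prints
// very differently at full precision.
#include <iomanip>
#include <iostream>

int main() {
    float  f = 0.1f;  // single precision: ~7 significant decimal digits
    double d = 0.1;   // double precision: ~16 significant decimal digits

    std::cout << std::setprecision(17)
              << "float : " << f << '\n'   // 0.10000000149011612
              << "double: " << d << '\n';  // 0.10000000000000001
}
```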
That, of course, presumes you know the Secret Mystery Knowledge of the default numerical precision of both Fortran and Python, something we are all born with but which most of us lose along with our baby teeth.
Put another way, there's no way you could know this unless you knew the right question to ask in the first place, which nobody does the first time they see weird, seemingly-inconsistent floating point behavior. Back when FORTRAN was still taught, this sort of problem was introduced pretty early in the curriculum because the language is intended for crunching numbers and the problems with mixed-type and mixed-precision arithmetic are serious and well known. You had to learn about these pitfalls quickly because it was the difference between getting believable answers and garbage.
Modern languages are designed to simplify the delivery of cat videos. No real computer scientist would be caught dead discussing floating point mathematics so you need to search obscure backwater websites for information on how to make your numbers add up good and do other stuff good too. There is good info out there but again, you need to know what you're looking for in order to find it which most programmers don't when they hit this problem for the first time.
The short answer is to understand how computers simulate real numbers, how the languages you're using store those sorts of numbers, and ensure that the precision your application needs is supported by the data types you use. Hopefully that's more helpful than telling you to rephrase your question or RTFM.
And for what it's worth, I've been bitten by a similar problem recently where I had converted a code from single to double precision, forgetting that one of the binary files I was writing expected a single precision value. I only found this out during testing when visualization software choked on the broken binary file. The solution was obvious in hindsight; I reverted one variable back to single precision and all was well. The point is, even experienced people get tripped up by floating point. Barbie was right; math is hard...

How to get specifics of floating point type C/C++

I am developing a program for a couple of rather poorly documented MCUs. So far, the most pressing problem is that I have to get all of these MCUs to constantly communicate (send/receive) floating-point data, and I have no idea exactly what the specifications are for the floating point types. In other words, I cannot make sure that one floating point type will have the same value if I send it along a serial/parallel connection to another MCU. Nothing I have found gives me specifics on how they handle floats (precision, mantissa, location of sign bit, etc.).
I have the standard integer types like int and long figured out; my question applies specifically to floating point types like float and double.
The worst part is that I do not have access to the standard library for every MCU. That means I cannot use std::numeric_limits or other stuff like that.
As a last resort, I can create my own struct, class, or other type and use some well-placed logic operators to get each data type to do what I want, but this is ultimately undesirable for my project. The same goes for trial-and-error of what the bit structure of every floating point type is for every MCU.
So, is it possible to not only see the specifics of the floating point types, but also possibly change them if they don't follow the standard? Or, is it as simple as "You need to get a better MCU"?
EDIT #1:
I have recently tested my 12 MCUs: only 5 of them support the IEEE standard for single and double precision. The other seven all use unique formats for both single and double precision.
EDIT #2:
As suggested by Simon Byrne in the answer below, I ran Kahan's paranoia test script. This worked well for two of my MCUs. The remaining five do not have enough memory to run the test. However, the two I did manage to decode have extremely weird ways of handling the sign bit and endianness, and I'll have to look into some weird logical operations so as to make a foolproof compatibility layer.
You could try running Kahan's paranoia test script, which is available in several different languages from Netlib. This tries to figure out the floating point characteristics by test computations.
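In the same spirit as paranoia but far smaller, a probe like the following sketch can expose the byte order and sign-bit placement on MCUs that cannot fit the full test. It only assumes memcpy and some way to print (printf here; substitute the MCU's serial output routine):

```cpp
// Dumps the byte pattern of known float values. Comparing the dumps of
// 1.0f, -1.0f and 2.0f across MCUs reveals the sign-bit position,
// exponent placement and byte order without any documentation.
#include <stdio.h>
#include <string.h>

static void dump(const char *label, float f) {
    unsigned char bytes[sizeof f];
    memcpy(bytes, &f, sizeof f);  // well-defined way to inspect the bits
    printf("%-6s:", label);
    for (size_t i = 0; i < sizeof f; ++i)
        printf(" %02x", bytes[i]);
    printf("\n");
}

int main(void) {
    dump("1.0f", 1.0f);   // IEEE 754 little-endian: 00 00 80 3f
    dump("-1.0f", -1.0f); // sign bit flips:         00 00 80 bf
    dump("2.0f", 2.0f);   // exponent increments:    00 00 00 40
    return 0;
}
```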

Working around float or double numbers in C++. Errors of representation. Loss of decimal values [duplicate]

This question already has answers here: Is floating point math broken? (31 answers)
Closed 8 years ago.
I heard that C/C++ has a problem with the management of floating point numbers.
I've implemented a simple program to try it out. It simulates a change machine: the user enters the amount to charge and the amount paid, and the program calculates the number of coins of each coin type to give as change.
Here is the code: Link to my google drive folder with the code
The thing is, when you insert a non-integer value, the program enters a loop and never ends.
I've printed the contents of the variables to find out what's going on, and, somehow, a 2-decimal value such as 0.10 changes its value to 0.0999998.
Then the remaining change to be processed never reaches 0, and the program enters an infinite loop.
I've heard that this is due to the machine representation of floating point numbers. I've seen the same behavior on both Windows and Linux, and also when programming it in Java, but I don't remember having the same issue in Pascal.
Now the question is: what is the best workaround for this?
I've thought that one possible solution is using a fixed-point representation, via external libraries such as: http://www.trenki.net/content/view/17/1/ or http://www.codef00.com/code/Fixed.h . Another might be to use an arbitrary-precision arithmetic library such as GMP.
Neither C nor C++ has a problem with floating point values. You as the programmer are trusted to use floating point appropriately in any language supporting it.
While integer variables cannot store fractions or out-of-bounds values, floating point can only store a specific subset of fractions. A high quality floating point implementation also gives tight guarantees for the accuracy of calculations.
Floating point numbers cannot represent arbitrary rational numbers, which would need unbounded space to store reliably.
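One common workaround, shown here as a sketch with an assumed euro-style coin set, is to round the input to integer cents once and then do all the change-making in exact integer arithmetic:

```cpp
// Change machine without the infinite loop: do the arithmetic in
// integer cents so every amount is exact and the comparison with
// zero is reliable. Only the user-facing I/O touches decimals.
#include <cmath>
#include <iostream>

int main() {
    double chargeIn = 0.0, paidIn = 0.0;
    std::cout << "Charge and paid amounts: ";
    std::cin >> chargeIn >> paidIn;

    // Convert to cents once, rounding to absorb the input's
    // representation error (0.10 may arrive as 0.0999999...).
    long charge = std::lround(chargeIn * 100.0);
    long paid   = std::lround(paidIn * 100.0);
    long change = paid - charge;

    const long coins[] = {200, 100, 50, 20, 10, 5, 2, 1};  // euro cents, say
    for (long coin : coins) {
        long n = change / coin;
        change %= coin;  // exact integer arithmetic: this loop always ends
        if (n > 0)
            std::cout << n << " coin(s) of " << coin << " cents\n";
    }
}
```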

Checking Numerical Precision in Algorithms

What is the best practice for checking numerical precision in algorithms?
Is there any suggested technique to resolve the problem "how do we know the result we calculated is correct"?
If possible: are there some examples of numerical precision enhancement in C++?
Thank you for any suggestions!
Math::BigFloat / Math::BigInt will help. I must say there are many libraries that do this; I don't know which would be best. Maybe someone else has a nice answer for you.
In general, though, you can write it twice: once with unlimited precision, and once without, then verify the two against each other. That's what I do with the scientific software I write. Then I'll write a third version that does fancier speed enhancements. This way I can verify all three. Mind you, I know the three won't be exactly equal, but they should agree to enough significant figures.
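Here is a minimal sketch of that write-it-twice pattern, with long double standing in for the higher-precision version (an arbitrary-precision type would slot into the same template):

```cpp
// Write-it-twice verification: the algorithm is templated on its
// number type, run in two precisions, and the results are compared.
#include <cmath>
#include <cstdio>

template <typename Real>
Real harmonic(int n) {              // any algorithm under test
    Real sum = 0;
    for (int i = 1; i <= n; ++i)
        sum += Real(1) / Real(i);
    return sum;
}

int main() {
    double      lo = harmonic<double>(10000000);
    long double hi = harmonic<long double>(10000000);

    long double relErr = std::fabs((long double)lo - hi) / hi;
    std::printf("double: %.17g\nlong double: %.17Lg\nrel. err: %Lg\n",
                lo, hi, relErr);
}
```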
Knowing how much error you actually have is difficult to determine accurately; remember that the order of operations on floating point numbers can cause large differences. It's really problem specific, but if you know the relative magnitudes of certain numbers you can change the order of operations to gain accuracy (for example, accumulate a list in sorted order; see the summation sketch after the reading list below). Some places to look when investigating this are:
Handbook of Floating-Point Arithmetic
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Anatomy of a Floating Point Number
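To illustrate the order-of-operations point with a concrete sketch (the series and constants are chosen only for the demo): accumulating 1/i^2 from its smallest terms up is measurably more accurate in float than accumulating from the largest terms down, because small terms are not swallowed by a large running total.

```cpp
// Summation order in action: the same series accumulated in two
// directions in float, compared against a double reference.
#include <cstdio>

int main() {
    const int N = 100000;

    float down = 0.0f, up = 0.0f;
    for (int i = 1; i <= N; ++i)          // large terms first
        down += 1.0f / ((float)i * i);
    for (int i = N; i >= 1; --i)          // small terms first
        up += 1.0f / ((float)i * i);

    double ref = 0.0;                     // reference in double
    for (int i = N; i >= 1; --i)
        ref += 1.0 / ((double)i * i);

    std::printf("large-first: %.8f\nsmall-first: %.8f\nreference:   %.8f\n",
                down, up, ref);
}
```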
Have a look at interval arithmetic, for example:
http://www.boost.org/doc/libs/1_53_0/libs/numeric/interval/doc/interval.htm
It will produce upper and lower bounds on the results.
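A minimal sketch with Boost.Interval, assuming its default rounding policy works on your platform: start from an interval known to contain the exact input, and every subsequent operation rounds outward, so the final interval is guaranteed to bracket the true result.

```cpp
// Interval arithmetic with Boost.Interval: the computed interval
// brackets the exact mathematical result.
#include <boost/numeric/interval.hpp>
#include <cstdio>

int main() {
    using I = boost::numeric::interval<double>;

    // 0.1 is not representable in binary, so start with an interval
    // that is known to contain the real number 0.1.
    I x(0.1 - 1e-17, 0.1 + 1e-17);
    I y = (x + x + x) * 10.0;  // exact answer is 3

    std::printf("result lies in [%.17g, %.17g], width %g\n",
                y.lower(), y.upper(), boost::numeric::width(y));
}
```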
PS: also have a look at http://www.cs.cmu.edu/~quake/robust.html