In my trading application I have to use decimals to represent prices. I need the lowest possible latency, so the only acceptable solution is to use an int64 to represent a decimal. I can configure globally that I do not need, for example, more than 5 digits after the decimal point; then everywhere
0.0000001 is not supported
0.000001 is not supported
1 should be used instead of 0.00001
10 should be used instead of 0.0001
100 should be used instead of 0.001
1000 should be used instead of 0.01
10000 should be used instead of 0.1
100000 should be used instead of 1
and so on
Are there any libraries that help with this kind of work? I don't completely understand whether I even need a library; probably I should just work with int64 and that's it? Any hints and suggestions are welcome.
Update: I now realize that divide and multiply are not obvious at all, so I'm looking for a header-only library that adds some macros or functions to divide/multiply fixed-point values stored in an int64.
What you're suggesting is basically fixed point arithmetic. It's a way of achieving decimal fraction calculations using only integer operations. It can have some speed advantages (on some systems), and if it's done correctly can avoid some of the errors introduced through floating point.
There will be libraries which can help, although the maths involved is quite simple. You might find it's easy enough to read up on the subject and implement it yourself.
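For illustration, here is a minimal sketch of the two operations the update asks about, multiplication and division of int64-stored fixed-point values, assuming a global scale of 10^5. The function names are my own and the 128-bit intermediate is a GCC/Clang extension, so treat this as a sketch rather than a drop-in library:

    #include <cstdint>

    // Prices scaled by 10^5, e.g. 1.23456 is stored as 123456.
    constexpr std::int64_t SCALE = 100000;

    // (a/SCALE) * (b/SCALE) = (a*b)/SCALE^2, so the raw product must be
    // divided by SCALE once; a 128-bit intermediate avoids overflow.
    inline std::int64_t fp_mul(std::int64_t a, std::int64_t b) {
        return static_cast<std::int64_t>((static_cast<__int128>(a) * b) / SCALE);
    }

    // (a/SCALE) / (b/SCALE) = a/b, so pre-multiply by SCALE to stay in scale.
    inline std::int64_t fp_div(std::int64_t a, std::int64_t b) {
        return static_cast<std::int64_t>((static_cast<__int128>(a) * SCALE) / b);
    }

Note that both operations here simply truncate toward zero; a production version would choose a rounding rule (and an overflow policy) deliberately.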
Related
I have a C++ ledger application in which floating point is used for calculations. What should I do to convert it to fixed-point arithmetic (up to 4 digits past the decimal point) without introducing more bugs into the program? Is there a step-wise process I should adopt, or tips to prevent errors? Please also suggest some test cases that would be helpful.
Introduce a type Currency used in relevant computations (if not done already)
Make sure all relevant numbers are stored as Currency, not as double, or float
Define Currency with a fixed-point real type. You can use existing implementations, like CodeF00's numeric::Fixed. See also What's the best way to do fixed-point math?
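If you end up rolling the type yourself instead of reusing an existing implementation, here is a rough sketch of what such a Currency type could look like with 4 digits past the decimal point. The class name, scale, and rounding choice are assumptions for illustration only:

    #include <cstdint>

    // Currency stored as an integer count of 1/10000ths (4 digits past the point).
    class Currency {
    public:
        static constexpr std::int64_t kScale = 10000;

        explicit Currency(std::int64_t raw_units = 0) : units_(raw_units) {}

        // Round once at the boundary; all later arithmetic stays integral.
        static Currency fromDouble(double d) {
            return Currency(static_cast<std::int64_t>(d * kScale + (d >= 0 ? 0.5 : -0.5)));
        }
        double toDouble() const { return static_cast<double>(units_) / kScale; }

        Currency operator+(Currency o) const { return Currency(units_ + o.units_); }
        Currency operator-(Currency o) const { return Currency(units_ - o.units_); }

    private:
        std::int64_t units_;  // e.g. 12.3456 is stored as 123456
    };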
I am trying to come up with a good tolerance when comparing doubles in unit tests.
If I allow a fixed tolerance as I've seen mentioned on this site (e.g. return abs(actual-expected) < 0.00001;), this will frequently fail when numbers are very big due to the nature of floating-point representation.
If I use a relative tolerance in terms of the % error allowed (e.g. return abs(actual-expected) < abs(actual * 0.001);), this fails too often for small numbers (and for very small numbers, the computation itself can introduce rounding error). Additionally, it allows too much tolerance in certain ranges (e.g. comparing 2000 and 2001 would pass).
I'm wondering if there's any standard algorithm for allowing tolerance that will work for both small and large numbers. Should I try for some kind of base 2 logarithmic tolerance to mirror floating point storage? Should I do a hybrid approach based on the size of the inputs?
Since this is in unit test code, performance is not a big factor.
The specification of tolerance is a business function. There aren't standards that say "all tolerance must be within +/- .001%". So you have to go to your requirements and figure out what's going to work for you.
Look at your application. Let's say it's for some kind of cutting machine. Are they metal machining tolerances? .005 inches is common. Are they wood cabinet sawing tolerances? 1/32" is sloppy, 1/64" is better. Are they house framing tolerances? Don't expect a carpenter to come closer than 1/4". Hand cutting with a welding torch? Hope for about an inch. The point is simply that every application depends on something different, even when they're doing equivalent things.
If you're just talking "doubles" in general, they're usually good to no better than 15 digits of precision. Floats are good to 7 digits. I round those down by one when I'm thinking about the problem (I don't rely on a double being accurate to more than 14 digits and I stop with floats at six digits); however, if I'm worried about more than the 12th digit of precision I'm generally working with large dollar amounts that have to balance precisely, and I'd be a fool to use non-integer math for them. Business people want their stuff to balance to the penny, and wouldn't approve of rounding off addition operations!
If you're looking at math library operations such as the trig functions, read the library's documentation on each function.
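That said, if you still want a general-purpose comparison once the tolerances have been decided, a common pattern is to combine an absolute threshold (for values near zero) with a relative one (for large values). A minimal sketch, with placeholder thresholds that should be chosen per application:

    #include <algorithm>
    #include <cmath>

    // True if a and b are close under either an absolute or a relative criterion.
    // abs_tol guards the near-zero case; rel_tol scales with the magnitudes.
    bool nearlyEqual(double a, double b,
                     double abs_tol = 1e-12, double rel_tol = 1e-9) {
        const double diff = std::fabs(a - b);
        if (diff <= abs_tol)
            return true;
        return diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
    }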
What is the best practice for checking numerical precision in algorithms?
Is there any suggested technique to resolve the problem "how do we know the result we calculated is correct"?
If possible: are there some examples of numerical precision enhancement in C++?
Thank you for any suggestion!
Math::BigFloat / Math::BigInt will help. I must say there are many libraries that do this, I don't know which would be best. Maybe someone else has a nice answer for you.
In general, though, you can write it twice: once with unlimited precision and once without, then verify the two against each other. That's what I do with the scientific software I write. Then I'll write a third version that does fancier speed enhancements, so I can verify all three. Mind you, I know the three won't be exactly equal, but they should agree to enough significant figures to corroborate each other.
Knowing exactly how much error there is can be difficult to determine accurately; remember that the order of operations on floating-point numbers can cause large differences. It's really problem specific, but if you know the relative magnitudes of certain numbers you can change the order of operations to gain accuracy (multiplying or summing a list in sorted order, for example; a small sketch of this ordering effect follows the references below). A few places to look when investigating this are:
Handbook of Floating-Point Arithmetic
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Anatomy of a Floating Point Number
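As mentioned above, here is a tiny illustration of how accumulation order alone changes a single-precision result; the magnitudes are arbitrary, chosen only to make the effect obvious:

    #include <cstdio>

    int main() {
        const int   N    = 10000;
        const float big  = 1.0e8f;   // one large term
        const float tiny = 1.0f;     // many small terms

        // Large term first: each tiny addend is below half an ulp of the
        // running sum and is rounded away, so the small terms vanish.
        float big_first = big;
        for (int i = 0; i < N; ++i) big_first += tiny;

        // Small terms first: they accumulate exactly, then survive the final add.
        float small_first = 0.0f;
        for (int i = 0; i < N; ++i) small_first += tiny;
        small_first += big;

        std::printf("large-first: %.1f\n", big_first);    // 100000000.0
        std::printf("small-first: %.1f\n", small_first);  // 100010000.0
    }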
Have a look at interval arithmetic, for example
http://www.boost.org/doc/libs/1_53_0/libs/numeric/interval/doc/interval.htm
It will produce upper and lower bounds on results
PS: also have a look at http://www.cs.cmu.edu/~quake/robust.html
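A rough sketch of what Boost.Interval usage looks like in practice, assuming the library's default policies for double behave correctly on your platform:

    #include <boost/numeric/interval.hpp>
    #include <iomanip>
    #include <iostream>

    int main() {
        using I = boost::numeric::interval<double>;

        // Interval division rounds the bounds outward, so this interval is
        // guaranteed to contain the exact value 1/10.
        I tenth = I(1.0) / I(10.0);

        I sum(0.0);
        for (int i = 0; i < 10; ++i)
            sum += tenth;   // the true result 1.0 stays inside [lower, upper]

        std::cout << std::setprecision(17)
                  << "[" << sum.lower() << ", " << sum.upper() << "]\n";
    }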
I have a floating-point value that is hundreds of digits long (like the first 100 digits of pi - 3) and need a way to operate on it. Is there any way to store and operate on a float that has a large number of decimals and maintain that much precision with built-in libraries? Is there anything like Python's Decimal module in C++?
The other answers all point to high precision integer libraries. There are however a few floating point libraries around:
The High Precision Arithmetic library
The GNU Multiple Precision Arithmetic Library (GMP) "Arithmetic without limitations"
The GNU multiple-precision floating-point computations with correct rounding (the GNU MPFR library). There's also a C++ wrapper.
NTL: A Library for doing Number Theory. Together with NTL::RR you can use this even within boost.
The LBNL double-double precision, quad-double precision and arbitrary precision software.
... and don't forget the possibility that you can always implement your own solution. (It might not be the most effective or fastest solution, but it's "the" solution if you want to learn something.)
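To give a flavour of the floating-point libraries above, here is a minimal GMP (gmpxx) sketch; the 512-bit precision and the 100 printed digits are arbitrary choices, and you link with -lgmpxx -lgmp:

    #include <gmpxx.h>

    int main() {
        // mpf_class values with an explicit precision of 512 bits
        // (roughly 150 decimal digits).
        mpf_class one(1, 512);
        mpf_class three(3, 512);
        mpf_class third = one / three;

        // Print 100 digits after the decimal point.
        gmp_printf("%.100Ff\n", third.get_mpf_t());
        return 0;
    }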
No built-in library, but you can do that using Bignum arithmetic :) http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic.
A Bignum is essentially an array (vector) of digits; you can easily implement sum/difference and so on.
I've actually asked something similar here: STL big int class implementation
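As a taste of that digit-vector idea, a minimal addition sketch (least-significant digit first, base 10, no sign handling):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Digits stored least-significant first, base 10, e.g. 907 -> {7, 0, 9}.
    using Bignum = std::vector<int>;

    // Schoolbook addition with carry; just the core idea.
    Bignum add(const Bignum& a, const Bignum& b) {
        Bignum result;
        int carry = 0;
        const std::size_t n = std::max(a.size(), b.size());
        for (std::size_t i = 0; i < n || carry; ++i) {
            int digit = carry;
            if (i < a.size()) digit += a[i];
            if (i < b.size()) digit += b[i];
            result.push_back(digit % 10);
            carry = digit / 10;
        }
        return result;
    }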
Unless it is some exotic platform where a float is 100+ bytes long, you will find it hard to achieve what you want without a library for big numbers.
I want to compute 10 raised to the power minus m. Apart from using the math function pow(10, -m), is there any faster and more efficient way to do that?
The reason I ask the C++ gurus on SO such a simple question is that, as you know, just like base 2, 10 is also a special base. If you multiply some value n by 10 to the power minus m, that is equivalent to moving n's decimal point to the left m places. I think there must be a fast and efficient way to handle that.
For floating-point m, so long as your standard library implementation is well written, pow will be efficient.
If m is an integer, and you hinted that it is, then you could use an array of pre-calculated values.
You should only be worrying about this kind of thing if that routine is a bottleneck in your code, that is, if the calls to that routine take a significant proportion of the total running time.
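A minimal sketch of that precomputed-table idea, assuming m is a small non-negative integer; the table size of 16 is an arbitrary choice, and out-of-range values fall back to std::pow:

    #include <cmath>

    // Precomputed negative powers of ten: neg_pow10[i] == 10^-i.
    static const int kMaxM = 16;
    static double neg_pow10[kMaxM];

    void init_neg_pow10() {
        double v = 1.0;
        for (int i = 0; i < kMaxM; ++i) {
            neg_pow10[i] = v;
            v /= 10.0;
        }
    }

    inline double pow10_neg(int m) {
        return (m >= 0 && m < kMaxM) ? neg_pow10[m] : std::pow(10.0, -m);
    }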
Ten is not a special value on a binary machine, only two is. Use pow or exponentiation by squaring.
Unfortunately there is no fast and efficient way to calculate it using IEEE 754 floating point representation. The fastest way to get the result is to build a table for every value of m that you care about, and then just perform a lookup.
If there's a fast and efficient way to do it then I'm sure your CPU supports it, unless you're running on an embedded system in which case I'd hope that the pow(...) implementation is well written.
10 is special to us as most of us have ten fingers. Computers only have two digits, so 2 is special to them. :)
Use a lookup table; there can't be more than 1000 floats to store, especially if m is an integer.
If you could operate with log n instead of n for a significant part of the computation, you could save time, because instead of
n = pow(10*n,-m)
you now have to calculate (using the definition l = log10(n))
l = -m*(l+1)
Just some more ideas which may lead you to further solutions...
If you are interested in optimization at the algorithm level, you might look for a parallelized approach.
You may speed things up at the system/architectural level by using IPP (for Intel processors) or, for example, the AMD Core Math Library (ACML) for AMD.
Using the power of your graphics card may be another way (e.g. CUDA for NVIDIA cards).
I think it's also worth looking at OpenCL.
IEEE 754 specifies a bunch of floating-point formats. Those that are in widespread use are binary, which means that base 10 isn't in any way special. This is contrary to your assumption that "10 is also a special base".
Interestingly, IEEE 754-2008 does add decimal floating-point formats (decimal32 and friends). However, I'm yet to come across hardware implementations of those.
In any case, you shouldn't be micro-optimizing your code before you've profiled it and established that this is indeed the bottleneck.